Influence of illumination on the quantum lifetime in selectively doped single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers

A. A. Bykov, D. V. Nomokonov, A. V. Goran, I. S. Strygin, I. V. Marchishin, A. K. Bakarov
Rzhanov Institute of Semiconductor Physics, Russian Academy of Sciences, Siberian Branch, Novosibirsk, 630090, Russia

The influence of illumination on a high-mobility two-dimensional electron gas with a high charge-carrier concentration is studied in selectively doped single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers at a temperature T = 4.2 K in magnetic fields B < 2 T. It is shown that illumination of the studied heterostructures at low temperatures leads to an increase in the concentration, mobility, and quantum lifetime of electrons. The increase in the quantum lifetime upon illumination of single GaAs quantum wells with modulated superlattice doping is explained by a decrease in the effective concentration of remote ionized donors.

Introduction

Persistent photoconductivity (PPC), which occurs in selectively doped GaAs/AlGaAs heterostructures at low temperatures (T) as a result of visible-light illumination, is widely used as a method for changing the concentration (ne), mobility (μ), and quantum lifetime (τq) of electrons in such two-dimensional (2D) systems [1-5]. PPC is also used in one-dimensional lateral superlattices based on high-mobility selectively doped GaAs/AlGaAs heterostructures [6, 7]. One of the causes of PPC is the change in the charge state of DX centers in doped AlGaAs layers under illumination [8, 9].
PPC is undesirable in high-mobility heterostructures intended for the manufacture of field-effect transistors, as it makes their performance unstable. One way to suppress PPC is to use short-period AlAs/GaAs superlattices as barriers to single GaAs quantum wells [10]. In this case, the sources of free charge carriers are thin δ-doped GaAs layers located in the short-period superlattice barriers, in which DX centers do not appear. Another motivation for remote superlattice doping of single GaAs quantum wells is the fabrication of 2D electron systems with simultaneously high ne and μ. In selectively doped GaAs/AlGaAs heterostructures, to suppress the scattering of the 2D electron gas on the random potential of ionized donors, the charge-transport region is separated from the doping region by an undoped AlGaAs layer (spacer) [4]. High μ in such a system is achieved with a "thick" spacer (dS > 50 nm) at a relatively low concentration ne ~ 3×10¹⁵ m⁻². To implement high-mobility 2D electron systems with a "thin" spacer (dS < 50 nm) and high ne, it was proposed in [11] to use short-period AlAs/GaAs superlattices as barriers to single GaAs quantum wells (Fig. 1). In this case, the suppression of scattering by ionized Si donors is achieved not only by separating the regions of doping and transport, but also by the screening effect of X-electrons localized in the AlAs layers [11-13].

Fig. 1.
(a) Schematic view of a single GaAs quantum well with side barriers of short-period AlAs/GaAs superlattices. (b) Enlarged view of a portion of the δ-doped layer in a narrow GaAs quantum well with adjacent AlAs layers. Ellipses show compact dipoles formed by positively charged Si donors in the δ-doped layer and X-electrons in the AlAs layers [13].

Superlattice doping of single GaAs quantum wells is used not only to implement high-mobility 2D electron systems with a thin spacer [11, 12], but also to achieve ultrahigh μ in 2D electron systems with a thick spacer [14-16]. In GaAs/AlAs heterostructures with modulated superlattice doping, PPC due to a change in the charge states of DX centers should not arise [10]. However, it has been found that in selectively doped single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers and a thin spacer, illumination increases ne and μ [17-19], and with a thick spacer, it increases τq [20]. The increase in τq was explained by the redistribution of X-electrons in the AlAs layers adjacent to the thin δ-doped GaAs layers. However, the effect of illumination on τq in single GaAs quantum wells with a thin spacer and superlattice doping remains unexplored.

One of the features of GaAs/AlAs heterostructures with a thin spacer and superlattice doping grown by molecular beam epitaxy on (001) GaAs substrates is the anisotropy of μ [21].
In such structures, μy in the [-110] crystallographic direction can exceed μx in the [110] direction by several times [22]. The anisotropy of μ is due to scattering on heterointerface roughness that is oriented along the [-110] direction and arises during the growth of the heterostructures [23, 24]. This work is devoted to studying the effect of illumination on a 2D electron gas with anisotropic μ in single GaAs quantum wells with a thin spacer and superlattice doping. It has been established that illumination increases ne, μ, and τq in the heterostructures under study. It is shown that the increase in τq after illumination is due to a decrease in the effective concentration of remote ionized donors.

Quantum lifetime

The traditional method of measuring τq in a 2D electron gas is based on studying the dependence of the amplitude of the Shubnikov - de Haas (SdH) oscillations on the magnetic field (B) [25-30]. In 2D electron systems with isotropic μ, low-field SdH oscillations are described by the following relation [28]:

Δρ^SdH = 4ρ0 X(T) exp(-π/ωcτq) cos(2πεF/ħωc - π),   (1)

where Δρ^SdH is the oscillating component of the dependence ρxx(B), ρ0 = ρxx(B = 0) is the Drude resistance, X(T) = (2π²kBT/ħωc)/sinh(2π²kBT/ħωc), ωc = eB/m* is the cyclotron frequency, and εF is the Fermi energy. Using the results of [26], it is easy to generalize (1) to a 2D system with anisotropic mobility μd. In this case, the normalized amplitude of the SdH oscillations is determined by the following expression [31]:

Ad^SdH = Δρd^SdH/[ρ0d X(T)] = A0d^SdH exp(-π/ωcτqd),   (2)

where the index d corresponds to the main mutually perpendicular directions x and y, and A0d^SdH = 4.

The value of τq in single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers is determined mainly by small-angle scattering [11, 12].
In this case, τq can be expressed by the relation [32-34]:

τq ≈ τq^R = (2m*/πħ)(kFdR)/neff^R,   (3)

where τq^R is the quantum lifetime for scattering on the random potential of remote impurities, kF = (2πne)^0.5 is the Fermi wave vector, dR = (dS + dSQW/2), dSQW is the thickness of the single GaAs quantum well, and neff^R is the effective concentration of remote ionized donors. The value of neff^R takes into account the change in the scattering potential of the remote donors when they bind with X-electrons (Fig. 1b) [13]. The dependence of neff^R on ne in the heterostructures under study is described by the following phenomenological relation [35]:

neff^R = neff^R0/{exp[(ne - a)/b] + 1} ≡ neff^R0 fab(ne),   (4)

where neff^R0, a, and b are fitting parameters, and fab is the fraction of remote ionized donors not bound with X-electrons into compact dipoles.

Samples under study and details of the experiment

The GaAs/AlAs heterostructures under study were grown by molecular beam epitaxy on semi-insulating GaAs (100) substrates. They were single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers [11, 12]. Two Si δ-doping layers located at distances dS1 and dS2 from the upper and lower heterointerfaces of the GaAs quantum well served as the sources of electrons. L-shaped bridges oriented along the [110] and [-110] directions were fabricated from the grown heterostructures by optical lithography and wet etching.
The bridges were 100 µm long and 50 µm wide. The bridge resistance was measured with an alternating current Iac < 1 µA at a frequency fac ~ 0.5 kHz at a temperature T = 4.2 K in magnetic fields B < 2 T. A red LED was used for illumination.

Table 1. Heterostructure parameters: dSQW is the quantum well thickness; dS = (dS1 + dS2)/2 is the average spacer thickness; nSi is the total concentration of remote Si donors in the thin δ-doped GaAs layers; ne is the electron concentration; μx is the mobility in the [110] direction; μy is the mobility in the [-110] direction; μy/μx is the mobility ratio. Asterisks mark values obtained after illumination.

Structure  dSQW  dS    nSi         ne           μy            μx            μy/μx
number     (nm)  (nm)  (10¹⁶ m⁻²)  (10¹⁵ m⁻²)   (m²/V s)      (m²/V s)
1          13    29.4  3.2         7.48 / 8.42* 124 / 206*    80.5 / 103*   1.54 / 2*
2          10    10.8  5           11.5 / 14.5* 14.7 / 27.2*  9.33 / 18.6*  1.58 / 1.46*

Experimental results and discussion

Fig. 2a shows the experimental dependences ρd(B) at T = 4.2 K for heterostructure no. 1 before illumination (curves 1 and 2) and after illumination (curves 3 and 4). In the region B > 0.5 T, SdH oscillations are observed whose period in the inverse magnetic field decreased after illumination, which indicates an increase in ne. After illumination, the values of ρ0d also decreased, which is due not only to the increase in ne but also to an increase in μd. The illumination also led to an increase in the positive magnetoresistance (MR) of the 2D electron gas, which indicates an increase in the quantum lifetime [36, 37]. The dependences of Ad^SdH on 1/B for structure no. 1 are shown in Fig. 2b. In accordance with formula (2), the slope of the dependences Ad^SdH(1/B) on a semilogarithmic scale is determined by the value of τqd. The decrease in slope after illumination indicates an increase in τqd. At the same time, the values of τqd measured in the [110] and [-110] directions are equal to within 5%.
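The slope extraction described above can be sketched in a few lines of Python. This is a minimal illustration of formula (2), not the authors' analysis code; the effective mass m* = 0.067 m0 for GaAs is an assumed value, and the synthetic data simply reuse the amplitude A0x = 5.02 and lifetime τqx = 1.44 ps reported for structure no. 1 before illumination. A least-squares line through (1/B, ln A) has slope -πm*/(eτq), from which τq follows.

```python
import math

M0 = 9.109e-31            # free-electron mass, kg
M_EFF = 0.067 * M0        # GaAs effective mass (assumed value)
E_CHARGE = 1.602e-19      # elementary charge, C

def tau_q_from_slope(inv_B, amplitudes):
    """Quantum lifetime from normalized SdH amplitudes, per Eq. (2).

    ln A = ln A0 - [pi*m*/(e*tau_q)] * (1/B), so a least-squares line
    through the points (1/B, ln A) has slope s = -pi*m*/(e*tau_q).
    """
    y = [math.log(a) for a in amplitudes]
    n = len(inv_B)
    mx = sum(inv_B) / n
    my = sum(y) / n
    slope = sum((x - mx) * (v - my) for x, v in zip(inv_B, y)) / sum(
        (x - mx) ** 2 for x in inv_B)
    return -math.pi * M_EFF / (E_CHARGE * slope)

# Round-trip check with the values reported for structure no. 1
# before illumination (A0x = 5.02, tau_qx = 1.44 ps):
tau_q = 1.44e-12
inv_B = [0.4 + 0.1 * i for i in range(13)]        # 1/B from 0.4 to 1.6 1/T
amps = [5.02 * math.exp(-math.pi * M_EFF / (E_CHARGE * tau_q) * x)
        for x in inv_B]
print(tau_q_from_slope(inv_B, amps) * 1e12)       # ~1.44 (ps)
```

With noisy experimental amplitudes the same fit applies; only the residual scatter, not the procedure, changes.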
+4 + +0.6 +口1 +2 +0.4 +(sd) +0.2 +(a) +0.0 +(b) +1.2 +0.8 +d +口 +1 +0.4 +0 +2 +0.0 +1.0 +1.2 +1.4 +1.6 +ne (1016 m*2)(a) +AlAs/GaAs +Si-S-doping +SPSL +dsi +GaAs SQW +SQW +AlAs/GaAs +↓ Si-8-doping +SPSL +(b) +AlAs +GaAs +Si- ++ ++ ++ +AIAs18 +Pyy +12 +1 +3 +2 +6 +4 +(a) +0 +0.0 +0.2 +0.4 +0.6 +0.8 +1.0 +B (T) +6 +(b) +3 +5 +4 +4 +3 +1 +口 +1 +A +2 +2 +2 +△ +3 +4 +0.0 +0.4 +0.8 +1.2 +1.6 +2.0 +1/B (1/T)60 +1 +40 +2 +3 +20 +4 +(a) +0 +0.0 +0.6 +1.2 +1.8 +B (T) +12 +(b) +2 +0 +0 +8 +口 +(sd) +1 +口 +4. +口 +1 +2 +0 +1.1 +1.2 +1.3 +1.4 +1.5 +ne (1016 m*2)Fig. 2. (a) Experimental dependences of d on B measured on an L-shaped bridge at T = 4.2 K +before illumination (1, 2) and after illumination (3, 4) (no. 1). 1, 3 – xx(B). 2, 4 – yy(B). The +inset shows the geometry of the L-shaped bridge. (b) Dependences of Ad +SdH on 1/B before +illumination (1, 2) and after illumination (3, 4). Symbols are experimental data. Solid lines – +calculation by formula (2): 1 – A0x +SdH = 5.02; qx = 1.44 ps; 2 – A0y +SdH = 4.57; qy = 1.38 ps; 3 – +A0x +SdH = 6.29; qx = 2.72 ps; 4 – A0y +SdH = 4.66; qy = 3.01 ps. +Fig. 3a shows the experimental dependences of d(B) at T = 4.2 K for heterostructure no. 2 +before illumination (curves 1 and 2) and after illumination (curves 3 and 4). For this structure, as +well as for structure no. 1, short-term illumination at low temperature leads to an increase in ne +and d. However, for structure no. 2, in contrast to no. 1, the dependences xx(B) do not show +quantum positive MR, while a classical negative MR is observed [38], which decreases +significantly after illumination. Dependences td(ne) are presented in Fig. 3b. These dependences +are not described by the theory [32], which takes into account only the change in kF with +increasing ne, which is due to the change in neff +R after illumination. A similar behavior of td on ne +is also observed when the concentration of the 2D electron gas is changed using a Schottky gate +[12, 35]. 
+5 + +0.6 +口1 +2 +0.4 +(sd) +0.2 +(a) +0.0 +(b) +1.2 +0.8 +d +口 +1 +0.4 +0 +2 +0.0 +1.0 +1.2 +1.4 +1.6 +ne (1016 m*2)(a) +AlAs/GaAs +Si-S-doping +SPSL +dsi +GaAs SQW +SQW +AlAs/GaAs +↓ Si-8-doping +SPSL +(b) +AlAs +GaAs +Si- ++ ++ ++ +AIAs18 +Pyy +12 +1 +3 +2 +6 +4 +(a) +0 +0.0 +0.2 +0.4 +0.6 +0.8 +1.0 +B (T) +6 +(b) +3 +5 +4 +4 +3 +1 +口 +1 +A +2 +2 +2 +△ +3 +4 +0.0 +0.4 +0.8 +1.2 +1.6 +2.0 +1/B (1/T)60 +1 +40 +2 +3 +20 +4 +(a) +0 +0.0 +0.6 +1.2 +1.8 +B (T) +12 +(b) +2 +0 +0 +8 +口 +(sd) +1 +口 +4. +口 +1 +2 +0 +1.1 +1.2 +1.3 +1.4 +1.5 +ne (1016 m*2)Fig. 3. (a) Dependences of xx(B) and yy(B) measured on the L-shaped bridge at T = 4.2 K +(no. 2): 1, 2 – before illumination; 3, 4 - after short-term illumination by a red LED. (b) +Dependencies of tx(ne) and ty(ne). Squares and circles - experimental data: 1 - tx; 2 - ty. Solid +lines – calculation according to the formula: td  ne +1.5: 1 – tx; 2 – ty. +The experimental dependences qd(ne) for structure no. 2 (Fig. 4a) show that qd for different +crystallographic directions are equal with an accuracy of 5%, which agrees with [31]. The +experimental data are well described by formula (3) for the effective concentration of positively +charged Si donors calculated by formula (4). The agreement between the experimental +dependences qd(ne) and the calculated one indicates that the increase in the quantum lifetime of +electrons in a single GaAs quantum well after low-temperature illumination is due to a decrease +in neff +R. 
+6 + +0.6 +口1 +2 +0.4 +(sd) +0.2 +(a) +0.0 +(b) +1.2 +0.8 +d +口 +1 +0.4 +0 +2 +0.0 +1.0 +1.2 +1.4 +1.6 +ne (1016 m*2)(a) +AlAs/GaAs +Si-S-doping +SPSL +dsi +GaAs SQW +SQW +AlAs/GaAs +↓ Si-8-doping +SPSL +(b) +AlAs +GaAs +Si- ++ ++ ++ +AIAs18 +Pyy +12 +1 +3 +2 +6 +4 +(a) +0 +0.0 +0.2 +0.4 +0.6 +0.8 +1.0 +B (T) +6 +(b) +3 +5 +4 +4 +3 +1 +口 +1 +A +2 +2 +2 +△ +3 +4 +0.0 +0.4 +0.8 +1.2 +1.6 +2.0 +1/B (1/T)60 +1 +40 +2 +3 +20 +4 +(a) +0 +0.0 +0.6 +1.2 +1.8 +B (T) +12 +(b) +2 +0 +0 +8 +口 +(sd) +1 +口 +4. +口 +1 +2 +0 +1.1 +1.2 +1.3 +1.4 +1.5 +ne (1016 m*2)Fig. 4. (a) Dependences of qd(ne): squares are the experimental values of qy; circles – +experimental values of qx; the solid line is the calculation for neff +R = neff +R0fab. (b) Dependences of +neff +R and neff +R0fab on ne: squares and circles are the values of neff +R calculated from the experimental +values of qx and qy; solid line – neff +R0fab for neff +R0 = 1.261016 m-2, a = 1.371016 m-2 and b = +0.0821016 m-2. +Conclusion +The influence of illumination on the low-temperature transport in a 2D electron gas with +anisotropic mobility in selectively doped single GaAs quantum wells with short-period +AlAs/GaAs superlattice barriers in classically strong magnetic fields was studied. It has been +shown that, in the heterostructures under study, illumination by a red LED at low temperatures +leads to an increase in the concentration, mobility, and quantum lifetime of electrons. An +increase in the quantum lifetime of electrons in single GaAs quantum wells with modulated +superlattice doping after illumination is explained by a decrease in the effective concentration of +remote ionized donors. 
+7 + +0.6 +口1 +2 +0.4 +(sd) +0.2 +(a) +0.0 +(b) +1.2 +0.8 +d +口 +1 +0.4 +0 +2 +0.0 +1.0 +1.2 +1.4 +1.6 +ne (1016 m*2)(a) +AlAs/GaAs +Si-S-doping +SPSL +dsi +GaAs SQW +SQW +AlAs/GaAs +↓ Si-8-doping +SPSL +(b) +AlAs +GaAs +Si- ++ ++ ++ +AIAs18 +Pyy +12 +1 +3 +2 +6 +4 +(a) +0 +0.0 +0.2 +0.4 +0.6 +0.8 +1.0 +B (T) +6 +(b) +3 +5 +4 +4 +3 +1 +口 +1 +A +2 +2 +2 +△ +3 +4 +0.0 +0.4 +0.8 +1.2 +1.6 +2.0 +1/B (1/T)60 +1 +40 +2 +3 +20 +4 +(a) +0 +0.0 +0.6 +1.2 +1.8 +B (T) +12 +(b) +2 +0 +0 +8 +口 +(sd) +1 +口 +4. +口 +1 +2 +0 +1.1 +1.2 +1.3 +1.4 +1.5 +ne (1016 m*2)Funding +This work was supported by the Russian Foundation for Basic Research (project no. 20-02- +00309). +References +[1] H. Stormer, R. Dingle, A. Gossard, W. Wiegmann, and M. Sturge, Solid State Commun. 29, +705 (1979). +[2] E. F. Schubert, J. Knecht, and K. Ploog, J. Phys. C: Solid State Phys. 18, L215 (1985). +[3] R. G. Mani and J. R. Anderson, Phys. Rev. B 37, 4299(R) (1988). +[4] Loren Pfeiffer, K. W. West, H. L. Stormer, and K. W. Baldwin, Appl. Phys. Lett. 55, 1888 +(1989). +[5] M. Hayne, A. Usher, J. J. Harris, V. V. Moshchalkov, and C. T. Foxon, Phys. Rev. B 57, +14813 (1998). +[6] D. Weiss, K. v. Klitzing, K. Ploog, and G. Weimann, Europhys. Lett. 8, 179 (1989). +[7] C. Hnatovsky, M. A. Zudov, G. D. Austing, A. Bogan, S. J. Mihailov, M. Hilke, K. W. West, +L. N. Pfeiffer, and S. A. Studenikin, J. Appl. Phys. 132, 044301 (2022). +[8] R. J. Nelson, Appl. Phys. Lett. 31, 351 (1977). +[9] D. V. Lang, R. A. Logan, and M. Jaros, Phys. Rev. B 19, 1015 (1979). +[10] Toshio Baba, Takashi Mizutani, and Masaki Ogawa, Jpn. J. Appl. Phys. 22, L627 (1983). +[11] K. J. Friedland, R. Hey, H. Kostial, R. Klann, and K. Ploog, Phys. Rev. Lett. 77, 4616 +(1996). +[12] D. V. Dmitriev, I. S. Strygin, A. A. Bykov, S. Dietrich, and S. A. Vitkalov, JETP Lett. 95, +420 (2012). +[13] M. Sammon, M. A. Zudov, and B. I. Shklovskii, Phys. Rev. Materials 2, 064604 (2018). +[14] V. Umansky, M. Heiblum, Y. Levinson, J. Smet, J. Nübler, M. 
Dolev, J. Cryst. Growth 311, +1658 (2009). +[15] G. C. Gardner, S. Fallahi, J. D. Watson, M. J. Manfra, J. Cryst. Growth 441, 71 (2016). +[16] Y. J. Chung, K. A. Villegas Rosales, K.W. Baldwin, K.W. West, M. Shayegan, and L. N. +Pfeiffer, Phys. Rev. Materials 4, 044003 (2020). +[17] A. A. Bykov, I. V. Marchishin, A. K. Bakarov, Jing-Qiao Zhang and S. A. Vitkalov, JETP +Lett. 85, 63 (2007). +[18] A. A. Bykov, I. S. Strygin, I. V. Marchishin, and A. V. Goran, JETP Lett. 99, 303 (2014). +[19] A. A. Bykov, I. S. Strygin, E. E. Rodyakina, W. Mayer, and S. A. Vitkalov, JETP Lett. 101, +703 (2015). +[20] X. Fu, A. Riedl, M. Borisov, M. A. Zudov, J. D. Watson, G. Gardner, M. J. Manfra, K. W. +Baldwin, L. N. Pfeiffer, and K. W. West, Phys. Rev. B 98, 195403 (2018). +[21] A. A. Bykov, A. K. Bakarov, A. V. Goran, A. V. Latyshev, A. I. Toropov, JETP Lett. 74, +164 (2001). +8 + +0.6 +口1 +2 +0.4 +(sd) +0.2 +(a) +0.0 +(b) +1.2 +0.8 +d +口 +1 +0.4 +0 +2 +0.0 +1.0 +1.2 +1.4 +1.6 +ne (1016 m*2)(a) +AlAs/GaAs +Si-S-doping +SPSL +dsi +GaAs SQW +SQW +AlAs/GaAs +↓ Si-8-doping +SPSL +(b) +AlAs +GaAs +Si- ++ ++ ++ +AIAs18 +Pyy +12 +1 +3 +2 +6 +4 +(a) +0 +0.0 +0.2 +0.4 +0.6 +0.8 +1.0 +B (T) +6 +(b) +3 +5 +4 +4 +3 +1 +口 +1 +A +2 +2 +2 +△ +3 +4 +0.0 +0.4 +0.8 +1.2 +1.6 +2.0 +1/B (1/T)60 +1 +40 +2 +3 +20 +4 +(a) +0 +0.0 +0.6 +1.2 +1.8 +B (T) +12 +(b) +2 +0 +0 +8 +口 +(sd) +1 +口 +4. +口 +1 +2 +0 +1.1 +1.2 +1.3 +1.4 +1.5 +ne (1016 m*2)[22] K.-J. Friedland, R. Hey, O. Bierwagen, H. Kostial, Y. Hirayama, and K. H. Ploog, Physica +E 13, 642 (2002). +[23] Y. Tokura, T. Saku, S. Tarucha, and Y. Horikoshi, Phys. Rev. B 46 15558 (1992). +[24] M. D. Johnson, C. Orme, A. W. Hunt, D. Graff, J. Sudijono, L. M. Sander, and B. G. Orr, +Phys. Rev. Lett. 72, 116 (1994). +[25] I. M. Lifshits and A. M. Kosevich, Zh. Eksp. Teor. Fiz. 29, 730 (1955) [Sov. Phys. JETP 2, +636 (1956)]. +[26] A. Isihara and L. Smrcka, J. Phys. C: Solid State Phys. 19, 6777 (1986). +[27] P. T. Coleridge, R. Stoner, and R. Fletcher, Phys. 
Rev. B 39, 1120 (1989). +[28] P. T. Coleridge, Phys. Rev. B 44, 3793 (1991). +[29] S. D. Bystrov, A. M. Kreshchuk, S. V. Novikov, T. A. Polyanskaya, I. G. Savel'ev, Fiz. +Tekh. Poluprov. 27, 645 (1993) [Semiconductors 27, 358 (1993)]. +[30] S. D. Bystrov, A. M. Kreshchuk, L. Taun, S. V. Novikov, T. A. Polyanskaya, I. G. Savel’ev, +and A. Ya. Shik, Fiz. Tekh. Poluprov. 28, 91 (1994) [Semiconductors 28, 55 (1994)]. +[31] D. V. Nomokonov, A. K. Bakarov, A. A. Bykov, in the press. +[32] A. Gold, Phys. Rev. B 38, 10798 (1988). +[33] J. H. Davies, The Physics of Low Dimensional Semiconductors (Cambridge Univ. Press, +New York, 1998). +[34] I. A. Dmitriev, A. D. Mirlin, D. G. Polyakov, and M. A. Zudov, Rev. Mod. Phys. 84, 1709 +(2012). +[35] A. A. Bykov, I. S. Strygin, A. V. Goran, D. V. Nomokonov, and A. K. Bakarov, JETP Lett. +112, 437 (2020). +[36] M. G. Vavilov and I. L. Aleiner, Phys. Rev. B 69, 035303 (2004). +[37] Scott Dietrich, Sergey Vitkalov, D. V. Dmitriev, and A. A. Bykov, Phys. Rev. B 85, 115312 +(2012). +[38] A. A. Bykov, A. K. Bakarov, A. V. Goran, N. D. Aksenova, A. V. Popova, A. I. Toropov, +JETP Lett. 78, 134 (2003). +9 + +0.6 +口1 +2 +0.4 +(sd) +0.2 +(a) +0.0 +(b) +1.2 +0.8 +d +口 +1 +0.4 +0 +2 +0.0 +1.0 +1.2 +1.4 +1.6 +ne (1016 m*2)(a) +AlAs/GaAs +Si-S-doping +SPSL +dsi +GaAs SQW +SQW +AlAs/GaAs +↓ Si-8-doping +SPSL +(b) +AlAs +GaAs +Si- ++ ++ ++ +AIAs18 +Pyy +12 +1 +3 +2 +6 +4 +(a) +0 +0.0 +0.2 +0.4 +0.6 +0.8 +1.0 +B (T) +6 +(b) +3 +5 +4 +4 +3 +1 +口 +1 +A +2 +2 +2 +△ +3 +4 +0.0 +0.4 +0.8 +1.2 +1.6 +2.0 +1/B (1/T)60 +1 +40 +2 +3 +20 +4 +(a) +0 +0.0 +0.6 +1.2 +1.8 +B (T) +12 +(b) +2 +0 +0 +8 +口 +(sd) +1 +口 +4. 
+口 +1 +2 +0 +1.1 +1.2 +1.3 +1.4 +1.5 +ne (1016 m*2) \ No newline at end of file diff --git a/-9E2T4oBgHgl3EQfQgbg/content/tmp_files/load_file.txt b/-9E2T4oBgHgl3EQfQgbg/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..34c44934e8d6dd78967308f158eb7d82428fcc21 --- /dev/null +++ b/-9E2T4oBgHgl3EQfQgbg/content/tmp_files/load_file.txt @@ -0,0 +1,852 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf,len=851 +page_content='Influence of illumination on the quantum lifetime in selectively doped single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Bykov, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Nomokonov, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Goran, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Strygin, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' V.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Marchishin, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Bakarov Rzhanov Institute of Semiconductor Physics, Russian Academy of Sciences, Siberian Branch, Novosibirsk, 630090, Russia The influence of illumination on a high mobility two-dimensional electron gas with high concentration of charge carriers is studied in selectively doped single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers at a temperature T = 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='2 K in magnetic fields B < 2 T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' It is shown that illumination at low temperatures in the studied heterostructures leads to an increase in the concentration, mobility, and quantum lifetime of electrons.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' An increase in the quantum lifetime due to illumination of single GaAs quantum wells with modulated superlattice doping is explained by a decrease in the effective concentration of remote ionized donors.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Introduction Persistent photoconductivity (PPC), which occurs in selectively doped GaAs/AlGaAs heterostructures at low temperatures (T) as the result of visible light illumination, is widely used as a method for changing the concentration (ne), mobility (\uf06d) and quantum lifetime (\uf074q) of electrons in such two-dimensional (2D) systems [1-5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' PPC is also used in one-dimensional lateral superlattices based on high mobility selectively doped GaAs/AlGaAs heterostructures [6, 7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' One of the causes of PPC is the change in the charge state of DX centers in doped AlGaAs layers under illumination [8, 9].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' PPC is undesirable in high mobility heterostructures intended for the manufacturing of field-effect transistors, as it introduces instability into their performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' One of the ways to suppress PPC is to use short-period AlAs/GaAs superlattices as barriers to single GaAs quantum wells [10].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' In this case, the sources of free charge carriers are thin \uf064-doped GaAs layers located in short-period superlattice barriers in which DX centers do not appear.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Another motivation for remote superlattice doping of single GaAs quantum wells is the fabrication of 2D electronic systems with simultaneously high ne and \uf06d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' In selectively doped GaAs/AlGaAs heterostructures, to suppress the scattering of 2D electron gas on a random potential of ionized donors, the charge transfer region is separated from the doping region by an undoped AlGaAs layer (spacer) [4].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' High \uf06d in such a system is achieved due to a “thick” spacer (dS > 50 nm) with a relatively low concentration ne ~ 3\uf0b41015 m-2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' To implement high mobility 2D electron systems with a “thin” spacer (dS < 50 nm) and high ne, it was proposed in [11] to use short-period AlAs/GaAs superlattices as barriers to single GaAs quantum wells (Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' In this case, the suppression of scattering by ionized Si donors is achieved not only by separation of the regions of doping and transport, but also by the screening effect of X-electrons localized in AlAs layers [11-13].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 1 0.' 
(a) Schematic view of a single GaAs quantum well with side barriers of short-period AlAs/GaAs superlattices. (b) Enlarged view of a portion of the δ-doped layer in a narrow GaAs quantum well with adjacent AlAs layers.
Ellipses show compact dipoles formed by positively charged Si donors in the δ-doped layer and X-electrons in the AlAs layers [13].

Superlattice doping of single GaAs quantum wells is used not only to implement high mobility 2D electron systems with a thin spacer [11, 12], but also to achieve ultrahigh μ in 2D electron systems with a thick spacer [14-16]. In GaAs/AlAs heterostructures with modulated superlattice doping, PPC due to a change in the charge states of DX centers should not arise [10]. However, it has been found that in selectively doped single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers and a thin spacer, illumination increases ne and μ [17-19], and with a thick spacer it increases τq [20]. The increase in τq was explained by a redistribution of X-electrons in the AlAs layers adjacent to the thin δ-doped GaAs layers. However, the effect of illumination on τq in single GaAs quantum wells with a thin spacer and superlattice doping remains unexplored.
One of the features of GaAs/AlAs heterostructures with a thin spacer and superlattice doping grown by molecular beam epitaxy on (001) GaAs substrates is the anisotropy of μ [21]. In such structures, μy in the [-110] crystallographic direction can exceed μx in the [110] direction by several times [22]. The anisotropy of μ is due to scattering on heterointerface roughness oriented along the [-110] direction that arises during the growth of the heterostructures [23, 24].
This work studies the effect of illumination on a 2D electron gas with anisotropic μ in single GaAs quantum wells with a thin spacer and superlattice doping. It is established that illumination increases ne, μ, and τq in the heterostructures under study. It is shown that the increase in τq after illumination is due to a decrease in the effective concentration of remote ionized donors.

Quantum lifetime

The traditional method of measuring τq in a 2D electron gas is based on studying the dependence of the amplitude of the Shubnikov - de Haas (SdH) oscillations on the magnetic field (B) [25-30]. In 2D electron systems with isotropic μ, low-field SdH oscillations are described by the following relation [28]:

ρ^SdH = 4 ρ0 X(T) exp(-π/ωcτq) cos(2πεF/ħωc - π),   (1)

where ρ^SdH is the oscillating component of the dependence ρxx(B), ρ0 = ρxx(B = 0) is the Drude resistance, X(T) = (2π²kBT/ħωc)/sinh(2π²kBT/ħωc) is the thermal damping factor, ωc = eB/m* is the cyclotron frequency, and εF is the Fermi energy.
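The two damping factors in relation (1) can be evaluated numerically. The sketch below is a minimal illustration, not code from the paper; the GaAs effective mass m* = 0.067 m0 is an assumed standard value.

```python
import numpy as np

# Physical constants (SI units)
e = 1.602176634e-19                 # elementary charge, C
hbar = 1.054571817e-34              # reduced Planck constant, J s
kB = 1.380649e-23                   # Boltzmann constant, J/K
m_star = 0.067 * 9.1093837015e-31   # assumed GaAs effective mass, kg

def thermal_factor(B, T):
    """X(T) = x/sinh(x) with x = 2*pi^2*kB*T/(hbar*omega_c), as in eq. (1)."""
    omega_c = e * B / m_star
    x = 2 * np.pi**2 * kB * T / (hbar * omega_c)
    return x / np.sinh(x)

def sdh_envelope(B, T, tau_q, rho0=1.0):
    """Amplitude prefactor of eq. (1): 4*rho0*X(T)*exp(-pi/(omega_c*tau_q))."""
    omega_c = e * B / m_star
    return 4 * rho0 * thermal_factor(B, T) * np.exp(-np.pi / (omega_c * tau_q))

# Oscillations damp as B decreases, as T grows, and as tau_q shortens
print(thermal_factor(0.5, 4.2), sdh_envelope(0.5, 4.2, 1.4e-12))
```

At T = 4.2 K and B ~ 0.5 T both factors are well below unity, which is why the oscillations in Fig. 2a only become visible above about 0.5 T.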
Using the results of [26], it is easy to generalize (1) to a 2D system with anisotropic mobility μd. In this case, the normalized amplitude of the SdH oscillations is determined by the following expression [31]:

Ad^SdH = ρd^SdH/ρ0d X(T) = A0d^SdH exp(-π/ωcτqd),   (2)

where the index d corresponds to the mutually perpendicular principal directions x and y, and A0d^SdH = 4. The value of τq in single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers is determined mainly by small-angle scattering [11, 12]. In this case, τq can be expressed by the relation [32-34]:

τq ≈ τq^R = (2m*/πħ)(kFdR)/neff^R,   (3)

where τq^R is the quantum lifetime for scattering on the random potential of remote impurities, kF = (2πne)^1/2 is the Fermi wave vector, dR = dS + dSQW/2, dSQW is the thickness of the single GaAs quantum well, and neff^R is the effective concentration of remote ionized donors. The value of neff^R takes into account the change in the scattering potential of remote donors when they are bound to X-electrons (Fig. 1b) [13]. The dependence of neff^R on ne in the heterostructures under study is described by the following phenomenological relation [35]:

neff^R = neff^R0/{exp[(ne - a)/b] + 1} ≡ neff^R0 fab(ne),   (4)

where neff^R0, a, and b are fitting parameters, and fab is the fraction of ionized remote donors not bound with X-electrons into compact dipoles.
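Relations (3) and (4) combine into a simple numerical estimate of the remote-donor-limited quantum lifetime. The sketch below is illustrative only: the effective mass m* = 0.067 m0 is assumed, and the fitting parameters n_eff0, a, and b are placeholder values, not the paper's fitted ones.

```python
import numpy as np

hbar = 1.054571817e-34              # reduced Planck constant, J s
m_star = 0.067 * 9.1093837015e-31   # assumed GaAs effective mass, kg

def n_eff_remote(ne, n_eff0, a, b):
    """Eq. (4): effective density of remote ionized donors not bound into dipoles."""
    return n_eff0 / (np.exp((ne - a) / b) + 1.0)

def tau_q_remote(ne, dS, dSQW, n_eff):
    """Eq. (3): tau_q^R = (2m*/pi*hbar) * kF*dR / n_eff^R."""
    kF = np.sqrt(2 * np.pi * ne)    # Fermi wave vector, m^-1
    dR = dS + dSQW / 2              # effective setback of the doping layer, m
    return (2 * m_star / (np.pi * hbar)) * kF * dR / n_eff

# Placeholder parameters; ne, dS, dSQW taken as in structure 1 (Table 1)
n_eff = n_eff_remote(7.48e15, n_eff0=1e16, a=1.0e16, b=2e15)
print(tau_q_remote(7.48e15, 29.4e-9, 13e-9, n_eff))
```

With these hypothetical parameters the result lands on the sub-picosecond scale, the right order of magnitude for the τq values reported below.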
Samples under study and details of the experiment

The GaAs/AlAs heterostructures under study were grown by molecular beam epitaxy on semi-insulating GaAs (100) substrates. They were single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers [11, 12]. Two Si δ-doping layers, located at distances dS1 and dS2 from the upper and lower heterointerfaces of the GaAs quantum well, served as the sources of electrons. L-shaped bridges oriented along the [110] and [-110] directions were fabricated from the grown heterostructures by optical lithography and wet etching. The bridges were 100 µm long and 50 µm wide. The bridge resistance was measured with an alternating current Iac < 1 μA at a frequency fac ~ 0.5 kHz, at a temperature T = 4.2 K, in magnetic fields B < 2 T.
A red LED was used for illumination.

Table 1. Heterostructure parameters: dSQW is the quantum well thickness; dS = (dS1 + dS2)/2 is the average spacer thickness; nSi is the total concentration of remote Si donors in the thin δ-doped GaAs layers; ne is the electron concentration; μx is the mobility in the [110] direction; μy is the mobility in the [-110] direction; μy/μx is the mobility ratio. Asterisks mark values obtained after illumination.
Structure  dSQW (nm)  dS (nm)  nSi (10^16 m^-2)  ne (10^15 m^-2)  μy (m^2/V s)  μx (m^2/V s)  μy/μx
1          13         29.4     3.2               7.48 / 8.42*     124 / 206*    80.5 / 103*   1.54 / 2*
2          10         10.8     5                 11.5 / 14.5*     14.7 / 27.2*  9.33 / 18.6*  1.58 / 1.46*

Experimental results and discussion

Fig. 2a shows the experimental dependences of ρd(B) at T = 4.2 K for heterostructure no. 1 before illumination (curves 1 and 2) and after illumination (curves 3 and 4). For B > 0.5 T, SdH oscillations are observed, whose period in the inverse magnetic field decreased after illumination, indicating an increase in ne. After illumination, the values of ρ0d also decreased, which is due not only to the increase in ne but also to an increase in μd.
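The statement that a shorter SdH period in inverse field implies a larger ne follows from the standard relation for a spin-degenerate, single-subband 2D electron gas, ne = 2e/(h·Δ(1/B)). A minimal sketch of this conversion (not code from the paper):

```python
# Electron concentration from the SdH oscillation period in inverse magnetic field,
# assuming a spin-degenerate single-subband 2D electron gas.
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J s

def ne_from_sdh_period(delta_inv_B):
    """ne = 2e/(h * Delta(1/B)); delta_inv_B in 1/T, result in m^-2."""
    return 2 * e / (h * delta_inv_B)

# A shorter period (denser oscillations in 1/B) gives a larger ne
print(ne_from_sdh_period(0.065))   # ~7.4e15 m^-2, close to structure 1 before illumination
```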
Illumination also led to an increase in the positive magnetoresistance (MR) of the 2D electron gas, which indicates an increase in the quantum lifetime [36, 37]. The dependences of Ad^SdH on 1/B for structure no. 1 are shown in Fig. 2b. In accordance with formula (2), the slope of the dependences Ad^SdH(1/B) on a semilogarithmic scale is determined by the value of τqd. The decrease in slope after illumination indicates an increase in τqd. At the same time, the values of τqd measured in the [110] and [-110] directions are equal to within 5%.
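Extracting τqd from the semilogarithmic slope is the Dingle-plot procedure implied by formula (2): ln Ad^SdH = ln A0d^SdH - (πm*/eτqd)(1/B), so a linear fit of ln A versus 1/B yields τqd. A sketch with synthetic data, assuming m* = 0.067 m0:

```python
import numpy as np

e = 1.602176634e-19                 # elementary charge, C
m_star = 0.067 * 9.1093837015e-31   # assumed GaAs effective mass, kg

def tau_q_from_dingle(inv_B, A_norm):
    """Linear fit of ln A vs 1/B per eq. (2); returns (tau_q, A0)."""
    slope, intercept = np.polyfit(inv_B, np.log(A_norm), 1)
    return -np.pi * m_star / (e * slope), np.exp(intercept)

# Synthetic check: generate data with a known tau_q and recover it from the slope
tau_true = 1.44e-12                                 # ps-scale value, as in Fig. 2b
inv_B = np.linspace(0.5, 2.0, 20)                   # 1/T
A = 4.0 * np.exp(-np.pi * m_star / (e * tau_true) * inv_B)
tau_fit, A0_fit = tau_q_from_dingle(inv_B, A)
print(tau_fit, A0_fit)   # recovers ~1.44e-12 s and ~4.0
```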
Fig. 2. (a) Experimental dependences of ρd on B measured on an L-shaped bridge at T = 4.2 K before illumination (1, 2) and after illumination (3, 4) (structure no. 1): 1, 3 – ρxx(B); 2, 4 – ρyy(B).
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' The inset shows the geometry of the L-shaped bridge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' (b) Dependences of Ad SdH on 1/B before illumination (1, 2) and after illumination (3, 4).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Symbols are experimental data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Solid lines – calculation by formula (2): 1 – A0x SdH = 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='02;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' \uf074qx = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='44 ps;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 2 – A0y SdH = 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='57;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' \uf074qy = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='38 ps;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 3 – A0x SdH = 6.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='29;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' \uf074qx = 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='72 ps;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 4 – A0y SdH = 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='66;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' \uf074qy = 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='01 ps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 3a shows the experimental dependences of \uf072d(B) at T = 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='2 K for heterostructure no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 2 before illumination (curves 1 and 2) and after illumination (curves 3 and 4).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' For this structure, as well as for structure no.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 1, short-term illumination at low temperature leads to an increase in ne and \uf06dd.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' However, for structure no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 2, in contrast to no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 1, the dependences \uf072xx(B) do not show quantum positive MR, while a classical negative MR is observed [38], which decreases significantly after illumination.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Dependences \uf074td(ne) are presented in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 3b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' These dependences are not described by the theory [32], which takes into account only the change in kF with increasing ne, which is due to the change in neff R after illumination.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A similar behavior of \uf074td on ne is also observed when the concentration of the 2D electron gas is changed using a Schottky gate [12, 35].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 5 0.' 
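Formula (2) itself is not reproduced in this excerpt, but the quoted fit parameters (A0SdH, τq) follow the standard Dingle analysis, in which the SdH amplitude decays as exp(-π/ωcτq) with ωc = eB/m*, so ln A is linear in 1/B. A minimal sketch of extracting τq that way, assuming the GaAs effective mass m* = 0.067 m0 (the function name and the synthetic data are illustrative, not the authors' code):

```python
import numpy as np

# Physical constants (SI)
E_CHARGE = 1.602176634e-19        # elementary charge, C
M_E = 9.1093837015e-31            # electron mass, kg
M_STAR = 0.067 * M_E              # GaAs effective mass (assumed)

def tau_q_from_dingle(inv_B, amplitude, m_star=M_STAR):
    """Extract the quantum lifetime tau_q from SdH oscillation amplitudes.

    Assumes the standard Dingle form A(B) = A0 * exp(-pi / (omega_c * tau_q))
    with omega_c = e*B/m*, so ln A is linear in 1/B with slope
    -pi * m* / (e * tau_q).
    """
    slope, intercept = np.polyfit(inv_B, np.log(amplitude), 1)
    tau_q = -np.pi * m_star / (E_CHARGE * slope)
    return tau_q, np.exp(intercept)

# Synthetic check with the pre-illumination x-direction values quoted
# for Fig. 2b (A0xSdH = 5.02, tau_qx = 1.44 ps)
inv_B = np.linspace(0.5, 2.0, 20)                # 1/B grid, 1/T
tau_q_true, A0_true = 1.44e-12, 5.02
amp = A0_true * np.exp(-np.pi * M_STAR / (E_CHARGE * tau_q_true) * inv_B)

tau_q_fit, A0_fit = tau_q_from_dingle(inv_B, amp)
```

For noiseless synthetic data the linear fit recovers the input τq and A0 exactly; for measured amplitudes the same fit gives the Dingle-plot slopes shown as solid lines in Fig. 2b.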
Fig. 3. (a) Dependences of ρxx(B) and ρyy(B) measured on the L-shaped bridge at T = 4.2 K (no. 2): 1, 2 – before illumination; 3, 4 – after short-term illumination by a red LED. (b) Dependences of τtx(ne) and τty(ne). Squares and circles are experimental data: 1 – τtx; 2 – τty. Solid lines are calculations according to the formula τtd ∝ ne^1.5: 1 – τtx; 2 – τty.

The experimental dependences τqd(ne) for structure no. 2 (Fig. 4a) show that the values of τqd along different crystallographic directions are equal to within 5%, which agrees with [31].
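The ne^1.5 scaling used for the solid lines in Fig. 3b implies a simple rule of thumb for how much the transport time grows as illumination raises the density. A short sketch under that assumed power law (the function and the endpoint densities, taken from the Fig. 3b axis range, are illustrative):

```python
def transport_time_ratio(ne_before, ne_after, exponent=1.5):
    """Ratio of transport scattering times under tau_t ∝ ne**exponent.

    The exponent 3/2 is the one quoted for the solid lines in Fig. 3b,
    consistent with small-angle scattering off remote ionized donors.
    """
    return (ne_after / ne_before) ** exponent

# Illumination raises ne from ~1.1e16 to ~1.5e16 m^-2 (Fig. 3b axis range)
ratio = transport_time_ratio(1.1e16, 1.5e16)   # ≈ 1.59
```

So a ~36% density increase alone would raise τtd by roughly 60% under this scaling; the measured points deviating from it is what motivates invoking the change in neffR.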
The experimental data are well described by formula (3) with the effective concentration of positively charged Si donors calculated by formula (4). The agreement between the experimental and calculated dependences τqd(ne) indicates that the increase in the quantum lifetime of electrons in a single GaAs quantum well after low-temperature illumination is due to a decrease in neffR.

Fig. 4. (a) Dependences of τqd(ne): squares are the experimental values of τqy; circles are the experimental values of τqx; the solid line is the calculation for neffR = neffR0·fab. (b) Dependences of neffR and neffR0·fab on ne: squares and circles are the values of neffR calculated from the experimental values of τqx and τqy; the solid line is neffR0·fab for neffR0 = 1.26×10^16 m-2, a = 1.37×10^16 m-2, and b = 0.082×10^16 m-2.
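Formulas (3) and (4) are not reproduced in this excerpt, but the stated connection can be seen from the generic fact that scattering rates from independent ionized donors add linearly, so at fixed electron density 1/τq should scale with neffR. A sketch under that assumption alone (the function name and the use of the structure no. 1 fit values are illustrative):

```python
def neff_ratio_from_tau_q(tau_q_before, tau_q_after):
    """Infer the relative change of the remote-donor concentration neffR.

    Assumes 1/tau_q ∝ neffR at fixed electron density (rates from
    independent ionized donors add linearly), so
    neffR(after) / neffR(before) = tau_q(before) / tau_q(after).
    """
    return tau_q_before / tau_q_after

# Fig. 2b fit values for structure no. 1 (x direction):
# tau_qx = 1.44 ps before and 2.72 ps after illumination.
ratio = neff_ratio_from_tau_q(1.44e-12, 2.72e-12)   # ≈ 0.53
```

On this estimate, illumination roughly halves the effective remote-donor concentration seen by the 2D electrons, in line with the decrease of neffR plotted in Fig. 4b.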
Conclusion

The influence of illumination on the low-temperature transport in a 2D electron gas with anisotropic mobility in selectively doped single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers was studied in classically strong magnetic fields. It has been shown that, in the heterostructures under study, illumination by a red LED at low temperatures leads to an increase in the concentration, mobility, and quantum lifetime of electrons. The increase in the quantum lifetime of electrons in single GaAs quantum wells with modulated superlattice doping after illumination is explained by a decrease in the effective concentration of remote ionized donors.

Funding

This work was supported by the Russian Foundation for Basic Research (project no. 20-02-00309).

References

[1] H. Stormer, R. Dingle, A. Gossard, W. Wiegmann, and M. Sturge, Solid State Commun. 29, 705 (1979).
[2] E. F. Schubert, J. Knecht, and K. Ploog, J. Phys. C: Solid State Phys. 18, L215 (1985).
[3] R. G. Mani and J. R. Anderson, Phys. Rev. B 37, 4299(R) (1988).
[4] L. Pfeiffer, K. W. West, H. L. Stormer, and K. W. Baldwin, Appl. Phys. Lett. 55, 1888 (1989).
[5] M. Hayne, A. Usher, J. J. Harris, V. V. Moshchalkov, and C. T. Foxon, Phys. Rev. B 57, 14813 (1998).
[6] D. Weiss, K. v. Klitzing, K. Ploog, and G.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Weimann, Europhys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 8, 179 (1989).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' [7] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Hnatovsky, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Zudov, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Austing, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Bogan, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Mihailov, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Hilke, K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' West, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Pfeiffer, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Studenikin, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Appl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 132, 044301 (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' [8] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Nelson, Appl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Lett.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 31, 351 (1977).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' [9] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Lang, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Logan, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Jaros, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' B 19, 1015 (1979).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' [10] Toshio Baba, Takashi Mizutani, and Masaki Ogawa, Jpn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Appl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Phys.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 22, L627 (1983).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' [11] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Friedland, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Hey, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Kostial, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Klann, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Ploog, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 77, 4616 (1996).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' [12] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' V.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Dmitriev, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Strygin, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Bykov, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Dietrich, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Vitkalov, JETP Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 95, 420 (2012).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' [13] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Sammon, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Zudov, and B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Shklovskii, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Materials 2, 064604 (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' [14] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Umansky, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Heiblum, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Levinson, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Smet, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Nübler, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Dolev, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Cryst.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Growth 311, 1658 (2009).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' [15] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Gardner, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Fallahi, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Watson, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Manfra, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Cryst.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Growth 441, 71 (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' [16] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Chung, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Villegas Rosales, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Baldwin, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' West, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Shayegan, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Pfeiffer, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Materials 4, 044003 (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' [17] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Bykov, I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Marchishin, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Bakarov, Jing-Qiao Zhang and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Vitkalov, JETP Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 85, 63 (2007).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' [18] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Bykov, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Strygin, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' V.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Marchishin, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Goran, JETP Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 99, 303 (2014).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' [19] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Bykov, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Strygin, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Rodyakina, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Mayer, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Vitkalov, JETP Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 101, 703 (2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' [20] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Fu, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Riedl, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Borisov, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Zudov, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Watson, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Gardner, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Manfra, K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Baldwin, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Pfeiffer, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' West, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' B 98, 195403 (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' [21] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Bykov, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Bakarov, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' V.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Goran, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Latyshev, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' Toropov, JETP Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 74, 164 (2001).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='6 口1 2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='4 (sd) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='2 (a) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='0 (b) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='8 d 口 1 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='4 0 2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='2 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='4 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='6 ne (1016 m*2)(a) AlAs/GaAs Si-S-doping SPSL dsi GaAs SQW SQW AlAs/GaAs ↓ Si-8-doping SPSL (b) AlAs GaAs Si- + + + AIAs18 Pyy 12 1 3 2 6 4 (a) 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='8 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='0 B (T) 6 (b) 3 5 4 4 3 1 口 1 A 2 2 2 △ 3 4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='0 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='8 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='2 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='6 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='0 1/B (1/T)60 1 40 2 3 20 4 (a) 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='6 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='2 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='8 B (T) 12 (b) 2 0 0 8 口 (sd) 1 口 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content=' 口 1 2 0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='1 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='2 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9E2T4oBgHgl3EQfQgbg/content/2301.03772v1.pdf'} +page_content='3 1.' 
+[22] K.-J. Friedland, R. Hey, O. Bierwagen, H. Kostial, Y. Hirayama, and K. H. Ploog, Physica E 13, 642 (2002).
+[23] Y. Tokura, T. Saku, S. Tarucha, and Y. Horikoshi, Phys. Rev. B 46, 15558 (1992).
+[24] M. D. Johnson, C. Orme, A. W. Hunt, D. Graff, J. Sudijono, L. M. Sander, and B. G. Orr, Phys. Rev. Lett. 72, 116 (1994).
+[25] I. M. Lifshits and A. M. Kosevich, Zh. Eksp. Teor. Fiz. 29, 730 (1955) [Sov. Phys. JETP 2, 636 (1956)].
+[26] A. Isihara and L. Smrcka, J. Phys. C: Solid State Phys. 19, 6777 (1986).
+[27] P. T. Coleridge, R. Stoner, and R. Fletcher, Phys. Rev. B 39, 1120 (1989).
+[28] P. T. Coleridge, Phys. Rev. B 44, 3793 (1991).
+[29] S. D. Bystrov, A. M. Kreshchuk, S. V. Novikov, T. A. Polyanskaya, and I. G. Savel'ev, Fiz. Tekh. Poluprov. 27, 645 (1993) [Semiconductors 27, 358 (1993)].
+[30] S. D. Bystrov, A. M. Kreshchuk, L. Taun, S. V. Novikov, T. A. Polyanskaya, I. G. Savel'ev, and A. Ya. Shik, Fiz. Tekh. Poluprov. 28, 91 (1994) [Semiconductors 28, 55 (1994)].
+[31] D. V. Nomokonov, A. K. Bakarov, and A. A. Bykov, in press.
+[32] A. Gold, Phys. Rev. B 38, 10798 (1988).
+[33] J. H. Davies, The Physics of Low-Dimensional Semiconductors (Cambridge Univ. Press, New York, 1998).
+[34] I. A. Dmitriev, A. D. Mirlin, D. G. Polyakov, and M. A. Zudov, Rev. Mod. Phys. 84, 1709 (2012).
+[35] A. A. Bykov, I. S. Strygin, A. V. Goran, D. V. Nomokonov, and A. K. Bakarov, JETP Lett. 112, 437 (2020).
+[36] M. G. Vavilov and I. L. Aleiner, Phys. Rev. B 69, 035303 (2004).
+[37] S. Dietrich, S. Vitkalov, D. V. Dmitriev, and A. A. Bykov, Phys. Rev. B 85, 115312 (2012).
+[38] A. A. Bykov, A. K. Bakarov, A. V. Goran, N. D. Aksenova, A. V. Popova, and A. I. Toropov, JETP Lett. 78, 134 (2003).
diff --git a/-tAzT4oBgHgl3EQfhPwa/content/tmp_files/2301.01480v1.pdf.txt b/-tAzT4oBgHgl3EQfhPwa/content/tmp_files/2301.01480v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..acbc4a66aff7ce94b37b1883c13f184edb88c8a3
--- /dev/null
+++ b/-tAzT4oBgHgl3EQfhPwa/content/tmp_files/2301.01480v1.pdf.txt
@@ -0,0 +1,1864 @@
+A new over-dispersed count model
+Anupama Nandi, Subrata Chakraborty, Aniket Biswas
+Dibrugarh University
+January 5, 2023
+Abstract
+A new two-parameter discrete distribution, namely the PoiG distribution, is derived as the convolution of a Poisson variate and an independently distributed geometric random variable. This distribution generalizes both the Poisson and geometric distributions and can be used for modelling over-dispersed as well as equi-dispersed count data. A number of important statistical properties of the proposed count model are derived, such as the probability generating function, the moment generating function, the moments, the survival function and the hazard rate function. Monotonic properties, such as log-concavity and stochastic ordering, are also investigated in detail. Method-of-moments and maximum likelihood estimators of the parameters of the proposed model are presented. It is envisaged that the proposed distribution may prove useful to practitioners for modelling over-dispersed count data, compared to its closest competitors.
+Keywords Geometric distribution; Poisson distribution; Conway-Maxwell Poisson distribution; BerG distribution; BerPoi distribution; incomplete gamma function.
+MSC 2010 60E05, 62E15.
+arXiv:2301.01480v1 [stat.ME] 4 Jan 2023
+1 Introduction
+The phenomenon of the variance of count data exceeding its mean is commonly termed over-dispersion in the literature. Over-dispersion is relevant in many modelling applications, and it is encountered more often than under-dispersion or equi-dispersion. A number of count models are available in the literature for over-dispersed data. However, the addition of a simple yet adequate model is of importance, given the ongoing research interest in this direction ([37], [25], [32], [35], [30], [29], [9], [19], [26], [34], [5], [2] and [36]). The simplest and most common count data model is the Poisson distribution. Its equi-dispersion characteristic is well known. This is a limitation of the Poisson model, and to overcome this issue, several alternatives have been developed and used for their obvious advantage over the classical Poisson model. Notable among these distributions are the hyper-Poisson (HP) of Bardwell and Crow [6], the generalized Poisson distribution of Jain and Consul [20], the double-Poisson of Efron [16], the weighted Poisson of Castillo and Pérez-Casany [15], the weighted generalized Poisson distribution of Chakraborty [10], the Mittag-Leffler function distribution of Chakraborty and Ong [13] and the popular COM-Poisson distribution of Shmueli et al. [31]. The COM-Poisson distribution generalizes the Poisson, geometric and Bernoulli distributions. The classical geometric and negative binomial models are also used for over-dispersed count datasets. The gamma mixture of the Poisson distribution generates the negative binomial distribution [17]. Thus, unlike the Poisson distribution, these two count models possess the over-dispersion characteristic. Consequently, several extensions of the geometric distribution have been introduced in the literature for over-dispersed count data modelling ([11], [12], [18], [20], [22], [27], [28], and [33] among others).
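+The dispersion terminology used above can be checked numerically with the variance-to-mean ratio (dispersion index). The sketch below is ours, purely for illustration; the function name and the sample data are hypothetical, not from the paper.

```python
# Dispersion index DI = variance / mean of a count sample:
# DI > 1 -> over-dispersed, DI = 1 -> equi-dispersed, DI < 1 -> under-dispersed.

def dispersion_index(counts):
    n = len(counts)
    mean = sum(counts) / n
    var = sum((x - mean) ** 2 for x in counts) / n  # population variance
    return var / mean

# Hypothetical over-dispersed sample with a heavy right tail.
sample = [0, 0, 1, 1, 2, 2, 3, 5, 8, 13]
print(dispersion_index(sample) > 1)  # True
```

+For this sample the mean is 3.5 while the variance is 15.45, so the index is well above one, the regime the PoiG distribution is designed for.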
The two most widely used distributions for over-dispersed data are, of course, the negative binomial and the COM-Poisson. As pointed out earlier, there is still plenty of opportunity for developing new discrete distributions with a simple structure and explicit interpretation, appropriate for over-dispersed data.
+Recently, Bourguignon et al. introduced the BerG distribution [8], constructed as the convolution of a Bernoulli random variable and a geometric random variable. In a very recent publication, Bourguignon et al. introduced the BerPoi distribution [7] from a similar motivation; it is the convolution of a Bernoulli random variable and a Poisson random variable. The first is capable of modelling over-dispersed, under-dispersed and equi-dispersed data, whereas the second is efficient for modelling under-dispersed data. This approach is simple and has enormous potential. Here we use this idea to develop a novel over-dispersed count model.
+In this article, we propose a new discrete distribution derived from the convolution of two independent count random variables, one Poisson and the other geometric. Hence we identify the proposed model as PoiG. This two-parameter distribution has many advantages. Structural simplicity is one of them: it is easy to comprehend, unlike the COM-Poisson distribution, which involves a difficult normalising constant in its probability mass function. A model with closed-form expressions for the mean and the variance is well suited for regression modelling. Unlike those of the COM-Poisson distribution, the mean and variance of the proposed distribution can be written in closed form. The proposed distribution extends both the Poisson and geometric distributions.
+The rest of the article is organized as follows. In Section 2, we present the PoiG distribution.
In Section 3, we describe its important statistical properties, such as the recurrence relation, generating functions, moments, dispersion index, mode, reliability properties, monotonic properties and stochastic ordering. In Section 4, we present the moment and maximum likelihood methods of parameter estimation. We conclude the article with a few limitations and future scopes of the current study.

2 The PoiG distribution

In this section, we introduce a novel discrete distribution by considering two independent discrete random variables Y1 and Y2. Let us denote the set of non-negative integers {0, 1, 2, ...} by N0. Also, let Y1 follow the Poisson distribution with mean λ > 0, and let Y2 follow the geometric distribution with success probability θ ∈ (0, 1), so that Y2 has mean (1 − θ)/θ. Both Y1 and Y2 have the same support N0. For convenience, we write Y1 ∼ P(λ) and Y2 ∼ G(θ). Consider Y = Y1 + Y2. Then,

\Pr(Y = y) = \sum_{i=0}^{y} \Pr(Y_1 = i)\,\Pr(Y_2 = y - i)
           = \sum_{i=0}^{y} \frac{e^{-\lambda}\lambda^{i}}{i!}\,\theta(1-\theta)^{y-i}
           = \theta(1-\theta)^{y} e^{-\lambda} \sum_{i=0}^{y} \frac{1}{i!}\left(\frac{\lambda}{1-\theta}\right)^{i}, \qquad y = 0, 1, 2, \ldots.   (1)

The distribution in (1), being the convolution of the Poisson and the geometric, is named the PoiG distribution, and we write Y ∼ PoiG(λ, θ). Thus, the probability mass function (pmf) of PoiG(λ, θ) can be written as

p_Y(y) = \frac{\theta(1-\theta)^{y}}{\Gamma(y+1)} \exp\left(\frac{\lambda\theta}{1-\theta}\right) \Gamma\left(y+1, \frac{\lambda}{1-\theta}\right), \qquad y = 0, 1, 2, \ldots.   (2)

Figure 1 exhibits the nature of the pmf for different choices of (λ, θ). The cumulative distribution function (cdf) of the PoiG distribution is

F_Y(y) = \Pr(Y_1 + Y_2 \le y)
       = \sum_{y_1=0}^{y} F_G(y - y_1)\, p_{Y_1}(y_1)
       = \sum_{y_1=0}^{y} \left(1 - (1-\theta)^{y-y_1+1}\right) p_{Y_1}(y_1)
       = \sum_{y_1=0}^{y} \frac{e^{-\lambda}\lambda^{y_1}}{y_1!} - (1-\theta)^{y+1} e^{-\lambda} \sum_{y_1=0}^{y} \frac{1}{y_1!}\left(\frac{\lambda}{1-\theta}\right)^{y_1},   (3)

where FG denotes the cdf of G(θ) and p_{Y_1} the pmf of P(λ). An explicit expression for (3) is given by

F_Y(y) = \frac{\Gamma(y+1, \lambda)}{\Gamma(y+1)} - \frac{(1-\theta)^{y+1}}{\Gamma(y+1)} \exp\left(\frac{\lambda\theta}{1-\theta}\right) \Gamma\left(y+1, \frac{\lambda}{1-\theta}\right), \qquad y = 0, 1, 2, \ldots.   (4)

Figure 2 exhibits the nature of the cdf for different choices of (λ, θ). The mean and variance of the PoiG(λ, θ) distribution are given by

E(Y) = \mu = \lambda + \frac{1-\theta}{\theta} \qquad \text{and} \qquad V(Y) = \sigma^2 = \lambda + \frac{1-\theta}{\theta^2}.   (5)

Special cases

• For λ → 0, PoiG(λ, θ) behaves like G(θ).
• For θ → 1, PoiG(λ, θ) behaves like P(λ).

Remark 1

• The upper incomplete gamma function [1] is defined as \Gamma(n, x) = \int_x^\infty t^{n-1} e^{-t}\,dt. For a positive integer n and any x, it can be rewritten as \Gamma(n, x) = (n-1)!\sum_{k=0}^{n-1} \frac{e^{-x} x^k}{k!}. Thus the incomplete gamma function in (2) can be rewritten as

\Gamma\left(y+1, \frac{\lambda}{1-\theta}\right) = \Gamma(y+1) \sum_{i=0}^{y} \frac{1}{\Gamma(i+1)} \exp\left(-\frac{\lambda}{1-\theta}\right)\left(\frac{\lambda}{1-\theta}\right)^{i},

where Γ(y + 1) = y! and Γ(i + 1) = i!.

• F_Y(0) = p_Y(0) = \theta e^{-\lambda}. Thus, the proportion of zeros under the PoiG distribution tends to θ as λ → 0 and to zero as λ → ∞.

3 Properties of the PoiG distribution

In this section, we explore several important statistical properties of the proposed PoiG(λ, θ) distribution. The distributional properties studied here are the recurrence relation, probability generating function (pgf), moment generating function (mgf), characteristic function (cf), cumulant generating function (cgf), moments, and the coefficients of skewness and kurtosis. We also study reliability properties such as the survival function and the hazard rate function. Log-concavity and stochastic ordering of the proposed model are also investigated.

3.1 Recurrence relation

A probability recurrence relation gives each successive mass in terms of the preceding one, and it usually proves advantageous in computing the masses at different values. Note that

p_Y(y) = \frac{\theta(1-\theta)^{y}}{\Gamma(y+1)} \exp\left(\frac{\lambda\theta}{1-\theta}\right) \Gamma\left(y+1, \frac{\lambda}{1-\theta}\right)
       = \theta(1-\theta)^{y} e^{-\lambda} \sum_{i=0}^{y} \frac{1}{\Gamma(i+1)}\left(\frac{\lambda}{1-\theta}\right)^{i}
       = \theta(1-\theta)^{y} e^{-\lambda} s_y.

Figure 1: Probability mass function of PoiG(λ, θ) for λ ∈ {0, 0.5, 5, 10} and θ ∈ {0.2, 0.4, 0.6, 0.8}.
The (i, j)th plot corresponds to the ith value of λ and the jth value of θ, for i, j = 1, 2, 3, 4.

Figure 2: Cumulative distribution function of PoiG(λ, θ) for λ ∈ {0, 0.5, 5, 10} and θ ∈ {0.2, 0.4, 0.6, 0.8}. The (i, j)th plot corresponds to the ith value of λ and the jth value of θ, for i, j = 1, 2, 3, 4.

Here,

s_y = \sum_{i=0}^{y} \frac{1}{\Gamma(i+1)}\left(\frac{\lambda}{1-\theta}\right)^{i} \qquad \text{and} \qquad s_{y+1} = s_y + \frac{1}{\Gamma(y+2)}\left(\frac{\lambda}{1-\theta}\right)^{y+1}.
Now,

p_Y(y+1) = \theta(1-\theta)^{y+1} e^{-\lambda} s_{y+1}
         = \theta(1-\theta)^{y+1} e^{-\lambda}\left[s_y + \frac{1}{\Gamma(y+2)}\left(\frac{\lambda}{1-\theta}\right)^{y+1}\right]
         = (1-\theta)\,p_Y(y) + \theta e^{-\lambda} \frac{\lambda^{y+1}}{\Gamma(y+2)}.   (6)

This is the recurrence formula of the PoiG distribution. It is easy to check that

\frac{s_{y+1}}{s_y} = 1 + \frac{1}{s_y\,\Gamma(y+2)}\left(\frac{\lambda}{1-\theta}\right)^{y+1} \longrightarrow 1 \quad \text{as } y \to \infty,

and

\frac{p_Y(y+1)}{p_Y(y)} = (1-\theta) + \frac{\theta e^{-\lambda}}{p_Y(y)}\,\frac{\lambda^{y+1}}{\Gamma(y+2)} \longrightarrow 1-\theta \quad \text{as } y \to \infty.   (7)

From (7), it is clear that the behaviour of the tail of the distribution depends on θ. When θ → 0, the tail of the distribution decays relatively slowly, which implies a long tail. When θ → 1, the tail decays fast, which implies a short tail. This can easily be verified from Figure 1.

3.2 Generating functions

We use the notation H to denote a pgf, with the corresponding random variable in the subscript. For Y1 ∼ P(λ) and Y2 ∼ G(θ),

H_{Y_1}(s) = e^{\lambda(s-1)} \qquad \text{and} \qquad H_{Y_2}(s) = \frac{\theta}{1-(1-\theta)s}.

Now, by using the convolution property of the probability generating function, we obtain the pgf of PoiG(λ, θ) as

H_Y(s) = \frac{\theta e^{\lambda(s-1)}}{1 - s + \theta s}.   (8)

Similar methods are used to obtain the other generating functions, including the mgf M_Y(t), the cf φ_Y(t) and the cgf K_Y(t). These are given below.

M_Y(t) = \frac{\theta e^{\lambda(e^t-1)}}{1-(1-\theta)e^t}   (9)

\varphi_Y(t) = \frac{\theta e^{\lambda(e^{it}-1)}}{1-(1-\theta)e^{it}}   (10)

K_Y(t) = \lambda(e^t - 1) + \log\left(\frac{\theta}{1-(1-\theta)e^t}\right)   (11)

Let us discuss some useful definitions and notation for Result 1 given below. The notation G(θ) has already been introduced in Section 2. Let R be the number of failures preceding the first success in a sequence of independent Bernoulli trials. If the probability of success is θ ∈ (0, 1), then R is said to follow G(θ). Suppose instead that we wait for the rth success. Then the number of failures is a negative binomial random variable with index r and parameter θ; let NB(r, θ) denote this distribution. Suppose Ri ∼ G(θ) independently for i = 1, 2, ..., r, and S ∼ NB(r, θ). Then S = R1 + R2 + ... + Rr in distribution.
Thus, it is clear that G(θ) is the particular case of NB(r, θ) with r = 1. Similar to the genesis of the PoiG model, adding a Poisson random variable and an independently distributed negative binomial random variable yields a generalization of the PoiG model; an appropriate notation for this distribution would be PoiNB. The objective of the current work is not to study this three-parameter distribution in detail. However, the following result establishes that sums of PoiG variables fall in the PoiNB family, which may prove to be a motivation for generalizing the proposed model to PoiNB in future.

Result 1 The sum of n independent PoiG random variables with a common θ follows a PoiNB distribution. Mathematically, if Yi ∼ PoiG(λi, θ) independently for i = 1, 2, ..., n, then

\sum_{i=1}^{n} Y_i \sim \mathrm{PoiNB}\left(\sum_{i=1}^{n}\lambda_i,\, n,\, \theta\right).

Proof of Result 1 From (8), the pgf of Yi ∼ PoiG(λi, θ) is

H_{Y_i}(s) = \frac{\theta e^{\lambda_i(s-1)}}{1 - s + \theta s}

for i = 1, 2, ..., n. We can derive the pgf of the sum of n independent PoiG(λi, θ) variates using the convolution property of the pgf. Let Z = Y1 + Y2 + ... + Yn. Then,

H_Z(s) = \prod_{i=1}^{n} H_{Y_i}(s) = \frac{\theta^n}{(1 - s + \theta s)^n}\, e^{\sum_{i=1}^{n}\lambda_i(s-1)}.   (12)

The term θ^n/(1 − s + θs)^n in (12) is the pgf of NB(n, θ), which is a generalisation of the geometric distribution, and e^{\sum_{i=1}^{n}\lambda_i(s-1)} is the pgf of P(\sum_{i=1}^{n}\lambda_i). Thus \sum_{i=1}^{n} Y_i \sim \mathrm{PoiNB}(\sum_{i=1}^{n}\lambda_i, n, \theta).

3.3 Moments and related concepts

The rth order raw moment of Y ∼ PoiG(λ, θ) can be obtained from the raw moments of Y1 ∼ P(λ) and Y2 ∼ G(θ) as follows:

E(Y^r) = E\left[\sum_{j=0}^{r}\binom{r}{j} Y_1^{\,j}\, Y_2^{\,r-j}\right] = \sum_{j=0}^{r}\binom{r}{j} E(Y_1^{\,j})\, E(Y_2^{\,r-j}).

Note that

E(Y_1^{\,j}) = \sum_{y_1=0}^{\infty} y_1^{\,j}\,\frac{e^{-\lambda}\lambda^{y_1}}{y_1!} = \sum_{k=0}^{j} S(j,k)\,\lambda^{k} = \phi_j(\lambda).

Here, S(j, k) is the Stirling number of the second kind [1] and φj(λ) is the Bell polynomial [24].
Again,

E(Y_2^{\,r-j}) = \sum_{y_2=0}^{\infty} y_2^{\,r-j}\,\theta(1-\theta)^{y_2} = \theta\,\mathrm{Li}_{-(r-j)}(1-\theta),

where Li_{−(r−j)}(1 − θ) is the polylogarithm of negative integer order [14]. Hence

E(Y^r) = \sum_{j=0}^{r}\binom{r}{j}\,\phi_j(\lambda)\,\theta\,\mathrm{Li}_{-(r-j)}(1-\theta).   (13)

The rth order raw moment can also be calculated by differentiating the mgf in (9) r times with respect to t and setting t = 0. That is,

E(Y^r) = M_Y^{(r)}(0) = \frac{d^r}{dt^r}\left[M_Y(t)\right]_{t=0}.

Explicit expressions for the first four raw moments are listed below.

E(Y) = \lambda + \frac{1-\theta}{\theta}   (14)

E(Y^2) = \frac{1}{\theta^2}\left[\theta^2(\lambda^2 - \lambda + 1) + \theta(2\lambda - 3) + 2\right]   (15)

E(Y^3) = \frac{1}{\theta^3}\left[\theta^3(\lambda^3 + \lambda - 1) + \theta^2(3\lambda^2 - 6\lambda + 7) + \theta(6\lambda - 12) + 6\right]   (16)

E(Y^4) = \frac{1}{\theta^4}\left[\theta^4(\lambda^4 + 2\lambda^3 + \lambda^2 - \lambda + 1) + \theta^3(4\lambda^3 - 6\lambda^2 + 14\lambda - 15) + 2\theta^2(6\lambda^2 - 18\lambda + 25) + 12\theta(2\lambda - 5) + 24\right]   (17)

Using the above, explicit expressions for the first four central moments are as follows.

\mu_1 = 0   (18)

\mu_2 = \lambda + \frac{1-\theta}{\theta^2}   (19)

\mu_3 = \frac{\theta^3\lambda + \theta^2 - 3\theta + 2}{\theta^3}   (20)

\mu_4 = \frac{\theta^4\lambda(3\lambda+1) - \theta^3(6\lambda+1) + 2\theta^2(3\lambda+5) - 18\theta + 9}{\theta^4}   (21)

The first raw and the second central moments are the mean and the variance of the PoiG(λ, θ) distribution, respectively. Let γ1 and γ2 denote the coefficients of skewness and excess kurtosis, respectively. Using the central moments, these coefficients can be derived in closed form as follows.

\beta_1 = \frac{\mu_3^2}{\mu_2^3} = \frac{(\theta^3\lambda + \theta^2 - 3\theta + 2)^2}{(\theta^2\lambda - \theta + 1)^3}, \qquad \gamma_1 = \sqrt{\beta_1}

\beta_2 = \frac{\mu_4}{\mu_2^2} = \frac{\theta^4\lambda(3\lambda+1) - \theta^3(6\lambda+1) + 2\theta^2(3\lambda+5) - 18\theta + 9}{(\theta^2\lambda - \theta + 1)^2}, \qquad \gamma_2 = \beta_2 - 3

Remark 2

• As θ → 1, β1 → 1/λ, and as θ → 0, β1 → 4.
• As θ → 1, β2 → 3 + 1/λ, and as θ → 0, β2 → 9.

The statements in Remark 2 can easily be verified visually from Figure 3 and Figure 4, respectively. Clearly, as λ → ∞, the distribution tends to a normal shape, with β1 → 0 and β2 → 3.
3.4 Dispersion index and coefficient of variation

The dispersion index indicates whether a distribution can accommodate over-dispersed, under-dispersed or equi-dispersed data. Let IY denote the dispersion index of the distribution of the random variable Y. When IY is greater or less than one, the distribution of Y can accommodate over-dispersion or under-dispersion, respectively; IY = 1 indicates equi-dispersion. The dispersion index is given by

I_Y = \frac{\sigma^2}{\mu} = 1 + \frac{(1-\theta)^2}{\theta(1 + \lambda\theta - \theta)}.

From the expression for IY above, it follows that the PoiG distribution is equi-dispersed when θ = 1 and over-dispersed for all 0 < θ < 1. From Figure 5, it can be observed that IY increases as λ and θ decrease.

The coefficient of variation (CV) is an indicator of data variability; a higher CV indicates the capability of a distribution to model data with higher variability. Note that

CV(Y) = \frac{\sqrt{\lambda\theta^2 - \theta + 1}}{\lambda\theta - \theta + 1} \times 100\%.

Figure 3: Skewness of PoiG(λ, θ) for θ ∈ {0.2, 0.4, 0.6, 0.8}. The ith plot corresponds to the ith value of θ, with λ on the x-axis.

Figure 4: Kurtosis of PoiG(λ, θ) for θ ∈ {0.2, 0.4, 0.6, 0.8}. The ith plot corresponds to the ith value of θ, with λ on the x-axis.

Figure 5: Dispersion index of PoiG(λ, θ) as a function of θ for λ ∈ {0, 1, 5, 50}.

3.5 Mode

In Section 3.7, we show that PoiG(λ, θ) is unimodal. Note that

p_Y(1) \le p_Y(0) \iff (1 + \lambda - \theta)\,\theta e^{-\lambda} \le \theta e^{-\lambda} \iff \theta e^{-\lambda}(\lambda - \theta) \le 0 \iff \lambda \le \theta,

and the converse is trivially true. Thus, the distribution has its mode at zero for λ ≤ θ. Figure 1 shows that the mode is zero for λ = 0 (all θ) and for λ = 0.5 with θ > 0.5. For the equality case, that is λ = θ, the masses at zero and at unity are the same; Figure 6 clearly exhibits this fact. However, for λ > θ, the distribution has a non-zero mode. Unfortunately, an explicit expression for this non-zero mode is difficult to find, if not impossible.

Figure 6: Probability mass function of PoiG(λ, θ) for λ = θ ∈ {0.2, 0.4, 0.6, 0.8}.

3.6 Reliability properties

The reliability function of a discrete random variable Y at y is defined as the probability of Y assuming values greater than or equal to y; it is also termed the survival function. The survival function of Y ∼ PoiG(λ, θ) is

S_Y(y) = \Pr(Y \ge y) = 1 - \frac{\Gamma(y, \lambda)}{\Gamma(y)} + \frac{(1-\theta)^{y}}{\Gamma(y)}\exp\left(\frac{\lambda\theta}{1-\theta}\right)\Gamma\left(y, \frac{\lambda}{1-\theta}\right).   (22)

The hazard rate, or failure rate, of a discrete random variable at time point y is defined as the conditional probability of failure at y, given that the survival time is at least y. The hazard rate function (hrf) of Y ∼ PoiG(λ, θ) can be obtained by using (1) and (4) as follows:

h_Y(y) = \frac{\Pr(Y = y)}{\Pr(Y \ge y)}
       = \frac{\theta(1-\theta)^{y}\exp\left(\frac{\lambda\theta}{1-\theta}\right)\Gamma\left(y+1, \frac{\lambda}{1-\theta}\right)}{\Gamma(y+1) - y\,\Gamma(y, \lambda) + y\,(1-\theta)^{y}\exp\left(\frac{\lambda\theta}{1-\theta}\right)\Gamma\left(y, \frac{\lambda}{1-\theta}\right)}.   (23)

The hrf for different choices of the parameters is exhibited in Figure 7. The PoiG distribution exhibits an almost constant failure rate when λ is very small, and an increasing failure rate over an initial period when λ increases.
In reliability studies, the mean residual life is the expected additional lifetime given that a component has survived until a fixed time. If the random variable Y ∼ PoiG(λ, θ) represents the life of a component, then the mean residual life at time k is

\mu_Y(k) = E(Y - k \mid Y \ge k) = \frac{\sum_{y=k}^{\infty}(y - k)\Pr(Y = y)}{\Pr(Y \ge k)} = \frac{\sum_{y=k+1}^{\infty} S_Y(y)}{S_Y(k)},   (24)

where each SY(·) term is given by (22).

3.7 Monotonic properties

Y ∼ PoiG(λ, θ) is log-concave if the following holds for all y ≥ 1:

p_Y^2(y) \ge p_Y(y-1)\,p_Y(y+1).

A log-concave distribution possesses several desirable properties. Some notable examples of log-concave distributions are the Bernoulli, binomial, Poisson, geometric and negative binomial. The convolution of two independent log-concave distributions is also log-concave [21]. Being the convolution of the Poisson and geometric distributions, the proposed PoiG distribution is log-concave. Consequently, the following statements hold for the PoiG distribution ([23] and [3]).

• Strongly unimodal.
• At most one exponential tail.
• All the moments exist.
• Log-concave survival function.
• Monotonically increasing hazard rate function (see Figure 7).
• Monotonically decreasing mean residual life function.

Figure 7: Hazard rate function of PoiG(λ, θ) for λ ∈ {0, 0.5, 5, 10} row-wise and θ ∈ {0.2, 0.4, 0.6, 0.8} column-wise. The (i, j)th plot corresponds to the ith value of λ and the jth value of θ, for i, j = 1, 2, 3, 4.
3.8 Stochastic ordering

Stochastic order is an important statistical property used to compare the behaviour of different random variables [4]. We consider here the likelihood ratio order ≤lr. Let X ∼ PoiG(λ1, θ) and Y ∼ PoiG(λ2, θ). Then Y is said to be smaller than X in the usual likelihood ratio order, written Y ≤lr X, if L(x) = pX(x)/pY(x) is an increasing function of x, that is, L(x) ≤ L(x + 1) for all x. Note that

p_X(x) = \theta(1-\theta)^{x} e^{-\lambda_1}\sum_{i=0}^{x}\frac{1}{\Gamma(i+1)}\left(\frac{\lambda_1}{1-\theta}\right)^{i}, \qquad x = 0, 1, 2, \ldots,

p_Y(x) = \theta(1-\theta)^{x} e^{-\lambda_2}\sum_{i=0}^{x}\frac{1}{\Gamma(i+1)}\left(\frac{\lambda_2}{1-\theta}\right)^{i}, \qquad x = 0, 1, 2, \ldots,

L(x) = e^{-(\lambda_1-\lambda_2)}\,\frac{\displaystyle\sum_{i=0}^{x}\frac{1}{\Gamma(i+1)}\left(\frac{\lambda_1}{1-\theta}\right)^{i}}{\displaystyle\sum_{i=0}^{x}\frac{1}{\Gamma(i+1)}\left(\frac{\lambda_2}{1-\theta}\right)^{i}}, \qquad x = 0, 1, 2, \ldots.

It is easy to see that L(x) ≤ L(x + 1) for all 0 < θ < 1 and λ2 < λ1, so that Y ≤lr X.

Let Y ≤st X denote P(Y ≥ x) ≤ P(X ≥ x) for all x; this is the notion of the usual stochastic order. Similarly, the hazard rate order Y ≤hr X means

\frac{p_X(x)}{P(X \ge x)} \le \frac{p_Y(x)}{P(Y \ge x)} \quad \text{for all } x,

and the reversed hazard rate order Y ≤rh X means

\frac{p_Y(x)}{P(Y \le x)} \le \frac{p_X(x)}{P(X \le x)} \quad \text{for all } x.

From the likelihood ratio order of X and Y, the following statements are immediate [4].

• Stochastic order: Y ≤st X.
• Hazard rate order: Y ≤hr X.
• Reversed hazard rate order: Y ≤rh X.
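Both the log-concavity claim of Section 3.7 and the likelihood ratio monotonicity above are easy to probe numerically. The sketch below (illustrative helper name, truncated series) checks the log-concavity inequality, the resulting non-decreasing hazard rate, and the increase of L(x) for λ2 < λ1.

```python
import math

def poig_pmf(y, lam, theta):
    """PoiG mass at y via the finite series form of (2)."""
    beta = lam / (1.0 - theta)
    s = sum(beta**i / math.factorial(i) for i in range(y + 1))
    return theta * (1.0 - theta)**y * math.exp(-lam) * s

lam, theta, N = 5.0, 0.3, 120
p = [poig_pmf(y, lam, theta) for y in range(N)]

# log-concavity (Section 3.7): p(y)^2 >= p(y-1) p(y+1)
assert all(p[y]**2 >= p[y-1] * p[y+1] * (1 - 1e-9) for y in range(1, N - 1))

# which implies a non-decreasing hazard rate h(y) = p(y)/P(Y >= y)
surv = [sum(p[y:]) for y in range(N)]
haz = [p[y] / surv[y] for y in range(N - 20)]
assert all(a <= b + 1e-12 for a, b in zip(haz, haz[1:]))

# likelihood ratio order (Section 3.8): L(x) = p_X(x)/p_Y(x) is increasing
lam1, lam2 = 3.0, 1.0          # lam2 < lam1, common theta
L = [poig_pmf(x, lam1, 0.4) / poig_pmf(x, lam2, 0.4) for x in range(60)]
assert all(a <= b + 1e-12 for a, b in zip(L, L[1:]))
```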
4 Estimation

Let Y = (Y1, Y2, ..., Yn) be a random sample of size n from the PoiG(λ, θ) distribution, and let y = (y1, y2, ..., yn) be a realization of Y. The objective of this section is to estimate the parameters λ and θ from the data y. We present two different methods of estimation, and we also derive asymptotic confidence intervals for both parameters based on the maximum likelihood estimates.

4.1 Method of moments

Using the expressions in (14) and (19), the mean and the variance of Y ∼ PoiG(λ, θ) are

\mu'_1 = \lambda + \frac{1-\theta}{\theta} \qquad \text{and} \qquad \mu_2 = \lambda + \frac{1-\theta}{\theta^2}.

Subtracting,

\mu_2 - \mu'_1 = \frac{1-\theta}{\theta^2} - \frac{1-\theta}{\theta} = \frac{1-\theta}{\theta}\left(\frac{1}{\theta} - 1\right) = \left(\frac{1-\theta}{\theta}\right)^{2}
\;\Longrightarrow\; \frac{1-\theta}{\theta} = \sqrt{\mu_2 - \mu'_1}
\;\Longrightarrow\; \theta = \frac{1}{1 + \sqrt{\mu_2 - \mu'_1}}.   (25)

Substituting θ from (25) into µ′1, we obtain

\lambda = \mu'_1 - \sqrt{\mu_2 - \mu'_1}.   (26)

The method of moments equates sample moments with theoretical moments. Thus, equating the first sample moment about the origin m'_1 = \sum_{i=1}^{n} y_i/n to µ′1 and the second sample moment about the mean m_2 = \sum_{i=1}^{n}(y_i - \bar{y})^2/n to µ2 in (25) and (26), we obtain the following estimators of λ and θ:

\hat{\lambda}_{MM} = m'_1 - \sqrt{m_2 - m'_1}   (27)

\hat{\theta}_{MM} = \frac{1}{1 + \sqrt{m_2 - m'_1}}.   (28)

Note that these estimators are well defined only when m2 > m′1, that is, when the sample is over-dispersed.

4.2 Maximum likelihood method

Using the pmf of Y ∼ PoiG(λ, θ) in (2), the log-likelihood function of the parameters λ and θ is

l(\lambda, \theta; \mathbf{y}) = n\log\theta + n\bar{y}\log(1-\theta) + \frac{n\lambda\theta}{1-\theta} + \sum_{i=1}^{n}\log\left[\frac{\Gamma\left(y_i + 1, \frac{\lambda}{1-\theta}\right)}{\Gamma(y_i + 1)}\right].   (29)

Let us define

\beta = \frac{\lambda}{1-\theta} \qquad \text{and} \qquad \alpha_j(y_i) = \frac{e^{-\beta}}{\Gamma(y_i + 1, \beta)}\,\frac{1}{(1-\theta)^j} \quad \text{for } j = 1, 2, 3, \ldots.

Differentiating (29) with respect to λ and θ, we obtain the score functions

\frac{\partial}{\partial\lambda} l(\lambda, \theta; \mathbf{y}) = \frac{n\theta}{1-\theta} - \sum_{i=1}^{n}\alpha_1(y_i)\,\beta^{y_i}   (30)

\frac{\partial}{\partial\theta} l(\lambda, \theta; \mathbf{y}) = \frac{n}{\theta} + \frac{n(\lambda - \bar{y})}{1-\theta} + \frac{n\lambda\theta}{(1-\theta)^2} - \sum_{i=1}^{n}\lambda\,\alpha_2(y_i)\,\beta^{y_i}.
(31)

Ideally, explicit maximum likelihood estimators would be obtained by simultaneously solving the two equations that result from setting the right-hand sides of (30) and (31) equal to zero. Unfortunately, explicit expressions for the maximum likelihood estimators cannot be obtained in this case due to the structural complexity. Thus, we directly maximize the log-likelihood function with respect to the parameters using an appropriate numerical technique. Let λ̂ML and θ̂ML denote the maximum likelihood estimates (MLE) of λ and θ, respectively.

Our next objective is to obtain asymptotic confidence intervals for both parameters. For this purpose, we require the information matrix. The second-order partial derivatives of the log-likelihood are given below.

\frac{\partial^2 l(\lambda, \theta; \mathbf{y})}{\partial\lambda^2} = \sum_{i=1}^{n}\left[(\beta^{y_i} - y_i\beta^{y_i-1})\,\alpha_2(y_i) - \beta^{2y_i}\alpha_1(y_i)^2\right]

\frac{\partial^2 l(\lambda, \theta; \mathbf{y})}{\partial\lambda\,\partial\theta} = \frac{n}{(1-\theta)^2} + \sum_{i=1}^{n}\left[\lambda(\beta^{y_i} - y_i\beta^{y_i-1})\,\alpha_3(y_i) - \beta^{y_i}\alpha_2(y_i) - \lambda(1-\theta)\beta^{2y_i}\alpha_2(y_i)^2\right]

\frac{\partial^2 l(\lambda, \theta; \mathbf{y})}{\partial\theta^2} = \frac{2n\lambda - n\bar{y}(1-\theta)}{(1-\theta)^3} - \frac{n}{\theta^2} + \sum_{i=1}^{n}\left[\left((\lambda^2 - 2\lambda(1-\theta))\beta^{y_i} - \lambda^2 y_i\beta^{y_i-1}\right)\alpha_4(y_i) - \lambda^2\beta^{2y_i}\alpha_2(y_i)^2\right]

The Fisher information matrix for (λ, θ) is

I = \begin{pmatrix} -E\left[\dfrac{\partial^2 l}{\partial\lambda^2}\right] & -E\left[\dfrac{\partial^2 l}{\partial\lambda\,\partial\theta}\right] \\ -E\left[\dfrac{\partial^2 l}{\partial\lambda\,\partial\theta}\right] & -E\left[\dfrac{\partial^2 l}{\partial\theta^2}\right] \end{pmatrix},

which can be approximated by the observed information

\hat{I} = \begin{pmatrix} -\dfrac{\partial^2 l}{\partial\lambda^2} & -\dfrac{\partial^2 l}{\partial\lambda\,\partial\theta} \\ -\dfrac{\partial^2 l}{\partial\lambda\,\partial\theta} & -\dfrac{\partial^2 l}{\partial\theta^2} \end{pmatrix}_{(\lambda, \theta) = (\hat{\lambda}_{ML},\, \hat{\theta}_{ML})}.

Under some general regularity conditions, for large n, (λ̂ML, θ̂ML) is approximately bivariate normal with mean vector (λ, θ) and dispersion matrix

\hat{I}^{-1} = \frac{1}{I_{11}I_{22} - I_{12}I_{21}}\begin{pmatrix} I_{22} & -I_{12} \\ -I_{21} & I_{11} \end{pmatrix} = \begin{pmatrix} J_{11} & -J_{12} \\ -J_{21} & J_{22} \end{pmatrix}.

Thus, the asymptotic (1 − α) × 100% confidence intervals for λ and θ are given respectively by

\left(\hat{\lambda}_{ML} - Z_{\alpha/2}\sqrt{J_{11}},\; \hat{\lambda}_{ML} + Z_{\alpha/2}\sqrt{J_{11}}\right) \qquad \text{and} \qquad \left(\hat{\theta}_{ML} - Z_{\alpha/2}\sqrt{J_{22}},\; \hat{\theta}_{ML} + Z_{\alpha/2}\sqrt{J_{22}}\right).
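The two estimation methods can be illustrated on simulated data. In the following sketch, the sampler and helper names are our own, and the coarse grid search is only a crude stand-in for a proper numerical optimizer of (29); it computes the moment estimates (27)-(28) and a grid-based approximation to the MLE.

```python
import math
import random
from collections import Counter

random.seed(1)
lam, theta, n = 2.0, 0.4, 5000

def rpoig():
    """One PoiG draw: Poisson(lam) by Knuth's product method
    plus an independent geometric(theta) by inverse-cdf."""
    k, prod, limit = 0, random.random(), math.exp(-lam)
    while prod >= limit:
        k, prod = k + 1, prod * random.random()
    return k + int(math.log(random.random()) / math.log(1.0 - theta))

ys = [rpoig() for _ in range(n)]
m1 = sum(ys) / n
m2 = sum((y - m1)**2 for y in ys) / n

# moment estimators (27)-(28); defined when the sample is over-dispersed
d = math.sqrt(m2 - m1)
lam_mm, theta_mm = m1 - d, 1.0 / (1.0 + d)

cnt = Counter(ys)
def loglik(lam_, theta_):
    """Log-likelihood (29), using
    log[Gamma(y+1, b)/Gamma(y+1)] = -b + log sum_{i<=y} b^i/i!."""
    b = lam_ / (1.0 - theta_)
    ll = (n * math.log(theta_) + n * m1 * math.log(1.0 - theta_)
          + n * lam_ * theta_ / (1.0 - theta_))
    for y, c in cnt.items():
        ll += c * (-b + math.log(sum(b**i / math.factorial(i)
                                     for i in range(y + 1))))
    return ll

# crude MLE: maximize (29) over a 0.05-step grid in (lam, theta)
best = max(((loglik(l / 20, t / 20), l / 20, t / 20)
            for l in range(1, 80) for t in range(1, 20)), key=lambda r: r[0])
print("MoM:", round(lam_mm, 2), round(theta_mm, 2))  # near the true (2.0, 0.4)
print("MLE:", best[1], best[2])                      # grid point near (2.0, 0.4)
```

Counting distinct sample values once (via `Counter`) keeps the grid search cheap, since the inner series in (29) only needs to be evaluated per unique y.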
5 Discussion

In this article, a new two-parameter distribution is proposed and extensively studied. While the core of this work is theoretical development, its applied aspect is also important. From the application point of view, the proposed model is easy to use for modelling over-dispersed data. Despite the availability of several other over-dispersed count models, the proposed model may find wide application due to the interpretability of its parameters: the parameter λ controls the tail of the distribution, while the parameter θ adjusts for the over-dispersion present in a given dataset. Their combined effect gives flexibility to the shape of the distribution: when θ dominates λ the mass function retains a J shape, and for large λ it takes a bell shape, so the hump, or concentration, of the observations is well accommodated. A simulation experiment investigating the performance of the point and asymptotic interval estimators, together with comparative real-life data analyses, will be reported in the complete version of the article.

References

[1] Abramowitz, M., and Stegun, I. A. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, vol. 55. US Government Printing Office, 1964.

[2] Altun, E. A new generalization of geometric distribution with properties and applications. Communications in Statistics - Simulation and Computation 49, 3 (2020), 793–807.

[3] Bagnoli, M., and Bergstrom, T. Log-concave probability and its applications. In Rationality and Equilibrium. Springer, 2006, pp. 217–241.

[4] Bakouch, H. S., Jazi, M. A., and Nadarajah, S. A new discrete distribution. Statistics 48, 1 (2014), 200–240.

[5] Bar-Lev, S. K., and Ridder, A. Exponential dispersion models for overdispersed zero-inflated count data. Communications in Statistics - Simulation and Computation (2021), 1–19.

[6] Bardwell, G., and Crow, E. A two parameter family of hyper-Poisson distributions.
Journal of the American Statistical Association 59 (1964), 133–141.

[7] Bourguignon, M., Gallardo, D. I., and Medeiros, R. M. A simple and useful regression model for underdispersed count data based on Bernoulli–Poisson convolution. Statistical Papers 63, 3 (2022), 821–848.

[8] Bourguignon, M., and Weiß, C. H. An INAR(1) process for modeling count time series with equidispersion, underdispersion and overdispersion. Test 26, 4 (2017), 847–868.

[9] Campbell, N. L., Young, L. J., and Capuano, G. A. Analyzing over-dispersed count data in two-way cross-classification problems using generalized linear models. Journal of Statistical Computation and Simulation 63, 3 (1999), 263–281.

[10] Chakraborty, S. On some distributional properties of the family of weighted generalized Poisson distribution. Communications in Statistics - Theory and Methods 39, 15 (2010), 2767–2788.

[11] Chakraborty, S., and Bhati, D. Transmuted geometric distribution with applications in modeling and regression analysis of count data. SORT-Statistics and Operations Research Transactions (2016), 153–176.

[12] Chakraborty, S., and Gupta, R. D. Exponentiated geometric distribution: another generalization of geometric distribution. Communications in Statistics - Theory and Methods 44, 6 (2015), 1143–1157.

[13] Chakraborty, S., and Ong, S. Mittag-Leffler function distribution: a new generalization of hyper-Poisson distribution. Journal of Statistical Distributions and Applications 4, 1 (2017), 1–17.

[14] Cvijović, D. New integral representations of the polylogarithm function. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 463, 2080 (2007), 897–905.

[15] Del Castillo, J., and Pérez-Casany, M. Weighted Poisson distributions for overdispersion and underdispersion situations. Annals of the Institute of Statistical Mathematics 50, 3 (1998), 567–585.

[16] Efron, B.
Double exponential families and their use in generalized linear regression. Journal of the American Statistical Association 81 (1986), 709–721.

[17] Fisher, R. A., Corbet, A. S., and Williams, C. B. The relation between the number of species and the number of individuals in a random sample of an animal population. The Journal of Animal Ecology (1943), 42–58.

[18] Gómez-Déniz, E. Another generalization of the geometric distribution. Test 19, 2 (2010), 399–415.

[19] Hassanzadeh, F., and Kazemi, I. Analysis of over-dispersed count data with extra zeros using the Poisson log-skew-normal distribution. Journal of Statistical Computation and Simulation 86, 13 (2016), 2644–2662.

[20] Jain, G., and Consul, P. A generalized negative binomial distribution. SIAM Journal on Applied Mathematics 21, 4 (1971), 501–513.

[21] Johnson, O. Log-concavity and the maximum entropy property of the Poisson distribution. Stochastic Processes and their Applications 117, 6 (2007), 791–802.

[22] Makcutek, J. A generalization of the geometric distribution and its application in quantitative linguistics. Romanian Reports in Physics 60, 3 (2008), 501–509.

[23] Mark, Y. A. Log-concave probability distributions: Theory and statistical testing. Duke University Dept of Economics Working Paper 95-03 (1997).

[24] Mihoubi, M. Bell polynomials and binomial type sequences. Discrete Mathematics 308, 12 (2008), 2450–2459.

[25] Moghimbeigi, A., Eshraghian, M. R., Mohammad, K., and McArdle, B. Multilevel zero-inflated negative binomial regression modeling for over-dispersed count data with extra zeros. Journal of Applied Statistics 35, 10 (2008), 1193–1202.

[26] Moqaddasi Amiri, M., Tapak, L., and Faradmal, J. A mixed-effects least square support vector regression model for three-level count data. Journal of Statistical Computation and Simulation 89, 15 (2019), 2801–2812.

[27] Nekoukhou, V., Alamatsaz, M., and Bidram, H.
A discrete analogue of the generalized exponential distribution. Communications in Statistics - Theory and Methods 41, 11 (2012), 2000–2013.

[28] Philippou, A., Georghiou, C., and Philippou, G. A generalized geometric distribution and some of its properties. Statistics and Probability Letters 1, 4 (1983), 171–175.

[29] Rodrigues-Motta, M., Pinheiro, H. P., Martins, E. G., Araújo, M. S., and dos Reis, S. F. Multivariate models for correlated count data. Journal of Applied Statistics 40, 7 (2013), 1586–1596.

[30] Sarvi, F., Moghimbeigi, A., and Mahjub, H. GEE-based zero-inflated generalized Poisson model for clustered over- or under-dispersed count data. Journal of Statistical Computation and Simulation 89, 14 (2019), 2711–2732.

[31] Sellers, K. F., and Shmueli, G. A flexible regression model for count data. The Annals of Applied Statistics (2010), 943–961.

[32] Tapak, L., Hamidi, O., Amini, P., and Verbeke, G. Random effect exponentiated-exponential geometric model for clustered/longitudinal zero-inflated count data. Journal of Applied Statistics 47, 12 (2020), 2272–2288.

[33] Tripathi, R., Gupta, R., and White, T. Some generalizations of the geometric distribution. Sankhya Ser. B 49, 3 (1987), 218–223.

[34] Tüzen, F., Erbaş, S., and Olmuş, H. A simulation study for count data models under varying degrees of outliers and zeros. Communications in Statistics - Simulation and Computation 49, 4 (2020), 1078–1088.

[35] Wang, S., Cadigan, N., and Benoît, H. Inference about regression parameters using highly stratified survey count data with over-dispersion and repeated measurements. Journal of Applied Statistics 44, 6 (2017), 1013–1030.

[36] Wang, Y., Young, L. J., and Johnson, D. E. A UMPU test for comparing means of two negative binomial distributions. Communications in Statistics - Simulation and Computation 30, 4 (2001), 1053–1075.

[37] Wongrin, W., and Bodhisuwan, W. Generalized Poisson–Lindley linear model for count data.
Journal of Applied Statistics 44, 15 (2017), 2659–2671.
It is envisaged that the proposed distribution may prove useful to practitioners for modelling over-dispersed count data, compared to its closest competitors.

Keywords: Geometric distribution; Poisson distribution; Conway-Maxwell Poisson distribution; BerG distribution; BerPoi distribution; Incomplete gamma function.

MSC 2010: 60E05, 62E15.

arXiv:2301.01480v1 [stat.ME] 4 Jan 2023

1 Introduction

The phenomenon of the variance of count data being larger than its mean is commonly termed over-dispersion in the literature.
Over-dispersion is relevant in many modelling applications and is encountered more often than under-dispersion and equi-dispersion. A number of count models are available in the literature for over-dispersed data. However, the addition of a simple yet adequate model remains of interest, given the ongoing research in this direction ([37], [25], [32], [35], [30], [29], [9], [19], [26], [34], [5], [2] and [36]). The simplest and most common count data model is the Poisson distribution. Its equi-dispersion characteristic is well known. This is a limitation of the Poisson model, and to overcome this issue, several alternatives have been developed and used for their obvious advantage over the classical Poisson model.
Notable among these distributions are the hyper-Poisson (HP) of Bardwell and Crow [6], the generalized Poisson distribution of Jain and Consul [20], the double-Poisson of Efron [16], the weighted Poisson of Castillo and Pérez-Casany [15], the weighted generalized Poisson distribution of Chakraborty [10], the Mittag-Leffler function distribution of Chakraborty and Ong [13] and the popular COM-Poisson distribution of Shmueli et al. [31]. The COM-Poisson generalizes the binomial and the negative binomial distributions. The classical geometric and negative binomial models are also used for over-dispersed count datasets. The gamma mixture of the Poisson distribution generates the negative binomial distribution [17]. Thus, unlike the Poisson distribution, these two count models possess the over-dispersion characteristic. Consequently, several extensions of the geometric distribution have been introduced in the literature for over-dispersed count data modelling ([11], [12], [18], [20], [22], [27], [28], and [33] among others).
The two most widely used distributions for over-dispersed data are of course the negative binomial and the COM-Poisson. As pointed out earlier, there is still plenty of opportunity for developing new discrete distributions with a simple structure and an explicit interpretation, appropriate for over-dispersed data. Recently, Bourguignon et al. introduced the BerG distribution [8] as the convolution of a Bernoulli random variable and a geometric random variable. In a very recent publication, Bourguignon et al. introduced the BerPoi distribution from a similar motivation [7]; it is the convolution of a Bernoulli random variable and a Poisson random variable. The first is capable of modelling over-dispersed, under-dispersed and equi-dispersed data, whereas the second is efficient for modelling under-dispersed data.
This approach is simple and has enormous potential. Here we use this idea to develop a novel over-dispersed count model. In this article, we propose a new discrete distribution derived from the convolution of two independent count random variables, one Poisson and one geometric. Hence we call the proposed model PoiG. This two-parameter distribution has many advantages. Structural simplicity is one of them: it is easy to comprehend, unlike the COM-Poisson distribution, which involves an intractable normalising constant in its probability mass function. A model with closed-form expressions for the mean and the variance is well suited for regression modelling.
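The convolution construction is also straightforward to simulate. The sketch below (assuming numpy; the parameter values λ = 2, θ = 0.4 are arbitrary illustrations, not taken from the paper) samples a Poisson variate plus an independent geometric variate and shows the sample variance exceeding the sample mean, i.e. over-dispersion:

```python
import numpy as np

rng = np.random.default_rng(42)
lam, theta = 2.0, 0.4      # illustrative values, not from the paper
n = 200_000

# Y1 ~ Poisson(lam); Y2 ~ Geometric(theta) on {0, 1, 2, ...}.
# numpy's geometric counts trials on {1, 2, ...}, so shift by -1.
y1 = rng.poisson(lam, size=n)
y2 = rng.geometric(theta, size=n) - 1
y = y1 + y2

print("sample mean    :", y.mean())   # close to lam + (1 - theta)/theta
print("sample variance:", y.var())    # larger than the mean: over-dispersion
```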
Unlike the COM-Poisson distribution, the mean and variance of the proposed distribution can be written in closed form. The proposed distribution extends both the Poisson and geometric distributions. The rest of the article is organized as follows. In Section 2, we present the PoiG distribution. In Section 3, we describe its important statistical properties, such as the recurrence relation, generating functions, moments, dispersion index, mode, reliability properties, monotonic properties and stochastic ordering. In Section 4, we present the moment and maximum likelihood methods of parameter estimation. We conclude the article with a few limitations and future scopes of the current study.

2 The PoiG distribution

In this section, we introduce a novel discrete distribution by considering two independent discrete random variables Y1 and Y2.
Let N0 denote the set of non-negative integers {0, 1, 2, ...}. Let Y1 follow the Poisson distribution with mean λ > 0, and let Y2 independently follow the geometric distribution with parameter 0 < θ < 1, that is, Pr(Y2 = y) = θ(1 − θ)^y, with mean (1 − θ)/θ. Both Y1 and Y2 have the same support N0. For convenience, we write Y1 ∼ P(λ) and Y2 ∼ G(θ). Consider Y = Y1 + Y2. Then

Pr(Y = y) = Σ_{i=0}^{y} Pr(Y1 = i) Pr(Y2 = y − i)
          = Σ_{i=0}^{y} (e^{−λ} λ^i / i!) θ(1 − θ)^{y−i}
          = θ(1 − θ)^y e^{−λ} Σ_{i=0}^{y} (1/i!) (λ/(1 − θ))^i,   y = 0, 1, 2, ...
(1)

The distribution in (1), being the convolution of a Poisson and a geometric, is named the PoiG distribution, and we write Y ∼ PoiG(λ, θ). Thus, the probability mass function (pmf) of PoiG(λ, θ) can be written as

p_Y(y) = [θ(1 − θ)^y / Γ(y + 1)] exp(λθ/(1 − θ)) Γ(y + 1, λ/(1 − θ)),   y = 0, 1, 2, ....   (2)

Figure 1 exhibits the nature of the pmf for different choices of (λ, θ). The cumulative distribution function (cdf) of the PoiG distribution is (writing F_G for the cdf of G(θ))

F_Y(y) = Pr(Y1 + Y2 ≤ y)
       = Σ_{y1=0}^{y} Σ_{y2=0}^{y−y1} p_{Y1}(y1) p_{Y2}(y2)
       = Σ_{y1=0}^{y} F_G(y − y1) p_{Y1}(y1)
       = Σ_{y1=0}^{y} (1 − (1 − θ)^{y−y1+1}) p_{Y1}(y1)
       = Σ_{y1=0}^{y} e^{−λ} λ^{y1}/y1! − (1 − θ)^{y+1} e^{−λ} Σ_{y1=0}^{y} (1/y1!) (λ/(1 − θ))^{y1}.
(3)

An explicit expression for (3) is given by

F_Y(y) = Γ(y + 1, λ)/Γ(y + 1) − [(1 − θ)^{y+1} / Γ(y + 1)] exp(λθ/(1 − θ)) Γ(y + 1, λ/(1 − θ)),   y = 0, 1, 2, ....   (4)

Figure 2 exhibits the nature of the cdf for different choices of (λ, θ). The mean and variance of the PoiG(λ, θ) distribution are given by

E(Y) = µ = λ + (1 − θ)/θ   and   V(Y) = σ² = λ + (1 − θ)/θ².   (5)

Special cases. For λ → 0, PoiG(λ, θ) behaves like G(θ). For θ → 1, PoiG(λ, θ) behaves like P(λ).

Remark 1. The incomplete gamma function [1] is defined as Γ(n, x) = ∫_x^∞ t^{n−1} e^{−t} dt, and it can also be rewritten as

Γ(n, x) = (n − 1)! Σ_{k=0}^{n−1} e^{−x} x^k / k!,
which is valid for positive integer values of n and any value of x. Thus, the incomplete gamma function in (2) can be rewritten as

Γ(y + 1, λ/(1 − θ)) = Γ(y + 1) Σ_{i=0}^{y} [1/Γ(i + 1)] exp(−λ/(1 − θ)) (λ/(1 − θ))^i,

where Γ(y + 1) = y! and Γ(i + 1) = i!. Note that F_Y(0) = p_Y(0) = θe^{−λ}. Thus, the proportion of zeros under the PoiG distribution tends to θ as λ → 0 and to zero as λ → ∞.

3 Properties of the PoiG distribution

In this section, we explore several important statistical properties of the proposed PoiG(λ, θ) distribution. Some of the distributional properties studied here are the recurrence relation, the probability generating function (pgf), the moment generating function (mgf), the characteristic function (cf), the cumulant generating function (cgf), the moments, and the coefficients of skewness and kurtosis. We also study reliability properties such as the survival function and the hazard rate function.
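The closed forms above are easy to check numerically. The sketch below (assuming scipy; `gammaincc(a, x)` is the regularized upper incomplete gamma Γ(a, x)/Γ(a), and the parameter values are arbitrary illustrations) confirms that the pmf (2) matches the convolution sum (1), that the cdf (4) matches the cumulative sums of (2), and that the mean and variance agree with (5):

```python
import numpy as np
from scipy.special import gammaincc   # regularized upper incomplete gamma Q(a, x)
from scipy.stats import poisson

lam, theta = 2.0, 0.4                 # illustrative values, not from the paper

def pmf_convolution(y, lam, theta):
    """Direct convolution sum in (1)."""
    i = np.arange(y + 1)
    return float(np.sum(poisson.pmf(i, lam) * theta * (1 - theta) ** (y - i)))

def pmf_closed_form(y, lam, theta):
    """Closed form (2); gammaincc(a, x) = Gamma(a, x) / Gamma(a)."""
    x = lam / (1 - theta)
    return theta * (1 - theta) ** y * np.exp(theta * x) * gammaincc(y + 1, x)

def cdf_closed_form(y, lam, theta):
    """Explicit cdf (4)."""
    x = lam / (1 - theta)
    return gammaincc(y + 1, lam) - (1 - theta) ** (y + 1) * np.exp(theta * x) * gammaincc(y + 1, x)

ys = np.arange(60)                    # the omitted tail mass is negligible here
p = np.array([pmf_closed_form(y, lam, theta) for y in ys])

assert np.allclose(p, [pmf_convolution(y, lam, theta) for y in ys])   # (2) == (1)
assert np.isclose(p.sum(), 1.0)                                       # proper pmf
assert np.allclose(np.cumsum(p), [cdf_closed_form(y, lam, theta) for y in ys])

mean = np.sum(ys * p)
var = np.sum(ys ** 2 * p) - mean ** 2
assert np.isclose(mean, lam + (1 - theta) / theta)                    # (5)
assert np.isclose(var, lam + (1 - theta) / theta ** 2)                # (5)
print("dispersion index:", var / mean)   # > 1: over-dispersed
```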
Log-concavity and the stochastic ordering of the proposed model are also investigated.

3.1 Recurrence relation

A probability recurrence relation helps in finding the subsequent term from the preceding term. It usually proves advantageous in computing the masses at different values. Note that

p_Y(y) = [θ(1 − θ)^y / Γ(y + 1)] exp(λθ/(1 − θ)) Γ(y + 1, λ/(1 − θ))
       = θ(1 − θ)^y e^{−λ} Σ_{i=0}^{y} [1/Γ(i + 1)] (λ/(1 − θ))^i
       = θ(1 − θ)^y e^{−λ} s_y.

Figure 1: Probability mass function of PoiG(λ, θ) for λ ∈ {0, 0.5, 5, 10} and θ ∈ {0.2, 0.4, 0.6, 0.8}. The (i, j)th plot corresponds to the ith value of λ and the jth value of θ for i, j = 1, 2, 3, 4.

Figure 2: Cumulative distribution function of PoiG(λ, θ) for λ ∈ {0, 0.5, 5, 10} and θ ∈ {0.2, 0.4, 0.6, 0.8}. The (i, j)th plot corresponds to the ith value of λ and the jth value of θ for i, j = 1, 2, 3, 4.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='2 0 5 10 15 20 25 30 0 5 10 15 20 25 30 0 5 10 15 20 25 30 0 5 10 15 20 25 30 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='0 E 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='0 F 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='0 E 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='0 E 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='8 E 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='8 E: 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='6 E 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='6 E 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='2 0 5 10 15 20 25 30 0 5 10 15 20 25 30 0 5 10 15 20 25 30 0 5 10 15 20 25 30 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='0 E 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='0 F 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='0 E 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='0 E 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='8 E 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='8 E 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='8 E 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='8 E 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='6 E 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='6 E 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='4 E 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='2 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='2 0 5 10 15 20 25 30 5 10 15 20 25 30 10 15 20 25 30 0 5 10 15 20 25 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='0 F 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='0 F 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='8 E 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='8 E 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='8 E 180 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='6 E 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='6 F 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='4 E 0.' 
where

s_y = Σ_{i=0}^{y} (1/Γ(i+1)) [λ/(1−θ)]^i  and  s_{y+1} = s_y + (1/Γ(y+2)) [λ/(1−θ)]^{y+1}.

Now,

p_Y(y+1) = θ(1−θ)^{y+1} e^{−λ} s_{y+1}
         = θ(1−θ)^{y+1} e^{−λ} { s_y + (1/Γ(y+2)) [λ/(1−θ)]^{y+1} }
         = (1−θ) p_Y(y) + θ e^{−λ} λ^{y+1}/Γ(y+2).   (6)

This is the recurrence formula of the PoiG distribution. It is easy to check that

s_{y+1}/s_y = 1 + (1/(s_y Γ(y+2))) [λ/(1−θ)]^{y+1} −→ 1 as y −→ ∞,

and

p_Y(y+1)/p_Y(y) = (1−θ) + (θ e^{−λ}/p_Y(y)) λ^{y+1}/Γ(y+2) −→ 1 − θ as y −→ ∞.   (7)
From (7), it is clear that the behaviour of the tail of the distribution depends on θ. When θ −→ 0, the tail of the distribution decays relatively slowly, which implies a long tail; when θ −→ 1, the tail decays fast, which implies a short tail. This can easily be verified from Figure 1.

3.2 Generating functions

We use the notation H to denote a pgf, with the corresponding random variable in the subscript. For Y_1 ∼ P(λ) and Y_2 ∼ G(θ),

H_{Y_1}(s) = e^{λ(s−1)}  and  H_{Y_2}(s) = θ/[1 − (1−θ)s].

Now, by using the convolution property of the probability generating function, we obtain the pgf of PoiG(λ, θ) as

H_Y(s) = θ e^{λ(s−1)}/(1 − s + θs).   (8)

Similar methods are used to obtain the other generating functions, including the mgf M_Y(t), cf φ_Y(t) and cgf K_Y(t).
These are given below.

M_Y(t) = θ e^{λ(e^t − 1)}/[1 − (1−θ)e^t]   (9)

φ_Y(t) = θ e^{λ(e^{it} − 1)}/[1 − (1−θ)e^{it}]   (10)

K_Y(t) = λ(e^t − 1) + log{ θ/[1 − (1−θ)e^t] }   (11)

Let us discuss some useful definitions and notations for Result 1 given below. The notation G(θ) has already been introduced in Section 2. Let R be the number of failures preceding the first success in a sequence of independent Bernoulli trials. If the probability of success is θ ∈ (0, 1), then R is said to follow G(θ). Suppose, instead, we wait for the rth success; then the number of failures is a negative binomial random variable with index r and parameter θ. Let NB(r, θ) denote this distribution. Suppose R_i ∼ G(θ) independently for i = 1, 2, …, r, and S ∼ NB(r, θ). Then S = R_1 + R_2 + … + R_r. Thus, it is clear that G(θ) is a particular case of NB(r, θ) with r = 1. Similar to the genesis of the PoiG model, if we add one Poisson random variable and an independently distributed negative binomial random variable, it is possible to obtain a generalization of the PoiG model; an appropriate notation for this distribution would have been PoiNB. The objective of the current work is not to study this three-parameter distribution in detail.
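Before moving on, the recurrence (6) and the convolution pgf (8) lend themselves to a quick numerical sanity check. The sketch below is ours, not from the paper (function names and parameter values are ad hoc): it builds the pmf of PoiG(λ, θ) both by direct Poisson–geometric convolution and by iterating (6), and compares a truncated series Σ_y p_Y(y) s^y against the closed-form pgf.

```python
import math

def poig_pmf_direct(y, lam, theta):
    # P(Y = y) for Y = Y1 + Y2 with Y1 ~ P(lam) and Y2 ~ G(theta)
    # (Y2 = number of failures before the first success), by convolution.
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               * theta * (1.0 - theta)**(y - k)
               for k in range(y + 1))

def poig_pmf_table(y_max, lam, theta):
    # pmf p_Y(0), ..., p_Y(y_max), starting from p_Y(0) = theta * e^{-lam}
    # and iterating (6): p(y+1) = (1-theta) p(y) + theta e^{-lam} lam^{y+1} / Gamma(y+2).
    p = [theta * math.exp(-lam)]
    for y in range(y_max):
        p.append((1.0 - theta) * p[y]
                 + theta * math.exp(-lam) * lam**(y + 1) / math.gamma(y + 2))
    return p

def poig_pgf(s, lam, theta):
    # Closed-form pgf from (8): H_Y(s) = theta e^{lam(s-1)} / (1 - s + theta s).
    return theta * math.exp(lam * (s - 1.0)) / (1.0 - s + theta * s)

lam, theta = 2.0, 0.4
p = poig_pmf_table(60, lam, theta)
# the recurrence reproduces the direct convolution
assert all(abs(p[y] - poig_pmf_direct(y, lam, theta)) < 1e-12 for y in range(20))
# the pmf sums to ~1, and the truncated series matches the closed-form pgf
assert abs(sum(p) - 1.0) < 1e-9
s = 0.7
assert abs(sum(p[y] * s**y for y in range(61)) - poig_pgf(s, lam, theta)) < 1e-10
```

The recurrence is the cheaper route in practice, since it avoids recomputing the convolution sum for every y.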
However, the following result establishes that the generalization from the geometric distribution to the negative binomial distribution translates similarly to the PoiG − PoiNB case. This may prove to be a motivation for generalizing the proposed model to PoiNB in future.

Result 1. The distribution of the sum of n independent PoiG random variables is a PoiNB random variable for fixed θ. Mathematically, if Y_i ∼ PoiG(λ_i, θ) for each i = 1, 2, …, n, then Σ_{i=1}^{n} Y_i ∼ PoiNB(Σ_{i=1}^{n} λ_i, n, θ).

Proof of Result 1. From (8), the pgf of Y_i ∼ PoiG(λ_i, θ) is

H_{Y_i}(s) = θ e^{λ_i(s−1)}/(1 − s + θs)  for i = 1, 2, …, n.
We can derive the pgf of the sum of n independent PoiG(λ_i, θ) variates based on the convolution property of the pgf. Let Z = Y_1 + Y_2 + … + Y_n. Then

H_Z(s) = Π_{i=1}^{n} H_{Y_i}(s) = [θ^n/(1 − s + θs)^n] e^{Σ_{i=1}^{n} λ_i (s−1)}.   (12)

The term θ^n/(1 − s + θs)^n in (12) is the pgf of NB(n, θ), which is a generalisation of the geometric distribution, and e^{Σ_{i=1}^{n} λ_i (s−1)} is the pgf of P(Σ_{i=1}^{n} λ_i). Thus Σ_{i=1}^{n} Y_i ∼ PoiNB(Σ_{i=1}^{n} λ_i, n, θ).

3.3 Moments and related concepts

The rth order raw moment of Y ∼ PoiG(λ, θ) can be obtained using the general expressions of the raw moments of Y_1 ∼ P(λ) and Y_2 ∼ G(θ) as follows.
E(Y^r) = E[ Σ_{j=0}^{r} C(r, j) Y_1^j Y_2^{r−j} ] = Σ_{j=0}^{r} C(r, j) E(Y_1^j) E(Y_2^{r−j}),

where C(r, j) denotes the binomial coefficient. Note that

E(Y_1^j) = Σ_{y_1=0}^{∞} y_1^j e^{−λ} λ^{y_1}/y_1! = Σ_{k=0}^{j} S(j, k) λ^k = φ_j(λ).

Here, S(j, k) is the Stirling number of the second kind [1] and φ_j(λ) is the Bell polynomial [24]. Again,

E(Y_2^{r−j}) = Σ_{y_2=0}^{∞} y_2^{r−j} θ(1−θ)^{y_2} = θ Li_{−(r−j)}(1−θ),

where Li_{−(r−j)}(1−θ) is the polylogarithm of negative integer order [14]. Hence

E(Y^r) = Σ_{j=0}^{r} C(r, j) φ_j(λ) θ Li_{−(r−j)}(1−θ).   (13)

The rth order raw moment can also be calculated by differentiating the mgf in (9) r times with respect to t and putting t = 0. That is, E(Y^r) = M_Y^{(r)}(0) = d^r/dt^r [M_Y(t)]_{t=0}. Explicit expressions of the first four moments are listed below.
E(Y) = λ + (1−θ)/θ   (14)

E(Y^2) = (1/θ^2)[θ^2(λ^2 − λ + 1) + θ(2λ − 3) + 2]   (15)

E(Y^3) = (1/θ^3)[θ^3(λ^3 + λ − 1) + θ^2(3λ^2 − 6λ + 7) + θ(6λ − 12) + 6]   (16)

E(Y^4) = (1/θ^4)[θ^4(λ^4 + 2λ^3 + λ^2 − λ + 1) + θ^3(4λ^3 − 6λ^2 + 14λ − 15) + 2θ^2(6λ^2 − 18λ + 25) + 12θ(2λ − 5) + 24]   (17)

Using the above, explicit expressions of the first four central moments are given as follows.

µ_1 = 0   (18)

µ_2 = λ + (1−θ)/θ^2   (19)

µ_3 = (θ^3 λ + θ^2 − 3θ + 2)/θ^3   (20)

µ_4 = [θ^4 λ(3λ + 1) − θ^3(6λ + 1) + 2θ^2(3λ + 5) − 18θ + 9]/θ^4   (21)

The first raw and second central moments are the mean and variance of the PoiG(λ, θ) distribution, respectively. Let γ_1 and γ_2 denote the coefficients of skewness and kurtosis, respectively. Using the central moments, these coefficients can be derived in closed form as follows.

β_1 = µ_3^2/µ_2^3 = (θ^3 λ + θ^2 − 3θ + 2)^2/(θ^2 λ − θ + 1)^3

γ_1 = √β_1 = √[ (θ^3 λ + θ^2 − 3θ + 2)^2/(θ^2 λ − θ + 1)^3 ]

β_2 = µ_4/µ_2^2 = [θ^4 λ(3λ + 1) − θ^3(6λ + 1) + 2θ^2(3λ + 5) − 18θ + 9]/(θ^2 λ − θ + 1)^2

γ_2 = β_2 − 3 = [θ^4 λ(3λ + 1) − θ^3(6λ + 1) + 2θ^2(3λ + 5) − 18θ + 9]/(θ^2 λ − θ + 1)^2 − 3

Remark 3. As θ → 1, β_1 → 1/λ, and as θ → 0, β_1 → 4. As θ → 1, β_2 → 3 + 1/λ, and as θ → 0, β_2 → 9.
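The closed forms (14), (15) and (19) can be checked against moments computed directly from the pmf. A minimal sketch (ours, not from the paper; the truncation level N is an arbitrary choice) using the Poisson–geometric convolution:

```python
import math

lam, theta = 1.5, 0.3
N = 200

# Poisson weights e^{-lam} lam^k / k!, built iteratively to avoid huge factorials.
pois = [math.exp(-lam)]
for k in range(1, N):
    pois.append(pois[-1] * lam / k)

def poig_pmf(y):
    # Direct Poisson-geometric convolution pmf of PoiG(lam, theta).
    return sum(pois[k] * theta * (1.0 - theta)**(y - k) for k in range(y + 1))

m1 = sum(y * poig_pmf(y) for y in range(N))       # E(Y) from the truncated pmf
m2 = sum(y**2 * poig_pmf(y) for y in range(N))    # E(Y^2) from the truncated pmf

mean_cf = lam + (1 - theta) / theta                                        # (14)
m2_cf = (theta**2 * (lam**2 - lam + 1) + theta * (2*lam - 3) + 2) / theta**2  # (15)
var_cf = lam + (1 - theta) / theta**2                                      # (19)

assert abs(m1 - mean_cf) < 1e-8
assert abs(m2 - m2_cf) < 1e-8
assert abs(m2 - m1**2 - var_cf) < 1e-8
```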
The statements made in Remark 3 can easily be realized visually from Figure 3 and Figure 4, respectively. Clearly, as λ → ∞, the distribution tends to attain a normal shape, with β_1 → 0 and β_2 → 3.

3.4 Dispersion index and coefficient of variation

The dispersion index determines whether a distribution is suitable for modelling an over-, under-, or equi-dispersed dataset. Let I_Y denote the dispersion index of the distribution of the random variable Y. When I_Y is more or less than one, the distribution of Y can accommodate over-dispersion or under-dispersion, respectively. Equi-dispersion is indicated when I_Y = 1. The dispersion index is given by

I_Y = σ^2/µ = 1 + (1−θ)^2/[θ(1 + λθ − θ)].

Figure 3: Skewness of PoiG(λ, θ) for θ ∈ {0.2, 0.4, 0.6, 0.8}. The ith plot corresponds to the ith value of θ, for different values of λ on the x-axis.

Figure 4: Kurtosis of PoiG(λ, θ) for θ ∈ {0.2, 0.4, 0.6, 0.8}. The ith plot corresponds to the ith value of θ, for different values of λ on the x-axis.
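Because I_Y = σ²/µ, the closed form above can be cross-checked against the ratio of the variance (19) to the mean (14) over a grid of parameter values; a small sketch (ours, not from the paper):

```python
# Verify that Var(Y)/E(Y), taken from (19) and (14), reproduces the
# closed-form dispersion index I_Y = 1 + (1-theta)^2 / (theta*(1 + lam*theta - theta)).
for lam in (0.5, 1.0, 5.0, 50.0):
    for theta in (0.2, 0.4, 0.6, 0.8):
        mean = lam + (1 - theta) / theta       # (14)
        var = lam + (1 - theta) / theta**2     # (19)
        iy = 1 + (1 - theta)**2 / (theta * (1 + lam * theta - theta))
        assert abs(var / mean - iy) < 1e-12
        assert iy > 1                          # over-dispersed for 0 < theta < 1
```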
From the expression of I_Y above, it follows that the PoiG distribution is equi-dispersed when θ = 1 and over-dispersed for all 0 < θ < 1. From Figure 5, it can be observed that I_Y increases with decreasing λ and θ. The coefficient of variation (CV) is an indicator of data variability; a higher value of the CV indicates the capability of a distribution to model data with higher variability. Note that

CV(Y) = [√(λθ^2 − θ + 1)/(λθ − θ + 1)] × 100%.

3.5 Mode

In Section 3.7, we show that PoiG(λ, θ) is unimodal. Note that

p_Y(1) ≤ p_Y(0) ⟹ (1 + λ − θ)θe^{−λ} ≤ θe^{−λ} ⟹ λθe^{−λ} − θ^2 e^{−λ} ≤ 0 ⟹ λ − θ ≤ 0 ⟹ λ ≤ θ.

The converse is trivially true.
Thus, the distribution has its mode at zero for \lambda \le \theta. Figure 1 clearly shows that the mode is zero for \lambda = 0, 0.5 and \theta \ge 0.5.

[Figure 5: Dispersion index of PoiG(\lambda, \theta).]

[Figure 6: Probability mass function of PoiG(\lambda, \theta) for \lambda = \theta \in \{0.2, 0.4, 0.6, 0.8\}.]

For the equality case, that is \lambda = \theta, the masses at zero and at unity are the same; Figure 6 clearly exhibits this fact. However, for the case \lambda > \theta, the distribution has a non-zero mode. Unfortunately, an explicit expression for this non-zero mode is difficult to find, if not impossible.

3.6 Reliability properties

The reliability function of a discrete random variable Y at y is defined as the probability of Y assuming values greater than or equal to y; it is also termed the survival function. The survival function of Y \sim PoiG(\lambda, \theta) is

S_Y(y) = P(Y \ge y) = 1 - \frac{\Gamma(y, \lambda)}{\Gamma(y)} + \frac{(1-\theta)^y}{\Gamma(y)} \exp\left(\frac{\lambda\theta}{1-\theta}\right) \Gamma\left(y, \frac{\lambda}{1-\theta}\right). \qquad (22)
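The closed form (22) can be cross-checked numerically: for integer y, the regularized upper incomplete gamma \Gamma(y, x)/\Gamma(y) equals the Poisson left-tail e^{-x} \sum_{k=0}^{y-1} x^k/k!, so (22) can be compared against a direct tail sum of the pmf. A minimal sketch with illustrative helper names:

```python
import math

def poig_pmf(y, lam, theta):
    # P(Y = y): theta * (1-theta)^y * e^(-lam) * sum_{i<=y} beta^i / i!
    beta = lam / (1.0 - theta)
    term, s = 1.0, 1.0
    for i in range(1, y + 1):
        term *= beta / i
        s += term
    return theta * (1.0 - theta) ** y * math.exp(-lam) * s

def reg_upper_gamma(s, x):
    # Gamma(s, x) / Gamma(s) for integer s >= 1, via the Poisson-tail identity
    term, total = 1.0, 1.0
    for k in range(1, s):
        term *= x / k
        total += term
    return math.exp(-x) * total

def survival(y, lam, theta):
    # Eq. (22): S_Y(y) = 1 - Q(y, lam)
    #           + (1-theta)^y * exp(lam*theta/(1-theta)) * Q(y, lam/(1-theta))
    q1 = reg_upper_gamma(y, lam)
    q2 = reg_upper_gamma(y, lam / (1.0 - theta))
    return 1.0 - q1 + (1.0 - theta) ** y * math.exp(lam * theta / (1.0 - theta)) * q2

# cross-check the closed form against a direct (truncated) tail sum of the pmf
lam, theta = 2.0, 0.4
p = [poig_pmf(k, lam, theta) for k in range(300)]
for y in range(1, 15):
    assert abs(survival(y, lam, theta) - sum(p[y:])) < 1e-9
```

The agreement holds to floating-point accuracy, which is a useful sanity check on the sign pattern in (22).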
The hazard rate (or failure rate) of a discrete random variable T at time point t is defined as the conditional probability of failure at t, given that the survival time is at least t. The hazard rate function (hrf) of Y \sim PoiG(\lambda, \theta) can be obtained by using (1) and (4) as follows.

h_Y(y) = \frac{P(Y = y)}{P(Y \ge y)}
= \frac{\dfrac{\theta(1-\theta)^y}{\Gamma(y+1)} \exp\left(\dfrac{\lambda\theta}{1-\theta}\right) \Gamma\left(y+1, \dfrac{\lambda}{1-\theta}\right)}{1 - \dfrac{\Gamma(y, \lambda)}{\Gamma(y)} + \dfrac{(1-\theta)^y}{\Gamma(y)} \exp\left(\dfrac{\lambda\theta}{1-\theta}\right) \Gamma\left(y, \dfrac{\lambda}{1-\theta}\right)}
= \frac{\theta(1-\theta)^y \exp\left(\frac{\lambda\theta}{1-\theta}\right) \Gamma\left(y+1, \frac{\lambda}{1-\theta}\right)}{\Gamma(y+1) - y\,\Gamma(y, \lambda) + y\,(1-\theta)^y \exp\left(\frac{\lambda\theta}{1-\theta}\right) \Gamma\left(y, \frac{\lambda}{1-\theta}\right)}. \qquad (23)

The hrf for different choices of the parameters is exhibited in Figure 7.
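The hrf can also be evaluated directly as pmf over tail sum, which sidesteps the incomplete-gamma bookkeeping in (23). A minimal sketch (helper names are illustrative); it also checks two qualitative facts: the hrf is non-decreasing, and it is bounded above by \theta, the constant hazard of the geometric component:

```python
import math

def poig_pmf(y, lam, theta):
    # P(Y = y) for PoiG(lam, theta), by direct summation of the convolution series
    beta = lam / (1.0 - theta)
    term, s = 1.0, 1.0
    for i in range(1, y + 1):
        term *= beta / i
        s += term
    return theta * (1.0 - theta) ** y * math.exp(-lam) * s

def hazard(y, lam, theta, tail=300):
    # h_Y(y) = P(Y = y) / P(Y >= y), survival computed as a truncated tail sum
    surv = sum(poig_pmf(k, lam, theta) for k in range(y, y + tail))
    return poig_pmf(y, lam, theta) / surv

lam, theta = 5.0, 0.4
h = [hazard(y, lam, theta) for y in range(25)]
assert all(h[i] <= h[i + 1] + 1e-12 for i in range(24))  # increasing failure rate
assert max(h) <= theta + 1e-9                            # approaches theta from below
```

The monotonicity observed here is not accidental; it follows from the log-concavity established in Section 3.7.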
The PoiG distribution exhibits a constant failure rate when \lambda is very small, and an increasing failure rate, up to a specific time period, as \lambda increases. In reliability studies, the mean residual life is the expected additional lifetime given that a component has survived until a fixed time. If the random variable Y \sim PoiG(\lambda, \theta) represents the life of a component, then the mean residual life at time k is

\mu_Y(k) = E(Y - k \mid Y \ge k) = \frac{\sum_{y=k}^{\infty} (y - k)\, P(Y = y)}{P(Y \ge k)} = \frac{\sum_{y=k}^{\infty} \bar{F}(y)}{\bar{F}(k-1)}
= \frac{\sum_{y=k}^{\infty} \left[ 1 - \frac{\Gamma(y, \lambda)}{\Gamma(y)} + \frac{(1-\theta)^y}{\Gamma(y)} \exp\left(\frac{\lambda\theta}{1-\theta}\right) \Gamma\left(y, \frac{\lambda}{1-\theta}\right) \right]}{1 - \frac{\Gamma(k-1, \lambda)}{\Gamma(k-1)} + \frac{(1-\theta)^{k-1}}{\Gamma(k-1)} \exp\left(\frac{\lambda\theta}{1-\theta}\right) \Gamma\left(k-1, \frac{\lambda}{1-\theta}\right)}. \qquad (24)

3.7 Monotonic properties

Y \sim PoiG(\lambda, \theta) is log-concave if the following holds for all y \ge 1:

p_Y^2(y) \ge p_Y(y-1)\, p_Y(y+1).

A log-concave distribution possesses several desirable properties. Some notable examples of log-concave distributions are the Bernoulli, binomial, Poisson, geometric, and negative binomial.
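The defining inequality can be verified numerically on a truncated support. A minimal sketch, assuming the pmf is evaluated by direct summation (`poig_pmf` and `is_log_concave` are illustrative helper names):

```python
import math

def poig_pmf(y, lam, theta):
    # P(Y = y) for PoiG(lam, theta), by direct summation
    beta = lam / (1.0 - theta)
    term, s = 1.0, 1.0
    for i in range(1, y + 1):
        term *= beta / i
        s += term
    return theta * (1.0 - theta) ** y * math.exp(-lam) * s

def is_log_concave(lam, theta, ymax=60):
    # check p(y)^2 >= p(y-1) * p(y+1) on a truncated support,
    # with a small multiplicative guard against floating-point noise
    p = [poig_pmf(y, lam, theta) for y in range(ymax + 2)]
    return all(p[y] ** 2 >= p[y - 1] * p[y + 1] * (1.0 - 1e-12)
               for y in range(1, ymax + 1))

assert is_log_concave(0.5, 0.3)
assert is_log_concave(5.0, 0.7)
```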
The convolution of two independent log-concave distributions is also a log-concave distribution [21]. Being the convolution of the Poisson and geometric distributions, the proposed PoiG distribution is log-concave. Consequently, the following statements hold good for the PoiG distribution ([23] and [3]).

- Strongly unimodal.
- At most one exponential tail.
- All the moments exist.
- Log-concave survival function.
- Monotonically increasing hazard rate function (see Figure 7).
- Monotonically decreasing mean residual life function.

[Figure 7: Hazard rate function of PoiG(\lambda, \theta) for \lambda \in \{0, 0.5, 5, 10\} row-wise and \theta \in \{0.2, 0.4, 0.6, 0.8\} column-wise. The (i, j)th plot corresponds to the ith value of \lambda and the jth value of \theta for i, j = 1, 2, 3, 4.]
3.8 Stochastic ordering

Stochastic order is an important statistical property used to compare the behaviour of different random variables [4]. We consider here the likelihood ratio order \ge_{lr}. Let X \sim PoiG(\lambda_1, \theta) and Y \sim PoiG(\lambda_2, \theta). Then Y is said to be smaller than X in the usual likelihood ratio order, that is Y \le_{lr} X, if L(x) = p_X(x)/p_Y(x) is an increasing function of x, that is L(x) \le L(x+1) for all x. Note that

p_X(x) = \theta(1-\theta)^x e^{-\lambda_1} \sum_{i=0}^{x} \frac{1}{\Gamma(i+1)} \left(\frac{\lambda_1}{1-\theta}\right)^i, \quad x = 0, 1, 2, \ldots

p_Y(x) = \theta(1-\theta)^x e^{-\lambda_2} \sum_{i=0}^{x} \frac{1}{\Gamma(i+1)} \left(\frac{\lambda_2}{1-\theta}\right)^i, \quad x = 0, 1, 2, \ldots

L(x) = \exp\left[-(\lambda_1 - \lambda_2)\right] \frac{\displaystyle\sum_{i=0}^{x} \frac{1}{\Gamma(i+1)} \left(\frac{\lambda_1}{1-\theta}\right)^i}{\displaystyle\sum_{i=0}^{x} \frac{1}{\Gamma(i+1)} \left(\frac{\lambda_2}{1-\theta}\right)^i}, \quad x = 0, 1, 2, \ldots

It is easy to see that L(x) \le L(x+1) for all 0 < \theta < 1 and \lambda_2 < \lambda_1. Let Y \le_{st} X denote P(Y \ge x) \le P(X \ge x) for all x; this is the notion of stochastic ordering. Similarly, the hazard rate order Y \le_{hr} X implies

\frac{p_X(x)}{P(X \ge x)} \le \frac{p_Y(x)}{P(Y \ge x)} \quad \text{for all } x,

and the reversed hazard rate order Y \le_{rh} X implies

\frac{p_Y(x)}{P(Y \le x)} \le \frac{p_X(x)}{P(X \le x)} \quad \text{for all } x.
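The monotonicity of L(x) for \lambda_2 < \lambda_1 can be checked numerically; since the factor \theta(1-\theta)^x cancels, only the partial sums matter. A minimal sketch with illustrative helper names:

```python
import math

def partial_sum(lam, theta, x):
    # sum_{i=0}^{x} (lam / (1-theta))^i / i!, computed incrementally
    beta = lam / (1.0 - theta)
    term, total = 1.0, 1.0
    for i in range(1, x + 1):
        term *= beta / i
        total += term
    return total

def lr(x, lam1, lam2, theta):
    # L(x) = p_X(x) / p_Y(x) = exp(-(lam1 - lam2)) * S1(x) / S2(x)
    return (math.exp(-(lam1 - lam2))
            * partial_sum(lam1, theta, x) / partial_sum(lam2, theta, x))

# L(x) is non-decreasing in x whenever lam2 < lam1
lam1, lam2, theta = 4.0, 1.5, 0.3
vals = [lr(x, lam1, lam2, theta) for x in range(40)]
assert all(vals[i] <= vals[i + 1] + 1e-12 for i in range(len(vals) - 1))
```

The increase of L is what drives all three orderings listed next.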
From the likelihood ratio order of X and Y, the following statements are immediate [4].

- Stochastic order: Y \le_{st} X.
- Hazard rate order: Y \le_{hr} X.
- Reversed hazard rate order: Y \le_{rh} X.

4 Estimation

Let Y = (Y_1, Y_2, \ldots, Y_n) be a random sample of size n from the PoiG(\lambda, \theta) distribution and y = (y_1, y_2, \ldots, y_n) be a realization of Y. The objective of this section is to estimate the parameters \lambda and \theta based on the available data y. We present two different methods of estimation. We also find asymptotic confidence intervals for both parameters based on the maximum likelihood estimates.

4.1 Method of moments

Using the expressions in (14) and (19), the mean and the variance of Y \sim PoiG(\lambda, \theta) are

\mu'_1 = \lambda + \frac{1-\theta}{\theta} \quad \text{and} \quad \mu_2 = \lambda + \frac{1-\theta}{\theta^2}.

Now, subtracting \mu_2 from \mu'_1,

\mu'_1 - \mu_2 = \frac{1-\theta}{\theta} - \frac{1-\theta}{\theta^2} = \frac{1-\theta}{\theta}\left(1 - \frac{1}{\theta}\right)
\implies \mu_2 - \mu'_1 = \left(\frac{1-\theta}{\theta}\right)^2
\implies \frac{1-\theta}{\theta} = \sqrt{\mu_2 - \mu'_1}
\implies \theta = \frac{1}{1 + \sqrt{\mu_2 - \mu'_1}}. \qquad (25)

By putting \theta from (25) into \mu'_1, we obtain

\lambda = \mu'_1 - \sqrt{\mu_2 - \mu'_1}. \qquad (26)

This method involves equating sample moments with theoretical moments. Thus, by equating the first sample moment about the origin m'_1 = \sum_{i=1}^{n} y_i / n to \mu'_1 and the second sample moment about the mean m_2 = \sum_{i=1}^{n} (y_i - \bar{y})^2 / n to \mu_2 in equations (25) and (26), we obtain the following estimators for \lambda and \theta.

\hat{\lambda}_{MM} = m'_1 - \sqrt{m_2 - m'_1} \qquad (27)

\hat{\theta}_{MM} = \frac{1}{1 + \sqrt{m_2 - m'_1}} \qquad (28)
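The inversion in (25)-(26) can be sanity-checked: plugging the theoretical moments back in recovers the parameters exactly, since \mu_2 - \mu'_1 = ((1-\theta)/\theta)^2. A minimal sketch (helper names are illustrative; note (27)-(28) require over-dispersion, m_2 > m'_1):

```python
import math

def poig_moments(lam, theta):
    # mean and variance of PoiG(lam, theta)
    mean = lam + (1.0 - theta) / theta
    var = lam + (1.0 - theta) / theta ** 2
    return mean, var

def mom_estimates(m1, m2):
    # Eqs. (27)-(28); valid only for over-dispersed data, m2 > m1
    root = math.sqrt(m2 - m1)
    return m1 - root, 1.0 / (1.0 + root)  # (lam_hat, theta_hat)

# plugging theoretical moments back in recovers (lam, theta) exactly
for lam, theta in [(0.5, 0.2), (2.0, 0.5), (10.0, 0.8)]:
    m1, m2 = poig_moments(lam, theta)
    lam_hat, theta_hat = mom_estimates(m1, m2)
    assert abs(lam_hat - lam) < 1e-9 and abs(theta_hat - theta) < 1e-9
```

In practice m'_1 and m_2 are the sample moments, so the recovery is only approximate.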
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='2 Maximum likelihood method Using the pmf of Y ∼ PoiG(λ, θ) in (1), the log-likelihood function of the parameters λ and θ can easily be found as l(λ, θ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content=' y) = n log θ + ny log(1 − θ) + nλθ 1 − θ + n � i=0 log � � � � Γ � yi + 1, λ 1 − θ � Γ(yi + 1) � � � � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content=' (29) Let us define, β = λ 1 − θ and for j = 1, 2, 3, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content=' αj(yi) = e−β Γ (yi + 1, β) 1 (1 − θ)j .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content=' Differentiating (29), with respect to parameters λ and θ, we get the score functions as ∂ ∂λl(λ, θ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content=' y) = nθ 1 − θ − n � i=1 α1(yi)βyi (30) ∂ ∂θl(λ, θ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content=' y) = n θ + n(λ − ¯y) 1 − θ + nλθ (1 − θ)2 − n � i=1 λα2(yi)βyi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfhPwa/content/2301.01480v1.pdf'} +page_content=' (31) 16 Ideally, the explicit maximum likelihood estimators are obtained by simultaneously solving the two equations obtained by setting right hand sides of (30) and (31) equal to zero.' 
Unfortunately, explicit expressions for the maximum likelihood estimators cannot be obtained in this case because of the structural complexity of these equations. We therefore maximize the log-likelihood function with respect to the parameters directly, using an appropriate numerical technique. Let $\hat{\lambda}_{ML}$ and $\hat{\theta}_{ML}$ denote the maximum likelihood estimates (MLEs) of $\lambda$ and $\theta$, respectively. Our objective now is to obtain asymptotic confidence intervals for both parameters, which requires the information matrix. The second-order partial derivatives of the log-likelihood are given below.
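Since (30)-(31) admit no closed-form solution, the log-likelihood (29) can be maximized with a generic optimizer. In this sketch (ours, not the authors' code), the incomplete-gamma ratio in (29) is evaluated with SciPy's regularized upper incomplete gamma function, `gammaincc(s, x) = Γ(s, x)/Γ(s)`, and the negated log-likelihood is handed to `scipy.optimize.minimize`; the simulated data use the same hypothetical Poisson-plus-geometric mechanism as in Section 4.1.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaincc   # gammaincc(s, x) = Gamma(s, x) / Gamma(s)

def poig_loglik(params, y):
    """Log-likelihood (29), with beta = lambda / (1 - theta)."""
    lam, theta = params
    if lam <= 0.0 or not 0.0 < theta < 1.0:
        return -np.inf                # outside the parameter space
    n, ybar = y.size, y.mean()
    beta = lam / (1.0 - theta)
    return (n * np.log(theta) + n * ybar * np.log1p(-theta)
            + n * lam * theta / (1.0 - theta)
            + np.sum(np.log(gammaincc(y + 1.0, beta))))

# Hypothetical data with lambda = 3.0, theta = 0.4
rng = np.random.default_rng(1)
y = (rng.poisson(3.0, 5000) + rng.geometric(0.4, 5000) - 1).astype(float)

# Maximize l by minimizing -l; Nelder-Mead avoids supplying gradients.
res = minimize(lambda p: -poig_loglik(p, y), x0=np.array([1.0, 0.5]),
               method="Nelder-Mead")
lam_ml, theta_ml = res.x
```

A derivative-free method is convenient here, but the analytic scores (30)-(31) could equally be supplied to a gradient-based optimizer.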
\[
\frac{\partial^2 l(\lambda, \theta; y)}{\partial \lambda^2}
= \sum_{i=1}^{n} \left[\left(\beta^{y_i} - y_i \beta^{y_i - 1}\right)\alpha_2(y_i) - \beta^{2y_i}\alpha_1(y_i)^2\right]
\]
\[
\frac{\partial^2 l(\lambda, \theta; y)}{\partial \lambda\,\partial \theta}
= \frac{n}{(1-\theta)^2} + \sum_{i=1}^{n} \left[\lambda\left(\beta^{y_i} - y_i \beta^{y_i - 1}\right)\alpha_3(y_i) - \beta^{y_i}\alpha_2(y_i) - \lambda(1-\theta)\beta^{2y_i}\alpha_1(y_i)^2\right]
\]
\[
\frac{\partial^2 l(\lambda, \theta; y)}{\partial \theta^2}
= \frac{2n\lambda - n\bar{y}(1-\theta)}{(1-\theta)^3} - \frac{n}{\theta^2}
+ \sum_{i=1}^{n} \left[\left(\left(\lambda^2 - 2\lambda(1-\theta)\right)\beta^{y_i} - \lambda^2 y_i \beta^{y_i - 1}\right)\alpha_4(y_i) - \lambda^2 \beta^{2y_i}\alpha_2(y_i)^2\right]
\]
The Fisher information matrix for $(\lambda, \theta)$ is
\[
I = \begin{pmatrix}
-E\left[\dfrac{\partial^2 l(\lambda, \theta; y)}{\partial \lambda^2}\right] & -E\left[\dfrac{\partial^2 l(\lambda, \theta; y)}{\partial \lambda\,\partial \theta}\right] \\[2ex]
-E\left[\dfrac{\partial^2 l(\lambda, \theta; y)}{\partial \lambda\,\partial \theta}\right] & -E\left[\dfrac{\partial^2 l(\lambda, \theta; y)}{\partial \theta^2}\right]
\end{pmatrix}.
\]
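Differentiation of this kind is easy to get wrong, so it is worth checking the analytic scores (30)-(31) against central finite differences of (29). The sketch below is our own verification code, with $\alpha_j$ as defined above; $\Gamma(y_i + 1, \beta)$ is handled on the log scale via `gammaln` and `gammaincc`.

```python
import numpy as np
from scipy.special import gammaincc, gammaln

def loglik(lam, theta, y):
    # Log-likelihood (29)
    n, ybar = y.size, y.mean()
    beta = lam / (1.0 - theta)
    return (n * np.log(theta) + n * ybar * np.log(1.0 - theta)
            + n * lam * theta / (1.0 - theta)
            + np.sum(np.log(gammaincc(y + 1.0, beta))))

def scores(lam, theta, y):
    # Score functions (30)-(31);
    # alpha_j(y_i) = e^{-beta} / (Gamma(y_i + 1, beta) (1 - theta)^j)
    n, ybar = y.size, y.mean()
    beta = lam / (1.0 - theta)
    log_upper = gammaln(y + 1.0) + np.log(gammaincc(y + 1.0, beta))  # log Gamma(y_i+1, beta)
    a1 = np.exp(-beta - log_upper) / (1.0 - theta)
    a2 = a1 / (1.0 - theta)
    s_lam = n * theta / (1.0 - theta) - np.sum(a1 * beta**y)
    s_theta = (n / theta + n * (lam - ybar) / (1.0 - theta)
               + n * lam * theta / (1.0 - theta)**2 - np.sum(lam * a2 * beta**y))
    return s_lam, s_theta

y = np.array([0.0, 1.0, 2.0, 3.0, 5.0, 8.0])
lam, theta, h = 2.0, 0.3, 1e-6
s_lam, s_theta = scores(lam, theta, y)
num_lam = (loglik(lam + h, theta, y) - loglik(lam - h, theta, y)) / (2 * h)
num_theta = (loglik(lam, theta + h, y) - loglik(lam, theta - h, y)) / (2 * h)
```

The same finite-difference comparison extends directly to the second derivatives above.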
This can be approximated by the observed information matrix
\[
\hat{I} = \begin{pmatrix}
-\dfrac{\partial^2 l(\lambda, \theta; y)}{\partial \lambda^2} & -\dfrac{\partial^2 l(\lambda, \theta; y)}{\partial \lambda\,\partial \theta} \\[2ex]
-\dfrac{\partial^2 l(\lambda, \theta; y)}{\partial \lambda\,\partial \theta} & -\dfrac{\partial^2 l(\lambda, \theta; y)}{\partial \theta^2}
\end{pmatrix}_{(\lambda, \theta) = (\hat{\lambda}_{ML}, \hat{\theta}_{ML})}.
\]
Under some general regularity conditions, for large $n$, $\sqrt{n}\,(\hat{\lambda}_{ML} - \lambda, \hat{\theta}_{ML} - \theta)$ is bivariate normal with mean vector $(0, 0)$ and dispersion matrix
\[
\hat{I}^{-1} = \frac{1}{I_{11}I_{22} - I_{12}I_{21}}
\begin{pmatrix} I_{22} & -I_{12} \\ -I_{21} & I_{11} \end{pmatrix}
= \begin{pmatrix} J_{11} & -J_{12} \\ -J_{21} & J_{22} \end{pmatrix}.
\]
Thus, the asymptotic $(1-\alpha)\times 100\%$ confidence intervals for $\lambda$ and $\theta$ are given, respectively, by
\[
\left(\hat{\lambda}_{ML} - Z_{\alpha/2}\sqrt{J_{11}},\; \hat{\lambda}_{ML} + Z_{\alpha/2}\sqrt{J_{11}}\right)
\quad\text{and}\quad
\left(\hat{\theta}_{ML} - Z_{\alpha/2}\sqrt{J_{22}},\; \hat{\theta}_{ML} + Z_{\alpha/2}\sqrt{J_{22}}\right).
\]

5 Discussion

In this article, a new two-parameter distribution is proposed and studied extensively. Although the core of this work is the theoretical development, its applied aspect is also important.
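In practice, $\hat{I}$ is often obtained by finite-differencing the log-likelihood at the MLE rather than coding the second derivatives explicitly. The generic sketch below (ours, not from the article) builds the Wald intervals $\hat{\lambda}_{ML} \pm Z_{\alpha/2}\sqrt{J_{11}}$ and $\hat{\theta}_{ML} \pm Z_{\alpha/2}\sqrt{J_{22}}$ from a finite-difference Hessian, demonstrated on a toy quadratic log-likelihood whose observed information is known exactly.

```python
import numpy as np
from scipy.stats import norm

def wald_cis(loglik, mle, alpha=0.05, h=1e-5):
    """(1 - alpha) Wald intervals mle_k +/- z_{alpha/2} sqrt(J_kk), where
    J = I_hat^{-1} and I_hat is the negative Hessian of the log-likelihood
    at the MLE, approximated by central finite differences."""
    mle = np.asarray(mle, dtype=float)
    k = mle.size
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei, ej = np.eye(k)[i] * h, np.eye(k)[j] * h
            H[i, j] = (loglik(mle + ei + ej) - loglik(mle + ei - ej)
                       - loglik(mle - ei + ej) + loglik(mle - ei - ej)) / (4.0 * h * h)
    J = np.linalg.inv(-H)                 # estimated dispersion matrix
    half = norm.ppf(1.0 - alpha / 2.0) * np.sqrt(np.diag(J))
    return np.column_stack([mle - half, mle + half])

# Toy check: quadratic log-likelihood with observed information 100 per
# parameter, so each 95% half-width should be 1.96 / sqrt(100) = 0.196.
n, mle = 100, np.array([2.0, 0.5])
cis = wald_cis(lambda p: -0.5 * n * np.sum((p - mle) ** 2), mle)
```

The same helper applies unchanged to any two-parameter log-likelihood, including (29).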
From the application point of view, the proposed model is easy to use for modeling over-dispersed data. Despite the availability of several other over-dispersed count models, the proposed model may find wide application because of the interpretability of its parameters: $\lambda$ controls the tail of the distribution, while $\theta$ adjusts for the over-dispersion present in a given dataset. Their combined effect gives flexibility to the shape of the distribution: when $\theta$ dominates $\lambda$, the probability mass function is J-shaped, whereas for large $\lambda$ it is bell-shaped. Consequently, the hump, that is, the concentration of the observations, is well accommodated. A simulation experiment investigating the performance of the point and asymptotic interval estimators, together with a comparative real-life data analysis, will be reported in the complete version of the article.
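The shape behavior described above can be checked numerically. Reading the single-observation pmf off the $n = 1$ case of (29), $P(Y = y) = \theta(1-\theta)^y e^{\lambda\theta/(1-\theta)}\,\Gamma(y+1, \beta)/\Gamma(y+1)$ with $\beta = \lambda/(1-\theta)$, a $\theta$-dominated setting yields a monotonically decreasing (J-shaped) pmf, while a large $\lambda$ puts the mode in the interior (bell shape). The parameter values in this sketch are arbitrary illustrations.

```python
import numpy as np
from scipy.special import gammaincc

def poig_pmf(y, lam, theta):
    # Single-observation pmf implied by (29):
    # P(Y = y) = theta (1-theta)^y exp(lam*theta/(1-theta)) Q(y+1, beta),
    # where Q(s, x) = Gamma(s, x)/Gamma(s) and beta = lam/(1-theta).
    y = np.asarray(y, dtype=float)
    beta = lam / (1.0 - theta)
    return (theta * (1.0 - theta)**y
            * np.exp(lam * theta / (1.0 - theta))
            * gammaincc(y + 1.0, beta))

ys = np.arange(60)
j_shape = poig_pmf(ys, lam=0.2, theta=0.8)   # theta dominates: strictly decreasing
bell = poig_pmf(ys, lam=15.0, theta=0.6)     # large lam: interior mode
```

Both pmfs sum to one over a sufficiently wide support, confirming that the expression is a proper probability mass function.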
diff --git a/.gitattributes b/.gitattributes
index 74628eaae05d0a989e42234afa8c8aba0dae464c..6e39c69d01a586b7ac2ac187e5adae5b8a3c5772 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -6901,3 +6901,46 @@ QdFQT4oBgHgl3EQfZTb8/content/2301.13316v1.pdf filter=lfs diff=lfs merge=lfs -text
 ytE4T4oBgHgl3EQfyA1G/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 5tE2T4oBgHgl3EQf6wj3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
 QdFQT4oBgHgl3EQfZTb8/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+NtE2T4oBgHgl3EQfqwjT/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+PNE1T4oBgHgl3EQfaQRP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+kdE2T4oBgHgl3EQfIgZc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+X9FLT4oBgHgl3EQfUi8y/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+NdFIT4oBgHgl3EQfdCtI/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+FtFJT4oBgHgl3EQfDSx3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+1NFKT4oBgHgl3EQfOi14/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+j9FIT4oBgHgl3EQfqivf/content/2301.11328v1.pdf filter=lfs diff=lfs merge=lfs -text
+DNAzT4oBgHgl3EQfGfv5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+j9FIT4oBgHgl3EQfqivf/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+9dE4T4oBgHgl3EQfdwwo/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+edFQT4oBgHgl3EQfkDbq/content/2301.13357v1.pdf filter=lfs diff=lfs merge=lfs -text
+FtAzT4oBgHgl3EQfUfxu/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ONE3T4oBgHgl3EQfCAkU/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+cNAyT4oBgHgl3EQfwfkz/content/2301.00648v1.pdf filter=lfs diff=lfs merge=lfs -text
+cdAzT4oBgHgl3EQfZvzP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ANFIT4oBgHgl3EQf-iyR/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+WdE0T4oBgHgl3EQfVwBf/content/2301.02268v1.pdf filter=lfs diff=lfs merge=lfs -text
+i9AyT4oBgHgl3EQf-_q2/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+FtFJT4oBgHgl3EQfDSx3/content/2301.11433v1.pdf filter=lfs diff=lfs merge=lfs -text
+2NE3T4oBgHgl3EQfnwo5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+S9A0T4oBgHgl3EQfD__i/content/2301.02013v1.pdf filter=lfs diff=lfs merge=lfs -text
+gtE3T4oBgHgl3EQf3wvW/content/2301.04767v1.pdf filter=lfs diff=lfs merge=lfs -text
+1NFAT4oBgHgl3EQfjx2F/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+T9AyT4oBgHgl3EQfhPid/content/2301.00374v1.pdf filter=lfs diff=lfs merge=lfs -text
+CdAzT4oBgHgl3EQfGftE/content/2301.01028v1.pdf filter=lfs diff=lfs merge=lfs -text
+l9FKT4oBgHgl3EQfDy1k/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+9dE4T4oBgHgl3EQfdwwo/content/2301.05093v1.pdf filter=lfs diff=lfs merge=lfs -text
+B9E4T4oBgHgl3EQfFQzj/content/2301.04885v1.pdf filter=lfs diff=lfs merge=lfs -text
+G9E1T4oBgHgl3EQf_QZd/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+6tE3T4oBgHgl3EQfpwpy/content/2301.04645v1.pdf filter=lfs diff=lfs merge=lfs -text
+adAyT4oBgHgl3EQfv_mu/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+X9FLT4oBgHgl3EQfUi8y/content/2301.12049v1.pdf filter=lfs diff=lfs merge=lfs -text
+g9E3T4oBgHgl3EQfIQkV/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+ftE4T4oBgHgl3EQfRQzA/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+jtE1T4oBgHgl3EQf0AX0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+KNE0T4oBgHgl3EQfigEg/content/2301.02445v1.pdf filter=lfs diff=lfs merge=lfs -text
+ctFQT4oBgHgl3EQfijaX/content/2301.13350v1.pdf filter=lfs diff=lfs merge=lfs -text
+wdE0T4oBgHgl3EQfcACj/content/2301.02357v1.pdf filter=lfs diff=lfs merge=lfs -text
+odAzT4oBgHgl3EQfqv0U/content/2301.01632v1.pdf filter=lfs diff=lfs merge=lfs -text
+xtAyT4oBgHgl3EQfnvhT/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+6tE3T4oBgHgl3EQfpwpy/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
+6tE0T4oBgHgl3EQffABm/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
diff --git a/0NE1T4oBgHgl3EQfkwRo/content/tmp_files/load_file.txt b/0NE1T4oBgHgl3EQfkwRo/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..8294f4a3081cacb512ef2bd7627f9095387d1041
--- /dev/null
+++ b/0NE1T4oBgHgl3EQfkwRo/content/tmp_files/load_file.txt
@@ -0,0 +1,617 @@
arXiv:2301.03277v1 [math.DG] 9 Jan 2023

Functionals for the Study of LCK Metrics on Compact Complex Manifolds

Dan Popovici and Erfan Soheil

Abstract. We propose an approach to the existence problem for locally conformally Kähler metrics on compact complex manifolds by introducing and studying a functional that is different according to whether the complex dimension of the manifold is 2 or higher.
1 Introduction

Let X be an n-dimensional compact complex manifold with n ≥ 2. In this paper, we propose a variational approach to the existence of locally conformally Kähler (lcK) metrics on X by introducing and analysing a functional in each of the cases n = 2 and n ≥ 3. This functional, defined on the non-empty set H_X of all the Hermitian metrics on X, assumes non-negative values and vanishes precisely on the lcK metrics. We compute the first variation of our functional on both surfaces and higher-dimensional manifolds.

We will identify a Hermitian metric on X with the associated C^∞ positive definite (1, 1)-form ω. The set H_X of all these metrics is a non-empty open convex cone in the infinite-dimensional real vector space C^∞_{1,1}(X, R) of all the real-valued smooth (1, 1)-forms on X. As is well known, a Hermitian metric ω is called Kähler if dω = 0, and a complex manifold X is said to be Kähler if there exists a Kähler metric thereon. Meanwhile, the notion of a locally conformally Kähler (lcK) manifold originates with I. Vaisman in [Vai76]. There are several equivalent definitions of lcK manifolds. The one adopted in this paper stipulates that a complex manifold X is lcK if there exists an lcK metric thereon, while a Hermitian metric ω on X is said to be lcK if there exists a C^∞ 1-form θ on X such that dθ = 0 and dω = ω ∧ θ. When it exists, the 1-form θ is unique and is called the Lee form of ω. For equivalent definitions of lcK manifolds, the reader is referred e.g. to Definitions 3.18 and 3.29 of [OV22].

One of the early results in the theory of lcK manifolds is Vaisman's theorem, according to which any lcK metric on a compact Kähler manifold is, in fact, globally conformally Kähler. This theorem was extended to compact complex spaces with singularities by Preda and Stanciu in [PS22]. The question of when lcK metrics exist on a given compact complex manifold X has been extensively studied. For example, Otiman characterised the existence of such metrics with prescribed Lee form in terms of currents: given a d-closed 1-form θ on X and considering the associated twisted operator d_θ = d + θ ∧ ·, Theorem 2.1 in [Oti14] stipulates that X admits an lcK metric whose Lee form is θ if and only if there are no non-trivial positive (1, 1)-currents on X that are (1, 1)-components of d_θ-boundaries. On the other hand, Istrati investigated the relation between the existence of special lcK metrics on a compact complex manifold and the group of biholomorphisms of the manifold. Specifically, according to Theorem 0.2 in [Ist19], a compact lcK manifold X admits a Vaisman metric if the group of biholomorphisms of X contains a torus T that is not purely real. A compact torus T of biholomorphisms of a compact complex manifold (X, J) is said to be purely real (in the sense of (1) of Definition 0.1 in [Ist19]) if its Lie algebra t satisfies the condition t ∩ Jt = 0, where J is the complex structure of X. Recall that an lcK metric ω is said to be a Vaisman metric if ∇_ω θ = 0, where θ is the Lee form of ω and ∇_ω is the Levi-Civita connection determined by ω.

The approach we propose in this paper to the issue of the existence of lcK metrics on a compact complex n-dimensional manifold X is analytic. Given an arbitrary Hermitian metric ω on X, the Lefschetz decomposition

dω = (dω)_prim + ω ∧ θ_ω

of dω into a uniquely determined ω-primitive part and a part divisible by ω, with a uniquely determined quotient 1-form θ_ω (the Lee form of ω), gives rise to the following dichotomy (cf. Lemma 2.2):

(i) either n = 2, in which case (dω)_prim = 0 but the Lee form θ_ω need not be d-closed, so the lcK condition on ω is equivalent to dθ_ω = 0. This turns out to be equivalent to ∂θ_ω^{1,0} = 0. Therefore, we define our functional L : H_X → [0, +∞) in this case to be L(ω) = ||∂θ_ω^{1,0}||²_ω, namely its value at every Hermitian metric ω on X is defined to be the squared L²_ω-norm of ∂θ_ω^{1,0};

(ii) or n ≥ 3, in which case the lcK condition on ω is equivalent to the vanishing condition (dω)_prim = 0. This is further equivalent to the vanishing of either (∂ω)_prim or (∂̄ω)_prim. We therefore define our functional L : H_X → [0, +∞) in this case to be L(ω) = ||(∂̄ω)_prim||²_ω, namely its value at every Hermitian metric ω on X is defined to be the squared L²_ω-norm of the ω-primitive part of the (1, 2)-form ∂̄ω.
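Collecting the two cases, the definition of the functional can be summarised as follows (this is merely a restatement of (i) and (ii) above in display form):

```latex
L(\omega) \;=\;
\begin{cases}
\big\|\partial\theta_\omega^{1,0}\big\|_\omega^2, & n = 2,\\[4pt]
\big\|(\bar\partial\omega)_{\mathrm{prim}}\big\|_\omega^2, & n \ge 3,
\end{cases}
\qquad
L \ge 0
\quad\text{and}\quad
L(\omega)=0 \iff \omega \text{ is lcK}.
```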
The main results of the paper are the computations of the first variation of our functional L in each of the cases n = 2 (cf. Theorem 4.4) and n ≥ 3 (cf. Theorem 5.1). While the functional L is scaling-invariant when n = 2, this fails to be the case when n ≥ 3. In this latter case, we obtain two proofs – one as a corollary of the formula for the first variation of our functional (cf. Proposition 5.3), the other as a direct consequence of the behaviour of our functional in the scaling direction (cf. Proposition 6.2) – for the equivalence: ω is a critical point for the functional L if and only if ω is lcK.

Still in the case n ≥ 3, we introduce in Definition 6.5 a normalised version L̃_ρ of the functional L depending on an arbitrary background Hermitian metric ρ. The first variation of L̃_ρ is then deduced in Proposition 6.6 from the analogous computation for L obtained in Theorem 5.1. One motivation for the normalisation we propose in terms of a (possibly balanced and possibly moving) metric ρ stems from the conjecture predicting that the simultaneous existence of a balanced metric and of an lcK metric on a compact complex manifold ought to imply the existence of a Kähler metric. We hope to be able to develop this line of thought in future work.

At the end of §6, we use our scaling-invariant functionals L (in the case of compact complex surfaces) and L̃_ρ (in the case of higher-dimensional compact complex manifolds) to produce positive (1, 1)-currents whose failure to be either C^∞ forms or strictly positive provides possible obstructions to the existence of lcK metrics.

Acknowledgments. This work is part of the second-named author's thesis under the supervision of the first-named author. The former wishes to thank the latter for constant support.

2 Preliminaries

In this section, we recast some standard material in the language of primitive forms and make a few observations that will be used in the next sections. Let X be a complex manifold with dim_C X = n. We will denote by:

(i) C^∞_k(X, C), resp. C^∞_{p,q}(X, C), the space of C^∞ differential forms of degree k, resp. of bidegree (p, q), on X. When these forms α are real (in the sense that α = ᾱ), the corresponding spaces will be denoted by C^∞_k(X, R), resp. C^∞_{p,q}(X, R).

(ii) Λ^k T*X, resp. Λ^{p,q} T*X, the vector bundle of differential forms of degree k, resp. of bidegree (p, q), as well as the spaces of such forms considered in a pointwise way.

For any (1, 1)-form ρ ≥ 0, we will also use the notation ρ_k := ρ^k/k!, 1 ≤ k ≤ n. When ρ = ω is C^∞ and positive definite (i.e. ω is a Hermitian metric on X), it can immediately be checked that dω_k = ω_{k−1} ∧ dω and ⋆_ω ω_k = ω_{n−k} for all 1 ≤ k ≤ n, where ⋆ = ⋆_ω is the Hodge star operator induced by ω.

Recall the following standard

Definition 2.1. A C^∞ positive definite (1, 1)-form (i.e. a Hermitian metric) ω on a complex manifold X is said to be locally conformally Kähler (lcK) if dω = ω ∧ θ for some C^∞ 1-form θ satisfying dθ = 0. The 1-form θ is uniquely determined, is real and is called the Lee form of ω.

The obstruction to a given Hermitian metric ω being lcK depends on whether n = 2 or n ≥ 3.

Lemma 2.2. Let X be a complex manifold with dim_C X = n.
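As a standard illustration of Definition 2.1 (an example we add here; it is not part of the original text): on the Hopf manifold X = (C^n \ {0})/⟨z ↦ 2z⟩, the invariant metric ω = (i/|z|²) Σ_j dz_j ∧ dz̄_j is lcK. Indeed, writing φ = |z|² and using dφ ∧ ω₀ notation for ω₀ = i Σ_j dz_j ∧ dz̄_j:

```latex
d\omega
= d\Big(\tfrac{1}{\varphi}\Big)\wedge \omega_0
= -\tfrac{d\varphi}{\varphi^2}\wedge \omega_0
= \omega \wedge \tfrac{d\varphi}{\varphi}
= \omega \wedge d\log\varphi ,
```

so the Lee form is θ = d log|z|², which is d-closed; on the quotient X it is closed but not exact, so ω is lcK without being globally conformally Kähler there.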
(i) If n = 2, for any Hermitian metric ω there exists a unique, possibly non-closed, C^∞ 1-form θ = θ_ω such that dω = ω ∧ θ. Therefore, ω is lcK if and only if θ_ω is d-closed. Moreover, for any Hermitian metric ω, the 2-form dθ_ω is ω-primitive, i.e. Λ_ω(dθ_ω) = 0, or equivalently ω ∧ dθ_ω = 0, while the Lee form is real and is explicitly given by the formula:

θ_ω = Λ_ω(dω).    (1)

Alternatively, if θ_ω = θ^{1,0}_ω + θ^{0,1}_ω is the splitting of θ_ω into components of pure types, we have

θ^{1,0}_ω = Λ_ω(∂ω) = −i ∂̄⋆_ω ω    (2)

and the analogous formulae for θ^{0,1}_ω = \overline{θ^{1,0}_ω} obtained by taking conjugates.

(ii) If n ≥ 3, for any Hermitian metric ω there exists a unique ω-primitive C^∞ 3-form (dω)_prim and a unique C^∞ 1-form θ = θ_ω such that dω = (dω)_prim + ω ∧ θ. The Lee form is real and is explicitly given by the formula

θ_ω = (1/(n−1)) Λ_ω(dω).    (3)

Moreover, ω is lcK if and only if (dω)_prim = 0. If ω is lcK, then

θ^{1,0}_ω = (1/(n−1)) Λ_ω(∂ω) = −(i/(n−1)) ∂̄⋆_ω ω    (4)

and the analogous formulae obtained by taking conjugates hold for θ^{0,1}_ω = \overline{θ^{1,0}_ω}.

Recall that for any k ≤ n and any Hermitian metric ω on X, the multiplication map L^l_ω = ω^l ∧ · : Λ^k T⋆X −→ Λ^{k+2l} T⋆X, defined at every point of X, is an isomorphism if l = n − k, is injective (but in general not surjective) for every l < n − k, and is surjective (but in general not injective) for every l > n − k. A k-form is said to be ω-primitive if it lies in the kernel of the multiplication map L^{n−k+1}_ω. Equivalently, the ω-primitive k-forms are precisely those that lie in the kernel of Λ_ω : Λ^k T⋆X −→ Λ^{k−2} T⋆X.

Also recall that for every k ≤ n, every k-form α admits a unique ⟨ , ⟩_ω-orthogonal pointwise splitting (called the Lefschetz decomposition):

α = α_prim + ω ∧ β^{(1)}_prim + ω² ∧ β^{(2)}_prim + · · · + ω^r ∧ β^{(r)}_prim,    (5)

where r is the largest non-negative integer such that 2r ≤ k, the forms α_prim, β^{(1)}_prim, . . . , β^{(r)}_prim are ω-primitive of respective degrees k, k − 2, . . . , k − 2r ≥ 0, and ⟨ , ⟩_ω is the pointwise inner product defined by ω. We will call α_prim the primitive part of α.

Finally, recall the Hermitian commutation relation:

i[Λ_ω, ∂] = −(∂̄⋆_ω + τ̄⋆_ω)    (6)

proved in [Dem84], where τ_ω := [Λ_ω, ∂ω ∧ ·] is the torsion operator of order 0 and bidegree (1, 0). This definition of τ_ω yields τ̄⋆_ω ω = [(∂̄ω ∧ ·)⋆, L_ω](ω) = (∂̄ω ∧ ·)⋆(ω²).

On the other hand, if α^{1,0} is any (1, 0)-form on X, let ξ̄_α be the (0, 1)-vector field defined by the requirement ξ̄_α⌟ω = α^{1,0}.
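To make the splitting (5) concrete, here is a small worked instance (added for illustration, using only the identities recalled above): for a 2-form, r = 1, the top primitive correction β^{(1)}_prim is a function, and the decomposition can be computed explicitly.

```latex
% Lefschetz decomposition of a 2-form \alpha on an n-dimensional manifold:
% here r = 1 and \beta^{(1)}_{prim} is a 0-form (a function) c, so
\alpha \;=\; \alpha_{prim} + c\,\omega, \qquad \Lambda_\omega(\alpha_{prim}) = 0.
% Applying \Lambda_\omega and using \Lambda_\omega(\omega) = n determines c:
\Lambda_\omega(\alpha) \;=\; \Lambda_\omega(\alpha_{prim}) + c\,\Lambda_\omega(\omega) \;=\; c\,n
\quad\Longrightarrow\quad
c \;=\; \tfrac{1}{n}\,\Lambda_\omega(\alpha),
\qquad
\alpha_{prim} \;=\; \alpha - \tfrac{1}{n}\,\Lambda_\omega(\alpha)\,\omega.
```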
It is easily checked, in local coordinates chosen about a given point x such that the metric ω is given by the identity matrix at x, that the adjoint w.r.t. ⟨ , ⟩_ω of the contraction operator by ξ̄_α is given by the formula (ξ̄_α⌟·)⋆ = −i α^{0,1} ∧ ·, or equivalently −i ξ̄_α⌟· = (α^{0,1} ∧ ·)⋆, where α^{0,1} = \overline{α^{1,0}}. Explicitly, if α^{0,1} = Σ_k ā_k dz̄_k on a neighbourhood of x, then −i ξ̄_α⌟· = (α^{0,1} ∧ ·)⋆ = Σ_k a_k (∂/∂z̄_k)⌟· at x. Hence −i ξ̄_α⌟α^{0,1} = Σ_k |a_k|² = |α^{0,1}|²_ω at x. We have just got the pointwise formula

−i ξ̄_α⌟α^{0,1} = |α^{0,1}|²_ω = |α^{1,0}|²_ω    (7)

at every point of X.

Now, suppose that dω = ω ∧ θ_ω for some (necessarily real) 1-form θ_ω. Then ∂̄ω = ω ∧ θ^{0,1}_ω, so (∂̄ω ∧ ·)⋆ = −i Λ_ω(ξ̄_θ⌟·), where ξ̄_θ := ξ̄_α with α^{1,0} = θ^{1,0}_ω. The above formula for τ̄⋆_ω ω translates to

τ̄⋆_ω ω = −i Λ_ω(ξ̄_θ⌟ω²) = −2i Λ_ω(ω ∧ (ξ̄_θ⌟ω)) = −2i [Λ_ω, L_ω](ξ̄_θ⌟ω) = −2i(n−1) θ^{1,0}_ω.

The conclusion of this discussion is that, when dω = ω ∧ θ_ω, formula (3) translates to

θ^{1,0}_ω = (1/(n−1)) Λ_ω(∂ω) = (1/(n−1)) [Λ_ω, ∂](ω) = (1/(n−1)) i ∂̄⋆_ω ω + (1/(n−1)) i τ̄⋆_ω ω = (1/(n−1)) i ∂̄⋆_ω ω + 2 θ^{1,0}_ω,

which amounts to θ^{1,0}_ω = −(1/(n−1)) i ∂̄⋆_ω ω. This proves (4) for an arbitrary n, hence also (2) when n = 2, once the other statements in Lemma 2.2 have been proved.

Proof of Lemma 2.2. (i) When n = 2, the map ω ∧ · : Λ¹T⋆X −→ Λ³T⋆X is an isomorphism at every point of X. In particular, the 3-form dω is the image of a unique 1-form θ under this map. To see that dθ is primitive, we apply d to the identity dω = ω ∧ θ to get 0 = d²ω = dω ∧ θ + ω ∧ dθ.
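The factor −2i(n−1) in the computation of τ̄⋆_ω ω above comes from a standard commutator count; as an added check (not in the original text), it can be spelled out as follows.

```latex
% Verification of \bar\tau^\star_\omega\,\omega = -2i(n-1)\,\theta^{1,0}_\omega:
% contraction is an antiderivation and \omega has even degree, so
% \bar\xi_\theta \lrcorner\, \omega^2 = 2\,\omega \wedge (\bar\xi_\theta \lrcorner\, \omega),
% and \bar\xi_\theta \lrcorner\, \omega = \theta^{1,0}_\omega is a 1-form. Hence
\Lambda_\omega\big(\omega \wedge \theta^{1,0}_\omega\big)
  = [\Lambda_\omega, L_\omega]\,\theta^{1,0}_\omega
  = (n-1)\,\theta^{1,0}_\omega,
% using \Lambda_\omega\,\theta^{1,0}_\omega = 0 (bidegree reasons) and
% [L_\omega, \Lambda_\omega] = (k-n)\,\mathrm{Id} on k-forms, here with k = 1.
% Multiplying by -2i gives \bar\tau^\star_\omega\,\omega = -2i(n-1)\,\theta^{1,0}_\omega.
```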
Meanwhile, multiplying the same identity by θ, we get dω ∧ θ = ω ∧ θ ∧ θ = 0, since θ ∧ θ = 0 because θ has degree 1. Therefore ω ∧ dθ = 0, which means that the 2-form dθ is ω-primitive. To prove formula (1), we apply Λ_ω to the identity dω = ω ∧ θ to get

Λ_ω(dω) = [Λ_ω, L_ω](θ) = −[L_ω, Λ_ω](θ) = −(1 − 2) θ = θ,

where we used the identities Λ_ω(θ) = 0 (for bidegree reasons) and [L_ω, Λ_ω] = (k − n) Id on k-forms (here k = 1 and n = 2).

(ii) The splitting dω = (dω)_prim + ω ∧ θ is the Lefschetz decomposition of dω w.r.t. the metric ω. Applying Λ_ω, we get

Λ_ω(dω) = [Λ_ω, L_ω](θ) = −[L_ω, Λ_ω](θ) = −(1 − n) θ = (n − 1) θ,

which proves (3). The implication “ω lcK =⇒ (dω)_prim = 0” follows at once from the definitions. To prove the reverse implication, suppose that (dω)_prim = 0. We have to show that θ is d-closed. The assumption means that dω = ω ∧ θ, so dω ∧ θ = ω ∧ θ ∧ θ = 0 and 0 = d²ω = dω ∧ θ + ω ∧ dθ. Consequently, ω ∧ dθ = 0. Now, the multiplication of k-forms by ω^l is injective whenever l ≤ n − k. When n ≥ 3, choosing l = 1 and k = 2, we get that the multiplication of 2-forms by ω is injective. Hence the identity ω ∧ dθ = 0 implies dθ = 0, so ω is lcK. □

Another standard observation is that the Lefschetz decomposition transforms nicely, hence the lcK property is preserved, under conformal rescaling.

Lemma 2.3 Let ω be an arbitrary Hermitian metric and let f be any smooth real-valued function on a compact complex n-dimensional manifold X. If dω = (dω)_prim + ω ∧ θ_ω is the Lefschetz decomposition of dω w.r.t. the metric ω (with the understanding that (dω)_prim = 0 when n = 2), then

d(e^f ω) = e^f (dω)_prim + e^f ω ∧ (θ_ω + df)    (8)

is the Lefschetz decomposition of d(e^f ω) w.r.t. the metric ω̃ := e^f ω. Consequently, ω is lcK if and only if any conformal rescaling e^f ω of ω is lcK, while the Lee form transforms as θ_{e^f ω} = θ_ω + df.
In particular, when the lcK metric ω varies in a fixed conformal class, the Lee form θ_ω varies in a fixed De Rham 1-class {θ_ω}_{DR} ∈ H¹(X, R), called the Lee De Rham class associated with the given conformal class. Moreover, the map ω ↦ θ_ω defines a bijection from the set of lcK metrics in a given conformal class to the set of elements of the corresponding Lee De Rham 1-class.

Proof. Differentiating, we get d(e^f ω) = e^f dω + e^f ω ∧ df = e^f (dω)_prim + e^f ω ∧ (θ_ω + df). Meanwhile, it can immediately be checked that Λ_{e^f ω} = e^{−f} Λ_ω, so ker Λ_{e^f ω} = ker Λ_ω. Thus, the ω-primitive forms coincide with the ω̃-primitive forms. Since Λ_ω̃ commutes with multiplication by any real-valued function, e^f (dω)_prim is ω̃-primitive, so (8) is the Lefschetz decomposition of dω̃ w.r.t. ω̃. □

When X is compact, we know from [Gau77] that every Hermitian metric ω on X admits a conformal rescaling ω̃ := e^f ω (unique up to a positive multiplicative constant) that is a Gauduchon metric. These metrics are defined (cf. [Gau77]) by the requirement that ∂∂̄ ω̃^{n−1} = 0, where n is the complex dimension of X. This fact, combined with Lemma 2.3, shows that no loss of generality is incurred in the study of the existence of lcK metrics on compact complex manifolds if we confine ourselves to Gauduchon metrics.

We end this review of known material with the following characterisation (cf. [AD15, Lemma 2.5]) of Gauduchon metrics on surfaces in terms of their Lee forms.
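As a simple sanity check (added here, not in the original), the statements above can be tested on the conformal class of a Kähler metric.

```latex
% If \omega is K\"ahler (d\omega = 0), then (d\omega)_{prim} = 0 and
% \theta_\omega = \tfrac{1}{n-1}\Lambda_\omega(d\omega) = 0 by (3),
% so \omega is trivially lcK and Gauduchon. By Lemma 2.3, any conformal
% rescaling e^f\omega is then lcK with exact Lee form:
\theta_{e^f\omega} \;=\; \theta_\omega + df \;=\; df,
% hence the Lee De Rham class of the conformal class of a K\"ahler metric vanishes:
\{\theta_{e^f\omega}\}_{DR} \;=\; 0 \;\in\; H^1(X,\mathbb{R}).
```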
Lemma 2.4 Let ω be a Hermitian metric on a complex surface X. The following equivalence holds:

∂∂̄ω = 0 (i.e. ω is a Gauduchon metric) ⇐⇒ ∂̄⋆_ω θ^{0,1}_ω = 0,

where θ^{0,1}_ω is the component of type (0, 1) of the Lee form θ_ω of ω. In particular, d⋆_ω θ_ω = 0 if ω is Gauduchon.

Proof. We give a proof different from the one in [AD15] by making use of the Hermitian commutation relations. By applying ∂ to the identity ∂̄ω = ω ∧ θ^{0,1}_ω and using the identity ∂ω = ω ∧ θ^{1,0}_ω, we get

∂∂̄ω = ∂ω ∧ θ^{0,1}_ω + ω ∧ ∂θ^{0,1}_ω = ω ∧ (θ^{1,0}_ω ∧ θ^{0,1}_ω + ∂θ^{0,1}_ω).

Taking Λ_ω, we get

Λ_ω(∂∂̄ω) = [Λ_ω, L_ω](θ^{1,0}_ω ∧ θ^{0,1}_ω + ∂θ^{0,1}_ω) + ω ∧ Λ_ω(θ^{1,0}_ω ∧ θ^{0,1}_ω + ∂θ^{0,1}_ω) = Λ_ω(θ^{1,0}_ω ∧ θ^{0,1}_ω + ∂θ^{0,1}_ω) ω,

where the second identity follows from [Λ_ω, L_ω] = −(2 − 2) Id = 0 on 2-forms on complex surfaces. Now, Λ_ω(θ^{1,0}_ω ∧ θ^{0,1}_ω + ∂θ^{0,1}_ω) is a function, so from the above identities we get the equivalences:

Λ_ω(∂∂̄ω) = 0 ⇐⇒ Λ_ω(θ^{1,0}_ω ∧ θ^{0,1}_ω + ∂θ^{0,1}_ω) = 0 ⇐⇒ θ^{1,0}_ω ∧ θ^{0,1}_ω + ∂θ^{0,1}_ω is ω-primitive ⇐⇒ ω ∧ (θ^{1,0}_ω ∧ θ^{0,1}_ω + ∂θ^{0,1}_ω) = 0 ⇐⇒ ∂∂̄ω = 0.

We retain the equivalence ∂∂̄ω = 0 ⇐⇒ Λ_ω(θ^{1,0}_ω ∧ θ^{0,1}_ω) + Λ_ω(∂θ^{0,1}_ω) = 0. Since Λ_ω(i θ^{1,0}_ω ∧ θ^{0,1}_ω) = |θ^{1,0}_ω|²_ω (immediate verification) and Λ_ω θ^{0,1}_ω = 0 (for bidegree reasons), we get the equivalence:

∂∂̄ω = 0 ⇐⇒ |θ^{1,0}_ω|²_ω + i[Λ_ω, ∂] θ^{0,1}_ω = 0.

The Hermitian commutation relation i[Λ_ω, ∂] = −(∂̄⋆_ω + τ̄⋆_ω) (cf. (6), see [Dem84]) transforms the last equivalence into

∂∂̄ω = 0 ⇐⇒ |θ^{1,0}_ω|²_ω − (∂̄⋆_ω θ^{0,1}_ω + τ̄⋆_ω θ^{0,1}_ω) = 0.    (9)

On the other hand, τ̄⋆_ω = [(∂̄ω ∧ ·)⋆, ω ∧ ·].
From this we get

Formula 2.5 For any Hermitian metric ω on a complex surface, we have τ̄⋆_ω θ^{0,1}_ω = |θ^{0,1}_ω|²_ω.

Proof of Formula 2.5. Since (∂̄ω ∧ ·)⋆ θ^{0,1}_ω = 0 for bidegree reasons, we get τ̄⋆_ω θ^{0,1}_ω = (∂̄ω ∧ ·)⋆(ω ∧ θ^{0,1}_ω). Since ∂̄ω = ω ∧ θ^{0,1}_ω, we have (∂̄ω ∧ ·)⋆ = −i Λ_ω(ξ̄_θ⌟·) (see (7) and the discussion below it), where ξ̄_θ is the (0, 1)-vector field defined by the requirement ξ̄_θ⌟ω = θ^{1,0}_ω. Hence

τ̄⋆_ω θ^{0,1}_ω = −i Λ_ω(θ^{1,0}_ω ∧ θ^{0,1}_ω) − i Λ_ω[ω ∧ (ξ̄_θ⌟θ^{0,1}_ω)].

Since −i ξ̄_θ⌟θ^{0,1}_ω = |θ^{0,1}_ω|²_ω (cf. (7)), we infer that

τ̄⋆_ω θ^{0,1}_ω = −Λ_ω(i θ^{1,0}_ω ∧ θ^{0,1}_ω) + 2 |θ^{0,1}_ω|²_ω,

since Λ_ω(ω) = n = 2. Meanwhile, θ^{1,0}_ω = \overline{θ^{0,1}_ω}, so we get Λ_ω(i θ^{1,0}_ω ∧ θ^{0,1}_ω) = |θ^{1,0}_ω|²_ω = |θ^{0,1}_ω|²_ω (immediate verification in local coordinates). Formula 2.5 is now proved. □

End of proof of Lemma 2.4. Formula 2.5 transforms equivalence (9) into

∂∂̄ω = 0 ⇐⇒ (|θ^{1,0}_ω|²_ω − |θ^{0,1}_ω|²_ω) − ∂̄⋆_ω θ^{0,1}_ω = 0 ⇐⇒ ∂̄⋆_ω θ^{0,1}_ω = 0,

and we are done. □

3 An energy functional for the study of lcK metrics

In what follows, we will restrict attention to the set H_X := {ω ∈ C^∞_{1,1}(X, R) | ω > 0} of all Hermitian metrics on X. This is a non-empty open cone in the infinite-dimensional vector space C^∞_{1,1}(X, R) of all smooth real (1, 1)-forms on X. It will be called the Hermitian cone of X.
Building on Lemma 2.2, we introduce the following energy functional. By || · ||_ω, respectively | · |_ω, we mean the L²-norm, respectively the pointwise norm, defined by ω.

Definition 3.1 Let X be a compact complex manifold with dim_C X = n.

(i) If n = 2, let L : H_X −→ [0, +∞) be defined by

L(ω) := ∫_X ∂θ^{1,0}_ω ∧ ∂̄θ^{0,1}_ω = ||∂θ^{1,0}_ω||²_ω,

where θ_ω is the Lee form of ω.

(ii) If n ≥ 3, let L : H_X −→ [0, +∞) be defined by

L(ω) := ∫_X i (∂̄ω)_prim ∧ \overline{(∂̄ω)_prim} ∧ ω^{n−3} = ||(∂̄ω)_prim||²_ω,

where (∂̄ω)_prim is the ω-primitive part of ∂̄ω in its Lefschetz decomposition (5).

This definition is justified by the following observation.

Lemma 3.2 In the setup of Definition 3.1, for every metric ω ∈ H_X the following equivalence holds:

ω is an lcK metric ⇐⇒ L(ω) = 0.

Proof. • In the case n = 2, we know from (i) of Lemma 2.2 that ω is lcK if and only if dθ_ω = 0. This condition is equivalent to L̃(ω) = 0, where we set L̃(ω) := ||dθ_ω||²_ω = ∫_X dθ_ω ∧ ⋆(dθ̄_ω). We also know from (i) of Lemma 2.2 that dθ_ω is ω-primitive, so we get

0 = Λ_ω(dθ_ω) = Λ_ω(∂θ^{1,0}_ω) + Λ_ω(∂θ^{0,1}_ω + ∂̄θ^{1,0}_ω) + Λ_ω(∂̄θ^{0,1}_ω) = Λ_ω(∂θ^{0,1}_ω + ∂̄θ^{1,0}_ω),

where the last identity follows from the previous one for bidegree reasons. We infer that the (1, 1)-form ∂θ^{0,1}_ω + ∂̄θ^{1,0}_ω is ω-primitive. But so are ∂θ^{1,0}_ω and ∂̄θ^{0,1}_ω for bidegree reasons, so we can apply the following general formula (cf. e.g. [Voi02, Proposition 6.29, p. 150]), valid for any primitive form v of arbitrary bidegree (p, q) on any complex n-dimensional manifold:

⋆v = (−1)^{k(k+1)/2} i^{p−q} ω^{n−p−q} ∧ v, where k := p + q,    (10)

to get ⋆(dθ_ω) = ∂θ^{1,0}_ω − (∂θ^{0,1}_ω + ∂̄θ^{1,0}_ω) + ∂̄θ^{0,1}_ω. We infer that

dθ_ω ∧ ⋆(dθ̄_ω) = [∂θ^{1,0}_ω + (∂θ^{0,1}_ω + ∂̄θ^{1,0}_ω) + ∂̄θ^{0,1}_ω] ∧ [∂θ^{1,0}_ω − (∂θ^{0,1}_ω + ∂̄θ^{1,0}_ω) + ∂̄θ^{0,1}_ω] = 2 ∂θ^{1,0}_ω ∧ ∂̄θ^{0,1}_ω − (∂θ^{0,1}_ω + ∂̄θ^{1,0}_ω)²

and finally that

L̃(ω) = 2 L(ω) − ∫_X (∂θ^{0,1}_ω + ∂̄θ^{1,0}_ω)².    (11)

On the other hand, the Stokes formula implies the first of the following identities:

0 = ∫_X dθ_ω ∧ dθ_ω = ∫_X [∂θ^{1,0}_ω + (∂θ^{0,1}_ω + ∂̄θ^{1,0}_ω) + ∂̄θ^{0,1}_ω] ∧ [∂θ^{1,0}_ω + (∂θ^{0,1}_ω + ∂̄θ^{1,0}_ω) + ∂̄θ^{0,1}_ω] = 2 L(ω) + ∫_X (∂θ^{0,1}_ω + ∂̄θ^{1,0}_ω)².    (12)

We conclude from (11) and (12) that L̃(ω) = 0 if and only if L(ω) = 0. Thus, we have proved that ω is lcK if and only if L(ω) = 0, as claimed. The identity L(ω) = ||∂θ^{1,0}_ω||²_ω follows at once from the general formula (10) applied to the primitive (2, 0)-form ∂θ^{1,0}_ω. Indeed, ⋆∂θ^{1,0}_ω = ∂θ^{1,0}_ω, hence ∂θ^{1,0}_ω ∧ ∂̄θ^{0,1}_ω = ∂θ^{1,0}_ω ∧ ⋆(\overline{∂θ^{1,0}_ω}) = |∂θ^{1,0}_ω|²_ω dV_ω.

• In the case n ≥ 3, we know from (ii) of Lemma 2.2 that ω is lcK if and only if (dω)_prim = 0. Now, (dω)_prim = (∂ω)_prim + (∂̄ω)_prim, and the forms (∂ω)_prim and (∂̄ω)_prim are conjugate to each other and of different pure types ((2, 1), respectively (1, 2)), so the vanishing of (dω)_prim is equivalent to the vanishing of (∂̄ω)_prim. Meanwhile, the standard formula (10) applied to the primitive (2, 1)-form \overline{(∂̄ω)_prim} = (∂ω)_prim spells:

⋆ \overline{(∂̄ω)_prim} = i \overline{(∂̄ω)_prim} ∧ ω^{n−3}.
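The three signs used above to compute ⋆(dθ_ω) on a surface all come from (10); spelling them out (an added check, not part of the original text):

```latex
% Formula (10) with n = 2 and k = p+q = 2 gives n-p-q = 0 and
% (-1)^{k(k+1)/2} = (-1)^3 = -1, so \star v = -\,i^{\,p-q}\,v for primitive v:
\star\,\partial\theta^{1,0}_\omega = -\,i^{2}\,\partial\theta^{1,0}_\omega
  = +\,\partial\theta^{1,0}_\omega \qquad (p,q)=(2,0),
% on the primitive (1,1)-part, -\,i^{0} = -1:
\star\,\big(\partial\theta^{0,1}_\omega + \bar\partial\theta^{1,0}_\omega\big)
  = -\,\big(\partial\theta^{0,1}_\omega + \bar\partial\theta^{1,0}_\omega\big) \qquad (p,q)=(1,1),
% and -\,i^{-2} = +1 in bidegree (0,2):
\star\,\bar\partial\theta^{0,1}_\omega = +\,\bar\partial\theta^{0,1}_\omega .
```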
This proves the identity L(ω) = ||(∂̄ω)_prim||²_ω. Putting these pieces of information together, we get the following equivalences:

ω lcK ⇐⇒ (dω)_prim = 0 ⇐⇒ (∂̄ω)_prim = 0 ⇐⇒ L(ω) = 0.

The proof is complete. □

4 First variation of the functional: case of complex surfaces

Let S be a compact complex surface. (So, we set X = S when n = 2.) We will compute the differential of the functional L : H_S −→ [0, +∞) defined on the Hermitian cone of S. Let ω ∈ H_S. Then T_ω H_S = C^∞_{1,1}(S, R), so we will compute the differential d_ω L : C^∞_{1,1}(S, R) −→ R by computing the derivative of L(ω + tγ) w.r.t. t ∈ (−ε, ε) at t = 0 for any given real (1, 1)-form γ.

Lemma 4.1 The differential at ω of the map H_S ∋ ω ↦ θ^{0,1}_ω = Λ_ω(∂̄ω) is given by

(d_ω θ^{0,1}_ω)(γ) = (d/dt)|_{t=0} Λ_{ω+tγ}(∂̄ω + t ∂̄γ) = ⋆(γ ∧ ⋆∂̄ω) + Λ_ω(∂̄γ),

while the differential at ω of L is given by

(d_ω L)(γ) = 2 Re ∫_S ∂θ^{1,0}_ω ∧ ∂̄[⋆(γ ∧ ⋆∂̄ω) + Λ_ω(∂̄γ)],

for every form γ ∈ C^∞_{1,1}(S, R), where ⋆ = ⋆_ω is the Hodge star operator defined by the metric ω.

Before giving the proof of this lemma, we recall the following result from [DP22] that will be used several times in the sequel.

Lemma 4.2 ([DP22], Lemmas 3.5 and 3.3) For any complex manifold X of any dimension n ≥ 2, for any bidegree (p, q) and any C^∞ family (α_t)_{t∈(−ε,ε)} of forms α_t ∈ C^∞_{p,q}(X, C), with ε > 0 so small that ω + tγ > 0 for all t ∈ (−ε, ε), the following formulae hold:

(d/dt)|_{t=0} (Λ_{ω+tγ} α_t) = Λ_ω((dα_t/dt)|_{t=0}) − (γ ∧ ·)⋆_ω α_0 = Λ_ω((dα_t/dt)|_{t=0}) + (−1)^{p+q+1} ⋆_ω (γ ∧ ⋆_ω α_0).
The former of the above equalities appears as such in Lemma 3.5 of [DP22], while the latter follows from the former and from formula (27) of Lemma 3.3 of [DP22], which states that $\star_\omega(\bar\eta\wedge\cdot) = (\eta\wedge\cdot)^\star_\omega\,\star_\omega$ for any $(1,1)$-form $\eta$ on $X$. Indeed, in our case, taking $\eta = \gamma$ we get $\bar\eta = \gamma$ since $\gamma$ is real. Moreover, composing with $\star_\omega$ on the right and using the standard equality $\star_\omega\star_\omega = (-1)^{p+q}\,\mathrm{Id}$ on $(p,q)$-forms, we get $\star_\omega(\gamma\wedge\cdot)\star_\omega = (-1)^{p+q}\,(\gamma\wedge\cdot)^\star_\omega$ on $(p,q)$-forms.

Proof of Lemma 4.1. The formula for $(d_\omega\theta^{0,1}_\omega)(\gamma)$ is an immediate consequence of Lemma 4.2 applied with $\alpha_t = \bar\partial\omega + t\,\bar\partial\gamma$ (hence also with $(p, q) = (1, 2)$).
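As an aside, the sign rule $\star_\omega\star_\omega = (-1)^{p+q}\,\mathrm{Id}$ invoked above is a pointwise linear-algebra fact and can be verified numerically. The following sketch (not part of the argument) models the exterior algebra at a single point of a surface in an orthonormal coframe $e_1,\dots,e_4$, builds the $\mathbb{C}$-linear Hodge star as a matrix, and checks the rule on every basis form:

```python
import itertools
import numpy as np

# Exterior algebra at one point of a complex surface (real dimension 4),
# complex coefficients, orthonormal coframe e1, e2, e3, e4.
GENS = 4
SUBS = [s for k in range(GENS + 1) for s in itertools.combinations(range(GENS), k)]
IDX = {s: i for i, s in enumerate(SUBS)}
DIM = len(SUBS)  # 2^4 = 16

def sign(a, b):
    """Sign of e_a ^ e_b against the sorted basis element (0 if indices overlap)."""
    if set(a) & set(b):
        return 0
    seq = list(a) + list(b)
    inv = sum(seq[i] > seq[j] for i in range(len(seq)) for j in range(i + 1, len(seq)))
    return (-1) ** inv

# C-linear Hodge star on the orthonormal basis: star(e_S) = sign(S, S^c) e_{S^c},
# so that e_S ^ star(e_S) is the volume form e_1 ^ e_2 ^ e_3 ^ e_4.
star = np.zeros((DIM, DIM), dtype=complex)
for s in SUBS:
    comp = tuple(sorted(set(range(GENS)) - set(s)))
    star[IDX[comp], IDX[s]] = sign(s, comp)

# star o star multiplies a form of total degree k = p + q by (-1)^k.
expected = np.diag([(-1.0) ** len(s) for s in SUBS]).astype(complex)
assert np.allclose(star @ star, expected)
print("star o star = (-1)^(p+q) Id on all basis forms")
```

The same 16-dimensional model is reused in the sanity checks below; it captures the identities above exactly because they are purely pointwise.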
We further get:
$$(d_\omega L)(\gamma) = \frac{d}{dt}\Big|_{t=0}L(\omega+t\gamma) = \frac{d}{dt}\Big|_{t=0}\int_S \partial\theta^{1,0}_{\omega+t\gamma}\wedge\bar\partial\theta^{0,1}_{\omega+t\gamma}$$
$$= \int_S \partial\Big(\star(\gamma\wedge\star\partial\omega) + \Lambda_\omega(\partial\gamma)\Big)\wedge\bar\partial\theta^{0,1}_\omega + \int_S \partial\theta^{1,0}_\omega\wedge\bar\partial\Big(\star(\gamma\wedge\star\bar\partial\omega) + \Lambda_\omega(\bar\partial\gamma)\Big).$$
This is the stated formula for $(d_\omega L)(\gamma)$ since the two terms of the r.h.s. expression are mutually conjugated. $\square$

We will now simplify the above expression of $(d_\omega L)(\gamma)$, starting with a preliminary observation.

Lemma 4.3. Let $(X, \omega)$ be an $n$-dimensional complex Hermitian manifold and let $\star = \star_\omega$ be the Hodge star operator defined by $\omega$.

(i) For every $(0,1)$-form $\alpha$ on $X$, we have: $\star(\alpha\wedge\omega) = i\Lambda_\omega\big(\alpha\wedge\omega^{n-1}/(n-1)!\big)$. Moreover, if $n = 2$, then $\star(\alpha\wedge\omega) = i\alpha$ for any $(0,1)$-form $\alpha$ on $X$.
(ii) If $n = 2$, then $\star(\gamma\wedge\alpha) = i\Lambda_\omega(\gamma\wedge\alpha)$ for any $(1,1)$-form $\gamma$ and any $(0,1)$-form $\alpha$ on $X$. In particular, $\star\bar\partial\omega = i\theta^{0,1}_\omega$ for any Hermitian metric $\omega$ on a complex surface.

(iii) In arbitrary dimension $n$, for any $(1,1)$-form $\gamma$ and any $(0,1)$-form $\alpha$ on $X$, we have:
$$\Lambda_\omega(\gamma\wedge\alpha) = (\Lambda_\omega\gamma)\,\alpha + i\,\xi_\alpha\lrcorner\gamma,$$
where $\xi_\alpha$ is the (unique) vector field of type $(1,0)$ defined by the requirement $\xi_\alpha\lrcorner\omega = i\alpha$.

Proof. (i) From the standard formula $\star\Lambda_\omega = L_\omega\star$ (cf. e.g. [Dem97, VI, §5.1]) we get $\Lambda_\omega = \star L_\omega\star$ on even-degreed forms and $\Lambda_\omega = -\star L_\omega\star$ on odd-degreed forms.
Consequently,
$$\star(\alpha\wedge\omega) = \star L_\omega\alpha = -(\star L_\omega\star)\star\alpha = \Lambda_\omega(\star\alpha) = \Lambda_\omega\Big({-}\tfrac{1}{i}\,\alpha\wedge\frac{\omega^{n-1}}{(n-1)!}\Big),$$
where we used the fact that $\star\star = -1$ on odd-degreed forms and the standard formula (10) applied to the (necessarily primitive) $(0,1)$-form $\alpha$. When $n = 2$, we get
$$\star(\alpha\wedge\omega) = i\Lambda_\omega(\alpha\wedge\omega) = i[\Lambda_\omega, L_\omega]\,\alpha = -i(1-2)\,\alpha = i\alpha$$
after using the general formula $[L_\omega, \Lambda_\omega] = (k - n)$ on $k$-forms on $n$-dimensional complex manifolds.

(ii) If $n = 2$, the map $\omega\wedge\cdot : \Lambda^1 T^\star X \longrightarrow \Lambda^3 T^\star X$ is an isomorphism at every point of $X$. Since $\gamma\wedge\alpha$ is a 3-form, there exists a unique 1-form $\beta$ (necessarily of type $(0,1)$) such that $\gamma\wedge\alpha = \omega\wedge\beta$. Moreover, $\beta = \Lambda_\omega(\gamma\wedge\alpha)$ because $\omega\wedge\Lambda_\omega(\gamma\wedge\alpha) = [L_\omega, \Lambda_\omega](\gamma\wedge\alpha) = \gamma\wedge\alpha$. Indeed, $\omega\wedge(\gamma\wedge\alpha) = 0$ for bidegree reasons (here $n = 2$) and $[L_\omega, \Lambda_\omega] = (k - n)$ on $k$-forms. Thus, $\gamma\wedge\alpha = \omega\wedge\Lambda_\omega(\gamma\wedge\alpha)$.
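As an aside, the commutation relation $[L_\omega, \Lambda_\omega] = (k-n)\,\mathrm{Id}$ on $k$-forms, used repeatedly in this proof, is also a pointwise fact. A numerical sanity check for $n = 2$ (a sketch only, with $\omega = e_1\wedge e_2 + e_3\wedge e_4$ in an orthonormal coframe and $\Lambda_\omega$ realized as the adjoint of $L_\omega$):

```python
import itertools
import numpy as np

# Exterior algebra at a point of a complex surface, orthonormal coframe e1..e4,
# with compatible form omega = e1^e2 + e3^e4 at that point.
GENS = 4
SUBS = [s for k in range(GENS + 1) for s in itertools.combinations(range(GENS), k)]
IDX = {s: i for i, s in enumerate(SUBS)}
DIM = len(SUBS)

def sign(a, b):
    if set(a) & set(b):
        return 0
    seq = list(a) + list(b)
    inv = sum(seq[i] > seq[j] for i in range(len(seq)) for j in range(i + 1, len(seq)))
    return (-1) ** inv

# L = (omega ^ .) as a matrix on the 16-dimensional exterior algebra.
L = np.zeros((DIM, DIM), dtype=complex)
for s in SUBS:
    for t in [(0, 1), (2, 3)]:
        sg = sign(t, s)
        if sg:
            L[IDX[tuple(sorted(t + s))], IDX[s]] += sg

Lam = L.conj().T  # adjoint w.r.t. the induced inner product (orthonormal basis)
comm = L @ Lam - Lam @ L

# [L, Lam] acts on k-forms as (k - n) Id, with n = 2 here.
expected = np.diag([len(s) - 2.0 for s in SUBS]).astype(complex)
assert np.allclose(comm, expected)
print("[L, Lam] = (k - n) Id verified for n = 2")
```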
So, applying (i) for the second identity below, we get:
$$\star(\gamma\wedge\alpha) = \star\big(\omega\wedge\Lambda_\omega(\gamma\wedge\alpha)\big) = i\Lambda_\omega\big(\omega\wedge\Lambda_\omega(\gamma\wedge\alpha)\big) = i[\Lambda_\omega, L_\omega]\big(\Lambda_\omega(\gamma\wedge\alpha)\big) = i\Lambda_\omega(\gamma\wedge\alpha).$$
For the last equality, we used again the general formula $[L_\omega, \Lambda_\omega] = (k - n)$ on $k$-forms ($n = 2$ here). In order to prove the formula for $\star\bar\partial\omega$, recall that $\bar\partial\omega = \omega\wedge\theta^{0,1}_\omega$, so we get
$$\star\bar\partial\omega = \star(\omega\wedge\theta^{0,1}_\omega) = i\Lambda_\omega(\omega\wedge\theta^{0,1}_\omega) = i[\Lambda_\omega, L_\omega]\,\theta^{0,1}_\omega = -i(1-2)\,\theta^{0,1}_\omega,$$
where we used the first part of (ii) to get the second identity.

(iii) Since the claimed identity is pointwise and involves only zero-th order operators, we fix an arbitrary point $x \in X$ and choose local holomorphic coordinates about $x$ such that, at $x$, we have
$$\omega = \sum_{a=1}^n i\,dz_a\wedge d\bar z_a \qquad\text{and}\qquad \gamma = \sum_{j=1}^n \gamma_{j\bar j}\, i\,dz_j\wedge d\bar z_j.$$
Then, $\Lambda_\omega = -i\sum_{j=1}^n \dfrac{\partial}{\partial\bar z_j}\lrcorner\dfrac{\partial}{\partial z_j}\lrcorner\,\cdot$ at $x$.
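As an aside, part (ii) just proved can be checked numerically at a point. The sketch below (not part of the proof) uses the orthonormal-coframe model $\omega = e_1\wedge e_2 + e_3\wedge e_4$, $dz_1 = e_1 + ie_2$, $dz_2 = e_3 + ie_4$, with randomly chosen coefficients for $\gamma$ and $\alpha$:

```python
import itertools
import numpy as np

GENS = 4  # e1..e4 orthonormal at a point of a surface
SUBS = [s for k in range(GENS + 1) for s in itertools.combinations(range(GENS), k)]
IDX = {s: i for i, s in enumerate(SUBS)}
DIM = len(SUBS)

def sign(a, b):
    if set(a) & set(b):
        return 0
    seq = list(a) + list(b)
    inv = sum(seq[i] > seq[j] for i in range(len(seq)) for j in range(i + 1, len(seq)))
    return (-1) ** inv

def wedge(f, g):  # forms as {index-tuple: coefficient} dictionaries
    out = {}
    for s, a in f.items():
        for t, b in g.items():
            sg = sign(s, t)
            if sg:
                k = tuple(sorted(s + t))
                out[k] = out.get(k, 0) + sg * a * b
    return out

def vec(f):
    v = np.zeros(DIM, dtype=complex)
    for s, c in f.items():
        v[IDX[s]] += c
    return v

star = np.zeros((DIM, DIM), dtype=complex)  # C-linear Hodge star
for s in SUBS:
    comp = tuple(sorted(set(range(GENS)) - set(s)))
    star[IDX[comp], IDX[s]] = sign(s, comp)

L = np.zeros((DIM, DIM), dtype=complex)  # wedge with omega = e12 + e34
for s in SUBS:
    for t in [(0, 1), (2, 3)]:
        sg = sign(t, s)
        if sg:
            L[IDX[tuple(sorted(t + s))], IDX[s]] += sg
Lam = L.conj().T

dz = [{(0,): 1, (1,): 1j}, {(2,): 1, (3,): 1j}]
dzb = [{(0,): 1, (1,): -1j}, {(2,): 1, (3,): -1j}]

rng = np.random.default_rng(0)
gamma = {}  # a generic (1,1)-form: sum of c_{jk} dz_j ^ dzbar_k
for j in range(2):
    for k in range(2):
        c = rng.standard_normal() + 1j * rng.standard_normal()
        for s, a in wedge(dz[j], dzb[k]).items():
            gamma[s] = gamma.get(s, 0) + c * a
alpha = {}  # a generic (0,1)-form
for k in range(2):
    c = rng.standard_normal() + 1j * rng.standard_normal()
    for s, a in dzb[k].items():
        alpha[s] = alpha.get(s, 0) + c * a

w = vec(wedge(gamma, alpha))  # the 3-form gamma ^ alpha
assert np.allclose(star @ w, 1j * (Lam @ w))
print("star(gamma ^ alpha) = i Lam(gamma ^ alpha) verified")
```

Both sides are computed independently (the star as a matrix, $\Lambda_\omega$ as the adjoint of $L_\omega$), so agreement is a genuine cross-check of the identity.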
If we set $\alpha = \sum_{j=1}^n \alpha_j\,d\bar z_j$ (at any point), we get $\xi_\alpha = \sum_{j=1}^n \alpha_j\,\dfrac{\partial}{\partial z_j}$ (at $x$) and the following equalities (at $x$):
$$\Lambda_\omega(\gamma\wedge\alpha) = -i\sum_{j=1}^n \frac{\partial}{\partial\bar z_j}\lrcorner\frac{\partial}{\partial z_j}\lrcorner(\gamma\wedge\alpha) \stackrel{(a)}{=} -i\sum_{j=1}^n \frac{\partial}{\partial\bar z_j}\lrcorner\Big(\Big(\frac{\partial}{\partial z_j}\lrcorner\gamma\Big)\wedge\alpha\Big)$$
$$= -i\sum_{j=1}^n\Big(\frac{\partial}{\partial\bar z_j}\lrcorner\frac{\partial}{\partial z_j}\lrcorner\gamma\Big)\wedge\alpha + i\sum_{j=1}^n\Big(\frac{\partial}{\partial z_j}\lrcorner\gamma\Big)\wedge\Big(\frac{\partial}{\partial\bar z_j}\lrcorner\alpha\Big) \stackrel{(b)}{=} \Big(\sum_{j=1}^n \gamma_{j\bar j}\Big)\,\alpha - \sum_{j=1}^n \alpha_j\gamma_{j\bar j}\,d\bar z_j = (\Lambda_\omega\gamma)\,\alpha + i\,\xi_\alpha\lrcorner\gamma,$$
where (a) follows from $\dfrac{\partial}{\partial z_j}\lrcorner\alpha = 0$ for bidegree reasons and (b) follows from $\dfrac{\partial}{\partial z_j}\lrcorner\gamma = i\gamma_{j\bar j}\,d\bar z_j$ and from $\dfrac{\partial}{\partial\bar z_j}\lrcorner\alpha = \alpha_j$. This proves the desired equality at $x$, hence at any point since $x$ was arbitrary. $\square$

We can now derive a simplified form of the first variation of the functional $L$.

Theorem 4.4. Let $S$ be a compact complex surface on which a Hermitian metric $\omega$ has been fixed.
(i) The differential at $\omega \in \mathcal{H}_S$ of the functional $L : \mathcal{H}_S \longrightarrow [0, +\infty)$, evaluated at any form $\gamma \in C^\infty_{1,1}(S, \mathbb{R})$, is given by any of the following three formulae:
$$(d_\omega L)(\gamma) = -2\,\mathrm{Re}\int_S \Lambda_\omega(\gamma)\,\partial\theta^{1,0}_\omega\wedge\bar\partial\theta^{0,1}_\omega - 2\,\mathrm{Re}\int_S \partial\theta^{1,0}_\omega\wedge\bar\partial\Lambda_\omega(\gamma)\wedge\theta^{0,1}_\omega + 2\,\mathrm{Re}\int_S \partial\theta^{1,0}_\omega\wedge\bar\partial\Lambda_\omega(\bar\partial\gamma) - 2\,\mathrm{Re}\int_S i\,\partial\theta^{1,0}_\omega\wedge\bar\partial\big(\xi_{\theta^{0,1}_\omega}\lrcorner\gamma\big) \tag{13}$$
$$= -2\,\mathrm{Re}\int_S \Lambda_\omega(\gamma)\,|\partial\theta^{1,0}_\omega|^2_\omega\,dV_\omega - 2\,\mathrm{Re}\int_S \partial\theta^{1,0}_\omega\wedge\bar\partial\Lambda_\omega(\gamma)\wedge\theta^{0,1}_\omega - 2\,\mathrm{Re}\,i\,\langle\langle\partial\bar\partial\theta^{1,0}_\omega,\,\partial\gamma\rangle\rangle_\omega - 2\,\mathrm{Re}\int_S i\,\partial\theta^{1,0}_\omega\wedge\bar\partial\big(\xi_{\theta^{0,1}_\omega}\lrcorner\gamma\big) \tag{14}$$
$$= -2\,\mathrm{Re}\int_S \partial\theta^{1,0}_\omega\wedge\bar\partial\Lambda_\omega(\gamma\wedge\theta^{0,1}_\omega) - 2\,\mathrm{Re}\,i\,\langle\langle\partial\bar\partial\theta^{1,0}_\omega,\,\partial\gamma\rangle\rangle_\omega, \tag{15}$$
where $\star = \star_\omega$ is the Hodge star operator defined by the metric $\omega$ and $\xi_{\theta^{0,1}_\omega}$ is the vector field of type $(1,0)$ defined by the requirement $\xi_{\theta^{0,1}_\omega}\lrcorner\omega = i\theta^{0,1}_\omega$.

(ii) In particular, for any given $\omega \in \mathcal{H}_S$, if we choose $\gamma = \partial\theta^{0,1}_\omega + \bar\partial\theta^{1,0}_\omega$, we have
$$(d_\omega L)(\gamma) = -2\,\mathrm{Re}\int_S i\,\partial\theta^{1,0}_\omega\wedge\bar\partial\big(\xi_{\theta^{0,1}_\omega}\lrcorner\gamma\big) = -2\,\mathrm{Re}\int_S \partial\theta^{1,0}_\omega\wedge\bar\partial\Lambda_\omega(\gamma\wedge\theta^{0,1}_\omega).$$

Proof. (i) From (ii) and (iii) of Lemma 4.3 applied with $\alpha := i\theta^{0,1}_\omega$, we get
$$\star(\gamma\wedge\star\bar\partial\omega) = \star(\gamma\wedge i\theta^{0,1}_\omega) = i\Lambda_\omega(\gamma\wedge i\theta^{0,1}_\omega) = -\Lambda_\omega(\gamma\wedge\theta^{0,1}_\omega) = -\Lambda_\omega(\gamma)\,\theta^{0,1}_\omega - i\,\xi_{\theta^{0,1}_\omega}\lrcorner\gamma.$$
Formula (13) follows from this and from Lemma 4.1. To get (14), we first notice that $\bar\partial\theta^{0,1}_\omega = \star\bar\partial\theta^{0,1}_\omega$ by the standard formula (10) applied to the (necessarily primitive) $(0,2)$-form $\bar\partial\theta^{0,1}_\omega$. This accounts for the first term on the r.h.s. of (14). Then, we transform the third term in (13) as follows:
$$2\,\mathrm{Re}\int_S \partial\theta^{1,0}_\omega\wedge\bar\partial\Lambda_\omega(\bar\partial\gamma) \stackrel{(a)}{=} -2\,\mathrm{Re}\int_S \partial\theta^{1,0}_\omega\wedge\bar\partial\,\star L_\omega\star(\bar\partial\gamma) \stackrel{(b)}{=} 2\,\mathrm{Re}\int_S \bar\partial\partial\theta^{1,0}_\omega\wedge\star\big(\omega\wedge\star(\bar\partial\gamma)\big) \stackrel{(c)}{=} 2\,\mathrm{Re}\,i\int_S \bar\partial\partial\theta^{1,0}_\omega\wedge\star(\bar\partial\gamma) \stackrel{(d)}{=} 2\,\mathrm{Re}\,i\int_S \langle\bar\partial\partial\theta^{1,0}_\omega,\,\partial\bar\gamma\rangle_\omega\,dV_\omega,$$
where we used the standard identity $\Lambda_\omega = -\star L_\omega\star$ on odd-degreed forms to get (a), Stokes to get (b), part (i) of Lemma 4.3 to get (c), and the definition of $\star$ to get (d). Finally, we recall that $\bar\gamma = \gamma$ since $\gamma$ is real. Finally, (15) follows from Lemma 4.1 after using the equality $\star(\gamma\wedge\star\bar\partial\omega) = -\Lambda_\omega(\gamma\wedge\theta^{0,1}_\omega)$ (seen above in the proof of (13)) and after transforming the third term in (13) as we did above in the proof of (14).
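As an aside, the instance of formula (10) used above, namely that $\star$ fixes the (automatically primitive) $(0,2)$-forms of a surface, can also be checked at a point in the orthonormal-coframe model (a numerical sketch, not part of the proof):

```python
import itertools
import numpy as np

GENS = 4  # e1..e4 orthonormal; dzbar1 = e1 - i e2, dzbar2 = e3 - i e4
SUBS = [s for k in range(GENS + 1) for s in itertools.combinations(range(GENS), k)]
IDX = {s: i for i, s in enumerate(SUBS)}
DIM = len(SUBS)

def sign(a, b):
    if set(a) & set(b):
        return 0
    seq = list(a) + list(b)
    inv = sum(seq[i] > seq[j] for i in range(len(seq)) for j in range(i + 1, len(seq)))
    return (-1) ** inv

star = np.zeros((DIM, DIM), dtype=complex)  # C-linear Hodge star
for s in SUBS:
    comp = tuple(sorted(set(range(GENS)) - set(s)))
    star[IDX[comp], IDX[s]] = sign(s, comp)

# dzbar1 ^ dzbar2 spans the (0,2)-forms at the point:
# (e1 - i e2) ^ (e3 - i e4) = e13 - i e14 - i e23 - e24.
v = np.zeros(DIM, dtype=complex)
v[IDX[(0, 2)]] = 1
v[IDX[(0, 3)]] = -1j
v[IDX[(1, 2)]] = -1j
v[IDX[(1, 3)]] = -1
assert np.allclose(star @ v, v)
print("star fixes (0,2)-forms on a surface")
```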
(ii) The stated choice of $\gamma$ means that $\gamma$ is the component $(d\theta_\omega)^{1,1}$ of type $(1,1)$ of the primitive 2-form $d\theta_\omega$. (See (i) of Lemma 2.2 for the primitivity statement.) Since $\Lambda_\omega\big((d\theta_\omega)^{2,0}\big) = 0$ and $\Lambda_\omega\big((d\theta_\omega)^{0,2}\big) = 0$ for bidegree reasons, we infer that $\Lambda_\omega(\gamma) = \Lambda_\omega\big((d\theta_\omega)^{1,1}\big) = \Lambda_\omega(d\theta_\omega) = 0$. Therefore, the first two integrals on the r.h.s. of (13) vanish. Meanwhile, to handle the third integral on the r.h.s. of (13), we notice that $\partial\bar\gamma = \partial\bar\partial\theta^{1,0}_\omega$ and this gives the second equality below:
$$2\,\mathrm{Re}\int_S \partial\theta^{1,0}_\omega\wedge\bar\partial\Lambda_\omega(\bar\partial\gamma) = 2\,\mathrm{Re}\,i\int_S \langle\bar\partial\partial\theta^{1,0}_\omega,\,\partial\bar\gamma\rangle_\omega\,dV_\omega = -2\,\mathrm{Re}\,i\,\|\bar\partial\partial\theta^{1,0}_\omega\|^2_\omega = 0,$$
where the first equality above followed from the proof of (14). Thus, the r.h.s. of formula (13) for $(d_\omega L)(\gamma)$ reduces to its last integral for this choice of $\gamma$. This proves the first claimed equality. For the same reason as above, the latter term on the r.h.s. of formula (15) for $(d_\omega L)(\gamma)$ vanishes. This proves the second claimed equality.
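As an aside, the corollary below rests on two pointwise facts on a surface: $\Lambda_\omega(f\omega) = 2f$, and the Lefschetz decomposition of a 2-form $\gamma$ as $\gamma_{\rm prim} + \frac{1}{2}\Lambda_\omega(\gamma)\,\omega$ with $\gamma_{\rm prim}$ primitive. Both can be checked numerically in the same orthonormal-coframe sketch (not part of the proofs):

```python
import itertools
import numpy as np

GENS = 4
SUBS = [s for k in range(GENS + 1) for s in itertools.combinations(range(GENS), k)]
IDX = {s: i for i, s in enumerate(SUBS)}
DIM = len(SUBS)

def sign(a, b):
    if set(a) & set(b):
        return 0
    seq = list(a) + list(b)
    inv = sum(seq[i] > seq[j] for i in range(len(seq)) for j in range(i + 1, len(seq)))
    return (-1) ** inv

L = np.zeros((DIM, DIM), dtype=complex)  # wedge with omega = e12 + e34
for s in SUBS:
    for t in [(0, 1), (2, 3)]:
        sg = sign(t, s)
        if sg:
            L[IDX[tuple(sorted(t + s))], IDX[s]] += sg
Lam = L.conj().T

omega = np.zeros(DIM, dtype=complex)
omega[IDX[(0, 1)]] = 1
omega[IDX[(2, 3)]] = 1

# Lam(omega) = 2 at the point, hence Lam(f omega) = 2 f for a function f.
assert np.allclose((Lam @ omega)[IDX[()]], 2)

# A random 2-form gamma minus (Lam(gamma)/2) omega is primitive: Lam of it is 0.
rng = np.random.default_rng(1)
gamma = np.zeros(DIM, dtype=complex)
for s in SUBS:
    if len(s) == 2:
        gamma[IDX[s]] = rng.standard_normal() + 1j * rng.standard_normal()
prim = gamma - 0.5 * (Lam @ gamma)[IDX[()]] * omega
assert abs((Lam @ prim)[IDX[()]]) < 1e-12
print("Lam(omega) = 2 and Lefschetz decomposition of 2-forms verified")
```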
$\square$

As an application of (i) of Theorem 4.4, we will now see that the differential $d_\omega L$ vanishes on all the real $(1,1)$-forms $\gamma$ that are $\omega$-anti-primitive (in the sense that $\gamma$ is $\langle\,,\rangle_\omega$-orthogonal to all the $\omega$-primitive $(1,1)$-forms, a condition which is equivalent to $\gamma$ being a function multiple of $\omega$).

Corollary 4.5. Let $S$ be a compact complex surface on which a Hermitian metric $\omega$ has been fixed. For any real-valued $C^\infty$ function $f$ on $X$, we have $(d_\omega L)(f\omega) = 0$. In particular, for any real $(1,1)$-form $\gamma$ on $S$ we have $(d_\omega L)(\gamma) = (d_\omega L)(\gamma_{\rm prim})$, where $\gamma_{\rm prim}$ is the $\omega$-primitive component of $\gamma$ in its Lefschetz decomposition.

Proof.
Applying formula (13) with $\gamma = f\omega$ and using the obvious equalities $\Lambda_\omega(f\omega) = 2f$ (recall that $\dim_\mathbb{C} S = 2$) and $\xi_{\theta^{0,1}_\omega}\lrcorner(f\omega) = f\,(i\theta^{0,1}_\omega)$, we get:
$$(d_\omega L)(f\omega) = -4\,\mathrm{Re}\int_S f\,\partial\theta^{1,0}_\omega\wedge\bar\partial\theta^{0,1}_\omega - 4\,\mathrm{Re}\int_S \partial\theta^{1,0}_\omega\wedge\bar\partial f\wedge\theta^{0,1}_\omega + 2\,\mathrm{Re}\int_S \partial\theta^{1,0}_\omega\wedge\bar\partial\Lambda_\omega\big(f\,\bar\partial\omega + \bar\partial f\wedge\omega\big) - 2\,\mathrm{Re}\int_S i\,\partial\theta^{1,0}_\omega\wedge\big(i f\,\bar\partial\theta^{0,1}_\omega + i\,\bar\partial f\wedge\theta^{0,1}_\omega\big) = T_1 + T_2 + T_3 + T_4, \tag{16}$$
where $T_1$, $T_2$, $T_3$ and $T_4$ stand for the four terms, listed in order, on the r.h.s. of the above expression for $(d_\omega L)(f\omega)$. Computing $T_3$, we get:
$$T_3 = 2\,\mathrm{Re}\int_S \partial\theta^{1,0}_\omega\wedge\bar\partial(f\,\theta^{0,1}_\omega) + 2\,\mathrm{Re}\int_S \partial\theta^{1,0}_\omega\wedge\bar\partial\big([\Lambda_\omega, L_\omega](\bar\partial f)\big),$$
where we used the equalities $\Lambda_\omega(\bar\partial\omega) = \theta^{0,1}_\omega$ (see (1)) and $\Lambda_\omega(\bar\partial f) = 0$ (which leads to $\Lambda_\omega(\bar\partial f\wedge\omega) = [\Lambda_\omega, L_\omega](\bar\partial f)$). Now, it is standard that $[\Lambda_\omega, L_\omega] = (n - k)\,\mathrm{Id}$ on $k$-forms on an $n$-dimensional complex manifold, so in our case we get $[\Lambda_\omega, L_\omega](\bar\partial f) = \bar\partial f$ since $n = 2$ and $k = 1$. We conclude that $\bar\partial\big([\Lambda_\omega, L_\omega](\bar\partial f)\big) = \bar\partial^2 f = 0$, hence
$$T_3 = 2\,\mathrm{Re}\int_S f\,\partial\theta^{1,0}_\omega\wedge\bar\partial\theta^{0,1}_\omega + 2\,\mathrm{Re}\int_S \partial\theta^{1,0}_\omega\wedge\bar\partial f\wedge\theta^{0,1}_\omega = T_4,$$
where the last equality follows at once from the definition of $T_4$.
Thus, formula (16) translates to
$$(d_\omega L)(f\omega) = T_1 + T_2 + T_3 + T_4 = (-4+4)\,\mathrm{Re}\int_S f\,\partial\theta^{1,0}_\omega\wedge\bar\partial\theta^{0,1}_\omega + (-4+4)\,\mathrm{Re}\int_S \partial\theta^{1,0}_\omega\wedge\bar\partial f\wedge\theta^{0,1}_\omega = 0.$$
This proves the first statement. The second statement follows at once from the first, from the linearity of the map $d_\omega L$ and from the Lefschetz decomposition $\gamma = \gamma_{\rm prim} + \frac{1}{2}\,\Lambda_\omega(\gamma)\,\omega$. $\square$

We hope that it will be possible in the future to prove that any Hermitian metric $\omega$ on a compact complex surface that is a critical point for the functional $L$ is actually an lcK metric.

5 First variation of the functional: case of dimension $\geq 3$

In this section, we suppose that the complex dimension of $X$ is $n \geq 3$. The goal is to compute the differential of the energy functional $L$ introduced in Definition 3.1-(ii). Let $\omega$ be a Hermitian metric on $X$ and let $\gamma$ be a real $(1,1)$-form.
The latter can be seen as a tangent vector to $\mathcal{H}_X$ at $\omega$.

Theorem 5.1. For any Hermitian metric $\omega$ and any real $(1,1)$-form $\gamma$, we have:
$$(d_\omega L)(\gamma) = \int_X i\,(\bar\partial\omega)_{\rm prim}\wedge\overline{(\bar\partial\omega)_{\rm prim}}\wedge\gamma\wedge\omega^{n-4} + 2\,\mathrm{Re}\,\langle\langle(\bar\partial\omega)_{\rm prim},\,(\bar\partial\gamma)_{\rm prim}\rangle\rangle_\omega - 2\,\mathrm{Re}\,\langle\langle\theta^{0,1}_\omega\wedge\gamma,\,(\bar\partial\omega)_{\rm prim}\rangle\rangle_\omega. \tag{17}$$

Proof. Recall (cf. the conjugate of (4)) that $(n-1)\,\theta^{0,1}_\omega = \Lambda_\omega(\bar\partial\omega)$ for any Hermitian metric $\omega$. Now, for any real $t$ sufficiently close to $0$, $\omega + t\gamma$ is again a Hermitian metric on $X$. Taking $\alpha_t = \bar\partial\omega + t\,\bar\partial\gamma$ in Lemma 4.2, we get the second equality below:
$$(n-1)\,\frac{d}{dt}\Big|_{t=0}\theta^{0,1}_{\omega+t\gamma} = \frac{d}{dt}\Big|_{t=0}\Lambda_{\omega+t\gamma}(\bar\partial\omega + t\,\bar\partial\gamma) = \Lambda_\omega(\bar\partial\gamma) - (\gamma\wedge\cdot)^\star_\omega(\bar\partial\omega). \tag{18}$$
On the other hand, taking $\frac{d}{dt}\big|_{t=0}$ in the expression for $L(\omega + t\gamma)$ given in (ii) of Definition 3.1 (with $\omega + t\gamma$ in place of $\omega$), we get:
$$(d_\omega L)(\gamma) = \frac{d}{dt}\Big|_{t=0}L(\omega+t\gamma) = \frac{d}{dt}\Big|_{t=0}\int_X i\,(\bar\partial\omega + t\,\bar\partial\gamma)_{\rm prim}\wedge\overline{(\bar\partial\omega + t\,\bar\partial\gamma)_{\rm prim}}\wedge(\omega+t\gamma)^{n-3}, \tag{19}$$
where the subscript prim indicates the $(\omega+t\gamma)$-primitive part of the form to which it is attached. Now, consider the Lefschetz decompositions (cf. (5)) of $\bar\partial\omega$ and $\bar\partial\gamma$ with respect to $\omega$:
$$\bar\partial\omega = (\bar\partial\omega)_{\rm prim} + \theta^{0,1}_\omega\wedge\omega, \qquad \bar\partial\gamma = (\bar\partial\gamma)_{\rm prim} + \theta^{0,1}_\gamma\wedge\omega,$$
and the Lefschetz decomposition of $\bar\partial\omega + t\,\bar\partial\gamma$ with respect to $\omega + t\gamma$:
$$\bar\partial\omega + t\,\bar\partial\gamma = (\bar\partial\omega + t\,\bar\partial\gamma)_{\rm prim} + \theta^{0,1}_{\omega+t\gamma}\wedge(\omega + t\gamma).$$
By the above equations we get:
$$(\bar\partial\omega + t\,\bar\partial\gamma)_{\rm prim} = (\bar\partial\omega)_{\rm prim} + \theta^{0,1}_\omega\wedge\omega + t\,(\bar\partial\gamma)_{\rm prim} + t\,\theta^{0,1}_\gamma\wedge\omega - \theta^{0,1}_{\omega+t\gamma}\wedge(\omega + t\gamma), \tag{20}$$
where primitivity is construed w.r.t. the metric $\omega + t\gamma$ in the case of the left-hand side term and w.r.t. the metric $\omega$ in the case of $(\bar\partial\omega)_{\rm prim}$ and $(\bar\partial\gamma)_{\rm prim}$.

Thanks to (20), equality (19) becomes:
$$(d_\omega L)(\gamma) = \frac{d}{dt}\Big|_{t=0}\int_X i\,\Big[(\bar\partial\omega)_{\rm prim} + \theta^{0,1}_\omega\wedge\omega + t\,(\bar\partial\gamma)_{\rm prim} + t\,\theta^{0,1}_\gamma\wedge\omega - \theta^{0,1}_{\omega+t\gamma}\wedge(\omega+t\gamma)\Big]\wedge\overline{\Big[(\bar\partial\omega)_{\rm prim} + \theta^{0,1}_\omega\wedge\omega + t\,(\bar\partial\gamma)_{\rm prim} + t\,\theta^{0,1}_\gamma\wedge\omega - \theta^{0,1}_{\omega+t\gamma}\wedge(\omega+t\gamma)\Big]}\wedge(\omega + t\gamma)^{n-3}.$$
Now,
$$\frac{d}{dt}\Big|_{t=0}\Big(\theta^{0,1}_{\omega+t\gamma}\wedge(\omega+t\gamma)\Big) = \theta^{0,1}_\omega\wedge\gamma + \Big(\frac{d}{dt}\Big|_{t=0}\theta^{0,1}_{\omega+t\gamma}\Big)\wedge\omega = \theta^{0,1}_\omega\wedge\gamma + \frac{1}{n-1}\Big(\Lambda_\omega(\bar\partial\gamma) - (\gamma\wedge\cdot)^\star_\omega(\bar\partial\omega)\Big)\wedge\omega,$$
where formula (18) was used to get the last equality.
Using this, straightforward computations yield:

(d_ω L)(γ) = I_1 + conj(I_1) + I_2,   (21)

where

I_2 = (n−3) ∫_X i [ (¯∂ω)_prim + θ^{0,1}_ω ∧ ω − θ^{0,1}_ω ∧ ω ] ∧ [ (∂ω)_prim + θ^{1,0}_ω ∧ ω − θ^{1,0}_ω ∧ ω ] ∧ ω^{n−4} ∧ γ = (n−3) ∫_X i (¯∂ω)_prim ∧ (∂ω)_prim ∧ ω^{n−4} ∧ γ   (22)

and

I_1 = ∫_X i [ (¯∂γ)_prim + θ^{0,1}_γ ∧ ω − θ^{0,1}_ω ∧ γ − (1/(n−1)) ( Λ_ω(¯∂γ) − (γ ∧ ·)*_ω(¯∂ω) ) ∧ ω ] ∧ (∂ω)_prim ∧ ω^{n−3}
    = ∫_X i (¯∂γ)_prim ∧ (∂ω)_prim ∧ ω^{n−3} − ∫_X i θ^{0,1}_ω ∧ γ ∧ (∂ω)_prim ∧ ω^{n−3},   (23)

where the last equality follows from (∂ω)_prim ∧ ω^{n−2} = 0 (a consequence of the ω-primitivity of the 3-form (∂ω)_prim), which leads to the vanishing of the products of the second and the fourth terms (that are multiples of ω) inside the large parenthesis with (∂ω)_prim ∧ ω^{n−3} in the integral on the first line of (23). Now, due to the ω-primitivity of the 3-form (∂ω)_prim, the standard formula (10) yields:

⋆ (∂ω)_prim = i (∂ω)_prim ∧ ω^{n−3},   (24)

where ⋆ = ⋆_ω is the Hodge star operator induced by ω. Thus, (23) translates to

I_1 = ∫_X (¯∂γ)_prim ∧ ⋆(∂ω)_prim − ∫_X θ^{0,1}_ω ∧ γ ∧ ⋆(∂ω)_prim = ⟨⟨(¯∂γ)_prim, (¯∂ω)_prim⟩⟩_ω − ⟨⟨θ^{0,1}_ω ∧ γ, (¯∂ω)_prim⟩⟩_ω.

This last formula for I_1, together with (21) and (22), proves the contention. □

Recall that we are interested in the set of critical points of L. We now notice that a suitable choice of γ in the previous result leads to an explicit description of this set. Since equation (17) is valid for all real (1,1)-forms γ, the choice γ = ω is licit, as is any other choice. We get the following

Corollary 5.2 Let X be a compact complex manifold with dim_C X = n ≥ 3 and let L be the functional defined in 3.1-(ii). For any Hermitian metric ω on X, we have:

(d_ω L)(ω) = (n − 1) ∥(¯∂ω)_prim∥²_ω = (n − 1) L(ω).   (25)

Proof. Taking γ = ω in equation (17), we get:

(d_ω L)(ω) = (n−3) ∫_X i (¯∂ω)_prim ∧ (∂ω)_prim ∧ ω ∧ ω^{n−4} + 2 Re ⟨⟨(¯∂ω)_prim, (¯∂ω)_prim⟩⟩_ω − 2 Re ⟨⟨θ^{0,1}_ω ∧ ω, (¯∂ω)_prim⟩⟩_ω
 = (n − 3) i ∫_X (¯∂ω)_prim ∧ (∂ω)_prim ∧ ω^{n−3} + 2 ∥(¯∂ω)_prim∥²_ω − 2 Re ⟨⟨θ^{0,1}_ω, Λ_ω((¯∂ω)_prim)⟩⟩_ω = (n − 1) ∥(¯∂ω)_prim∥²_ω,

where the last equality followed from (∂ω)_prim ∧ ω^{n−3} = −i ⋆(∂ω)_prim (see (24)) and from Λ_ω((¯∂ω)_prim) = 0 (due to any ω-primitive form lying in the kernel of Λ_ω). □

An immediate consequence of Corollary 5.2 is the following

Proposition 5.3 Let X be a compact complex manifold with dim_C X = n ≥ 3 and let ω be a Hermitian metric on X.
If ω is a critical point for the functional L defined in 3.1-(ii), then ω is lcK.

Proof. If ω is a critical point for L, then (d_ω L)(γ) = 0 for any real (1,1)-form γ on X. Taking γ = ω and using (25), we get (¯∂ω)_prim = 0. By (ii) of Lemma 2.2, this is equivalent to ω being lcK. □

The converse follows trivially from what we already know. Indeed, if ω is an lcK metric, L(ω) = 0 (by Lemma 3.2), so L achieves its minimum at ω since L ≥ 0. Any minimum is, of course, a critical point.

6 Normalised energy functionals when dim_C X ≥ 3

We start with the immediate observation that the functional introduced in (i) of Definition 3.1 in the case of compact complex surfaces is scaling-invariant, so it does not need normalising.

Proposition 6.1 Let S be a compact complex surface. The functional

L : H_S → [0, +∞),   L(ω) = ∫_S ∂θ^{1,0}_ω ∧ ¯∂θ^{0,1}_ω,

has the property: L(λω) = L(ω) for every constant λ > 0 and every Hermitian metric ω on S.

Proof. Recall (cf. (2)) that θ^{1,0}_ω = Λ_ω(∂ω) and θ^{0,1}_ω = Λ_ω(¯∂ω). On the other hand, for any constant λ > 0 and any form α of any bidegree (p, q), we have Λ_{λω} α = (1/λ) Λ_ω α, as can be checked right away.
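The scaling behaviour of Λ_ω just invoked can indeed be verified in a few lines; a sketch (using that the pointwise inner product induced by λω on k-forms is λ^{−k} ⟨·,·⟩_ω and that Λ is the adjoint of multiplication by the metric form) reads:

```latex
% For a k-form \alpha and a (k-2)-form \beta:
\langle \Lambda_{\lambda\omega}\alpha,\ \beta\rangle_{\lambda\omega}
  = \langle \alpha,\ \lambda\omega\wedge\beta\rangle_{\lambda\omega}
  = \lambda^{-k}\,\langle \alpha,\ \lambda\omega\wedge\beta\rangle_{\omega}
  = \lambda^{1-k}\,\langle \Lambda_{\omega}\alpha,\ \beta\rangle_{\omega}
  = \lambda^{-1}\,\langle \Lambda_{\omega}\alpha,\ \beta\rangle_{\lambda\omega},
```

the last step using ⟨·,·⟩_{λω} = λ^{−(k−2)} ⟨·,·⟩_ω on (k−2)-forms; hence Λ_{λω} = λ^{−1} Λ_ω.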
Therefore, θ^{1,0}_{λω} = θ^{1,0}_ω and θ^{0,1}_{λω} = θ^{0,1}_ω for every constant λ > 0. The contention follows. □

By contrast, the functional L : H_X → [0, +∞) introduced in (ii) of Definition 3.1 in the case of compact complex manifolds X with dim_C X = n ≥ 3 is not scaling-invariant. Indeed, it follows at once from its definition that

L(λω) = λ^{n−1} L(ω)   (26)

for every constant λ > 0 and every Hermitian metric ω on X. This homogeneity property of L can be used to derive a short proof of the main property of L that was deduced in §5 from the result of the computation of the first variation of L, namely from Theorem 5.1.

Proposition 6.2 (Proposition 5.3 revisited) Let X be a compact complex manifold with dim_C X = n ≥ 3 and let ω be a Hermitian metric on X. The following equivalence holds: ω is a critical point for the functional L defined in 3.1-(ii) if and only if ω is lcK.

Proof. Suppose ω is a critical point for L. This means that (d_ω L)(γ) = 0 for every real (1,1)-form γ on X. Taking γ = ω, we get the first equality below:

0 = (d_ω L)(ω) = d/dt|_{t=0} L(ω + tω) = d/dt|_{t=0} ( (1 + t)^{n−1} L(ω) ) = (n − 1) L(ω).

Thus, whenever ω is a critical point for L, L(ω) = 0. This last fact is equivalent to the metric ω being lcK thanks to Lemma 3.2. Conversely, if ω is lcK, it is a minimum point for L, hence also a critical point, because L(ω) = 0 by Lemma 3.2. □

On the other hand, recall the following by now standard

Observation 6.3 Let ω be a Hermitian metric on a complex manifold X with dim_C X = n ≥ 2. If ω is both lcK and balanced, ω is Kähler.

Proof. The Lefschetz decomposition of dω spells dω = (dω)_prim + ω ∧ θ, where (dω)_prim is an ω-primitive 3-form and θ is a 1-form on X. We saw in Lemma 2.2 that ω is lcK if and only if (dω)_prim = 0. On the other hand, the following equivalences hold:

ω is balanced ⟺ dω^{n−1} = 0 ⟺ ω^{n−2} ∧ dω = 0 ⟺ dω is ω-primitive ⟺ dω = (dω)_prim.
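The middle steps of this chain rest on two standard facts, which can be spelled out as follows (the Leibniz rule for d, and the criterion that a k-form α is ω-primitive if and only if ω^{n−k+1} ∧ α = 0):

```latex
% Balancedness in terms of d\omega, via the Leibniz rule:
d\,\omega^{n-1} = (n-1)\,\omega^{n-2}\wedge d\omega
  \quad\Longrightarrow\quad
  \big(d\,\omega^{n-1}=0 \iff \omega^{n-2}\wedge d\omega = 0\big).

% Primitivity criterion applied to the 3-form d\omega (take k=3):
\omega^{\,n-3+1}\wedge d\omega \;=\; \omega^{\,n-2}\wedge d\omega \;=\; 0
  \quad\iff\quad d\omega \ \text{is}\ \omega\text{-primitive}.
```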
We infer that, if ω is both lcK and balanced, dω = 0, so ω is Kähler. □

It is tempting to conjecture the existence of a Kähler metric in the more general situation where the lcK and balanced hypotheses are spread over possibly different metrics.

Conjecture 6.4 Let X be a compact complex manifold with dim_C X ≥ 3. If an lcK metric ω and a balanced metric ρ exist on X, there exists a Kähler metric on X.

Together with the behaviour of L under rescaling (see (26)), this conjecture suggests a natural normalisation for our functional L when n ≥ 3.

Definition 6.5 Let X be a compact complex manifold with dim_C X = n ≥ 3. Fix a Hermitian metric ρ on X. We define the ρ-dependent functional acting on the Hermitian metrics of X:

L̃_ρ : H_X → [0, +∞),   L̃_ρ(ω) := L(ω) / ( ∫_X ω ∧ ρ^{n−1} )^{n−1},   (27)

where L is the functional introduced in (ii) of Definition 3.1.

It follows from (26) that the normalised functional L̃_ρ is scaling-invariant: L̃_ρ(λω) = L̃_ρ(ω) for every constant λ > 0. Moreover, thanks to Lemma 3.2, L̃_ρ(ω) = 0 if and only if ω is an lcK metric on X. We now derive the formula for the first variation of the normalised functional L̃_ρ in terms of the similar expression for the unnormalised functional L that was computed in Theorem 5.1.

Proposition 6.6 Let X be a compact complex manifold with dim_C X = n ≥ 3. Fix a Hermitian metric ρ on X. Then, for any Hermitian metric ω and any real (1,1)-form γ on X, we have:

(d_ω L̃_ρ)(γ) = [ 1 / ( ∫_X ω ∧ ρ^{n−1} )^{n−1} ] [ (d_ω L)(γ) − (n − 1) ( ∫_X γ ∧ ρ^{n−1} / ∫_X ω ∧ ρ^{n−1} ) L(ω) ],   (28)

where (d_ω L)(γ) is given by formula (17) in Theorem 5.1.

Proof. Straightforward computations yield:

(d_ω L̃_ρ)(γ) = d/dt [ L(ω + tγ) / ( ∫_X (ω + tγ) ∧ ρ^{n−1} )^{n−1} ]_{t=0}
 = ( 1 / ( ∫_X ω ∧ ρ^{n−1} )^{n−1} ) (d_ω L)(γ) − ( 1 / ( ∫_X ω ∧ ρ^{n−1} )^{2(n−1)} ) (n − 1) ( ∫_X ω ∧ ρ^{n−1} )^{n−2} ( ∫_X γ ∧ ρ^{n−1} ) L(ω).

This is formula (28). □

A natural question is whether the critical points of any (or some) of the normalised functionals L̃_ρ are precisely the lcK metrics (if any) on X. The following result goes some way in this direction.

Corollary 6.7 Let X be a compact complex manifold with dim_C X = n ≥ 3. Fix a Hermitian metric ρ on X. Suppose a Hermitian metric ω is a critical point for L̃_ρ. Then:

(i) for every ρ-primitive real (1,1)-form γ, (d_ω L)(γ) = 0;

(ii) if the metric ρ is Gauduchon, (d_ω L)(i∂¯∂ϕ) = 0 for any real-valued C² function ϕ on X.

Proof. (i) If γ is ρ-primitive, then γ ∧ ρ^{n−1} = 0, so formula (28) reduces to

(d_ω L̃_ρ)(γ) = (d_ω L)(γ) / ( ∫_X ω ∧ ρ^{n−1} )^{n−1}.

Meanwhile, (d_ω L̃_ρ)(γ) = 0 for every real (1,1)-form γ since ω is a critical point for L̃_ρ. The contention follows.

(ii) Choose γ := ω + i∂¯∂ϕ for any function ϕ as in the statement.
We get:

0 =(a) ( ∫_X ω ∧ ρ^{n−1} )^{n−1} (d_ω L̃_ρ)(ω + i∂¯∂ϕ) =(b) (d_ω L)(ω) − (n − 1) L(ω) + (d_ω L)(i∂¯∂ϕ) =(c) (d_ω L)(i∂¯∂ϕ),

where ω being a critical point for L̃_ρ gave (a); formula (28) and the metric ρ being Gauduchon (the latter piece of information implying ∫_X i∂¯∂ϕ ∧ ρ^{n−1} = 0 thanks to the Stokes theorem) gave (b); while Corollary 5.2 gave (c). □

As in the case of surfaces, our hope is that it will be possible in the future to prove that any Hermitian metric ω on a compact complex manifold of dimension ≥ 3 that is a critical point for one (or all) of the normalised functionals L̃_ρ is actually an lcK metric.

Concluding remarks. (a) Let X be a compact complex manifold with dim_C X = n ≥ 3. Fix a Hermitian metric ρ on X and consider the set U_ρ of ρ-normalised Hermitian metrics ω on X such that ∫_X ω ∧ ρ^{n−1} = 1. By Definition 6.5, we have L̃_ρ(ω) = L(ω) for every ω ∈ U_ρ. Moreover, since L̃_ρ is scaling-invariant, it is completely determined by its restriction to U_ρ. Let

c_ρ := inf_{ω ∈ H_X} L̃_ρ(ω) = inf_{ω ∈ U_ρ} L̃_ρ(ω) = inf_{ω ∈ U_ρ} L(ω) ≥ 0.

For every ε > 0, there exists a Hermitian metric ω_ε ∈ U_ρ such that c_ρ ≤ L(ω_ε) < c_ρ + ε. Since U_ρ is a relatively compact subset of the space of positive (1,1)-currents equipped with the weak topology of currents, there exist a subsequence ε_k ↓ 0 and a positive (see e.g. the terminology of [Dem97, III-1.B.]) (1,1)-current T_ρ ≥ 0 on X such that the sequence (ω_{ε_k})_k converges weakly to T_ρ as k → +∞. By construction, we have ∫_X T_ρ ∧ ρ^{n−1} = 1.
The possible failure of the current T_ρ ≥ 0 to be either a C^∞ form or strictly positive (for example in the sense that it is bounded below by a positive multiple of a Hermitian metric on X) constitutes an obstruction to the existence of minimisers for the functional L̃_ρ. If it eventually turns out that the critical points of L̃_ρ, if any, are precisely the lcK metrics of X, if any, they will further coincide with the minimisers of L̃_ρ. In that case, the currents T_ρ will provide obstructions to the existence of lcK metrics on X.

(b) The same discussion as in the above (a) can be had on a compact complex surface S using the (already scaling-invariant) functional L introduced in (i) of Definition 3.1, if one can prove that its critical points coincide with the lcK metrics on S.

References

[AD15] V. Apostolov, G. Dloussky — Locally Conformally Symplectic Structures on Compact Non-Kähler Complex Surfaces — Int. Math. Res. Notices, no. 9 (2016), 2717-2747.

[DP22] S. Dinew, D. Popovici — A Variational Approach to SKT and Balanced Metrics — arXiv:2209.12813v1.

[Dem84] J.-P. Demailly — Sur l'identité de Bochner-Kodaira-Nakano en géométrie hermitienne — Séminaire d'analyse P. Lelong, P. Dolbeault, H. Skoda (editors) 1983/1984, Lecture Notes in Math., no. 1198, Springer Verlag (1986), 88-97.

[Dem97] J.-P. Demailly — Complex Analytic and Algebraic Geometry — http://www-fourier.ujf-grenoble.fr/~demailly/books.html

[Gau77] P. Gauduchon — Le théorème de l'excentricité nulle — C.R. Acad. Sc. Paris, Série A, t. 285 (1977), 387-390.

[Ist19] N. Istrati — Existence Criteria for Special Locally Conformally Kähler Metrics — Ann. Mat. Pura Appl. 198 (2) (2019), 335-353.

[OV22] L. Ornea, M. Verbitsky — Principles of Locally Conformally Kähler Geometry — arXiv:2208.07188v2.

[Oti14] A. Otiman — Currents on Locally Conformally Kähler Manifolds — Journal of Geometry and Physics, 86 (2014), 564-570.

[Mic83] M. L. Michelsohn — On the Existence of Special Metrics in Complex Geometry — Acta Math. 143 (1983), 261-295.

[PS22] O. Perdu, M. Stanciu — Vaisman Theorem for lcK Spaces — arXiv:2109.01000v3.

[Vai76] I.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NE1T4oBgHgl3EQfkwRo/content/2301.03277v1.pdf'} +page_content=' Vaisman, — On Locally Conformal Almost K¨ahler Manifolds — Israel J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NE1T4oBgHgl3EQfkwRo/content/2301.03277v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NE1T4oBgHgl3EQfkwRo/content/2301.03277v1.pdf'} +page_content=' 24 (1976) 338-351.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NE1T4oBgHgl3EQfkwRo/content/2301.03277v1.pdf'} +page_content=' [Voi02] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NE1T4oBgHgl3EQfkwRo/content/2301.03277v1.pdf'} +page_content=' Voisin — Hodge Theory and Complex Algebraic Geometry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NE1T4oBgHgl3EQfkwRo/content/2301.03277v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NE1T4oBgHgl3EQfkwRo/content/2301.03277v1.pdf'} +page_content=' — Cambridge Studies in Advanced Mathematics, 76, Cambridge University Press, Cambridge, 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NE1T4oBgHgl3EQfkwRo/content/2301.03277v1.pdf'} +page_content=' Universit´e Paul Sabatier, Institut de Math´ematiques de Toulouse 118, route de Narbonne, 31062, Toulouse Cedex 9, France Email: popovici@math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NE1T4oBgHgl3EQfkwRo/content/2301.03277v1.pdf'} +page_content='univ-toulouse.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NE1T4oBgHgl3EQfkwRo/content/2301.03277v1.pdf'} +page_content='fr and Soheil.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0NE1T4oBgHgl3EQfkwRo/content/2301.03277v1.pdf'} +page_content='Erfan@math.' 
diff --git a/0tFLT4oBgHgl3EQfpC-v/content/tmp_files/2301.12134v1.pdf.txt b/0tFLT4oBgHgl3EQfpC-v/content/tmp_files/2301.12134v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..0764289da0b40987492aa7981de1432ccbe7374e
--- /dev/null
+++ b/0tFLT4oBgHgl3EQfpC-v/content/tmp_files/2301.12134v1.pdf.txt
@@ -0,0 +1,553 @@
Underwater Robotics Semantic Parser Assistant
Jake Imyak (imyak.1@osu.edu), Parth Parekh (parekh.86@osu.edu), Cedric McGuire (mcguire.389@osu.edu)

Abstract
Semantic parsing is a means of taking natural language and putting it in a form that a computer can understand. A multitude of approaches take natural language utterances and form them into lambda calculus expressions - mathematical functions that describe logic. Here, we experiment with a sequence-to-sequence model that takes natural language utterances and converts them to lambda calculus expressions, which can then be parsed and placed in an XML format usable by a finite state machine. Experimental results show that a high-accuracy model can bridge the gap between technical and nontechnical individuals in the robotics field.

1 Credits
Jake Imyak was responsible for the creation of the 1250 dataset terms and finding the RNN encoder/decoder model. This took 48 hours. Cedric McGuire was responsible for the handling of the output logical form via the implementation of the Tokenizer and Parser. This took 44 hours.
Parth Parekh assembled the Python structure for the behavior tree as well as created the actions on the robot. This took 40 hours. All group members were responsible for the research, weekly meetings, presentation preparation, and the paper. In the paper, each group member was responsible for explaining their respective responsibilities, with a collaborative effort on the abstract, credits, introduction, discussion, and references. A huge thanks to our professor, Dr. Huan Sun, for being such a great guide through the world of Natural Language Processing.

2 Introduction
Robotics is a hard field to master. It is one of the few fields which is truly interdisciplinary, which leads to engineers with many different backgrounds working on one product. There are domains within this product that engineers from another subfield may not be able to work with, so some engineers cannot interact with the product properly without supervision.

We aim to create an interface for those engineers on the Underwater Robotics Team (UWRT). Some members of UWRT specialize in fields other than software engineering and are not able to create logic for the robot on their own. As a result, certain members of the team are required to be around when pool testing the robot. This project aims to reduce or remove that barrier to creating logic for the robot. The project can also be applied to other robots very easily, as all of the main concepts are generalized and only require the robots to implement the actions that are used to train the project.

3 Robotics Background
3.1 Usage of Natural Language in Robotics
Robots are difficult to produce logic for. One big problem that most robotics teams have is having non-technical members produce logical forms for the robot to understand. Those who do not code are not able to manually create logic quickly.
3.2 Finite State Machines
One logical form that is common in the robotics space is the Finite State Machine (FSM). FSMs are popular because they allow a representation to be completely general while encoding the logic directly into the logical form. This means things such as control flow, fallback states, and sequences can be encoded directly into the logical form itself. As illustrated in Figure 1, we can easily encode logic into this representation. Since it is easily generified, an FSM can be used across any robot which implements the commands contained within it.
arXiv:2301.12134v1 [cs.CL] 28 Jan 2023

Figure 1: An FSM represented in BehaviorTree.CPP (Faconti, 2020)

3.3 Underwater Robotics Team Robot
Since 2016, the Underwater Robotics Team (UWRT) at The Ohio State University has iterated on the foundations of a single Autonomous Underwater Vehicle (AUV) each year to compete at the RoboSub competition. Breaking from tradition, the team decided to take the 2019-2021 school years to design and build a new vehicle to compete in the 2021 competition. Featuring an entirely new hull design, refactored software, and an improved electrical system, UWRT has created its brand-new vehicle, Tempest (Parekh, 2021).

3.3.1 Vehicle
Tempest is a 6 Degree of Freedom (DOF) AUV with vectored thrusters for linear axis motion and direct-drive heave thrusters. This allows the robot to achieve any orientation in all 6 degrees of freedom [X, Y, Z, Roll, Pitch, Yaw].

Figure 2: A render of Tempest

3.3.2 Vehicle Experience
With this vehicle, the team has focused on creating a fully fleshed-out experience. This includes commanding and controlling the vehicle. One big focus of the team was to make sure that any member, technical or non-technical, was able to manage and operate the robot successfully.
3.3.3 Task Code System
A step toward fulfilling this focus was to change the vehicle's task code system to use the FSM representation. This is done through the library BehaviorTree.CPP (Faconti, 2020). This generic FSM representation allows Tempest to use generified logical forms that can be applied to any robotic plant, as long as that plant implements those commands. The library also creates and maintains a Graphical User Interface (GUI) which allows for visual tracking and creation of FSM trees. Any tree created by the GUI is stored within an XML file to preserve the tree structure. The structure of the output XML syntax is explained within the parser section.

4 Data
A dataset was created to map natural language utterances to lambda calculus expressions that a parser would be able to recognize and convert to a finite state machine. For reference, the following datasets were considered: the Geoquery set (Zettlemoyer, 2012) and the General Purpose Service Robotics commands set (Walker, 2019). The Geoquery dataset provided a foundation for a grammar to follow for the lambda calculus expressions, such that consistency would hold for our parser. Moreover, the gpsr dataset provided an ample amount of examples of different general-purpose robotics commands that could be extended within the dataset we curated.

The dataset takes the following form: a natural language utterance, followed by a tab, then a lambda calculus expression. The lambda calculus expression is of the form ( seq ( action0 ( $0 ( parameter ) ) ) ... ( actionN ( $N ( parameter ) ) ) ). The power of this expression is that it can be extended to N actions in a given sequence, meaning that a user can hypothetically type in a very complex string of actions and an expression will be constructed for said sequence.
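As an illustration, a training pair in this tab-separated format might look like the following sketch (the utterances and parameter values here are invented for illustration, not verbatim dataset entries):

```python
# Hypothetical utterance / logical-form pairs in the
# "utterance<TAB>lambda expression" format described above.
examples = [
    ("say hello and then find the gate",
     "( seq ( say ( $0 ( hello ) ) ) ( find ( $1 ( gate ) ) ) )"),
    ("find the gate",
     "( seq ( find ( $0 ( gate ) ) ) )"),
]

def to_dataset_lines(pairs):
    """Render pairs as tab-separated training-file lines."""
    return [f"{utterance}\t{logical_form}" for utterance, logical_form in pairs]

for line in to_dataset_lines(examples):
    print(line)
```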
Moreover, the format of our dataset allows it to be extended with any type of robotics command that a user may have; one need only include examples of the new action in the train set, and the model will consider it.

The formal grammar is:
< seq > : ( seq ( action ) [ ( action ) ] )
< action > : actionName [ ( parameter ) ]
< parameter > : paramName λ ( $n ( n ) )

The dataset we created has 1000 entries in the training set and 250 entries in the test set. The size of the vocabulary is |V| = 171 for the input text and |V| = 46 for the output text, which is similar in vocabulary size to the Geoquery dataset. The expressions currently increase in complexity in terms of the number of actions within the sequence. A way to extend the complexity of the expressions would be to make the < seq > tag a nonterminal, chaining together nested sequences. The actions within our dataset are currently as follows: move (params: x, y, z, roll, pitch, yaw), flatten (params: num), say (params: words), clean (params: obj), bring (params: val), find (params: val), goal, and gate. The most complex sequence is a string of seven subsequent actions.

5 Model
5.1 Seq2Seq Model
We decided to use the model presented in "Language to Logical Form with Neural Attention" (Dong, 2016). An implementation on GitHub (AvikDelta, 2018) utilizing Google's TensorFlow library handles all implementation details of the model. The part of the paper that we used was the sequence-to-sequence model with an attention mechanism.
Figure 3: Process of how input natural language is encoded and decoded via recurrent neural networks and an attention mechanism to find the utterance's respective logical form. (Dong and Lapata, 2016)

The model interprets both the input and output of the network as sequences of information. This process is represented in Figure 3: input is passed to the encoder, then passed through the decoder, and, using the attention mechanism, we obtain an output that is a lambda calculus expression. Both of these sequences can be represented as L-layer recurrent neural networks with long short-term memory (LSTM) units that take in the tokens from the sentences and the expressions we have. The model creates 200 units (a value that can be changed to increase or decrease the size of the network) of both LSTM cells and GRU cells. The GRU cells are used to help compensate for the vanishing gradient problem. These LSTM and GRU cells are used in the input sequence to encode x1, ..., xq into vectors. These vectors then form the hidden state at the beginning of the sequence in the decoder. In the decoder, the topmost LSTM cell predicts the t-th output token by taking the softmax of the parameter matrix and the vector from the LSTM cell, multiplied by a one-hot vector used to compute the probability of the output from the probability distribution. The softmax used here is sampled softmax, which only takes into account a subset of our vocabulary V rather than everything, to help alleviate the difficulty of computing the softmax over a large vocabulary.

5.2 Attention Mechanism
The model also implements an attention mechanism to help with the predicted values. The motivation behind the attention mechanism is to use the input sequence in the decoding process, since it is relevant information for the prediction of the output token.
To achieve this, a context vector is created which is the weighted sum of the hidden vectors in the encoder. This context vector is then used as context to find the probability of generating a given output.

5.3 Training
To train the model, the objective is to maximize the likelihood of predicting the correct logical form given some natural language expression. Hence, the goal is to minimize the sum of the negative log probability of predicting the logical form given the natural language utterance q, summed over all training pairs. The model uses the RMSProp algorithm, which is an extension of the Adagrad optimizer that utilizes learning rate adaptation. Dropout is also used for regularization, which helps with a smaller dataset to prevent overfitting. We performed 90 epochs.

5.4 Inference
To perform inference, we find the argmax of the probability of a candidate output given the natural language utterance. Since it is not possible to find the probability of all possible outputs, the probability is put in a form such that a beam search can be employed to generate each individual token of the lambda calculus expression and obtain the appropriate output.

6 Results
With the default parameters set, the sequence-to-sequence model achieved 86.7% accuracy for exact matches on the test dataset. This is consistent with the model's performance on the Geoquery dataset, where it achieves 83.9% accuracy. The provided test dataset contained 250 entries of utterances similar to the train dataset, of various complexities, ranging anywhere from one to six actions being performed. There are other methods of evaluation we would like to look into in the future, such as computing an F1 score rather than relying solely on exact logical form matching.
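Exact-match evaluation of this kind reduces to string equality between predicted and gold logical forms; a minimal sketch (the function and variable names are ours, not from the released code):

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predicted logical forms that exactly equal the gold form."""
    assert len(predictions) == len(references)
    hits = sum(pred == gold for pred, gold in zip(predictions, references))
    return hits / len(references)

# One exact match out of two candidate predictions.
preds = ["( seq ( gate ) )", "( seq ( say ( $0 ( hi ) ) ) )"]
golds = ["( seq ( gate ) )", "( seq ( say ( $0 ( hello ) ) ) )"]
print(exact_match_accuracy(preds, golds))  # -> 0.5
```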
This accuracy for exact logical forms is important when using the parser: it allows the FSM representation to be built easily and quickly. We were able to build the XML representation and run basic commands on the robot, with the model maintaining the order we said them in.

7 Logical Form Parser
The logical form output of our model is sent to a custom parser. The goal of this parser is to translate the output form into BehaviorTree XML files, which the robot is able to read in as a finite state machine.

7.1 Tokenizer
The Tokenizer comprises the initial framework of the parser. It accepts the raw logical form as a String object and outputs a set of tokens in a Python List. These tokens are obtained by looking for separator characters (in our case, a space) in the logical form and splitting on them into an array-like structure. The Tokenizer permits custom action, parameter, and variable names in the logical form input, thus allowing ease of scalability when implementing new robot actions. By its nature, our model's output cannot be a syntactically incorrect logical form, so our implementation does not check for invalid tokens and assumes all input is correct. The Tokenizer is stored in a static Singleton class such that it can be accessed anywhere in the program once initialized. It keeps track of the current token (via getToken()) and has a method to advance to the next token, skipToken(). This functionality is important for the object-oriented approach of the parser, discussed in the next section.

7.2 Parsing Lambda Calculus Expressions
The output tokens from the Tokenizer must be interpreted into a proper Python form before they are staged to be turned into XML-formatted, robot-ready trees. This is the function of the middle step of the parser, in which a tree of Python objects is built. The parser utilizes an object-oriented approach.
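A minimal sketch of such a tokenizer follows; the getToken()/skipToken() names come from the text above, while the singleton bookkeeping and everything else are our assumptions:

```python
class Tokenizer:
    """Space-separated tokenizer over a logical form, with one shared instance."""
    _instance = None

    @classmethod
    def init(cls, logical_form):
        # (Re)initialize the single shared tokenizer for a new logical form.
        cls._instance = cls()
        cls._instance.tokens = logical_form.split()
        cls._instance.pos = 0
        return cls._instance

    @classmethod
    def get(cls):
        # Access the shared instance from anywhere in the program.
        return cls._instance

    def getToken(self):
        """Return the current token, or None when the stream is exhausted."""
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def skipToken(self):
        """Advance to the next token."""
        self.pos += 1

tok = Tokenizer.init("( seq ( say ( $0 ( hello ) ) ) )")
assert tok.getToken() == "("
tok.skipToken()
assert tok.getToken() == "seq"
assert Tokenizer.get() is tok  # the same instance is visible program-wide
```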
As such, we include three objects: Sequence, Action, and Parameter, each corresponding to an individual member of our custom grammar. The objects orient themselves into a short 3-deep tree, consisting of a Sequence root, Action children, and Parameter grandchildren. Each object has its own parse() method that will advance the tokenizer, validate the input structure, and assemble the objects into a Python structure to be staged into an XML file. The validations are enforced through our grammar definitions in Section 4.

7.2.1 Sequence Object
The Sequence object is the first object initialized by the parser, and it forms the root of our action tree. Each Sequence is composed of a list of 0 or more child actions to be executed in the order they appear. The parseSequence() method will parse each individual action using parseAction(), all the while assembling a list of child actions for this Sequence object. As of now, Sequence objects are unable to be their own children (i.e., nesting Sequences is not permitted). However, if required, the Sequence object's parseSequence() method can be modified to recognize a nested action sequence and recursively parse it.

7.2.2 Action Object
Action objects define the title of the action being performed. Similar to Sequence, Action objects have an internally stored list, though with Parameter objects as children. There may be any number of parameters, including none. When the parseAction() method is called, the program validates the tokens and will call parseParameter() on each Parameter child identified by the action.
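Putting the parse() methods together, a recursive-descent sketch over the token list might look like the following (the object shapes and error handling are simplified assumptions; the paper's classes carry more state):

```python
def expect(tokens, expected):
    """Pop the next token and verify it matches the grammar."""
    token = tokens.pop(0)
    if token != expected:
        raise ValueError(f"expected {expected!r}, got {token!r}")

def parse_parameter(tokens):
    """'( $n ( value ) )' -> {'var': ..., 'value': ...}"""
    expect(tokens, "(")
    var = tokens.pop(0)          # e.g. "$0"
    expect(tokens, "(")
    value = tokens.pop(0)
    expect(tokens, ")")
    expect(tokens, ")")
    return {"var": var, "value": value}

def parse_action(tokens):
    """'( name parameter* )' -> {'action': ..., 'params': [...]}"""
    expect(tokens, "(")
    name = tokens.pop(0)
    params = []
    while tokens[0] == "(":      # each parameter opens with '('
        params.append(parse_parameter(tokens))
    expect(tokens, ")")
    return {"action": name, "params": params}

def parse_sequence(tokens):
    """'( seq action* )' -> {'seq': [...]}"""
    expect(tokens, "(")
    expect(tokens, "seq")
    actions = []
    while tokens[0] == "(":      # each child action opens with '('
        actions.append(parse_action(tokens))
    expect(tokens, ")")
    return {"seq": actions}

tree = parse_sequence("( seq ( say ( $0 ( hello ) ) ) ( gate ) )".split())
```

The resulting dictionary tree mirrors the Sequence/Action/Parameter object tree and is straightforward to serialize into BehaviorTree XML.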
This +implementation of parameter is scalable with robot +parameters and allows any new configuration of +parameter to pass by without any changes in the +parser as a whole. If a new parameter is needed for +the robot, it only has to be trained into the Seq2Seq +model on the frontend and into the robot itself on +the backend; the Parameter object should take care +of it all the same. +7.3 +BehaviorTree Output +In the end, the parser outputs an XML file which +can be read in to BehaviorTree.CPP (Fanconti, +2020). An example of this file structure is shown +in Figure 4. +Figure 4: +A FSM that was generated from test input +through our RNN +This file structure is useful because it encodes +sequence of actions within it. The leaves of the +sequence are always in order. The tree can also +encode subtrees into the sequence which we have +not implemented yet. +8 +Discussion +8.1 +Summary +We learned that semantic parsing is excellent tool +at bridging the gap between both technical and non- +technical individuals. The power within semantic +parsing with robotics is that any human can auto- +mate any task just through using their words. Our +dataset is written in a way that just extending the +entries with another robot’s tasks that use a behav- +ior tree to perform action, that robot’s actions can +be automated as well. +8.2 +Future Plans +Future plans with this project would be to ex- +pand the logical flow that can be implemented +with BehaviorTree.CPP. As an FSM library, Behav- +iorTree.CPP implements many more helper func- +tions to create more complicated FSMs. These +include things like if statements fallback nodes, +and subtrees. This would be a valid expansion +of our RNN’s logical output and with more time, +we could support the full range of features from +BehaviorTree.CPP +We would also like to implement a front end +user interface to make this service more accessible +to anyone who was not technical. 
Right now, the only means of running our program is through the command line, which is not suitable for nontechnical individuals. Moreover, including a speech-to-text component would elevate the project, since an individual would be able to directly tell the robot what commands to perform, as they would a human.

8.3 Source Code
You can view the source code here: https://github.com/jrimyak/parse_seq2seq

References
Vinyals, O., Kaiser, L., Koo, T., Petrov, S., Sutskever, I. & Hinton, G. Grammar as a Foreign Language. (2015).
Dong, L. & Lapata, M. Language to Logical Form with Neural Attention. (2016).
Yao, Z., Tang, Y., Yih, W., Sun, H. & Su, Y. An Imitation Game for Learning Semantic Parsers from User Interaction. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). (2020).
Yao, Z., Su, Y., Sun, H. & Yih, W. Model-based Interactive Semantic Parsing: A Unified Framework and A Text-to-SQL Case Study. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). pp. 5450-5461 (2019).
Walker, N., Peng, Y. & Cakmak, M. Neural Semantic Parsing with Anonymization for Command Understanding in General-Purpose Service Robots. Lecture Notes in Computer Science. pp. 337-350 (2019).
Dukes, K. Supervised Semantic Parsing of Robotic Spatial Commands. SemEval-2014 Task 6. (2014).
Walker, N. GPSR Commands Dataset. (Zenodo, 2019). https://zenodo.org/record/3244800
Avikdelta. parse_seq2seq. GitHub Repository. (2018). https://github.com/avikdelta/parse_seq2seq
Faconti, D. BehaviorTree - Groot. GitHub Repository. (2020). https://github.com/BehaviorTree/Groot
Faconti, D. BehaviorTree.CPP. GitHub Repository. (2020). https://github.com/BehaviorTree/BehaviorTree.CPP
Hwang, W., Yim, J., Park, S. & Seo, M.
A Comprehensive Exploration on WikiSQL with Table-Aware Word Contextualization. (2019).
OSU-UWRT. Riptide Autonomy. GitHub Repository. (2021). https://github.com/osu-uwrt/riptide_autonomy
Parekh, P., et al. The Ohio State University Underwater Robotics Tempest AUV Design and Implementation. (2021). https://robonation.org/app/uploads/sites/4/2021/07/RoboSub_2021_The-Ohio-State-U_TDR-compressed.pdf
Zettlemoyer, L. & Collins, M. Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars. (2012).
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content='edu Abstract Semantic parsing is a means of taking natu- ral language and putting it in a form that a computer can understand.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' There has been a multitude of approaches that take natural lan- guage utterances and form them into lambda calculus expressions - mathematical functions to describe logic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' Here, we experiment with a sequence to sequence model to take natural language utterances, convert those to lambda calculus expressions, when can then be parsed, and place them in an XML format that can be used by a finite state machine.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' Experimental results show that we can have a high accuracy model such that we can bridge the gap between technical and nontechnical individuals in the robotics field.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' 1 Credits Jake Imyak was responsible for the creation of the 1250 dataset terms and finding the RNN en- coder/decoder model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' This took 48 Hours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' Cedric McGuire was responsible for the handling of the output logical form via the implementation of the Tokenizer and Parser.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' This took 44 Hours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' Parth Parekh assembled the Python structure for behavior tree as well as created the actions on the robot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' This took 40 Hours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' All group members were responsi- ble for the research, weekly meetings, presentation preparation, and the paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' In the paper, each group member was responsible for explaining their re- spective responsibilities with a collaborative effort on the abstract, credits, introduction, discussion, and references.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' A huge thanks to our Professor Dr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' Huan Sun for being such a great guide through the world of Natural Language Processing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' 2 Introduction Robotics is a hard field to master.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' Its one of the few fields which is truly interdisciplinary.' 
This leads to engineers with many different backgrounds working on one product. There are domains within this product that engineers from one subfield may not be able to work with, which prevents some engineers from interacting with the product properly without supervision. As already mentioned, we aim to create an interface for those engineers on the Underwater Robotics Team (UWRT). Some members of UWRT specialize in fields other than software engineering and are not able to create logic for the robot on their own, so certain team members are required to be around when pool testing the robot. This project aims to reduce or remove that component of creating logic for the robot.
This project can also be applied to other robots very easily, as all of the main concepts are generalized and only require the robots to implement the actions that are used to train the project.

3 Robotics Background

3.1 Usage of Natural Language in Robotics

Robots are difficult to produce logic for. One big problem that most robotics teams have is having non-technical members produce logical forms for the robot to understand. Those who do not code are not able to manually create logic quickly.

3.2 Finite State Machines

One logical form that is common in the robotics space is the Finite State Machine (FSM). FSMs are popular because they allow a representation to be completely general while encoding the logic directly into the logical form.
This means that control flow, fallback states, and sequences can be directly encoded into the logical form itself. As illustrated in Figure 1, we can easily encode logic into this representation. Since it is easily generified, an FSM can be used across any robot that implements the commands contained within it.

arXiv:2301.12134v1 [cs.CL] 28 Jan 2023

Figure 1: A FSM represented in BehaviorTree.CPP (Fanconti, 2020)

3.3 Underwater Robotics Team Robot

Since 2016, The Underwater Robotics Team (UWRT) at The Ohio State University has iterated on the foundations of a single Autonomous Underwater Vehicle (AUV) each year to compete at the RoboSub competition.
Breaking from tradition, the team decided to take the 2019-2021 school years to design and build a new vehicle to compete in the 2021 competition. Featuring an entirely new hull design, refactored software, and an improved electrical system, UWRT has created its brand-new vehicle, Tempest (Parekh, 2021).

3.3.1 Vehicle

Tempest is a 6 Degree of Freedom (DOF) AUV with vectored thrusters for linear axis motion and direct drive heave thrusters. This allows the robot to achieve any orientation in all 6 degrees of freedom [X, Y, Z, Roll, Pitch, Yaw].

Figure 2: A render of Tempest

3.3.2 Vehicle Experience

With this vehicle, the team has focused on creating a fully fleshed-out experience.
This includes commanding and controlling the vehicle. One big focus of the team was to make sure that any member, technical or non-technical, was able to manage and operate the robot successfully.

3.3.3 Task Code System

A step toward fulfilling this focus was to change the vehicle's task code system to use the FSM representation. This is done through the library BehaviorTree.CPP (Fanconti, 2020). This generic FSM representation allows Tempest to use generified logical forms that can be applied to ANY robotic plant, as long as that plant implements those commands. This library also creates and maintains a Graphical User Interface (GUI) which allows for visual tracking and creation of FSM trees.
Any tree created by the GUI is stored within an XML file to preserve the tree structure. The structure of the XML output syntax is explained within the parser section.

4 Data

A dataset had to be created in order to map natural language utterances to lambda calculus expressions that a parser would be able to recognize and convert to a finite state machine. For reference, the following datasets were considered: the Geoquery set (Zettlemoyer, 2012) and the General Purpose Service Robotics commands set (Walker, 2019). The Geoquery dataset provided a foundation for a grammar to follow for the lambda calculus expressions, such that consistency would hold for our parser. Moreover, the gpsr dataset provided an ample number of examples and different general purpose robotics commands that could be extended within the dataset we curated. The dataset takes the following form: a natural language utterance, followed by a tab, then a lambda calculus expression.
The lambda calculus expression is of the form ( seq ( action0 ( $0 ( parameter ) ) ) ... ( actionN ( $N ( parameter ) ) ) ). The power of this expression is that it can be extended to N actions in a given sequence, meaning that a user can hypothetically type in a very complex string of actions and an expression will be constructed for said sequence. Moreover, the format of our dataset allows it to be extended for any type of robotics command that a user may have. They just need to include examples in the train set with said action, and the model will consider it.
The formal grammar is:

< seq > : ( seq ( action ) [ ( action ) ] )
< action > : actionName [ ( parameter ) ]
< parameter > : paramName λ ( $n ( n ) )

The dataset we created had 1000 entries in the training dataset and 250 entries in the test dataset. The size of the vocabulary is |V| = 171 for the input text and |V| = 46 for the output text, which is similar in vocabulary size to the GeoQuery dataset. The expressions currently increase in complexity in terms of the number of actions within the sequence. A way to extend the complexity of the expressions would be to make the < seq > tag a nonterminal in order to chain together nested sequences. The actions within our dataset currently are as follows: move (params: x, y, z, roll, pitch, yaw), flatten (params: num), say (params: words), clean (params: obj), bring (params: val), find (params: val), goal, and gate. The most complex sequence is a string of seven subsequent actions.
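To make the expression format concrete, here is a small illustrative helper (our own sketch, not part of the paper's tooling) that assembles an N-action expression of the shape described above; the example actions and parameter values are made up:

```python
# Illustrative helper: build an N-action lambda calculus expression of the
# form ( seq ( action0 ( $0 ( parameter ) ) ) ... ), per the grammar above.
def build_expression(actions):
    parts = []
    for n, (name, param) in enumerate(actions):
        if param is None:
            # Actions like goal/gate take no parameter.
            parts.append(f"( {name} )")
        else:
            parts.append(f"( {name} ( ${n} ( {param} ) ) )")
    return "( seq " + " ".join(parts) + " )"

expr = build_expression([("move", "1 0 0 0 0 0"), ("say", "hello"), ("gate", None)])
# expr == "( seq ( move ( $0 ( 1 0 0 0 0 0 ) ) ) ( say ( $1 ( hello ) ) ) ( gate ) )"
```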
5 Model

5.1 Seq2Seq Model

We decided to use the model presented in "Language to Logical Form with Neural Attention" (Dong, 2016). There was an implementation on GitHub (AvikDelta, 2018) utilizing Google's Tensorflow library to handle all implementation details of the model. The part of the paper that we used was the sequence-to-sequence model with an attention mechanism.

Figure 3: Process of how input natural language is encoded and decoded via recurrent neural networks and an attention mechanism to find the utterance's respective logical form. (Dong and Lapata, 2016)

The model interprets both the input and output of the network as sequences of information. This process is represented in Figure 3: input is passed to the encoder, then through the decoder, and, using the attention mechanism, we obtain an output that is a lambda calculus expression.
Both of these sequences can be represented as L-layer recurrent neural networks with long short-term memory (LSTM) units that take in the tokens from the sentences and the expressions we have. The model creates 200 units (this can be changed to increase or decrease the size of the network) of both LSTM cells and GRU cells. The GRU cells are used to help compensate for the vanishing gradient problem. These LSTM and GRU cells are used in the input sequence to encode x1, ..., xq into vectors. These vectors then form the hidden state at the beginning of the sequence in the decoder. In the decoder, the topmost LSTM cell predicts the t-th output token by taking the softmax of the parameter matrix and the vector from the LSTM cell, multiplied by a one-hot vector, to compute the probability of the output from the probability distribution.
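As an illustrative sketch of that final prediction step: the sizes below match the output vocabulary (46 tokens) and unit count (200) mentioned in this paper, but the weight matrix and hidden state are random stand-ins, not the trained model.

```python
import math
import random

def softmax(zs):
    # Numerically stable softmax over a list of scores.
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

# Illustrative sizes: 46 output tokens, 200 hidden units.
vocab_size, hidden_size = 46, 200
random.seed(0)

# Output parameter matrix and the topmost LSTM hidden state at step t (random stand-ins).
W_o = [[random.gauss(0, 0.1) for _ in range(hidden_size)] for _ in range(vocab_size)]
h_t = [random.gauss(0, 1.0) for _ in range(hidden_size)]

# Score each output token as a dot product, then normalize into a distribution.
scores = [sum(w * h for w, h in zip(row, h_t)) for row in W_o]
p_t = softmax(scores)
predicted_token = max(range(vocab_size), key=lambda i: p_t[i])
```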
The softmax used here is a sampled softmax, which only takes into account a subset of our vocabulary V rather than the whole vocabulary, to help alleviate the difficulty of computing the softmax over a large vocabulary.

5.2 Attention Mechanism

The model also implements an attention mechanism to help with the predicted values. The motivation behind the attention mechanism is to use the input sequence in the decoding process, since it is relevant information for the prediction of the output token. To achieve this, a context vector is created as the weighted sum of the hidden vectors in the encoder. This context vector is then used as context to find the probability of generating a given output.

5.3 Training

To train the model, the objective is to maximize the likelihood of predicting the correct logical form given some natural language expression.
Hence, the goal is to minimize the negative log probability of predicting logical form a given natural language utterance q, summed over all training pairs. The model used the RMSProp algorithm, which is an extension of the Adagrad optimizer that adds learning rate adaptation. Dropout is also used for regularization, which helps with a smaller dataset to prevent overfitting. We performed 90 epochs.

5.4 Inference

To perform inference, the argmax is found of the probability of a candidate output given the natural language utterance.
Since it is not possible to find the probability of all possible outputs, the probability is put in a form such that a beam search can be employed to generate each individual token of the lambda calculus expression and obtain the appropriate output.

6 Results

With the default parameters set, the sequence-to-sequence model achieved 86.7% accuracy for exact matches on the test dataset. This is consistent with the model's performance on the Geoquery dataset, where it achieves 83.9% accuracy. The test dataset provided contained 250 entries of utterances similar to the train dataset, of various complexities ranging anywhere from one to six actions being performed. There are other methods of evaluation we would like to look into in the future, such as computing an F1 score rather than relying solely on exact logical form matching. This accuracy for exact logical forms is really important when using the parser.
It allows the FSM representation to be easily and quickly built. We were able to build the XML representation and run basic commands on the robot, with the model maintaining the order we said them in.

7 Logical Form Parser

The logical form output of our model is sent to a custom parser. The goal of this parser is to translate the output form into BehaviorTree XML files, which the robot is able to read in as a finite state machine.

7.1 Tokenizer

The Tokenizer comprises the initial framework of the parser. It accepts the raw logical form as a String object and outputs a set of tokens in a Python List. These tokens are obtained by looking for separator characters (in our case, a space) present in the logical form and splitting them into an array-like structure.
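A minimal sketch of such a tokenizer is below; the getToken()/skipToken() names follow the paper's description, while the bodies are our own guess:

```python
class Tokenizer:
    """Splits a raw logical form into tokens and tracks a cursor over them."""

    def __init__(self, logical_form):
        # Split on the separator character (a space) into an array-like structure.
        self.tokens = logical_form.split(" ")
        self.pos = 0

    def getToken(self):
        # Return the current token without consuming it.
        return self.tokens[self.pos]

    def skipToken(self):
        # Advance to the next token.
        self.pos += 1

tok = Tokenizer("( seq ( move ( $0 ( 1 ) ) ) )")
first = tok.getToken()   # "("
tok.skipToken()
second = tok.getToken()  # "seq"
```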
The Tokenizer permits custom action, parameter, and variable names in the logical form input, thus allowing ease of scalability in implementing new robot actions. Our model, by the nature of its output, is not able to generate syntactically incorrect logical forms, so our implementation does not check for invalid tokens and assumes all input is correct. The Tokenizer is stored in a static Singleton class such that it can be accessed anywhere in the program once initialized. It keeps track of the current token (using getToken()) and has an implementation to move forward to the next token, skipToken(). This functionality is important for the object-oriented approach of the parser, discussed in the next section.

7.2 Parsing Lambda Calculus Expressions

The output tokens from the Tokenizer must be interpreted into a proper Python form before they are staged to be turned into XML-formatted robot-ready trees.
This is the function of the middle step of the parser, in which a tree of Python objects is built. The parser utilizes an object-oriented approach. As such, we include three objects: Sequence, Action, and Parameter, each corresponding to an individual member of our custom grammar. The objects orient themselves into a short 3-deep tree, consisting of a Sequence root, Action children, and Parameter grand-children. Each object has its own parse() method that will advance the tokenizer, validate the input structure, and assemble the objects into a Python structure to be staged into an XML file. The validations are enforced through our grammar definitions in Section 4.
7.2.1 Sequence Object

The Sequence object is the first object initialized by the parser, and forms the root of our action tree. Each Sequence is composed of a list of zero or more child actions to be executed in the order they appear. The parseSequence() method will parse each individual action using parseAction(), all the while assembling a list of child actions for this Sequence object. As of now, Sequence objects are unable to be their own children (i.e. nesting Sequences is not permitted). However, if required, the Sequence object's parseSequence() method can be modified to recognize a nested action sequence and recursively parse it.
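The parse step can be sketched as follows. This is a compact illustration of the grammar from Section 4 using plain functions rather than the paper's Sequence/Action/Parameter classes; the bodies are our own guesses, not the authors' code:

```python
def parse_expression(tokens):
    """Parse a tokenized ( seq ( action ... ) ... ) form into a nested Python structure."""
    pos = 0

    def expect(tok):
        nonlocal pos
        assert tokens[pos] == tok, f"expected {tok!r}, got {tokens[pos]!r}"
        pos += 1

    def parse_action():
        nonlocal pos
        expect("(")
        name = tokens[pos]; pos += 1
        params = []
        while tokens[pos] == "(":        # optional parameter groups
            expect("(")
            var = tokens[pos]; pos += 1  # e.g. "$0"
            expect("(")
            val = tokens[pos]; pos += 1
            expect(")")
            expect(")")
            params.append((var, val))
        expect(")")
        return {"action": name, "params": params}

    expect("(")
    expect("seq")
    actions = []
    while tokens[pos] == "(":
        actions.append(parse_action())
    expect(")")
    return {"seq": actions}

tree = parse_expression("( seq ( move ( $0 ( 5 ) ) ) ( say ( $1 ( hello ) ) ) )".split())
```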
7.2.2 Action Object

Action objects define the title of the action being performed. Similar to Sequence, Action objects have an internally stored list, but with Parameter objects as children. There may be any number of parameters, including none. When the parseAction() method is called, the program validates the tokens and calls parseParameter() on each Parameter child identified by the action.

7.2.3 Parameter Object

The Parameter object is a simple object that stores a parameter's name and value. The parser does not check what the name of the parameter is, nor does it place any restrictions on what the value can be. parseParameter() searches through the tokens for these two items and stores them as attributes of the Parameter object.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' This implementation of parameter is scalable with robot parameters and allows any new configuration of parameter to pass by without any changes in the parser as a whole.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' If a new parameter is needed for the robot, it only has to be trained into the Seq2Seq model on the frontend and into the robot itself on the backend;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' the Parameter object should take care of it all the same.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content='3 BehaviorTree Output In the end, the parser outputs an XML file which can be read in to BehaviorTree.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content='CPP (Fanconti, 2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' An example of this file structure is shown in Figure 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' Figure 4: A FSM that was generated from test input through our RNN This file structure is useful because it encodes sequence of actions within it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'} +page_content=' The leaves of the sequence are always in order.' 
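Since Figure 4 itself is not reproduced here, the sketch below shows the general shape of such a BehaviorTree.CPP tree-description file; the action IDs and parameters are hypothetical placeholders, not taken from Figure 4:

```xml
<!-- Hypothetical sketch of the parser's output: a single Sequence
     whose child actions run in the order they appear.  Action IDs
     and parameter attributes are illustrative placeholders. -->
<root main_tree_to_execute="MainTree">
  <BehaviorTree ID="MainTree">
    <Sequence>
      <Action ID="Move" x="1.0" y="2.0"/>
      <Action ID="Say" message="arrived"/>
    </Sequence>
  </BehaviorTree>
</root>
```

Each leaf would correspond to one Action object, with its attributes carrying the Parameter name/value pairs collected by parseParameter().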
The tree can also encode subtrees into the sequence, which we have not implemented yet.

8 Discussion
8.1 Summary
We learned that semantic parsing is an excellent tool for bridging the gap between technical and non-technical individuals. The power of semantic parsing for robotics is that any human can automate any task just by using their words. Our dataset is written in such a way that, by simply extending its entries with another robot's tasks that use a behavior tree to perform actions, that robot's actions can be automated as well.

8.2 Future Plans
Future plans for this project would be to expand the logical flow that can be implemented with BehaviorTree.CPP. As an FSM library, BehaviorTree.CPP implements many more helper functions to create more complicated FSMs. These include features such as if statements, fallback nodes, and subtrees. This would be a valid expansion of our RNN's logical output, and with more time we could support the full range of features from BehaviorTree.CPP. We would also like to implement a front-end user interface to make this service more accessible to anyone who is not technical. Right now, the only means of running our program is through the command line, which is not suitable for nontechnical individuals. Moreover, including a speech-to-text component would elevate the project, since an individual would be able to directly tell a robot what commands to perform, much as they would a human.

8.3 Source Code
You can view the source code here: https://github.com/jrimyak/parse_seq2seq

References
Vinyals, O., Kaiser, L., Koo, T., Petrov, S., Sutskever, I. & Hinton, G. Grammar as a Foreign Language. (2015)
Dong, L. & Lapata, M. Language to Logical Form with Neural Attention. (2016)
Yao, Z., Tang, Y., Yih, W., Sun, H. & Su, Y. An Imitation Game for Learning Semantic Parsers from User Interaction. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). (2020)
Yao, Z., Su, Y., Sun, H. & Yih, W. Model-based Interactive Semantic Parsing: A Unified Framework and A Text-to-SQL Case Study. Proceedings of EMNLP-IJCNLP. pp. 5450-5461 (2019)
Walker, N., Peng, Y. & Cakmak, M. Neural Semantic Parsing with Anonymization for Command Understanding in General-Purpose Service Robots. Lecture Notes in Computer Science. pp. 337-350 (2019)
Dukes, K. Supervised Semantic Parsing of Robotic Spatial Commands. SemEval-2014 Task 6. (2014)
Walker, N. GPSR Commands Dataset. (Zenodo, 2019), https://zenodo.org/record/3244800
avikdelta. parse_seq2seq. GitHub Repository. (2018), https://github.com/avikdelta/parse_seq2seq
Faconti, D. BehaviorTree - Groot. GitHub Repository. (2020), https://github.com/BehaviorTree/Groot
Faconti, D. BehaviorTree.CPP. GitHub Repository. (2020), https://github.com/BehaviorTree/BehaviorTree.CPP
Hwang, W., Yim, J., Park, S. & Seo, M. A Comprehensive Exploration on WikiSQL with Table-Aware Word Contextualization. (2019)
OSU-UWRT. Riptide Autonomy. GitHub Repository. (2021), https://github.com/osu-uwrt/riptide_autonomy
Parekh, P., et al. The Ohio State University Underwater Robotics Tempest AUV Design and Implementation. (2021), https://robonation.org/app/uploads/sites/4/2021/07/RoboSub_2021_The-Ohio-State-U_TDR-compressed.pdf
Zettlemoyer, L. & Collins, M. Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars. (2012)

diff --git a/39E1T4oBgHgl3EQfAgJ2/content/tmp_files/2301.02840v1.pdf.txt b/39E1T4oBgHgl3EQfAgJ2/content/tmp_files/2301.02840v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..29bdaa014fb388c49b23226b3ce2c23e9ad1a39e
--- /dev/null
+++ b/39E1T4oBgHgl3EQfAgJ2/content/tmp_files/2301.02840v1.pdf.txt
@@ -0,0 +1,1755 @@
arXiv:2301.02840v1 [cs.NI] 7 Jan 2023
Network Slicing: Market Mechanism and Competitive
Equilibria
Panagiotis Promponas and Leandros Tassiulas
Department of Electrical Engineering and Institute for Network Science, Yale University, USA
{panagiotis.promponas, leandros.tassiulas}@yale.edu

Abstract—Towards addressing spectral scarcity and enhancing resource utilization in 5G networks, network slicing is a promising technology for establishing end-to-end virtual networks without requiring additional infrastructure investments. By leveraging Software Defined Networks (SDN) and Network Function Virtualization (NFV), slices can be realized that are completely isolated and dedicated to satisfying the users' diverse Quality of Service (QoS) prerequisites and Service Level Agreements (SLAs). This paper focuses on the technical and economic challenges that emerge from the application of the network slicing architecture to real-world scenarios. We consider a market where multiple Network Providers (NPs) own the physical infrastructure and offer their resources to multiple Service Providers (SPs). The SPs, in turn, offer those resources as slices to their associated users. We propose a holistic iterative model for the network slicing market, along with a clock auction that converges to a robust ε-competitive equilibrium. At the end of each cycle of the market, the slices are reconfigured and the SPs aim to learn the private parameters of their users. Numerical results are provided that validate and evaluate the convergence of the clock auction and the capability of the proposed market architecture to express the incentives of the different entities of the system.

Index Terms—Network Slicing, Mechanism Design, Network Economics, Bayesian Inference

I. INTRODUCTION
The ascending trend in the volume of data traffic, as well as the vast number of connected devices, puts pressure on the industry to enhance resource utilization in 5G wireless networks.
With the advent of 5G networks and the Internet of Things (IoT), researchers aim at a technological transformation that simultaneously improves throughput, extends network coverage, and augments the users' quality of service without wasting valuable resources. Despite the significant advances brought by enhanced network architectures and technologies, spectral scarcity will still impede the realization of the full potential of 5G technology.

In future 5G networks, verticals need distinct network services, as they may differ in their Quality of Service (QoS) requirements, Service Level Agreements (SLAs), and key performance indicators (KPIs). Such a need highlights the inefficiency of previous architectures, which were based on a "one network fits all" paradigm. In this direction, network slicing is a promising technology that enables the transition from a one-size-fits-all to a one-size-per-service abstraction [1], customized for the distinct use cases of a contemporary 5G network.

(This paper appeared in INFOCOM 2023. The research work was supported by the Office of Naval Research under project numbers N00014-19-1-2566 and N00173-21-1-G006, and by the National Science Foundation under project number CNS-2128530.)

Using Software Defined Networks (SDN) and Network Function Virtualization (NFV), these slices are associated with completely isolated resources that can be tailored on demand to satisfy the diverse QoS prerequisites and SLAs. Resource allocation in network slicing plays a pivotal role in load balancing, resource utilization, and networking performance [2]. Nevertheless, such a resource allocation model faces various challenges in terms of isolation, customization, and end-to-end coordination, which involves not only the core but also the Radio Access Network (RAN) [3].

In a typical network slicing scenario, multiple Network Providers (NPs) own the physical infrastructure and offer their resources to multiple Service Providers (SPs). Possible services of the SPs include e-commerce, video, gaming, virtual reality, wearable smart devices, and other IoT devices. The SPs offer their resources as completely isolated slices to their associated users. Such a system therefore contains three types of actors that interact with each other and compete for the same resources, whether monetary or networking. This paper focuses on the technical and economic challenges that emerge from the application of this architecture to real-world scenarios.

A. Related Work
User Satisfaction & Sigmoid Functions: Network applications can be separated into elastic (e.g. email, text file transfer) and inelastic (e.g. audio/video telephony, video conferencing, tele-medicine) [4]. Utilities for elastic applications are modeled as concave functions that increase with the resources with diminishing returns [4]. The utility function for inelastic traffic, on the other hand, is modeled as a non-concave, usually sigmoid, function. Such non-concavities impose challenges for the optimization of a network, but they fit the 5G era, where services may differ in their QoS requirements [5]. In that direction, multiple works in the literature employ sigmoid utility functions for the network users [5]–[13]. Nevertheless, all of these works consider either a single SP, modeling the interaction between its users, or multiple SPs that compete for a fixed amount of resources (e.g. bandwidth).

Network Slicing in 5G Networks: Network slicing introduces various challenges to resource allocation in 5G networks in terms of isolation, customization, elasticity, and end-to-end coordination [2]. Most surveys on network slicing investigate its multiple business models motivated by 5G, the fundamental architecture of a slice, and the state-of-the-art algorithms of network slicing [2], [14], [15]. Microeconomic theories such as non-cooperative games and mechanism design arise as natural tools to model the trading of network infrastructure and radio resources that takes place in network slicing [9], [16]–[18].

Mechanism Design in Network Slicing: Multiple auction mechanisms have been used to identify the business model of a network slicing market (see the survey in [16]). Contrary to our work, the majority of the literature considers a single-sided auction, a model that assumes that a single NP owns the whole infrastructure of the market [9], [18]–[22]. For example, [9] considers a Vickrey–Clarke–Groves (VCG) auction-based model where the NP plays the role of an auctioneer and distributes discrete physical resource blocks. We find [3] and [17] to be closest to our work, since their authors employ the double-sided auction introduced by [23] to maximize the social welfare of a system with multiple NPs. Contrary to our work, the auction proposed in [23] assumes concave utility functions for the different actors and requires the computation of their gradients for its convergence. These assumptions might lead to an over-simplification of a more complex networking architecture (e.g. that of the network slicing model), where the utility function of a user with inelastic traffic is expressed as a sigmoid function [9] and that of an SP as an optimization problem [3].

B. Contributions
Our work develops an iterative market model for the network slicing architecture, where multiple NPs with heterogeneous Radio Access Technologies (RATs) own the physical infrastructure and offer their resources to multiple SPs. The latter offer the resources as slices to their associated users.
Specifically, we propose a five-step iterative model for the network slicing market that converges to a robust ε-competitive equilibrium even when the utility functions of the different actors are non-concave. In every cycle of the proposed model, the slices are reconfigured and the SPs learn the private parameters of their associated end-users, making the equilibrium of the next cycle more efficient. The introduced market model can be seen as a framework that suits a variety of networking problems in which three types of actors are involved: those who own the physical infrastructure, those who lease part of it to sell services, and those who enjoy the services (e.g. data offloading [23]).

For the interaction between the SPs and the NPs, and for the convergence of the market to an equilibrium, we propose an iterative clock auction. Such dynamic auctions are used in the literature to auction divisible goods [24], [25]. The key differentiating aspects of the proposed auction are (i) the relaxation of the common assumptions that the utility functions are concave and that their gradients can be computed analytically, (ii) its highly usable price discovery, and (iii) its double-sided form, which makes it appropriate for a market with multiple NPs. Numerical results are provided that validate and evaluate the convergence of the clock auction and the capability of the proposed market architecture to express the incentives of the different entities of the system.

II. MARKET MODEL & INCENTIVES
In this section we describe the different entities of the network slicing market and their conflicting incentives.

A. Market Model
A typical slicing system model [2], [3], [14], [15] consists of multiple SPs, represented by M = {1, 2, ..., M}, and multiple NPs that own RANs of possibly different RATs, represented by a set K = {1, 2, ..., K}. Each SP owns a slice with a predetermined amount of isolated resources (e.g., bandwidth) and is associated with a set of users, U_m, that it serves through its slices. For the rest of the paper, and without loss of generality, we assume that each NP owns exactly one RAN, so we use the terms RAN and NP interchangeably.

1) Network Providers: The multiple NPs of the system can quantify their radio resources as the performance level of the same network metric (e.g., downlink throughput) [3]. Let x_{(m,k)} denote the amount of resources NP k allocates to SP m, and let the vector x_m := (x_{(m,k)})_{k∈K} denote the amount of resources SP m gets from every NP. Without loss of generality [3], a capacity C_k limits the amount of resources that can be offered by NP k, i.e., Σ_{m=1}^{M} x_{(m,k)} ≤ C_k. Let C = (C_k)_{k∈K}. For the rest of the paper, we assume that there is a constant cost related to the operation and management overheads induced on the NP. The main goal of every NP k is to maximize its profits by adjusting the price per unit of resources, denoted by c_k.

2) Service Providers & Associated Users: The main goal of an SP is to purchase resources from one or multiple NPs in order to maximize its profit, which depends on its associated users' satisfaction. The connectivity of a user i ∈ U_m is described by a vector β_i = (β_{(k,i)})_{k∈K}, where β_{(k,i)} is a non-negative number representing factors such as the link quality, i.e., a number in (0, 1] that depends on the path loss. Moreover, each user i of SP m is associated with a service class, c(i), depending on its preferences. We denote the set of possible service classes of SP m as C_m = {C^m_1, ..., C^m_{c_m}}, so that c(i) ∈ C_m for all i ∈ U_m. Each SP m tries to distribute the resources purchased from the NPs, i.e., x_m, so as to maximize its profit. This process, referred to as intra-slice resource allocation, is described in detail in Section II-B.
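As a concrete illustration of this notation, the sketch below (with invented numbers, K = 2 NPs and M = 3 SPs; none of it comes from the paper) checks the capacity constraint Σ_{m=1}^{M} x_{(m,k)} ≤ C_k and computes an NP's revenue at the announced prices c_k:

```python
# Illustrative market state: K = 2 NPs, M = 3 SPs (numbers invented).
C = [10.0, 8.0]                      # capacity C_k of each NP k
c = [1.5, 2.0]                       # price c_k per unit of resources
x = [[4.0, 2.0],                     # x[m][k]: resources NP k allocates to SP m
     [3.0, 3.0],
     [2.0, 3.0]]

def feasible(x, C):
    """Check sum_m x_{(m,k)} <= C_k for every NP k."""
    return all(sum(row[k] for row in x) <= C[k] for k in range(len(C)))

def np_revenue(x, c, k):
    """Revenue of NP k at prices c: c_k * sum_m x_{(m,k)}."""
    return c[k] * sum(row[k] for row in x)

print(feasible(x, C))        # True: 9 <= 10 and 8 <= 8
print(np_revenue(x, c, 0))   # 13.5
```

Under this view, each NP's price adjustment amounts to updating c[k] and re-evaluating its revenue at the demand the new price induces.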
Throughout the paper, we assume that the number of users of every SP m, i.e., |U_m|, is much greater than the number of SPs, which is in turn much greater than the number of NPs in the market. This assumption is made often in the mechanism design literature and is sufficient to ensure that the end-users and the SPs have limited information about the market [23], [26]. The latter lets us treat them as price-takers. In the following section, we describe in detail the intra-slice resource allocation problem from the perspective of an SP that tries to maximize the satisfaction of its associated users.

B. Intra-Slice Resource Allocation
The problem of intra-slice resource allocation concerns the distribution of the resources, x_m, from the SP m to its associated users. Specifically, every SP m allocates a portion of x_{(m,k)} to its associated user i, denoted r_{(k,i)}. Let r_i := (r_{(k,i)})_{k∈K} and r_m := (r_i)_{i∈U_m}. For ease of notation, the resources r_i of a user i ∈ U_m, as well as the connectivities β_i, are not indexed by m, because i is assumed to be a unique identifier for the user. Although every user i is assigned r_{(k,i)} resources from RAN k, because of its connectivity β_i the aggregated amount of resources it receives is z_i := β_i^T r_i. Moreover, let z_m := (z_i)_{i∈U_m}. In a feasible intra-slice allocation it should hold that x_m ⪰ Σ_{i∈U_m} r_i for each SP m.

Every SP should distribute the obtained resources among its users so as to maximize their satisfaction. Towards providing intuition behind the employment of sigmoidal functions in the literature to model user satisfaction (e.g. see [5]–[12]), note that, making the same assumption as logistic regression, we model the logit¹ of the probability that a user is satisfied as a linear function of the resources.
Hence, the probability that user i is satisfied with the amount of resources z_i, say P[QoS sat_i], satisfies

log( P[QoS sat_i] / (1 − P[QoS sat_i]) ) = t^z_{c(i)} (z_i − k_{c(i)}),

and thus:

P[QoS sat_i] = e^{t^z_{c(i)} (z_i − k_{c(i)})} / (1 + e^{t^z_{c(i)} (z_i − k_{c(i)})}),   (1)

where k_{c(i)} ≥ 0 denotes the prerequisite amount of resources of user i and t^z_{c(i)} ≥ 0 expresses how "tight" this prerequisite is. Note that the probability of a user being satisfied, viewed as a function of z_i, is a sigmoid with inflection point k_{c(i)}. We assume that a user's service class fully determines its private parameters; hence every user i of class c(i) has QoS prerequisite k_{c(i)} and sensitivity parameter t^z_{c(i)}. These parameters are unknown to the users themselves, which makes the SP's goal of eventually learning them challenging (Section III-C).

Given the previous analysis, the aggregated satisfaction of the users of SP m is u_m(r_m) := Σ_{i∈U_m} u_i(r_i) [7], [10], where

u_i(r_i) := e^{t^z_{c(i)} (β_i^T r_i − k_{c(i)})} / (1 + e^{t^z_{c(i)} (β_i^T r_i − k_{c(i)})}).   (2)

Note that u_i(·) can equivalently be expressed as a function of z_i; with a slight abuse of notation, we switch between the two by changing the input variable. The final optimization problem for the intra-slice allocation of SP m is:

(IN-SL):   max_{r_m}   u_m(r_m)
           s.t.   r_i ⪰ 0,   ∀i ∈ U_m
                  x_m ⪰ Σ_{i∈U_m} r_i

In case the amount of resources obtained from every NP, x_m, is not given, SP m can optimize it jointly with the intra-slice resource allocation. Hence, SP m can solve the following problem:

(P):   max_{r_m, x_m}   Ψ_m(r_m, x_m) := u_m(r_m) − c^T x_m
       s.t.   r_i ⪰ 0,   ∀i ∈ U_m
              x_m ⪰ Σ_{i∈U_m} r_i

Recall that c_k denotes the price per unit of resources announced by every NP k. In Problem P, the objective function Ψ_m can be thought of as the profit of SP m. Let the solution of the above problem be ψ*_m.

¹The logit function is defined as logit(p) = log(p/(1−p)).
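To make Eqs. (1)–(2) concrete, and to see why Problem IN-SL is genuinely non-concave, consider the following sketch. The parameters t^z = 0.1 and k = 100 are invented (chosen to mirror the k_{c(·)} = 100 used in Fig. 1), with a single NP and β_i = 1 so that z_i = r_i. When the total budget falls below the users' combined prerequisites, giving everything to one user beats an even split, a behavior no concave utility could produce:

```python
import math

def qos_sat(z, t=0.1, k=100.0):
    """Eq. (1): P[QoS sat] = e^{t(z-k)} / (1 + e^{t(z-k)}),
    a sigmoid in z with inflection point k (illustrative t, k)."""
    return 1.0 / (1.0 + math.exp(-t * (z - k)))

# Two identical users, one NP (beta = 1), total budget x = 100 < 2k.
x = 100.0
u_split = 2 * qos_sat(x / 2)          # even split: z_i = 50 each
u_focus = qos_sat(x) + qos_sat(0.0)   # everything to user 1

print(round(u_split, 4), round(u_focus, 4))  # 0.0134 0.5
assert u_focus > u_split                      # concentration wins under scarcity
```

This is exactly the structure that makes IN-SL NP-hard in general and motivates the concavification discussed below.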
Problems IN-SL and P are maximization problems of a summation of sigmoid functions over a linear set of constraints. In [27] the problem of maximizing a sum of sigmoid functions over a convex constraint set is addressed. That work shows that the problem is generally NP-hard and proposes an approximation algorithm, using a branch-and-bound method, to find an approximate solution to the sigmoid programming problem.

In the rest of the section, we study three variations of Problem P. Specifically, in Section II-B1 we study the case where the end-users are charged to get the resources from the SPs, and in Sections II-B2 and II-B3 we regularize and concavify P respectively, which will facilitate the analysis in the rest of the paper.

1) Price Mechanism in P: In this subsection we argue that Problem P is expressive enough to capture the case where every user i is charged for its assigned resources. Let pi be the amount of money that user i should pay to receive the zi resources. In that case, the SPs should modify Problems IN-SL and P accordingly. First, note that user i's satisfaction may also depend on pi. Similarly to the previous section, we can express the satisfaction of user i with respect to the price pi using a sigmoid function as P[price sat_i] = 1 / (1 + e^{t^p_c(i)(pi − b_c(i))}), where b_c(i) ≥ 0 is the budget of the user i for the prerequisite resources k_c(i), and t^p_c(i) ≥ 0 expresses how "tight" this budget is. We can now model the acceptance probability function [7] as P[sat_i] = P[price sat_i] P[QoS sat_i], and hence the expected total revenue, or the new utility of SP m, u'm, is modeled as

    u'm(rm, pm) := Σ_{i∈Um} P[sat_i] pi.   (3)

From Eq. (3), it is possible for SP m to immediately determine the optimal price p̂i to ask from any user i ∈ Um. This follows from the fact that for positive pi the function admits a unique critical point, p̂i.
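The unique critical point of p · P[price sat](p) can be located numerically. A minimal sketch, assuming a hypothetical class with budget b = 100 and tightness t^p = 0.2: setting the derivative of p · σ(p) to zero gives the first-order condition 1 = p · t^p · (1 − σ(p)), whose left-hand side minus right-hand side is decreasing in p, so bisection applies.

```python
import math

def price_sat_prob(p, b, tp):
    """P[price sat] = 1 / (1 + exp(tp * (p - b))), evaluated stably."""
    x = tp * (p - b)
    if x > 0:
        e = math.exp(-x)
        return e / (1.0 + e)
    return 1.0 / (1.0 + math.exp(x))

def optimal_price(b, tp, lo=0.0, hi=1e4, iters=100):
    """Maximizer of p * P[price sat](p) over p > 0, by bisection on the
    first-order condition 1 - p * tp * (1 - sigma(p)) = 0 (decreasing in p)."""
    def foc(p):
        return 1.0 - p * tp * (1.0 - price_sat_prob(p, b, tp))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if foc(mid) > 0.0:
            lo = mid   # still on the increasing side of the revenue curve
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical class: budget b = 100, price tightness tp = 0.2.
p_hat = optimal_price(100.0, 0.2)
```

For these values the optimal ask lands somewhat below the budget b, trading a slightly lower acceptance probability for a higher payment.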
Therefore, by just adding proper coefficients to the terms of Problems IN-SL and P, we can embed a pricing mechanism for the end-users in the model. For the rest of the paper, without loss of generality in our model, we assume that the end-users are not charged for the obtained resources.

2) Regularization of P: We can regularize Problem P with a small positive λm. In that manner, we encourage dense solutions and hence avoid situations where a problem in one RAN completely disrupts the operation of the SP.

    (P̄):  max_{rm,xm}  Ψm(rm, xm) − λm ∥xm∥₂²
          s.t.  ri ⪰ 0, ∀i ∈ Um
                xm ⪰ Σ_{i∈Um} ri

In the regularized problem P̄, note that larger values of λm penalize the vectors xm with greater L2 norms. Let the solution of Problem P̄ be ψ̄*m. The lemma below shows that for small λm, the optimal values ψ̄*m and ψ*m are close. Its proof is simple and thus omitted for brevity.

Lemma 1. Let (r*m, x*m) and (r̄*m, x̄*m) be solutions of Problems P and P̄ respectively. Then,

    ψ*m − λm ∥x*m∥₂² ≤ ψ̄*m ≤ ψ*m − λm ∥x̄*m∥₂².

Lemma 1 proves that the regularization of P is (almost) without loss of optimality. In the next section, we proceed by concavifying Problem P̄. The new concavified problem will be a fundamental building block of the auction analysis in Section III-A.

3) Concavification of P̄: To concavify P̄, we replace every summand of um with its tightest concave envelope, i.e., the pointwise infimum over all concave functions that are greater or equal. For the sigmoid function ui(zi) the concave envelope, ûi(zi), has a closed form given by

    ûi(zi) = ui(0) + [(ui(w) − ui(0)) / w] · zi,  for 0 ≤ zi ≤ w,
    ûi(zi) = ui(zi),                               for w ≤ zi,

for some w > ki which can be found easily by bisection [27]. Fig. 1 depicts the concavification of the aforementioned sigmoid functions for k_c(·) = 100 and three different values of t^z_c(·).
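The breakpoint w of the closed-form envelope above can be sketched as follows: w is the point where the chord from (0, u(0)) becomes tangent to the sigmoid, i.e., where the chord slope equals u'(w), and that condition is single-crossing for w > k, so bisection finds it. Parameter values (k = 100, t = 0.2) are the ones used for Fig. 1 and are otherwise illustrative.

```python
import math

def sigmoid_u(z, k, t):
    """Sigmoid utility of Eq. (1)/(2), evaluated in a numerically stable way."""
    x = t * (z - k)
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

def envelope_breakpoint(k, t, w_hi=1e6, iters=100):
    """Bisection for w > k where the chord slope (u(w)-u(0))/w equals the
    tangent slope u'(w) = t*u(w)*(1-u(w)), as prescribed in [27]."""
    u0 = sigmoid_u(0.0, k, t)
    def gap(w):  # chord slope minus tangent slope; negative below w*, positive above
        s = sigmoid_u(w, k, t)
        return (s - u0) / w - t * s * (1.0 - s)
    lo, hi = k, w_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if gap(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def concave_envelope(z, k, t):
    """Closed-form concave envelope u-hat: linear up to w, the sigmoid after."""
    w = envelope_breakpoint(k, t)
    u0 = sigmoid_u(0.0, k, t)
    if z <= w:
        return u0 + (sigmoid_u(w, k, t) - u0) / w * z
    return sigmoid_u(z, k, t)
```

For k = 100 and t = 0.2 the tangency point lands at w ≈ 115, i.e., slightly above the prerequisite, and the envelope coincides with the sigmoid from there on.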
Note that for the lowest t^z_c(·) (elastic traffic) we get the best approximation, whilst for the largest (inelastic traffic / tight QoS prerequisites) we get the worst.

To exploit the closed form of the envelope ûi(zi), instead of problem P̄ we will concavify the equivalent problem:

    (P̃):  max_{rm,xm,zm}  Σ_{i∈Um} fi(ri, zi) − c^T xm − λm ∥xm∥₂²
          s.t.  (ri, zi) ∈ Si, ∀i ∈ Um
                xm ⪰ Σ_{i∈Um} ri

where Si := {(ri, zi) : ri ⪰ 0, zi = βi^T ri} and fi(ri, zi) := ui(zi) with domain Si. The following lemma uses the concave envelope of the sigmoid function ui(zi) to compute the concave envelope of fi(ri, zi) and hence the concavification of the problem P̃. Its proof is based on the definition of the concave envelope and is omitted for brevity.

Lemma 2. The concave envelope of the function fi(ri, zi) := e^{t^z_c(i)(zi − k_c(i))} / (1 + e^{t^z_c(i)(zi − k_c(i))}) with domain Si, f̂i(ri, zi), has the following closed form (with domain Si):

    f̂i(ri, zi) = ûi(zi),  ∀(ri, zi) ∈ Si.

Therefore, SP m can concavify P̃ as follows:

    (P̂):  max_{rm,xm,zm}  Σ_{i∈Um} f̂i(ri, zi) − c^T xm − λm ∥xm∥₂²
          s.t.  (ri, zi) ∈ Si, ∀i ∈ Um
                xm ⪰ Σ_{i∈Um} ri

Note that P̂ is strongly concave and thus admits a unique maximizer. Let the solution and the optimal point of problem P̂ be ψ̂*m and (x̂*m, r̂*m) respectively. Ultimately, we would like to compare the solution of the concavified P̂ with that of the original problem P. Towards that direction, we first define the nonconcavity of a function as follows [28]:

Definition 1 (Nonconcavity of a function). We define the nonconcavity ρ(f) of a function f : S → R with domain S to be

    ρ(f) = sup_x ( f̂(x) − f(x) ).

Let F denote a set of possibly non-concave functions. Then define ρ[j](F) to be the jth largest of the nonconcavities of the functions in F.
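Definition 1 can be checked numerically without the closed form: on a grid, the upper concave envelope of a sampled function is its upper convex hull, and ρ(f) is the largest envelope-to-function gap. The sketch below (illustrative parameters matching Fig. 1) confirms that the inelastic class t = 2 has a larger nonconcavity than the elastic class t = 0.2.

```python
import math

def sigmoid_u(z, k, t):
    """Sigmoid utility, evaluated in a numerically stable way."""
    x = t * (z - k)
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

def nonconcavity(f, lo, hi, n=2001):
    """Estimate rho(f) = sup (fhat - f) on [lo, hi]: build the upper concave
    envelope of samples of f (upper convex hull), then take the largest gap."""
    xs = [lo + (hi - lo) * j / (n - 1) for j in range(n)]
    ys = [f(x) for x in xs]
    hull = []  # indices of upper-hull vertices; slopes decrease left to right
    for j in range(n):
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            # cross >= 0 means vertex a lies on/below the chord o--j: drop it
            if (xs[a] - xs[o]) * (ys[j] - ys[o]) - (ys[a] - ys[o]) * (xs[j] - xs[o]) >= 0:
                hull.pop()
            else:
                break
        hull.append(j)
    rho, seg = 0.0, 0
    for j in range(n):  # interpolate the hull to evaluate the envelope at xs[j]
        while seg + 1 < len(hull) and hull[seg + 1] < j:
            seg += 1
        a, b = hull[seg], hull[min(seg + 1, len(hull) - 1)]
        env = ys[a] if a == b else ys[a] + (xs[j] - xs[a]) / (xs[b] - xs[a]) * (ys[b] - ys[a])
        rho = max(rho, env - ys[j])
    return rho

# Hypothetical classes from Fig. 1: k = 100, elastic (t = 0.2) vs inelastic (t = 2).
rho_elastic = nonconcavity(lambda z: sigmoid_u(z, 100.0, 0.2), 0.0, 300.0)
rho_inelastic = nonconcavity(lambda z: sigmoid_u(z, 100.0, 2.0), 0.0, 300.0)
```

Both values are below 1 (consistent with ǫ ≤ K in Remark 2), and the tighter prerequisite yields the larger nonconcavity, mirroring the elastic/inelastic comparison in the text.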
The theorem below summarizes the main result of this section, which is that every SP can solve the concavified P̂ instead of the original P, since the former provides a constant-bound approximation of the latter. Recall that Ψm(r̂*m, x̂*m) is the profit of SP m evaluated at the solution of P̂ and that K is the number of the NPs.

Theorem 1. Let (r*m, x*m) and (r̄*m, x̄*m) be solutions of Problems P and P̄ respectively. Moreover, let F̂ := {ui}_{i∈Um}. Then,

    ψ*m − ǫ − δ1(λm) ≤ Ψm(r̂*m, x̂*m) ≤ ψ*m + δ2(λm),

where δ1(λm) := λm(∥x*m∥₂² − ∥x̂*m∥₂²), δ2(λm) := λm(∥x̂*m∥₂² − ∥x̄*m∥₂²) and ǫ = Σ_{j=1}^{K} ρ[j](F̂).

Proof: Note that ψ̄*m is also given by solving P̃, and that (r̂*m, x̂*m), with the corresponding optimal value ψ̂*m, are given by solving P̂. Therefore, from [28, Th. 1], we have that

    ψ̄*m − Σ_{j=1}^{K} ρ[j](F̂) ≤ um(r̂*m) − c^T x̂*m − λm ∥x̂*m∥₂² ≤ ψ̄*m.

The result follows from Lemma 1.

Remark 1. The values of δ1 and δ2 decrease as λm decreases, and hence for small regularization penalties they can get arbitrarily close to zero.

Remark 2. The approximation error ǫ depends on the K greatest nonconcavities of the set {ui}_{i∈Um}. There are two conditions that ensure negligible approximation error, i.e., ǫ << ψ*m: i) the end-users have concave utility functions (in that case ǫ → 0), or ii) the market is profitable enough for every SP m and hence ψ*m >> K. Condition ii) makes the error negligible since ǫ ≤ K, and it can be satisfied for example when the supply of the market, C, is sufficiently large.

Fig.
1: Concave envelopes of sigmoid utility functions with k_c(·) = 100 and (a) t^z_c(·) = 0.02, (b) t^z_c(·) = 0.2 and (c) t^z_c(·) = 2.

Theorem 1 implies that every SP can solve Problem P̂, which is a concave program with a unique solution, to find an approximate solution to P. This observation fosters the convergence analysis of the proposed auction in Section III-A.

III. NETWORK SLICING MARKET CYCLE

In this section, we study the evolution of the network slicing market using an iterative model that consists of 5-step cycles. We refer to the following sequence of steps as a market cycle:

S1. |Um| prospective users appear to every SP m.
S2. The vector xm, i.e., the distribution of the resources from the NPs to SP m, is determined for every m. To achieve that in a distributed fashion, an auction between the SPs and the NPs should be realized.
S3. Given xm, each SP m determines the vectors ri and hence the amount of resources zi for every user i ∈ Um (intra-slice resource allocation).
S4. After receiving the resources, each user i determines and reports to the SP whether the QoS received was enough to complete its application.
S5. The SPs exploit the responses of their users to estimate their private parameters and hence to distribute the resources more efficiently in the next cycle.

It is important for the vector xm to be determined before the intra-slice resource allocation, since the former serves as the capacity of the resources available to SP m. In the following, we expand upon each (non-trivial) step of the market cycle.

A. Step S2 - Clock Auction for the Network Slicing Market

In this section, we develop and analyze a clock auction between the SPs and the NPs that converges to a market equilibrium. Specifically, we describe the goal (Section III-A1), the steps (Section III-A2), and the convergence (Section III-A3) of the auction.
1) Auction Goal: Note that the solutions of the problems P and P̂ are functions of the prices c1, ..., cK. Let the demand of SP m, given the price vector c, be denoted as x*m(c) or x̂*m(c) depending on whether SP m uses Problem P or P̂ to ask for resources. Let also r*m(c) and r̂*m(c) be the corresponding optimal intra-slice resource allocation vectors. Hence, (r*m(c), x*m(c)) and (r̂*m(c), x̂*m(c)) are maximizers of P and P̂ respectively (given c). Since Problem P may admit multiple solutions, let the set Dm(c) be defined as

    Dm(c) := { x*m : ∃ r*m such that Ψm(r*m, x*m) = ψ*m given c }.

We define a competitive equilibrium as follows:

Definition 2 (Competitive equilibrium). A competitive equilibrium of the Network Slicing Market is defined to be any price vector c† and allocation of the resources of the NPs x†, such that:
i. x†m ∈ Dm(c†) for every SP m, and
ii. C = Σ_{m∈M} x†m (the demand equals the supply).

Note that in a competitive equilibrium, every SP m gets resources that could maximize its profit given the price vector. Because a competitive equilibrium sets a balance between the interests of all participants, it appears to be the settling point of markets in economic analysis [26], [29]. Nevertheless, since the SPs' demands are expressed by solving a non-concave program, we define an ǫ-competitive equilibrium, which will be the ultimate goal of the proposed clock auction.

Definition 3 (ǫ-Competitive equilibrium). An ǫ-competitive equilibrium of the Network Slicing Market is defined to be any price vector ĉ† and allocation of the resources of the NPs x̂†, such that:
i. For every SP m, there exists an ǫ ≥ 0 and a feasible intra-slice resource allocation vector r̂†m (given x̂†m), such that: ψ*m − ǫ ≤ Ψm(r̂†m, x̂†m) ≤ ψ*m + ǫ, and
ii. C = Σ_{m∈M} x̂†m (the demand equals the supply).
Observe that the first condition of the above definition ensures that every SP is satisfied (up to a constant) with the obtained resources, in the sense that it operates close to its maximum possible profit. From Theorem 1, note that if there exists a price vector ĉ† such that C = Σ_{m∈M} x̂*m(ĉ†), then the prices in ĉ† with the allocation x̂† := x̂*(ĉ†) form an ǫ-competitive equilibrium. Finding such a price vector is the motivation of the proposed clock auction. For the rest of the paper we make the following assumption:

Assumption 1. The SPs calculate their demand and intra-slice resource allocation by solving Problem P̂.

This is a reasonable assumption since in Theorem 1 and the corresponding Remarks 1 and 2 we proved that by solving a (strictly) concave problem, every SP can operate near its optimal profit. Therefore, for the rest of the paper, we call x̂*m(c) the demand of SP m given the prices c.

2) Auction Description: We propose the following clock auction that converges to an ǫ-competitive equilibrium of the Network Slicing market (Theorem 2). As we will prove in Theorem 3, this equilibrium is robust since the convergent price vector is the unique one that clears the market, i.e., makes the demand equal the supply.

i. An auctioneer announces a price vector c, each component of which corresponds to the price at which an NP sells a unit of its resources.
ii. The bidders (SPs) report their demands.
iii. If the aggregated demand received by an NP is greater than its available supply, the price of that NP is increased, and vice versa. In other words, the auctioneer adjusts the price vector according to Walrasian tatonnement.
iv. The process repeats until the price vector converges.

Note that the components of the price vector change simultaneously and independently. Hence different brokers can cooperate to jointly clear the market efficiently in a decentralized fashion [23].
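Steps i–iv above can be simulated end to end. The sketch below uses hypothetical logarithmic SP utilities as a stand-in for the demands x̂*m(c) obtained from P̂, because log utilities give a closed-form demand a_k / c_k and a known clearing price Σ_m a_{m,k} / C_k to check against; the auctioneer raises each NP's price in proportion to the gap between aggregate demand and supply.

```python
def demand(a_m, c):
    """Stand-in demand of one SP with utility sum_k a_k*log(x_k): x_k = a_k / c_k."""
    return [a / p for a, p in zip(a_m, c)]

def clock_auction(A, C, kappa=0.05, iters=500):
    """Steps i-iv: announce prices, collect demands, move each price in the
    direction of that NP's excess demand, repeat until convergence."""
    K = len(C)
    c = [1.0] * K                      # step i: initial announced prices
    for _ in range(iters):
        agg = [0.0] * K                # step ii: aggregate reported demands
        for a_m in A:
            d = demand(a_m, c)
            for k in range(K):
                agg[k] += d[k]
        # step iii: tatonnement price update (kept strictly positive)
        c = [max(1e-6, c[k] + kappa * (agg[k] - C[k])) for k in range(K)]
    return c                           # step iv: (approximately) converged prices

A = [[3.0, 1.0], [2.0, 4.0]]  # hypothetical valuations of 2 SPs over 2 NPs
C = [10.0, 10.0]              # NP capacities
c_star = clock_auction(A, C)  # clears at c_k = sum_m a_{m,k} / C_k = 0.5 each
```

The step size kappa plays the same role as the discretization parameter κ used in the numerical results: too large and the updates overshoot, too small and convergence is slow.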
Let the excess demand, Z(c), be the difference between the aggregate demand and the supply: Z(c) = −C + Σ_{m∈M} x̂*m(c). In Walrasian tatonnement, the price vector adjusts in continuous time according to the excess demand as ċ = f(Z(c(t))), where f is a continuous, sign-preserving transformation [24]. For the rest of the paper, we set f to be the identity function and thus ċ = Z(c(t)). In auctions based on Walrasian tatonnement, the payments are only valid after the convergence of the mechanism [30].

3) Auction Convergence: Towards proving the convergence of the auction, we provide the lemma below, which proves that the concavified version of the intra-slice resource allocation problem IN-SL can be thought of as a concave function. The proof is omitted as a direct extension of [3] and [31].

Lemma 3. The function Um(xm) shown below is concave.

    Um(xm) := max_{rm,zm}  Σ_{i∈Um} f̂i(ri, zi)
              s.t.  (ri, zi) ∈ Si, ∀i ∈ Um
                    xm ⪰ Σ_{i∈Um} ri   (4)

Using the function Um, we can rewrite Problem P̂ as

    max_{xm⪰0}  Um(xm) − λm ∥xm∥₂² − c^T xm.

The following theorem studies the convergence of the auction.

Theorem 2. Starting from any price vector c_init, the proposed clock auction converges to an ǫ-competitive equilibrium.

Proof: The proof relies on a global stability argument, similarly to [24], [29]. Let Vm(·) denote m's net indirect utility function:

    Vm(c) = max_{xm⪰0} { Um(xm) − λm ∥xm∥₂² − c^T xm }.

Let a candidate Lyapunov function be V(c) := c^T C + Σ_{m∈M} Vm(c). To study the convergence of the auction we should find the time derivative of the above Lyapunov function:

    V̇(c) = ċ · ( C^T + Σ_{m∈M} d/dc [ max_{xm⪰0} { Um(xm) − λm ∥xm∥₂² − c^T xm } ] ).

Hence, we deduce that:

    V̇(c) = ( C^T + Σ_{m∈M} { −x̂*m(c)^T } ) · ċ = −Z(c(t))^T · Z(c(t)).
The above holds true since the function h(xm) := Um(xm) − λm ∥xm∥₂² has as concave conjugate the function (see [31])

    h*(c) = max_{xm⪰0} { h(xm) − c^T xm },

and hence, by Danskin's theorem, ∇h*(c) = −arg max_{xm⪰0} { Um(xm) − λm ∥xm∥₂² − c^T xm } = −x̂*m(c). Therefore, V(·) is a decreasing function of time and converges to its minimum. Note that at the convergent point the supply equals the demand for every NP.

The market might admit multiple ǫ-competitive equilibria. Nevertheless, the equilibrium point to which the clock auction converges is robust in the following sense: given Assumption 1, the price vector that clears the market is unique. Therefore, essentially, in Theorem 2 we proved that the proposed clock auction converges to that unique price vector. This is formally stated in the following theorem.

Theorem 3. There exists a unique price vector c† such that Σ_{m∈M} x̂*m(c†) = C.

Towards proving Theorem 3 we provide Lemmata 4 and 5. First, we show that if a component of the price vector changes, the demand of an SP who used to obtain resources from the corresponding NP must change as well.

Lemma 4. For two distinct price vectors c, c̄ with ∃k : ck ≠ c̄k, it holds true that

    x̂*m(c) = x̂*m(c̄) ⇒ x̂*(m,k)(c) = x̂*(m,k)(c̄) = 0.

Proof: Consider such price vectors, c̄ and c, with ck ≠ c̄k. Since x̂*m(c) is the optimal point of problem P̂ given c, applying the KKT conditions gives:

    x̂*(m,k)(c) = 0   or   ∂{ Um(xm) − λm ∥xm∥₂² } / ∂x(m,k) |_{x̂*m(c)} = ck.   (5)

However, x̂*m(c̄) is optimal for P̂ given c̄. Employing an equation similar to (5) proves that if x̂*m(c) = x̂*m(c̄), then it can only hold that x̂*(m,k)(c) = x̂*(m,k)(c̄) = 0.

Definition 4 (WARP property). The aggregate demand function satisfies the Weak Axiom of Revealed Preferences (WARP) if, for different price vectors c and c̄, it holds that:

    c^T · Σ_{m∈M} x̂*m(c̄) ≤ c^T · Σ_{m∈M} x̂*m(c)  ⇒  c̄^T · Σ_{m∈M} x̂*m(c̄) < c̄^T · Σ_{m∈M} x̂*m(c).

Lemma 5.
The aggregate demand function satisfies the WARP for distinct price vectors c, c̄ such that Σ_{m∈M} x̂*m(c) ≻ 0 and Σ_{m∈M} x̂*m(c̄) ≻ 0.

Proof: Since c ≠ c̄, there exists k ∈ K : ck ≠ c̄k. Furthermore, we have that Σ_{m∈M} x̂*m(c) ≻ 0 and hence ∃m1 ∈ M such that x̂*(m1,k)(c) > 0. Using Lemma 4 we conclude that x̂*m1(c) ≠ x̂*m1(c̄). Hence, since Problem P̂ admits a unique global maximum, we have that:

    Σ_{m∈M} [ Um(x̂*m(c)) − λm ∥x̂*m(c)∥₂² − c^T · x̂*m(c) ] > Σ_{m∈M} [ Um(x̂*m(c̄)) − λm ∥x̂*m(c̄)∥₂² − c^T · x̂*m(c̄) ].

Now, the above combined with the WARP hypothesis,

    Σ_{m∈M} c^T · x̂*m(c̄) ≤ Σ_{m∈M} c^T · x̂*m(c),

gives:

    Σ_{m∈M} [ Um(x̂*m(c)) − λm ∥x̂*m(c)∥₂² ] > Σ_{m∈M} [ Um(x̂*m(c̄)) − λm ∥x̂*m(c̄)∥₂² ].   (6)

The result follows by switching the roles of c and c̄ and combining the two inequalities.

We can now prove Theorem 3 as follows.

Proof of Theorem 3: Towards a contradiction, assume that there exist two distinct (non-zero) price vectors c and c̄ that satisfy Σ_{m∈M} x̂*m(c̄) = Σ_{m∈M} x̂*m(c) = C, and thus

    c^T · ( Σ_{m∈M} x̂*m(c̄) − Σ_{m∈M} x̂*m(c) ) = 0.   (7)

Therefore, from Lemma 5 we know that:

    c̄^T · Σ_{m∈M} x̂*m(c̄) < c̄^T · Σ_{m∈M} x̂*m(c),   (8)

which contradicts the hypothesis that the two aggregate demands are equal.

Remark 3. Theorems 2 and 3, together with Remarks 1 and 2, imply that if the users' traffic is elastic, or the total capacity C of the NPs is sufficiently large, the clock auction converges monotonically to the unique competitive equilibrium of the market.

At the end of step S2, the final price vector ĉ† and the final demands of each SP m, x̂*m, have been determined.

B. Intra-Slice Resource Allocation & Feedback (Steps S3, S4)

At the beginning of step S3, every SP m is aware of the convergent point x̂*m and hence it can allocate the resources either by solving the sigmoid program IN-SL, or by using the convergent approximate solution, r̂*m.
At that step, an SP can also determine whether it will overbook network resources. Overbooking is a common practice in the airline and hotel industries and is now being used in the network slicing problem [32], [33]. This management model allocates the same resources to multiple users of the network, expecting that not everyone uses their booked capacity. In that case, SP m solves Problem IN-SL while setting increased obtained resources, x^ov_m = x̂*m + α% ◦ x̂*m, for a relatively small positive α. Here, ◦ denotes the component-wise multiplication operator.

During step S4 of the cycle, each user i receives its resources ri and provides feedback on whether it was satisfied or not. In the next step, the SPs can use these responses to learn the private parameters of the different service classes.

C. Learning the Parameters (Step S5)

At the final step of the cycle, the SPs exploit the data they obtained to learn the private parameters of their users. In that fashion, the market "learns" its equilibrium. For the rest of the paper, for generality, we assume the pricing mechanism introduced in Section II-B1. Therefore, for every user i, the SPs get to know whether it is satisfied by the pair of resources and price (zi, pi). A Bayesian inference model needs the data, a model for the private parameters, and a prior distribution.

Model: The observed data are the outcomes of the Bernoulli variables sat_i | θ_c(i) ~ Bernoulli(P[sat_i]) for every user i, where θ_c(i) = (t^p_c(i), b_c(i), t^z_c(i), k_c(i)) is the tuple of the private parameters that we want to infer. Prior: Let the prior distribution for every parameter of θ_c(i) have probability density functions π_{t^p_c(i)}(·), π_{b_c(i)}(·), π_{t^z_c(i)}(·) and π_{k_c(i)}(·) respectively.
The SPs infer the private parameters θ_c(i) for each service class using Bayes' rule separately: p(θ_c(i) | data) ∝ Ln(data | θ_c(i)) π(θ_c(i)), where p(θ_c(i) | data) is the posterior distribution of θ_c(i), Ln(data | θ_c(i)) is the likelihood of the data given our model, and π(θ_c(i)) is the prior distribution. Assuming independent private parameters, π(θ_c(i)) is the product of the distinct prior distributions, and for each class c we have that:

    Ln(data | θ_c(i)) = Π_{i∈C^m_c} P[sat_i]^{fi} (1 − P[sat_i])^{1−fi},

where fi is 1 when user i is satisfied and 0 when not. The SPs can use Markov Chain Monte Carlo (MCMC) with Metropolis sampling to find the posterior distribution after each market cycle. As the market evolves, the SPs exploit the previous posterior distributions to find better priors for the next cycle.

IV. CENTRALIZED SOLUTION

In case there exists a centralized entity that knows the utility function of every SP, it can optimize the social welfare, i.e., the summation of the utility functions of the service and the network providers. This centralized problem can be formulated as follows:

    (SWM):  max_{rm}  Σ_{m∈M} um(rm)
            s.t.  ri ⪰ 0, ∀i ∈ Um
                  Σ_{m∈M} Σ_{i∈Um} ri ⪯ C

The SWM problem can be solved with any chosen positive approximation error using the framework of sigmoidal programming [27].

V. NUMERICAL RESULTS

A. Auction Convergence & Parameter Tuning

In this section we study the convergence of the clock auction, as well as the impact that the various parameters have on its behavior.
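The learning step S5 of Section III-C can be sketched with random-walk Metropolis sampling. The sketch below infers only the sensitivity t^z of a single class from simulated Bernoulli feedback, with the prerequisite k treated as known; the true value t = 2, k = 120, and the N(0.02, 2) prior mirror the numbers used in Section V-C, while the allocation range and chain settings are illustrative assumptions.

```python
import math, random

random.seed(0)
K_QOS = 120.0   # known QoS prerequisite k_c
T_TRUE = 2.0    # ground-truth sensitivity t_c, to be inferred

def sigmoid(x):
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

# Simulated feedback: (resources z_i, satisfied?) drawn from the model of Eq. (1).
data = []
for _ in range(200):
    z = random.uniform(110.0, 130.0)
    sat = 1 if random.random() < sigmoid(T_TRUE * (z - K_QOS)) else 0
    data.append((z, sat))

def log_posterior(t):
    """Log prior N(0.02, 2) (up to a constant, t >= 0) plus Bernoulli log-likelihood."""
    if t < 0.0:
        return -math.inf
    lp = -0.5 * ((t - 0.02) / 2.0) ** 2
    for z, sat in data:
        p = min(max(sigmoid(t * (z - K_QOS)), 1e-12), 1.0 - 1e-12)
        lp += math.log(p) if sat else math.log(1.0 - p)
    return lp

# Random-walk Metropolis: propose, then accept with prob min(1, posterior ratio).
t, samples = 1.0, []
lp = log_posterior(t)
for step in range(5000):
    prop = t + random.gauss(0.0, 0.3)
    lp_prop = log_posterior(prop)
    if math.log(random.random()) < lp_prop - lp:
        t, lp = prop, lp_prop
    if step >= 1000:                 # discard burn-in
        samples.append(t)
t_hat = sum(samples) / len(samples)  # posterior-mean estimate of t_c
```

Despite a prior centered at 0.02 (elastic traffic), a few hundred feedback bits pull the posterior mean close to the true value 2, which is the behavior Table I reports across market cycles.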
For this simulation, we assume a small market with 3 NPs with capacities C1 = 850, C2 = 750, C3 = 755, and 5 SPs with 6 users and 3 distinct service classes each. The users' private parameters are set as follows: for an i in the first class t^z_c(i) = t^p_c(i) = 0.2, k_c(i) = b_c(i) = 100; for the second class t^z_c(i) = t^p_c(i) = 2, k_c(i) = b_c(i) = 120; and for the third class t^z_c(i) = t^p_c(i) = 20, k_c(i) = b_c(i) = 150. Such values indicate that the users wish to pay a unit of monetary value for a unit of offered resources.

To discretize the auction, we change the cost vector according to a step value, κ, as c_{t+1} = c_t + κ Z(c_t). Fig. 2 depicts the L2 norm of the excess demand vector throughout the clock auction for different cost vector initializations c_init (Fig. 2a), and for different step values κ (Fig. 2b). By simulating the clock auction, we deduce that the clearing price vector is c†^T = [0.6116, 0.6273, 0.5811].

Fig. 2: L2 norm of the excess demand vector throughout the clock auction (a) for κ = 10^−4 and various initialization price vectors c_init, and (b) for c_init^T = [0.62, 0.64, 0.58] and different values of κ.

Fig. 3: Illustrating Theorem 2. Starting from any price vector c_init, the clock auction converges to the market clearing prices c†.

In Fig.
2a, note that the closer the initialization cost vector is to c†, the faster the convergence becomes. Fig. 2b indicates the need for a proper choice of the step value κ. Clearly, κ = 10^−4 gives the fastest convergence, and as we decrease the step value convergence becomes slower. Nevertheless, since Theorem 2 is proved for the continuous case, large values of κ cannot guarantee the convergence of the auction to an equilibrium. In Fig. 3, observe that the convergence of the auction does not depend on the initialization of the cost vector (Theorem 2).

Fig. 4: Total amount of resources obtained by every SP m from every NP k in the market, x(m,k).

B. Visualization of the Resource Allocation

In this section, we get insights into the allocation of the resources in the market. We assume 2 NPs with C1 = C2 = 1400 and 2 SPs with 10 users each and one shared service class with t^z_c(i) = t^p_c(i) = 0.2 and k_c(i) = b_c(i) = 100 for all i. The first SP (SP1) is near the first NP (NP1) and far from NP2, and hence we set [β(1,1), ..., β(1,10)] = [0.99, 0.96, 0.87, 0.85, 0.82, 0.81, 0.80, 0.80, 0.70, 0.70] and β(2,i) = 0.2, ∀i ∈ U1. Moreover, for the users of SP2 we set β(1,i) = β(2,i) = 0.8, ∀i ∈ U2.

We compare the resource allocation of four different methods. First, 'Auction' refers to the resource allocation that results immediately after the auction. 'SPP' takes x̂*m from the equilibrium but performs the intra-slice allocation of every SP by solving IN-SL. We also study the method 'oSPP(5%)', which mimics the SPP method but with 5% overbooked resources. Finally, 'SWM' refers to the solution of the Problem SWM.

Fig. 4 shows the amount of resources obtained by the two SPs. All methods allocate the majority of the resources of NP1 to SP1 since its users have greater connectivity with it.
Although the users of SP2 have equally high connectivity with both NPs, all four methods were flexible enough to allocate the resources of NP2 to SP2. Note that none of the methods gives resources from NP2 to SP1.

Fig. 5 depicts the intra-slice resource allocations. In Fig. 5a, observe that the greater the connectivity of a user is, the fewer resources it gets. That is because users with good connectivity factors meet their prerequisite QoS using fewer resources, and hence SP1 can maximize its expected profit by giving them less. Note that 'SPP' gives no resources to the user with the worst connectivity, whereas with overbooking SP1 gets enough resources to make attractive offers to every user. Therefore, 'SPP' might make an unfair allocation, since when the resources are not enough, it neglects the users with bad connectivity. In Fig. 5c, note that the homogeneity in the connectivities of the users of SP2 forces every method to divide the resources fairly among them.

Fig. 6a shows the expected value of the total revenue, or the social welfare. 'SWM' gives the greatest revenue among the methods that do not overbook. Nevertheless, although 'SPP' is a completely distributed solution and was not designed to maximize the total revenue, it performs very close to 'SWM'. Moreover, a 5% overbooking leads to greater revenues.

Fig. 5: The solution of the intra-slice resource allocation problem from the perspective of the two different SPs of the market.
Specifically, how (a) SP1 distributed the resources of NP1, i.e., r(1,i) for every i in U1, (b) SP2 distributed the resources of NP1, i.e., r(1,i) for every i in U2, and (c) SP2 distributed the resources of NP2, i.e., r(2,i) for every i in U2.

Fig. 6: Illustrating the expected revenue (given by Eq. (3)) for the four different resource allocation methods. Fig. (a) shows the aggregated expected revenue (Auction 1575.31, SPP 1598.16, oSPP(5%) 1677.81, SWM 1611.54), Fig. (b) shows the expected revenue of SP1 (746.85, 769.65, 827.42, 806.49), and Fig. (c) shows the expected revenue of SP2 (828.46, 828.51, 850.39, 805.06).

C. Impact of Bayesian Inference

The previous results are extracted after a sufficient number of cycles, when the SPs have learned the parameters of the end-users. In this section, we consider an SP with 10 users and one service class that employs Bayesian inference to learn the private parameter t^z_c(i) for every i. We set the true value of the parameter to be t^z_c(i) = 2. The other parameters are set t^p_c(i) = 2, k_c(i) = b_c(i) = 120 and β(1,i) = 0.9, ∀i ∈ U1. We assume one more SP with a unique service class with t^p_c(i) = t^z_c(i) = 0.2, k_c(i) = b_c(i) = 100 and β(2,i) = 0.9, ∀i ∈ U2. Finally, there are 2 NPs with C1 = C2 = 1200.

In this example, SP1 sets as prior distribution the normal N(0.02, 2) and hence assumes elastic traffic. At the end of each market cycle, the SP makes an estimate, t̂^z_c(i), by calculating the mean of the posterior distribution. Fig. 7 depicts the histogram of the posterior distribution for the first two market cycles.
Observe that even in the third market cycle, SP1 can estimate the actual value of the parameter with high accuracy. In Table I, note that the perceived revenue, i.e., the expected revenue calculated using the estimate, differs from the actual revenue in the cycles in which t̂^z_c(i) differs from t^z_c(i). Hence, it is impossible for the SPs to maximize their expected profits when they do not know the actual values of the parameters. Indeed, observe that the bad estimate t̂^z_c(i) = 0.02 gives poor expected revenue compared to the last two cycles.

Fig. 7: Posterior distribution of the unknown private parameter t^z_c(i) in (a) the first market cycle, and (b) the second market cycle.

TABLE I: Bayesian inference in different market cycles.

    Cycle | t̂^z_c(i) | Acquired Resources | Perceived Revenue | Actual Revenue
    1     | 0.02      | 1087               | 530.26            | 699
    2     | 1.68      | 1370               | 1160.77           | 1163.48
    3     | 2.01      | 1365               | 1161.42           | 1161.42

VI. CONCLUDING REMARKS

In this paper we focus on the technical and economic challenges that emerge from the application of the network slicing architecture to real-world scenarios. Taking into consideration the heterogeneity of the users' service classes, we introduce an iterative market model along with a clock auction that converges to a robust ǫ-competitive equilibrium. Finally, we propose a Bayesian inference model for the SPs to learn the private parameters of their users and make the next equilibria more efficient. Numerical results validate the convergence of the clock auction and the capability of the proposed framework to capture the different incentives.

REFERENCES

[1] Q. Zhang, F. Liu, and C. Zeng, "Adaptive interference-aware VNF placement for service-customized 5G network slices," in IEEE INFOCOM 2019 - IEEE Conference on Computer Communications. IEEE, 2019, pp. 2449–2457.
[2] R. Su, D. Zhang, R.
Venkatesan, Z. Gong, C. Li, F. Ding, F. Jiang, and Z. Zhu, “Resource allocation for network slicing in 5G telecommunication networks: A survey of principles and models,” IEEE Network, vol. 33, no. 6, pp. 172–179, 2019.
[3] Q. Qin, N. Choi, M. R. Rahman, M. Thottan, and L. Tassiulas, “Network slicing in heterogeneous software-defined RANs,” in IEEE INFOCOM 2020 - IEEE Conference on Computer Communications. IEEE, 2020, pp. 2371–2380.
[4] Q.-V. Pham and W.-J. Hwang, “Network utility maximization-based congestion control over wireless networks: A survey and potential directives,” IEEE Communications Surveys & Tutorials, vol. 19, no. 2, pp. 1173–1200, 2016.
[5] J.-W. Lee, R. R. Mazumdar, and N. B. Shroff, “Non-convex optimization and rate control for multi-class services in the Internet,” IEEE/ACM Transactions on Networking, vol. 13, no. 4, pp. 827–840, 2005.
[6] J. Liu, “A theoretical framework for solving the optimal admissions control with sigmoidal utility functions,” in 2013 International Conference on Computing, Networking and Communications (ICNC). IEEE, 2013, pp. 237–242.
[7] A. Lieto, I. Malanchini, S. Mandelli, E. Moro, and A. Capone, “Strategic network slicing management in radio access networks,” IEEE Transactions on Mobile Computing, 2020.
[8] Q. Zhu and R. Boutaba, “Nonlinear quadratic pricing for concavifiable utilities in network rate control,” in IEEE GLOBECOM 2008 - 2008 IEEE Global Telecommunications Conference. IEEE, 2008, pp. 1–6.
[9] L. Gao, P. Li, Z. Pan, N. Liu, and X. You, “Virtualization framework and VCG based resource block allocation scheme for LTE virtualization,” in 2016 IEEE 83rd Vehicular Technology Conference (VTC Spring). IEEE, 2016, pp. 1–6.
[10] G. Lee, H. Kim, Y. Cho, and S.-H. Lee, “QoE-aware scheduling for sigmoid optimization in wireless networks,” IEEE Communications Letters, vol. 18, no. 11, pp. 1995–1998, 2014.
[11] M. Hemmati, B. McCormick, and S.
Shirmohammadi, “QoE-aware bandwidth allocation for video traffic using sigmoidal programming,” IEEE MultiMedia, vol. 24, no. 4, pp. 80–90, 2017.
[12] L. Tan, Z. Zhu, F. Ge, and N. Xiong, “Utility maximization resource allocation in wireless networks: Methods and algorithms,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 45, no. 7, pp. 1018–1034, 2015.
[13] S. Papavassiliou, E. E. Tsiropoulou, P. Promponas, and P. Vamvakas, “A paradigm shift toward satisfaction, realism and efficiency in wireless networks resource sharing,” IEEE Network, vol. 35, no. 1, pp. 348–355, 2020.
[14] I. Afolabi, T. Taleb, K. Samdanis, A. Ksentini, and H. Flinck, “Network slicing and softwarization: A survey on principles, enabling technologies, and solutions,” IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 2429–2453, 2018.
[15] S. Vassilaras, L. Gkatzikis, N. Liakopoulos, I. N. Stiakogiannakis, M. Qi, L. Shi, L. Liu, M. Debbah, and G. S. Paschos, “The algorithmic aspects of network slicing,” IEEE Communications Magazine, vol. 55, no. 8, pp. 112–119, 2017.
[16] U. Habiba and E. Hossain, “Auction mechanisms for virtualization in 5G cellular networks: Basics, trends, and open challenges,” IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 2264–2293, 2018.
[17] D. Zhang, Z. Chang, F. R. Yu, X. Chen, and T. Hämäläinen, “A double auction mechanism for virtual resource allocation in SDN-based cellular network,” in 2016 IEEE 27th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC). IEEE, 2016, pp. 1–6.
[18] F. Fu and U. C. Kozat, “Wireless network virtualization as a sequential auction game,” in 2010 Proceedings IEEE INFOCOM. IEEE, 2010, pp. 1–9.
[19] H. Ahmadi, I. Macaluso, I. Gomez, L. DaSilva, and L. Doyle, “Virtualization of spatial streams for enhanced spectrum sharing,” in 2016 IEEE Global Communications Conference (GLOBECOM). IEEE, 2016, pp. 1–6.
[20] B. Cao, W.
Lang, Y. Li, Z. Chen, and H. Wang, “Power allocation in wireless network virtualization with buyer/seller and auction game,” in 2015 IEEE Global Communications Conference (GLOBECOM). IEEE, 2015, pp. 1–6.
[21] K. Zhu and E. Hossain, “Virtualization of 5G cellular networks as a hierarchical combinatorial auction,” IEEE Transactions on Mobile Computing, vol. 15, no. 10, pp. 2640–2654, 2015.
[22] K. Zhu, Z. Cheng, B. Chen, and R. Wang, “Wireless virtualization as a hierarchical combinatorial auction: An illustrative example,” in 2017 IEEE Wireless Communications and Networking Conference (WCNC). IEEE, 2017, pp. 1–6.
[23] G. Iosifidis, L. Gao, J. Huang, and L. Tassiulas, “A double-auction mechanism for mobile data-offloading markets,” IEEE/ACM Transactions on Networking, vol. 23, no. 5, pp. 1634–1647, 2014.
[24] L. M. Ausubel and P. Cramton, “Auctioning many divisible goods,” Journal of the European Economic Association, vol. 2, no. 2-3, pp. 480–493, 2004.
[25] L. M. Ausubel, P. Cramton, and P. Milgrom, “The clock-proxy auction: A practical combinatorial auction design,” Handbook of Spectrum Auction Design, pp. 120–140, 2006.
[26] S. Shen, “First fundamental theorem of welfare economics,” 2018.
[27] M. Udell and S. Boyd, “Maximizing a sum of sigmoids,” Optimization and Engineering, pp. 1–25, 2013.
[28] ——, “Bounding duality gap for separable problems with linear constraints,” Computational Optimization and Applications, vol. 64, no. 2, pp. 355–378, 2016.
[29] M. Bichler, M. Fichtl, and G. Schwarz, “Walrasian equilibria from an optimization perspective: A guide to the literature,” Naval Research Logistics (NRL), vol. 68, no. 4, pp. 496–513, 2021.
[30] C. Courcoubetis and R. Weber, Pricing Communication Networks: Economics, Technology and Modelling. John Wiley & Sons, 2003.
[31] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
[32] J. X. Salvat, L. Zanzi, A. Garcia-Saavedra, V.
Sciancalepore, and X. Costa-Perez, “Overbooking network slices through yield-driven end-to-end orchestration,” in Proceedings of the 14th International Conference on emerging Networking EXperiments and Technologies, 2018, pp. 353–365.
[33] C. Marquez, M. Gramaglia, M. Fiore, A. Banchs, and X. Costa-Perez, “Resource sharing efficiency in network slicing,” IEEE Transactions on Network and Service Management, vol. 16, no. 3, pp. 909–923, 2019.

This figure "fig1.png" is available in "png" format from:
http://arxiv.org/ps/2301.02840v1

diff --git a/39E1T4oBgHgl3EQfAgJ2/content/tmp_files/load_file.txt b/39E1T4oBgHgl3EQfAgJ2/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..385ff7c35052a649344331377a28869325df5b1d
--- /dev/null
+++ b/39E1T4oBgHgl3EQfAgJ2/content/tmp_files/load_file.txt
@@ -0,0 +1,813 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf,len=812

arXiv:2301.02840v1 [cs.NI] 7 Jan 2023

Network Slicing: Market Mechanism and Competitive Equilibria
Panagiotis Promponas and Leandros Tassiulas
Department of Electrical Engineering and Institute for Network Science, Yale University, USA
{panagiotis.promponas, leandros.tassiulas}@yale.edu

Abstract—Towards addressing spectral scarcity and enhancing resource utilization in 5G networks, network slicing is a promising technology to establish end-to-end virtual networks without requiring additional infrastructure investments. By leveraging Software Defined Networks (SDN) and Network Function Virtualization (NFV), we can realize slices completely isolated and dedicated to satisfying the users' diverse Quality of Service (QoS) prerequisites and Service Level Agreements (SLAs). This paper focuses on the technical and economic challenges that emerge from the application of the network slicing architecture to real-world scenarios. We consider a market where multiple Network Providers (NPs) own the physical infrastructure and offer their resources to multiple Service Providers (SPs). Then, the SPs offer those resources as slices to their associated users. We propose a holistic iterative model for the network slicing market along with a clock auction that converges to a robust ǫ-competitive equilibrium.
At the end of each cycle of the market, the slices are reconfigured and the SPs aim to learn the private parameters of the end-users. Numerical results are provided that validate and evaluate the convergence of the clock auction and the capability of the proposed market architecture to express the incentives of the different entities of the system.

Index Terms—Network Slicing, Mechanism Design, Network Economics, Bayesian Inference

I. INTRODUCTION

The ascending trend of the volume of data traffic, as well as the vast number of connected devices, puts pressure on the industry to enhance resource utilization in 5G wireless networks. With the advent of 5G networks and the Internet of Things (IoT), researchers aim at a technological transformation to simultaneously improve throughput, extend network coverage, and augment the users' quality of service without wasting valuable resources. Despite the significant advances brought by the enhanced network architectures and technologies, spectral scarcity will still impede the realization of the full potential of 5G technology.

In the future 5G networks, verticals need distinct network services as they may differ in their Quality of Service (QoS) requirements, Service Level Agreements (SLAs), and key performance indicators (KPIs). Such a need highlights the inefficiency of the previous architecture technologies, which were based on a "one network fits all" nature. In this direction, network slicing is a promising technology that enables the transition from one-size-fits-all to one-size-per-service abstraction [1], which is customized for the distinct use cases in a contemporary 5G network model. Using Software Defined Networks (SDN) and Network Function Virtualization (NFV), those slices are associated with completely isolated resources that can be tailored on-demand to satisfy the diverse QoS prerequisites and SLAs.

(This paper appeared in INFOCOM 2023. The research work was supported by the Office of Naval Research under project numbers N00014-19-1-2566 and N00173-21-1-G006, and by the National Science Foundation under project number CNS-2128530.)

Resource allocation in network slicing plays a pivotal role in load balancing, resource utilization, and networking performance [2]. Nevertheless, such a resource allocation model faces various challenges in terms of isolation, customization, and end-to-end coordination, which involves both the core and the Radio Access Network (RAN) [3]. In a typical network slicing scenario, multiple Network Providers (NPs) own the physical infrastructure and offer their resources to multiple Service Providers (SPs). Possible services of the SPs include e-commerce, video, gaming, virtual reality, wearable smart devices, and other IoT devices. The SPs offer their resources as completely isolated slices to their associated users. Thereby, such a system contains three types of actors that interact with each other and compete for the same resources, either monetary or networking. This paper focuses on the technical and economic challenges that emerge from the application of this architecture to real-world scenarios.

A. Related Work

User Satisfaction & Sigmoid Functions: Network applications can be separated into elastic (e.g., email, text file transfer) and inelastic (e.g., audio/video phone, video conference, tele-medicine) [4]. Utilities for elastic applications are modeled as concave functions that increase with the resources with diminishing returns [4]. On the other hand, the utility function for inelastic traffic is modeled as a non-concave, and usually sigmoid, function. Such non-concavities impose challenges for the optimization of a network, but are suitable for the 5G era where the services may differ in their QoS requirements [5]. In that direction, multiple works in the literature employ sigmoid utility functions for the network users [5]–[13]. Nevertheless, all of these works consider either one SP and model the interaction between the users, or multiple SPs that compete for a fixed amount of resources (e.g., bandwidth).

Network Slicing in 5G Networks: Network slicing introduces various challenges to the resource allocation in 5G networks in terms of isolation, customization, elasticity, and end-to-end coordination [2]. Most surveys on network slicing investigate its multiple business models motivated by 5G, the fundamental architecture of a slice, and the state-of-the-art algorithms of network slicing [2], [14], [15]. Microeconomic theories such as non-cooperative games and/or mechanism design arise as perfect tools to model the trading of network infrastructure and radio resources that takes place in network slicing [9], [16]–[18].
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Mechanism Design in Network Slicing: Multiple auction mechanisms have been used to identify the business model of a network slicing market (see a survey in [16]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Contrary to our work, the majority of the literature considers a single- sided auction, a model that assumes that a single NP owns the whole infrastructure of the market [9], [18]–[22].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' For example, [9] considers a Vickrey–Clarke–Groves (VCG) auction-based model where the NP plays the role of an auctioneer and distributes discrete physical resource blocks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' We find [3] and [17] to be closer to our work, since the authors employ the double-sided auction introduced by [23] to maximize the social welfare of a system with multiple NPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Contrary to our work, the auction proposed in [23] assumes concave utility functions for the different actors and requires the computation of their gradients for its convergence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' The aforementioned assumptions might lead to an over-simplification of a more complex networking architecture (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content='g.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' that of the network slicing model) where the utility function for a user with inelastic traffic is expressed as a sigmoid function [9] and that of an SP as an optimization problem [3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Contributions Our work develops an iterative market model for the network slicing architecture, where multiple NPs with heterogeneous Radio Access Technologies (RATs), own the physical infrastructure and offer their resources to multiple SPs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' The latter offer the resources as slices to their associated users.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Specifically, we propose a five-step iterative model for the network slicing market that converges to a robust ǫ- competitive equilibrium even when the utility functions of the different actors are non-concave.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' In every cycle of the proposed model, the slices are reconfigured and the SPs learn the private parameters of their associated end-users to make the equilibrium of the next cycle more efficient.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' The introduced market model, can be seen as a framework that suits well to various networking problems where three types of actors are involved: those who own the physical infrastructure, those who lease part of it to sell services and those who enjoy the services (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' data-offloading [23]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' For the interaction between the SPs and the NPs and for the convergence of the market to an equilibrium, we propose an iterative clock auction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Such dynamic auctions are used in the literature to auction divisible goods [24], [25].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' The key differentiating aspects of the proposed auction, are (i) the relaxation of the common assumptions that the utility functions are concave and their gradients can be analytically computed, (ii) it provides highly usable price discovery, and (iii) it is a double-sided auction, and thus appropriate for a market with multiple NPs.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Numerical results are provided that validate and evaluate the convergence of the clock auction and the capability of the proposed market architecture to express the incentives of the different entities of the system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' II.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' MARKET MODEL & INCENTIVES In this section we describe the different entities of the network slicing market and their conflicting incentives.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Market Model A typical slicing system model [2], [3], [14], [15] consists of multiple SPs represented by M = {1, 2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=', M} and multiple NPs that own RANs of possibly different RATs, represented by a set K = {1, 2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=', K}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Each SP owns a slice with a predetermined amount of isolated resources (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=', bandwidth) and is associated with a set of users, Um, that serves through its slices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' For the rest of the paper and without loss of generality we assume that each NP owns exactly one RAN, so we use the terms RAN and NP interchangeably.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 1) Network Providers: The multiple NPs of the system can quantify their radio resources as the performance level of the same network metric (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=', downlink throughput) [3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Let x(m,k) denote the amount of resources NP k allocates to SP m, and the vector xm := (x(m,k))k∈K to denote the amount of resources m gets from every NP.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Without loss of generality [3], capacity Ck limits the amount of resources that can be offered from NP k, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=', �M m=1 x(m,k) ≤ Ck.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Let C = (Ck)k∈K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' For the rest of the paper, we assume that there is a constant cost related to operation and management overheads induced to the NP.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' The main goal of every NP k is to maximize its profits by adjusting the price per unit of resources, denoted by ck.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 2) Service Providers & Associated Users: The main goal of an SP is to purchase resources from a single or multiple NPs in order to maximize its profit, which depends on its associated users’ satisfaction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' The connectivity of a user i ∈ Um is denoted by a vector βi = (β(k,i))k∈K, where β(k,i) is a non- negative number representing factors such as the link quality i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content='e.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=', numbers in (0, 1] that depend on the path loss.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Moreover, each user i of the SP m, is associated with a service class, c(i), depending on their preferences.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' We denote the set of the possible service classes of SP m as Cm = {Cm 1 , .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' , Cm cm} and thus c(i) ∈ Cm, ∀i ∈ Um.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Each SP m, is trying to distribute the resources purchased from the NPs, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=', xm, to maximize its profit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' This process, referred to as intra-slice resource allocation, is described in detail in Section II-B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Throughout the paper, we assume that the number of users of every SP m, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=', |Um|, is much greater than the number of SPs, which is much greater than the number of NPs in the market.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' This assumption is made often in the mechanism design literature and is sufficient to ensure that the end-users and the SPs have limited information of the market [23], [26].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' The latter let us consider them as price-takers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' In the following section, we describe in detail the intra-slice resource allocation problem from the perspective of an SP who tries to maximize the satisfaction of its associated users.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Intra-Slice Resource Allocation The problem of the intra-slice resource allocation concerns the distribution of the resources, xm, from the SP m to its associated users.' 
Specifically, every SP m allocates a portion of x_(m,k) to its associated user i, denoted as r_(k,i). Let r_i := (r_(k,i))_{k∈K} and r_m := (r_i)_{i∈U_m}. For ease of notation, the resources r_i of a user i ∈ U_m, as well as the connectivities β_i, are not indexed by m, because i is assumed to be a unique identifier for the user. Although every user i is assigned r_(k,i) resources from RAN k, because of its connectivity β_i the aggregated amount of resources it gets is z_i := β_i^T r_i. Moreover, let z_m := (z_i)_{i∈U_m}. In a feasible intra-slice allocation it should hold that x_m ⪰ Σ_{i∈U_m} r_i for each SP m. Every SP should distribute the obtained resources among its users so as to maximize their satisfaction. To provide intuition behind the employment of sigmoidal functions in the literature to model user satisfaction (e.g., see [5]–[12]), note that, by making the same assumption as logistic regression, we model the logit of the probability that a user is satisfied (the logit function is defined as logit(p) = log(p/(1−p))) as a linear function of the resources. Hence, the probability that user i is satisfied with the amount of resources z_i, say P[QoS sat_i], satisfies log(P[QoS sat_i] / (1 − P[QoS sat_i])) = t^z_{c(i)}(z_i − k_{c(i)}) and thus:

    P[QoS sat_i] = e^{t^z_{c(i)}(z_i − k_{c(i)})} / (1 + e^{t^z_{c(i)}(z_i − k_{c(i)})}),   (1)

where k_{c(i)} ≥ 0 denotes the prerequisite amount of resources of user i and t^z_{c(i)} ≥ 0 expresses how "tight" this prerequisite is. Note that the probability of a user being satisfied, as a function of z_i, is a sigmoid with inflection point k_{c(i)}. We assume that the user's service class fully determines its private parameters; hence every user i has QoS prerequisite k_{c(i)} and sensitivity parameter t^z_{c(i)}. These parameters are unknown to the users themselves, so the SP's goal of eventually learning them is challenging (Section III-C).
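As a concrete illustration of Eq. (1), the satisfaction probability is simply a logistic function of z_i. The sketch below is ours (the function and argument names are not from the paper):

```python
import math

def qos_satisfaction_prob(z, k, t):
    """P[QoS sat] from Eq. (1): a logistic function of the aggregated
    resources z, with prerequisite k and tightness t."""
    x = t * (z - k)
    # numerically stable logistic sigma(x) = 1 / (1 + exp(-x))
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)
```

At z = k the probability is exactly 1/2 (the inflection point), and a larger t makes the transition from unsatisfied to satisfied sharper.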
Given the previous analysis, the aggregated satisfaction of the users of SP m is u_m(r_m) := Σ_{i∈U_m} u_i(r_i) ([7], [10]), where

    u_i(r_i) := e^{t^z_{c(i)}(β_i^T r_i − k_{c(i)})} / (1 + e^{t^z_{c(i)}(β_i^T r_i − k_{c(i)})}).   (2)

Note that the function u_i(·) can be expressed as a function of z_i as well; with a slight abuse of notation, we switch between the two by changing the input variable. We can write the final optimization problem for the intra-slice allocation of SP m as:

    (IN-SL):  max_{r_m}  u_m(r_m)
              s.t.  r_i ⪰ 0, ∀i ∈ U_m
                    x_m ⪰ Σ_{i∈U_m} r_i

In case the amount of resources obtained from every NP, x_m, is not given, SP m can optimize it together with the intra-slice resource allocation. Hence, SP m can solve the following problem:

    (P):  max_{r_m, x_m}  Ψ_m(r_m, x_m) := u_m(r_m) − c^T x_m
          s.t.  r_i ⪰ 0, ∀i ∈ U_m
                x_m ⪰ Σ_{i∈U_m} r_i

Recall that c_k denotes the price per unit of resources announced by every NP k. In Problem P, the objective function Ψ_m can be thought of as the profit of SP m. Let the solution of the above problem be ψ*_m. Problems IN-SL and P are maximizations of a summation of sigmoid functions over a linear set of constraints. In [27], the problem of maximizing a sum of sigmoid functions over a convex constraint set is addressed; that work shows that the problem is generally NP-hard and proposes an approximation algorithm, based on a branch-and-bound method, to find an approximate solution to the sigmoid programming problem. In the rest of the section, we study three variations of problem P.
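Before turning to those variations, a toy evaluation of the objective of Problem P makes its structure concrete. The sketch below is ours, with hypothetical parameters; it only evaluates Ψ_m(r_m, x_m) = Σ_i u_i(β_i^T r_i) − c^T x_m for a small instance:

```python
import math

def sigmoid(x):
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

def profit(r, beta, k, t, c, x):
    """Psi_m = sum_i u_i(beta_i^T r_i) - c^T x_m.

    r[i]     : resources of user i per RAN; beta[i]: its connectivities
    k[i],t[i]: the user's QoS prerequisite and tightness
    c, x     : per-RAN prices and the SP's purchased resources
    """
    utility = 0.0
    for i in range(len(r)):
        z_i = sum(b * rr for b, rr in zip(beta[i], r[i]))
        utility += sigmoid(t[i] * (z_i - k[i]))
    cost = sum(ck * xk for ck, xk in zip(c, x))
    return utility - cost
```

With one user at exactly its prerequisite (u_i = 1/2) and a purchase cost of 0.1, the profit is 0.4; buying far more than the users need only lowers Ψ_m.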
Specifically, in Section II-B1 we study the case where the end-users are charged to get the resources from the SPs, and in Sections II-B2 and II-B3 we regularize and concavify P, respectively, which will facilitate the analysis in the rest of the paper.

1) Price Mechanism in P: In this subsection we argue that Problem P is expressive enough to capture the case where every user i is charged for its assigned resources. Let p_i be the amount of money that user i should pay to receive the z_i resources. In that case, the SPs should modify Problems IN-SL and P accordingly. First, note that user i's satisfaction may also depend on p_i. Similarly to the previous section, we can express the satisfaction of user i with respect to the price p_i using a sigmoid function as P[price sat_i] = 1 / (1 + e^{t^p_{c(i)}(p_i − b_{c(i)})}), where b_{c(i)} ≥ 0 is the budget of user i for the prerequisite resources k_{c(i)}, and t^p_{c(i)} ≥ 0 expresses how "tight" this budget is.
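The expected revenue from a user, introduced next as Eq. (3), multiplies this price-satisfaction probability by the QoS one and by the price itself; since the product admits a unique positive critical point p̂, a dense grid scan is enough to locate it numerically. The sketch below is ours (the names and the grid-search method are illustrative, not the paper's closed-form argument):

```python
import math

def sigmoid(x):
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

def expected_revenue(p, z, k, tz, b, tp):
    """P[price sat] * P[QoS sat] * p for one user with budget b,
    price tightness tp, QoS prerequisite k and tightness tz."""
    p_qos = sigmoid(tz * (z - k))
    p_price = sigmoid(-tp * (p - b))      # = 1 / (1 + e^{tp (p - b)})
    return p_qos * p_price * p

def best_price(z, k, tz, b, tp, p_max=10.0, steps=100_000):
    """Approximate the unique revenue-maximizing price by a grid scan."""
    best_p, best_r = 0.0, float("-inf")
    for i in range(1, steps + 1):
        p = p_max * i / steps
        r = expected_revenue(p, z, k, tz, b, tp)
        if r > best_r:
            best_p, best_r = p, r
    return best_p
```

For b = 1 and tp = 2 the first-order condition 2p(1 − σ(−tp(p − b))) = 1 is satisfied exactly at p = b, so the scan should return a price very close to 1.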
We can now model the acceptance probability function [7] as P[sat_i] = P[price sat_i] P[QoS sat_i], and hence the expected total revenue, or the new utility of SP m, u'_m, is modeled as

    u'_m(r_m, p_m) := Σ_{i∈U_m} P[sat_i] p_i.   (3)

From Eq. (3), SP m can immediately determine the optimal price p̂_i to ask from any user i ∈ U_m; this follows from the fact that for positive p_i the function admits a unique critical point, p̂. Therefore, by just adding proper coefficients to the terms of Problems IN-SL and P, we can embed a pricing mechanism for the end-users in the model. For the rest of the paper, without loss of generality in our model, we assume that the end-users are not charged for the obtained resources.

2) Regularization of P: We can regularize Problem P with a small positive λ_m.
In that manner, we encourage dense solutions and hence avoid situations where a problem in one RAN completely disrupts the operation of the SP.

    (P̄):  max_{r_m, x_m}  Ψ_m(r_m, x_m) − λ_m ‖x_m‖₂²
          s.t.  r_i ⪰ 0, ∀i ∈ U_m
                x_m ⪰ Σ_{i∈U_m} r_i

In the regularized problem P̄, note that larger values of λ_m penalize the vectors x_m with greater L2 norms. Let the solution of Problem P̄ be ψ̄*_m. The lemma below shows that for small λ_m the optimal values ψ̄*_m and ψ*_m are close. Its proof is simple and thus omitted for brevity.

Lemma 1. Let (r*_m, x*_m) and (r̄*_m, x̄*_m) be solutions of Problems P and P̄, respectively. Then,

    ψ*_m − λ_m‖x*_m‖₂² ≤ ψ̄*_m ≤ ψ*_m − λ_m‖x̄*_m‖₂².

Lemma 1 proves that the regularization of P is (almost) without loss of optimality. In the next section, we proceed by concavifying Problem P̄; the new concavified problem will be a fundamental building block of the auction analysis in Section III-A.

3) Concavification of P̄: To concavify P̄, we replace every summand of u_m with its tightest concave envelope, i.e., the pointwise infimum over all concave functions that are greater than or equal to it. For the sigmoid function u_i(z_i) the concave envelope, û_i(z_i), has the closed form

    û_i(z_i) = { u_i(0) + [(u_i(w) − u_i(0)) / w] z_i,   0 ≤ z_i ≤ w
               { u_i(z_i),                               w ≤ z_i,

for some w > k_i, which can be found easily by bisection [27].
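A minimal sketch of that bisection (ours, not the paper's code): the breakpoint w is where the chord from (0, u_i(0)) becomes tangent to the sigmoid, i.e., u_i'(w) = (u_i(w) − u_i(0))/w. The sketch assumes the sigmoid is steep enough relative to k that this tangency equation has a root w > k, which holds for the parameter ranges plotted in Fig. 1:

```python
import math

def sigmoid(x):
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

def make_envelope(k, t):
    """Concave envelope of u(z) = sigmoid(t (z - k)) on z >= 0.

    Returns (u_hat, w): the envelope function and the chord breakpoint,
    found by bisection on g(w) = u'(w) - (u(w) - u(0)) / w."""
    u = lambda z: sigmoid(t * (z - k))
    du = lambda z: t * u(z) * (1.0 - u(z))
    g = lambda w: du(w) - (u(w) - u(0.0)) / w   # > 0 left of w*, < 0 right

    lo, hi = k, k
    while g(hi) > 0:            # expand until the sign flips
        hi *= 2.0
    for _ in range(200):        # plain bisection
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    w = 0.5 * (lo + hi)

    def u_hat(z):
        if z <= w:              # chord segment of the envelope
            return u(0.0) + (u(w) - u(0.0)) / w * z
        return u(z)             # the sigmoid itself beyond w
    return u_hat, w
```

By construction the envelope coincides with u beyond w and dominates it on [0, w].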
Fig. 1 depicts the concavification of the aforementioned sigmoid functions for k_{c(·)} = 100 and three different values of t^z_{c(·)}. Note that for the lowest t^z_{c(·)} (elastic traffic) we get the best approximation, whilst for the largest (inelastic traffic / tight QoS prerequisites) we get the worst. To exploit the closed form of the envelope û_i(z_i), instead of problem P̄ we will concavify the equivalent problem:

    (P̃):  max_{r_m, x_m, z_m}  Σ_{i∈U_m} f_i(r_i, z_i) − c^T x_m − λ_m ‖x_m‖₂²
          s.t.  (r_i, z_i) ∈ S_i, ∀i ∈ U_m
                x_m ⪰ Σ_{i∈U_m} r_i

where S_i := {(r_i, z_i) : r_i ⪰ 0, z_i = β_i^T r_i} and f_i(r_i, z_i) := u_i(z_i) with domain S_i. The following lemma uses the concave envelope of the sigmoid function u_i(z_i) to compute the concave envelope of f_i(r_i, z_i), and hence the concavification of problem P̃. Its proof is based on the definition of the concave envelope and is omitted for brevity.

Lemma 2. The concave envelope of the function f_i(r_i, z_i) := e^{t^z_{c(i)}(z_i − k_{c(i)})} / (1 + e^{t^z_{c(i)}(z_i − k_{c(i)})}) with domain S_i, denoted f̂_i(r_i, z_i), has the following closed form (with domain S_i): f̂_i(r_i, z_i) = û_i(z_i), ∀(r_i, z_i) ∈ S_i.

Therefore, SP m can concavify P̃ as follows:

    (P̂):  max_{r_m, x_m, z_m}  Σ_{i∈U_m} f̂_i(r_i, z_i) − c^T x_m − λ_m ‖x_m‖₂²
          s.t.  (r_i, z_i) ∈ S_i, ∀i ∈ U_m
                x_m ⪰ Σ_{i∈U_m} r_i

Note that P̂ is strongly concave and thus admits a unique maximizer. Let the solution and the optimal point of problem P̂ be ψ̂*_m and (x̂*_m, r̂*_m), respectively. Ultimately, we would like to compare the solution of the concavified P̂ with that of the original problem P. Towards that direction, we first define the nonconcavity of a function as follows [28]:

Definition 1 (Nonconcavity of a function). We define the nonconcavity ρ(f) of a function f : S → R with domain S to be ρ(f) = sup_x (f̂(x) − f(x)).
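Definition 1 can be estimated numerically: on a uniform grid, the concave envelope of the sampled values is their upper convex hull, so ρ(f) is the largest gap between hull and samples. A self-contained sketch of ours (not from the paper):

```python
def nonconcavity(vals):
    """rho(f) estimated on a uniform grid: max over grid points of
    (upper-convex-hull value - sampled value)."""
    # build the upper hull of the points (i, vals[i]) by monotone chain
    hull = []
    for i, v in enumerate(vals):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the middle point while it lies on/below the new chord
            if (y2 - y1) * (i - x1) <= (v - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((i, v))
    # evaluate the hull (the grid envelope) at every grid index
    rho, h = 0.0, 0
    for i, v in enumerate(vals):
        while h + 1 < len(hull) and hull[h + 1][0] <= i:
            h += 1
        if hull[h][0] == i:
            env = hull[h][1]
        else:
            (x1, y1), (x2, y2) = hull[h], hull[h + 1]
            env = y1 + (y2 - y1) * (i - x1) / (x2 - x1)
        rho = max(rho, env - v)
    return rho
```

A concave sample has ρ = 0, while for a convex sample such as i² on 0..10 the envelope is the single chord and the gap peaks in the middle.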
Let F denote a set of possibly non-concave functions. Then define ρ_[j](F) to be the j-th largest of the nonconcavities of the functions in F. The theorem below summarizes the main result of this section: every SP can solve the concavified P̂ instead of the original P, since the former provides a constant-bound approximation of the latter. Recall that Ψ_m(r̂*_m, x̂*_m) is the profit of SP m evaluated at the solution of P̂, and that K is the number of NPs.

Theorem 1. Let (r*_m, x*_m) and (r̄*_m, x̄*_m) be solutions of Problems P and P̄, respectively. Moreover, let F̂ := {u_i}_{i∈U_m}. Then,

    ψ*_m − ǫ − δ_1(λ_m) ≤ Ψ_m(r̂*_m, x̂*_m) ≤ ψ*_m + δ_2(λ_m),

where δ_1(λ_m) := λ_m(‖x*_m‖₂² − ‖x̂*_m‖₂²), δ_2(λ_m) := λ_m(‖x̂*_m‖₂² − ‖x̄*_m‖₂²) and ǫ = Σ_{j=1}^{K} ρ_[j](F̂).

Proof: Note that ψ̄*_m is also given by solving P̃, and that (r̂*_m, x̂*_m), with the corresponding optimal value ψ̂*_m, are given by solving P̂. Therefore, from [28, Th. 1], we have that

    ψ̄*_m − Σ_{j=1}^{K} ρ_[j](F̂) ≤ u_m(r̂*_m) − c^T x̂*_m − λ_m‖x̂*_m‖₂² ≤ ψ̄*_m.

The result follows from Lemma 1.

Remark 1. The values of δ_1 and δ_2 decrease as λ_m decreases, and hence for small regularization penalties they can get arbitrarily close to zero.

Remark 2. The approximation error ǫ depends on the K greatest nonconcavities of the set {u_i}_{i∈U_m}. There are two conditions that ensure a negligible approximation error, i.e., ǫ ≪ ψ*_m: i) the end-users have concave utility functions (in that case ǫ → 0), or ii) the market is profitable enough for every SP m and hence ψ*_m ≫ K. Condition ii) makes the error negligible since ǫ ≤ K, and it can be satisfied, for example, when the supply of the market, C, is sufficiently large.

Fig. 1: Concave envelopes of sigmoid utility functions with k_{c(·)} = 100 and (a) t^z_{c(·)} = 0.02, (b) t^z_{c(·)} = 0.2 and (c) t^z_{c(·)} = 2.
Theorem 1 implies that every SP can solve Problem P̂, which is a concave program with a unique solution, to find an approximate solution to P. This observation fosters the convergence analysis of the proposed auction in Section III-A.

III. NETWORK SLICING MARKET CYCLE

In this section, we study the evolution of the network slicing market using an iterative model that consists of 5-step cycles. We refer to the following sequence of steps as a market cycle:

S1. |U_m| prospective users appear to every SP m.
S2. The vector x_m, i.e., the distribution of the resources from the NPs to SP m, is determined for every m. To achieve this in a distributed fashion, an auction between the SPs and the NPs should be realized.
S3. Given x_m, each SP m determines the vectors r_i, and hence the amount of resources z_i, for every user i ∈ U_m (intra-slice resource allocation).
S4. After receiving the resources, each user i determines and reports to the SP whether the received QoS was enough to complete its application.
S5. The SPs exploit the responses of their users to estimate their private parameters and hence distribute the resources more efficiently in the next cycle.

It is important for the vector x_m to be determined before the intra-slice resource allocation, since the former serves as the capacity on the resources available to SP m.
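The five steps map naturally onto a loop in which the auction (S2), the intra-slice allocation (S3), user feedback (S4), and the learning update (S5) are pluggable components. The skeleton below is our illustration of that control flow only; every step function is a caller-supplied stand-in, not one of the paper's algorithms:

```python
def market_cycle(sps, auction, intra_alloc, observe, update):
    """Run one 5-step market cycle (S1-S5) over the SPs' states.

    sps         : dict m -> SP state (holds its current users)       # S1
    auction     : sps -> {m: x_m}, the inter-slice allocation        # S2
    intra_alloc : (sp_state, x_m) -> per-user resources r            # S3
    observe     : (users, r) -> satisfaction reports                 # S4
    update      : (sp_state, reports) -> None, parameter estimation  # S5
    """
    x = auction(sps)                                            # S2
    r = {m: intra_alloc(sps[m], x[m]) for m in sps}             # S3
    reports = {m: observe(sps[m]["users"], r[m]) for m in sps}  # S4
    for m in sps:                                               # S5
        update(sps[m], reports[m])
    return x, r, reports
```

The ordering enforces the constraint noted above: x_m is fixed by the auction before any intra-slice allocation is computed.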
In the following, we expand upon each (non-trivial) step of the market cycle.

A. Step S2 - Clock Auction for the Network Slicing Market

In this section, we develop and analyze a clock auction between the SPs and the NPs that converges to a market equilibrium. Specifically, we describe the goal (Section III-A1), the steps (Section III-A2), and the convergence (Section III-A3) of the auction.

1) Auction Goal: Note that the solutions of the problems P and P̂ are functions of the prices c_1, ..., c_K. Let the demand of SP m, given the price vector c, be denoted as x*_m(c) or x̂*_m(c), depending on whether SP m uses Problem P or P̂ to ask for resources. Let also r*_m(c) and r̂*_m(c) be the corresponding optimal intra-slice resource allocation vectors. Hence, (r*_m(c), x*_m(c)) and (r̂*_m(c), x̂*_m(c)) are maximizers of P and P̂, respectively (given c). Since Problem P may admit multiple solutions, let the set D_m(c) be defined as D_m(c) := {x*_m : ∃ r*_m : Ψ_m(r*_m, x*_m) = ψ*_m given c}. We define a competitive equilibrium as follows:

Definition 2 (Competitive equilibrium). A competitive equilibrium of the Network Slicing Market is any price vector c† and allocation of the resources of the NPs x†, such that: i. x†_m ∈ D_m(c†) for every SP m, and ii. C = Σ_{m∈M} x†_m (the demand equals the supply).

Note that in a competitive equilibrium, every SP m gets resources that maximize its profit given the price vector.
Because a competitive equilibrium balances the interests of all participants, it is regarded as the settling point of markets in economic analysis [26], [29]. Nevertheless, since the SPs' demands are expressed by solving a non-concave program, we define an ǫ-competitive equilibrium, which will be the ultimate goal of the proposed clock auction.

Definition 3 (ǫ-Competitive equilibrium). An ǫ-competitive equilibrium of the Network Slicing Market is any price vector ˆc† and allocation ˆx† of the resources of the NPs such that: i. for every SP m, there exists an ǫ ≥ 0 and a feasible intra-slice resource allocation vector ˆr†_m (given ˆx†_m) such that ψ*_m − ǫ ≤ Ψ_m(ˆr†_m, ˆx†_m) ≤ ψ*_m + ǫ, and ii. C = Σ_{m∈M} ˆx†_m (the demand equals the supply).

Observe that the first condition of the above definition ensures that every SP is satisfied (up to a constant) with the obtained resources, in the sense that it operates close to its maximum possible profit.
From Theorem 1, note that if there exists a price vector ˆc† such that C = Σ_{m∈M} ˆx*_m(ˆc†), then the prices ˆc† together with the allocation ˆx† := ˆx*(ˆc†) form an ǫ-competitive equilibrium. Finding such a price vector is the motivation of the proposed clock auction. For the rest of the paper we make the following assumption:

Assumption 1. The SPs calculate their demand and intra-slice resource allocation by solving Problem ˆP.

This is a reasonable assumption since, in Theorem 1 and the corresponding Remarks 1 and 2, we proved that by solving a (strictly) concave problem every SP can operate near its optimal profit. Therefore, for the rest of the paper, we call ˆx*_m(c) the demand of SP m given the prices c.

2) Auction Description: We propose the following clock auction, which converges to an ǫ-competitive equilibrium of the Network Slicing market (Theorem 2).
As we will prove in Theorem 3, this equilibrium is robust, since the convergent price vector is the unique one that clears the market, i.e., makes the demand equal the supply.

i. An auctioneer announces a price vector c, each component of which is the price at which an NP sells a unit of its resources.
ii. The bidders (SPs) report their demands.
iii. If the aggregate demand received by an NP is greater than its available supply, the price of that NP is increased, and vice versa. In other words, the auctioneer adjusts the price vector according to Walrasian tatonnement.
iv. The process repeats until the price vector converges.

Note that the components of the price vector change simultaneously and independently. Hence, different brokers can cooperate to jointly clear the market efficiently in a decentralized fashion [23]. Let the excess demand Z(c) be the difference between the aggregate demand and the supply:

Z(c) = −C + Σ_{m∈M} ˆx*_m(c).

In Walrasian tatonnement, the price vector adjusts in continuous time according to the excess demand as ˙c = f(Z(c(t))), where f is a continuous, sign-preserving transformation [24]. For the rest of the paper, we set f to be the identity function, so that ˙c = Z(c(t)). In auctions based on Walrasian tatonnement, the payments are only valid after the convergence of the mechanism [30].
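The dynamics ˙c = Z(c(t)) can be simulated with a forward-Euler step (the discretization c_{t+1} = c_t + κZ(c_t) also used later in the numerical section). The sketch below is ours: it replaces the paper's demand functions ˆx*_m(c) with a toy quadratic-utility demand, so the function names and numbers are illustrative only.

```python
import numpy as np

def tatonnement(aggregate_demand, capacity, c_init, kappa=0.01,
                tol=1e-8, max_iter=100_000):
    """Discrete Walrasian tatonnement: c <- c + kappa * Z(c), where
    Z(c) = aggregate demand minus supply (the excess demand)."""
    c = np.asarray(c_init, dtype=float)
    for _ in range(max_iter):
        Z = aggregate_demand(c) - capacity   # excess demand at current prices
        if np.linalg.norm(Z) < tol:          # market is (numerically) cleared
            break
        c = c + kappa * Z                    # raise over-demanded prices, lower the rest
    return c

# Toy market: two identical SPs with quadratic utility a^T x - ||x||^2 / 2,
# whose individual demand at prices c is max(a - c, 0).
a = np.array([2.0, 3.0])
aggregate_demand = lambda c: 2.0 * np.maximum(a - c, 0.0)
capacity = np.array([1.0, 2.0])

c_star = tatonnement(aggregate_demand, capacity, c_init=[0.5, 0.5])
# Clearing condition 2*(a - c*) = capacity gives c* = a - capacity/2 = [1.5, 2.0].
```

At the returned prices the excess demand vanishes component-wise, which is exactly the stopping condition of step iv above.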
3) Auction Convergence: Towards proving the convergence of the auction, we provide the lemma below, which shows that the concavified version of the intra-slice resource allocation problem IN−SL can be treated as a concave function. The proof is omitted, as it is a direct extension of [3] and [31].

Lemma 3. The function U_m(x_m) shown below is concave.

U_m(x_m) := max_{r_m, z_m} Σ_{i∈U_m} ˆf_i(r_i, z_i)
s.t. (r_i, z_i) ∈ S_i, ∀i ∈ U_m,
     x_m ⪰ Σ_{i∈U_m} r_i    (4)

Using the function U_m, we can rewrite Problem ˆP as max_{x_m⪰0} U_m(x_m) − λ_m∥x_m∥_2^2 − c^T x_m. The following theorem studies the convergence of the auction.

Theorem 2.
Starting from any price vector c_init, the proposed clock auction converges to an ǫ-competitive equilibrium.

Proof: The proof relies on a global stability argument, similarly to [24], [29]. Let V_m(·) denote SP m's net indirect utility function:

V_m(c) = max_{x_m⪰0} { U_m(x_m) − λ_m∥x_m∥_2^2 − c^T x_m }.

Let a candidate Lyapunov function be V(c) := c^T C + Σ_{m∈M} V_m(c). To study the convergence of the auction we take the time derivative of this Lyapunov function:

˙V(c) = ˙c · ( C^T + Σ_{m∈M} (d/dc) max_{x_m⪰0} { U_m(x_m) − λ_m∥x_m∥_2^2 − c^T x_m } ).

Hence, we deduce that:

˙V(c) = ( C^T + Σ_{m∈M} {−ˆx*_m^T(c)} ) ˙c = −Z^T(c(t)) · Z(c(t)).

The above holds true since the function h(x_m) := U_m(x_m) − λ_m∥x_m∥_2^2 has as concave conjugate the function (see [31]) h*(c) = max_{x_m⪰0} { h(x_m) − c^T x_m }, and hence ∇h*(c) = arg max_{x_m⪰0} { U_m(x_m) − λ_m∥x_m∥_2^2 − c^T x_m }. Therefore, V(·) is a decreasing function of time and converges to its minimum.
Note that at the convergent point the supply equals the demand for every NP.

The market might admit multiple ǫ-competitive equilibria. Nevertheless, the equilibrium point to which the clock auction converges is robust in the following sense: given Assumption 1, the price vector that clears the market is unique. Therefore, in Theorem 2 we essentially proved that the proposed clock auction converges to that unique price vector. This is formally stated by the following theorem.

Theorem 3. There exists a unique price vector c† such that Σ_{m∈M} ˆx*_m(c†) = C.

Towards proving Theorem 3 we provide Lemmata 4 and 5. First, we show that if a component of the price vector changes, the demand of an SP that used to obtain resources from the corresponding NP must change as well.
Lemma 4. For two distinct price vectors c, ¯c with ∃k : c_k ≠ ¯c_k, it holds that ˆx*_m(c) = ˆx*_m(¯c) ⇒ ˆx*_{(m,k)}(c) = ˆx*_{(m,k)}(¯c) = 0.

Proof: Consider such price vectors ¯c and c with c_k ≠ ¯c_k. Since ˆx*_m(c) is the optimal point of Problem ˆP given c, applying the KKT conditions gives:

ˆx*_{(m,k)}(c) = 0  or  ∂{U_m(x_m) − λ_m∥x_m∥_2^2}/∂x_{(m,k)} |_{ˆx*_m(c)} = c_k.    (5)

However, ˆx*_m(¯c) is optimal for ˆP given ¯c. Employing an equation similar to (5) proves that if ˆx*_m(c) = ˆx*_m(¯c), then it can only hold that ˆx*_{(m,k)}(c) = ˆx*_{(m,k)}(¯c) = 0.

Definition 4 (WARP property). The aggregate demand function satisfies the Weak Axiom of Revealed Preferences (WARP) if, for different price vectors c and ¯c, it holds that:

c^T · Σ_{m∈M} ˆx*_m(¯c) ≤ c^T · Σ_{m∈M} ˆx*_m(c) ⇒ ¯c^T · Σ_{m∈M} ˆx*_m(¯c) < ¯c^T · Σ_{m∈M} ˆx*_m(c).

Lemma 5.
The aggregate demand function satisfies the WARP for distinct price vectors c, ¯c such that Σ_{m∈M} ˆx*_m(c) ≻ 0 and Σ_{m∈M} ˆx*_m(¯c) ≻ 0.

Proof: Since c ≠ ¯c, there exists k ∈ K such that c_k ≠ ¯c_k. Furthermore, we have Σ_{m∈M} ˆx*_m(c) ≻ 0, and hence there exists m_1 ∈ M such that ˆx*_{(m_1,k)}(c) > 0. Using Lemma 4 we conclude that ˆx*_{m_1}(c) ≠ ˆx*_{m_1}(¯c). Hence, since Problem ˆP admits a unique global maximum, we have:

Σ_{m∈M} { U_m(ˆx*_m(c)) − λ_m∥ˆx*_m(c)∥_2^2 − c^T · ˆx*_m(c) } > Σ_{m∈M} { U_m(ˆx*_m(¯c)) − λ_m∥ˆx*_m(¯c)∥_2^2 − c^T · ˆx*_m(¯c) }.

Combining the above with the WARP hypothesis, Σ_{m∈M} c^T · ˆx*_m(¯c) ≤ Σ_{m∈M} c^T · ˆx*_m(c), gives:

Σ_{m∈M} { U_m(ˆx*_m(c)) − λ_m∥ˆx*_m(c)∥_2^2 } > Σ_{m∈M} { U_m(ˆx*_m(¯c)) − λ_m∥ˆx*_m(¯c)∥_2^2 }.    (6)

The result follows by switching the roles of c and ¯c and combining the inequalities. We can now prove Theorem 3 as follows.
Proof of Theorem 3: Towards a contradiction, assume that there exist two distinct (non-zero) price vectors c and ¯c that satisfy Σ_{m∈M} ˆx*_m(¯c) = Σ_{m∈M} ˆx*_m(c) = C, and thus

c^T · ( Σ_{m∈M} ˆx*_m(¯c) − Σ_{m∈M} ˆx*_m(c) ) = 0.    (7)

Therefore, from Lemma 5 we know that:

¯c^T · Σ_{m∈M} ˆx*_m(¯c) < ¯c^T · Σ_{m∈M} ˆx*_m(c),    (8)

which contradicts the hypothesis.

Remark 3. Theorems 2 and 3, together with Remarks 1 and 2, imply that if the users' traffic is elastic, or the total capacity C of the NPs is sufficiently large, the clock auction converges monotonically to the unique competitive equilibrium of the market.

At the end of step S2, the final price vector ˆc† and the final demands ˆx*_m of each SP m have been determined.

B. Intra-Slice Resource Allocation & Feedback (Steps S3, S4)

At the beginning of step S3, every SP m is aware of the convergent point ˆx*_m, and hence it can allocate the resources either by solving the sigmoid program IN−SL or by using the convergent approximate solution ˆr*_m.
At that step, an SP can also determine whether it will overbook network resources. Overbooking is a common practice in the airline and hotel industries and is now being used in the network slicing problem [32], [33]. This management model allocates the same resources to multiple users of the network, expecting that not everyone uses their booked capacity. In that case, SP m solves Problem IN−SL while setting increased obtained resources, x^ov_m = ˆx*_m + α% ◦ ˆx*_m, for a relatively small positive α. Here, ◦ denotes the component-wise multiplication operator. During step S4 of the cycle, each user i receives its resources r_i and provides feedback on whether it was satisfied or not. In the next step, the SPs can use these responses to learn the private parameters of the different service classes.
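The overbooking rule x^ov_m = ˆx*_m + α% ◦ ˆx*_m is a single component-wise operation; a minimal sketch (the function name and the numbers are illustrative, not from the paper):

```python
import numpy as np

def overbook(x_star, alpha_pct):
    """Inflate the convergent demand component-wise by alpha percent:
    x_ov = x* + (alpha/100) ∘ x*."""
    x_star = np.asarray(x_star, dtype=float)
    return x_star * (1.0 + alpha_pct / 100.0)

x_star = np.array([10.0, 4.0, 6.0])    # convergent demand of one SP over 3 NPs
x_ov = overbook(x_star, alpha_pct=5)   # component-wise: [10.5, 4.2, 6.3]
```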
C. Learning the Parameters (Step S5)

At the final step of the cycle, the SPs exploit the data they obtained to learn the private parameters of their users. In that fashion, the market "learns" its equilibrium. For the rest of the paper, for generality, we assume the pricing mechanism introduced in Section II-B1. Therefore, for every user i, the SPs get to know whether it is satisfied by the pair of resources and price (z_i, p_i). A Bayesian inference model needs the data, a model for the private parameters, and a prior distribution.

Model: The observed data are the outcomes of the Bernoulli variables sat_i | θ_c(i) ∼ Bernoulli(P[sat_i]) for every user i, where θ_c(i) = (t^p_{c(i)}, b_{c(i)}, t^z_{c(i)}, k_{c(i)}) is the tuple of the private parameters that we want to infer.

Prior: Let the prior distribution of each parameter of θ_c(i) have probability density functions π_{t^p_{c(i)}}(·), π_{b_{c(i)}}(·), π_{t^z_{c(i)}}(·), and π_{k_{c(i)}}(·), respectively.
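As an illustration of this inference step, the sketch below fits a single satisfaction threshold from Bernoulli feedback with random-walk Metropolis sampling. The logistic satisfaction model, the flat prior, and every number here are our own simplifying assumptions; the paper's model has four private parameters per service class.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_sat(z, t, k=5.0):
    """Assumed model: probability of satisfaction is logistic in the
    allocated resources z, with threshold t and slope k."""
    return 1.0 / (1.0 + np.exp(-k * (z - t)))

def log_posterior(t, z, f, lo=0.0, hi=2.0):
    """Bernoulli log-likelihood plus a flat prior on [lo, hi]."""
    if not (lo <= t <= hi):
        return -np.inf
    p = np.clip(p_sat(z, t), 1e-12, 1.0 - 1e-12)
    return np.sum(f * np.log(p) + (1.0 - f) * np.log(1.0 - p))

def metropolis(z, f, t0=1.0, step=0.1, n=5000):
    """Random-walk Metropolis over the scalar threshold t."""
    t, lp = t0, log_posterior(t0, z, f)
    samples = np.empty(n)
    for i in range(n):
        t_prop = t + step * rng.normal()
        lp_prop = log_posterior(t_prop, z, f)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            t, lp = t_prop, lp_prop
        samples[i] = t
    return samples

# Synthetic feedback from 200 users with true threshold 0.8.
z = rng.uniform(0.0, 2.0, size=200)
f = (rng.uniform(size=200) < p_sat(z, 0.8)).astype(float)
samples = metropolis(z, f)
t_hat = samples[2500:].mean()    # posterior mean after burn-in
```

In the market cycle, the posterior obtained from one cycle would seed the prior of the next, as the text describes.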
The SPs infer the private parameters θ_c(i) for each service class separately, using Bayes' rule: p(θ_c(i) | data) ∝ Ln(data | θ_c(i)) π(θ_c(i)), where p(θ_c(i) | data) is the posterior distribution of θ_c(i), Ln(data | θ_c(i)) is the likelihood of the data given our model, and π(θ_c(i)) is the prior distribution. Assuming independent private parameters, π(θ_c(i)) is the product of the distinct prior distributions, and for each class c we have:

Ln(data | θ_c(i)) = Π_{i∈C^m_c} P[sat_i]^{f_i} (1 − P[sat_i])^{1−f_i},

where f_i is 1 when user i is satisfied and 0 otherwise. The SPs can use Markov Chain Monte Carlo (MCMC) with Metropolis sampling to find the posterior distribution after each market cycle. As the market evolves, the SPs exploit the previous posterior distributions to find better priors for the next cycle.

IV. CENTRALIZED SOLUTION

In case there exists a centralized entity that knows the utility function of every SP, it can optimize the social welfare, i.e.,
the summation of the utility functions of the service and the network providers. This centralized problem can be formulated as follows:

(SWM): max_{r_m} Σ_{m∈M} u_m(r_m)
s.t. r_i ⪰ 0, ∀i ∈ U_m,
     Σ_{m∈M} Σ_{i∈U_m} r_i ⪯ C.

The SWM problem can be solved, with any chosen positive approximation error, using the framework of sigmoidal programming [27].

V. NUMERICAL RESULTS

A. Auction Convergence & Parameter Tuning

In this section we study the convergence of the clock auction, as well as the impact that the various parameters have on its behavior. For this simulation, we assume a small market with 3 NPs with capacities C_1 = 850, C_2 = 750, C_3 = 755, and 5 SPs with 6 users and 3 distinct service classes each.

Fig. 2: L2 norm of the excess demand vector throughout the clock auction (a) for κ = 10^−4 and various initialization price vectors c_init, and (b) for c_init^T = [0.62, 0.64, 0.58] and different values of κ.

Fig. 3: Illustrating Theorem 2. Starting from any price vector c_init, the clock auction converges to the market-clearing prices c†.
The users' private parameters are set as follows: for a user i in the first class, t^z_{c(i)} = t^p_{c(i)} = 0.2 and k_{c(i)} = b_{c(i)} = 100; for the second class, t^z_{c(i)} = t^p_{c(i)} = 2 and k_{c(i)} = b_{c(i)} = 120; and for the third class, t^z_{c(i)} = t^p_{c(i)} = 20 and k_{c(i)} = b_{c(i)} = 150. Such values indicate that the users wish to pay a unit of monetary value for a unit of offered resources. To discretize the auction, we change the cost vector according to a step value κ, as c_{t+1} = c_t + κZ(c_t). Fig. 2 depicts the L2 norm of the excess demand vector throughout the clock auction for different cost vector initializations cinit (Fig. 2a) and for different step values κ (Fig. 2b). By simulating the clock auction, we deduce that the clearing price vector is c† = [0.6116, 0.6273, 0.5811]. In Fig. 2a, note that the closer the initialization cost vector is to c†, the faster the convergence becomes. Fig. 2b connotes the need for a proper choice of the step value κ. Clearly, κ = 10^-4 gives the fastest convergence, and as we decrease the step value the convergence becomes slower. Nevertheless, since Theorem 2 is proved for the continuous case, large values of κ cannot guarantee the convergence of the auction to an equilibrium. In Fig. 3, observe that the convergence of the auction does not depend on the initialization of the cost vector (Theorem 2).
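The discretized price update c_{t+1} = c_t + κZ(c_t) can be sketched as a simple iteration. The excess-demand function Z below is a hypothetical stand-in (the paper's actual demand comes from the SPs' optimization problems), chosen only so that the loop has a well-defined clearing point:

```python
import numpy as np

def clock_auction(excess_demand, c_init, kappa=1e-4, tol=1e-3, max_iter=500_000):
    """Discretized clock auction: c_{t+1} = c_t + kappa * Z(c_t).

    Prices rise where excess demand Z is positive and fall where it is
    negative, until the L2 norm of Z is (approximately) zero."""
    c = np.asarray(c_init, dtype=float)
    for _ in range(max_iter):
        z = excess_demand(c)
        if np.linalg.norm(z) < tol:  # market (approximately) cleared
            break
        c = c + kappa * z
    return c

# Toy excess-demand function (hypothetical): demand a_k / c_k at each NP k
# minus a fixed supply s_k, so the clearing price vector is simply a / s.
a = np.array([100.0, 110.0, 95.0])
s = np.array([160.0, 175.0, 165.0])
Z = lambda c: a / c - s

c_star = clock_auction(Z, c_init=[0.4, 0.4, 1.1])
```

For the initializations shown above, this toy market converges to the same clearing vector a/s regardless of the starting prices, mirroring the initialization-independence illustrated in Fig. 3; too large a step κ would break this, as noted for the discretized auction.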
[Fig. 4 residue: bars x_{(m,k)} (0–1400) for SP1/NP1, SP1/NP2, SP2/NP1, SP2/NP2 under Auction, SPP, oSPP(5%), and SWM.]

Fig. 4: Total amount of resources obtained by every SP m from every NP k in the market, x_{(m,k)}.

B. Visualization of the Resource Allocation

In this section, we get insights on the allocation of the resources in the market. We assume 2 NPs with C1 = C2 = 1400 and 2 SPs with 10 users each and one shared service class with t^z_{c(i)} = t^p_{c(i)} = 0.2 and k_{c(i)} = b_{c(i)} = 100 for all i. The first SP (SP1) is near the first NP (NP1) and far from NP2, and hence we set [β_{(1,1)}, …, β_{(1,10)}] = [0.99, 0.96, 0.87, 0.85, 0.82, 0.81, 0.80, 0.80, 0.70, 0.70] and β_{(2,i)} = 0.2, ∀i ∈ U1. Moreover, for the users of SP2 we set β_{(1,i)} = β_{(2,i)} = 0.8, ∀i ∈ U2.
We compare the resource allocation of four different methods. First, 'Auction' refers to the resource allocation that results immediately after the auction. 'SPP' takes x̂*_m from the equilibrium but performs the intra-slice allocation of every SP by solving IN-SL. We also study the method 'oSPP(5%)', which mimics the SPP method but with 5% overbooked resources. Finally, 'SWM' refers to the solution of Problem SWM.

Fig. 4 shows the amount of resources obtained by the two SPs. All methods allocate the majority of the resources of NP1 to SP1, since its users have greater connectivity with it. Although the users of SP2 have equally high connectivity with both NPs, all four methods were flexible enough to allocate the resources of NP2 to SP2. Note that none of the methods gives resources from NP2 to SP1.

Fig. 5 depicts the intra-slice resource allocations. In Fig. 5a, observe that the greater the connectivity of a user is, the fewer resources it gets. That is because users with good connectivity factors meet their prerequisite QoS using fewer resources, and hence SP1 can maximize its expected profit by giving them less. Note that 'SPP' gives no resources to the user with the worst connectivity, whereas with the overbooking, SP1 gets enough resources to make attractive offers to every user. Therefore, 'SPP' might make an unfair allocation: when the resources are not enough, it neglects the users with bad connectivity. In Fig. 5c, note that the homogeneity in the connectivities of the users of SP2 forces every method to fairly divide the resources among them.

Fig. 6a shows the expected value of the total revenue, or the social welfare. 'SWM' gives the greatest revenue among the methods that do not overbook. Nevertheless, although 'SPP' is a completely distributed solution and was not designed to maximize the total revenue, it performs very close to 'SWM'. Moreover, a 5% overbooking leads to greater revenues.

[Fig. 5 residue: per-user bars for users 1–10 — r_{1,i} for SP1 (a), r_{1,i} for SP2 (b), and r_{2,i} for SP2 (c) — under Auction, SPP, oSPP(5%), and SWM.]

Fig. 5: The solution of the intra-slice resource allocation problem from the perspective of the two different SPs of the market.
Specifically, how (a) SP1 distributed the resources of NP1, i.e., r_{1,i} for every i in U1, (b) SP2 distributed the resources of NP1, i.e., r_{1,i} for every i in U2, and (c) SP2 distributed the resources of NP2, i.e., r_{2,i} for every i in U2.

[Fig. 6 residue: bar values per method (Auction, SPP, oSPP(5%), SWM) — expected total revenue 1575.31, 1598.16, 1677.81, 1611.54 (a); expected revenue of SP1 746.85, 769.65, 827.42, 806.49 (b); expected revenue of SP2 828.46, 828.51, 850.39, 805.06 (c).]

Fig. 6: Illustrating the expected revenue (given by Eq. (3)) for the four different resource allocation methods.
Fig. 6a shows the aggregated expected revenue, Fig. 6b shows the expected revenue of SP1, and Fig. 6c shows the expected revenue of SP2.

C. Impact of Bayesian Inference

The previous results are extracted after a sufficient number of cycles, when the SPs have learned the parameters of the end-users. In this section, we consider an SP with 10 users and one service class that employs Bayesian inference to learn the private parameter t^z_{c(i)} for every i. We set the true value of the parameter to be t^z_{c(i)} = 2. The other parameters are set to t^p_{c(i)} = 2, k_{c(i)} = b_{c(i)} = 120, and β_{(1,i)} = 0.9, ∀i ∈ U1. We assume one more SP with a unique service class with t^p_{c(i)} = t^z_{c(i)} = 0.2, k_{c(i)} = b_{c(i)} = 100, and β_{(2,i)} = 0.9, ∀i ∈ U2. Finally, there are 2 NPs with C1 = C2 = 1200. In this example, SP1 sets as prior distribution the normal N(0.02, 2) and hence assumes elastic traffic. At the end of each market cycle, the SP makes an estimate, t̂^z_{c(i)}, by calculating the mean of the posterior distribution. Fig. 7 depicts the histogram of the posterior distribution for the first two market cycles. Observe that even in the third market cycle, SP1 can estimate with high accuracy the actual value of the parameter.
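The per-cycle estimation step (posterior mean under the N(0.02, 2) prior) can be sketched with a conjugate Gaussian update. The observation model below is an assumption for illustration only: the extract does not specify how the SP observes the users' responses, so we treat each market cycle as yielding a few noisy measurements of t^z_{c(i)} with a known noise variance.

```python
def posterior_normal(prior_mu, prior_var, obs, obs_var):
    """Conjugate Gaussian update: N(prior_mu, prior_var) prior on the unknown
    parameter, i.i.d. observations with known noise variance obs_var.
    Returns the posterior mean and variance."""
    n = len(obs)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mu = post_var * (prior_mu / prior_var + sum(obs) / obs_var)
    return post_mu, post_var

# Prior from the text: N(0.02, 2), i.e. the SP initially assumes elastic
# traffic; the true parameter is t^z = 2.  The observation batches below are
# hypothetical noisy measurements gathered during two market cycles.
mu, var = 0.02, 2.0
for cycle_obs in ([1.7, 2.2, 2.1], [1.9, 2.0, 2.05]):
    mu, var = posterior_normal(mu, var, cycle_obs, obs_var=0.25)
```

After two cycles the posterior mean is already close to the true value 2 and the posterior variance has shrunk sharply, matching the rapid concentration of the posterior seen in Fig. 7.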
In Table I, note that the perceived revenue, i.e., the expected revenue calculated using the estimate, differs between the cycles in which t̂^z_{c(i)} differs from t^z_{c(i)}. Hence, it is impossible for the SPs to maximize their expected profits when they do not know the actual values of the parameters. Indeed, observe that the bad estimate t̂^z_{c(i)} = 0.02 gives poor expected revenue compared to the last two cycles.

[Fig. 7 residue: histograms over t^z_{c(i)} with support roughly [0, 8] in (a) and [0, 6] in (b).]

Fig. 7: Posterior distribution of the unknown private parameter t^z_{c(i)} in (a) the first Market Cycle, and (b) the second Market Cycle.

TABLE I: Bayesian inference in different market cycles.

Cycle | t̂^z_{c(i)} | Acquired Resources | Perceived Revenue | Actual Revenue
  1   |    0.02     |        1087        |      530.26       |      699
  2   |    1.68     |        1370        |     1160.77       |     1163.48
  3   |    2.01     |        1365        |     1161.42       |     1161.42

VI. CONCLUDING REMARKS

In this paper we focus on the technical and economic challenges that emerge from the application of the network slicing architecture to real-world scenarios. Taking into consideration the heterogeneity of the users' service classes, we introduce an iterative market model along with a clock auction that converges to a robust ε-competitive equilibrium. Finally, we propose a Bayesian inference model for the SPs to learn the private parameters of their users and make the next equilibria more efficient.
Numerical results validate the convergence of the clock auction and the capability of the proposed framework to capture the different incentives.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Liu, “A theoretical framework for solving the optimal admissions control with sigmoidal utility functions,” in 2013 International Conference on Computing, Networking and Communications (ICNC).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' IEEE, 2013, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 237–242.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [7] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Lieto, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Malanchini, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Mandelli, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Moro, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Capone, “Strategic network slicing management in radio access networks,” IEEE Transactions on Mobile Computing, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [8] Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Zhu and R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Boutaba, “Nonlinear quadratic pricing for concavifiable utilities in network rate control,” in IEEE GLOBECOM 2008-2008 IEEE Global Telecommunications Conference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' IEEE, 2008, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 1–6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [9] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Gao, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Li, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Pan, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Liu, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' You, “Virtualization framework and vcg based resource block allocation scheme for lte virtualization,” in 2016 IEEE 83rd Vehicular Technology Conference (VTC Spring).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' IEEE, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 1–6.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [10] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Lee, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Kim, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Cho, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Lee, “Qoe-aware scheduling for sigmoid optimization in wireless networks,” IEEE Communications Letters, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 18, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 11, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 1995–1998, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [11] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Hemmati, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' McCormick, and S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Shirmohammadi, “Qoe-aware bandwidth allocation for video traffic using sigmoidal programming,” IEEE MultiMedia, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 24, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 4, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 80–90, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [12] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Tan, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Zhu, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Ge, and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Xiong, “Utility maximization resource allocation in wireless networks: Methods and algorithms,” IEEE Transactions on systems, man, and cybernetics: systems, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 45, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 7, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 1018–1034, 2015.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [13] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Papavassiliou, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Tsiropoulou, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Promponas, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Vamvakas, “A paradigm shift toward satisfaction, realism and efficiency in wireless networks resource sharing,” IEEE Network, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 35, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 348–355, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [14] I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Afolabi, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Taleb, K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Samdanis, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Ksentini, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Flinck, “Network slicing and softwarization: A survey on principles, enabling technologies, and solutions,” IEEE Communications Surveys & Tutorials, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 20, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 2429–2453, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [15] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Vassilaras, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Gkatzikis, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Liakopoulos, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Stiakogiannakis, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Qi, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Shi, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Liu, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Debbah, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Paschos, “The algorithmic aspects of network slicing,” IEEE Communications Magazine, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 55, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 8, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 112–119, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [16] U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Habiba and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Hossain, “Auction mechanisms for virtualization in 5g cellular networks: basics, trends, and open challenges,” IEEE Communications Surveys & Tutorials, vol.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 20, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 2264–2293, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [17] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Zhang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Chang, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Yu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Chen, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' H¨am¨al¨ainen, “A double auction mechanism for virtual resource allocation in sdn-based cellular network,” in 2016 IEEE 27th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' IEEE, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 1–6.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [18] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Fu and U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Kozat, “Wireless network virtualization as a sequential auction game,” in 2010 Proceedings IEEE INFOCOM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' IEEE, 2010, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 1–9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [19] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Ahmadi, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Macaluso, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Gomez, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' DaSilva, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Doyle, “Virtualization of spatial streams for enhanced spectrum sharing,” in 2016 IEEE Global Communications Conference (GLOBECOM).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' IEEE, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 1–6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [20] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Cao, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Lang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Li, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Chen, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Wang, “Power allocation in wireless network virtualization with buyer/seller and auction game,” in 2015 IEEE Global Communications Conference (GLOBECOM).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' IEEE, 2015, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 1–6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [21] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Zhu and E.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Hossain, “Virtualization of 5g cellular networks as a hierarchical combinatorial auction,” IEEE Transactions on Mobile Computing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 15, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 10, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 2640–2654, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [22] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Zhu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Cheng, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Chen, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Wang, “Wireless virtualization as a hierarchical combinatorial auction: An illustrative example,” in 2017 IEEE Wireless Communications and Networking Conference (WCNC).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' IEEE, 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 1–6.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [23] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Iosifidis, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Gao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Huang, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Tassiulas, “A double- auction mechanism for mobile data-offloading markets,” IEEE/ACM Transactions on Networking, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 23, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 1634–1647, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [24] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Ausubel and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Cramton, “Auctioning many divisible goods,” Journal of the European Economic Association, vol.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 2, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 2-3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 480–493, 2004.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [25] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Ausubel, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Cramton, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' Milgrom, “The clock-proxy auction: A practical combinatorial auction design,” Handbook of Spectrum Auction Design, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' 120–140, 2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [26] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' SHEN, “First fundamental theorem of welfare economics,” 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E1T4oBgHgl3EQfAgJ2/content/2301.02840v1.pdf'} +page_content=' [27] M.' 
diff --git a/4NAzT4oBgHgl3EQf9f64/content/tmp_files/2301.01921v1.pdf.txt b/4NAzT4oBgHgl3EQf9f64/content/tmp_files/2301.01921v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4aa75714a4655a739ef0b2b1e88ebd3cb11a5363
--- /dev/null
+++ b/4NAzT4oBgHgl3EQf9f64/content/tmp_files/2301.01921v1.pdf.txt
@@ -0,0 +1,2025 @@
Control over Berry Curvature Dipole with Electric Field in WTe2
Xing-Guo Ye,1,* Huiying Liu,2,* Peng-Fei Zhu,1,* Wen-Zheng Xu,1,* Shengyuan A. Yang,2 Nianze Shang,1 Kaihui Liu,1 and Zhi-Min Liao1,†
1State Key Laboratory for Mesoscopic Physics and Frontiers Science Center for Nano-optoelectronics, School of Physics, Peking University, Beijing 100871, China
2Research Laboratory for Quantum Materials, Singapore University of Technology and Design, Singapore, 487372, Singapore

Berry curvature dipole plays an important role in various nonlinear quantum phenomena. However, the maximum symmetry allowed for a nonzero Berry curvature dipole in the transport plane is a single mirror line, which strongly limits its effects in materials. Here, via probing the nonlinear Hall effect, we demonstrate the generation of a Berry curvature dipole by an applied dc electric field in WTe2, which is used to break the symmetry constraint. A linear dependence between the dipole moment of the Berry curvature and the dc electric field is observed. The polarization direction of the Berry curvature is controlled by the relative orientation of the electric field and the crystal axis, which can be further reversed by changing the polarity of the dc field.
Our Letter provides a route to generate and control Berry curvature dipole in broad material systems and to facilitate the development of nonlinear quantum devices.

Berry curvature is an important geometrical property of Bloch bands, which can lead to a transverse velocity of Bloch electrons moving under an external electric field [1-6]. Hence, it is often regarded as a kind of magnetic field in momentum space, leading to various exotic transport phenomena, such as the anomalous Hall effect (AHE) [1], the anomalous Nernst effect [7], and an extra phase shift in quantum oscillations [8]. The integral of the Berry curvature over the Brillouin zone for fully occupied bands gives rise to the Chern number [5], which is one of the central concepts of topological physics.

Recently, Sodemann and Fu [9] proposed that the dipole moment of the Berry curvature over the occupied states, known as the Berry curvature dipole (BCD), plays an important role in the second-order nonlinear AHE in time-reversal-invariant materials. For transport in the x-y plane, which is typical in experiments, the relevant BCD components form an in-plane pseudovector with D_α = ∫_k f0 (∂_α Ω_z) [9], where D_α is the BCD component along direction α, k is the wave vector, the integral is over the Brillouin zone with summation over the band index, f0 is the Fermi distribution (in the absence of external field), Ω_z is the out-of-plane Berry curvature, and ∂_α = ∂/∂k_α. The BCD results in a second-harmonic Hall voltage in response to a longitudinal ac probe current, which could find useful applications in high-frequency rectifiers, wireless charging, energy harvesting, infrared detection, and so on. The BCD and its associated nonlinear AHE have been predicted in several material systems [9-11] and experimentally detected in systems such as two-dimensional (2D) monolayer or few-layer WTe2 [12-15], the Weyl semimetal TaIrTe4 [16], 2D MoS2 and WSe2 [17-20], corrugated bilayer graphene [21], and a few topological materials [22-25]. However, a severe limitation is that the BCD obeys a rather stringent symmetry constraint: in the transport plane, the maximum symmetry allowed for D_α is a single mirror line [9]. In several previous works [17-21], additional material engineering, such as lattice strain or interlayer twisting, was needed to generate a sizable BCD. This constraint limits the available material platforms with nonzero BCD, which is unfavorable for the in-depth exploration of BCD-related physics and for practical applications.

Recent works suggested an alternative route to obtain a nonzero BCD, namely, utilizing the Berry connection polarizability to achieve a field-induced BCD, for which additional lattice engineering is unnecessary [26,27]. The Berry connection polarizability is also a band geometric quantity, related to the field-induced positional shift of Bloch electrons [28]. It is a second-rank tensor, defined as G_ab(k) = ∂A^(1)_a(k)/∂E_b, where A^(1) is the field-induced Berry connection, E is the applied electric field [28], and the superscript "(1)" indicates the first-order term in the electric field. The E-field-induced Berry curvature is then given by Ω^(1) = ∇_k × (G·E) [27], where G is the second-rank tensor defined above. This field-induced Berry curvature leads to a field-induced BCD D^(1)_α. Considering transport in the x-y plane with the applied dc E field also in the plane, we have D^(1)_α = ∫_k f0 (∂_α Ω^(1)_z) = ε_zγμ ∫_k f0 [∂_α(∂_γ G_μν)] E_ν, where α, γ, μ, ν = x, y, and ε_zγμ is the Levi-Civita symbol. In systems where the original BCD is forbidden by the crystal symmetry, the field-induced BCD from an external E field can generally be nonzero and become the dominant contribution. In such a case, the symmetry is lowered by the applied E field, and the induced BCD should be linear in E, with its direction also controllable by the E field. So far, this BCD caused by the Berry connection polarizability and its field control have not been experimentally demonstrated, and the nonlinear Hall effect derived from this mechanism has not been observed.

In this Letter, we report the manipulation of the electric-field-induced BCD due to the Berry connection polarizability. Utilizing a dc electric field Edc to produce a BCD in bulk WTe2 (for which the inherent BCD is symmetry forbidden), the second-harmonic Hall voltage V_H^2ω is measured as a response to an applied ac current I^ω. Both the orientation and the magnitude of the induced BCD are highly tunable by the applied Edc. Our Letter provides a general route to extend the BCD to abundant material platforms with high tunability, promising for practical applications.

The WTe2 devices were fabricated with circular disc electrodes (device S1) or Hall-bar-shaped electrodes (device S2). The WTe2 flakes were exfoliated from a bulk crystal and then transferred onto the prefabricated electrodes (Supplemental Material, Note 1 [29]). The WTe2 thickness of device S1 is 8.4 nm (Supplemental Material, Fig. S1 [29]), corresponding to 12-layer WTe2; we present the results from device S1 in the main text. The crystal orientations of the WTe2 devices were identified by their long, straight edges [12] and further confirmed by both polarized Raman spectroscopy (Supplemental Material, Note 2 [29]) and angle-dependent transport measurements (Supplemental Material, Note 3 [29]). The electron mobility of device S1 is ~4974 cm²/(V s) at 5 K (Supplemental Material, Note 4 [29]).
In our experiments, we use thick Td-WTe2 samples (thickness ~8.4 nm), which have an effective inversion symmetry in the x-y plane (the transport plane). This symmetry is formed by the combination of the mirror symmetry Ma and the glide mirror symmetry M̃b, as indicated in Fig. 1(c). The in-plane inversion leads to the absence of an inherent in-plane BCD, and hence of the nonlinear Hall effect, in the bulk (see Supplemental Material, Note 5 [29], for a detailed symmetry analysis). Because M̃b involves a half-cell translation along the c axis and hence is broken on the sample surface, a small but nonzero intrinsic BCD may exist on the surface. In fact, such a BCD due to surface symmetry breaking has already been reported [13] and is also observed in our samples, although the signal is much weaker in thicker samples (see Supplemental Material, Fig. S9 [29]).

To induce a BCD in bulk WTe2 through the Berry connection polarizability, a dc electric field Edc is applied in the x-y plane. As shown in Figs. 1(a) and 1(b), the field-induced Berry curvature shows a dipolelike distribution with nonzero BCD (theoretical calculations; see Supplemental Material, Note 6 [29]). The induced BCD can be controlled by the dc E field and should satisfy the following symmetry requirements. Because the presence of a mirror symmetry would force the BCD to be perpendicular to the mirror plane [9], the induced BCD D^(1) must be perpendicular to Edc when Edc is along the a or b axis. Control experiments were carried out in device S1 to confirm the above expectations. The measurement configuration is shown in Fig. 1(d) (see Supplemental Material, Fig. S2 [29], for the circuit schematic). The probe ac current with ac field E^ω and frequency ω was applied approximately along the −a axis, satisfying E^ω ≪ Edc, and the second-harmonic Hall voltage V_H^2ω was measured to reveal the nonlinear Hall effect.

FIG. 1. (a),(b) The field-induced Berry curvature Ω^(1)_c(k) in the kz = 0 plane for a dc electric field Edc = 3 kV/m applied along the (a) a or (b) b axis, respectively. The unit of Ω^(1)_c(k) is Å². The green arrows indicate the direction of Edc. The gray lines depict the Fermi surface. (c) The a-b plane of monolayer Td-WTe2. (d) The optical image of device S1, where the angle θ is defined. (e),(f) The second-harmonic Hall voltage V_H^2ω with Edc (e) along the b axis (θ = 0°) and (f) along the −a axis (θ = 90°) at 5 K. The E^ω is applied along the −a axis, as schematized in (d).

The Edc that is used to produce the BCD was applied along the direction characterized by the angle θ, which is the angle between the direction of Edc and the baseline of a pair of electrodes [white line in Fig. 1(d)] that is approximately along the b axis. Then Edc along θ = 0° (b axis) and θ = 90° (−a axis) corresponds to the induced D^(1) along the a axis and b axis, respectively. Because the nonlinear Hall voltage V_H^2ω is proportional to D^(1)·E^ω [9], the nonlinear Hall effect should be observed for E^ω ∥ D^(1) and should vanish for E^ω ⊥ D^(1).

As shown in Fig. 1(e), when Edc is along θ = 0°, the nonlinear Hall voltage V_H^2ω is indeed observed as expected. The Edc along the b axis induces a BCD along the a axis, leading to nonzero V_H^2ω since E^ω is applied along the −a axis. The second-order nature is verified by both the second-harmonic signal and the parabolic I-V characteristics. It is found that the nonlinear Hall voltage is highly tunable by the magnitude of Edc. The sign reverses when Edc is reversed. Moreover, the nonlinear Hall voltage is linearly proportional to Edc (Supplemental Material [29], Fig. S11), as we expected. As for Edc along θ = 90°, as shown in Fig. 1(f), V_H^2ω is much suppressed, being at least one order of magnitude smaller than the V_H^2ω in Fig. 1(e). Because in this case the Edc along the a axis induces a BCD along the b axis, E^ω is almost perpendicular to the BCD, leading to a negligible nonlinear Hall effect. Similar results are also reproduced in device S2 (Supplemental Material [29], Fig. S12). Such control experiments are well consistent with our theoretical expectation and confirm the validity of the field-induced BCD.

Besides the crystalline axes (θ = 0° and 90°), we also study the case when Edc is applied along arbitrary θ directions to obtain the complete angle dependence of the field-induced BCD. Here, E^ω is applied along the −a or b axis to detect the BCD component along the a or b axis, i.e., D^(1) = [D^(1)_a(θ), D^(1)_b(θ)], where D^(1)_a and D^(1)_b are the BCD components along the a and b axes, respectively. The measurement configurations are shown in Figs. 2(a) and 2(d). Figures 2(b) and 2(e) show the second-order Hall voltage as a function of θ, with the magnitude of Edc fixed at 3 kV/m. The second-order Hall response E_H^2ω/(E^ω)² is calculated from E_H^2ω = V_H^2ω/W and E^ω = I^ωR_∥/L, where W is the channel width, R_∥ is the longitudinal resistance, and L is the channel length. As shown in Figs. 2(c) and 2(f), E_H^2ω/(E^ω)² demonstrates a strong anisotropy, closely related to the inherent symmetry of WTe2. First, it is worth noting that the second-order Hall signal is negligible at Edc = 0. This is consistent with our previous analysis that the inherent bulk in-plane BCD is symmetry forbidden [26,27]. Second, E_H^2ω/(E^ω)² almost vanishes when Edc ∥ E^ω along the a or b axis. This is constrained by the mirror symmetry Ma or M̃b, forcing the BCD to be perpendicular to the mirror plane in such configurations.

FIG. 2. (a),(d) Measurement configuration for the second-order AHE with (a) E^ω ∥ −a axis and (d) E^ω ∥ b axis, respectively. The Edc, satisfying Edc ≫ E^ω, is rotated along various directions. (b),(e) The second-order Hall voltage V_H^2ω as a function of I^ω at fixed Edc = 3 kV/m but along various directions and at 5 K with (b) E^ω ∥ −a axis and (e) E^ω ∥ b axis, respectively. (c),(f) The second-order Hall signal E_H^2ω/(E^ω)² as a function of θ at 5 K with (c) E^ω ∥ −a axis and (f) E^ω ∥ b axis, respectively.

Thus, when Edc ∥ E^ω along the a or b axis, the induced BCD is perpendicular to Edc and E^ω, satisfying D^(1)·E^ω = 0, which leads to almost vanishing second-order Hall signals. Moreover, E_H^2ω/(E^ω)² exhibits a sensitive dependence on the angle θ, indicating that the BCD is highly tunable by the orientation of Edc. A local minimum of E_H^2ω/(E^ω)² is found at an intermediate angle around θ = 30° when E^ω ∥ −a axis in Fig. 2(c). This is because E_H^2ω/(E^ω)² depends not only on D^(1)·Ê^ω, i.e., the projection of the pseudovector D^(1) onto the direction of E^ω, but also on the anisotropy of the conductivity in WTe2. The two terms show different dependences on the angle θ, leading to a local minimum around θ = 30°.

Through control experiments and symmetry analysis, extrinsic effects, such as the diode effect, thermal effect, and thermoelectric effect, can be safely ruled out as the main origin of the observed second-order nonlinear AHE (see Supplemental Material, Note 9 [29]). To further investigate this effect, the temperature dependence and the scaling law of the second-order nonlinear Hall signal are studied. By changing the temperature, V_H^2ω and the longitudinal conductivity σxx were collected, with the magnitude of Edc fixed at 3 kV/m. Figures 3(a) and 3(c) show V_H^2ω at different temperatures with E^ω ∥ −a axis, θ = 0° and E^ω ∥ b axis, θ = 90°, respectively. A relatively small but nonzero second-order Hall signal is observed at 286 K. The scaling law, that is, the second-order Hall signal E_H^2ω/(E^ω)² versus σxx, is presented and analyzed in Figs. 3(b) and 3(d) for different angles θ. The σxx was calculated as σxx = (1/R_∥)(L/Wd), where d is the thickness of the WTe2, and was varied by changing the temperature. According to Ref. [42], the scaling law between E_H^2ω/(E^ω)² and σxx satisfies E_H^2ω/(E^ω)² = C0 + C1σxx + C2σxx². The coefficients C2 and C1 involve the mixed contributions from various skew scattering processes [42-45], such as impurity scattering, phonon scattering, and mixed scattering from both phonons and impurities [42]. C0 is mainly contributed by the intrinsic mechanism, i.e., the field-induced BCD here. As shown in Figs. 3(b) and 3(d), the scaling law is well fitted for all angles θ.

FIG. 3. (a),(c) The second-harmonic Hall voltage at various temperatures with the magnitude of Edc fixed at 3 kV/m (a) under E^ω ∥ −a axis, θ = 0° and (c) under E^ω ∥ b axis, θ = 90°. (b),(d) The second-order Hall signal E_H^2ω/(E^ω)² as a function of σxx (b) under E^ω ∥ −a axis and (d) under E^ω ∥ b axis at various θ with the magnitude of Edc fixed at 3 kV/m. The temperature range for the scaling law in (b) and (d) is 50-286 K.

It is found that C0 shows a strong anisotropy (Supplemental Material [29], Fig. S18), indicating that the field-induced BCD is also strongly dependent on the angle θ. The value of the field-induced BCD can be estimated through D = (2ℏ²n/m*e)[E_H^2ω/(E^ω)²] [12], where ℏ is the reduced Planck constant, e is the electron charge, m* = 0.3me is the effective electron mass, and n is the carrier density. Here, we replace E_H^2ω/(E^ω)² by the coefficient C0 from the scaling-law fitting. The two components of the BCD along the a and b axes, denoted D^(1)_a and D^(1)_b, are calculated from the fitting curves with the magnitude of Edc fixed at 3 kV/m under E^ω ∥ −a axis and E^ω ∥ b axis, respectively. As shown in Figs. 4(a) and 4(b), it is found that D^(1)_a shows a cos θ dependence on θ, whereas D^(1)_b shows a sin θ dependence. Such angle dependence is well consistent with the theoretical predictions (see Supplemental Material [29], Note 6). From the two components D^(1)_a and D^(1)_b, the field-induced BCD vector D^(1) is synthesized for Edc along various directions, as presented in Fig. 4(c). It is found that both the magnitude and the orientation of the field-induced BCD are highly tunable by the dc field.

FIG. 4. The induced Berry curvature dipole as a function of θ with the magnitude of Edc fixed at 3 kV/m for (a) the component along the a axis, D^(1)_a, and (b) the component along the b axis, D^(1)_b. (c) The relationship between the field-induced Berry curvature dipole D^(1) and the applied Edc = 3 kV/m along different directions. The scale bar of D^(1) is 0.2 nm.

In summary, we have demonstrated the generation, modulation, and detection of the induced BCD due to the Berry connection polarizability in WTe2. It is found that the direction of the generated BCD is controlled by the relative orientation between the applied Edc direction and the crystal axis, and its magnitude is proportional to the intensity of Edc. Using independent control of the two applied fields, our Letter demonstrates an efficient approach to probe the nonlinear transport tensor symmetry, which is also helpful for the full characterization of nonlinear transport coefficients. Moreover, the manipulation of the BCD up to room temperature by electric means, without additional symmetry breaking, will greatly extend BCD-related physics [46,47] to more general materials and should be valuable for developing devices utilizing the geometric properties of Bloch electrons.

This work was supported by the National Key Research and Development Program of China (No. 2018YFA0703703), the National Natural Science Foundation of China (Grants No. 91964201 and No. 61825401), and Singapore MOE AcRF Tier 2 (MOE-T2EP50220-0011). We are grateful to Dr.
Yanfeng Ge at SUTD for inspired discussions.

*These authors contributed equally to this work.
†liaozm@pku.edu.cn

[1] N. Nagaosa, J. Sinova, S. Onoda, A. H. MacDonald, and N. P. Ong, Anomalous Hall effect, Rev. Mod. Phys. 82, 1539 (2010).
[2] J. Sinova, S. O. Valenzuela, J. Wunderlich, C. H. Back, and T. Jungwirth, Spin Hall effects, Rev. Mod. Phys. 87, 1213 (2015).
[3] D. Xiao, W. Yao, and Q. Niu, Valley-Contrasting Physics in Graphene: Magnetic Moment and Topological Transport, Phys. Rev. Lett. 99, 236809 (2007).
[4] L. Šmejkal, Y. Mokrousov, B. Yan, and A. H. MacDonald, Topological antiferromagnetic spintronics, Nat. Phys. 14, 242 (2018).
[5] D. Xiao, M.-C. Chang, and Q. Niu, Berry phase effects on electronic properties, Rev. Mod. Phys. 82, 1959 (2010).
[6] M.-C. Chang and Q. Niu, Berry phase, hyperorbits, and the Hofstadter spectrum: Semiclassical dynamics in magnetic Bloch bands, Phys. Rev. B 53, 7010 (1996).
[7] M. T. Dau, C. Vergnaud, A. Marty, C. Beigné, S. Gambarelli, V. Maurel, T. Journot, B. Hyot, T. Guillet, B. Grévin et al., The valley Nernst effect in WSe2, Nat. Commun. 10, 5796 (2019).
[8] B.-C. Lin, S. Wang, S. Wiedmann, J.-M. Lu, W.-Z. Zheng, D. Yu, and Z.-M. Liao, Observation of an Odd-Integer Quantum Hall Effect from Topological Surface States in Cd3As2, Phys. Rev. Lett. 122, 036602 (2019).
[9] I. Sodemann and L. Fu, Quantum Nonlinear Hall Effect Induced by Berry Curvature Dipole in Time-Reversal Invariant Materials, Phys. Rev. Lett. 115, 216806 (2015).
[10] J.-S. You, S. Fang, S.-Y. Xu, E. Kaxiras, and T. Low, Berry curvature dipole current in the transition metal dichalcogenides family, Phys. Rev. B 98, 121109(R) (2018).
[11] Y. Zhang, J. van den Brink, C. Felser, and B. Yan, Electrically tunable nonlinear anomalous Hall effect in two-dimensional transition-metal dichalcogenides WTe2 and MoTe2, 2D Mater. 5, 044001 (2018).
[12] Q. Ma, S.-Y. Xu, H. Shen, D. MacNeill, V. Fatemi, T.-R. Chang, A. M. M. Valdivia, S. Wu, Z. Du, C.-H. Hsu et al., Observation of the nonlinear Hall effect under time-reversal-symmetric conditions, Nature (London) 565, 337 (2019).
[13] K. Kang, T. Li, E. Sohn, J. Shan, and K. F. Mak, Nonlinear anomalous Hall effect in few-layer WTe2, Nat. Mater. 18, 324 (2019).
[14] S.-Y. Xu, Q. Ma, H. Shen, V. Fatemi, S. Wu, T.-R. Chang, G. Chang, A. M. M. Valdivia, C.-K. Chan, Q. D. Gibson, J. Zhou, Z. Liu, K. Watanabe, T. Taniguchi, H. Lin, R. J. Cava, L. Fu, N. Gedik, and P. Jarillo-Herrero, Electrically switchable Berry curvature dipole in the monolayer topological insulator WTe2, Nat. Phys. 14, 900 (2018).
[15] J. Xiao, Y. Wang, H. Wang, C. D. Pemmaraju, S. Wang, P. Muscher, E. J. Sie, C. M. Nyby, T. P. Devereaux, X. Qian et al., Berry curvature memory through electrically driven stacking transitions, Nat. Phys. 16, 1028 (2020).
[16] D. Kumar, C.-H. Hsu, R. Sharma, T.-R. Chang, P. Yu, J. Wang, G. Eda, G. Liang, and H. Yang, Room-temperature nonlinear Hall effect and wireless radiofrequency rectification in Weyl semimetal TaIrTe4, Nat. Nanotechnol. 16, 421 (2021).
[17] J. Lee, Z. Wang, H. Xie, K. F. Mak, and J. Shan, Valley magnetoelectricity in single-layer MoS2, Nat. Mater. 16, 887 (2017).
[18] J. Son, K.-H. Kim, Y. H. Ahn, H.-W. Lee, and J. Lee, Strain Engineering of the Berry Curvature Dipole and Valley Magnetization in Monolayer MoS2, Phys. Rev. Lett. 123, 036806 (2019).
[19] M.-S. Qin, P.-F. Zhu, X.-G. Ye, W.-Z. Xu, Z.-H. Song, J. Liang, K. Liu, and Z.-M. Liao, Strain tunable Berry curvature dipole, orbital magnetization and nonlinear Hall effect in WSe2 monolayer, Chin. Phys. Lett. 38, 017301 (2021).
[20] M. Huang, Z. Wu, J. Hu, X. Cai, E. Li, L. An, X. Feng, Z. Ye, N. Lin, K. T. Law et al., Giant nonlinear Hall effect in twisted WSe2, Natl. Sci. Rev. nwac232 (2022).
[21] S.-C. Ho, C.-H. Chang, Y.-C. Hsieh, S.-T. Lo, B. Huang, T.-H.-Y. Vu, C. Ortix, and T.-M. Chen, Hall effects in artificially corrugated bilayer graphene without breaking time-reversal symmetry, Nat. Electron. 4, 116 (2021).
[22] P. He, H. Isobe, D. Zhu, C.-H. Hsu, L. Fu, and H. Yang, Quantum frequency doubling in the topological insulator Bi2Se3, Nat. Commun. 12, 698 (2021).
[23] O. O. Shvetsov, V. D. Esin, A. V. Timonina, N. N. Kolesnikov, and E. V. Deviatov, Non-linear Hall effect in three-dimensional Weyl and Dirac semimetals, JETP Lett. 109, 715 (2019).
[24] S. Dzsaber, X. Yan, M. Taupin, G. Eguchi, A. Prokofiev, T. Shiroka, P. Blaha, O. Rubel, S. E. Grefe, H.-H. Lai et al., Giant spontaneous Hall effect in a nonmagnetic Weyl-Kondo semimetal, Proc. Natl. Acad. Sci. U.S.A. 118, e2013386118 (2021).
[25] A. Tiwari, F. Chen, S. Zhong, E. Drueke, J. Koo, A. Kaczmarek, C. Xiao, J. Gao, X. Luo, Q. Niu et al., Giant c-axis nonlinear anomalous Hall effect in Td-MoTe2 and WTe2, Nat. Commun. 12, 2049 (2021).
[26] S. Lai, H. Liu, Z. Zhang, J. Zhao, X. Feng, N. Wang, C. Tang, Y. Liu, K. S. Novoselov, S. A. Yang et al., Third-order nonlinear Hall effect induced by the Berry-connection polarizability tensor, Nat. Nanotechnol. 16, 869 (2021).
[27] H. Liu, J. Zhao, Y.-X. Huang, X. Feng, C. Xiao, W. Wu, S. Lai, W.-b. Gao, and S. A. Yang, Berry connection polarizability tensor and third-order Hall effect, Phys. Rev. B 105, 045118 (2022).
[28] Y. Gao, S. A. Yang, and Q. Niu, Field Induced Positional Shift of Bloch Electrons and Its Dynamical Implications, Phys. Rev. Lett. 112, 166601 (2014).
[29] See Supplemental Material at http://link.aps.org/supplemental/10.1103/PhysRevLett.130.016301 for device fabrication, electrical measurements, calculation details, polarized Raman spectroscopy of few-layer WTe2, transport properties of the devices, angle-dependent third-order anomalous Hall effect, symmetry analysis of WTe2, theory analysis of the field-induced Berry curvature dipole, control experiments in device S2, extrinsic effects that may induce nonlinear transport, and anisotropy of the scaling parameters, which includes Refs. [30-41].
[30] L. Wang, I. Meric, P. Y. Huang, Q. Gao, Y. Gao, H. Tran, T. Taniguchi, K. Watanabe, L. M. Campos, D. A. Muller et al., One-dimensional electrical contact to a two-dimensional material, Science 342, 614 (2013).
[31] G. Kresse and J. Hafner, Ab initio molecular dynamics for open-shell transition metals, Phys. Rev. B 48, 13115 (1993).
[32] G. Kresse and J. Furthmüller, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B 54, 11169 (1996).
[33] P. E. Blöchl, Projector augmented-wave method, Phys. Rev. B 50, 17953 (1994).
[34] J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized Gradient Approximation Made Simple, Phys. Rev. Lett. 77, 3865 (1996).
[35] G. Pizzi et al., Wannier90 as a community code: New features and applications, J. Phys. Condens. Matter 32, 165902 (2020).
[36] M. Kim, S. Han, J. H. Kim, J.-U. Lee, Z. Lee, and H. Cheong, Determination of the thickness and orientation of few-layer tungsten ditelluride using polarized Raman spectroscopy, 2D Mater. 3, 034004 (2016).
[37] M. N. Ali, J. Xiong, S. Flynn, J. Tao, Q. D. Gibson, L. M. Schoop, T. Liang, N. Haldolaarachchige, M. Hirschberger, N. P. Ong et al., Large, non-saturating magnetoresistance in WTe2, Nature (London) 514, 205 (2014).
[38] V. Fatemi, Q. D. Gibson, K. Watanabe, T. Taniguchi, R. J. Cava, and P. Jarillo-Herrero, Magnetoresistance and quantum oscillations of an electrostatically tuned semimetal-to-metal transition in ultrathin WTe2, Phys. Rev. B 95, 041410(R) (2017).
[39] X. Zhang, V. Kakani, J. M. Woods, J. J. Cha, and X. Shi, Thickness dependence of magnetotransport properties of tungsten ditelluride, Phys. Rev. B 104, 165126 (2021).
[40] T. Akamatsu et al., A van der Waals interface that creates in-plane polarization and a spontaneous photovoltaic effect, Science 372, 68 (2021).
[41] C. Dames and G. Chen, 1ω, 2ω, and 3ω methods for measurements of thermal properties, Rev. Sci. Instrum. 76, 124902 (2005).
[42] Z. Z. Du, C. M. Wang, S. Li, H.-Z. Lu, and X. C. Xie, Disorder-induced nonlinear Hall effect with time-reversal symmetry, Nat. Commun. 10, 3047 (2019).
[43] Y. Tian, L. Ye, and X. Jin, Proper Scaling of the Anomalous Hall Effect, Phys. Rev. Lett. 103, 087206 (2009).
[44] L. Ye, M. Kang, J. Liu, F. von Cube, C. R. Wicker, T. Suzuki, C. Jozwiak, A. Bostwick, E. Rotenberg, D. C. Bell et al., Massive Dirac fermions in a ferromagnetic kagome metal, Nature (London) 555, 638 (2018).
[45] H. Isobe, S.-Y. Xu, and L. Fu, High-frequency rectification via chiral Bloch electrons, Sci. Adv. 6, eaay2497 (2020).
[46] X.-G. Ye, P.-F. Zhu, W.-Z. Xu, N. Shang, K. Liu, and Z.-M. Liao, Orbit-transfer torque driven field-free switching of perpendicular magnetization, Chin. Phys. Lett. 39, 037303 (2022).
[47] S. Sinha, P. C. Adak, A. Chakraborty, K. Das, K. Debnath, L. D. V. Sangani, K. Watanabe, T. Taniguchi, U. V. Waghmare, A. Agarwal, and M. M. Deshmukh, Berry curvature dipole senses topological transition in a moiré superlattice, Nat. Phys. 18, 765 (2022).

Supplemental Material for
Control over Berry curvature dipole with electric field in WTe2
Xing-Guo Ye1,+, Huiying Liu2,+, Peng-Fei Zhu1,+, Wen-Zheng Xu1,+, Shengyuan A. Yang2, Nianze Shang1, Kaihui Liu1, and Zhi-Min Liao1,*
1 State Key Laboratory for Mesoscopic Physics and Frontiers Science Center for Nano-optoelectronics, School of Physics, Peking University, Beijing 100871, China.
2 Research Laboratory for Quantum Materials, Singapore University of Technology and Design, Singapore, 487372, Singapore.
+ These authors contributed equally.
* Email: liaozm@pku.edu.cn

This file contains supplemental Figures S1-S18 and Notes 1-10.
Note 1: Device fabrication, experimental and calculation methods.
Note 2: Polarized Raman spectroscopy of WTe2.
Note 3: Angle-dependent longitudinal resistance and third-order nonlinear Hall effect.
Note 4: Magnetotransport properties of WTe2.
Note 5: Symmetry analysis of WTe2.
Note 6: Theoretical analysis and calculations of field-induced Berry curvature dipole.
Note 7: Electric field dependence of second-order Hall signals.
Note 8: Control experiments in device S2.
Note 9: Discussions of other possible origins of the second-order AHE.
Note 10: Angle dependence of parameter C0 obtained from the fittings of scaling law.

Supplemental Note 1: Device fabrication, experimental and calculation methods.
1) Device fabrication
The WTe2 flakes were exfoliated from a bulk crystal by scotch tape and then transferred onto polydimethylsiloxane (PDMS). The PDMS was then placed onto a Si substrate with 285-nm-thick SiO2, where the Si substrate was precleaned by air plasma, and further heated for about 1 minute at 90 °C to transfer the WTe2 flakes onto the Si substrate. Disk- and Hall-bar-shaped Ti/Au electrodes (around 10 nm thick) were prefabricated on individual SiO2/Si substrates with e-beam lithography, metal deposition, and lift-off. Exfoliated BN (around 20 nm thick) and WTe2 flakes (around 5-20 nm thick) were sequentially picked up and then transferred onto the Ti/Au electrodes using a polymer-based dry transfer technique [30].
The atomic force microscope image of device S1 is shown in Fig. S1. The thickness of this sample is 8.4 nm, corresponding to 12-layer WTe2. The whole exfoliation and transfer process was done in an argon-filled glove box with O2 and H2O content below 0.01 parts per million to avoid sample degradation.

Figure S1: (a) The atomic force microscope image of device S1. (b) The line profile shows that the thickness of the WTe2 sample is 8.4 nm.

2) Electrical transport measurements and circuit schematic
All the transport measurements were carried out in an Oxford cryostat with a variable temperature insert and a superconducting magnet. First-, second- and third-harmonic signals were collected by standard lock-in techniques (Stanford Research Systems Model SR830) at frequency ω; ω equals 17.777 Hz unless otherwise stated.
The circuit schematic with multiple sources used in the experiments is depicted in Fig. S2. The a.c. and d.c. sources are both effective current sources. The original SR830 a.c. source is a voltage source. In the experiments, we connected the SR830 voltage source in series with a protective resistor of resistance $R_p$ ($R_p = 100$ kΩ for device S1 and $R_p = 10$ kΩ for device S2), as shown in Fig. S2. The resistance of the WTe2 channel is on the order of 10 Ω, much less than $R_p$, which makes the SR830 source an effective current source with excitation current $I^\omega \cong U^\omega / R_p$, where $U^\omega$ is the source voltage.
The Keithley 2400 current source is used as the d.c. source. As shown in Fig. S2, the positive and negative terminals of the Keithley source are connected to a pair of diagonal electrodes to form a loop circuit, i.e., a floating loop. The d.c. electric field is obtained as $E_{dc} = I_{dc} R_\theta / L$, where $I_{dc}$ is the applied d.c. current, $R_\theta$ is the resistance of WTe2 along direction $\theta$, and $L$ is the channel length of WTe2.
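As a quick illustration of this relation (a hypothetical sketch, not code from this work; the channel values below are invented for the example):

```python
def dc_field(i_dc, r_theta, length):
    """d.c. electric field E_dc = I_dc * R_theta / L, in V/m."""
    return i_dc * r_theta / length

# Hypothetical illustrative values: 1 mA d.c. current through a 30-ohm
# channel of 10 um length gives the kV/m field scale used in this work.
e_dc = dc_field(1e-3, 30.0, 10e-6)  # 3000 V/m = 3 kV/m
```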
The impedance of the floating Keithley source to ground is measured to be ~60 MΩ, while the negative terminal of the SR830 source is directly connected to ground.

Figure S2: Schematic structure of the circuit for measurements in device S1.

3) Spectral purity of lock-in measurements
For the lock-in measurements, the integration time is 300 ms and the filter roll-off is 24 dB/octave; that is, the cutoff (-3 dB) frequency of the low-pass filter is 0.531 Hz and the filter rolls off at 24 dB per octave. The narrow detection bandwidth (±0.531 Hz) effectively avoided spectral leakage.
The spectral purity of the lock-in homodyne circuit is verified by control lock-in measurements of a resistor. The first-, second- and third-harmonic voltages of a resistor with resistance ~100 Ω are measured using the same frequency (17.777 Hz), integration time (300 ms) and filter roll-off (24 dB/octave) as used in the experiments, as shown in Fig. S3. The first-harmonic voltage depends linearly on the alternating current, consistent with the resistance value of ~100 Ω. The second- and third-harmonic voltages are four orders of magnitude smaller than the first-harmonic voltage, which indicates the high spectral purity of the lock-in homodyne circuit.

Figure S3: Lock-in measurements for a resistor with resistance ~100 Ω.
a, The first-harmonic voltage versus the alternating current.
b, The second- and third-harmonic voltages versus the alternating current.

4) Validity of electrical measurements with the two sources
In our experiments, the Keithley source is used as the d.c. current source, which has an output impedance of ~20 MΩ. The a.c. current source is realized by connecting a resistor $R_p$ ($R_p = 100$ kΩ for device S1 and $R_p = 10$ kΩ for device S2) in series with the SR830 voltage source.
Both the a.c. and d.c. current sources have effectively large output impedance compared to the sample resistance of ~10 Ω, so they can be considered as independent current sources. These two current sources can be applied to the device simultaneously, with well-defined potential differences. To further confirm the validity of our electrical measurements with the two current sources, we designed a test circuit, as shown in Fig. S4(a). The a.c. current flowing through $R_2$ was obtained by measuring the first-harmonic voltage $V^\omega$ of $R_2$, with $I^\omega = V^\omega / R_2$. The d.c. current is applied by the Keithley current source and is determined by measuring the d.c. voltage $V_{dc}$ of $R_2$, with $I_{dc} = V_{dc} / R_2$. As shown in Fig. S4(b), where the a.c. voltage of the SR830 source is fixed at 1 V, $I^\omega$ is unchanged when the d.c. current from the Keithley source is varied, while the measured $I_{dc}$ is almost the same as the output current of the Keithley source. In Fig. S4(c), where the d.c. current of the Keithley source is fixed, $I^\omega$ well satisfies $I^\omega = U^\omega / (R_1 + R_2 + R_3) \cong U^\omega / R_p$, with $U^\omega$ the SR830 source voltage and $R_p = R_1$. These results clearly confirm that the a.c. and d.c. sources are effectively independent, with negligible current shunting between them.

Figure S4: Validity of the electrical measurements with two sources.
a, Schematic of the test circuit.
b, $I^\omega$ and $I_{dc}$ as a function of the Keithley source current with the SR830 source voltage $U^\omega$ fixed at 1 V.
c, $I^\omega$ and $I_{dc}$ as a function of the SR830 source voltage $U^\omega$ with the Keithley source current fixed at 1 mA.
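The negligible shunt between the two sources can be sketched numerically. This is an illustrative calculation, not the paper's analysis code; $R_2$ and $R_3$ are assumed here to be sample-scale 10 Ω resistors:

```python
def loop_current(u_src, r1, r2, r3):
    """Exact a.c. current in the series test loop: I = U / (R1 + R2 + R3)."""
    return u_src / (r1 + r2 + r3)

r_p = 100e3                     # protective resistor, R_p = R1 for device S1
i_exact = loop_current(1.0, r_p, 10.0, 10.0)
i_approx = 1.0 / r_p            # the approximation I_w ~ U_w / R_p
rel_err = abs(i_exact - i_approx) / i_approx  # ~2e-4: the shunt is negligible
```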
5) Calculation methods
First-principles calculations were performed to reveal the properties of the Berry connection polarizability tensor and the field-induced Berry curvature dipole in WTe2. The electronic structure calculations were carried out in the framework of density functional theory as implemented in the Vienna ab initio simulation package [31,32], with the projector augmented wave method [33] and the Perdew-Burke-Ernzerhof exchange-correlation functional [34]. For convergence of the results, spin-orbit coupling was included self-consistently in the electronic structure calculations, with a kinetic energy cutoff of 600 eV and a Monkhorst-Pack k mesh of 14 × 8 × 4. We used d orbitals of the W atoms and p orbitals of the Te atoms to construct Wannier functions [35]. When evaluating the band geometric quantities, we consider the finite-temperature effect in the distribution function and a lifetime broadening of $k_B T$ with $T = 5$ K.

Supplemental Note 2: Polarized Raman spectroscopy of WTe2.
The crystalline orientation of the WTe2 device was determined by polarized Raman spectroscopy in the parallel polarization configuration [36]. Figure S5 shows the polarized Raman spectrum of device S2 as an example. The optical image of device S2 is displayed in Fig. S5(a). Raman spectroscopy was measured at 514 nm excitation wavelength with a linearly polarized solid-state laser beam. The polarization of the excitation laser was controlled by a quarter-wave plate and a polarizer. We collected the Raman scattered light with the same polarization as the excitation laser. A typical Raman spectrum of device S2 is shown in Fig.
S5(b), where five Raman peaks are identified, belonging to the A1 modes of WTe2 [36]. We further measured the polarization dependence of the intensities of peaks P2 and P11 [denoted in Fig. S5(b)] in Figs. S5(c) and S5(d), respectively. Based on previous reports [36], the polarization direction with maximum intensity was assigned as the b axis. The measured crystalline orientation is indicated in the optical image [Fig. S5(a)], where the applied a.c. current is approximately parallel to the a axis.

Figure S5: Polarized Raman spectroscopy of WTe2 to determine the crystalline orientation.
a, Optical image of device S2. The crystalline axes, i.e., the a axis and b axis, determined by the polarized Raman spectroscopy, are denoted by the black arrows. The applied a.c. current is noted by the red arrow, which is approximately aligned with the a axis.
b, A typical Raman spectrum measured at 514 nm excitation wavelength, with the polarization direction approximately along the b axis. Five Raman peaks are observed, which belong to the A1 modes of WTe2 [36].
c,d, Polarization dependence of the intensities of peaks (c) P2 and (d) P11. Here the polarization angle is taken as 0° along the b axis, along which maximum intensity is observed [36].

Supplemental Note 3: Angle-dependent longitudinal resistance and third-order nonlinear Hall effect.
The third-order anomalous Hall effect (AHE) is investigated in device S1, as shown in Fig. S6(a). By exploiting the circular disc electrode structure, the angle dependence of the third-order AHE is measured. It is highly sensitive to the crystalline orientation, as shown in Fig. S6(c), a sensitivity inherited from the intrinsic anisotropy of WTe2 [26].
Based on the symmetry of WTe2 [26], the third-order AHE shows an angle dependence following the formula
$$\frac{E_H^{3\omega}}{(E^\omega)^3} \propto \frac{\cos(\theta-\theta_0)\sin(\theta-\theta_0)\left[(\chi_{22} r^4 - 3\chi_{12} r^2)\sin^2(\theta-\theta_0) + (3\chi_{21} r^2 - \chi_{11})\cos^2(\theta-\theta_0)\right]}{\left[\cos^2(\theta-\theta_0) + r\sin^2(\theta-\theta_0)\right]^3},$$
where $E_H^{3\omega} = V_H^{3\omega}/W$, $E^\omega = I^\omega R_\parallel / L$, $V_H^{3\omega}$ is the third-harmonic Hall voltage, $I^\omega$ is the applied a.c. current, $R_\parallel$ is the longitudinal resistance, $W$ and $L$ are the channel width and length, respectively, $r$ is the resistance anisotropy, $\chi_{ij}$ are elements of the third-order susceptibility tensor, and $\theta_0$ is the angle misalignment between $\theta = 0°$ and the crystalline b axis. The fitting curve for this angle dependence is shown by the red line in Fig. S6(c), which yields the misalignment $\theta_0 \approx 1.5°$. In addition to the third-order AHE, the longitudinal resistance $R_\parallel$ also shows strong anisotropy [13], as shown in Fig. S6(b), following
$$R_\parallel(\theta) = R_b \cos^2(\theta - \theta_0) + R_a \sin^2(\theta - \theta_0),$$
consistent with previous results [13], where $R_a$ and $R_b$ are the resistance along the crystalline a and b axis, respectively.

Figure S6: Angle dependence of the third-order nonlinear Hall effect in device S1 at 5 K.
a, The third-harmonic anomalous Hall voltages at various $\theta$. Here $\theta$ is defined as the relative angle between the alternating current and the baseline (approximately along the b axis).
b,c, (b) $R_{xx}$ and (c) the third-order Hall signal $E_H^{3\omega}/(E^\omega)^3$ as a function of $\theta$, respectively.

Supplemental Note 4: Magnetotransport properties of WTe2.
The magnetotransport properties of device S1 were investigated. Figure S7(a) shows the resistivity as a function of temperature. The resistivity decreases upon decreasing temperature down to a residual resistivity at low temperatures, showing typical metallic behavior.
Figure S7(b) shows the magnetoresistance (MR) and Hall resistance as a function of magnetic field. MR is defined as $\frac{R_{xx}(B) - R_{xx}(0)}{R_{xx}(0)} \times 100\%$. The low residual resistance and the large, non-saturating MR indicate the high quality of the WTe2 devices [37,38]. The carrier mobility of device S1 is estimated to be as high as 4974.4 cm²/(V·s). Moreover, resistance oscillations due to the formation of Landau levels are also observed, as shown in Fig. S7(c), indicative of the high crystal quality. The oscillation $\Delta R_{xx}$ is obtained by subtracting a parabolic background. A fast Fourier transform (FFT) is performed, as shown in Fig. S7(d). Three frequencies are observed, indicating multiple Fermi pockets in WTe2, which is consistent with previous work [37-39]. The dominant FFT peak $f_1$ is around 44 T.

Figure S7: Transport properties of device S1.
a, The resistivity as a function of temperature.
b, Magnetoresistance and Hall resistance at 5 K.
c, Oscillations of $R_{xx}$ at 5 K. $\Delta R_{xx}$ is obtained by subtracting a parabolic background.
d, The FFT analysis of the $\Delta R_{xx}$ oscillations, where three peaks are obtained.

Supplemental Note 5: Symmetry analysis of WTe2.
Td-WTe2 has a distorted crystal structure with low symmetry. Here we analyze the thickness dependence of the symmetry in WTe2 in detail. Figure S8(a) shows the b-c plane of monolayer WTe2. Each monolayer consists of a layer of W atoms sandwiched between two layers of Te atoms, denoted as Te1 (yellow) and Te2 (red), respectively.
The inversion symmetry of the monolayer is approximately satisfied, and Te1 is equivalent to Te2. The presence of inversion symmetry forces the Berry curvature dipole (BCD) to be zero. However, when a perpendicular displacement field is applied to break the inversion symmetry, Te1 is no longer equivalent to Te2. As shown in the bottom of Fig. S8(a), an in-plane electric polarization along the b axis can be induced by the out-of-plane displacement field. The electric polarization along the b axis plays a similar role to the d.c. electric field in our work, leading to a nonzero BCD along the a axis.
The nonzero BCD in bilayer WTe2 originates from crystal symmetry breaking. The largest symmetry in bilayer WTe2 is a single mirror symmetry $M_a$ with the bc plane as the mirror plane. As shown in Fig. S8(b), the stacking between the two layers breaks the inversion symmetry of bilayer WTe2. Under the inversion operation, the top and bottom layers are swapped and fail to coincide with each other. As shown in Fig. S8(b), Te1 is not equivalent to Te2 due to the stacking arrangement in the bilayer. Therefore, an in-plane electric polarization P along the b axis exists, similar to the case of a monolayer under an out-of-plane displacement field. The polarization P is able to induce a nonzero BCD along the perpendicular crystalline axis, i.e., along the a axis.
In fact, such an in-plane polarization P along the b axis in monolayer and bilayer WTe2 is already evidenced by the circular photogalvanic effect [14]. Symmetry-breaking-induced polarization is also confirmed in various 2D materials, such as WSe2/black phosphorus heterostructures [40].
In trilayer and thicker WTe2, as shown in Fig. S8(c), Te1 and Te2 are equivalent in the bulk, leading to a vanishing electric polarization. The in-plane inversion symmetry in the bulk forbids the presence of an in-plane BCD. However, the inversion symmetry is broken at the surface. Therefore, for trilayer and thicker WTe2, a small but nonzero BCD may occur at the surface.
Figure S8: Crystal structure of Td-WTe2.
a, b-c plane of monolayer Td-WTe2.
b, b-c plane of bilayer Td-WTe2. The stacking arrangement breaks the inversion symmetry.
c, b-c plane of trilayer Td-WTe2.

Importantly, the surface BCD and the second-order AHE it induces in few-layer WTe2 were reported in Ref. [13] and are also observed in our device. We measured the second-order AHE without the application of Edc in a WTe2 device, as shown in Fig. S9. This second-order AHE is observable when applying $I^\omega$ on the order of 1 mA. By comparison, the second-order AHE induced by the d.c. field is observable when applying $I^\omega$ smaller than 0.05 mA (Fig. 1 of main text). The calculated BCD along the a axis, $D_a$, without the application of Edc is ~0.03 nm, which is one order of magnitude smaller than $D_a^{(1)} \approx 0.29$ nm measured under Edc = 3 kV/m (Fig. 4 of main text). These results confirm the validity of the Edc-induced BCD in our work.

Figure S9: The second-order AHE without external d.c. electric field in WTe2 at 1.8 K.

Supplemental Note 6: Theoretical analysis and calculations of field-induced Berry curvature dipole.
The electric-field-induced Berry curvature depends on the Berry connection polarizability tensor and the applied d.c. field through the relations
$$\mathbf{\Omega}^{(1)} = \nabla_{\mathbf{k}} \times (\overleftrightarrow{G}\,\mathbf{E}^{dc}), \qquad \Omega_\beta^{(1)}(n, \mathbf{k}) = \varepsilon_{\beta\gamma\mu}\left[\partial_\gamma G_{\mu\nu}(n, \mathbf{k})\right] E_\nu^{dc},$$
with $G_{\mu\nu}(n, \mathbf{k}) = 2e\,\mathrm{Re} \sum_{m \neq n} \frac{(A_\mu)_{nm}(A_\nu)_{mn}}{\varepsilon_n - \varepsilon_m}$, where $A_{mn}$ is the interband Berry connection and $e$ is the electron charge. The superscript "(1)" indicates that the physical quantity is first order in the electric field. Here the Greek letters refer to spatial directions, $m$, $n$ refer to energy band indices, $\varepsilon_{\beta\gamma\mu}$ is the Levi-Civita symbol, and $\partial_\gamma$ is short for $\partial/\partial k_\gamma$. The Berry connection polarizability tensor of WTe2 is calculated and shown in Figs. S10(a)-(c).
From the definition, the field-induced BCD is
$$D_{\alpha\beta}^{(1)} = \int_k [d\mathbf{k}]\, f_0\left(\partial_\alpha \Omega_\beta^{(1)}\right) = \varepsilon_{\beta\gamma\mu} \int_k [d\mathbf{k}]\, f_0\left[\partial_\alpha(\partial_\gamma G_{\mu\nu})\right] E_\nu^{dc},$$
where $\int_k [d\mathbf{k}] = \sum_n \frac{1}{(2\pi)^3} \iiint d\mathbf{k}$ is taken over the first Brillouin zone of the system and summed over all energy bands.
In two-dimensional systems, $\mathbf{\Omega}^{(1)}$ is constrained to the out-of-plane direction, and the BCD behaves as a pseudovector in the plane. Here we choose our coordinate frame along the crystal principal axes $a$, $b$, $c$. By applying a d.c. electric field $\mathbf{E}^{dc} = (E_a^{dc}, E_b^{dc})$ in the $ab$ plane, the induced $\Omega_c^{(1)}$ reads
$$\Omega_c^{(1)}(n, \mathbf{k}) = (\partial_a G_{ba} - \partial_b G_{aa}) E_a^{dc} + (\partial_a G_{bb} - \partial_b G_{ab}) E_b^{dc}.$$
$D_\alpha^{(1)}$ defined in a few-layer 2D system can be approximately derived from $D_{\alpha c(\mathrm{bulk})}^{(1)}$ of the bulk system by $D_\alpha^{(1)} = d\, D_{\alpha c(\mathrm{bulk})}^{(1)}$, where $d$ is the thickness of the film. The independent components of $D_\alpha^{(1)}$ are related to the Berry connection polarizability tensor, $\mathbf{E}^{dc}$ and $d$. The mirror symmetry $M_a$ and the glide symmetry $\tilde{M}_b$ in WTe2 constrain $D_\alpha^{(1)}$ to be
$$D_a^{(1)} = \int_k [d\mathbf{k}]\, f_0\left[\partial_a(\partial_a G_{bb}) - \partial_a(\partial_b G_{ab})\right] E_b^{dc}\, d,$$
$$D_b^{(1)} = \int_k [d\mathbf{k}]\, f_0\left[\partial_b(\partial_a G_{ba}) - \partial_b(\partial_b G_{aa})\right] E_a^{dc}\, d,$$
where the other terms are prohibited by symmetry. In the experiment, the d.c. electric field is applied along a direction at an angle $\theta$ from the $b$ axis, which can be expressed as $\mathbf{E}^{dc} = E^{dc}(-\sin\theta, \cos\theta)$. The induced BCD $\mathbf{D}^{(1)}(\theta) = (D_a^{(1)}(\theta), D_b^{(1)}(\theta))$ hence reads
$$D_a^{(1)}(\theta) = \int_k [d\mathbf{k}]\, f_0\left[\partial_a(\partial_a G_{bb}) - \partial_a(\partial_b G_{ab})\right] E^{dc} \cos\theta\, d,$$
$$D_b^{(1)}(\theta) = \int_k [d\mathbf{k}]\, f_0\left[\partial_b(\partial_b G_{aa}) - \partial_b(\partial_a G_{ba})\right] E^{dc} \sin\theta\, d.$$
With the field-induced BCD, the second-order Hall current under an a.c. electric field $\mathbf{E}^\omega$ is [9]
$$j_\alpha^{2\omega} = -\varepsilon_{\alpha\mu\gamma} \frac{e^3 \tau}{2(1 + i\omega\tau)\hbar^2} D_{\beta\mu}^{(1)} E_\beta^\omega E_\gamma^\omega.$$
In two-dimensional systems, where $\mathbf{\Omega}^{(1)}$ is along the out-of-plane direction and $D_{\alpha c}^{(1)} = \int_k [d\mathbf{k}]\, f_0(\partial_\alpha \Omega_c^{(1)})$, this is equivalent to
$$\mathbf{j}^{2\omega} = -\frac{e^3 \tau}{2(1 + i\omega\tau)\hbar^2} (\hat{\mathbf{z}} \times \mathbf{E}^\omega)\left[\mathbf{D}^{(1)}(\theta) \cdot \mathbf{E}^\omega\right].$$
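The cosine and sine angle dependences of the induced BCD components can be illustrated with a short numerical sketch. This is not this work's calculation code; the amplitudes are simply the measured extreme values quoted from Fig. 4 of the main text:

```python
import numpy as np

D_A0 = -0.28  # nm, D_a^(1) at theta = 0 (E_dc along b axis), Fig. 4 of main text
D_B0 = -0.05  # nm, D_b^(1) at theta = 90 deg (E_dc along -a axis)

def induced_bcd(theta_deg):
    """Components (D_a^(1), D_b^(1)) for a d.c. field at angle theta from the b axis."""
    t = np.radians(theta_deg)
    return D_A0 * np.cos(t), D_B0 * np.sin(t)

d_a0, d_b0 = induced_bcd(0.0)     # only D_a^(1) survives for E_dc along b
d_a90, d_b90 = induced_bcd(90.0)  # only D_b^(1) survives for E_dc along -a
```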
The magnitude of the induced second-order Hall conductivity is determined by $\mathbf{D}^{(1)}(\theta) \cdot \hat{\mathbf{E}}^\omega$, which is the projection of the pseudovector $\mathbf{D}^{(1)}$ onto the direction of $\mathbf{E}^\omega$, and the direction of the Hall current is perpendicular to $\mathbf{E}^\omega$. Consequently, we can measure the $\mathbf{E}^{dc}$-induced BCD $\mathbf{D}^{(1)}$ by detecting its projected component $D_a^{(1)}(\theta)$ or $D_b^{(1)}(\theta)$ with an a.c. electric field along the corresponding direction. From the above derivation, as the direction of the d.c. electric field varies in the $ab$ plane, the independent components of the induced BCD, $D_a^{(1)}$ and $D_b^{(1)}$, change as a cosine and a sine function, respectively. This relation is clearly demonstrated by our experimental results in Fig. 4 of main text.
With first-principles calculations, we estimate the extreme values of $D_a^{(1)}(0°)$ and $D_b^{(1)}(90°)$, as shown in Fig. S10(d). We take $d \sim 8.4$ nm and $E^{dc} \sim 3$ kV/m according to the experiment. $D_a^{(1)}(0°)$ and $D_b^{(1)}(90°)$ refer to $D_a^{(1)}$ and $D_b^{(1)}$ with the applied $E^{dc}$ along the b axis and -a axis, respectively. It is found that $D_b^{(1)}(90°)$ varies from ~-0.14 nm to 0 as the chemical potential is tuned away from 0, and $D_a^{(1)}(0°)$ shows a non-monotonic change between 0.18 and -0.13 nm as the chemical potential is changed. The experimental results of $D_b^{(1)}(90°) \approx -0.05$ nm and $D_a^{(1)}(0°) \approx -0.28$ nm (Fig. 4 in main text) agree with the calculations in order of magnitude.

Figure S10: Calculations of the Berry connection polarizability tensor and the field-induced Berry curvature dipole in WTe2.
a-c, The calculated distribution of the Berry connection polarizability tensor elements (a) $G_{aa}$, (b) $G_{bb}$, (c) $G_{ab}$ in the $k_z = 0$ plane of the Brillouin zone for the occupied bands. The unit of the BCP is Å$^2\cdot$V$^{-1}$.
The grey lines depict the Fermi surface.
d, Calculated field-induced BCD $D_a^{(1)}(0°)$ and $D_b^{(1)}(90°)$ with respect to the chemical potential $\mu$ for $E^{dc} = 3$ kV/m. In the calculations, the finite-temperature effect is considered with a broadening of $k_B T$ at 5 K.

Supplemental Note 7: Electric field dependence of second-order Hall signals.
The second-harmonic I-V characteristics in Fig. 1(e) of main text are converted into $V_H^{2\omega}$ versus $(V^\omega)^2$ in Fig. S11(a), where linear relationships are observed. The $E_H^{2\omega}/(E^\omega)^2$ as a function of the applied $E_{dc}$ is further calculated and presented in Fig. S11(b).

Figure S11: Second-order AHE modulated by d.c. electric field at 5 K.
a, The second-harmonic Hall voltage $V_H^{2\omega}$ as a function of $(V^\omega)^2$ with $\mathbf{E}^{dc}$ along the b axis and $\mathbf{E}^\omega$ along the -a axis.
b, The second-order Hall signal $E_H^{2\omega}/(E^\omega)^2$ as a function of $E_{dc}$ at $\theta = 0°$ and $\theta = 90°$ with $\mathbf{E}^\omega \parallel -a$ axis.

Supplemental Note 8: Control experiments in device S2.
To demonstrate the symmetry constraint in WTe2, control experiments were carried out in device S2. As schematically shown in Figs. S12(a) and S12(d), the a.c. and d.c. current sources are applied. The SR830 acts as an effective a.c. current source by connecting a resistor in series, giving an output impedance of 10 kΩ. The d.c. source is the Keithley current source with output impedance ~20 MΩ. With the d.c. field applied along the a and b axis, respectively, the first-harmonic Hall voltage shows no obvious dependence on $\mathbf{E}^{dc}$, as shown in Figs. S12(b) and S12(e), which indicates the independence of the two electric sources. When applying $\mathbf{E}^{dc} \parallel \mathbf{E}^\omega \parallel a$ axis, no second-order nonlinear Hall effect can be observed in Fig. S12(c).
Nevertheless, upon applying $\mathbf{E}^{dc} \perp \mathbf{E}^\omega$ with $\mathbf{E}^\omega \parallel a$ axis, as shown in Fig. S12(f), a nonzero second-order nonlinear Hall effect emerges due to the $\mathbf{E}^{dc}$-induced Berry curvature dipole along the a axis.

Figure S12: The measurements applying both a d.c. electric field $\mathbf{E}^{dc}$ and an a.c. current in device S2 at 1.8 K.
a, Schematic of the measurement configuration for (b) and (c).
b, First-harmonic Hall voltage $V_H^\omega$ under $\mathbf{E}^{dc} \parallel \mathbf{E}^\omega \parallel a$ axis.
c, There is no clear second-harmonic Hall voltage $V_H^{2\omega}$ under $\mathbf{E}^{dc} \parallel \mathbf{E}^\omega \parallel a$ axis.
d, Schematic of the measurement configuration for (e) and (f).
e, The $V_H^\omega$ under various $\mathbf{E}^{dc}$ with $\mathbf{E}^{dc} \perp \mathbf{E}^\omega$ and $\mathbf{E}^\omega \parallel a$ axis.
f, The $V_H^{2\omega}$ under various $\mathbf{E}^{dc}$ with $\mathbf{E}^{dc} \perp \mathbf{E}^\omega$ and $\mathbf{E}^\omega \parallel a$ axis.

Supplemental Note 9: Discussions of other possible origins of the second-order AHE.
1) Diode effect. An accidental diode at a contact can lead to rectification, causing higher-order transport. This can, however, be safely ruled out in this work for the following reasons:
(a) Extrinsic signals of this origin should be strongly contact dependent, so their angle dependence should also be coupled to the extrinsic contacts. Nevertheless, the angle dependence of the second-order AHE in Fig. 2 and Fig. S12 is well consistent with the inherent symmetry of WTe2, which excludes extrinsic origins.
(b) The two-terminal d.c.
measurements for all the diagonal electrodes show linear I-V characteristics, as shown in Fig. S13(a), excluding the existence of a diode effect. Linear fittings are performed for the two-terminal I-V curves. The R-squared of the linear fittings is at least 0.99997, indicating excellent linearity. Further, the deviation from linearity is analyzed by subtracting the linear part, as shown in Fig. S13(b). It is found that $\Delta V_{dc}$, i.e., the deviation, is four orders of magnitude smaller than the original $V_{dc}$, indicating a negligible nonlinearity. Moreover, $\Delta V_{dc}$ shows no obvious current or angle dependence [Fig. S13(b)], and its magnitude is also much smaller than that of the higher-harmonic Hall voltages [Fig. S13(c)], further indicating that the observed higher-order transport in this work cannot be attributed to a contact-induced diode effect.

Figure S13: Two-terminal d.c. measurements at 5 K in device S1.
a, Current-voltage curves from two-terminal d.c. measurements for all the diagonal electrodes.
b, The current dependence of $\Delta V_{dc}$, that is, the deviations from linearity of the current-voltage curves in Fig. S13(a).
c, The comparison of $\Delta V_{dc}$, $V_H^{2\omega}$ and $V_H^{3\omega}$. For $\Delta V_{dc}$ and $V_H^{3\omega}$, the excitation current is applied at $\theta = 30°$, while for $V_H^{2\omega}$, the excitation current is applied along the a axis and a d.c. field of 3 kV/m is applied at $\theta = 30°$.

2) Capacitive effect. Contact resistance is generally inevitable between metal electrodes and two-dimensional materials, and could induce an accidental capacitive effect, resulting in higher-order transport effects. Here, the second-order AHE shows a negligible dependence on frequency, as shown in Fig. S14(a), excluding the capacitive effect. The phase of the second-harmonic Hall voltage is also investigated, where the Y signal dominates over the X signal [Fig. S14(b)]. The phase of the second-harmonic Hall voltage is approximately ±90°, as shown in Fig.
S14(c). These features further exclude the capacitive effect.

Figure S14: Frequency dependence and phase of the second-order AHE in device S1 at 5 K with $\mathbf{E}^{dc} = 3$ kV/m at $\theta = 60°$.
a, The second-order Hall signals at different frequencies.
b, The X and Y signals of the second-order Hall voltages.
c, The absolute value of the phase of the second-order Hall voltages.

3) Thermal effect. A thermal effect can also induce a second-order signal [41]. If the observed nonlinear Hall effect originated from a thermal effect, it should respond to both longitudinal and transverse d.c. electric fields. However, as shown in Fig. S12, when applying $\mathbf{E}^{dc} \parallel \mathbf{E}^\omega \parallel a$ axis, no second-order nonlinear Hall effect is observed, whereas upon applying $\mathbf{E}^\omega \perp \mathbf{E}^{dc}$, a nonzero second-order nonlinear Hall effect emerges. This observation is clearly inconsistent with a thermal effect. Moreover, the observed second-order nonlinear Hall effect shows strong anisotropy, as shown in Fig. 2 of main text. The angle dependence of the d.c. field-induced second-order Hall effect is well consistent with the inherent symmetry of WTe2, which cannot be explained by a thermal effect.
4) Thermoelectric effect. A Joule-heating-induced temperature gradient across the sample can drive a thermoelectric voltage, leading to a second-order nonlinear Hall effect.
This thermoelectric effect can also be excluded for the following reasons:
(a) Uniform Joule heating will not induce a temperature gradient and thus no thermoelectric voltage across the sample.
(b) To generate a thermoelectric voltage, the Joule heating should couple to an external asymmetry, such as a contact junction or the flake shape, which should be unrelated to the inherent symmetry of WTe2. However, the anisotropy of the second-order nonlinear Hall effect is well consistent with the inherent symmetry analysis, as shown in Fig. 2 of main text.
5) A residue of the first-harmonic Hall response $V_H^\omega$. The influence of $V_H^\omega$ on $V_H^{2\omega}$ can be ruled out because the first- and second-harmonic signals show different dependences on the d.c. electric field. As shown in Fig. S15, for the first-harmonic Hall signal ($V_H^\omega$), the I-V curves under $E_{dc} = \pm 3$ kV/m overlap with each other. By comparison, the second-harmonic Hall signal ($V_H^{2\omega}$) shows an antisymmetric dependence on $E_{dc}$, where the sign of $V_H^{2\omega}$ changes upon changing the sign of $E_{dc}$. This indicates that the existence of the first-order signal $V_H^\omega$ does not affect the measurements of the second-order signal $V_H^{2\omega}$.

Figure S15: The first- and second-harmonic signals at 5 K with $\mathbf{E}^{dc}$ along the b axis ($\theta = 0°$) and $\mathbf{E}^\omega$ along the -a axis.
a, The first-harmonic Hall voltage $V_H^\omega$ as a function of $I^\omega$ at $E_{dc} = \pm 3$ kV/m.
b, The second-harmonic Hall voltage $V_H^{2\omega}$.

6) Trivial effect of the d.c. source. We measured the first-harmonic longitudinal voltage upon applying Edc = 3 kV/m, as shown in Fig. S16. It is clearly found that when the sign of the d.c. electric field is reversed, the I-V curves overlap with each other. The results show that the d.c. source does not affect the a.c. measurements.

Figure S16: The first-harmonic longitudinal voltage versus current under different d.c. electric fields at 5 K. The $\mathbf{E}^\omega$ and $\mathbf{E}^{dc}$ are along the a axis.

7) Longitudinal nonlinearity originating from a circuit artifact. We have measured both the second-harmonic Hall and longitudinal voltages at all angles, as shown in Fig. S17. The measurement configuration is shown in the inset of Fig. S17(d), with the d.c. field applied at angle $\theta$. It is clearly found that the Hall nonlinearity dominates over the longitudinal one, which guarantees that the observed second-order Hall effect does not originate from a longitudinal nonlinearity induced by a circuit artifact.

Figure S17: The second-harmonic Hall voltage $V_H^{2\omega}$ and longitudinal voltage $V_L^{2\omega}$ with $\mathbf{E}^\omega \parallel -a$ axis and $\mathbf{E}^{dc} = 1.5$ kV/m along different angles at 5 K. The angle $\theta$ is defined in Fig. 1(d) of main text.
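The linearity analysis used throughout this note (linear fit, R-squared, residuals) can be sketched as follows; this is an illustrative reimplementation with synthetic data, not the actual analysis code:

```python
import numpy as np

def linearity_check(i_dc, v_dc):
    """Fit V = R*I + b; return (R-squared, residuals from the linear fit)."""
    slope, intercept = np.polyfit(i_dc, v_dc, 1)
    residuals = v_dc - (slope * i_dc + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((v_dc - v_dc.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, residuals

# Synthetic two-terminal I-V data: a 20-ohm contact pair with a tiny quadratic term
i = np.linspace(-0.6e-3, 0.6e-3, 41)
v = 20.0 * i + 1e-4 * i ** 2
r_squared, dv = linearity_check(i, v)  # r_squared is essentially 1
```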
[Figure S17 data: six panels (a)-(f) of V_H^2ω and V_L^2ω versus I at different angles; only axis values survive extraction.]

Supplemental Note 10: Angle dependence of the parameter C0 obtained from the fittings of the scaling law.

The second-order Hall signal E_H^2ω/(E^ω)^2 is found to satisfy the scaling law E_H^2ω/(E^ω)^2 = C0 + C1 σ_xx + C2 σ_xx^2. For E_dc = 3 kV/m with a fixed direction (angle θ), a set of curves of V_H^2ω vs. I^ω is measured at different temperatures, with I^ω applied along the −a axis and the b axis, respectively. By varying the temperature, σ_xx is changed accordingly. Therefore, for a fixed angle θ, the relationship between E_H^2ω/(E^ω)^2 and σ_xx is plotted. By fitting the experimental data, the parameter C0 is obtained and presented in Fig. S18.

Figure S18: Angle dependence of the coefficient C0. a,b, The coefficient C0 as a function of θ with the amplitude of E_dc fixed at 3 kV/m for (a) E^ω ∥ −a axis and (b) E^ω ∥ b axis.
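The fitting step described above is an ordinary quadratic polynomial fit of the measured response against σ_xx, whose constant term is C0. The data below are synthetic, with assumed coefficients, purely to illustrate the procedure:

```python
import numpy as np

# Synthetic stand-in for E_H^2w/(E^w)^2 versus sigma_xx at one fixed angle;
# the "true" coefficients are assumed for illustration only.
rng = np.random.default_rng(0)
sigma_xx = np.linspace(1.0, 3.0, 12)              # varied via temperature (a.u.)
C0_true, C1_true, C2_true = 0.4, -0.1, 0.05
response = C0_true + C1_true * sigma_xx + C2_true * sigma_xx**2
response = response + rng.normal(0.0, 1e-3, sigma_xx.size)  # measurement noise

# np.polyfit returns highest power first: [C2, C1, C0]
C2_fit, C1_fit, C0_fit = np.polyfit(sigma_xx, response, deg=2)
```

Repeating the fit for each angle θ yields the C0(θ) curves of Fig. S18.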
[Figure S18 data: C0 (10^-7 m/V) versus θ (deg), panels (a) and (b); only axis values survive extraction.]

diff --git a/4NAzT4oBgHgl3EQf9f64/content/tmp_files/load_file.txt b/4NAzT4oBgHgl3EQf9f64/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..b98be33f960e6cc619f978350dff6372933bdc45
--- /dev/null
+++ b/4NAzT4oBgHgl3EQf9f64/content/tmp_files/load_file.txt
@@ -0,0 +1,1360 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf,len=1359

Control over Berry Curvature Dipole with Electric Field in WTe2

Xing-Guo Ye,1,* Huiying Liu,2,* Peng-Fei Zhu,1,* Wen-Zheng Xu,1,* Shengyuan A. Yang,2 Nianze Shang,1 Kaihui Liu,1 and Zhi-Min Liao1,†

1State Key Laboratory for Mesoscopic Physics and Frontiers Science Center for Nano-optoelectronics, School of Physics, Peking University, Beijing 100871, China
2Research Laboratory for Quantum Materials, Singapore University of Technology and Design, Singapore 487372, Singapore

Berry curvature dipole plays an important role in various nonlinear quantum phenomena. However, the maximum symmetry allowed for a nonzero Berry curvature dipole in the transport plane is a single mirror line, which strongly limits its effects in materials. Here, via probing the nonlinear Hall effect, we demonstrate the generation of a Berry curvature dipole by an applied dc electric field in WTe2, which is used to break the symmetry constraint.
A linear dependence between the dipole moment of the Berry curvature and the dc electric field is observed. The polarization direction of the Berry curvature is controlled by the relative orientation of the electric field and the crystal axes, and can be further reversed by changing the polarity of the dc field. Our Letter provides a route to generate and control the Berry curvature dipole in broad material systems and to facilitate the development of nonlinear quantum devices.

Berry curvature is an important geometrical property of Bloch bands, which can lead to a transverse velocity of Bloch electrons moving under an external electric field [1-6]. Hence, it is often regarded as a kind of magnetic field in momentum space, leading to various exotic transport phenomena, such as the anomalous Hall effect (AHE) [1], the anomalous Nernst effect [7], and an extra phase shift in quantum oscillations [8]. The integral of the Berry curvature over the Brillouin zone for fully occupied bands gives rise to the Chern number [5], which is one of the central concepts of topological physics.
Recently, Sodemann and Fu [9] proposed that the dipole moment of the Berry curvature over the occupied states, known as the Berry curvature dipole (BCD), plays an important role in the second-order nonlinear AHE in time-reversal-invariant materials. For transport in the x-y plane, which is typical in experiments, the relevant BCD components form an in-plane pseudovector with D_α = ∫_k f0 (∂_α Ω_z) [9], where D_α is the BCD component along direction α, k is the wave vector, the integral is over the Brillouin zone with summation over the band index, f0 is the Fermi distribution (in the absence of external field), Ω_z is the out-of-plane Berry curvature, and ∂_α = ∂/∂k_α. It results in a second-harmonic Hall voltage in response to a longitudinal ac probe current, which could find useful applications in high-frequency rectifiers, wireless charging, energy harvesting, infrared detection, etc. BCD and its associated nonlinear AHE have been predicted in several material systems [9-11] and experimentally detected in systems such as two-dimensional (2D) monolayer or few-layer WTe2 [12-15], the Weyl semimetal TaIrTe4 [16], 2D MoS2 and WSe2 [17-20], corrugated bilayer graphene [21], and a few topological materials [22-25]. However, a severe limitation is that the BCD obeys a rather stringent symmetry constraint.
In the transport plane, the maximum symmetry allowed for D_α is a single mirror line [9]. In several previous Letters [17-21], one needs to perform additional material engineering, such as lattice strain or interlayer twisting, to generate a sizable BCD. This constraint limits the available material platforms with nonzero BCD, which is unfavorable for the in-depth exploration of BCD-related physics and practical applications. Recent works suggested an alternative route to obtain nonzero BCD, that is, utilizing the Berry connection polarizability to achieve a field-induced BCD, for which additional lattice engineering is unnecessary [26,27]. The Berry connection polarizability is also a band geometric quantity, related to the field-induced positional shift of Bloch electrons [28]. It is a second-rank tensor, defined as G_ab(k) = ∂A_a^(1)(k)/∂E_b, where A^(1) is the field-induced Berry connection, E is the applied electric field [28], and the superscript "(1)" indicates that the quantity is first order in the electric field.
Then, the E-field-induced Berry curvature is given by Ω^(1) = ∇_k × (G·E) [27], where G is the second-rank tensor defined above. This field-induced Berry curvature leads to a field-induced BCD D_α^(1). Considering transport in the x-y plane with the applied dc E field also in the plane, we have D_α^(1) = ∫_k f0 (∂_α Ω_z^(1)) = ε_zγμ ∫_k f0 [∂_α(∂_γ G_μν)] E_ν, where α, γ, μ, ν = x, y, and ε_zγμ is the Levi-Civita symbol. In systems where the original BCD is forbidden by the crystal symmetry, the BCD induced by an external E field could generally be nonzero and become the dominant contribution. In such a case, the symmetry is lowered by the applied E field, and the induced BCD should be linear in E, with its direction also controllable by the E field.
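The claimed linearity can be checked numerically on a toy model (not the WTe2 band structure): assume a field-induced Berry curvature of the form Ω_z^(1)(k) = E·g(k) with an arbitrary smooth g(k), evaluate D_x^(1) = ∫_k f0 ∂Ω_z^(1)/∂k_x by finite differences on a grid, and observe that doubling E doubles the dipole.

```python
import numpy as np

# Toy model (assumed form, illustration only): Omega_z1(k) = E * sin(kx)*cos(ky)
# on a square Brillouin-zone grid, with a crude zero-temperature occupation f0.
kx, ky = np.meshgrid(np.linspace(-np.pi, np.pi, 201),
                     np.linspace(-np.pi, np.pi, 201), indexing="ij")
f0 = (kx**2 + ky**2 < 2.0).astype(float)   # filled Fermi sea inside a circle
dkx = kx[1, 0] - kx[0, 0]
dky = ky[0, 1] - ky[0, 0]

def dipole_x(E):
    """Discretized D_x^(1) = sum_k f0 * d(Omega_z^(1))/dkx * dk."""
    omega = E * np.sin(kx) * np.cos(ky)
    domega_dkx = np.gradient(omega, dkx, axis=0)
    return np.sum(f0 * domega_dkx) * dkx * dky

D_at_E1 = dipole_x(1.0)
D_at_E2 = dipole_x(2.0)   # twice D_at_E1: the induced dipole is linear in E
```

Any g(k) whose k_x derivative does not integrate to zero over the Fermi sea gives a finite dipole, and the overall scale is strictly proportional to E, mirroring the statement above.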
So far, this BCD caused by the Berry connection polarizability, and its field control, have not been experimentally demonstrated, and the nonlinear Hall effect derived from this mechanism has not been observed. In this Letter, we report the manipulation of electric-field-induced BCD due to the Berry connection polarizability. Utilizing a dc electric field E_dc to produce BCD in bulk WTe2 (for which the inherent BCD is symmetry forbidden), the second-harmonic Hall voltage V_H^2ω is measured as a response to an applied ac current I^ω. Both the orientation and the magnitude of the induced BCD are highly tunable by the applied E_dc. Our Letter provides a general route to extend BCD to abundant material platforms with high tunability, promising for practical applications. The WTe2 devices were fabricated with circular disc electrodes (device S1) or Hall-bar shaped electrodes (device S2). The WTe2 flakes were exfoliated from bulk crystal and then transferred onto the prefabricated electrodes (Supplemental Material, Note 1 [29]).
The WTe2 thickness of device S1 is 8.4 nm (Supplemental Material, Fig. S1 [29]), corresponding to 12-layer WTe2, and we present the results from device S1 in the main text. The crystal orientations of the WTe2 devices were identified by their long, straight edges [12] and further confirmed by both polarized Raman spectroscopy (Supplemental Material, Note 2 [29]) and angle-dependent transport measurements (Supplemental Material, Note 3 [29]). The electron mobility of device S1 is ~4974 cm²/(V s) at 5 K (Supplemental Material, Note 4 [29]). In our experiments, we use thick Td-WTe2 samples (thickness ~8.4 nm), which have an effective inversion symmetry in the x-y plane (the transport plane). This is formed by the combination of the mirror symmetry Ma and the glide mirror symmetry M̃b, as indicated in Fig.
1(c). The in-plane inversion leads to the absence of an inherent in-plane BCD, and hence of the nonlinear Hall effect, in the bulk (see Supplemental Material, Note 5 [29] for a detailed symmetry analysis). Because M̃b involves a half-cell translation along the c axis and hence is broken on the sample surface, a small but nonzero intrinsic BCD may exist on the surface. In fact, such BCD due to surface symmetry breaking has already been reported [13], and is also observed in our samples, although the signal is much weaker in thicker samples (see Supplemental Material, Fig. S9 [29]). To induce BCD in bulk WTe2 through the Berry connection polarizability, a dc electric field E_dc is applied in the x-y plane. As shown in Figs.
1(a) and 1(b), the field-induced Berry curvature shows a dipolelike distribution with nonzero BCD (theoretical calculations; see Supplemental Material, Note 6 [29]). The induced BCD can be controlled by the dc E field and should satisfy the following symmetry requirements. Because the presence of a mirror symmetry would force the BCD to be perpendicular to the mirror plane [9], the induced BCD D^(1) must be perpendicular to E_dc when E_dc is along the a or b axis. Control experiments were carried out in device S1 to confirm the above expectations. The measurement configuration is shown in Fig. 1(d) (see Supplemental Material, Fig. S2 [29], for the circuit schematic).
The probe ac current with ac field E^ω and frequency ω was applied approximately along the −a axis, satisfying E^ω ≪ E_dc, and the second-harmonic Hall

FIG. 1. (a) and (b) The field-induced Berry curvature Ω_c^(1)(k) in the k_z = 0 plane for a dc electric field E_dc = 3 kV/m applied along the (a) a or (b) b axis, respectively. The unit of Ω_c^(1)(k) is Å². The green arrows indicate the direction of E_dc. The gray lines depict the Fermi surface. (c) The a-b plane of monolayer Td-WTe2. (d) The optical image of device S1, where the angle θ is defined. (e) and (f) The second-harmonic Hall voltage V_H^2ω with E_dc (e) along the b axis (θ = 0°) and (f) along the −a axis (θ = 90°) at 5 K.
The E^ω is applied along the −a axis, as schematized in (d).

voltage V_H^2ω was measured to reveal the nonlinear Hall effect. The E_dc that is used to produce the BCD was applied along the direction characterized by the angle θ, which is the angle between the direction of E_dc and the baseline of a pair of electrodes [white line in Fig. 1(d)] that is approximately along the b axis. Then E_dc along θ = 0° (b axis) and θ = 90° (−a axis) correspond to induced D^(1) along the a axis and the b axis, respectively.
Because the nonlinear Hall voltage V_H^2ω is proportional to D^(1) · E^ω [9], the nonlinear Hall effect should be observed for E^ω ∥ D^(1) and vanish for E^ω ⊥ D^(1). As shown in Fig. 1(e), when E_dc is along θ = 0°, a nonlinear Hall voltage V_H^2ω is indeed observed, as expected.
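The selection rule behind these control experiments (signal ∝ D^(1) · E^ω) reduces to a short geometry exercise; the directions below are unit vectors in the a-b plane, and the rule "D^(1) ⊥ E_dc for E_dc along a or b" is taken as given from the symmetry argument above (signs and magnitudes are omitted in this sketch):

```python
import numpy as np

# Unit vectors of the crystal axes in the transport plane.
a_hat = np.array([1.0, 0.0])
b_hat = np.array([0.0, 1.0])

def induced_dipole_dir(E_dc_dir):
    """Mirror symmetry forces D^(1) perpendicular to E_dc when E_dc lies
    along a crystal axis (direction only; magnitude/sign omitted)."""
    return b_hat if np.allclose(E_dc_dir, a_hat) else a_hat

E_w = -a_hat  # probe ac field along the -a axis, as in the experiment

# E_dc || b (theta = 0): D^(1) || a, parallel to the probe -> finite signal
signal_theta0 = np.dot(induced_dipole_dir(b_hat), E_w)
# E_dc || a (theta = 90): D^(1) || b, perpendicular to the probe -> zero signal
signal_theta90 = np.dot(induced_dipole_dir(a_hat), E_w)
```

This reproduces the contrast between Figs. 1(e) and 1(f): a finite projection for θ = 0° and a vanishing one for θ = 90°.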
The E_dc along the b axis induces BCD along the a axis, leading to nonzero V_H^2ω since E^ω is applied along the −a axis. The second-order nature is verified by both the second-harmonic signal and the parabolic I-V characteristics. The nonlinear Hall voltage is found to be highly tunable by the magnitude of E_dc: the sign reverses when E_dc is reversed, and, moreover, the nonlinear Hall voltage is linearly proportional to E_dc (Supplemental Material [29], Fig. S11), as we expected. As for E_dc along θ = 90°, as shown in Fig. 1(f), the V_H^2ω is much suppressed, being at least one order of magnitude smaller than the V_H^2ω in Fig. 1(e).
Because in this case the E_dc along the a axis induces BCD along the b axis, E^ω is almost perpendicular to the BCD, leading to a negligible nonlinear Hall effect. Similar results are also reproduced in device S2 (Supplemental Material [29], Fig. S12). Such control experiments are well consistent with our theoretical expectation and confirm the validity of the field-induced BCD. Besides the crystalline axes (θ = 0° and 90°), we also study the case when E_dc is applied along arbitrary θ directions, to obtain the complete angle dependence of the field-induced BCD. Here, E^ω is applied along the −a or b axis to detect the BCD component along the a or b axis, i.e., D^(1) = [D_a^(1)(θ), D_b^(1)(θ)], where D_a^(1) and D_b^(1) are the BCD components along the a and b axes, respectively.
The measurement configurations are shown in Figs. 2(a) and 2(d). Figures 2(b) and 2(e) show the second-order Hall voltage as a function of θ, with the magnitude of E_dc fixed at 3 kV/m. The second-order Hall response E_H^2ω/(E^ω)^2 is calculated using E_H^2ω = V_H^2ω/W and E^ω = I^ω R_∥/L, where W is the channel width, R_∥ is the longitudinal resistance, and L is the channel length. As shown in Figs. 2(c) and 2(f), E_H^2ω/(E^ω)^2 demonstrates a strong anisotropy, closely related to the inherent symmetry of WTe2. First of all, it is worth noting that the second-order Hall signal is negligible at E_dc = 0. This is consistent with our previous analysis that the inherent bulk in-plane BCD is symmetry forbidden [26,27]. Second, E_H^2ω/(E^ω)^2 almost vanishes when E_dc ∥ E^ω along the a or b axis.
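The conversion from raw lock-in readings to the field-normalized response used above is simple bookkeeping. The geometry and signal values below are assumed placeholders, not the actual device dimensions:

```python
# Assumed sample geometry and example readings (placeholders for illustration)
W = 4.0e-6      # m, channel width
L = 10.0e-6     # m, channel length
R_par = 200.0   # ohm, longitudinal resistance R_||

V2w_H = 1.0e-3  # V, measured second-harmonic Hall voltage (example value)
I_w   = 3.0e-5  # A, probe current amplitude (example value)

E2w_H = V2w_H / W           # V/m, second-harmonic Hall field
E_w   = I_w * R_par / L     # V/m, probe field E^w
response = E2w_H / E_w**2   # m/V, the quantity E_H^2w/(E^w)^2
```

Note that the normalized response carries units of m/V, matching the axis scale reported for C0 in Fig. S18.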
This is constrained by the mirror symmetries Ma or M̃b, which force the BCD to be perpendicular to the mirror plane in such configurations.

FIG. 2. (a) and (d) Measurement configuration for the second-order AHE with (a) E^ω ∥ −a axis and (d) E^ω ∥ b axis, respectively. The E_dc, satisfying E_dc ≫ E^ω, is rotated along various directions. (b) and (e) The second-order Hall voltage V_H^2ω as a function of I^ω at fixed E_dc = 3 kV/m along various directions at 5 K, with (b) E^ω ∥ −a axis and (e) E^ω ∥ b axis, respectively. (c) and (f) The second-order Hall signal E_H^2ω/(E^ω)^2 as a function of θ at 5 K with (c) E^ω ∥ −a axis and (f) E^ω ∥ b axis, respectively.

Thus, when E_dc ∥ E^ω along the a or b axis, the induced BCD is perpendicular to both E_dc and E^ω, satisfying D^(1) · E^ω = 0, which leads to almost vanishing second-order Hall signals.
Moreover, E_H^2ω/(E^ω)^2 exhibits a sensitive dependence on the angle θ, indicating that the BCD is highly tunable by the orientation of E_dc. A local minimum of E_H^2ω/(E^ω)^2 is found at an intermediate angle around θ = 30° for E^ω ∥ −a axis in Fig. 2(c). This is because E_H^2ω/(E^ω)^2 depends not only on D^(1) · Ê^ω, i.e., the projection of the pseudovector D^(1) onto the direction of E^ω, but also on the anisotropy of the conductivity in WTe2. The two terms show different dependences on the angle θ, leading to a local minimum around θ = 30°. Through control experiments and symmetry analysis, extrinsic effects, such as the diode effect, thermal effect, and thermoelectric effect, could be safely ruled out as the main origin of the observed second-order nonlinear AHE (see Supplemental Material, Note 9 [29]). To further investigate this effect, the temperature dependence and scaling law of the second-order nonlinear Hall signal are studied. By changing the temperature, V_H^2ω and the longitudinal conductivity σ_xx were collected, with the magnitude of E_dc fixed at 3 kV/m. Figures 3(a) and 3(c) show the V_H^2ω at different temperatures with E^ω ∥ −a axis, θ = 0° and E^ω ∥ b axis, θ = 90°, respectively. A relatively small but nonzero second-order Hall signal is observed at 286 K. The scaling law, that is, the second-order Hall signal E_H^2ω/(E^ω)^2 versus σ_xx, is presented and analyzed in Figs. 3(b) and 3(d) for different angles θ. The σ_xx was calculated by σ_xx = (1/R_∥)(L/Wd), where d is the thickness of the WTe2, and was varied by changing the temperature. According to Ref.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [42], the scaling law between ½E2ω H =ðEωÞ2� and σxx satisfies ½E2ω H =ðEωÞ2� ¼ C0 þ C1σxx þ C2σ2xx.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' The coeffi- cients C2 and C1 involve the mixing contributions from various skew scattering processes [42–45], such as impu- rity scattering, phonon scattering, and mixed scattering from both phonons and impurities [42].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' C0 is mainly contributed by the intrinsic mechanism, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=', the field- induced BCD here.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' As shown in Figs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 3(b) and 3(d), the scaling law is well fitted for all angles θ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' It is found that C0 shows strong anisotropy (Supplemental Material [29], Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' S18), indicating the field-induced BCD is also strongly dependent on angle θ.' 
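As a numerical illustration of the fitting procedure described above, the sketch below computes σxx from the resistance and geometry, then recovers C0, C1, and C2 by a quadratic fit. All numerical values here (device geometry, resistances, coefficients) are hypothetical placeholders, not the measured data of this work.

```python
import numpy as np

# Hypothetical Hall-bar geometry (placeholder values, not the actual device)
L, W, d = 10e-6, 4e-6, 10e-9   # length, width, thickness (m)

# Hypothetical longitudinal resistances R_par at several temperatures (ohm)
R_par = np.array([900.0, 700.0, 520.0, 400.0, 310.0])
sigma_xx = (1.0 / R_par) * (L / (W * d))   # sigma_xx = (1/R_par)(L/(W d)), in S/m

# Synthetic second-order Hall signal obeying the scaling law
# [E_H^2w/(E^w)^2] = C0 + C1*sigma_xx + C2*sigma_xx^2 (placeholder coefficients)
C0_true, C1_true, C2_true = 4.0e-18, 1.0e-23, 3.0e-30
signal = C0_true + C1_true * sigma_xx + C2_true * sigma_xx**2

# Quadratic fit recovers the three coefficients; C0 isolates the intrinsic
# (field-induced BCD) contribution from the skew-scattering terms C1 and C2
C2_fit, C1_fit, C0_fit = np.polyfit(sigma_xx, signal, 2)
```

In practice one would fit the measured [E_H^2ω/(E^ω)^2] versus σxx points separately for each angle θ and read off C0 as the intrinsic contribution.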
The value of the field-induced BCD can be estimated through D = (2ℏ^2 n/m*e)[E_H^2ω/(E^ω)^2] [12], where ℏ is the reduced Planck constant, e is the electron charge, m* = 0.3 m_e is the effective electron mass, and n is the carrier density. Here, we replace [E_H^2ω/(E^ω)^2] by the coefficient C0 from the scaling-law fitting. The two components of the BCD along the a and b axes, denoted as D_a^(1) and D_b^(1), are calculated from the fitting curves with the magnitude of E_dc fixed at 3 kV/m under E^ω ∥ −a axis and E^ω ∥ b axis, respectively. As shown in Figs. 4(a) and 4(b), it is found that D_a^(1) shows a cos θ dependence on θ, whereas D_b^(1) shows a sin θ dependence. Such angle dependence is well consistent with the theoretical predictions (see Supplemental Material [29], Note 6). According to the two components D_a^(1) and D_b^(1), the field-induced BCD vector D^(1) is synthesized for E_dc along various directions, as presented in Fig. 4(c). It is found that both the magnitude and the orientation of the field-induced BCD are highly tunable by the dc field.

FIG. 3. (a),(c) The second-harmonic Hall voltage at various temperatures with the magnitude of E_dc fixed at 3 kV/m (a) under E^ω ∥ −a axis, θ = 0° and (c) under E^ω ∥ b axis, θ = 90°. (b),(d) Second-order Hall signal [E_H^2ω/(E^ω)^2] as a function of σxx (b) under E^ω ∥ −a axis and (d) under E^ω ∥ b axis at various θ with the magnitude of E_dc fixed at 3 kV/m. The temperature range for the scaling law in (b) and (d) is 50-286 K.

FIG. 4. The induced Berry curvature dipole as a function of θ with the magnitude of E_dc fixed at 3 kV/m for (a) the component along the a axis, D_a^(1), and (b) the component along the b axis, D_b^(1). (c) The relationship between the field-induced Berry curvature dipole D^(1) and the applied E_dc = 3 kV/m along different directions. The scale bar of D^(1) is 0.2 nm.

In summary, we have demonstrated the generation, modulation, and detection of the induced BCD due to the Berry connection polarizability in WTe2. It is found that the direction of the generated BCD is controlled by the relative orientation between the applied E_dc direction and the crystal axis, and its magnitude is proportional to the intensity of E_dc. Using independent control of the two applied fields, our Letter demonstrates an efficient approach to probe the nonlinear transport tensor symmetry, which is also helpful for the full characterization of nonlinear transport coefficients.
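The order-of-magnitude estimate D = (2ℏ^2 n/m*e)C0 used above can be sketched numerically as follows. The physical constants and m* = 0.3 m_e follow the text; the values of n and C0 are hypothetical placeholders chosen only for illustration.

```python
# Order-of-magnitude estimate of the field-induced BCD via
# D = (2*hbar^2*n/(m_star*e))*C0 (Ref. [12] in the text).
hbar = 1.054571817e-34    # reduced Planck constant (J s)
e = 1.602176634e-19       # elementary charge (C)
m_e = 9.1093837015e-31    # free-electron mass (kg)
m_star = 0.3 * m_e        # effective electron mass quoted in the text

n = 5.0e25                # carrier density (m^-3) -- hypothetical placeholder
C0 = 4.0e-18              # intrinsic coefficient C0 (m/V) -- hypothetical placeholder

D = (2.0 * hbar**2 * n / (m_star * e)) * C0   # Berry curvature dipole, in meters
```

With these placeholder inputs D comes out on the order of 0.1 nm, i.e., the same scale as the 0.2 nm scale bar of Fig. 4(c).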
Moreover, the manipulation of the BCD up to room temperature by electrical means without additional symmetry breaking will greatly extend BCD-related physics [46,47] to more general materials and should be valuable for developing devices utilizing the geometric properties of Bloch electrons.

This work was supported by the National Key Research and Development Program of China (No. 2018YFA0703703), the National Natural Science Foundation of China (Grants No. 91964201 and No. 61825401), and Singapore MOE AcRF Tier 2 (MOE-T2EP50220-0011). We are grateful to Dr. Yanfeng Ge at SUTD for inspired discussions.

These authors contributed equally to this work.
†liaozm@pku.edu.cn

[1] N. Nagaosa, J. Sinova, S. Onoda, A. H. MacDonald, and N. P. Ong, Anomalous Hall effect, Rev. Mod. Phys. 82, 1539 (2010).
[2] J. Sinova, S. O. Valenzuela, J. Wunderlich, C. H. Back, and T. Jungwirth, Spin Hall effects, Rev. Mod. Phys. 87, 1213 (2015).
[3] D. Xiao, W. Yao, and Q. Niu, Valley-Contrasting Physics in Graphene: Magnetic Moment and Topological Transport, Phys. Rev. Lett. 99, 236809 (2007).
[4] L. Šmejkal, Y. Mokrousov, B. Yan, and A. H. MacDonald, Topological antiferromagnetic spintronics, Nat. Phys. 14, 242 (2018).
[5] D. Xiao, M.-C. Chang, and Q. Niu, Berry phase effects on electronic properties, Rev. Mod. Phys. 82, 1959 (2010).
[6] M.-C. Chang and Q. Niu, Berry phase, hyperorbits, and the Hofstadter spectrum: Semiclassical dynamics in magnetic Bloch bands, Phys. Rev. B 53, 7010 (1996).
[7] M. T. Dau, C. Vergnaud, A. Marty, C. Beigné, S. Gambarelli, V. Maurel, T. Journot, B. Hyot, T. Guillet, B. Grévin et al., The valley Nernst effect in WSe2, Nat. Commun. 10, 5796 (2019).
[8] B.-C. Lin, S. Wang, S. Wiedmann, J.-M. Lu, W.-Z. Zheng, D. Yu, and Z.-M. Liao, Observation of an Odd-Integer Quantum Hall Effect from Topological Surface States in Cd3As2, Phys. Rev. Lett. 122, 036602 (2019).
[9] I. Sodemann and L. Fu, Quantum Nonlinear Hall Effect Induced by Berry Curvature Dipole in Time-Reversal Invariant Materials, Phys. Rev. Lett. 115, 216806 (2015).
[10] J.-S. You, S. Fang, S.-Y. Xu, E. Kaxiras, and T. Low, Berry curvature dipole current in the transition metal dichalcogenides family, Phys. Rev. B 98, 121109(R) (2018).
[11] Y. Zhang, J. van den Brink, C. Felser, and B. Yan, Electrically tunable nonlinear anomalous Hall effect in two-dimensional transition-metal dichalcogenides WTe2 and MoTe2, 2D Mater. 5, 044001 (2018).
[12] Q. Ma, S.-Y. Xu, H. Shen, D. MacNeill, V. Fatemi, T.-R. Chang, A. M. M. Valdivia, S. Wu, Z. Du, C.-H. Hsu et al., Observation of the nonlinear Hall effect under time-reversal-symmetric conditions, Nature (London) 565, 337 (2019).
[13] K. Kang, T. Li, E. Sohn, J. Shan, and K. F. Mak, Nonlinear anomalous Hall effect in few-layer WTe2, Nat. Mater. 18, 324 (2019).
[14] S.-Y. Xu, Q. Ma, H. Shen, V. Fatemi, S. Wu, T.-R. Chang, G. Chang, A. M. M. Valdivia, C.-K. Chan, Q. D. Gibson, J. Zhou, Z. Liu, K. Watanabe, T. Taniguchi, H. Lin, R. J. Cava, L. Fu, N. Gedik, and P. Jarillo-Herrero, Electrically switchable Berry curvature dipole in the monolayer topological insulator WTe2, Nat. Phys. 14, 900 (2018).
[15] J. Xiao, Y. Wang, H. Wang, C. D. Pemmaraju, S. Wang, P. Muscher, E. J. Sie, C. M. Nyby, T. P. Devereaux, X. Qian et al., Berry curvature memory through electrically driven stacking transitions, Nat. Phys. 16, 1028 (2020).
[16] D. Kumar, C.-H.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Hsu, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Sharma, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Chang, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Yu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Wang, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Eda, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Liang, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Yang, Room-temperature nonlinear Hall effect and wireless radiofrequency rectifica- tion in Weyl semimetal TaIrTe4, Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Nanotechnol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 16, 421 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [17] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lee, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Wang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Xie, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Mak, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Shan, Valley magnetoelectricity in single-layer MoS2, Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Mater.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 16, 887 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [18] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Son, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Kim, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Ahn, H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lee, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lee, Strain Engineering of the Berry Curvature Dipole and Valley Magnetization in Monolayer MoS2, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 123, 036806 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [19] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Qin, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Zhu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Ye, W.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Xu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Song, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Liang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Liu, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Liao, Strain tunable Berry curvature dipole, orbital magnetization and nonlinear Hall effect in WSe2 monolayer, Chin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 38, 017301 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [20] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Huang, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Wu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Hu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Cai, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Li, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' An, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Feng, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Ye, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lin, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Law et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=', Giant nonlinear Hall effect in twisted WSe2, Natl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Rev.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' nwac232 (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [21] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Ho, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Chang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Hsieh, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lo, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Huang, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Vu, C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Ortix, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Chen, Hall effects in artificially corrugated bilayer graphene without breaking time-reversal symmetry, Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Electron.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 4, 116 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [22] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' He, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Isobe, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Zhu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Hsu, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Fu, and H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Yang, Quantum frequency doubling in the topological insulator Bi2Se3, Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 12, 698 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [23] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Shvetsov, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Esin, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Timonina, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Kolesnikov, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' V.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Deviatov, Non-linear Hall effect in 5 three-dimensional Weyl and Dirac semimetals, JETP Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 109, 715 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [24] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Dzsaber, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Yan, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Taupin, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Eguchi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Prokofiev, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Shiroka, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Blaha, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Rubel, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Grefe, H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lai et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=', Giant spontaneous Hall effect in a nonmagnetic Weyl– Kondo semimetal, Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Natl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Acad.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 118, e2013386118 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [25] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Tiwari, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Chen, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Zhong, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Drueke, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Koo, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Kaczmarek, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Xiao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Gao, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Luo, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Niu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=', Giant c-axis nonlinear anomalous Hall effect in Td-MoTe2 and WTe2, Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 12, 2049 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [26] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lai, H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Liu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Zhang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Zhao, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Feng, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Wang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Tang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Liu, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Novoselov, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Yang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=', Third-order nonlinear Hall effect induced by the Berry-connection polarizability tensor, Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Nanotechnol.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 16, 869 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [27] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Liu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Zhao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Huang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Feng, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Xiao, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Wu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lai, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Gao, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Yang, Berry connection polar- izability tensor and third-order Hall effect, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' B 105, 045118 (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [28] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Gao, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Yang, and Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Niu, Field Induced Positional Shift of Bloch Electrons and Its Dynamical Implications, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 112, 166601 (2014).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [29] See Supplemental Material at http://link.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='aps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='org/ supplemental/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='1103/PhysRevLett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='130.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='016301 for device fabrication, electrical measurements, calculation details, polarized Raman spectroscopy of few-layer WTe2, transport properties of the devices, angle-dependent third-order anomalous Hall effect, symmetry analysis of WTe2, theory analysis of the field-induced Berry curvature dipole, control experiments in device S2, extrinsic effects that may induce nonlinear transport, and anisotropy of the scaling parame- ters, which includes Refs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [30–41].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [30] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Wang, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Meric, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Huang, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Gao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Gao, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Tran, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Taniguchi, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Watanabe, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Campos, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Muller et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=', One-dimensional electrical contact to a two-dimensional material, Science 342, 614 (2013).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [31] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Kresse and J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Hafner, Ab initio molecular dynamics for open-shell transition metals, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' B 48, 13115 (1993).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [32] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Kresse and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Furthmüller, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' B 54, 11169 (1996).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [33] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Blöchl, Projector augmented-wave method, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Rev.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' B 50, 17953 (1994).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [34] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Perdew, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Burke, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Ernzerhof, Generalized Gradient Approximation Made Simple, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 77, 3865 (1996).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [35] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Pizzi et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=', Wannier90 as a community code: New features and applications, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Phys.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Condens.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Matter 32, 165902 (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [36] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Kim, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Han, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Kim, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lee, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lee, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Cheong, Determination of the thickness and orientation of few-layer tungsten ditelluride using polarized Raman spec- troscopy, 2D Mater.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 3, 034004 (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [37] M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Ali, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Xiong, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Flynn, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Tao, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Gibson, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Schoop, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Liang, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Haldolaarachchige, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Hirschberger, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Ong et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=', Large, non-saturating magnetoresistance in WTe2, Nature (London) 514, 205 (2014).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [38] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Fatemi, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Gibson, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Watanabe, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Taniguchi, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Cava, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Jarillo-Herrero, Magnetoresistance and quan- tum oscillations of an electrostatically tuned semimetal- to-metal transition in ultrathin WTe2, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Rev.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' B 95, 041410(R) (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [39] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Zhang, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Kakani, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Woods, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Cha, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Shi, Thickness dependence of magnetotransport proper- ties of tungsten ditelluride, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' B 104, 165126 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [40] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Akamatsu et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=', Avan der Waals interface that creates in- plane polarization and a spontaneous photovoltaic effect, Science 372, 68 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [41] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Dames and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Chen, 1ω, 2ω, and 3ω methods for measurements of thermal properties, Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Instrum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 76, 124902 (2005).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [42] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Du, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Wang, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Li, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lu, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Xie, Disorder-induced nonlinear Hall effect with time-reversal symmetry, Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Commun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 10, 3047 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [43] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Tian, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Ye, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Jin, Proper Scaling of the Anomalous Hall Effect, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lett.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 103, 087206 (2009).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [44] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Ye, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Kang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Liu, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' von Cube, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Wicker, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Suzuki, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Jozwiak, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Bostwick, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Rotenberg, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Bell et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=', Massive Dirac fermions in a ferromagnetic kagome metal, Nature (London) 555, 638 (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [45] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Isobe, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Xu, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Fu, High-frequency rectification via chiral Bloch electrons, Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 6, eaay2497 (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [46] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Ye, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-F.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Zhu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Xu, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Shang, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Liu, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Liao, Orbit-transfer torque driven field-free switching of perpendicular magnetization, Chin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 39, 037303 (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' [47] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Sinha, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Adak, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Chakraborty, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Das, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Debnath, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Sangani, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Watanabe, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Taniguchi, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Waghmare, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Agarwal, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Deshmukh, Berry curvature dipole senses topological transition in a moir´e superlattice, Nat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 18, 765 (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 6 1 Supplemental Material for Control over Berry curvature dipole with electric field in WTe2 Xing-Guo Ye1,+, Huiying Liu2,+, Peng-Fei Zhu1,+, Wen-Zheng Xu1,+, Shengyuan A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Yang2, Nianze Shang1, Kaihui Liu1, and Zhi-Min Liao1,* 1 State Key Laboratory for Mesoscopic Physics and Frontiers Science Center for Nano-optoelectronics, School of Physics, Peking University, Beijing 100871, China.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 2 Research Laboratory for Quantum Materials, Singapore University of Technology and Design, Singapore, 487372, Singapore.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' + These authors contributed equally.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Email: liaozm@pku.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='edu.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='cn This file contains supplemental Figures S1-S18 and Notes 1-10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Note 1: Device fabrication, experimental and calculation methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Note 2: Polarized Raman spectroscopy of WTe2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Note 3: Angle-dependent longitudinal resistance and third-order nonlinear Hall effect.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Note 4: Magnetotransport properties of WTe2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Note 5: Symmetry analysis of WTe2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Note 6: Theoretical analysis and calculations of field-induced Berry curvature dipole.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Note 7: Electric field dependence of second-order Hall signals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Note 8: Control experiments in device S2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Note 9: Discussions of other possible origins of the second order AHE.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Note 10: Angle dependence of parameter C0 obtained from the fittings of scaling law.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 2 Supplemental Note 1: Device fabrication, experimental and calculation methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 1) Device fabrication The WTe2 flakes were exfoliated from bulk crystal by scotch tape and then transferred onto the polydimethylsiloxane (PDMS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' The PDMS was then covered onto a Si substrate with 285 nm-thick SiO2, where the Si substrate was precleaned by air plasma, and further heated for about 1 minute at 90℃ to transfer the WTe2 flakes onto Si substrate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Disk and Hall bar-shaped Ti/Au electrodes (around 10 nm thick) were prefabricated on individual SiO2/Si substrates with e-beam lithography, metal deposition and lift-off.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Exfoliated BN (around 20 nm thick) and WTe2 flakes (around 5-20 nm thick) were sequentially picked up and then transferred onto the Ti/Au electrodes using a polymer-based dry transfer technique [30].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' The atomic force microscope image of device S1 is shown in Fig.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' S1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' The thickness of this sample is 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='4 nm, corresponding to a 12-layer WTe2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' The whole exfoliation and transfer processes were done in an argon-filled glove box with O2 and H2O content below 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='01 parts per million to avoid sample degeneration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Figure S1: (a) The atomic force microscope image of device S1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' (b) The line profile shows the thickness of the WTe2 sample is 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='4 nm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 0 1 2 3 4 5 0 3 6 9 Height (nm) Line profile (mm) 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='4 nm 3 WTe2 (a) (b) 3 2) Electrical transport measurements and circuit schematic All the transport measurements were carried out in an Oxford cryostat with a variable temperature insert and a superconducting magnet.' 
First-, second- and third-harmonic signals were collected by standard lock-in techniques (Stanford Research Systems Model SR830) at frequency ω; the frequency ω equals 17.777 Hz unless otherwise stated. The circuit schematic with multiple sources used in the experiments is depicted in Fig. S2. The a.c. and d.c. sources are both effective current sources. The original SR830 a.c. source is a voltage source. In the experiments, we connected the SR830 voltage source in series with a protective resistor of resistance Rp (Rp = 100 kΩ for device S1 and Rp = 10 kΩ for device S2), as shown in Fig. S2. The resistance of the WTe2 channel is on the order of 10 Ω, much less than Rp, which makes the SR830 source an effective current source with excitation current Iω ≅ Uω/Rp, where Uω is the source voltage. The Keithley 2400 current source is used as the d.c. source. As shown in Fig. S2, the positive and negative terminals of the Keithley source are connected to a pair of diagonal electrodes to form a loop circuit, i.e., a floating loop.
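The quality of the Iω ≅ Uω/Rp approximation follows directly from the resistance ratio; a minimal sketch using the device S1 values quoted above:

```python
# Exact loop current from a voltage source driving Rp in series with the
# sample, versus the effective-current-source approximation I = U/Rp.
def currents(U, Rp, Rs):
    exact = U / (Rp + Rs)   # true loop current
    approx = U / Rp         # effective current source value
    return exact, approx

# Rp = 100 kOhm and sample resistance ~10 Ohm, as stated for device S1
exact, approx = currents(U=1.0, Rp=100e3, Rs=10.0)
rel_err = (approx - exact) / exact
print(rel_err)  # equals Rs/Rp = 1e-4, i.e. the approximation holds to ~0.01%
```

The relative error is simply Rs/Rp, which is why the ~10 Ω channel makes the SR830 branch behave as a current source.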
The d.c. electric field is obtained by Edc = Idc·Rθ/L, where Idc is the applied d.c. current, Rθ is the resistance of WTe2 along direction θ, and L is the channel length of WTe2. The impedance of the floating Keithley source to ground is measured to be ~60 MΩ, while the negative terminal of the SR830 source is directly connected to ground.

Figure S2: Schematic structure of the circuit for measurements in device S1.

3) Spectral purity of lock-in measurements

For the lock-in measurements, the integration time is 300 ms and the filter roll-off is 24 dB/octave; the corresponding cutoff (-3 dB) frequency of the low-pass filter is 0.531 Hz. This narrow detection bandwidth (±0.531 Hz) effectively avoided spectral leakage. The spectral purity of the lock-in homodyne circuit was verified by control lock-in measurements of a resistor. The first-, second- and third-harmonic voltages of a resistor with resistance ~100 Ω were measured using the same frequency (17.777 Hz), integration time (300 ms) and filter roll-off (24 dB/octave) as in the experiments, as shown in Fig. S3. The first-harmonic voltage shows a linear dependence on the alternating current, consistent with the resistance value ~100 Ω.
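The quoted 0.531 Hz cutoff is the single-pole value for the 300 ms time constant; a minimal check of that relation:

```python
import math

# Single-pole low-pass cutoff for a lock-in time constant tau: f_c = 1/(2*pi*tau).
# A 24 dB/octave roll-off corresponds to four cascaded 6 dB/octave stages.
tau = 0.3                      # 300 ms integration time from the text
f_c = 1 / (2 * math.pi * tau)
print(round(f_c, 3))           # 0.531 Hz, matching the quoted cutoff
```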
The second- and third-harmonic voltages are four orders of magnitude smaller than the first-harmonic voltage, which indicates the high spectral purity of the lock-in homodyne circuit.

Figure S3: Lock-in measurements for a resistor with resistance ~100 Ω. a, The first-harmonic voltage versus the alternating current. b, The second- and third-harmonic voltages versus the alternating current.

4) Validity of electrical measurements with the two sources

In our experiments, the Keithley source is used as the d.c. current source, which has an output impedance of ~20 MΩ. The a.c. current source is realized by connecting a resistor Rp (Rp = 100 kΩ for device S1 and Rp = 10 kΩ for device S2) in series with the SR830 voltage source. Both the a.c. and d.c. current sources have effectively large output impedance compared to the sample resistance of ~10 Ω, so they can be considered independent current sources. These two current sources can be applied to the device simultaneously, with well-defined potential differences. To further confirm the validity of our electrical measurements with the two current sources, we designed a test circuit, as shown in Fig. S4(a).
The a.c. current flowing through R2 was calculated by measuring the first-harmonic voltage Vω of R2, with Iω = Vω/R2. The d.c. current is applied by the Keithley current source and is determined by measuring the d.c. voltage Vdc of R2, with Idc = Vdc/R2. As shown in Fig. S4(b), where the a.c. voltage of the SR830 source is fixed at 1 V, Iω is unchanged when varying the d.c. current of the Keithley source, while the measured Idc is almost the same as the output current of the Keithley source. In Fig. S4(c), where the d.c. current of the Keithley source is fixed, Iω well satisfies Iω = Uω/(R1 + R2 + R3) ≅ Uω/Rp, with Uω the SR830 source voltage and Rp = R1. These results clearly confirm that the a.c. and d.c. sources are effectively independent, with negligible current shunt between each other.

Figure S4: Validity of the electrical measurements with two sources. a, Schematic of the test circuit. b, Iω and Idc as a function of the Keithley source current with the SR830 source voltage Uω fixed at 1 V.
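The independence of the two sources can be illustrated by superposition: an ideal current source presents infinite impedance to the a.c. loop, so it does not shunt the a.c. current. A minimal idealized sketch, assuming the three resistors R1, R2, R3 form the a.c. loop and the d.c. source injects its current through R2 (a simplified stand-in for the actual test circuit, not its exact topology):

```python
# Superposition in an idealized two-source circuit: the a.c. loop current is
# set by the voltage source and the series resistance alone, while the d.c.
# current through R2 is set entirely by the ideal current source.
# ASSUMPTION: ideal sources; R1 plays the role of the protective resistor Rp.
def loop_currents(U, Idc, R1, R2, R3):
    I_ac = U / (R1 + R2 + R3)  # current-source branch is open to a.c.
    I_dc = Idc                 # fixed by the ideal current source
    return I_ac, I_dc

I_ac, I_dc = loop_currents(U=1.0, Idc=1e-3, R1=100e3, R2=100.0, R3=100.0)
# I_ac ~ U/R1 because R2, R3 << R1, and it does not depend on Idc at all.
```

Varying Idc in this model leaves I_ac unchanged, mirroring the experimental observation in Figs. S4(b) and S4(c).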
c, Iω and Idc as a function of the SR830 source voltage Uω with the Keithley source current fixed at 1 mA.

5) Calculation methods

First-principles calculations were performed to reveal the properties of the Berry connection polarizability tensor and the field-induced Berry curvature dipole in WTe2. The electronic structure calculations were carried out in the framework of density functional theory as implemented in the Vienna ab initio simulation package [31,32], with the projector augmented wave method [33] and the Perdew-Burke-Ernzerhof exchange-correlation functional [34]. For convergence of the results, the spin-orbit coupling was included self-consistently in the electronic structure calculations, with a kinetic energy cutoff of 600 eV and a Monkhorst-Pack k mesh of 14 × 8 × 4.
We used the d orbitals of W atoms and the p orbitals of Te atoms to construct Wannier functions [35]. When evaluating the band geometric quantities, we considered the finite-temperature effect in the distribution function and a lifetime broadening of kBT with T = 5 K.

Supplemental Note 2: Polarized Raman spectroscopy of WTe2.

The crystalline orientation of the WTe2 devices was determined by polarized Raman spectroscopy in the parallel polarization configuration [36]. Figure S5 shows the polarized Raman spectrum of device S2 as an example. The optical image of device S2 is displayed in Fig. S5(a). Raman spectroscopy was measured with a 514 nm excitation wavelength using a linearly polarized solid-state laser beam.
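For reference, the kBT broadening used above is easily put in energy units; a quick conversion at the stated T = 5 K:

```python
# Lifetime broadening k_B*T used in the band-geometry evaluation.
k_B = 8.617333e-5   # Boltzmann constant in eV/K (CODATA value)
T = 5.0             # temperature in K, as stated in the text
broadening_meV = k_B * T * 1e3
print(round(broadening_meV, 2))  # ~0.43 meV
```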
The polarization of the excitation laser was controlled by a quarter-wave plate and a polarizer. We collected the Raman scattered light with the same polarization as the excitation laser. A typical Raman spectrum of device S2 is shown in Fig. S5(b), where five Raman peaks are identified, belonging to the A1 modes of WTe2 [36]. We further measured the polarization dependence of the intensities of peaks P2 and P11 [denoted in Fig. S5(b)] in Figs. S5(c) and S5(d), respectively. Based on previous reports [36], the polarization direction with maximum intensity was assigned as the b axis. The measured crystalline orientation is further indicated in the optical image [Fig. S5(a)], where the applied a.c. current is approximately parallel to the a axis.

Figure S5: Polarized Raman spectroscopy of WTe2 to determine the crystalline orientation. a, Optical image of device S2. The crystalline axes, i.e., the a axis and b axis, determined by the polarized Raman spectroscopy, are denoted by the black arrows. The applied a.c. current is also noted by the red arrow, which is approximately aligned with the a axis.
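The axis-assignment step (pick the polarization angle of maximum intensity as the b axis) can be sketched as a simple fit. This is only an illustration with a generic two-fold model I(θ) = A + B·cos²(θ − θb) and synthetic data; the actual mode intensities follow the Raman tensors of the A1 modes [36]:

```python
import math

# Assign the b axis as the polarization angle that maximizes the correlation
# of the measured intensities with cos^2(theta - theta_b).
# ASSUMPTION: generic two-fold angular model, illustration only.
def b_axis_angle(angles_deg, intensities):
    def neg_covariance(theta_b):
        c2 = [math.cos(math.radians(a - theta_b)) ** 2 for a in angles_deg]
        mc = sum(c2) / len(c2)
        mi = sum(intensities) / len(intensities)
        cov = sum((x - mc) * (y - mi) for x, y in zip(c2, intensities))
        return -cov  # minimize -cov == maximize covariance
    # brute-force search of theta_b on a 1-degree grid over [0, 180)
    return min(range(180), key=neg_covariance)

# synthetic check: data generated with the b axis at 30 degrees
angles = list(range(0, 360, 10))
data = [1.0 + 4.0 * math.cos(math.radians(a - 30)) ** 2 for a in angles]
print(b_axis_angle(angles, data))  # recovers 30
```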
b, A typical Raman spectrum measured with a 514 nm excitation wavelength, where the polarization direction is approximately along the b axis. Five Raman peaks are observed, which belong to the A1 modes of WTe2 [36]. c,d, Polarization dependence of the intensities of peaks (c) P2 and (d) P11. Here the polarization angle takes 0° along the b axis, along which maximum intensity is observed [36].

Supplemental Note 3: Angle-dependent longitudinal resistance and third-order nonlinear Hall effect.

The third-order anomalous Hall effect (AHE) is investigated in device S1, as shown in Fig. S6(a). By exploiting the circular disc electrode structure, the angle dependence of the third-order AHE is measured. It is highly sensitive to the crystalline orientation, as shown in Fig. S6(c), which is inherited from the intrinsic anisotropy of WTe2 [26]. Based on the symmetry of WTe2 [26], the third-order AHE follows the angle dependence

E_H^3ω/(E^ω)^3 ∝ cos(θ−θ0)sin(θ−θ0)[(χ22·r^4 − 3χ12·r^2)sin^2(θ−θ0) + (3χ21·r^2 − χ11)cos^2(θ−θ0)] / [cos^2(θ−θ0) + r·sin^2(θ−θ0)]^3,

where E_H^3ω = V_H^3ω/W, E^ω = Iω·R∥/L, V_H^3ω is the third-harmonic Hall voltage, Iω is the applied a.c. current, R∥ is the longitudinal resistance, W and L are the channel width and length, respectively, r is the resistance anisotropy, χij are elements of the third-order susceptibility tensor, and θ0 is the angle misalignment between θ = 0° and the crystalline b axis. The fitting curve for this angle dependence is shown by the red line in Fig. S6(c), which yields the misalignment θ0 ~1.5°. In addition to the third-order AHE, the longitudinal resistance R∥ also shows strong anisotropy [13], as shown in Fig. S6(b), following R∥(θ) = Rb·cos^2(θ − θ0) + Ra·sin^2(θ − θ0), consistent with previous results [13], where Ra and Rb are the resistances along the crystalline a and b axes, respectively.

Figure S6: Angle dependence of the third-order nonlinear Hall effect in device S1 at 5 K. a, The third-harmonic anomalous Hall voltages at various θ. Here θ is defined as the relative angle between the alternating current and the baseline (approximately along the b axis). b,c, (b) Rxx and (c) the third-order Hall signal E_H^3ω/(E^ω)^3 as a function of θ, respectively.

Supplemental Note 4: Magnetotransport properties of WTe2.

The magnetotransport properties of device S1 were investigated. Figure S7(a) shows the resistivity as a function of temperature. The resistivity decreases upon decreasing temperature, with a residual resistivity at low temperatures, showing typical metallic behavior. Figure S7(b) shows the magnetoresistance (MR) and Hall resistance as a function of magnetic field. MR is defined as [Rxx(B) − Rxx(0)]/Rxx(0) × 100%. The low residual resistance and large, non-saturating MR indicate the high quality of the WTe2 devices [37,38]. The carrier mobility of device S1 is estimated to be as high as 4974.4 cm^2/(V·s). Moreover, resistance oscillations due to the formation of Landau levels are also observed, as shown in Fig. S7(c), indicative of the high crystal quality. The oscillation ΔRxx is obtained by subtracting a parabolic background. The fast Fourier transform (FFT) is performed, as shown in Fig. S7(d). Three frequencies are observed, indicating multiple Fermi pockets in WTe2, consistent with previous work [37-39]. The dominant FFT peak f1 is around 44 T.

Figure S7: Transport properties of device S1. a, The resistivity as a function of temperature.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' b, Magnetoresistance and Hall resistance at 5 K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' c, Oscillations of Rxx at 5 K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' The ∆𝑅𝑥𝑥 is obtained by subtracting a parabolic background.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' d, The FFT analysis of ∆𝑅𝑥𝑥 oscillations, where three peaks are obtained.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 0 50 100 150 200 250 300 0 20 40 60 80 100 rxx (cm×mW) T (K) 15 -10 5 0 5 10 15 0 500 1000 1500 2000 MR (%) B (T) 60 40 20 0 20 40 Rxy (W) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='05 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='15 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='25 6 3 0 3 6 DRxx (W) 1/B (T-1) 100 200 300 0 200 400 600 800 FFT amplitude (a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='u.' 
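The ΔR_xx extraction described in Note 4 — subtracting a parabolic background from R_xx sampled uniformly in 1/B, then taking an FFT whose frequency axis is in tesla — can be sketched as below. The trace is synthetic: the background coefficients, the oscillation amplitude, and the single frequency at 44 T (mimicking the dominant peak f1) are invented for illustration.

```python
import numpy as np

# Synthetic R_xx(1/B) trace: parabolic background plus a Shubnikov-de Haas-like
# oscillation periodic in 1/B with frequency f1 = 44 T (values are illustrative).
f1 = 44.0                                  # oscillation frequency in tesla
inv_B = np.linspace(0.05, 0.25, 2048)      # uniform grid in 1/B (T^-1)
background = 100.0 - 300.0 * inv_B + 400.0 * inv_B**2
r_xx = background + 3.0 * np.cos(2 * np.pi * f1 * inv_B)

# Step 1: remove the smooth background with a parabolic (2nd-order) fit in 1/B.
coeffs = np.polyfit(inv_B, r_xx, deg=2)
delta_r = r_xx - np.polyval(coeffs, inv_B)

# Step 2: FFT of the oscillatory part; because the trace is sampled uniformly
# in 1/B, the FFT abscissa is directly a frequency in tesla.
spacing = inv_B[1] - inv_B[0]
freqs = np.fft.rfftfreq(inv_B.size, d=spacing)
amplitude = np.abs(np.fft.rfft(delta_r))

peak_freq = freqs[np.argmax(amplitude[1:]) + 1]   # skip the zero-frequency bin
print(f"dominant SdH frequency: {peak_freq:.1f} T")
```

The frequency resolution here is 1/(1/B window) ≈ 5 T, so the recovered peak lands on the bin nearest 44 T; a longer 1/B window sharpens it.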
Supplemental Note 5: Symmetry analysis of WTe2.

Td-WTe2 has a distorted crystal structure with low symmetry. Here we analyze the thickness dependence of the symmetry of WTe2 in detail. Figure S8(a) shows the b-c plane of monolayer WTe2. Each monolayer consists of a layer of W atoms sandwiched between two layers of Te atoms, denoted Te1 (yellow) and Te2 (red), respectively. The inversion symmetry of the monolayer is approximately satisfied, and Te1 is equivalent to Te2. The presence of inversion symmetry forces the Berry curvature dipole (BCD) to be zero. However, when a perpendicular displacement field is applied to break the inversion symmetry, Te1 is no longer equivalent to Te2. As shown in the bottom of Fig. S8(a), an in-plane electric polarization along the b axis can be induced by the out-of-plane displacement field. The electric polarization along the b axis plays a role similar to that of the d.c. electric field in our work, leading to a nonzero BCD along the a axis.

The nonzero BCD in bilayer WTe2 originates from crystal symmetry breaking. The largest symmetry of bilayer WTe2 is a single mirror symmetry M_a, with the bc plane as the mirror plane. As shown in Fig. S8(b), the stacking between the two layers breaks the inversion symmetry of bilayer WTe2: under the inversion operation, the top and bottom layers are swapped and fail to coincide with each other. As shown in Fig. S8(b), Te1 is not equivalent to Te2 due to the stacking arrangement in the bilayer. Therefore, an in-plane electric polarization P along the b axis exists, similar to the case of a monolayer under an out-of-plane displacement field. The polarization P is able to induce a nonzero BCD along the perpendicular crystalline axis, i.e., along the a axis. In fact, such an in-plane polarization P along the b axis in monolayer and bilayer WTe2 has already been evidenced by the circular photogalvanic effect [14]. Symmetry-breaking-induced polarization has also been confirmed in various 2D materials, such as WSe2/black phosphorus heterostructures [40]. In trilayer and thicker WTe2, as shown in Fig. S8(c), Te1 and Te2 are equivalent in the bulk, leading to a vanishing electric polarization.
The in-plane inversion symmetry of the bulk forbids the presence of an in-plane BCD. However, inversion is broken at the surface. Therefore, for trilayer and thicker WTe2, a small but nonzero BCD may occur at the surface.

Figure S8: Crystal structure of Td-WTe2. a, b-c plane of monolayer Td-WTe2. b, b-c plane of bilayer Td-WTe2. The stacking arrangement breaks the inversion symmetry. c, b-c plane of trilayer Td-WTe2.

Importantly, the surface BCD and the second-order AHE it induces in few-layer WTe2 are reported in Ref. [13], and are also observed in our device.
We measured the second-order AHE without the application of E^dc in a WTe2 device, as shown in Fig. S9. This second-order AHE is observable when applying I^ω on the order of 1 mA. By comparison, the second-order AHE induced by the d.c. field is observable when applying I^ω smaller than 0.05 mA (Fig. 1 of the main text). The calculated BCD along the a axis, D_a, without the application of E^dc is ~0.03 nm, which is one order of magnitude smaller than D_a^(1) ~ 0.29 nm measured under E^dc = 3 kV/m (Fig. 4 of the main text). These results confirm the validity of the E^dc-induced BCD in our work.

Figure S9: The second-order AHE without an external d.c. electric field in WTe2 at 1.8 K.

Supplemental Note 6: Theoretical analysis and calculations of the field-induced Berry curvature dipole.

The electric field-induced Berry curvature depends on the Berry connection polarizability tensor and the applied d.c. field through

$$\Omega^{(1)} = \nabla_{\mathbf{k}} \times (\overleftrightarrow{G}\,\mathbf{E}^{dc}), \qquad \Omega^{(1)}_{\beta}(n,\mathbf{k}) = \varepsilon_{\beta\gamma\mu}\,[\partial_{\gamma} G_{\mu\nu}(n,\mathbf{k})]\,E^{dc}_{\nu},$$

with

$$G_{\mu\nu}(n,\mathbf{k}) = 2e\,\mathrm{Re} \sum_{m \neq n} \frac{(A_{\mu})_{nm}(A_{\nu})_{mn}}{\varepsilon_{n} - \varepsilon_{m}},$$

where A_{mn} is the interband Berry connection and e is the electron charge. The superscript "(1)" indicates that the quantity is first order in the electric field. Here the Greek letters refer to spatial directions, m and n are band indices, ε_{βγμ} is the Levi-Civita symbol, and ∂_γ is short for ∂/∂k_γ. The Berry connection polarizability tensor of WTe2 is calculated and shown in Figs. S10(a)-(c). From its definition, the field-induced BCD is

$$D^{(1)}_{\alpha\beta} = \int_{k} [d\mathbf{k}]\, f_0\,\bigl(\partial_{\alpha} \Omega^{(1)}_{\beta}\bigr) = \varepsilon_{\beta\gamma\mu} \int_{k} [d\mathbf{k}]\, f_0\,[\partial_{\alpha}(\partial_{\gamma} G_{\mu\nu})]\,E^{dc}_{\nu},$$

where $\int_{k} [d\mathbf{k}] = \sum_{n} \frac{1}{(2\pi)^3} \iiint d\mathbf{k}$ is taken over the first Brillouin zone of the system and summed over all energy bands. In two-dimensional systems, Ω^(1) is constrained to the out-of-plane direction, and the BCD behaves as a pseudovector in the plane.
Here we choose the coordinate frame along the crystal principal axes a, b, c. By applying a d.c. electric field E^dc = (E_a^dc, E_b^dc) in the ab plane, the induced Ω_c^(1) reads

$$\Omega^{(1)}_{c}(n,\mathbf{k}) = (\partial_a G_{ba} - \partial_b G_{aa})\,E^{dc}_{a} + (\partial_a G_{bb} - \partial_b G_{ab})\,E^{dc}_{b}.$$

D_α^(1) defined for a few-layer 2D system can be approximately derived from D_{αc(bulk)}^(1) of the bulk system by D_α^(1) = d·D_{αc(bulk)}^(1), where d is the thickness of the film. The independent components of D_α^(1) are related to the Berry connection polarizability tensor, E^dc and d. The mirror symmetry M_a and the glide symmetry M̃_b of WTe2 constrain D_α^(1) to

$$D^{(1)}_{a} = \int_{k} [d\mathbf{k}]\, f_0\,[\partial_a(\partial_a G_{bb}) - \partial_a(\partial_b G_{ab})]\,E^{dc}_{b}\,d,$$
$$D^{(1)}_{b} = \int_{k} [d\mathbf{k}]\, f_0\,[\partial_b(\partial_a G_{ba}) - \partial_b(\partial_b G_{aa})]\,E^{dc}_{a}\,d,$$

where the other terms are prohibited by symmetry. In the experiment, the d.c. electric field is applied along a direction at an angle θ from the b axis, which can be expressed as E^dc = E^dc(−sin θ, cos θ). The induced BCD D^(1)(θ) = (D_a^(1)(θ), D_b^(1)(θ)) hence reads

$$D^{(1)}_{a}(\theta) = \int_{k} [d\mathbf{k}]\, f_0\,[\partial_a(\partial_a G_{bb}) - \partial_a(\partial_b G_{ab})]\,E^{dc} \cos\theta\, d,$$
$$D^{(1)}_{b}(\theta) = \int_{k} [d\mathbf{k}]\, f_0\,[\partial_b(\partial_b G_{aa}) - \partial_b(\partial_a G_{ba})]\,E^{dc} \sin\theta\, d.$$

With the field-induced BCD, the second-order Hall current generated by an a.c. electric field E^ω is [9]

$$j^{2\omega}_{\alpha} = -\varepsilon_{\alpha\mu\gamma}\,\frac{e^{3}\tau}{2(1 + i\omega\tau)\hbar^{2}}\, D^{(1)}_{\beta\mu}\, E^{\omega}_{\beta} E^{\omega}_{\gamma}.$$

In two-dimensional systems, where Ω^(1) points out of the plane and $D^{(1)}_{\alpha c} = \int_{k} [d\mathbf{k}]\, f_0\,(\partial_{\alpha} \Omega^{(1)}_{c})$, this is equivalent to

$$\mathbf{j}^{2\omega} = -\frac{e^{3}\tau}{2(1 + i\omega\tau)\hbar^{2}}\, (\hat{\mathbf{z}} \times \mathbf{E}^{\omega})\,[\mathbf{D}^{(1)}(\theta) \cdot \mathbf{E}^{\omega}].$$

The magnitude of the induced second-order Hall conductivity is determined by D^(1)(θ)·Ê^ω, the projection of the pseudovector D^(1) onto the direction of E^ω, and the direction of the Hall current is perpendicular to E^ω.
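As a minimal numerical check of the last relation, the sketch below evaluates j^{2ω} = −[e³τ/(2(1+iωτ)ħ²)](ẑ×E^ω)[D^(1)(θ)·E^ω] for an in-plane BCD and verifies that the resulting current is perpendicular to E^ω. The relaxation time τ, the BCD magnitudes, the frequency, and the field strength are placeholders, not parameters of the actual device.

```python
import numpy as np

# Physical constants (SI); tau and the BCD components are illustrative placeholders.
e = 1.602176634e-19        # electron charge (C)
hbar = 1.054571817e-34     # reduced Planck constant (J s)
tau = 1e-12                # relaxation time (s), assumed value
omega = 2 * np.pi * 17.777 # a.c. angular frequency (rad/s); omega*tau << 1 here

def hall_current_2w(D, E_w):
    """j^{2w} = -(e^3 tau / (2 (1 + i w tau) hbar^2)) (z_hat x E^w)(D . E^w),
    for in-plane vectors D and E_w given as 2-component [a, b] arrays."""
    prefactor = -e**3 * tau / (2 * (1 + 1j * omega * tau) * hbar**2)
    z_cross_E = np.array([-E_w[1], E_w[0]])     # z_hat x E^w, still in the plane
    return prefactor * z_cross_E * np.dot(D, E_w)

# BCD induced by a d.c. field at angle theta from the b axis:
# D_a ~ cos(theta), D_b ~ sin(theta) (amplitudes in metres, placeholders).
theta = np.deg2rad(30.0)
D = np.array([0.3e-9 * np.cos(theta), 0.1e-9 * np.sin(theta)])

E_w = np.array([1e3, 0.0])    # a.c. field along the a axis, 1 kV/m (placeholder)
j2w = hall_current_2w(D, E_w)

# The Hall current has no component along E^w.
print(abs(np.dot(j2w, E_w)) / np.linalg.norm(E_w))
```

The projection of j^{2ω} onto E^ω vanishes identically, while its magnitude scales with D^(1)(θ)·E^ω, matching the statement above.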
Consequently, we can measure the E^dc-induced BCD D^(1) by detecting its projective component D_a^(1)(θ) or D_b^(1)(θ) with an a.c. electric field along the corresponding direction. From the above derivation, when the direction of the d.c. electric field varies in the ab plane, the independent components of the induced BCD, D_a^(1) and D_b^(1), change as a cosine and a sine function, respectively. This relation is clearly demonstrated by our experimental results in Fig. 4 of the main text.

With first-principles calculations, we estimate the extreme values of D_a^(1)(0°) and D_b^(1)(90°), as shown in Fig. S10(d). We take d ∼ 8.4 nm and E^dc ∼ 3 kV/m according to the experiment. D_a^(1)(0°) and D_b^(1)(90°) refer to D_a^(1) and D_b^(1) with E^dc applied along the b axis and the −a axis, respectively. We find that D_b^(1)(90°) varies from ~−0.14 nm to 0 as the chemical potential is tuned away from 0, while D_a^(1)(0°) changes non-monotonically between 0.18 and −0.13 nm with the chemical potential. The experimental results, D_b^(1)(90°) ~ −0.05 nm and D_a^(1)(0°) ~ −0.28 nm (Fig. 4 of the main text), agree well with the calculations in order of magnitude.
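The cosine (sine) dependence of D_a^(1) (D_b^(1)) on θ means the amplitudes can be extracted from angle-resolved measurements by a one-parameter least-squares fit, along the lines of Fig. 4 of the main text. The "data" below are synthetic: the amplitude echoes the −0.28 nm order of magnitude quoted above, while the angle grid and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic angle-resolved BCD component: D_a(theta) = D_a0 * cos(theta) + noise.
D_a0_true = -0.28                           # nm, echoing the quoted magnitude
theta = np.deg2rad(np.arange(0, 360, 15))   # measurement angles (placeholder grid)
data = D_a0_true * np.cos(theta) + rng.normal(0.0, 0.01, theta.size)

# The model is linear in the single amplitude D_a0, so the least-squares
# solution is the projection <data, cos> / <cos, cos>.
basis = np.cos(theta)
D_a0_fit = np.dot(data, basis) / np.dot(basis, basis)

print(f"fitted D_a(0 deg) = {D_a0_fit:.3f} nm")
```

The same projection against sin θ would recover D_b^(1)(90°) from the orthogonal component.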
Figure S10: Calculations of the Berry connection polarizability tensor and the field-induced Berry curvature dipole in WTe2. a-c, The calculated distribution of the Berry connection polarizability tensor elements (a) G_aa, (b) G_bb, (c) G_ab in the k_z = 0 plane of the Brillouin zone for the occupied bands. The unit of the BCP is Å²·V⁻¹. The grey lines depict the Fermi surface.
d, The calculated field-induced BCD D_a^(1)(0°) and D_b^(1)(90°) as a function of the chemical potential μ at E^dc = 3 kV/m. In the calculations, the finite-temperature effect is included through a broadening of k_B T at 5 K.

Supplemental Note 7: Electric field dependence of the second-order Hall signals.

The second-harmonic I-V characteristics in Fig. 1(e) of the main text are converted into V_H^{2ω} versus (V^ω)² in Fig. S11(a), where linear relationships are observed. The second-order Hall signal E_H^{2ω}/(E^ω)² as a function of the applied E^dc is further calculated and presented in Fig. S11(b).

Figure S11: Second-order AHE modulated by the d.c. electric field at 5 K. a, The second-harmonic Hall voltage V_H^{2ω} as a function of (V^ω)², with E^dc along the b axis and E^ω along the −a axis. b, The second-order Hall signal E_H^{2ω}/(E^ω)² as a function of E^dc at θ = 0° and θ = 90°, with E^ω ∥ −a axis.

Supplemental Note 8: Control experiments in device S2.

To demonstrate the symmetry constraint in WTe2, control experiments were carried out in device S2. As schematically shown in Figs. S12(a) and S12(d), a.c. and d.c. current sources are applied simultaneously. The SR830 acts as an effective a.c. current source when a resistor is connected in series, giving an output impedance of 10 kΩ; the d.c. source is a Keithley current source with an output impedance of ~20 MΩ. For the d.c. field applied along the a and b axes, respectively, the first-harmonic Hall voltage shows no obvious dependence on E^dc, as shown in Figs. S12(b) and S12(e), which indicates the independence of the two electric sources. When applying E^dc ∥ E^ω ∥ a axis, no second-order nonlinear Hall effect is observed [Fig. S12(c)]. Nevertheless, upon applying E^dc ⊥ E^ω with E^ω ∥ a axis, as shown in Fig. S12(f), a nonzero second-order nonlinear Hall effect emerges due to the E^dc-induced Berry curvature dipole along the a axis.

Figure S12: Measurements applying both a d.c. electric field E^dc and an a.c. current in device S2 at 1.8 K. a, Schematic of the measurement configuration for (b) and (c).
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='2 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='7 V2\uf077 ⊥ (mV) I\uf077 (mA) E\uf077 ⊥ Edc 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 1 Edc \uf07c\uf07c (104 V/m) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 V2\uf077 ⊥ (mV) I\uf077 (mA) E\uf077 \uf07c\uf07c Edc 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='2 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 1 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 Edc (104 V/m) 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='2 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='2 V\uf077 H (mV) I\uf077 (mA) E\uf077 ⊥ Edc 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 1 1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 Edc (104 V/m) 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 V\uf077 H (mV) I\uf077 (mA) E\uf077 \uf07c\uf07c Edc SR830 voltage source Keithley 2400 current source SR830 voltage source Keithley 2400 current source b a b a (a) (b) (c) (d) (e) (f) 23 b, First-harmonic Hall voltage 𝑉𝐻 𝜔 under 𝐄𝑑𝑐 ∥ 𝐄𝜔 ∥ 𝑎 axis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' c, There is no clear second-harmonic Hall voltage 𝑉𝐻 2𝜔 under 𝐄𝑑𝑐 ∥ 𝐄𝜔 ∥ 𝑎 axis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' d, Schematic of the measurement configuration for (e) and (f).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' e, The 𝑉𝐻 𝜔 under various 𝐄𝑑𝑐 with 𝐄𝑑𝑐 ⊥ 𝐄𝜔 and 𝐄𝜔 ∥ 𝑎 axis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' f, The 𝑉𝐻 2𝜔 under various 𝐄𝑑𝑐 with 𝐄𝑑𝑐 ⊥ 𝐄𝜔 and 𝐄𝜔 ∥ 𝑎 axis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 24 Supplemental Note 9: Discussions of other possible origins of the second order AHE.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 1) Diode effect.' 
An accidental diode formed at a contact can lead to rectification and hence higher-order transport. This can, however, be safely ruled out in this work for the following reasons: (a) Extrinsic signals of this origin should be strongly contact dependent, so their angle dependence should also be tied to the extrinsic contacts. Nevertheless, the angle dependence of the second-order AHE in Fig. 2 and Fig. S12 is well consistent with the inherent symmetry of WTe2, which excludes such extrinsic origins. (b) Two-terminal d.c. measurements for all the diagonal electrodes show linear I-V characteristics, as shown in Fig. S13(a), excluding the existence of a diode effect. Linear fittings were performed for the two-terminal I-V curves. The R-square of the linear fittings is larger than 0.99997, indicating nearly perfect linearity. Further, the deviation from linearity was analyzed by subtracting the linear part, as shown in Fig. S13(b). The deviation ∆Vdc is four orders of magnitude smaller than the original Vdc, indicating a negligible nonlinearity. Moreover, ∆Vdc shows no obvious current or angle dependence (Fig. S13(b)), and its magnitude is also much smaller than that of the higher-harmonic Hall voltages (Fig. S13(c)), further indicating that the higher-order transport observed in this work cannot be attributed to a contact-induced diode effect.

Figure S13: Two-terminal d.c. measurements at 5 K in device S1. a, Current-voltage curves from two-terminal d.c. measurements for all the diagonal electrodes. b, The current dependence of ∆Vdc, that is, the deviation from linearity of the current-voltage curves in Fig. S13(a). c, Comparison of ∆Vdc, V_H^2ω and V_H^3ω. For ∆Vdc and V_H^3ω, the excitation current is applied at θ = 30°; for V_H^2ω, the excitation current is applied along the a axis and a d.c. field of 3 kV/m is applied at θ = 30°.

2) Capacitive effect. Contact resistance between the metal electrodes and two-dimensional materials is generally inevitable and could induce an accidental capacitive effect, resulting in a higher-order transport signal. Here, the second-order AHE shows a negligible dependence on frequency, as shown in Fig. S14(a), excluding the capacitive effect. The phase of the second-harmonic Hall voltage was also investigated: the Y signal dominates over the X signal (Fig. S14(b)).
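The phase check reduces to combining the two lock-in quadratures. A minimal sketch with hypothetical X/Y readings (not the measured data): a genuine second-harmonic response sits almost entirely in one quadrature, so the phase atan2(Y, X) stays close to ±90°.

```python
import numpy as np

# Hypothetical lock-in outputs of the second-harmonic Hall voltage (mV).
# These numbers are illustrative, not the measured values.
X = np.array([0.02, 0.03, 0.05])   # in-phase component (small)
Y = np.array([-0.9, -1.6, -2.4])   # quadrature component (dominant)

phase = np.degrees(np.arctan2(Y, X))           # phase of V_H^2w in degrees
assert np.all(np.abs(np.abs(phase) - 90) < 5)  # |phase| close to 90 degrees
```

A strongly frequency-dependent phase drifting away from ±90° would instead point to a capacitive (RC) artifact.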
The phase of the second-harmonic Hall voltage is approximately ±90°, as shown in Fig. S14(c). These features further exclude the capacitive effect.

Figure S14: Frequency dependence and phase of the second-order AHE in device S1 at 5 K with E_dc = 3 kV/m at θ = 60°. a, The second-order Hall signals at different frequencies. b, The X and Y signals of the second-order Hall voltages. c, The absolute value of the phase of the second-order Hall voltages.

3) Thermal effect.
The thermal effect can also induce a second-order signal [41]. If the observed nonlinear Hall effect originated from a thermal effect, it should respond to both longitudinal and transverse d.c. electric fields. However, as shown in Fig. S12, when applying E_dc ∥ E^ω ∥ a axis, no second-order nonlinear Hall effect is observed, whereas upon applying E^ω ⊥ E_dc a nonzero second-order nonlinear Hall effect emerges. This observation is clearly inconsistent with a thermal effect. Moreover, the observed second-order nonlinear Hall effect shows strong anisotropy, as shown in Fig. 2 of the main text. The angle dependence of the d.c. field-induced second-order Hall effect is well consistent with the inherent symmetry of WTe2, which cannot be explained by a thermal effect.

4) Thermoelectric effect. A Joule-heating-induced temperature gradient across the sample can drive a thermoelectric voltage, leading to a second-order nonlinear Hall effect. This thermoelectric effect can also be excluded for the following reasons: (a) Uniform Joule heating will not induce a temperature gradient and thus no thermoelectric voltage across the sample. (b) To generate a thermoelectric voltage, the Joule heating would have to couple to an external asymmetry, such as a contact junction or the flake shape, which should be unrelated to the inherent symmetry of WTe2. However, the anisotropy of the second-order nonlinear Hall effect is well consistent with the inherent symmetry analysis, as shown in Fig. 2 of the main text.

5) A residue of the first-harmonic Hall response V_H^ω. The influence of V_H^ω on V_H^2ω can be ruled out because the first- and second-harmonic signals show different dependences on the d.c. electric field. As shown in Fig. S15, the I-V curves of the first-harmonic Hall signal (V_H^ω) under E_dc = ±3 kV/m overlap with each other. By comparison, the second-harmonic Hall signal (V_H^2ω) shows an anti-symmetric dependence on E_dc: the sign of V_H^2ω changes upon changing the sign of E_dc. This indicates that the presence of the first-order signal V_H^ω does not affect the measurements of the second-order signal V_H^2ω.
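This symmetry criterion can be checked numerically: the part of V_H^2ω that is anti-symmetric under E_dc → -E_dc is the genuine second-order response, while any symmetric residue would indicate first-harmonic leakage. A minimal sketch with hypothetical values (not the measured data):

```python
import numpy as np

# Hypothetical V_H^2w traces (mV) at opposite d.c. fields; illustrative only.
V2w_pos = np.array([0.5, 1.0, 1.5])    # at E_dc = +3 kV/m
V2w_neg = np.array([-0.5, -1.0, -1.5])  # at E_dc = -3 kV/m

antisym = 0.5 * (V2w_pos - V2w_neg)  # anti-symmetric part: the genuine signal
sym = 0.5 * (V2w_pos + V2w_neg)      # symmetric part: would flag V_H^w leakage
assert np.allclose(sym, 0.0)         # fully anti-symmetric in this sketch
```

The same decomposition applied to V_H^ω would return a purely symmetric result, since the first-harmonic curves at ±E_dc overlap.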
Figure S15: The first- and second-harmonic signals at 5 K with E_dc along the b axis (θ = 0°) and E^ω along the -a axis. a, The first-harmonic Hall voltage V_H^ω as a function of I^ω at E_dc = ±3 kV/m. b, The second-harmonic Hall voltage V_H^2ω.

6) Trivial effect of the d.c. source. We measured the first-harmonic longitudinal voltage upon applying E_dc = 3 kV/m, as shown in Fig. S16. When the sign of the d.c. electric field is reversed, the I-V curves overlap with each other. The results show that the d.c. source does not affect the a.c. measurements.

Figure S16: The first-harmonic longitudinal voltage versus current under different d.c. electric fields at 5 K. E^ω and E_dc are along the a axis.

7) Longitudinal nonlinearity originating from a circuit artifact.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' We have measured both the second-harmonic Hall and longitudinal voltage at all the angles, as shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' S17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' The measurement configuration is shown in the inset of Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' S17(d) with d.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' field applied at angle 𝜃.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' It is clearly found that the Hall nonlinearity is dominated over longitudinal one, which guarantees that the observed second-order Hall effect doesn’t originate from the longitudinal nonlinearity induced by a circuit artifact.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='01 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='02 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='03 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='04 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='05 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='8 V\uf077 xx (mV) I\uf077 (mA) Edc (kV/m) 3 3 29 Figure S17: The second-harmonic Hall 𝑽𝑯 𝟐𝝎 and longitudinal voltage 𝑽𝑳 𝟐𝝎 with 𝐄𝝎 ∥ −𝒂 axis and 𝐄𝒅𝒄 = 𝟏.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 𝟓 𝐤𝐕/𝐦 along different angles at 5 K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' The angle 𝜽 is defined in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 1(d) of main text.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='01 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='02 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='03 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='04 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='05 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='6 V2\uf077 H V2\uf077 L V2\uf077 (mV) I\uf077 (mA) 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='01 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='02 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='03 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='04 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='05 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 1 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 2 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 V2\uf077 H V2\uf077 L V2\uf077 (mV) I\uf077 (mA) 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='01 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='02 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='03 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='04 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='05 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 1 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 2 V2\uf077 H V2\uf077 L V2\uf077 (mV) I\uf077 (mA) 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='01 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='02 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='03 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='04 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='05 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 1 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 2 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 V2\uf077 H V2\uf077 L V2\uf077 (mV) I\uf077 (mA) 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='01 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='02 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='03 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='04 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='05 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 2 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 0 V2\uf077 H V2\uf077 L V2\uf077 (mV) I\uf077 (mA) 0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='01 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='02 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='03 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='04 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='05 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content='5 0 V2\uf077 H V2\uf077 L V2\uf077 (mV) I\uf077 (mA) a b (a) (b) (c) (d) (e) (f) 30 Supplemental Note 10: Angle dependence of parameter C0 obtained from the fittings of scaling law.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' The second-order Hall signal EH 2ω (𝐸𝜔)2 is found to satisfy scaling law EH 2ω (𝐸𝜔)2 = 𝐶0 + 𝐶1𝜎𝑥𝑥 + 𝐶2𝜎𝑥𝑥 2 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' For 𝐄𝑑𝑐 = 3 kV/m with a fixed direction (angle 𝜃), a set of curves of VH 2ω vs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Iω is measured at different temperatures as Iω is applied along -a axis and b axis, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NAzT4oBgHgl3EQf9f64/content/2301.01921v1.pdf'} +page_content=' Through varying temperature, the 𝜎𝑥𝑥 is changed accordingly.' 
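+Extracting C0 from such a temperature sweep amounts to a least-squares fit of a quadratic in σ_xx. A minimal stdlib-only Python sketch of that fit (the σ_xx values and coefficients below are synthetic, purely for illustration):

```python
# Fit E_H^2w/(E^w)^2 = C0 + C1*s + C2*s^2 against conductivity s = sigma_xx,
# using the 3x3 normal equations solved by Gaussian elimination (stdlib only).

def fit_quadratic(xs, ys):
    """Return (C0, C1, C2) minimising sum((C0 + C1*x + C2*x^2 - y)^2)."""
    # Normal equations A c = b with A[i][j] = sum(x^(i+j)), b[i] = sum(y * x^i)
    S = [sum(x ** k for x in xs) for k in range(5)]
    A = [[S[i + j] for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    # Forward elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return tuple(coef)

# Synthetic sigma_xx sweep, as if obtained by varying temperature
sigma = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
signal = [0.4 - 0.2 * s + 0.05 * s ** 2 for s in sigma]
C0, C1, C2 = fit_quadratic(sigma, signal)
print(round(C0, 3), round(C1, 3), round(C2, 3))  # ≈ 0.4 -0.2 0.05
```

In the experiment the fit is repeated for each angle θ, and the resulting C0(θ) is what Fig. S18 plots.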
+Therefore, for a fixed angle θ, the relationship between E_H^2ω/(E^ω)^2 and σ_xx is plotted. By fitting the experimental data, the parameter C0 is then obtained and presented in Fig. S18.
+Figure S18: Angle dependence of the coefficient C0. a,b, The coefficient C0 (in units of 10^-7 m/V) as a function of θ with the amplitude of E_dc fixed at 3 kV/m for (a) E^ω ∥ -a axis and (b) E^ω ∥ b axis.
diff --git a/6tE0T4oBgHgl3EQffABm/vector_store/index.faiss b/6tE0T4oBgHgl3EQffABm/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..6d9efcec8ddfac387049284b7c0dfdfa2586f025
--- /dev/null
+++ b/6tE0T4oBgHgl3EQffABm/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:335f0a36ae23f1ee3ded154094f80a2640b9752e45c32b0faaef44668d8d19e2
+size 2293805
diff --git a/6tE3T4oBgHgl3EQfpwpy/content/2301.04645v1.pdf b/6tE3T4oBgHgl3EQfpwpy/content/2301.04645v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1447200275fb324ab7e8cfa80c30eda3ff3abe08
--- /dev/null
+++ b/6tE3T4oBgHgl3EQfpwpy/content/2301.04645v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c914e43f38b796da326874a8042b66b391cec9dc61d6ad1b1ffa68fb152afad7
+size 142542
diff --git a/6tE3T4oBgHgl3EQfpwpy/vector_store/index.faiss b/6tE3T4oBgHgl3EQfpwpy/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..3f6e023c6a99b54745954185662f28084239b365
--- /dev/null
+++ b/6tE3T4oBgHgl3EQfpwpy/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ea1e08d426c6ee9124641cce4c0d346268698fef43cfe175eee5daeecaeed07
+size 720941
diff --git a/6tE3T4oBgHgl3EQfpwpy/vector_store/index.pkl b/6tE3T4oBgHgl3EQfpwpy/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..1b60e36eba3c5c0059983421ada8eca5f5c4fbcf
--- /dev/null
+++ b/6tE3T4oBgHgl3EQfpwpy/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid
sha256:0379813fc11359e2808844f7774d8762bf6ac05e43dfc36e6bf7a3c57aa247a2
+size 32934
diff --git a/7NE3T4oBgHgl3EQfqApm/vector_store/index.pkl b/7NE3T4oBgHgl3EQfqApm/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..63abcca1528f2d9d92d133a379c0330f0fdf0da8
--- /dev/null
+++ b/7NE3T4oBgHgl3EQfqApm/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:822da5373ee8d6e9a254350664f96860eec18ffac562e3d8e9cc023e2955c75e
+size 255837
diff --git a/89AyT4oBgHgl3EQfQ_ZE/content/tmp_files/2301.00056v1.pdf.txt b/89AyT4oBgHgl3EQfQ_ZE/content/tmp_files/2301.00056v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..6e08540130601d3bac8d810b432ac339dd632483
--- /dev/null
+++ b/89AyT4oBgHgl3EQfQ_ZE/content/tmp_files/2301.00056v1.pdf.txt
@@ -0,0 +1,372 @@
+MNRAS 000, 1–4 (2022)
+Preprint 30 December 2022
+Compiled using MNRAS LATEX style file v3.0
+A Bayesian Neural Network Approach to identify Stars and AGNs observed by XMM Newton ★
+Sarvesh Gharat,1† and Bhaskar Bose2
+1 Centre for Machine Intelligence and Data Science, Indian Institute of Technology Bombay, 400076, Mumbai, India
+2 Smart Mobility Group, Tata Consultancy Services, 560067, Bangalore, India
+Accepted XXX. Received YYY; in original form ZZZ
+ABSTRACT
+In today's era, a tremendous amount of data is generated by different observatories, and manual classification of these data is practically impossible. Hence, multiple machine and deep learning techniques are used to classify and categorize the objects. However, such predictions are overconfident and cannot identify whether the data actually belong to a trained class. To solve this major problem of overconfidence, in this study we propose a novel Bayesian Neural Network, which randomly samples weights from a distribution, as opposed to the fixed weight vector considered in the frequentist approach.
+The study involves the classification of Stars and AGNs observed by XMM Newton. For testing purposes, we additionally consider CV, Pulsar, ULX, and LMX sources alongside Stars and AGNs; the algorithm largely refuses to classify these additional objects, in contrast to frequentist approaches, wherein such objects are confidently predicted as either Stars or AGNs. The proposed algorithm is one of the first instances of Bayesian Neural Networks being used in observational astronomy. Additionally, we apply our algorithm to identify Stars and AGNs in the whole XMM-Newton DR11 catalogue. The algorithm identifies 62807 data points as AGNs and 88107 data points as Stars with sufficient confidence. In all other cases, the algorithm refuses to make predictions due to high uncertainty and hence reduces the error rate.
+Key words: methods: data analysis – methods: observational – methods: miscellaneous
+1 INTRODUCTION
+Over the last few decades, a large amount of data has been regularly generated by different observatories and surveys. The classification of this enormous amount of data by professional astronomers is time-consuming as well as practically impossible. To make the process simpler, various citizen science projects (Desjardins et al. 2021) (Cobb 2021) (Allf et al. 2022) (Faherty et al. 2021) have been introduced, which reduce the required time to some extent. However, there are many instances wherein classifying the objects is not simple and may require domain expertise.
+In this modern era, wherein Machine Learning and Neural Networks are widely used in multiple fields, there has been significant development in the use of these algorithms in Astronomy. Though these algorithms are accurate in their predictions, there is certainly some overconfidence (Kristiadi et al. 2020) (Kristiadi et al. 2021) associated with them.
+Besides that, these algorithms tend to classify every input as one of the trained classes (Beaumont & Haziza 2022), irrespective of whether it actually belongs to those classes; e.g., an algorithm trained to classify stars will also predict AGNs as stars. To solve this major issue, in this study we propose a Bayesian Neural Network (Jospin et al. 2022) (Charnock et al. 2022) which refuses to make a prediction whenever it is not confident about its predictions.
+★ Based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA
+† E-mail: sarveshgharat19@gmail.com
+The proposed algorithm is implemented on the data collected by XMM-Newton (Jansen et al. 2001). We perform a binary classification to separate Stars and AGNs (Małek et al. 2013) (Golob et al. 2021). Additionally, to test our algorithm on inputs that do not belong to the trained classes, we consider data observed from CV, Pulsar, ULX, and LMX sources. Although the algorithm does not refuse to predict all of these objects, the number of objects it predicts for these 4 classes is far smaller than for the trained classes.
+For the trained classes, the algorithm gives predictions for almost 64% of the data points and avoids predicting the output whenever it is not confident. The achieved accuracy in this binary classification task, whenever the algorithm gives a prediction, is 98.41%. On the other hand, only 14.6% of the out-of-class data points are predicted as one of the trained classes. This decrease from 100% to 14.6% on such inputs is what distinguishes our model from frequentist algorithms.
+2 METHODOLOGY
+In this section, we discuss the methodology used to perform this study. It is divided into the following subsections.
+• Data Collection and Feature Extraction
+• Model Architecture
+• Training and Testing
+© 2022 The Authors
+S. Gharat et al.
+Table 1. Catalogues used to create labelled data:
+AGN: VERONCAT (Véron-Cetty & Véron 2010)
+LMX: NGC3115CXO (Lin et al. 2015), RITTERLMXB (Ritter & Kolb 2003), LMXBCAT (Liu et al. 2007), INTREFCAT (Ebisawa et al. 2003), M31XMMXRAY (Stiele et al. 2008), M31CFCXO (Hofmann et al. 2013), RASS2MASS (Haakonsen & Rutledge 2009)
+Pulsars: ATNF (Manchester et al. 2005), FERMIL2PSR (Abdo et al. 2013)
+CV: CVC (Drake et al. 2014)
+ULX: XSEG (Drake et al. 2014)
+Stars: CSSC (Skiff 2014)
+Table 2. Data distribution (training / test) after cross-matching all the data points with the catalogues mentioned in Table 1:
+AGN: 8295 / 2040; LMX: 0 / 49; Pulsars: 0 / 174; CV: 0 / 36; ULX: 0 / 261; Stars: 6649 / 1628; Total: 14944 / 4188
+2.1 Data Collection and Feature Extraction
+In this study, we make use of the data provided in "XMM-DR11 SEDs" (Webb et al. 2020). We further cross-match the collected data with different VizieR (Ochsenbein et al. 2000) catalogues; refer to Table 1 for all the catalogues used in this study. As the proposed algorithm is a supervised Bayesian algorithm, this happens to be one of the important steps for our algorithm to work.
+The provided data have 336 different features, which can increase computational complexity to a large extent, and also contain many missing data points. Therefore, in this study we consider a set of 18 features corresponding to each observed source. The considered features for all the sources are available in our GitHub repository, and more information is available on the official webpage 1 of the observatory. After cross-matching and reducing the number of features, we were left with a total of 19136 data points. The data distribution can be seen in Table 2.
We further plot the sources (refer Figure 1) based on their "RA" and "Dec" to confirm that the sky coverage of the considered sources matches the actual coverage of the telescope.
+1 http://xmmssc.irap.omp.eu/Catalogue/4XMM-DR11/col_unsrc.html
+Figure 1. Sky map coverage of the considered data points.
+The collected data are further split into training and test sets in an 80:20 ratio. The exact numbers of data points are given in Table 2.
+2.2 Model Architecture
+The proposed model has one input, one hidden, and one output layer (refer Figure 2) with 18, 512, and 2 neurons respectively. The input layer has 18 neurons because 18 input features are considered in this study. Further, to increase the non-linearity of the output, we use "ReLU" (Fukushima 1975) (Agarap 2018) as the activation function for the first 2 layers. The output layer uses "Softmax" to make the predictions, so that the output of the model is the probability of the input belonging to a particular class (Nwankpa et al. 2018) (Feng & Lu 2019).
+The optimizer and loss used in this study are "Adam" (Kingma et al. 2020) and "Trace ELBO" (Wingate & Weber 2013) (Ranganath et al. 2014) respectively. The overall idea of a BNN (Izmailov et al. 2021) (Jospin et al. 2022) (Goan & Fookes 2020) is to have a posterior distribution corresponding to all weights and biases such that the output distribution produced by these posteriors is similar to the categorical distribution defined by the training dataset. Hence, convergence can be achieved by minimizing the KL divergence between the output and the categorical distribution, or equivalently by maximizing the ELBO (Wingate & Weber 2013) (Ranganath et al. 2014). We make use of normal distributions initialized with random mean and variance as priors (Fortuin et al.
2021), along with the likelihood derived from the data, to construct the posterior distribution.
+2.3 Training and Testing
+The proposed model is constructed using PyTorch (Paszke et al. 2019) and Pyro (Bingham et al. 2019). The training of the model is conducted on Google Colaboratory, making use of an NVIDIA K80 GPU (Carneiro et al. 2018). The model is trained over 2500 epochs with a learning rate of 0.01. Both these parameters, i.e. the number of epochs and the learning rate, have to be tuned, which is done by iterating the algorithm multiple times with varying parameter values.
+The algorithm is then asked to make 100 predictions corresponding to every sample in the test set. Every time it makes a prediction, the corresponding prediction probability varies, due to the random sampling of weights and biases from the trained distributions. The algorithm then considers the "mean" and "standard deviation" of those probabilities to decide whether to proceed with classification or not.
+Figure 2. Model Architecture.
+Table 3. Confusion matrix for the classified data points:
+          AGN   Stars
+AGN      1312       6
+Stars      31     986
+Table 4. Classification report for the classified data points:
+AGN: precision 0.99, recall 0.97, F1 score 0.98; Stars: precision 0.97, recall 0.99, F1 score 0.98; Average: precision 0.98, recall 0.98, F1 score 0.98
+3 RESULTS AND DISCUSSION
+The proposed algorithm is one of the initial attempts to implement Bayesian Neural Networks in observational astronomy, and it has shown significant results. The algorithm gives predictions with an accuracy of more than 98% whenever it agrees to make predictions for the trained classes.
+Table 3 shows the confusion matrix of the classified data. To calculate accuracy, we make use of the given formula.
+Accuracy = (a11 + a22) / (a11 + a12 + a21 + a22) × 100
+In our case, the calculated accuracy is
+Accuracy = (1312 + 986) / (1312 + 6 + 31 + 986) × 100 = 98.4%
+As accuracy is not the only measure to evaluate a classification model, we further calculate the precision, recall, and F1 score for both classes, as shown in Table 4.
+Although the results obtained from this simpler BNN could also be obtained via complex frequentist models, the uniqueness of the algorithm is that it agrees to classify only about 14% of the unknown classes as one of the trained classes, as opposed to frequentist approaches, wherein all those samples are classified as one of these classes. Table 5 shows the percentage of data from the untrained classes that is predicted as a Star or an AGN.
+Table 5. Percentage of misidentified data points:
+          AGN     Star
+CV       13.8 %    0 %
+Pulsars   2.3 %    6.3 %
+ULX      14.9 %    6.5 %
+LMX       2 %     26.5 %
+Total     9.4 %    7.8 %
+As the algorithm gives significant results on labelled data, we use it to identify possible Stars and AGNs in the raw data 2. The algorithm identifies almost 7.1% of the data as AGNs and 10.04% of the data as Stars; numerically, these numbers happen to be 62807 and 88107 respectively. Although there is a high probability that more Stars and AGNs exist than these numbers suggest, the algorithm simply refuses to give a prediction when it is not confident enough.
+4 CONCLUSIONS
+In this study, we propose a Bayesian approach to identify Stars and AGNs observed by XMM Newton. The proposed algorithm avoids making predictions whenever it is unsure about them. Implementing such algorithms will help reduce the number of wrong predictions, which is one of the major drawbacks of algorithms using the frequentist approach. This is important to consider, as there always exists a situation wherein the algorithm receives an input on which it was never trained.
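+The refuse-to-predict step described in Section 2.3 (take 100 stochastic forward passes, then accept a classification only when the sampled probabilities are both confident and consistent) can be sketched as follows. The two thresholds are illustrative assumptions; the paper does not state the exact decision rule:

```python
import random
import statistics

def mc_predict(sample_probs, conf_threshold=0.9, std_threshold=0.05):
    """Decide whether to classify a source, given repeated stochastic
    predictions (here, P(AGN) from each forward pass of a Bayesian network).

    Returns "AGN", "Star", or None when the network refuses to predict.
    Both thresholds are illustrative; the paper does not specify them.
    """
    mean_p = statistics.mean(sample_probs)
    std_p = statistics.stdev(sample_probs)
    if std_p > std_threshold:       # sampled weights disagree -> refuse
        return None
    if mean_p >= conf_threshold:
        return "AGN"
    if mean_p <= 1.0 - conf_threshold:
        return "Star"
    return None                     # confident in neither class -> refuse

random.seed(0)
# An in-distribution source: 100 draws tightly clustered near P(AGN) = 0.97
confident = [min(1.0, max(0.0, random.gauss(0.97, 0.01))) for _ in range(100)]
print(mc_predict(confident))   # AGN

# An out-of-distribution source (e.g. a pulsar): draws scattered around 0.5
scattered = [min(1.0, max(0.0, random.gauss(0.5, 0.2))) for _ in range(100)]
print(mc_predict(scattered))   # None
```

The refusal path is what keeps out-of-class sources (CV, Pulsar, ULX, LMX) from being forced into the Star/AGN dichotomy.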
The proposed algorithm also identifies 62807 AGNs and 88107 Stars in data release 11 of XMM-Newton.

5 CONFLICT OF INTEREST

The authors declare that they have no conflict of interest.

DATA AVAILABILITY

The raw data used in this study is made publicly available by the XMM-Newton data archive. All the code corresponding to the algorithm, along with the predicted objects and their predictions, will be made publicly available on "Github" and "paperswithcode" by June 2023.

REFERENCES

Abdo A., et al., 2013, The Astrophysical Journal Supplement Series, 208, 17
Agarap A. F., 2018, arXiv preprint arXiv:1803.08375
Allf B. C., Cooper C. B., Larson L. R., Dunn R. R., Futch S. E., Sharova M., Cavalier D., 2022, BioScience, 72, 651
Beaumont J.-F., Haziza D., 2022, Canadian Journal of Statistics
Bingham E., et al., 2019, The Journal of Machine Learning Research, 20, 973

² http://xmmssc.irap.omp.eu/Catalogue/4XMM-DR11/col_unsrc.html

S. Gharat et al.

Carneiro T., Da Nóbrega R. V. M., Nepomuceno T., Bian G.-B., De Albuquerque V. H. C., Reboucas Filho P. P., 2018, IEEE Access, 6, 61677
Charnock T., Perreault-Levasseur L., Lanusse F., 2022, in Artificial Intelligence for High Energy Physics. World Scientific, pp 663–713
Cobb B., 2021, in Astronomical Society of the Pacific Conference Series. p. 415
Desjardins R., Pahud D., Doerksen N., Laczko M., 2021, in Astronomical Society of the Pacific Conference Series. p. 23
Drake A., et al., 2014, Monthly Notices of the Royal Astronomical Society, 441, 1186
Ebisawa K., Bourban G., Bodaghee A., Mowlavi N., 2003, Astronomy & Astrophysics, 411, L59
Faherty J. K., et al., 2021, The Astrophysical Journal, 923, 48
Feng J., Lu S., 2019, in Journal of Physics: Conference Series. p. 022030
Fortuin V., Garriga-Alonso A., Ober S. W., Wenzel F., Rätsch G., Turner R.
E., van der Wilk M., Aitchison L., 2021, arXiv preprint arXiv:2102.06571
Fukushima K., 1975, Biological Cybernetics, 20, 121
Goan E., Fookes C., 2020, in Case Studies in Applied Bayesian Data Science. Springer, pp 45–87
Golob A., Sawicki M., Goulding A. D., Coupon J., 2021, Monthly Notices of the Royal Astronomical Society, 503, 4136
Haakonsen C. B., Rutledge R. E., 2009, The Astrophysical Journal Supplement Series, 184, 138
Hofmann F., Pietsch W., Henze M., Haberl F., Sturm R., Della Valle M., Hartmann D. H., Hatzidimitriou D., 2013, Astronomy & Astrophysics, 555, A65
Izmailov P., Vikram S., Hoffman M. D., Wilson A. G. G., 2021, in International Conference on Machine Learning. pp 4629–4640
Jansen F., et al., 2001, Astronomy & Astrophysics, 365, L1
Jospin L. V., Laga H., Boussaid F., Buntine W., Bennamoun M., 2022, IEEE Computational Intelligence Magazine, 17, 29
Kingma D. P., Ba J., 2020, arXiv preprint arXiv:1412.6980, 106
Kristiadi A., Hein M., Hennig P., 2020, in International Conference on Machine Learning. pp 5436–5446
Kristiadi A., Hein M., Hennig P., 2021, Advances in Neural Information Processing Systems, 34, 18789
Lin D., et al., 2015, The Astrophysical Journal, 808, 19
Liu Q., Van Paradijs J., Van Den Heuvel E., 2007, Astronomy & Astrophysics, 469, 807
Małek K., et al., 2013, Astronomy & Astrophysics, 557, A16
Manchester R. N., Hobbs G. B., Teoh A., Hobbs M., 2005, The Astronomical Journal, 129, 1993
Nwankpa C., Ijomah W., Gachagan A., Marshall S., 2018, arXiv preprint arXiv:1811.03378
Ochsenbein F., Bauer P., Marcout J., 2000, Astronomy and Astrophysics Supplement Series, 143, 23
Paszke A., et al., 2019, Advances in Neural Information Processing Systems, 32
Ranganath R., Gerrish S., Blei D., 2014, in Artificial Intelligence and Statistics.
pp 814–822
Ritter H., Kolb U., 2003, Astronomy & Astrophysics, 404, 301
Skiff B., 2014, VizieR Online Data Catalog, pp B–mk
Stiele H., Pietsch W., Haberl F., Freyberg M., 2008, Astronomy & Astrophysics, 480, 599
Véron-Cetty M.-P., Véron P., 2010, Astronomy & Astrophysics, 518, A10
Webb N., et al., 2020, Astronomy & Astrophysics, 641, A136
Wingate D., Weber T., 2013, arXiv preprint arXiv:1301.1299

This paper has been typeset from a TEX/LATEX file prepared by the author.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Hence, to classify and categorize the objects there are multiple machine and deep learning techniques used.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' However, these predictions are overconfident and won’t be able to identify if the data actually belongs to the trained class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' To solve this major problem of overconfidence, in this study we propose a novel Bayesian Neural Network which randomly samples weights from a distribution as opposed to the fixed weight vector considered in the frequentist approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The study involves the classification of Stars and AGNs observed by XMM Newton.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' However, for testing purposes, we consider CV, Pulsars, ULX, and LMX along with Stars and AGNs which the algorithm refuses to predict with higher accuracy as opposed to the frequentist approaches wherein these objects are predicted as either Stars or AGNs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The proposed algorithm is one of the first instances wherein the use of Bayesian Neural Networks is done in observational astronomy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Additionally, we also make our algorithm to identify stars and AGNs in the whole XMM-Newton DR11 catalogue.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The algorithm almost identifies 62807 data points as AGNs and 88107 data points as Stars with enough confidence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' In all other cases, the algorithm refuses to make predictions due to high uncertainty and hence reduces the error rate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Key words: methods: data analysis – methods: observational – methods: miscellaneous 1 INTRODUCTION Since the last few decades, a large amount of data is regularly generated by different observatories and surveys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The classification of this enormous amount of data by professional astronomers is time-consuming as well as practically impossible.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' To make the process simpler, various citizen science projects (Desjardins et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2021) (Cobb 2021) (Allf et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2022) (Faherty et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2021) are introduced which has been reducing the required time by some extent.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' However, there are many instances wherein classifying the objects won’t be simple and may require domain expertise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' In this modern era, wherein Machine Learning and Neural Net- works are widely used in multiple fields, there has been significant development in the use of these algorithms in Astronomy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Though these algorithms are accurate with their predictions there is certainly some overconfidence (Kristiadi et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2020) (Kristiadi et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2021) associated with it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Besides that, these algorithms tend to classify every input as one of the trained classes (Beaumont & Haziza 2022) irrespective of whether it actually belongs to those trained classes eg: The algorithm trained to classify stars will also predict AGNs as one of the stars.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' To solve this major issue, in this study we propose a Bayesian Neural Network (Jospin et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2022) (Charnock et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2022) ★ Based on observations obtained with XMM-Newton, an ESA science mis- sion with instruments and contributions directly funded by ESA Member States and NASA † E-mail: sarveshgharat19@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='com which refuses to make a prediction whenever it isn’t confident about its predictions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The proposed algorithm is implemented on the data collected by XMM-Newton (Jansen et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2001).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' We do a binary classification to classify Stars and AGNs (Małek et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2013) (Golob et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Additionally to test our algorithm with the inputs which don’t belong to the trained class we consider data observed from CV, Pulsars, ULX, and LMX.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Although, the algorithm doesn’t refuse to predict all these objects, but the number of objects it predicts for these 4 classes is way smaller than that of trained classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' For the trained classes, the algorithm gives its predictions for al- most 64% of the data points and avoids predicting the output when- ever it is not confident about its predictions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The achieved accuracy in this binary classification task whenever the algorithm gives its prediction is 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='41%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' On the other hand, only 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='6% of the incor- rect data points are predicted as one of the classes by the algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The percentage decrease from 100% to 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='6% in the case of different inputs is what dominates our model over other frequentist algorithms.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2 METHODOLOGY In this section, we discuss the methodology used to perform this study.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' This section is divided into the following subsections.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Data Collection and Feature Extraction Model Architecture Training and Testing © 2022 The Authors 2 S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Gharat et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Class Catalogue AGN VERONCAT (Véron-Cetty & Véron 2010) LMX NGC3115CXO (Lin et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2015) RITTERLMXB (Ritter & Kolb 2003) LMXBCAT (Liu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2007) INTREFCAT (Ebisawa et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2003) M31XMMXRAY (Stiele et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2008) M31CFCXO (Hofmann et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2013) RASS2MASS (Haakonsen & Rutledge 2009) Pulsars ATNF (Manchester et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2005) FERMIL2PSR (Abdo et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2013) CV CVC (Drake et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2014) ULX XSEG (Drake et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2014) Stars CSSC (Skiff 2014) Table 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Catalogues used to create labeled data Class Training Data Test Data AGN 8295 2040 LMX 0 49 Pulsars 0 174 CV 0 36 ULX 0 261 Stars 6649 1628 Total 14944 4188 Table 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Data distribution after cross-matching all the data points with cata- logs mentioned in Table 1 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='1 Data Collection and Feature Extraction In this study, we make use of data provided in "XMM-DR11 SEDs" Webb et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' We further cross-match the collected data with different vizier (Ochsenbein et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2000) catalogs.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Please refer to Table 1 to view all the catalogs used in this study.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' As the proposed algorithm is a "supervised Bayesian algorithm", this happens to be one of the important steps for our algorithm to work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The provided data has 336 different features that can increase computational complexity by a larger extent and also has a lot of missing data points.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Therefore in this study, we consider a set of 18 features corresponding to the observed source.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The considered features for all the sources are available on our Github repository, more information of which is available on the official webpage 1 of the observatory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' After cross-matching and reducing the number of features, we were left with a total of 19136 data points.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The data distribution can be seen in Table 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' We further also plot the sources (Refer Figure1) based on their "Ra" and "Dec" to confirm if the data coverage of the considered sources matches with the actual data covered by the telescope.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 1 http://xmmssc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='irap.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='omp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='eu/Catalogue/4XMM-DR11/col_unsrc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' html Figure 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Sky map coverage of considered data points The collected data is further classified into train and test according to the 80 : 20 splitting condition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The exact number of data points is mentioned in Table 2 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='2 Model Architecture The proposed model has 1 input, hidden and output layers (refer Figure 2) with 18, 512, and 2 neurons respectively.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The reason for having 18 neurons in the input layer is the number of input features considered in this study.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Further, to increase the non-linearity of the output, we make use of "Relu" (Fukushima 1975) (Agarap 2018) as an activation function for the first 2 layers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' On the other hand, the output layer makes use of "Softmax" to make the predictions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' This is done so that the output of the model will be the probability of image belonging to a particular class (Nwankpa et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2018) (Feng & Lu 2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The "optimizer" and "loss" used in this study are "Adam" (Kingma et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2020) and "Trace Elbo" (Wingate & Weber 2013) (Ranganath et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2014) respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The overall idea of BNN (Izmailov et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2021) (Jospin et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2022) (Goan & Fookes 2020) is to have a pos- terior distribution corresponding to all weights and biases such that, the output distribution produced by these posterior distributions is similar to that of the categorical distributions defined in the training dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Hence, convergence, in this case, can be achieved by min- imizing the KL divergence between the output and the categorical distribution or just by maximizing the ELBO (Wingate & Weber 2013) (Ranganath et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2014).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' We make use of normal distributions which are initialized with random mean and variance as prior (For- tuin et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2021), along with the likelihood derived from the data to construct the posterior distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='3 Training and Testing The proposed model is constructed using Pytorch (Paszke et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2019) and Pyro (Bingham et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The training of the model is conducted on Google Colaboratory, making use of NVIDIA K80 GPU (Carneiro et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' 2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The model is trained over 2500 epochs with a learning rate of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='01.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Both these parameters i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='e number of epochs and learning rate has to be tuned and are done by iterating the algorithm multiple times with varying parameter values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' The algorithm is further asked to make 100 predictions corre- sponding to every sample in the test set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Every time it makes the prediction, the corresponding prediction probability varies.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' This is due to random sampling of weights and biases from the trained dis- tributions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Further, the algorithm considers the "mean" and "standard deviation" corresponding to those probabilities to make a decision as to proceed with classification or not.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' MNRAS 000, 1–4 (2022) 4 45° 31 15* 15* 30* 45* 60 75"BNN Classifier 3 Figure 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Model Architecture AGN Stars AGN 1312 6 Stars 31 986 Table 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content=' Confusion Matrix for classified data points Class Precision Recall F1 Score AGN 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='99 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='97 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='98 Stars 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='97 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='99 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'} +page_content='98 Average 0.' 
+98 0.98 0.98
+Table 4. Classification report for classified data points
+3 RESULTS AND DISCUSSION
+The proposed algorithm is one of the initial attempts to implement "Bayesian Neural Networks" in observational astronomy and has shown significant results. The algorithm gives predictions with an accuracy of more than 98% whenever it agrees to make predictions for trained classes. Table 3 represents the confusion matrix of the classified data. To calculate accuracy, we make use of the formula
+Accuracy = (a11 + a22) / (a11 + a12 + a21 + a22) × 100
+In our case, the calculated accuracy is
+Accuracy = (1312 + 986) / (1312 + 6 + 31 + 986) × 100 = 98.4%
+As accuracy is not the only measure to evaluate a classification model, we further calculate the precision, recall, and F1 score corresponding to both classes, as shown in Table 4. Although the results obtained from the simpler "BNN" can also be obtained via complex frequentist models, the uniqueness of the algorithm is that it agrees to classify only 14% of the unknown classes as one of the trained classes, as opposed to frequentist approaches, wherein all those samples are classified as one of these classes. Table 5 shows the percentage of data from untrained classes:
+Class     AGN     Star
+CV        13.8 %  0 %
+Pulsars   2.3 %   6.3 %
+ULX       14.9 %  6.5 %
+LMX       2 %     26.5 %
+Total     9.4 %   7.8 %
+Table 5. Percentage of misidentified data points which are predicted as a Star or an AGN.
+As the algorithm gives significant results on labelled data, we make use of it to identify the possible Stars and AGNs in the raw data2. The algorithm identifies almost 7.1% of the data as Stars and 10.04% of the data as AGNs; numerically, the numbers happen to be 62807 and 88107, respectively.
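The confusion-matrix arithmetic above is easy to verify. The following minimal Python sketch (not the authors' code) reproduces the 98.4% accuracy from the stated entries a11 = 1312, a12 = 6, a21 = 31, a22 = 986, and also derives per-class precision, recall, and F1 of the kind reported in Table 4:

```python
import numpy as np

# Confusion matrix from the text: rows = true class, columns = predicted class.
cm = np.array([[1312, 6],
               [31, 986]])

# Accuracy = (a11 + a22) / (a11 + a12 + a21 + a22) * 100, as in the paper.
accuracy = 100.0 * np.trace(cm) / cm.sum()

# Per-class precision, recall, and F1 score.
precision = np.diag(cm) / cm.sum(axis=0)
recall = np.diag(cm) / cm.sum(axis=1)
f1 = 2 * precision * recall / (precision + recall)

print(round(accuracy, 1))  # 98.4
```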
+Although there is a high probability that there exist more Stars and AGNs than the given numbers, the algorithm simply refuses to give a prediction when it is not confident enough.
+4 CONCLUSIONS
+In this study, we propose a Bayesian approach to identify Stars and AGNs observed by XMM-Newton. The proposed algorithm avoids making predictions whenever it is unsure about the predictions. Implementing such algorithms will help in reducing the number of wrong predictions, which is one of the major drawbacks of algorithms making use of the frequentist approach. This is an important thing to consider, as there always exists a situation wherein the algorithm receives an input on which it was never trained. The proposed algorithm also identifies 62807 Stars and 88107 AGNs in the data release 11 by XMM-Newton.
+5 CONFLICT OF INTEREST
+The authors declare that they have no conflict of interest.
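The abstention mechanism described here — predict only when confident, otherwise refuse — can be sketched independently of the paper's Bayesian network. The function below is a hypothetical illustration: `probs` would come from, e.g., averaging Monte Carlo samples of a posterior predictive distribution, and the 0.9 threshold is an assumption, not a value from the paper.

```python
import numpy as np

def predict_with_abstention(probs, threshold=0.9):
    """Return the argmax class per sample, or None (abstain) when the
    highest class probability falls below `threshold`.

    probs: (n_samples, n_classes) array of class probabilities.
    The threshold value is illustrative only.
    """
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    return [int(l) if c >= threshold else None for l, c in zip(labels, conf)]

probs = np.array([[0.97, 0.03],   # confident -> predict class 0
                  [0.55, 0.45]])  # uncertain -> abstain
print(predict_with_abstention(probs))  # [0, None]
```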
+DATA AVAILABILITY
+The raw data used in this study is made publicly available by the XMM-Newton data archive. All the codes corresponding to the algorithm and the predicted objects, along with the predictions, will be made publicly available on "Github" and "paperswithcode" by June 2023.
+2 http://xmmssc.irap.omp.eu/Catalogue/4XMM-DR11/col_unsrc.html
+MNRAS 000, 1–4 (2022)
+S. Gharat et al.
+This paper has been typeset from a TEX/LATEX file prepared by the author.
diff --git a/99E5T4oBgHgl3EQfRQ7z/vector_store/index.pkl b/99E5T4oBgHgl3EQfRQ7z/vector_store/index.pkl
new file mode 100644
index 0000000000000000000000000000000000000000..b95c340c6da3f7a316f8fcd66ae5b9f0f8d78a84
--- /dev/null
+++ b/99E5T4oBgHgl3EQfRQ7z/vector_store/index.pkl
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:985f60497c14d9ab63b556366c0ee837e164d0849cb76182f223c956021b0d0c
+size 233276
diff --git a/9dE4T4oBgHgl3EQfdwwo/content/2301.05093v1.pdf b/9dE4T4oBgHgl3EQfdwwo/content/2301.05093v1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ba5e7b9731ac17c8a1b55a81e452dd9a12d13500
--- /dev/null
+++ b/9dE4T4oBgHgl3EQfdwwo/content/2301.05093v1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:50334824fa5e4c1fb968eb00bcd8da986e3988417ea0e0409cac4bc7536fdeea
+size 6542313
diff --git a/9dE4T4oBgHgl3EQfdwwo/vector_store/index.faiss b/9dE4T4oBgHgl3EQfdwwo/vector_store/index.faiss
new file mode 100644
index 0000000000000000000000000000000000000000..68c13cad5aede06a1a84178f9e0e3b489a75bbd0
--- /dev/null
+++ b/9dE4T4oBgHgl3EQfdwwo/vector_store/index.faiss
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97b468a86900a36f07ea08f098d48cb6182100e2f78399a40b40e17c099b7af5
+size 4915245
diff --git a/A9FIT4oBgHgl3EQf_Swz/content/tmp_files/2301.11414v1.pdf.txt b/A9FIT4oBgHgl3EQf_Swz/content/tmp_files/2301.11414v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..c6119e5a44bf63ee86dbd11502bcb7b7853e5939
--- /dev/null
+++ b/A9FIT4oBgHgl3EQf_Swz/content/tmp_files/2301.11414v1.pdf.txt
@@ -0,0 +1,1626 @@
+A Simple Algorithm For Scaling Up Kernel Methods
+Teng Andrea Xu†, Bryan Kelly‡, and
Semyon Malamud† +†Swiss Finance Institute, EPFL +andrea.xu,semyon.malamud@epfl.ch +‡Yale School of Management, Yale University +bryan.kelly@yale.edu +Abstract +The recent discovery of the equivalence between infinitely wide neural networks +(NNs) in the lazy training regime and Neural Tangent Kernels (NTKs) Jacot et al. +[2018] has revived interest in kernel methods. However, conventional wisdom suggests +kernel methods are unsuitable for large samples due to their computational complexity +and memory requirements. We introduce a novel random feature regression algorithm +that allows us (when necessary) to scale to virtually infinite numbers of random fea- +tures. We illustrate the performance of our method on the CIFAR-10 dataset. +arXiv:2301.11414v1 [cs.LG] 26 Jan 2023 + +1 +Introduction +Modern neural networks operate in the over-parametrized regime, which sometimes requires +orders of magnitude more parameters than training data points. Effectively, they are interpo- +lators (see, Belkin [2021]) and overfit the data in the training sample, with no consequences +for the out-of-sample performance. This seemingly counterintuitive phenomenon is some- +times called “benign overfit” [Bartlett et al., 2020, Tsigler and Bartlett, 2020]. +In the so-called lazy training regime Chizat et al. [2019], wide neural networks (many +nodes in each layer) are effectively kernel regressions, and “early stopping” commonly used +in neural network training is closely related to ridge regularization [Ali et al., 2019]. See, +Jacot et al. [2018], Hastie et al. [2019], Du et al. [2018, 2019a], Allen-Zhu et al. [2019]. Recent +research also emphasizes the “double descent,” in which expected forecast error drops in the +high-complexity regime. See, for example, Zhang et al. [2016], Belkin et al. [2019a,b], Spigler +et al. [2019], Belkin et al. [2020]. 
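As a toy illustration of this interpolation phenomenon (not from the paper; the linear target, ReLU features, and sizes are all illustrative choices), a minimum-norm random-feature fit drives the training error to essentially zero yet still generalizes out of sample:

```python
import numpy as np

# "Benign overfit" sketch: an over-parametrized random-feature model
# (P >> N) interpolates the training data yet still predicts well out of
# sample. All sizes and the ReLU feature map are illustrative choices.
rng = np.random.default_rng(0)
N, N_test, d, P = 100, 1000, 10, 2000
beta_true = rng.standard_normal(d)
X, X_test = rng.standard_normal((N, d)), rng.standard_normal((N_test, d))
y, y_test = X @ beta_true, X_test @ beta_true

W = rng.standard_normal((d, P)) / np.sqrt(d)
S, S_test = np.maximum(X @ W, 0), np.maximum(X_test @ W, 0)

b = np.linalg.pinv(S) @ y              # minimum-norm interpolator (ridge z -> 0)
train_mse = np.mean((S @ b - y) ** 2)  # ~0: perfect fit in sample
test_r2 = 1 - np.mean((S_test @ b - y_test) ** 2) / np.var(y_test)
```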
+These discoveries made many researchers argue that we need to gain a deeper under- +standing of kernel methods (and, hence, random feature regressions) and their link to deep +learning. See, e.g., Belkin et al. [2018]. Several recent papers have developed numerical +algorithms for scaling kernel-type methods to large datasets and large numbers of random +features. See, e.g., Zandieh et al. [2021], Ma and Belkin [2017], Arora et al. [2019a], Shankar +et al. [2020]. In particular, Arora et al. [2019b] show how NTK combined with the support +vector machines (SVM) (see also Fern´andez-Delgado et al. [2014]) perform well on small +data tasks relative to many competitors, including the highly over-parametrized ResNet-34. +In particular, while modern deep neural networks do generalize on small datasets (see, e.g., +Olson et al. [2018]), Arora et al. [2019b] show that kernel-based methods achieve superior +performance in such small data environments. Similarly, Du et al. [2019b] find that the graph +neural tangent kernel (GNTK) dominates graph neural networks on datasets with up to 5000 +2 + +samples. Shankar et al. [2020] show that, while NTK is a powerful kernel, it is possible to +build other classes of kernels (they call Neural Kernels) that are even more powerful and are +often at par with extremely complex deep neural networks. +In this paper, we develop a novel form of kernel ridge regression that can be applied to +any kernel and any way of generating random features. We use a doubly stochastic method +similar to that in Dai et al. [2014], with an important caveat: We generate (potentially large, +defined by the RAM constraints) batches of random features and then use linear algebraic +properties of covariance matrices to recursively update the eigenvalue decomposition of the +feature covariance matrix, allowing us to perform the optimization in one shot across a large +grid of ridge parameters. +The paper is organized as follows. Section 2 discusses related work. 
In Section 3, we +provide a novel random feature regression mathematical formulation and algorithm. Then, +Section 4 and Section 5 present numerical results and conclusions, respectively. +2 +Related Work +Before the formal introduction of the NTK in Jacot et al. [2018], numerous papers discussed +the intriguing connections between infinitely wide neural networks and kernel methods. See, +e.g., Neal [1996]; Williams [1997]; Le Roux and Bengio [2007]; Hazan and Jaakkola [2015]; +Lee et al. [2018]; Matthews et al. [2018]; Novak et al. [2018]; Garriga-Alonso et al. [2018]; +Cho and Saul [2009]; Daniely et al. [2016]; Daniely [2017]. +As in the standard random +feature approximation of the kernel ridge regression (see Rahimi and Recht [2007]), only the +network’s last layer is trained in the standard kernel ridge regression. A surprising discovery +of Jacot et al. [2018] is that (infinitely) wide neural networks in the lazy training regime +converge to a kernel even though all network layers are trained. The corresponding kernel, +the NTK, has a complex structure dependent on the neural network’s architecture. See also +Lee et al. [2019], Arora et al. [2019a] for more results about the link between NTK and the +3 + +underlying neural network, and Novak et al. [2019] for an efficient algorithm for implementing +the NTK. In a recent paper, Shankar et al. [2020] introduce a new class of kernels and show +that they perform remarkably well on even very large datasets, achieving a 90% accuracy on +the CIFAR-10 dataset. While this performance is striking, it comes at a huge computational +cost. Shankar et al. [2020] write: +“CIFAR-10/CIFAR-100 consist of 60, 000 32 × 32 × 3 images and MNIST consists of +70, 000 28 × 28 images. Even with this constraint, the largest compositional kernel matrices +we study took approximately 1000 GPU hours to compute. Thus, we believe an imperative +direction of future work is reducing the complexity of each kernel evaluation. 
Random feature
+methods or other compression schemes could play a significant role here.”
+In this paper, we offer one such highly scalable scheme based on random features. However,
+computing the random features underlying the Neural Kernels of Shankar et al. [2020]
+would require developing non-trivial numerical algorithms based on the recursive iteration
+of non-linear functions. We leave this as an important direction for future research.
+As in standard kernel ridge regressions, we train our random feature regression on the
+full sample. This is a key computational limitation for large datasets. After all, one of
+the reasons for the success of modern deep learning is the possibility of training models
+using stochastic gradient descent on mini-batches of data. Ma and Belkin [2017] show how
+mini-batch training can be applied to kernel ridge regression. A key technical difficulty
+arises because kernel matrices (equivalently, covariance matrices of random features) have
+eigenvalues that decay very quickly. Yet, these low eigenvalues contain essential information
+and cannot be neglected. Our regression method can be easily modified to allow for
+mini-batches. Furthermore, it is known that mini-batch linear regression can even lead to
+performance gains in the high-complexity regime. As LeJeune et al. [2020] show, one can run
+regression on mini-batches and then treat the obtained predictions as an ensemble. LeJeune
+et al. [2020] prove that, under technical conditions, the average of these predictions attains
By construction, the feature covariance matrix with a mini-batch +of size B has at most B non-zero eigenvalues. +Thus, a mini-batch effectively performs +a dimensionality reduction on the covariance matrix. Intuitively, we expect that the two +methods (using a mini-batch of size B or using the full sample but only keeping B largest +eigenvalues) should achieve comparable performance. We show that this is indeed the case +for small sample sizes. However, the spectral method for larger-sized samples (N ≥ 10000) +is superior to the mini-batch method unless we use very large mini-batches. For example, +on the full CIFAR-10 dataset, the spectral method outperforms the mini-batch approach by +3% (see Section 4 for details). +3 +Random Features Ridge Regression and Classifica- +tion +Suppose that we have a train sample (X, y) += +(xi, yi)N +i=1, xi ∈ Rd, yi ∈ R, so that +X ∈ RN×d, y ∈ RN×1. Following Rahimi and Recht [2007] we construct a large number of +random features f(x; θp), p = 1, . . . , P, where f is a non-linear function and θp are sampled +from some distribution, and P is a large number. +We denote S = f(X; θ) ∈ RN×P as +the train sample realizations of random features. Following Rahimi and Recht [2007], we +consider the random features ridge regression, +β(z) = (S⊤S/N + zI)−1S⊤y/N , +(1) +5 + +as an approximation for kernel ridge regression when P → ∞. For classification problems, +it is common to use categorical cross-entropy as the objective. However, as Belkin [2021] +explains, minimizing the mean-squared error with one-hot encoding often achieves superior +generalization performance. Here, we follow this approach. Given the K labels, k = 1, . . . , K, +we build the one-hot encoding matrix Q = (qi,k) where qi,k = 1yi=k. Then, we get +β(z) = (S⊤S/N + zI)−1S⊤Q/N ∈ RP×K . +(2) +Then, for each test feature vector s = f(x; θ) ∈ RP, we get a vector β(z)⊤s ∈ RK. Next, +define the actual classifier as +k(x; z) = arg max{β(z)⊤s} ∈ {1, · · · , K} . 
+(3) +3.1 +Dealing with High-Dimensional Features +A key computational (hardware) limitation of kernel methods comes from the fact that, +when P is large, computing the matrix S⊤S ∈ RP×P becomes prohibitively expensive, in +particular, because S cannot even be stored in RAM. We start with a simple observation +that the following identity implies that storing all these features is not necessary:1 +(S⊤S/N + zI)−1S⊤ = S⊤(SS⊤/N + zI)−1 , +(4) +and therefore we can compute β(z) as +β(z) = S⊤(SS⊤/N + zI)−1y/N . +(5) +Suppose now we split S into multiple blocks, S1, . . . , SK, where Sk ∈ RN×P1 for all +1This identity follows directly from (S⊤S/N + zI)S⊤ = S⊤(SS⊤/N + zI). +6 + +k = 1, . . . , K, for some small P1, with KP1 = P. Then, +Ψ = SS⊤ = +K +� +k=1 +SkS⊤ +k +(6) +can be computed by generating the blocks Sk, one at a time, and recursively adding SkS⊤ +k up. +Once Ψ has been computed, one can calculate its eigenvalue decomposition, Ψ = V DV ⊤, +and then evaluate Q(z) = (Ψ/N + zI)−1y/N += V (D + zI)−1V ⊤y/N ∈ RN in one go for +a grid of z. Then, using the same seeds, we can again generate the random features Sk and +compute βk(z) = S⊤ +k Q(z) ∈ RP1. Then, β(z) = (βk(z))K +k=1 ∈ RP . The logic described above +is formalized in Algorithm 1. +Algorithm 1 FABR +Require: P1, P, X ∈ RN×d, y ∈ RN, z, voc curve +blocks ← P//P1 +k ← 0 +Ψ ← 0N×N +while k < blocks do +Generate Sk ∈ RN×P1 Use k as seed +Ψ ← Ψ + SkSk⊤ +if k in voc curve then +DV ← eigen( Ψ +N ) +Qk(z) ← V (D + zI)−1V ⊤ y +N +▷ Store Qk(z) +end if +k = k + 1 +end while +DV ← eigen( Ψ +N ) +Q(z) ← V (D + zI)−1V ⊤ y +N +k ← 0 +while k < blocks do +(re-)Generate Sk ∈ RN×P1 +▷ Use k as seed +βk(z) ← S⊤ +k Q(z) +ˆy += Skβk +end while +7 + +3.2 +Dealing with Massive Datasets +The above algorithm relies crucially on the assumption that N is small. Suppose now that +the sample size N is so large that storing and eigen-decomposing the matrix SS⊤ ∈ RN×N +becomes prohibitively expensive. In this case, we proceed as follows. 
+Define for all k = 1, . . . , K +Ψk = +k +� +κ=1 +SkS⊤ +k ∈ RN×N, Ψ0 = 0N×N , +(7) +and let λ1(A) ≥ · · · ≥ λN(A) be the eigenvalues of a symmetric matrix A ∈ RN×N. Our +goal is to design an approximation to (ΨK + zI)−1, based on a simple observation that the +eigenvalues of the empirically observed Ψk matrices tend to decay very quickly, with only +a few hundreds of largest eigenvalues being significantly different from zero. In this case, +we can fix a ν ∈ N and design a simple, rank−ν approximation to ΨK by annihilating all +eigenvalues below λν(ΨK). As we now show, it is possible to design a recursive algorithm for +constructing such an approximation to ΨK, dealing with small subsets of random features +simultaneously. To this end, we proceed as follows. +Suppose we have constructed an approximation ˆΨk ∈ RN×N to Ψk with rank ν, and +let Vk ∈ RN×ν be the corresponding matrix of orthogonal eigenvectors for the non-zero +eigenvalues, and Dk ∈ Rν×ν the diagonal matrix of eigenvalues so that ˆΨk = VkDkV ⊤ +k +and +V ⊤ +k Vk = Iν×ν. Instead of storing the full ˆΨk matrix, we only need to store the pair (Vk, Dk). +For all k = 1, . . . , K, we now define +˜Ψk+1 = ˆΨk + Sk+1S⊤ +k+1 . +(8) +This N × N matrix is a theoretical construct. We never actually compute it (see Algorithm +8 + +2). Let Θk = I − VkV ⊤ +k be the orthogonal projection on the kernel of ˆΨk, and +˜Sk+1 = ΘkSk+1 = Sk+1 − Vk +���� +N×ν +(V ⊤ +k Sk+1 +� �� � +ν×P1 +) +(9) +be Sk+1 orthogonalized with respect to the columns of Vk. Then, we define ˜Wk+1 = ˜Sk+1( ˜Sk+1 ˜S⊤ +k+1)−1/2 +to be the orthogonalized columns of ˜Sk+1, and ˆVk+1 = [Vk, ˜Wk+1]. To compute ˜Sk+1( ˜Sk+1 ˜S⊤ +k+1)−1/2, +we use the following lemma that, once again, uses smart eigenvalue decomposition techniques +to avoid dealing with the N × N matrix ˜Sk+1 ˜S⊤ +k+1. +Lemma 1. Let ˜S⊤ +k+1 ˜Sk+1 +� +�� +� +ν×ν += Wδ ˜W ⊤ be the eigenvalue decomposition of ˜S⊤ +k+1 ˜Sk+1. 
Then, +˜W = ˜Sk+1Wδ−1/2 is the matrix of eigenvectors of ˜Sk+1 ˜S⊤ +k+1 for the non-zero eigenvalues. +Thus, +˜Sk+1( ˜Sk+1 ˜S⊤ +k+1)−1/2 = +˜Wk+1 . +(10) +By construction, the columns of ˆVk+1 form an orthogonal basis of the span of the columns +of Vk, Sk+1, and hence +Ψk+1,∗ = ˆV ⊤ +k+1 ˜Ψk+1 ˆVk+1 ∈ R(P1+ν)×(P1+ν) +(11) +has the same non-zero eigenvalues as ˜Ψk+1. We then define ˜Vk+1 ∈ R(P1+ν)×ν to be the +matrix with eigenvectors of Ψk+1,∗ for the largest ν eigenvalues, and we denote the diagonal +matrix of these eigenvalues by Dk+1 ∈ Rν×ν, and then we define Vk+1 = +ˆVk+1 ˜Vk+1 . Then, +ˆΨk+1 += +Vk+1Dk+1Vk+1 += +Πk+1 ˜Ψk+1Πk+1 , where Πk+1 += +ˆVk+1 ˜Vk+1 ˜V ⊤ +k+1 ˆV ⊤ +k+1 is the +orthogonal projection onto the eigen-subspace of ˜Ψk+1 for the largest ν eigenvalues. +Lemma 2. We have ˆΨk ≤ ˜Ψk ≤ ΨK and +∥Ψk − ˆΨk∥ ≤ +k +� +i=1 +λν+1(Ψi) ≤ k λν+1(ΨK) , +(12) +9 + +and +∥(Ψk+1 + zI)−1 − (ˆΨk+1 + zI)−1∥ ≤ z−2 +k +� +i=1 +λν+1(Ψi) . +(13) +There is another important aspect of our algorithm: It allows us to directly compute +the performance of models with an expanding level of complexity. Indeed, since we load +random features in batches of size P1, we generate predictions for P ∈ [P1, 2P1, · · · , KP1]. +This is useful because we might use it to calibrate the optimal degree of complexity and +because we can directly study the double descent-like phenomena, see, e.g., Belkin et al. +[2019a] and Nakkiran et al. [2021]. That is the effect of complexity on the generalization +error. In the next section, we do this. As we show, consistent with recent theoretical results +Kelly et al. [2022], with sufficient shrinkage, the double descent curve disappears, and the +performance becomes almost monotonic in complexity. Following Kelly et al. [2022], we +name this phenomenon the virtue of complexity (VoC) and the corresponding performance +plots the VoC curves. See, Figure 6 below. 
+We call this algorithm Fast Annihilating Batch Regression (FABR) as it annihilates all +eigenvalues below λν(ΨK) and allows to solve the random features ridge regression in one go +for a grid of z. Algorithm 2 formalizes the logic described above. +4 +Numerical Results +This section presents several experimental results on different datasets to evaluate FABR’s +performance and applications. In contrast to the most recent computational power demand +in kernel methods, e.g., Shankar et al. [2020], we ran all experiments on a laptop, a MacBook +Pro model A2485, equipped with an M1 Max with a 10-core CPU and 32 GB RAM. +10 + +Algorithm 2 FABR-ν +Require: ν, P1, P, X ∈ RN×d, y ∈ RN, z, voc curve +blocks ← P//P1 +k ← 0 +while k < blocks do +Generate Sk ∈ RN×P1 +▷ Use k as seed to generate the random features +if k = 0 then +˜d, ˜V ← eigen(S⊤ +k Sk) +V ← Sk ˜V diag( ˜d)− 1 +2 +V0 ← V:,min(ν,P1) +▷ Save V0 +d0 ← ˜d:min(ν,P1) +▷ Save d0 +if k in voc curve then +Q0(z) ← V0(diag(d0) + zI)−1V ⊤ +0 y +▷ Save Q0(z) +end if +else if k > 0 then +˜Sk ← (I − Vk−1V ⊤ +k−1)Sk +Γk ← ˜ +S⊤ +k ˜Sk +δk, Wk ← eigen(Γk) +Keep top min(ν, P1) eigenvalues and eigenvectors from δk, Wk +˜ +Wk ← ˜SkWkdiag(δk)− 1 +2 +ˆVk ← [Vk−1, ˜ +Wk] +¯Vk ← ˆ +V ⊤ +k Vk−1 +¯ +Wk ← ¯Vkdiag(dk−1) ¯ +V ⊤ +k +¯Sk ← ˆ +V ⊤ +k Sk +¯Zk ← ¯SkS⊤ +k +Ψ∗ ← ¯ +Wk ¯Zk +dk, Vk ← eigen(Ψ∗) +Keep top min(ν, P1) eigenvalues and eigenvectors from dk, Vk +Vk ← ˆVkVk +▷ Save dk, Vk +if k in voc curve then +Qk(z) ← Vk(diag(dk) + zI)−1V ⊤ +k y +▷ Save Qk(z) +end if +end if +k = k + 1 +end while +k ← 0 +while k < blocks do +(re-)Generate Sk ∈ RN×P1 +▷ Use k as seed to generate the random features +βk(z) ← S⊤ +k Qk(z) +ˆy += Skβk +end while +11 + +4.1 +A comparison with sklearn +We now aim to show FABR’s training and prediction time with respect to the number of +features d. To this end, we do not use any random feature projection or the rank-ν matrix +approximation described in Section 3.1. We draw N = 5000 i.i.d. 
samples from ⊗d +j=1N(0, 1) +and let +yi = xiβ + ϵi +∀i = 1, . . . , N, +where β ∼ ⊗d +j=1N(0, 1), and ϵi ∼ N(0, 1) for all i = 1, . . . , N. Then, we define +yi = +� +� +� +� +� +� +� +1 +if yi > median(y), +0 +otherwise +∀i = 1, . . . , N. +Next, we create a set of datasets for classification with varying complexity d and keep the +first 4000 samples as the training set and the remaining 1000 as the test set. We show in +Figure 1 the average training and prediction time (in seconds) of FABR with a different +number of regularizers ( we denote this number by |z|) and sklearn RidgeClassifier with +an increasing number of features d. The training and prediction time is averaged over five +independent runs. As one can see, our method is drastically faster when d > 10000. E.g., for +d = 100000 we outperform sklearn by approximately 5 and 25 times for |z| = 5 and |z| = 50, +respectively. Moreover, one can notice that the number of different shrinkages |z| does not +affect FABR. We report a more detailed table with average training and prediction time and +standard deviation in Appendix B. +4.2 +Experiments on Real Datasets +We assess FABR’s performance on both small and big datasets regimes for further evaluation. +For all experiments, we perform a random features kernel ridge regression for demeaned one- +12 + +0 +20000 +40000 +60000 +80000 100000 +d +0 +200 +400 +600 +800 +Training and Prediction Time (s) +FABR - |z| = 5 +FABR - |z| = 10 +FABR - |z| = 20 +FABR - |z| = 50 +sklearn - |z| = 5 +sklearn - |z| = 10 +sklearn - |z| = 20 +sklearn - |z| = 50 +Figure 1: The figure above compares FABR training and prediction time, shown on the y- +axis, in black, against sklearn’s RidgeClassifier, in red, for an increasing amount of features, +shown on the x-axis, and the number of shrinkages z. +Here, |z| denotes the number of +different values of z for which we perform the training. +hot labels and solve the optimization problem using FABR as described in Section 3. 
+4.2.1 +Data Representation +Table 1: The table below shows the average test accuracy and standard deviation of +ResNet-34, CNTK, and FABR on the subsampled CIFAR-10 datasets. The test accuracy is +average over twenty independent runs. +n +ResNet-34 +14-layer CNTK +z=1 +z=100 +z=10000 +z=100000 +10 +14.59% ± 1.99% +15.33% ± 2.43% +18.50% ± 2.18% +18.50% ± 2.18% +18.42% ± 2.13% +18.13% ± 2.01% +20 +17.50% ± 2.47% +18.79% ± 2.13% +20.84% ± 2.38% +20.85% ± 2.38% +20.78% ± 2.35% +20.13% ± 2.34% +40 +19.52% ± 1.39% +21.34% ± 1.91% +25.09% ± 1.76% +25.10% ± 1.76% +25.14% ± 1.75% +24.41% ± 1.88% +80 +23.32% ± 1.61% +25.48% ± 1.91% +29.61% ± 1.35% +29.60% ± 1.35% +29.62% ± 1.39% +28.63% ± 1.66% +160 +28.30% ± 1.38% +30.48% ± 1.17% +34.86% ± 1.12% +34.87% ± 1.12% +35.02% ± 1.11% +33.54% ± 1.24% +320 +33.15% ± 1.20% +36.57% ± 0.88% +40.46% ± 0.73% +40.47% ± 0.73% +40.66% ± 0.72% +39.34% ± 0.72% +640 +41.66% ± 1.09% +42.63% ± 0.68% +45.68% ± 0.71% +45.68% ± 0.72% +46.17% ± 0.68% +44.91% ± 0.72% +1280 +49.14% ± 1.31% +48.86% ± 0.68% +50.30% ± 0.57% +50.32% ± 0.56% +51.05% ± 0.54% +49.74% ± 0.42% +FABR requires, like any standard kernel methods or randomized-feature techniques, a +good data representation. Usually, we don’t know such a representation a-priori, and learning +a good kernel is outside the scope of this paper. Therefore, we build a simple Convolutional +Neural Network (CNN) mapping h : Rd → RD; that extracts image features ˜x ∈ RD for +some sample x ∈ Rd. The CNN is not optimized; we use it as a simple random feature +mapping. The CNN architecture, shown in Fig. 2, alternates a 3 × 3 convolution layer with +13 + +GlobalAveragePool +3x3 Convolution +ReLU +2x2 Average Pool +BatchNormalization +3x3 Convolution +ReLU +2x2 Average Pool +BatchNormalization +3x3 Convolution +ReLU +2x2 Average Pool +BatchNormalization +3x3 Convolution +ReLU +2x2 Average Pool +BatchNormalization +Figure 2: CNN architecture used to extract image features. 
+a ReLU activation function, a 2 × 2 Average Pool, and a BatchNormalization layer Ioffe +and Szegedy [2015]. Convolutional layers weights are initialized using He Uniform He et al. +[2015]. To vectorize images, we use a global average pooling layer that has proven to enforce +correspondences between feature maps and to be more robust to spatial translations of the +input Lin et al. [2013]. We finally obtain the train and test random features realizations +s = f(˜x, θ). Specifically, we use the following random features mapping +si = σ(W ˜x), +(14) +where W ∈ RP×D with wi,j ∼ N(0, 1) and σ is some elementwise activation function. This +14 + +can be described as a one-layer neural network with random weights W. +To show the +importance of over-parametrized models, throughout the results, we report the complexity, +c, of the model as c = P/N, that is, the ratio between the parameters (dimensions) and the +number of observations. See Belkin et al. [2019a], Hastie et al. [2019], Kelly et al. [2022]. +4.2.2 +Small Datasets +We now study the performance of FABR on the subsampled CIFAR-10 dataset Krizhevsky +et al. [2009]. +To this end, we reproduce the same experiment described in Arora et al. +[2019b]. In particular, we obtain random subsampled training set (y; X) = (yi; xi)n +i=1 where +n ∈ {10, 20, 40, 80, 160, 320, 640, 1280} and test on the whole test set of size 10000. We make +sure that exactly n/10 sample from each image class is in the training sample. We train +FABR using random features projection of the subsampled training set +S = σ(Wg(X)) ∈ Rn×P, +where g is an untrained CNN from Figure 2, randomly initialized using He Uniform distri- +bution. In this experiment, we push the model complexity c to 100; in other words, FABR’s +number of parameters equals a hundred times the number of observations in the subsample. +As n is small, we deliberately do not perform any low-rank covariance matrix approximation. 
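The feature construction of Eq. (14), s = σ(W x̃) with i.i.d. standard normal W, is just a one-layer random network; a minimal sketch (the tanh activation and all sizes below are illustrative choices, since the paper leaves σ generic):

```python
import numpy as np

def random_feature_map(x_tilde, P, seed=0, activation=np.tanh):
    """Random features s = sigma(W x~) as in Eq. (14): W is P x D with
    i.i.d. N(0, 1) entries. sigma is a free elementwise nonlinearity;
    tanh is an illustrative choice, as are the sizes below."""
    D = x_tilde.shape[-1]
    W = np.random.default_rng(seed).standard_normal((P, D))
    return activation(x_tilde @ W.T)

# Complexity c = P / N: number of random features per observation.
N, D, c = 40, 8, 100
P = c * N  # c = 100 mirrors the over-parametrization level used here
X_tilde = np.random.default_rng(1).standard_normal((N, D))
S = random_feature_map(X_tilde, P)
```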
+Finally, we run our model twenty times and report the mean out-of-sample performance and
+the standard deviation. We report in Table 1 FABR's performance for different shrinkages
+(z) together with ResNet-34 and the 14-layer CNTK. Without any complicated random
+feature projection, FABR can outperform both ResNet-34 and CNTK. FABR's test accuracy
+increases with the model's complexity c on different (n) subsampled CIFAR-10 datasets. We
+show Figure 3 as an example for n = 10. Additionally, to better observe the double descent
+phenomenon, we show curves truncated at c = 25 for all CIFAR-10 subsamples in Figure 4.
+The full curves are shown in Appendix B. To sum up this section's findings:
+Figure 3: The figure above shows that FABR's test accuracy increases with the model's
+complexity c on the subsampled CIFAR-10 dataset for n = 10. The test accuracy is averaged
+over five independent runs.
+• FABR, with enough complexity and a simple random feature projection, is able to
+outperform deep neural networks (ResNet-34) and CNTKs.
+• FABR always reaches the maximum accuracy beyond the interpolation threshold.
+• Moreover, if the random feature ridge regression shrinkage z is sufficiently high, the
+double descent phenomenon disappears, and the accuracy does not drop at the inter-
+polation threshold point, i.e., when c = 1 (P = n). Following Kelly et al. [2022], we
+call this phenomenon the virtue of complexity (VoC).
+4.2.3
+Big Datasets
+In this section, we repeat the same experiments described in Section 4.2.2, but we extend
+the training set size n up to the full CIFAR-10 dataset. For each n, we train FABR, FABR-ν
+with a rank-ν approximation as described in Algorithm 2, and the mini-batch FABR. We
+use ν = 2000 and batch size = 2000 in the last two algorithms. Following Arora et al.
+[2019b], we train ResNet-34 as the benchmark for 160 epochs, with an initial learning rate
+of 0.001 and a batch size of 32.
We decrease the learning rate by ten at epochs 80 and 120.
+ResNet-34 always reaches close to perfect accuracy on the training set, i.e., above 99%. We
+run each training five times and report the mean out-of-sample performance and its standard
+deviation. As the training sample is sufficiently large already, we set the model complexity
+to only c = 15, meaning that for the full sample, FABR performs a random feature ridge
+regression with P = 7.5 × 10^5. We report the results in Tables 2 and 3.
+[Figure 3 plot: test accuracy vs. complexity c (up to c = 100) for n = 10, one curve per
+shrinkage z = 10^-5, 10^-1, 10^0, ..., 10^5.]
+(a) n = 10 (b) n = 20 (c) n = 40 (d) n = 80 (e) n = 160 (f) n = 320 (g) n = 640 (h) n = 1280
+Figure 4: The figures above show that FABR's test accuracy increases with the model's
+complexity c on different (n) subsampled CIFAR-10 datasets. The expanded dataset follows
+similar patterns. We truncate the curves for c > 25 to better show the double descent
+phenomenon. The full curves are shown in Appendix B. Notice that when the shrinkage is
+sufficiently high, the double descent disappears, and the accuracy monotonically increases in
+complexity. Following Kelly et al. [2022], we name this phenomenon the virtue of complexity
+(VoC). The test accuracy is averaged over 20 independent runs.
+Table 2: The table below shows the average test accuracy and standard deviation of
+ResNet-34 and FABR on the subsampled and full CIFAR-10 dataset. The test accuracy is
+averaged over five independent runs.
+n | ResNet-34 | z=1 | z=100 | z=10000 | z=100000
+2560 | 48.12% ± 0.69% | 52.24% ± 0.29% | 52.45% ± 0.21% | 54.29% ± 0.44% | 48.28% ± 0.37%
+5120 | 56.03% ± 0.82% | 55.34% ± 0.32% | 55.74% ± 0.34% | 58.29% ± 0.20% | 52.06% ± 0.08%
+10240 | 63.21% ± 0.26% | 58.36% ± 0.45% | 58.86% ± 0.54% | 62.17% ± 0.35% | 55.75% ± 0.18%
+20480 | 69.24% ± 0.47% | 61.08% ± 0.17% | 61.65% ± 0.27% | 65.12% ± 0.19% | 59.34% ± 0.14%
+50000 | 75.34% ± 0.21% | 66.38% ± 0.00% | 66.98% ± 0.00% | 68.62% ± 0.00% | 63.25% ± 0.00%
+(a) n = 2560 (b) n = 50000
+Figure 5: The figures above show that FABR's test accuracy increases with the model's
+complexity c on the subsampled CIFAR-10 dataset 5a and the full CIFAR-10 dataset 5b.
+FABR is trained using a ν = 2000 low-rank covariance matrix approximation. Notice that
+we still observe a (shifted) double descent when ν ≈ n. The same phenomenon disappears
+when ν ≪ n. The test accuracy is averaged over five independent runs.
+Table 3: The table below shows the average test accuracy and standard deviation of FABR-ν
+and mini-batch FABR on the subsampled and full CIFAR-10 dataset. The test accuracy is
+averaged over five independent runs.
+n | z=1: batch=2000, ν=2000 | z=100: batch=2000, ν=2000 | z=10000: batch=2000, ν=2000 | z=100000: batch=2000, ν=2000
+2560 | 53.13% ± 0.38%, 53.48% ± 0.22% | 53.15% ± 0.42%, 53.63% ± 0.24% | 52.01% ± 0.51%, 54.05% ± 0.44% | 46.78% ± 0.52%, 48.23% ± 0.34%
+5120 | 57.68% ± 0.18%, 57.63% ± 0.19% | 57.70% ± 0.16%, 57.63% ± 0.18% | 56.83% ± 0.27%, 57.53% ± 0.11% | 51.42% ± 0.22%, 51.75% ± 0.14%
+10240 | 59.79% ± 0.35%, 61.20% ± 0.39% | 59.79% ± 0.35%, 61.20% ± 0.38% | 58.63% ± 0.28%, 60.63% ± 0.21% | 53.73% ± 0.37%, 55.16% ± 0.34%
+20480 | 61.56% ± 0.35%, 63.50% ± 0.12% | 61.55% ± 0.37%, 63.50% ± 0.13% | 60.90% ± 0.20%, 62.92% ± 0.12% | 57.10% ± 0.19%, 58.40% ± 0.21%
+50000 | 62.74% ± 0.10%, 65.45% ± 0.18% | 62.74% ± 0.10%, 65.44% ± 0.18% | 62.35% ± 0.05%, 65.04% ± 0.19% | 59.99% ± 0.02%, 61.71% ± 0.09%
+The experiment delivers a number of additional conclusions:
+• First, we observe that, while for small train sample sizes of n ≤ 10000, simple kernel
+methods achieve performance comparable with that of DNNs, this is not the case for
+n > 20000. Beating DNNs on big datasets with shallow methods requires more complex
+kernels, such as those in Shankar et al. [2020], Li et al. [2019].
+• Second, we confirm the findings of Ma and Belkin [2017], Lee et al. [2020] suggesting
+that the role of small eigenvalues is important. For example, FABR-ν with ν = 2000
+loses several percent of accuracy on larger datasets.
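A toy calculation (synthetic spectrum and parameter choices, not from the paper) makes the trade-off behind Lemma 2 concrete: when the eigenvalues of Ψ decay fast, the rank-ν resolvent is nearly exact, while slowly decaying tails, the small eigenvalues discussed above, are exactly what the truncation discards:

```python
import numpy as np

rng = np.random.default_rng(0)
N, nu, z = 200, 20, 1e-3
# Synthetic spectrum (illustrative): eigenvalues of Psi decay geometrically,
# mimicking the fast decay observed for feature covariance matrices.
U, _ = np.linalg.qr(rng.standard_normal((N, N)))
lam = 2.0 ** -np.arange(N)
Psi = (U * lam) @ U.T  # Psi = U diag(lam) U^T
y = rng.standard_normal(N)

exact = np.linalg.solve(Psi + z * np.eye(N), y)
# Rank-nu approximation: annihilate every eigenvalue below lam[nu - 1];
# (Psi_hat + zI)^{-1} acts as 1/(lam_i + z) on the kept eigenspace and
# as 1/z on its orthogonal complement.
proj = U[:, :nu].T @ y
approx = (U[:, :nu] * (1.0 / (lam[:nu] + z))) @ proj + (y - U[:, :nu] @ proj) / z
rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
```

With this fast-decaying spectrum `rel_err` is tiny; replacing the geometric decay with a slower one (e.g. `lam = 1.0 / (1 + np.arange(N))`) inflates the error, consistent with the accuracy loss of FABR-ν reported above.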
+• Third, surprisingly, both the mini-batch FABR and FABR-ν sometimes achieve higher
+accuracy than the full sample regression on moderately-sized datasets. See Tables 2
+and 3. Understanding these phenomena is an interesting direction for future research.
+• Fourth, the double descent phenomenon naturally appears for both FABR-ν and the
+mini-batch FABR, but only when ν ≈ n or batch size ≈ n. However, the double descent
+phenomenon disappears when ν ≪ n. This intriguing finding is shown in Figure 5 for
+FABR-ν, and in Appendix B for the mini-batch FABR.
+• Fifth, on average, FABR-ν outperforms mini-batch FABR on larger datasets.
+5
+Conclusion and Discussion
+The recent discovery of the equivalence between infinitely wide neural networks (NNs) in
+the lazy training regime and neural tangent kernels (NTKs) Jacot et al. [2018] has revived
+interest in kernel methods. However, these kernels are extremely complex and usually
+require running on big and expensive computing clusters Avron et al. [2017], Shankar et al.
+[2020] due to memory (RAM) requirements. This paper proposes a highly scalable random
+features ridge regression that can run on a simple laptop. We name it Fast Annihilating
+Batch Regression (FABR). Thanks to the linear algebraic properties of covariance matrices,
+this tool can be applied to any kernel and any way of generating random features. Moreover,
+we provide several experimental results to assess its performance. We show how FABR
+can outperform (in training and prediction speed) the current state-of-the-art ridge
+classifier's implementation.
Then, we show how a simple data representation strategy combined with a random features ridge regression can outperform complicated kernels (CNTKs) and over-parametrized Deep Neural Networks (ResNet-34) in the few-shot learning setting. The experiments section concludes by showing additional results on big datasets. In this paper, we focus on very simple classes of random features. Recent findings (see, e.g., Shankar et al. [2020]) suggest that highly complex kernel architectures are necessary to achieve competitive performance on large datasets. Since each kernel regression can be approximated with random features, our method is potentially applicable to these kernels as well. However, directly computing the random feature representation of such complex kernels is non-trivial and we leave it for future research.

References

Alnur Ali, J Zico Kolter, and Ryan J Tibshirani. A continuous-time view of early stopping for least squares regression. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1370–1378. PMLR, 2019.

Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning, pages 242–252. PMLR, 2019.

Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Russ R Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. Advances in Neural Information Processing Systems, 32, 2019a.

Sanjeev Arora, Simon S Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, and Dingli Yu. Harnessing the power of infinitely wide deep nets on small-data tasks. arXiv preprint arXiv:1910.01663, 2019b.

Haim Avron, Kenneth L Clarkson, and David P Woodruff. Faster kernel ridge regression using sketching and preconditioning. SIAM Journal on Matrix Analysis and Applications, 38(4):1116–1138, 2017.

Peter L Bartlett, Philip M Long, Gábor Lugosi, and Alexander Tsigler.
Benign overfitting in linear regression. Proceedings of the National Academy of Sciences, 117(48):30063–30070, 2020.

Mikhail Belkin. Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation. Acta Numerica, 30:203–248, 2021.

Mikhail Belkin, Siyuan Ma, and Soumik Mandal. To understand deep learning we need to understand kernel learning. In International Conference on Machine Learning, pages 541–549. PMLR, 2018.

Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias–variance trade-off. Proceedings of the National Academy of Sciences, 116(32):15849–15854, 2019a.

Mikhail Belkin, Alexander Rakhlin, and Alexandre B Tsybakov. Does data interpolation contradict statistical optimality? In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1611–1619. PMLR, 2019b.

Mikhail Belkin, Daniel Hsu, and Ji Xu. Two models of double descent for weak features. SIAM Journal on Mathematics of Data Science, 2(4):1167–1180, 2020.

Lenaic Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable programming. Advances in Neural Information Processing Systems, 32, 2019.

Youngmin Cho and Lawrence Saul. Kernel methods for deep learning. Advances in Neural Information Processing Systems, 22, 2009.

Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina F Balcan, and Le Song. Scalable kernel methods via doubly stochastic gradients. Advances in Neural Information Processing Systems, 27, 2014.

Amit Daniely. SGD learns the conjugate kernel class of the network. Advances in Neural Information Processing Systems, 30, 2017.

Amit Daniely, Roy Frostig, and Yoram Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. Advances in Neural Information Processing Systems, 29, 2016.
Simon Du, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning, pages 1675–1685. PMLR, 2019a.

Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054, 2018.

Simon S Du, Kangcheng Hou, Russ R Salakhutdinov, Barnabas Poczos, Ruosong Wang, and Keyulu Xu. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. Advances in Neural Information Processing Systems, 32, 2019b.

Manuel Fernández-Delgado, Eva Cernadas, Senén Barro, and Dinani Amorim. Do we need hundreds of classifiers to solve real world classification problems? The Journal of Machine Learning Research, 15(1):3133–3181, 2014.

Adrià Garriga-Alonso, Carl Edward Rasmussen, and Laurence Aitchison. Deep convolutional networks as shallow Gaussian processes. In International Conference on Learning Representations, 2018.

Trevor Hastie, Andrea Montanari, Saharon Rosset, and Ryan J Tibshirani. Surprises in high-dimensional ridgeless least squares interpolation. arXiv preprint arXiv:1903.08560, 2019.

Tamir Hazan and Tommi Jaakkola. Steps toward deep kernel methods from infinite neural networks. arXiv preprint arXiv:1508.05133, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034, 2015.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456. PMLR, 2015.

Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks.
Advances in Neural Information Processing Systems, 31, 2018.

Bryan T Kelly, Semyon Malamud, and Kangying Zhou. The virtue of complexity in return prediction. 2022.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Nicolas Le Roux and Yoshua Bengio. Continuous neural networks. In Artificial Intelligence and Statistics, pages 404–411. PMLR, 2007.

Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep neural networks as Gaussian processes. In International Conference on Learning Representations, 2018.

Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. Advances in Neural Information Processing Systems, 32, 2019.

Jaehoon Lee, Samuel Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, and Jascha Sohl-Dickstein. Finite versus infinite neural networks: an empirical study. Advances in Neural Information Processing Systems, 33:15156–15172, 2020.

Daniel LeJeune, Hamid Javadi, and Richard Baraniuk. The implicit regularization of ordinary least squares ensembles. In International Conference on Artificial Intelligence and Statistics, pages 3525–3535. PMLR, 2020.

Zhiyuan Li, Ruosong Wang, Dingli Yu, Simon S Du, Wei Hu, Ruslan Salakhutdinov, and Sanjeev Arora. Enhanced convolutional neural tangent kernels. arXiv preprint arXiv:1911.00809, 2019.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.

Siyuan Ma and Mikhail Belkin. Diving into the shallows: a computational perspective on large-scale shallow learning. Advances in Neural Information Processing Systems, 30, 2017.

Alexander G de G Matthews, Mark Rowland, Jiri Hron, Richard E Turner, and Zoubin Ghahramani. Gaussian process behaviour in wide deep neural networks.
arXiv preprint arXiv:1804.11271, 2018.

Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. Journal of Statistical Mechanics: Theory and Experiment, 2021(12):124003, 2021.

Radford M Neal. Priors for infinite networks. In Bayesian Learning for Neural Networks, pages 29–53. Springer, 1996.

Roman Novak, Lechao Xiao, Yasaman Bahri, Jaehoon Lee, Greg Yang, Jiri Hron, Daniel A Abolafia, Jeffrey Pennington, and Jascha Sohl-Dickstein. Bayesian deep convolutional networks with many channels are Gaussian processes. In International Conference on Learning Representations, 2018.

Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A Alemi, Jascha Sohl-Dickstein, and Samuel S Schoenholz. Neural tangents: Fast and easy infinite neural networks in Python. arXiv preprint arXiv:1912.02803, 2019.

Matthew Olson, Abraham Wyner, and Richard Berk. Modern neural networks generalize on small data sets. Advances in Neural Information Processing Systems, 31, 2018.

Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. Advances in Neural Information Processing Systems, 20, 2007.

Vaishaal Shankar, Alex Fang, Wenshuo Guo, Sara Fridovich-Keil, Jonathan Ragan-Kelley, Ludwig Schmidt, and Benjamin Recht. Neural kernels without tangents. In International Conference on Machine Learning, pages 8614–8623. PMLR, 2020.

Stefano Spigler, Mario Geiger, Stéphane d'Ascoli, Levent Sagun, Giulio Biroli, and Matthieu Wyart. A jamming transition from under- to over-parametrization affects generalization in deep learning. Journal of Physics A: Mathematical and Theoretical, 52(47):474001, 2019.

A. Tsigler and P. L. Bartlett. Benign overfitting in ridge regression, 2020.

Christopher KI Williams. Computing with infinite networks. In Advances in Neural Information Processing Systems 9: Proceedings of the 1996 Conference, volume 9, page 295.
MIT Press, 1997.

Amir Zandieh, Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, and Jinwoo Shin. Scaling neural tangent kernels via sketching and random features. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 1062–1073. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/08ae6a26b7cb089ea588e94aed36bd15-Paper.pdf.

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.

A Proofs

Proof of Lemma 2. We have
\begin{align}
\Psi_{k+1} &= \Psi_k + S_{k+1} S_{k+1}' \nonumber \\
\tilde\Psi_{k+1} &= \hat\Psi_k + S_{k+1} S_{k+1}' \nonumber \\
\hat\Psi_{k+1} &= P_{k+1} \tilde\Psi_{k+1} P_{k+1}\,. \tag{15}
\end{align}
By the definition of the spectral projection, we have
\begin{equation}
\|\tilde\Psi_{k+1} - \hat\Psi_{k+1}\| \le \lambda_{\nu+1}(\tilde\Psi_{k+1}) \le \lambda_{\nu+1}(\Psi_{k+1})\,, \tag{16}
\end{equation}
and hence
\begin{align}
\|\Psi_{k+1} - \hat\Psi_{k+1}\|
&\le \|\Psi_{k+1} - \tilde\Psi_{k+1}\| + \|\tilde\Psi_{k+1} - \hat\Psi_{k+1}\| \nonumber \\
&= \|\Psi_k - \hat\Psi_k\| + \|\tilde\Psi_{k+1} - \hat\Psi_{k+1}\| \nonumber \\
&\le \|\Psi_k - \hat\Psi_k\| + \lambda_{\nu+1}(\Psi_{k+1})\,, \tag{17}
\end{align}
and the claim follows by induction. The last claim follows from the simple inequality
\begin{equation}
\|(\Psi_{k+1} + zI)^{-1} - (\hat\Psi_{k+1} + zI)^{-1}\| \le z^{-2} \|\Psi_{k+1} - \hat\Psi_{k+1}\|\,. \tag{18}
\end{equation}

B Additional Experimental Results

This section provides additional experiments and findings that may help the community with future research.

First, we dive into more details about our comparison with sklearn. Table 4 shows a more detailed training and prediction time comparison between FABR and sklearn. In particular, we average training and prediction time over five independent runs. The experiment settings are explained in Section 4.1. We show at which point, depending on the number of shrinkages |z|, one should start considering FABR: roughly when the number of observations in the dataset reaches n ≈ 5000. In this case, we have used the numpy linear algebra library to decompose FABR's covariance matrix, which appears to be faster than the scipy counterpart.
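The recursion analyzed in Lemma 2 above, combined with solving the ridge problem for a whole grid of shrinkages from a single eigendecomposition, can be sketched in a few lines of numpy. This is only an illustrative reconstruction, not the released implementation: the function name, the ReLU random features, and all hyperparameter defaults are our assumptions.

```python
import numpy as np

def fabr_nu_sketch(X, y, n_batches=4, p_batch=200, nu=50, zs=(0.1, 1.0, 10.0), seed=0):
    """Illustrative FABR-nu recursion (Lemma 2): accumulate the random-feature
    Gram matrix batch by batch, keeping only a rank-nu spectral approximation."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U, lam = np.zeros((n, 0)), np.zeros(0)      # hat_Psi_k stored as U diag(lam) U'
    for _ in range(n_batches):
        W = rng.standard_normal((d, p_batch)) / np.sqrt(d)
        S = np.maximum(X @ W, 0.0)              # one batch of ReLU random features
        tilde = (U * lam) @ U.T + S @ S.T       # tilde_Psi_{k+1} = hat_Psi_k + S S'
        lam_all, U_all = np.linalg.eigh(tilde)  # eigenvalues in ascending order
        U, lam = U_all[:, -nu:], lam_all[-nu:]  # spectral projection P_{k+1}
    # One shot over the ridge grid, using the rank-nu identity
    # (hat_Psi + z I)^{-1} = U diag(1/(lam + z)) U' + (I - U U') / z.
    Uty = U.T @ y
    sols = {z: U @ (Uty / (lam + z)) + (y - U @ Uty) / z for z in zs}
    return U, lam, sols
```

Each batch here pays for a full n × n eigendecomposition; the point of the sketch is the rank-ν projection of Lemma 2 and the fact that every shrinkage z reuses the same eigenpairs.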
We share our code in the following repository: https://github.com/tengandreaxu/fabr.

Second, while Figure 4 shows truncated curves of FABR's test accuracy as a function of complexity c, here we present the whole picture: Figure 6 shows how FABR's test accuracy increases with the model's complexity c on differently sized (n) subsampled CIFAR-10 datasets, averaged over twenty independent runs. Similar to Figure 4, one can notice that when the shrinkage is sufficiently high, the double descent disappears, and the accuracy monotonically increases in complexity.

Third, the double descent phenomenon naturally appears for both FABR-ν and the mini-batch FABR, but only when ν ≈ n or batch size ≈ n. However, the double descent phenomenon disappears when ν ≪ n. This intriguing finding is shown in Figure 5 for FABR-ν, and here, in Figure 7, we report the same curves for the mini-batch FABR.

(a) n = 10 (b) n = 20 (c) n = 40 (d) n = 80 (e) n = 160 (f) n = 320 (g) n = 640 (h) n = 1280
Figure 6: The figure above shows the full increase of FABR's accuracy with the model's complexity c in the small dataset regime. The expanded dataset follows similar patterns.

(a) n = 2560 (b) n = 50000
Figure 7: Similar to Figure 5, the figures above show FABR's test accuracy increases with the model's complexity c on the subsampled CIFAR-10 dataset 7a and the full CIFAR-10 dataset 7b. FABR trains using mini-batches with batch size = 2000 in both cases. Notice that we still observe a (shifted) double descent when batch size ≈ n, while the same phenomenon disappears when batch size ≪ n. The test accuracy is averaged over five independent runs.
[Figures 6 and 7, panels: test accuracy (%) versus complexity c, with legends for shrinkages z = 10^-5 through 10^5.]

Table 4: The table below shows FABR's and sklearn's training and prediction times (in seconds) on a synthetic dataset. We vary the dataset's number of features d and the number of shrinkages |z|. We report the average running time and the standard deviation over five independent runs.
d      | |z|=5 FABR     | |z|=5 sklearn  | |z|=10 FABR    | |z|=10 sklearn  | |z|=20 FABR    | |z|=20 sklearn   | |z|=50 FABR    | |z|=50 sklearn
10     | 7.72s ± 0.36s  | 0.01s ± 0.00s  | 6.90s ± 0.77s  | 0.02s ± 0.00s   | 7.04s ± 0.67s  | 0.03s ± 0.00s    | 7.44s ± 0.57s  | 0.07s ± 0.01s
100    | 7.35s ± 0.36s  | 0.06s ± 0.02s  | 6.58s ± 0.34s  | 0.11s ± 0.01s   | 7.61s ± 1.14s  | 0.24s ± 0.04s    | 7.3s ± 0.49s   | 0.53s ± 0.06s
500    | 7.37s ± 0.44s  | 0.33s ± 0.16s  | 6.81s ± 0.25s  | 0.54s ± 0.03s   | 7.02s ± 0.35s  | 1.01s ± 0.07s    | 7.44s ± 0.48s  | 2.41s ± 0.21s
1000   | 7.62s ± 0.31s  | 0.58s ± 0.21s  | 7.38s ± 0.23s  | 1.06s ± 0.04s   | 7.51s ± 0.24s  | 2.04s ± 0.04s    | 7.69s ± 0.08s  | 4.79s ± 0.36s
2000   | 8.33s ± 0.42s  | 1.21s ± 0.03s  | 8.09s ± 0.73s  | 2.44s ± 0.05s   | 8.33s ± 0.24s  | 4.87s ± 0.07s    | 8.29s ± 0.47s  | 12.21s ± 0.15s
3000   | 9.24s ± 0.25s  | 2.49s ± 0.05s  | 9.18s ± 0.41s  | 5.08s ± 0.03s   | 9.51s ± 0.20s  | 10.06s ± 0.02s   | 9.67s ± 0.41s  | 25.67s ± 0.23s
5000   | 10.64s ± 0.86s | 5.36s ± 0.05s  | 11.01s ± 0.7s  | 10.74s ± 0.06s  | 11.57s ± 0.81s | 21.31s ± 0.12s   | 11.54s ± 0.41s | 54.18s ± 0.73s
10000  | 11.49s ± 0.66s | 17.87s ± 8.58s | 11.81s ± 0.47s | 28.32s ± 10.53s | 11.61s ± 0.49s | 44.72s ± 9.99s   | 12.55s ± 0.3s  | 101.58s ± 15.66s
25000  | 13.89s ± 0.21s | 27.79s ± 8.75s | 14.50s ± 0.45s | 49.84s ± 9.68s  | 14.46s ± 0.96s | 94.08s ± 10.94s  | 15.68s ± 0.74s | 224.31s ± 11.75s
50000  | 17.99s ± 0.22s | 50.51s ± 8.99s | 18.27s ± 0.37s | 92.88s ± 10.45s | 19.10s ± 0.37s | 176.24s ± 10.07s | 19.68s ± 0.85s | 422.95s ± 13.22s
100000 | 25.30s ± 0.39s | 95.57s ± 0.25s | 26.16s ± 0.46s | 177.54s ± 3.77s | 27.93s ± 0.35s | 340.32s ± 3.74s  | 29.48s ± 1.38s | 816.25s ± 4.35s
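The pattern in Table 4, where FABR's time is nearly flat in |z| while sklearn's grows roughly linearly, follows from reusing one eigendecomposition of the covariance matrix across the whole shrinkage grid. A minimal numpy sketch of that trick (function names are ours, not the released code):

```python
import numpy as np

def ridge_grid_eigh(X, y, zs):
    """Solve ridge regression for every shrinkage z from a single
    eigendecomposition of X'X: one O(d^3) factorization, O(d^2) per z."""
    lam, V = np.linalg.eigh(X.T @ X)
    Vty = V.T @ (X.T @ y)
    return {z: V @ (Vty / (lam + z)) for z in zs}

def ridge_grid_naive(X, y, zs):
    """Refit from scratch for each z, as a generic solver effectively does:
    one O(d^3) linear solve per shrinkage."""
    G, b = X.T @ X, X.T @ y
    d = G.shape[0]
    return {z: np.linalg.solve(G + z * np.eye(d), b) for z in zs}
```

Both return the same coefficients (X'X + zI)^{-1} X'y; only the cost of adding more shrinkages differs.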
+page_content='A Simple Algorithm For Scaling Up Kernel Methods Teng Andrea Xu†, Bryan Kelly‡, and Semyon Malamud† †Swiss Finance Institute, EPFL andrea.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='xu,semyon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='malamud@epfl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='ch ‡Yale School of Management, Yale University bryan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='kelly@yale.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='edu Abstract The recent discovery of the equivalence between infinitely wide neural networks (NNs) in the lazy training regime and Neural Tangent Kernels (NTKs) Jacot et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' [2018] has revived interest in kernel methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' However, conventional wisdom suggests kernel methods are unsuitable for large samples due to their computational complexity and memory requirements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' We introduce a novel random feature regression algorithm that allows us (when necessary) to scale to virtually infinite numbers of random fea- tures.' 
We illustrate the performance of our method on the CIFAR-10 dataset.

arXiv:2301.11414v1 [cs.LG] 26 Jan 2023

1 Introduction

Modern neural networks operate in the over-parametrized regime, which sometimes requires orders of magnitude more parameters than training data points. Effectively, they are interpolators (see, Belkin [2021]) and overfit the data in the training sample, with no consequences for the out-of-sample performance. This seemingly counterintuitive phenomenon is sometimes called “benign overfit” [Bartlett et al., 2020, Tsigler and Bartlett, 2020]. In the so-called lazy training regime Chizat et al.
[2019], wide neural networks (many nodes in each layer) are effectively kernel regressions, and “early stopping” commonly used in neural network training is closely related to ridge regularization [Ali et al., 2019]. See Jacot et al. [2018], Hastie et al. [2019], Du et al. [2018, 2019a], Allen-Zhu et al. [2019]. Recent research also emphasizes the “double descent,” in which expected forecast error drops in the high-complexity regime. See, for example, Zhang et al. [2016], Belkin et al. [2019a,b], Spigler et al.
[2019], Belkin et al. [2020]. These discoveries made many researchers argue that we need to gain a deeper understanding of kernel methods (and, hence, random feature regressions) and their link to deep learning. See, e.g., Belkin et al. [2018]. Several recent papers have developed numerical algorithms for scaling kernel-type methods to large datasets and large numbers of random features. See, e.g., Zandieh et al.
[2021], Ma and Belkin [2017], Arora et al. [2019a], Shankar et al. [2020]. In particular, Arora et al. [2019b] show how NTK combined with support vector machines (SVM) (see also Fernández-Delgado et al. [2014]) performs well on small data tasks relative to many competitors, including the highly over-parametrized ResNet-34. While modern deep neural networks do generalize on small datasets (see, e.g., Olson et al. [2018]), Arora et al.
[2019b] show that kernel-based methods achieve superior performance in such small data environments. Similarly, Du et al. [2019b] find that the graph neural tangent kernel (GNTK) dominates graph neural networks on datasets with up to 5000 samples. Shankar et al. [2020] show that, while NTK is a powerful kernel, it is possible to build other classes of kernels (they call Neural Kernels) that are even more powerful and are often at par with extremely complex deep neural networks. In this paper, we develop a novel form of kernel ridge regression that can be applied to any kernel and any way of generating random features. We use a doubly stochastic method similar to that in Dai et al.
[2014], with an important caveat: we generate (potentially large, defined by the RAM constraints) batches of random features and then use linear algebraic properties of covariance matrices to recursively update the eigenvalue decomposition of the feature covariance matrix, allowing us to perform the optimization in one shot across a large grid of ridge parameters.

The paper is organized as follows. Section 2 discusses related work. In Section 3, we provide a novel random feature regression mathematical formulation and algorithm. Then, Section 4 and Section 5 present numerical results and conclusions, respectively.

2 Related Work

Before the formal introduction of the NTK in Jacot et al. [2018], numerous papers discussed the intriguing connections between infinitely wide neural networks and kernel methods. See, e.g., Neal [1996]; Williams [1997]; Le Roux and Bengio [2007]; Hazan and Jaakkola [2015]; Lee et al. [2018]; Matthews et al. [2018]; Novak et al. [2018]; Garriga-Alonso et al.
[2018]; Cho and Saul [2009]; Daniely et al. [2016]; Daniely [2017]. As in the standard random feature approximation of the kernel ridge regression (see Rahimi and Recht [2007]), only the network's last layer is trained in the standard kernel ridge regression. A surprising discovery of Jacot et al. [2018] is that (infinitely) wide neural networks in the lazy training regime converge to a kernel even though all network layers are trained. The corresponding kernel, the NTK, has a complex structure dependent on the neural network's architecture. See also Lee et al.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' [2019], Arora et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' [2019a] for more results about the link between NTK and the 3 underlying neural network, and Novak et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' [2019] for an efficient algorithm for implementing the NTK.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' In a recent paper, Shankar et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' [2020] introduce a new class of kernels and show that they perform remarkably well on even very large datasets, achieving a 90% accuracy on the CIFAR-10 dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' While this performance is striking, it comes at a huge computational cost.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Shankar et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' [2020] write: “CIFAR-10/CIFAR-100 consist of 60, 000 32 × 32 × 3 images and MNIST consists of 70, 000 28 × 28 images.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Even with this constraint, the largest compositional kernel matrices we study took approximately 1000 GPU hours to compute.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Thus, we believe an imperative direction of future work is reducing the complexity of each kernel evaluation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Random feature methods or other compression schemes could play a significant role here.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' In this paper, we offer one such highly scalable scheme based on random features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' How- ever, computing the random features underlying the Neural Kernels of Shankar et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' [2020] would require developing non-trivial numerical algorithms based on the recursive iteration of non-linear functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' We leave this as an important direction for future research.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' As in standard kernel ridge regressions, we train our random feature regression on the full sample.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' This is a key computational limitation for large datasets.' 
After all, one of the reasons for the success of modern deep learning is the possibility of training networks using stochastic gradient descent on mini-batches of data. Ma and Belkin [2017] show how mini-batch training can be applied to kernel ridge regression. A key technical difficulty arises because kernel matrices (equivalently, covariance matrices of random features) have eigenvalues that decay very quickly. Yet these low eigenvalues contain essential information and cannot be neglected. Our regression method can easily be modified to allow for mini-batches. Furthermore, it is known that mini-batch linear regression can even lead to performance gains in the high-complexity regime. As LeJeune et al. [2020] show, one can run regressions on mini-batches and then treat the obtained predictions as an ensemble. LeJeune et al. [2020] prove that, under technical conditions, the average of these predictions attains a lower generalization error than the full-train-sample regression. We test this mini-batch ensemble approach with our method and show that, indeed, with moderately sized mini-batches, its performance matches that of the full-sample regression.

Moreover, there is an intriguing connection between mini-batch regressions and spectral dimensionality reduction. By construction, the feature covariance matrix of a mini-batch of size B has at most B non-zero eigenvalues. Thus, a mini-batch effectively performs a dimensionality reduction on the covariance matrix. Intuitively, we expect the two methods (using a mini-batch of size B, or using the full sample but keeping only the B largest eigenvalues) to achieve comparable performance. We show that this is indeed the case for small sample sizes.
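The mini-batch ensemble idea above can be sketched in a few lines of numpy. Plain linear ridge stands in for the random-feature regression here, and the function name and interface are illustrative assumptions, not the authors' code:

```python
import numpy as np

def minibatch_ensemble_ridge(X, y, X_test, batch_size, z=1e-2, seed=0):
    # Fit an independent ridge regression on each mini-batch and average the
    # resulting test predictions, treating the batch fits as an ensemble.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    d = X.shape[1]
    preds = []
    for start in range(0, len(X), batch_size):
        b = idx[start:start + batch_size]
        beta = np.linalg.solve(X[b].T @ X[b] / len(b) + z * np.eye(d),
                               X[b].T @ y[b] / len(b))
        preds.append(X_test @ beta)
    return np.mean(preds, axis=0)   # ensemble average of batch predictions
```

With `batch_size = len(X)` this reduces to the usual full-sample ridge fit, which makes the ensemble easy to sanity-check.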
However, for larger samples (N ≥ 10000) the spectral method is superior to the mini-batch method unless we use very large mini-batches. For example, on the full CIFAR-10 dataset, the spectral method outperforms the mini-batch approach by 3% (see Section 4 for details).

3 Random Features Ridge Regression and Classification

Suppose that we have a train sample (X, y) = (x_i, y_i)_{i=1}^N, x_i ∈ R^d, y_i ∈ R, so that X ∈ R^{N×d}, y ∈ R^{N×1}. Following Rahimi and Recht [2007], we construct a large number of random features f(x; θ_p), p = 1, ..., P, where f is a non-linear function, the θ_p are sampled from some distribution, and P is a large number. We denote by S = f(X; θ) ∈ R^{N×P} the train-sample realizations of the random features. Following Rahimi and Recht [2007], we consider the random features ridge regression

    β(z) = (S⊤S/N + zI)⁻¹ S⊤y/N ,    (1)

as an approximation to kernel ridge regression when P → ∞. For classification problems, it is common to use categorical cross-entropy as the objective. However, as Belkin [2021] explains, minimizing the mean-squared error with one-hot encoding often achieves superior generalization performance, and we follow this approach here. Given the K labels, k = 1, ..., K, we build the one-hot encoding matrix Q = (q_{i,k}), where q_{i,k} = 1_{y_i = k}.
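For concreteness, the ridge coefficient of Eq. (1) applied to the one-hot matrix Q can be sketched in numpy. The ReLU feature map, function names, and hyperparameters below are illustrative assumptions (the paper leaves f generic):

```python
import numpy as np

def relu_features(X, P, seed=0):
    # Illustrative feature map f(x; theta) = max(0, x @ theta); the paper
    # leaves f generic, so this particular choice is an assumption.
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal((X.shape[1], P)) / np.sqrt(X.shape[1])
    return np.maximum(X @ theta, 0.0)

def fit_one_hot_ridge(X, y, P, z):
    # beta(z) = (S'S/N + zI)^{-1} S'Q / N with Q the one-hot label matrix.
    N, K = X.shape[0], int(y.max()) + 1
    Q = np.eye(K)[y]
    S = relu_features(X, P)
    return np.linalg.solve(S.T @ S / N + z * np.eye(P), S.T @ Q / N)

def classify(X, beta):
    # Regenerate the features with the same seed and take the arg max.
    S = relu_features(X, beta.shape[0])
    return np.argmax(S @ beta, axis=1)
```

Because the features are regenerated from a fixed seed, the same θ is used at train and test time, which is the convention the later block-wise algorithms rely on.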
Then, we get

    β(z) = (S⊤S/N + zI)⁻¹ S⊤Q/N ∈ R^{P×K} .    (2)

Then, for each test feature vector s = f(x; θ) ∈ R^P, we get a vector β(z)⊤s ∈ R^K. Next, define the actual classifier as

    k(x; z) = arg max{β(z)⊤s} ∈ {1, ..., K} .    (3)

3.1 Dealing with High-Dimensional Features

A key computational (hardware) limitation of kernel methods comes from the fact that, when P is large, computing the matrix S⊤S ∈ R^{P×P} becomes prohibitively expensive, in particular because S cannot even be stored in RAM. We start with a simple observation that the following identity implies that storing all these features is not necessary:¹

    (S⊤S/N + zI)⁻¹ S⊤ = S⊤(SS⊤/N + zI)⁻¹ ,    (4)

and therefore we can compute β(z) as

    β(z) = S⊤(SS⊤/N + zI)⁻¹ y/N .    (5)

Suppose now we split S into multiple blocks S_1, ..., S_K, where S_k ∈ R^{N×P_1} for all k = 1, ..., K, for some small P_1 with K·P_1 = P. Then

    Ψ = SS⊤ = Σ_{k=1}^K S_k S_k⊤    (6)

can be computed by generating the blocks S_k one at a time and recursively adding up S_k S_k⊤. Once Ψ has been computed, one can calculate its eigenvalue decomposition Ψ = VDV⊤ and then evaluate Q(z) = (Ψ/N + zI)⁻¹ y/N = V(D + zI)⁻¹V⊤ y/N ∈ R^N in one go for a grid of z. Then, using the same seeds, we can again generate the random features S_k and compute β_k(z) = S_k⊤ Q(z) ∈ R^{P_1}.

¹ This identity follows directly from (S⊤S/N + zI)S⊤ = S⊤(SS⊤/N + zI).
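The dual identity (4)–(5) and the block accumulation of Eq. (6) can be sketched as follows; `feature_map` is an assumed callable (X, P_1, seed) ↦ N×P_1 block, regenerated from its seed rather than stored:

```python
import numpy as np

def dual_ridge_grid(X, y, P, P1, z_grid, feature_map):
    # Accumulate Psi = S S^T block by block (Eq. (6)) without ever holding
    # the full N x P feature matrix, eigendecompose Psi/N once, then solve
    # for every shrinkage z on the grid via Eq. (5).
    N = X.shape[0]
    Psi = np.zeros((N, N))
    for k in range(P // P1):
        Sk = feature_map(X, P1, k)
        Psi += Sk @ Sk.T
    d, V = np.linalg.eigh(Psi / N)
    betas = {}
    for z in z_grid:
        Q = V @ ((V.T @ y) / (d + z)) / N   # Q(z) = (Psi/N + zI)^{-1} y / N
        betas[z] = np.concatenate(
            [feature_map(X, P1, k).T @ Q for k in range(P // P1)])
    return betas
```

A useful sanity check is that, for any z, the coefficients returned here agree with the primal formula (1) computed from the stacked feature matrix.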
Then β(z) = (β_k(z))_{k=1}^K ∈ R^P. The logic described above is formalized in Algorithm 1.

Algorithm 1 FABR
Require: P_1, P, X ∈ R^{N×d}, y ∈ R^N, z, voc_curve
  blocks ← P // P_1
  k ← 0
  Ψ ← 0_{N×N}
  while k < blocks do
      Generate S_k ∈ R^{N×P_1}                ▷ Use k as seed
      Ψ ← Ψ + S_k S_k⊤
      if k in voc_curve then
          D, V ← eigen(Ψ/N)
          Q_k(z) ← V(D + zI)⁻¹V⊤ y/N          ▷ Store Q_k(z)
      end if
      k ← k + 1
  end while
  D, V ← eigen(Ψ/N)
  Q(z) ← V(D + zI)⁻¹V⊤ y/N
  k ← 0
  while k < blocks do
      (re-)Generate S_k ∈ R^{N×P_1}           ▷ Use k as seed
      β_k(z) ← S_k⊤ Q(z)
      ŷ += S_k β_k
  end while

3.2 Dealing with Massive Datasets

The algorithm above relies crucially on the assumption that N is small. Suppose now that the sample size N is so large that storing and eigen-decomposing the matrix SS⊤ ∈ R^{N×N} becomes prohibitively expensive. In this case, we proceed as follows. Define, for all k = 1, ..., K,

    Ψ_k = Σ_{κ=1}^k S_κ S_κ⊤ ∈ R^{N×N},   Ψ_0 = 0_{N×N} ,    (7)

and let λ_1(A) ≥ ··· ≥ λ_N(A) be the eigenvalues of a symmetric matrix A ∈ R^{N×N}. Our goal is to design an approximation to (Ψ_K + zI)⁻¹, based on the simple observation that the eigenvalues of the empirically observed Ψ_k matrices tend to decay very quickly, with only a few hundred of the largest eigenvalues being significantly different from zero. In this case, we can fix a ν ∈ N and design a simple rank-ν approximation to Ψ_K by annihilating all eigenvalues below λ_ν(Ψ_K). As we now show, it is possible to design a recursive algorithm for constructing such an approximation to Ψ_K, dealing with small subsets of random features simultaneously. To this end, we proceed as follows. Suppose we have constructed an approximation Ψ̂_k ∈ R^{N×N} to Ψ_k with rank ν, and let V_k ∈ R^{N×ν} be the corresponding matrix of orthogonal eigenvectors for the non-zero eigenvalues, and D_k ∈ R^{ν×ν} the diagonal matrix of eigenvalues, so that Ψ̂_k = V_k D_k V_k⊤ and V_k⊤ V_k = I_{ν×ν}.
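A direct (non-recursive) version of the rank-ν idea is easy to write down and is useful for checking the recursion later. The closed-form inverse below uses the fact that Ψ̂ + zI acts as z·I on the annihilated eigenspace; the function itself is our illustrative sketch, not the paper's algorithm:

```python
import numpy as np

def rank_nu_regularized_inverse(Psi, z, nu):
    # Keep the top-nu eigenpairs of Psi and annihilate the rest, then invert
    # (Psi_hat + zI) in closed form:
    #   (V D V^T + zI)^{-1} = V ((D + z)^{-1} - 1/z) V^T + I/z.
    d, V = np.linalg.eigh(Psi)
    Dt, Vt = d[-nu:], V[:, -nu:]          # eigh sorts ascending: top-nu last
    N = Psi.shape[0]
    return Vt @ np.diag(1.0 / (Dt + z) - 1.0 / z) @ Vt.T + np.eye(N) / z
```

When Ψ actually has rank at most ν, the approximation is exact, which gives a convenient unit test.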
Instead of storing the full Ψ̂_k matrix, we only need to store the pair (V_k, D_k). For all k = 1, ..., K, we now define

    Ψ̃_{k+1} = Ψ̂_k + S_{k+1} S_{k+1}⊤ .    (8)

This N × N matrix is a theoretical construct; we never actually compute it (see Algorithm 2). Let Θ_k = I − V_k V_k⊤ be the orthogonal projection onto the kernel of Ψ̂_k, and let

    S̃_{k+1} = Θ_k S_{k+1} = S_{k+1} − V_k (V_k⊤ S_{k+1})    (9)

be S_{k+1} orthogonalized with respect to the columns of V_k. Then, we define W̃_{k+1} = S̃_{k+1}(S̃_{k+1} S̃_{k+1}⊤)^{−1/2} to be the orthogonalized columns of S̃_{k+1}, and V̂_{k+1} = [V_k, W̃_{k+1}]. To compute S̃_{k+1}(S̃_{k+1} S̃_{k+1}⊤)^{−1/2}, we use the following lemma, which once again uses smart eigenvalue-decomposition techniques to avoid dealing with the N × N matrix S̃_{k+1} S̃_{k+1}⊤.

Lemma 1. Let S̃_{k+1}⊤ S̃_{k+1} = W δ W⊤ be the eigenvalue decomposition of S̃_{k+1}⊤ S̃_{k+1}. Then W̃ = S̃_{k+1} W δ^{−1/2} is the matrix of eigenvectors of S̃_{k+1} S̃_{k+1}⊤ for the non-zero eigenvalues. Thus,

    S̃_{k+1}(S̃_{k+1} S̃_{k+1}⊤)^{−1/2} = W̃_{k+1} .    (10)

By construction, the columns of V̂_{k+1} form an orthogonal basis of the span of the columns of V_k and S_{k+1}, and hence

    Ψ_{k+1,∗} = V̂_{k+1}⊤ Ψ̃_{k+1} V̂_{k+1} ∈ R^{(P_1+ν)×(P_1+ν)}    (11)

has the same non-zero eigenvalues as Ψ̃_{k+1}. We then define Ṽ_{k+1} ∈ R^{(P_1+ν)×ν} to be the matrix of eigenvectors of Ψ_{k+1,∗} for the largest ν eigenvalues, denote the diagonal matrix of these eigenvalues by D_{k+1} ∈ R^{ν×ν}, and define V_{k+1} = V̂_{k+1} Ṽ_{k+1}.
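Lemma 1 is the standard "small Gram matrix" trick; a numpy sketch (names are ours) makes the claim concrete:

```python
import numpy as np

def orthonormal_range_basis(S_tilde):
    # Lemma 1: eigendecompose the small Gram matrix S~^T S~ = W delta W^T and
    # lift: W~ = S~ W delta^{-1/2} gives orthonormal eigenvectors of the big
    # N x N matrix S~ S~^T (non-zero eigenvalues) without ever forming it.
    delta, W = np.linalg.eigh(S_tilde.T @ S_tilde)
    keep = delta > 1e-10 * delta.max()          # drop numerically-zero modes
    return S_tilde @ (W[:, keep] / np.sqrt(delta[keep])), delta[keep]
```

The returned columns are orthonormal because W̃⊤W̃ = δ^{−1/2} W⊤ (S̃⊤S̃) W δ^{−1/2} = I, and they satisfy the eigenvector relation (S̃S̃⊤) W̃ = W̃ δ.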
Then,

    Ψ̂_{k+1} = V_{k+1} D_{k+1} V_{k+1}⊤ = Π_{k+1} Ψ̃_{k+1} Π_{k+1} ,

where Π_{k+1} = V̂_{k+1} Ṽ_{k+1} Ṽ_{k+1}⊤ V̂_{k+1}⊤ is the orthogonal projection onto the eigen-subspace of Ψ̃_{k+1} for the largest ν eigenvalues.

Lemma 2. We have Ψ̂_k ≤ Ψ̃_k ≤ Ψ_K and

    ‖Ψ_k − Ψ̂_k‖ ≤ Σ_{i=1}^k λ_{ν+1}(Ψ_i) ≤ k λ_{ν+1}(Ψ_K) ,    (12)

and

    ‖(Ψ_{k+1} + zI)⁻¹ − (Ψ̂_{k+1} + zI)⁻¹‖ ≤ z⁻² Σ_{i=1}^k λ_{ν+1}(Ψ_i) .    (13)

There is another important aspect of our algorithm: it allows us to directly compute the performance of models with an expanding level of complexity. Indeed, since we load random features in batches of size P_1, we generate predictions for P ∈ {P_1, 2P_1, ..., KP_1}. This is useful because we can use it to calibrate the optimal degree of complexity, and because we can directly study double-descent-like phenomena, i.e., the effect of complexity on the generalization error; see, e.g., Belkin et al. [2019a] and Nakkiran et al. [2021]. We do this in the next section. As we show, consistent with the recent theoretical results of Kelly et al. [2022], with sufficient shrinkage the double descent curve disappears and performance becomes almost monotonic in complexity. Following Kelly et al. [2022], we name this phenomenon the virtue of complexity (VoC) and the corresponding performance plots VoC curves; see Figure 6 below.
We call this algorithm Fast Annihilating Batch Regression (FABR) because it annihilates all eigenvalues below λ_ν(Ψ_K) and allows us to solve the random features ridge regression in one go for a grid of z. Algorithm 2 formalizes the logic described above.

4 Numerical Results

This section presents several experimental results on different datasets to evaluate FABR's performance and applications. In contrast to the substantial computational power demanded by recent kernel methods, e.g., Shankar et al. [2020], we ran all experiments on a laptop: a MacBook Pro (model A2485) equipped with an M1 Max with a 10-core CPU and 32 GB of RAM.
Algorithm 2 FABR-ν
Require: ν, P_1, P, X ∈ R^{N×d}, y ∈ R^N, z, voc_curve
  blocks ← P // P_1
  k ← 0
  while k < blocks do
      Generate S_k ∈ R^{N×P_1}                ▷ Use k as seed to generate the random features
      if k = 0 then
          d̃, Ṽ ← eigen(S_k⊤ S_k)
          V ← S_k Ṽ diag(d̃)^{−1/2}
          V_0 ← V_{:, :min(ν, P_1)}           ▷ Save V_0
          d_0 ← d̃_{:min(ν, P_1)}             ▷ Save d_0
          if k in voc_curve then
              Q_0(z) ← V_0 (diag(d_0) + zI)⁻¹ V_0⊤ y    ▷ Save Q_0(z)
          end if
      else if k > 0 then
          S̃_k ← (I − V_{k−1} V_{k−1}⊤) S_k
          Γ_k ← S̃_k⊤ S̃_k
          δ_k, W_k ← eigen(Γ_k)
          Keep top min(ν, P_1) eigenvalues and eigenvectors from δ_k, W_k
          W̃_k ← S̃_k W_k diag(δ_k)^{−1/2}
          V̂_k ← [V_{k−1}, W̃_k]
          V̄_k ← V̂_k⊤ V_{k−1}
          W̄_k ← V̄_k diag(d_{k−1}) V̄_k⊤
          S̄_k ← V̂_k⊤ S_k
          Z̄_k ← S̄_k S̄_k⊤
          Ψ_∗ ← W̄_k + Z̄_k
          d_k, V_k ← eigen(Ψ_∗)
          Keep top min(ν, P_1) eigenvalues and eigenvectors from d_k, V_k
          V_k ← V̂_k V_k                       ▷ Save d_k, V_k
          if k in voc_curve then
              Q_k(z) ← V_k (diag(d_k) + zI)⁻¹ V_k⊤ y    ▷ Save Q_k(z)
          end if
      end if
      k ← k + 1
  end while
  k ← 0
  while k < blocks do
      (re-)Generate S_k ∈ R^{N×P_1}           ▷ Use k as seed to generate the random features
      β_k(z) ← S_k⊤ Q_k(z)
      ŷ += S_k β_k
  end while

4.1 A comparison with sklearn

We now aim to show FABR's training and prediction time with respect to the number of features d.
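One step of the recursion (Eqs. (8)–(11), i.e., the k > 0 branch of Algorithm 2) can be sketched and tested against a brute-force eigendecomposition. This compact numpy version is our paraphrase of the pseudocode, not the authors' code:

```python
import numpy as np

def rank_nu_update(Vk, dk, S_new, nu):
    # Orthogonalize the new block against V_k (Eq. (9)), extend the basis
    # via Lemma 1, form the small matrix Psi_* = Vhat^T Psi~ Vhat (Eq. (11)),
    # and keep its top-nu eigenpairs.
    S_t = S_new - Vk @ (Vk.T @ S_new)
    delta, W = np.linalg.eigh(S_t.T @ S_t)
    keep = delta > 1e-10 * max(delta.max(), 1.0)
    W_t = S_t @ (W[:, keep] / np.sqrt(delta[keep]))
    V_hat = np.hstack([Vk, W_t])
    A = V_hat.T @ Vk                     # Vhat^T V_k
    B = V_hat.T @ S_new                  # Vhat^T S_{k+1}
    Psi_small = (A * dk) @ A.T + B @ B.T
    d_new, U = np.linalg.eigh((Psi_small + Psi_small.T) / 2)
    return V_hat @ U[:, -nu:], d_new[-nu:]
```

Because V̂ spans the whole range of Ψ̃, the small eigenproblem reproduces the non-zero spectrum of Ψ̃ exactly, so the top-ν eigenvalues of the update match those of the (never formed) N×N matrix.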
To this end, we do not use any random feature projection or the rank-ν matrix approximation described in Section 3.1. We draw N = 5000 i.i.d. samples from ⊗_{j=1}^d N(0, 1) and let y_i = x_i β + ϵ_i for all i = 1, ..., N, where β ∼ ⊗_{j=1}^d N(0, 1) and ϵ_i ∼ N(0, 1) for all i = 1, ..., N. Then, we define

    y_i = 1 if y_i > median(y), and 0 otherwise,    for all i = 1, ..., N.

Next, we create a set of datasets for classification with varying complexity d and keep the first 4000 samples as the training set and the remaining 1000 as the test set. We show in Figure 1 the average training and prediction time (in seconds) of FABR with a different number of regularizers (we denote this number by |z|) and sklearn RidgeClassifier with an increasing number of features d. The training and prediction time is averaged over five independent runs. As one can see, our method is drastically faster when d > 10000.
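A short sketch of this synthetic benchmark; it reproduces only the data construction described above (Gaussian design, linear signal plus noise, median binarization, 4000/1000 split), not the timing harness:

```python
import numpy as np

def make_synthetic_classification(N=5000, d=100, seed=0):
    """Draw x_i ~ N(0, I_d), set y_i = x_i @ beta + eps_i, binarize at the median."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((N, d))
    beta = rng.standard_normal(d)
    y_cont = X @ beta + rng.standard_normal(N)
    y = (y_cont > np.median(y_cont)).astype(int)   # balanced binary labels
    # First 4000 samples for training, remaining 1000 for testing
    return (X[:4000], y[:4000]), (X[4000:], y[4000:])
```

Sweeping d up to 100000 with this generator reproduces the difficulty axis of the comparison; the seed and function name are illustrative.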
For example, for d = 100000 we outperform sklearn by approximately 5 and 25 times for |z| = 5 and |z| = 50, respectively. Moreover, one can notice that the number of different shrinkages |z| barely affects FABR's running time. We report a more detailed table with average training and prediction times and standard deviations in Appendix B.

4.2 Experiments on Real Datasets

For further evaluation, we assess FABR's performance in both the small- and big-dataset regimes.
For all experiments, we perform a random features kernel ridge regression for demeaned one-hot labels and solve the optimization problem using FABR as described in Section 3.

Figure 1: The figure compares FABR's training and prediction time, shown on the y-axis, in black, against sklearn's RidgeClassifier, in red, for an increasing number of features, shown on the x-axis, and the number of shrinkages z. Here, |z| denotes the number of different values of z for which we perform the training.

4.2.1 Data Representation

Table 1: The table below shows the average test accuracy and standard deviation of ResNet-34, CNTK, and FABR on the subsampled CIFAR-10 datasets. The test accuracy is averaged over twenty independent runs.

n     ResNet-34        14-layer CNTK    z = 1            z = 100          z = 10000        z = 100000
10    14.59% ± 1.99%   15.33% ± 2.43%   18.50% ± 2.18%   18.50% ± 2.18%   18.42% ± 2.13%   18.13% ± 2.01%
20    17.50% ± 2.47%   18.79% ± 2.13%   20.84% ± 2.38%   20.85% ± 2.38%   20.78% ± 2.35%   20.13% ± 2.34%
40    19.52% ± 1.39%   21.34% ± 1.91%   25.09% ± 1.76%   25.10% ± 1.76%   25.14% ± 1.75%   24.41% ± 1.88%
80    23.32% ± 1.61%   25.48% ± 1.91%   29.61% ± 1.35%   29.60% ± 1.35%   29.62% ± 1.39%   28.63% ± 1.66%
160   28.30% ± 1.38%   30.48% ± 1.17%   34.86% ± 1.12%   34.87% ± 1.12%   35.02% ± 1.11%   33.54% ± 1.24%
320   33.15% ± 1.20%   36.57% ± 0.88%   40.46% ± 0.73%   40.47% ± 0.73%   40.66% ± 0.72%   39.34% ± 0.72%
640   41.66% ± 1.09%   42.63% ± 0.68%   45.68% ± 0.71%   45.68% ± 0.72%   46.17% ± 0.68%   44.91% ± 0.72%
1280  49.14% ± 1.31%   48.86% ± 0.68%   50.30% ± 0.57%   50.32% ± 0.56%   51.05% ± 0.54%   49.74% ± 0.42%

Like any standard kernel method or randomized-feature technique, FABR requires a good data representation. Usually, we do not know such a representation a priori, and learning a good kernel is outside the scope of this paper. Therefore, we build a simple Convolutional Neural Network (CNN) mapping h : R^d → R^D that extracts image features x̃ ∈ R^D for some sample x ∈ R^d.
The CNN is not optimized; we use it as a simple random feature mapping. The CNN architecture, shown in Fig. 2, alternates a 3 × 3 convolution layer with a ReLU activation function, a 2 × 2 average pool, and a BatchNormalization layer (Ioffe and Szegedy [2015]). Convolutional layer weights are initialized using He Uniform initialization (He et al. [2015]). To vectorize images, we use a global average pooling layer, which has proven to enforce correspondences between feature maps and to be more robust to spatial translations of the input (Lin et al. [2013]).

Figure 2: CNN architecture used to extract image features.

We finally obtain the train and test random feature realizations s = f(x̃, θ). Specifically, we use the following random features mapping

    s_i = σ(W x̃),    (14)

where W ∈ R^{P×D} with w_{i,j} ∼ N(0, 1) and σ is some elementwise activation function. This can be described as a one-layer neural network with random weights W. To show the importance of over-parametrized models, throughout the results we report the complexity of the model as c = P/N, that is, the ratio between the number of parameters (dimensions) and the number of observations. See Belkin et al. [2019a], Hastie et al. [2019], Kelly et al. [2022].
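Equation (14) is just one untrained random layer. A minimal sketch, assuming a ReLU choice for σ (the paper leaves σ generic) and taking the CNN-extracted features x̃ as given:

```python
import numpy as np

def random_features(X_tilde, P, seed=0, sigma=lambda a: np.maximum(a, 0.0)):
    """Map extracted features x_tilde in R^D to s = sigma(W x_tilde) in R^P, eq. (14).

    W is drawn once with w_ij ~ N(0, 1) and never trained; sigma is an
    elementwise activation (ReLU here, as an illustrative choice).
    """
    D = X_tilde.shape[1]
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((P, D))          # fixed random weights
    return sigma(X_tilde @ W.T)              # one untrained layer
```

For N observations and P such features, the model complexity is c = P/N, so c = 100 simply means generating P = 100N random features.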
4.2.2 Small Datasets

We now study the performance of FABR on the subsampled CIFAR-10 dataset (Krizhevsky et al. [2009]). To this end, we reproduce the same experiment described in Arora et al. [2019b]. In particular, we obtain a random subsampled training set (y, X) = (y_i, x_i)_{i=1}^n, where n ∈ {10, 20, 40, 80, 160, 320, 640, 1280}, and test on the whole test set of size 10000. We make sure that exactly n/10 samples from each image class are in the training sample. We train FABR using a random feature projection of the subsampled training set, S = σ(W g(X)) ∈ R^{n×P}, where g is the untrained CNN from Figure 2, randomly initialized using the He Uniform distribution.
In this experiment, we push the model complexity c to 100; in other words, FABR's number of parameters equals a hundred times the number of observations in the subsample. As n is small, we deliberately do not perform any low-rank covariance matrix approximation. Finally, we run our model twenty times and report the mean out-of-sample performance and its standard deviation. We report in Table 1 FABR's performance for different shrinkages z, together with ResNet-34 and the 14-layer CNTK. Without any complicated random feature projection, FABR can outperform both ResNet-34 and the CNTK. FABR's test accuracy increases with the model's complexity c on the different subsampled CIFAR-10 datasets (varying n). We show Figure 3 as an example for n = 10.
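Evaluating many shrinkages z is cheap because the expensive step, an eigendecomposition of the feature Gram matrix, is shared across all of them; each additional z only rescales eigenvalues. A sketch of this reuse in the dual (n × n) form, with one-hot targets; where exactly this happens inside FABR follows Algorithm 2 and is not spelled out in this section, so treat the function below as illustrative:

```python
import numpy as np

def ridge_fits_for_many_z(S, Y, z_list):
    """Kernel (dual) ridge on random features S (n x P), one-hot targets Y (n x C).

    Eigendecompose the Gram matrix once; each shrinkage z then costs only a
    rescaling of eigenvalues, which is why |z| barely affects running time.
    """
    K = S @ S.T                                # n x n Gram matrix
    d, V = np.linalg.eigh(K)                   # one-off decomposition
    VtY = V.T @ Y                              # shared across all z
    fits = {}
    for z in z_list:
        alpha = V @ (VtY / (d + z)[:, None])   # alpha(z) = (K + z I)^{-1} Y
        fits[z] = K @ alpha                    # in-sample fit for shrinkage z
    return fits
```

This is why a table can report z ∈ {1, 10^2, 10^4, 10^5} at essentially the cost of a single fit.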
Additionally, to better observe the double descent phenomenon, we show curves truncated at c = 25 for all CIFAR-10 subsamples in Figure 4. The full curves are shown in Appendix B.

Figure 3: The figures above show that FABR's test accuracy increases with the model's complexity c on the subsampled CIFAR-10 dataset for n = 10. The test accuracy is averaged over five independent runs.

To sum up this section's findings:
- FABR, with enough complexity and a simple random feature projection, is able to outperform deep neural networks (ResNet-34) and CNTKs.
- FABR always reaches the maximum accuracy beyond the interpolation threshold.
- Moreover, if the random feature ridge regression shrinkage z is sufficiently high, the double descent phenomenon disappears, and the accuracy does not drop at the interpolation threshold point, i.e., when c = 1 or n = P. Following Kelly et al. [2022], we call this phenomenon the virtue of complexity (VoC).

4.2.3 Big Datasets

In this section, we repeat the same experiments described in Section 4.2.2, but we extend the training set size n up to the full CIFAR-10 dataset. For each n, we train FABR, FABR-ν with a rank-ν approximation as described in Algorithm 2, and the mini-batch FABR. We use ν = 2000 and batch size = 2000 in the last two algorithms. Following Arora et al. [2019b], we train ResNet-34 as the benchmark for 160 epochs, with an initial learning rate of 0.001 and a batch size of 32. We decrease the learning rate by a factor of ten at epochs 80 and 120. ResNet-34 always reaches close to perfect accuracy on the training set, i.e., above 99%.

Figure 4: Panels (a) n = 10, (b) n = 20, (c) n = 40, (d) n = 80, (e) n = 160, (f) n = 320, (g) n = 640, (h) n = 1280. The figures above show that FABR's test accuracy increases with the model's complexity c on different subsampled CIFAR-10 datasets (n); each panel plots test accuracy against c for shrinkages z ranging from 10^-5 to 10^5. The expanded dataset follows similar patterns. We truncate the curves at c > 25 to better show the double descent phenomenon. The full curves are shown in Appendix B. Notice that when the shrinkage is sufficiently high, the double descent disappears, and the accuracy monotonically increases in complexity. Following Kelly et al. [2022], we name this phenomenon the virtue of complexity (VoC). The test accuracy is averaged over 20 independent runs.
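The object these experiments evaluate, a random feature ridge regression with shrinkage z and complexity c = P/n, can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (ReLU random features, synthetic data, and our own function names), not the paper's exact FABR implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_features(X, P, rng):
    """Random ReLU feature map: project (m x d) inputs into P features."""
    W = rng.standard_normal((X.shape[1], P)) / np.sqrt(X.shape[1])
    return np.maximum(X @ W, 0.0)

def ridge_predict(F_train, y, F_test, z):
    """Ridge regression with shrinkage z; uses the dual form when P > n."""
    n, P = F_train.shape
    if P <= n:
        # primal: beta = (F'F + zI)^{-1} F'y
        beta = np.linalg.solve(F_train.T @ F_train + z * np.eye(P), F_train.T @ y)
    else:
        # dual (kernel) form: beta = F'(FF' + zI)^{-1} y, only an n x n solve
        alpha = np.linalg.solve(F_train @ F_train.T + z * np.eye(n), y)
        beta = F_train.T @ alpha
    return F_test @ beta

# toy data; complexity c = P / n controls under- vs over-parametrization
n, d = 100, 10
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0] + 0.1 * rng.standard_normal(n))
X_test = rng.standard_normal((50, d))

for c in (0.5, 1.0, 15.0):
    P = int(c * n)
    F = random_features(np.vstack([X, X_test]), P, rng)
    pred = ridge_predict(F[:n], y, F[n:], z=1.0)
```

Sweeping c while holding n fixed and plotting test accuracy for several z values reproduces the kind of curves shown in Figures 3 and 4.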
We run each training five times and report the mean out-of-sample performance and its standard deviation. As the training sample is sufficiently large already, we set the model complexity to only c = 15, meaning that for the full sample, FABR performs a random feature ridge regression with P = 7.5 × 10^5. We report the results in Tables 2 and 3.

Table 2: The table below shows the average test accuracy and standard deviation of ResNet-34 and FABR on the subsampled and full CIFAR-10 dataset. The test accuracy is averaged over five independent runs.

n       ResNet-34        z = 1            z = 100          z = 10000        z = 100000
2560    48.12% ± 0.69%   52.24% ± 0.29%   52.45% ± 0.21%   54.29% ± 0.44%   48.28% ± 0.37%
5120    56.03% ± 0.82%   55.34% ± 0.32%   55.74% ± 0.34%   58.29% ± 0.20%   52.06% ± 0.08%
10240   63.21% ± 0.26%   58.36% ± 0.45%   58.86% ± 0.54%   62.17% ± 0.35%   55.75% ± 0.18%
20480   69.24% ± 0.47%   61.08% ± 0.17%   61.65% ± 0.27%   65.12% ± 0.19%   59.34% ± 0.14%
50000   75.34% ± 0.21%   66.38% ± 0.00%   66.98% ± 0.00%   68.62% ± 0.00%   63.25% ± 0.00%

The experiment delivers a number of additional conclusions.
Figure 5: Panels (a) n = 2560 and (b) n = 50000. The figures above show that FABR's test accuracy increases with the model's complexity c on the subsampled CIFAR-10 dataset (5a) and the full CIFAR-10 dataset (5b), for shrinkages z ranging from 10^-5 to 10^5. FABR is trained using a ν = 2000 low-rank covariance matrix approximation. Notice that we still observe a (shifted) double descent when ν ≈ n. The same phenomenon disappears when ν ≪ n.
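A rank-ν approximation of this kind can be sketched as follows; this is our own illustrative version, not the paper's Algorithm 2. The idea is to eigendecompose the n × n Gram matrix once, keep only the top ν eigenpairs, and reuse them to solve the ridge problem for every shrinkage z in the grid at once:

```python
import numpy as np

def lowrank_ridge_predictions(F_train, y, F_test, z_grid, nu):
    """Approximate ridge predictions for all shrinkages in z_grid using the
    top-nu eigenpairs of the Gram matrix G = F_train @ F_train.T."""
    G = F_train @ F_train.T                  # n x n Gram matrix
    eigvals, eigvecs = np.linalg.eigh(G)     # eigenvalues in ascending order
    lam = eigvals[-nu:]                      # top-nu eigenvalues
    V = eigvecs[:, -nu:]                     # corresponding eigenvectors
    Vty = V.T @ y
    preds = {}
    for z in z_grid:
        # (G + zI)^{-1} y restricted to the top-nu eigenspace:
        # the expensive decomposition is shared across all z
        alpha = V @ (Vty / (lam + z))
        preds[z] = F_test @ (F_train.T @ alpha)
    return preds

rng = np.random.default_rng(1)
n, P = 200, 1000
F_train = rng.standard_normal((n, P))
y = rng.standard_normal(n)
F_test = rng.standard_normal((20, P))
preds = lowrank_ridge_predictions(F_train, y, F_test,
                                  z_grid=(1e-1, 1e2, 1e5), nu=50)
```

The design point is that a single O(n^3) eigendecomposition serves the whole shrinkage grid, whereas a direct solve would cost O(n^3) per z.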
The test accuracy is averaged over five independent runs.

Table 3: The table below shows the average test accuracy and standard deviation of FABR-ν and mini-batch FABR on the subsampled and full CIFAR-10 dataset. The test accuracy is averaged over five independent runs.

                 z = 1                             z = 100                           z = 10000                         z = 100000
n       batch = 2000     ν = 2000         batch = 2000     ν = 2000         batch = 2000     ν = 2000         batch = 2000     ν = 2000
2560    53.13% ± 0.38%   53.48% ± 0.22%   53.15% ± 0.42%   53.63% ± 0.24%   52.01% ± 0.51%   54.05% ± 0.44%   46.78% ± 0.52%   48.23% ± 0.34%
5120    57.68% ± 0.18%   57.63% ± 0.19%   57.70% ± 0.16%   57.63% ± 0.18%   56.83% ± 0.27%   57.53% ± 0.11%   51.42% ± 0.22%   51.75% ± 0.14%
10240   59.79% ± 0.35%   61.20% ± 0.39%   59.79% ± 0.35%   61.20% ± 0.38%   58.63% ± 0.28%   60.63% ± 0.21%   53.73% ± 0.37%   55.16% ± 0.34%
20480   61.56% ± 0.35%   63.50% ± 0.12%   61.55% ± 0.37%   63.50% ± 0.13%   60.90% ± 0.20%   62.92% ± 0.12%   57.10% ± 0.19%   58.40% ± 0.21%
50000   62.74% ± 0.10%   65.45% ± 0.18%   62.74% ± 0.10%   65.44% ± 0.18%   62.35% ± 0.05%   65.04% ± 0.19%   59.99% ± 0.02%   61.71% ± 0.09%

First, we observe that, while for small training sample sizes of n ≤ 10000 simple kernel methods achieve performance comparable with that of DNNs, this is not the case for n > 20000. Beating DNNs on big datasets with shallow methods requires more complex kernels, such as those in Shankar et al. [2020] and Li et al. [2019]. Second, we confirm the findings of Ma and Belkin [2017] and Lee et al. [2020] suggesting that the role of small eigenvalues is important. For example, FABR-ν with ν = 2000 loses several percent of accuracy on larger datasets.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='60 z= 10-5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='55 z= 10-1 z= 100 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='50 z= 101 z= 102 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='45 Z= 103 z= 104 z= 105 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='40 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='0 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='5 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='0 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='5 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='0 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='5 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='0 c• Third, surprisingly, both the mini-batch FABR and FABR-ν sometimes achieve higher accuracy than the full sample regression on moderately-sized datasets.' 
See Tables 2 and 3. Understanding these phenomena is an interesting direction for future research.
• Fourth, the double descent phenomenon naturally appears for both FABR-ν and the mini-batch FABR, but only when ν ≈ n or batch size ≈ n. However, the double descent phenomenon disappears when ν ≪ n. This intriguing finding is shown in Figure 5 for FABR-ν, and in Appendix B for the mini-batch FABR.
• Fifth, on average, FABR-ν outperforms the mini-batch FABR on larger datasets.

5 Conclusion and Discussion

The recent discovery of the equivalence between infinitely wide neural networks (NNs) in the lazy training regime and neural tangent kernels (NTKs) by Jacot et al. [2018] has revived interest in kernel methods. However, these kernels are extremely complex and usually require running on big and expensive computing clusters (Avron et al. [2017], Shankar et al. [2020]) due to memory (RAM) requirements. This paper proposes a highly scalable random features ridge regression that can run on a simple laptop. We name it Fast Annihilating Batch Regression (FABR). Thanks to the linear-algebraic properties of covariance matrices, this tool can be applied to any kernel and any way of generating random features. Moreover, we provide several experimental results to assess its performance. We show how FABR can outperform (in training and prediction speed) the current state-of-the-art ridge classifier's implementation.
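The covariance-batching idea that makes this kind of regression memory-light can be illustrated in plain NumPy. The sketch below is not the authors' implementation; the function name, the ReLU random-feature map, and all sizes are invented for the example. It only demonstrates the linear-algebraic property the text relies on: the Gram quantities F^T F and F^T y are sums over observations, so they can be accumulated one mini-batch at a time, with an optional top-ν spectral truncation in the spirit of FABR-ν.

```python
import numpy as np

def batched_ridge(X, y, n_features=512, batch_size=1000, z=1.0, nu=None, seed=0):
    """Random-features ridge regression with the covariance accumulated batch by batch.

    S = F^T F and b = F^T y are sums over rows, so each mini-batch of observations
    contributes independently; memory stays O(n_features^2), never O(n * n_features).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, n_features)) / np.sqrt(d)  # illustrative random projection
    S = np.zeros((n_features, n_features))
    b = np.zeros(n_features)
    for start in range(0, X.shape[0], batch_size):
        F = np.maximum(X[start:start + batch_size] @ W, 0.0)  # ReLU random features
        S += F.T @ F
        b += F.T @ y[start:start + batch_size]
    eigval, eigvec = np.linalg.eigh(S)      # eigenvalues in ascending order
    if nu is not None:                      # FABR-nu-style truncation: keep the top nu modes
        eigval, eigvec = eigval[-nu:], eigvec[:, -nu:]
    beta = eigvec @ ((eigvec.T @ b) / (eigval + z))  # solves (S + z I) beta = b when nu is None
    return W, beta

# The batched fit coincides with a full-sample ridge fit of the same features.
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 10))
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.standard_normal(2000)
W, beta = batched_ridge(X, y, batch_size=250)
F = np.maximum(X @ W, 0.0)
beta_full = np.linalg.solve(F.T @ F + np.eye(F.shape[1]), F.T @ y)
assert np.abs(beta - beta_full).max() < 1e-6
```

Setting `nu` to a small value reproduces the spectral truncation that, per the findings above, can cost accuracy when ν ≪ n.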
Then, we show how a simple data representation strategy combined with a random features ridge regression can outperform complicated kernels (CNTKs) and over-parametrized Deep Neural Networks (ResNet-34) in the few-shot learning setting. The experiments section concludes by showing additional results on big datasets.

In this paper, we focus on very simple classes of random features. Recent findings (see, e.g., Shankar et al. [2020]) suggest that highly complex kernel architectures are necessary to achieve competitive performance on large datasets. Since each kernel regression can be approximated with random features, our method is potentially applicable to these kernels as well. However, directly computing the random feature representation of such complex kernels is non-trivial and we leave it for future research.

References

Alnur Ali, J. Zico Kolter, and Ryan J. Tibshirani. A continuous-time view of early stopping for least squares regression. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1370–1378. PMLR, 2019.
Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning, pages 242–252. PMLR, 2019.
Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Russ R. Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. Advances in Neural Information Processing Systems, 32, 2019a.
Sanjeev Arora, Simon S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, and Dingli Yu. Harnessing the power of infinitely wide deep nets on small-data tasks. arXiv preprint arXiv:1910.01663, 2019b.
Haim Avron, Kenneth L. Clarkson, and David P. Woodruff. Faster kernel ridge regression using sketching and preconditioning. SIAM Journal on Matrix Analysis and Applications, 38(4):1116–1138, 2017.
Peter L. Bartlett, Philip M. Long, Gábor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. Proceedings of the National Academy of Sciences, 117(48):30063–30070, 2020.
Mikhail Belkin. Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation. Acta Numerica, 30:203–248, 2021.
Mikhail Belkin, Siyuan Ma, and Soumik Mandal. To understand deep learning we need to understand kernel learning. In International Conference on Machine Learning, pages 541–549. PMLR, 2018.
Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias–variance trade-off. Proceedings of the National Academy of Sciences, 116(32):15849–15854, 2019a.
Mikhail Belkin, Alexander Rakhlin, and Alexandre B. Tsybakov. Does data interpolation contradict statistical optimality? In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1611–1619. PMLR, 2019b.
Mikhail Belkin, Daniel Hsu, and Ji Xu. Two models of double descent for weak features. SIAM Journal on Mathematics of Data Science, 2(4):1167–1180, 2020.
Lenaic Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable programming. Advances in Neural Information Processing Systems, 32, 2019.
Youngmin Cho and Lawrence Saul. Kernel methods for deep learning. Advances in Neural Information Processing Systems, 22, 2009.
Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina F. Balcan, and Le Song. Scalable kernel methods via doubly stochastic gradients. Advances in Neural Information Processing Systems, 27, 2014.
Amit Daniely. SGD learns the conjugate kernel class of the network. Advances in Neural Information Processing Systems, 30, 2017.
Amit Daniely, Roy Frostig, and Yoram Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. Advances in Neural Information Processing Systems, 29, 2016.
Simon Du, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning, pages 1675–1685. PMLR, 2019a.
Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054, 2018.
Simon S. Du, Kangcheng Hou, Russ R. Salakhutdinov, Barnabas Poczos, Ruosong Wang, and Keyulu Xu. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. Advances in Neural Information Processing Systems, 32, 2019b.
Manuel Fernández-Delgado, Eva Cernadas, Senén Barro, and Dinani Amorim. Do we need hundreds of classifiers to solve real world classification problems? The Journal of Machine Learning Research, 15(1):3133–3181, 2014.
Adrià Garriga-Alonso, Carl Edward Rasmussen, and Laurence Aitchison. Deep convolutional networks as shallow Gaussian processes. In International Conference on Learning Representations, 2018.
Trevor Hastie, Andrea Montanari, Saharon Rosset, and Ryan J. Tibshirani. Surprises in high-dimensional ridgeless least squares interpolation. arXiv preprint arXiv:1903.08560, 2019.
Tamir Hazan and Tommi Jaakkola. Steps toward deep kernel methods from infinite neural networks. arXiv preprint arXiv:1508.05133, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034, 2015.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456. PMLR, 2015.
Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31, 2018.
Bryan T. Kelly, Semyon Malamud, and Kangying Zhou. The virtue of complexity in return prediction. 2022.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
Nicolas Le Roux and Yoshua Bengio. Continuous neural networks. In Artificial Intelligence and Statistics, pages 404–411. PMLR, 2007.
Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep neural networks as Gaussian processes. In International Conference on Learning Representations, 2018.
Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. Advances in Neural Information Processing Systems, 32, 2019.
Jaehoon Lee, Samuel Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, and Jascha Sohl-Dickstein. Finite versus infinite neural networks: an empirical study. Advances in Neural Information Processing Systems, 33:15156–15172, 2020.
Daniel LeJeune, Hamid Javadi, and Richard Baraniuk. The implicit regularization of ordinary least squares ensembles. In International Conference on Artificial Intelligence and Statistics, pages 3525–3535. PMLR, 2020.
Zhiyuan Li, Ruosong Wang, Dingli Yu, Simon S. Du, Wei Hu, Ruslan Salakhutdinov, and Sanjeev Arora. Enhanced convolutional neural tangent kernels. arXiv preprint arXiv:1911.00809, 2019.
Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
Siyuan Ma and Mikhail Belkin. Diving into the shallows: a computational perspective on large-scale shallow learning. Advances in Neural Information Processing Systems, 30, 2017.
Alexander G. de G. Matthews, Mark Rowland, Jiri Hron, Richard E. Turner, and Zoubin Ghahramani. Gaussian process behaviour in wide deep neural networks. arXiv preprint arXiv:1804.11271, 2018.
Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. Journal of Statistical Mechanics: Theory and Experiment, 2021(12):124003, 2021.
Radford M. Neal. Priors for infinite networks.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' In Bayesian Learning for Neural Networks, pages 29–53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Springer, 1996.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Roman Novak, Lechao Xiao, Yasaman Bahri, Jaehoon Lee, Greg Yang, Jiri Hron, Daniel A Abolafia, Jeffrey Pennington, and Jascha Sohl-dickstein.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Bayesian deep convolutional networks with many channels are gaussian processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' In International Conference on Learning Representations, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A Alemi, Jascha Sohl- Dickstein, and Samuel S Schoenholz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Neural tangents: Fast and easy infinite neural networks in python.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' arXiv preprint arXiv:1912.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='02803, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Matthew Olson, Abraham Wyner, and Richard Berk.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Modern neural networks generalize on small data sets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Advances in Neural Information Processing Systems, 31, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' 25 Ali Rahimi and Benjamin Recht.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Random features for large-scale kernel machines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Advances in neural information processing systems, 20, 2007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Vaishaal Shankar, Alex Fang, Wenshuo Guo, Sara Fridovich-Keil, Jonathan Ragan-Kelley, Ludwig Schmidt, and Benjamin Recht.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Neural kernels without tangents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' In International Conference on Machine Learning, pages 8614–8623.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' PMLR, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Stefano Spigler, Mario Geiger, St´ephane d’Ascoli, Levent Sagun, Giulio Biroli, and Matthieu Wyart.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' A jamming transition from under-to over-parametrization affects generalization in deep learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Journal of Physics A: Mathematical and Theoretical, 52(47):474001, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Tsigler and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Bartlett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Benign overfitting in ridge regression, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Christopher KI Williams.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Computing with infinite networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' In Advances in Neural Infor- mation Processing Systems 9: Proceedings of the 1996 Conference, volume 9, page 295.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' MIT Press, 1997.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Amir Zandieh, Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, and Jinwoo Shin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Scaling neural tangent kernels via sketching and random features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' In M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Ranzato, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Beygelzimer, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Dauphin, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Liang, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Wortman Vaughan, editors, Ad- vances in Neural Information Processing Systems, volume 34, pages 1062–1073.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' Cur- ran Associates, Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=', 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content=' URL https://proceedings.' 
A Proofs

Proof of Lemma 2. We have
\[
\Psi_{k+1} = \Psi_k + S_{k+1}S_{k+1}', \qquad
\tilde{\Psi}_{k+1} = \hat{\Psi}_k + S_{k+1}S_{k+1}', \qquad
\hat{\Psi}_{k+1} = P_{k+1}\tilde{\Psi}_{k+1}P_{k+1}. \tag{15}
\]
By the definition of the spectral projection, we have
\[
\|\tilde{\Psi}_{k+1} - \hat{\Psi}_{k+1}\| \;\le\; \lambda_{\nu+1}(\tilde{\Psi}_{k+1}) \;\le\; \lambda_{\nu+1}(\Psi_{k+1}), \tag{16}
\]
and hence
\[
\|\Psi_{k+1} - \hat{\Psi}_{k+1}\|
\;\le\; \|\Psi_{k+1} - \tilde{\Psi}_{k+1}\| + \|\tilde{\Psi}_{k+1} - \hat{\Psi}_{k+1}\|
\;=\; \|\Psi_k - \hat{\Psi}_k\| + \|\tilde{\Psi}_{k+1} - \hat{\Psi}_{k+1}\|
\;\le\; \|\Psi_k - \hat{\Psi}_k\| + \lambda_{\nu+1}(\Psi_{k+1}), \tag{17}
\]
and the claim follows by induction.
The last claim follows from the simple inequality
\[
\|(\Psi_{k+1} + zI)^{-1} - (\hat{\Psi}_{k+1} + zI)^{-1}\| \;\le\; z^{-2}\,\|\Psi_{k+1} - \hat{\Psi}_{k+1}\|, \tag{18}
\]
which is immediate from the resolvent identity $(A + zI)^{-1} - (B + zI)^{-1} = (A + zI)^{-1}(B - A)(B + zI)^{-1}$ together with the bound $\|(\Psi + zI)^{-1}\| \le z^{-1}$ for positive semidefinite $\Psi$.

B Additional Experimental Results

This section provides additional experiments and findings that may help the community with future research.

First, we dive into more details about our comparison with sklearn. Table 4 shows a more detailed training and prediction time comparison between FABR and sklearn; training and prediction times are averaged over five independent runs. The experiment settings are explained in Section 4.1. We show how, depending on the number of shrinkages |z|, one would start preferring FABR once the number of observations in the dataset reaches n ≈ 5000.
In this case, we used the numpy linear algebra library to decompose FABR's covariance matrix, which turns out to be faster than the scipy counterpart. We share our code in the following repository: https://github.com/tengandreaxu/fabr.

Second, while Figure 4 shows truncated curves of FABR's test accuracy under increasing complexity c, we present here the whole picture: Figure 6 shows how FABR's test accuracy increases with the model's complexity c on subsampled CIFAR-10 datasets of different sizes n, averaged over twenty independent runs. The expanded dataset follows similar patterns. As in Figure 4, one can notice that when the shrinkage is sufficiently high, the double descent disappears and the accuracy increases monotonically in complexity.
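The reason the number of shrinkages |z| matters for the timing comparison is that a single eigendecomposition of the covariance matrix can be reused for every z, so each additional shrinkage costs only a matrix–vector rescaling. A minimal sketch of that idea, assuming standard ridge regression (the name `ridge_many_shrinkages` is ours, not the paper's API):

```python
import numpy as np

def ridge_many_shrinkages(S, y, zs):
    """Solve ridge regression for many shrinkages z from ONE eigendecomposition.

    beta(z) = (S'S/n + z I)^{-1} (S'y/n).  Reusing the eigendecomposition of
    the d x d covariance matrix makes each extra shrinkage O(d^2) instead of
    the O(d^3) cost of a fresh solve.
    """
    n, d = S.shape
    cov = S.T @ S / n                    # d x d covariance matrix
    lam, V = np.linalg.eigh(cov)         # one decomposition, reused for all z
    b = V.T @ (S.T @ y / n)              # rotate the cross-moment once
    return {z: V @ (b / (lam + z)) for z in zs}
```

For a single z this agrees with the direct solve of the normal equations; the savings appear as the list of shrinkages grows.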
Third, the double descent phenomenon naturally appears for both FABR-ν and the mini-batch FABR, but only when ν ≈ n or batch size ≈ n; it disappears when ν ≪ n. This intriguing finding is shown in Figure 5 for FABR-ν, and here, in Figure 7, we report the same curves for the mini-batch FABR.

Figure 6: The figure above shows the full increase of FABR's accuracy with the model's complexity c in the small dataset regime, with panels (a) n = 10, (b) n = 20, (c) n = 40, (d) n = 80, (e) n = 160, (f) n = 320, (g) n = 640, (h) n = 1280. The expanded dataset follows similar patterns.

Figure 7: Similar to Figure 5, the figures above show how FABR's test accuracy increases with the model's complexity c on (a) the subsampled CIFAR-10 dataset (n = 2560) and (b) the full CIFAR-10 dataset (n = 50000). FABR trains using mini-batches with batch size = 2000 in both cases.
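The mini-batch recursion analyzed in Lemma 2 — update Ψ_{k+1} = ˆΨ_k + S_{k+1}S'_{k+1}, then project onto the top ν eigendirections — can be sketched as follows. This is an illustrative reading of the ν-truncation under our own naming (`sketched_covariance` and its arguments are hypothetical), not the paper's implementation:

```python
import numpy as np

def sketched_covariance(batches, nu):
    """Accumulate the covariance over mini-batches, keeping a rank-nu
    spectral approximation hat_Psi after each update (as in Lemma 2).

    For each batch S (shape d x batch_size): form hat_Psi + S S', then keep
    only the top nu eigenpairs.  Lemma 2 bounds the total error by the sum
    of the discarded eigenvalues lambda_{nu+1}(Psi_{k+1}).
    """
    d = batches[0].shape[0]
    hat = np.zeros((d, d))
    for S in batches:
        psi = hat + S @ S.T                # low-rank update with the new batch
        lam, V = np.linalg.eigh(psi)       # eigenvalues in ascending order
        top = V[:, -nu:]                   # top-nu eigendirections
        hat = top @ np.diag(lam[-nu:]) @ top.T
    return hat
```

With ν = d the projection is the identity and the exact covariance is recovered; with ν ≪ d each step stores only a rank-ν summary.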
Notice that we still observe a (shifted) double descent when batch size ≈ n, while the same phenomenon disappears when batch size ≪ n. The test accuracy is averaged over 5 independent runs.

[Figure panels: test accuracy (%) versus complexity c, one curve per shrinkage z = 10^-5, 10^-1, 10^0, 10^1, 10^2, 10^3, 10^4, 10^5.]
Table 4: The table below shows FABR and sklearn's training and prediction time (in seconds) on a synthetic dataset. We vary the dataset's number of features d and the number of shrinkages |z|. We report the average running time and the standard deviation over five independent runs.

        |z| = 5                        |z| = 10                       |z| = 20                       |z| = 50
d       FABR           sklearn         FABR           sklearn         FABR           sklearn         FABR           sklearn
10      7.72s ± 0.36s  0.01s ± 0.00s   6.90s ± 0.77s  0.02s ± 0.00s   7.04s ± 0.67s  0.03s ± 0.00s   7.44s ± 0.57s  0.07s ± 0.01s
100     7.35s ± 0.36s  0.06s ± 0.02s   6.58s ± 0.34s  0.11s ± 0.01s   7.61s ± 1.14s  0.24s ± 0.04s   7.3s ± 0.49s   0.53s ± 0.06s
500     7.37s ± 0.44s  0.…
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='33s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='16s 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='81s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='25s 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='54s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='03s 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='02s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='35s 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='01s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='07s 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='44s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='48s 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='41s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='21s 1000 7.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='62s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='31s 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='58s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='21s 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='38s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='23s 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='06s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='04s 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='51s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='24s 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='04s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='04s 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='69s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='08s 4.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='79s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='36s 2000 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='33s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='42s 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='21s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='03s 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='09s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='73s 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='44s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='05s 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='33s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='24s 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='87s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='07s 8.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='29s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='47s 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='21s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='15s 3000 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='24s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='25s 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='49s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='05s 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='18s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='41s 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='08s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='03s 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='51s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='20s 10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='06s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='02s 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='67s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='41s 25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='67s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='23s 5000 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='64s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='86s 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='36s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='05s 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='01s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='7s 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='74s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='06s 11.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='57s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='81s 21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='31s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='12s 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='54s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='41s 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='18s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='73s 10000 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='49s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='66s 17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='87s ± 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='58s 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='81s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='47s 28.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='32s ± 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='53s 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='61s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='49s 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='72s ± 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='99s 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='55s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='3s 101.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='58s ± 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='66s 25000 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='89s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='21s 27.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='79s ± 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='75s 14.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='50s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='45s 49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='84s ± 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='68s 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='46s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='96s 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='08s ± 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='94s 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='68s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='74s 224.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='31s ± 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='75s 50000 17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='99s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='22s 50.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='51s ± 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='99s 18.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='27s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='37s 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='88s ± 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='45s 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='10s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='37s 176.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='24s ± 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='07s 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='68s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='85s 422.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='95s ± 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='22s 100000 25.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='30s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='39s 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='57s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='25s 26.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='16s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='46s 177.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='54s ± 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='77s 27.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='93s ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='35s 340.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='32s ± 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='74s 29.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='48s ± 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='38s 816.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='25s ± 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} +page_content='35s 30' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/A9FIT4oBgHgl3EQf_Swz/content/2301.11414v1.pdf'} diff --git a/ANFIT4oBgHgl3EQf-iyR/vector_store/index.faiss b/ANFIT4oBgHgl3EQf-iyR/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..2a259006faf581af096c11502176dd308993339f --- /dev/null +++ b/ANFIT4oBgHgl3EQf-iyR/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:352353fa857996658be2c35bf9b7889b82dbe8cea97eae48d9a558c830b43f71 +size 3735597 diff --git a/ANFIT4oBgHgl3EQf-iyR/vector_store/index.pkl b/ANFIT4oBgHgl3EQf-iyR/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..fdfd7d2c1cea3aed5610481f0c93df821c3fd17a --- /dev/null +++ b/ANFIT4oBgHgl3EQf-iyR/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:adfd288be7a32edeb2df4b13df57ca4fbbc7c06cdefe7e951284b91080d24a75 +size 158965 diff --git a/B9E1T4oBgHgl3EQf9gbH/content/tmp_files/2301.03558v1.pdf.txt b/B9E1T4oBgHgl3EQf9gbH/content/tmp_files/2301.03558v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..909bd3f5ccbf152279b6ff954fc3643d46cb5afb --- /dev/null +++ b/B9E1T4oBgHgl3EQf9gbH/content/tmp_files/2301.03558v1.pdf.txt @@ -0,0 +1,798 @@ +Draft version January 10, 2023 +Typeset using LATEX default style in AASTeX631 +Pre-merger sky localization of gravitational waves from binary neutron star mergers using deep +learning +Chayan Chatterjee +1 and Linqing Wen +1 +1Department of Physics, OzGrav-UWA, The University of Western Australia, +35 Stirling Hwy, Crawley, Western Australia 6009, Australia +ABSTRACT +The simultaneous 
observation of gravitational waves (GW) and prompt electromagnetic counterparts +from the merger of two neutron stars can help reveal the properties of extreme matter and gravity +during and immediately after the final plunge. Rapid sky localization of these sources is crucial to +facilitate such multi-messenger observations. Since GWs from binary neutron star (BNS) mergers can +spend up to 10-15 mins in the frequency bands of the detectors at design sensitivity, early warning +alerts and pre-merger sky localization can be achieved for sufficiently bright sources, as demonstrated +in recent studies. In this work, we present pre-merger BNS sky localization results using CBC-SkyNet, +a deep learning model capable of inferring sky location posterior distributions of GW sources at orders +of magnitude faster speeds than standard Markov Chain Monte Carlo methods. We test our model’s +performance on a catalog of simulated injections from Sachdev et al. (2020), recovered at 0-60 secs +before merger, and obtain sky localization areas comparable to those of the rapid localization tool BAYESTAR. +These results show the feasibility of our model for rapid pre-merger sky localization and the possibility +of follow-up observations for precursor emissions from BNS mergers. +1. INTRODUCTION +The first direct detection of GWs from a merging binary black hole (BBH) system was made in 2015 (Abbott et al. +(2016)), which heralded a new era in astronomy. Since then, the LIGO-Virgo-KAGRA (LVK) Collaboration (Aasi et al. +(2015); Acernese et al. (2014); Akutsu et al. (2019)) has made more than 90 detections of GWs from merging compact +binaries (Abbott et al. (2021a)), including two confirmed detections from merging binary neutron stars (BNS) and +two from mergers of neutron star-black hole (NSBH) binaries (Abbott et al. (2021a,b)).
The first detection of GWs +from a BNS merger on August 17th, 2017 (GW170817), along with its associated electromagnetic (EM) counterpart, +revolutionized the field of multi-messenger astronomy (Abbott et al. (2017a)). This event involved the joint detection +of the GW signal by LIGO and Virgo, and the prompt short gamma-ray burst (sGRB) observation by the Fermi-GBM +and INTEGRAL space telescopes (Abbott et al. (2017b,c)) ∼ 2 secs after the merger. This joint observation of GWs +and sGRB, along with the observations of EM emissions at all wavelengths for months after the event, had a tremendous +impact on astronomy, leading to an independent measurement of the Hubble Constant (Abbott et al. (2017d)), new +constraints on the neutron star equation of state (Abbott et al. (2019)), and confirmation of the speculated connection +of sGRBs and kilonovae with BNS mergers (Abbott et al. (2017b)). +While more multi-messenger observations involving GWs are certainly desirable, the typical delays between a GW +detection and the associated GCN alerts, which are of the order of a few minutes (Magee et al. (2021)), make such joint +discoveries extremely challenging. This is because the prompt EM emission lasts for just 1-2 secs after merger, which +means an advance warning system with pre-merger sky localization of such events is essential to enable joint GW and +EM observations by ground and space-based telescopes (Haas et al. (2016); Nissanke et al. (2013); Dyer et al. (2022)). +In recent years, several studies have shown that for a fraction of BNS events, it will be possible to issue alerts +up to 60 secs before merger (Magee et al. (2021); Sachdev et al. (2020); Kovalam et al. (2022); Nitz et al. (2020)). +Such early-warning detections, along with pre-merger sky localization, will facilitate rapid EM follow-up of prompt +emissions.
The observations of optical and ultraviolet emissions prior to mergers are necessary for understanding +r-process nucleosynthesis (Nicholl et al. (2017)) and shock-heated ejecta (Metzger (2017)) post merger. Prompt X-ray +emission can reveal the final state of the remnant (Metzger & Piro (2014); Bovard et al. (2017); Siegel & Ciolfi +(2016)), and early radio observations can reveal pre-merger magnetosphere interactions (Most & Philippov (2020)), +and help test theories connecting BNS mergers with fast radio bursts (Totani (2013); Wang et al. (2016); Dokuchaev +& Eroshenko (2017)). +arXiv:2301.03558v1 [astro-ph.HE] 30 Dec 2022 +In the last three LVK observation runs, five GW low-latency detection pipelines have processed data and sent out +alerts in real-time. These pipelines are GstLAL (Sachdev et al. (2019)), SPIIR (Chu et al. (2022)), PyCBC (Usman +et al. (2016)), MBTA (Aubin et al. (2021)), and cWB (Klimenko et al. (2016)). Of these, the first four pipelines use +the technique of matched filtering (Hooper (2013)) to identify real GW signals in detector data, while cWB uses a +coherent analysis to search for burst signals in detector data streams. In 2020, an end-to-end mock data challenge +(Magee et al. (2021)) was conducted by the GstLAL and SPIIR search pipelines and successfully demonstrated the +feasibility of sending pre-merger alerts (Magee et al. (2021)). This study also estimated the expected rate of BNS mergers +and their sky localization areas with the rapid localization tool BAYESTAR (Singer & Price (2016)), using a four-detector +network consisting of LIGO Hanford (H1), LIGO Livingston (L1), Virgo (V1) and KAGRA at O4 detector sensitivity. +In a previous study, Sachdev et al. (2020) (Sachdev et al. (2020)) showed the early warning performance of the GstLAL +pipeline over a month of simulated data with injections.
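The matched filtering these pipelines rely on is, at its core, a noise-weighted correlation of the detector data against template waveforms, normalized so the output is a signal-to-noise ratio (SNR) time series that peaks when a template aligns with a signal. A minimal toy sketch in numpy (white noise, a single made-up template; purely illustrative, not any pipeline's actual implementation) is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a short windowed sinusoid as the "template"; the "data" is
# white Gaussian noise with a scaled copy of the template injected at t0.
n, t0, amp = 4096, 1500, 5.0
template = np.sin(2 * np.pi * 30 * np.arange(256) / 4096) * np.hanning(256)
data = rng.normal(size=n)
data[t0:t0 + 256] += amp * template

# For white noise the matched filter reduces to a plain correlation,
# normalized by the template norm so the output is an SNR time series.
norm = np.sqrt(np.sum(template**2))
snr = np.correlate(data, template, mode="valid") / norm

peak = int(np.argmax(np.abs(snr)))
print(peak, abs(snr[peak]))  # peak should sit at or very near the injection time t0
```

In practice the pipelines whiten the data by each detector's noise power spectral density and filter against large banks of templates; a candidate is triggered when the peak of |SNR| crosses a threshold.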
Their study suggested that alerts could be issued 10 s (60 s) +before merger for 24 (3) BNS systems over the course of one year of observations of a three-detector Advanced network +operating at design sensitivity. These findings were in broad agreement with the estimates of Cannon et al. (2012) +(Cannon et al. (2012)) on the rates of early warning detections at design sensitivity. Sky localization was also obtained +at various times before merger, using the online rapid sky localization software called BAYESTAR (Singer +& Price (2016)), with the indication that around one event will be both detected before merger and localized within +100 deg2, based on current BNS merger rate estimates.
We compare our sky localization performance with BAYESTAR and find that our localization contours have sky areas comparable to BAYESTAR's, at an inference speed of just a few milliseconds on a P100 GPU.
The paper is organized as follows: we briefly describe our normalizing flow model in Section 2. In Section 3, we describe the details of the simulations used to generate the training and test sets. In Section 4, we describe the architecture of CBC-SkyNet. In Section 5, we discuss results obtained using our network on the dataset from Sachdev et al. (2020). Finally, we discuss future directions of this research in Section 6.
2. METHOD
Our neural network, CBC-SkyNet, is based on a class of deep neural density estimators called normalizing flows, the details of which are provided in Chatterjee et al. (2022). CBC-SkyNet consists of three main components: (i) the normalizing flow, specifically a Masked Autoregressive Flow (MAF) (Kingma et al. (2016); Papamakarios et al. (2017)) network; (ii) a ResNet-34 model (He et al. (2015)) that extracts features from the complex signal-to-noise ratio (SNR) time series obtained by matched filtering GW strains with BNS template waveforms; and (iii) a fully connected neural network whose inputs are the intrinsic parameters (component masses and z-components of spins) of the templates used to generate the SNR time series by matched filtering. The architecture of our model is shown in Figure 1. The features extracted by the ResNet-34 and fully connected networks from the SNR time series (ρ(t)) and best-matched intrinsic parameters (ˆθin), respectively, are combined into a single feature vector and passed as a conditional input to the MAF.
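As context for the method, the density transformation that a normalizing flow performs follows the standard change-of-variables rule (written explicitly in Eq. (1) below). A toy numerical check for a 1-D affine flow, a hedged sketch and not the architecture used in CBC-SkyNet, can be run with plain numpy:

```python
import numpy as np

def base_logpdf(z):
    # log pi(z): standard normal base density
    return -0.5 * z**2 - 0.5 * np.log(2.0 * np.pi)

def flow_logpdf(x, a=2.0, b=1.0):
    # Invertible affine flow x = f(z) = a*z + b, so z = (x - b) / a.
    # Change of variables: p(x) = pi(z) * |det df/dz|^(-1) = pi(z) / |a|
    z = (x - b) / a
    return base_logpdf(z) - np.log(abs(a))

# For an affine flow of a standard normal, p(x) must equal N(x; mean=b, std=|a|).
x = np.linspace(-5.0, 7.0, 1001)
closed_form = -0.5 * ((x - 1.0) / 2.0) ** 2 - 0.5 * np.log(2.0 * np.pi * 4.0)
assert np.allclose(flow_logpdf(x), closed_form)
```

A real MAF stacks many such invertible maps (Eq. (2)), each with a triangular Jacobian so the determinant stays cheap to evaluate.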
The MAF is a normalizing flow with a specific architecture that transforms a simple base distribution (a multivariate Gaussian) z ∼ p(z) into a more complex target distribution x ∼ p(x), which in our case is the posterior distribution of the right ascension (α) and declination (δ) of the GW events given the SNR time series and intrinsic parameters, p(α, δ|ρ(t), ˆθin).
Figure 1. Architecture of our model, CBC-SkyNet. The input data, consisting of the SNR time series ρ(t) and intrinsic parameters ˆθin, are provided to the network through two separate channels: the ResNet-34 channel (only one ResNet block is shown here) and the multi-layered fully connected (Dense) network, respectively. The features extracted from ρ(t) and ˆθin are then combined and provided as conditional input to the main component of CBC-SkyNet - the Masked Autoregressive Flow (MAF) network, denoted by f(z). The MAF draws samples z from a multivariate Gaussian and learns a mapping between z and (α, δ), the right ascension and declination of the GW events.
This mapping is learnt by the flow during training using the method of maximum likelihood, and can be expressed as:
p(x) = π(z) |det(∂f(z)/∂z)|^(-1),    (1)
where z is a random sample drawn from the base distribution π(z), f is the invertible transformation parametrized by the normalizing flow, and x = f(z) is the new random variable obtained after the transformation. The transformation f can be made more flexible and expressive by stacking a chain of transformations together as follows:
x_k = f_k ∘ ... ∘ f_1(z_0)    (2)
This helps the normalizing flow learn arbitrarily complex distributions, provided each of the transformations is invertible and its Jacobian is easy to evaluate. Neural posterior estimation (NPE) (Papamakarios & Murray (2016); Lueckmann et al. (2017); Greenberg et al.
(2019)) techniques, including normalizing flows and conditional variational autoencoders, have been used to estimate posterior distributions of BBH source parameters with high accuracy and speed (Dax et al. (2021); Gabbard et al. (2022); Chua & Vallisneri (2020)). Chatterjee et al. (2022) used a normalizing flow to demonstrate rapid inference of sky location posteriors for all CBC sources for the first time. This work shows the first application of deep learning to pre-merger BNS sky localization and is an extension of the model introduced in Chatterjee et al. (2022).
3. DATA GENERATION
We train six different versions of CBC-SkyNet with distinct training sets (ρi(t), ˆθi_in) for each “negative latency” i = 0, 10, 15, 28, 44, 58 secs before merger. Our training and test set injection parameters were sampled from the publicly available injection dataset used in Sachdev et al. (2020). These ˆθi_in parameters were used to first simulate the BNS waveforms using the SpinTaylorT4 approximant (Sturani et al. (2010)), which were then injected into Gaussian noise with the advanced LIGO power spectral density (PSD) at design sensitivity (Littenberg & Cornish (2015)) to obtain the desired strains. The SNR time series ρi(t) was then obtained by matched filtering the simulated BNS strains with template waveforms.
For generating the training sets, the template waveforms for matched filtering were simulated using the optimal parameters, which have the exact same values as the injection parameters used to generate the detector strains.
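The matched-filtering step that produces the SNR time series can be sketched in miniature. This is a hedged toy with a made-up template injected into white Gaussian noise; the actual pipeline uses PyCBC with detector PSDs and a complex (two-quadrature) SNR, while only one quadrature is shown here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "template": a short chirp-like burst, unit-normalized.
n, t0 = 4096, 2500
template = np.sin(2 * np.pi * 0.05 * np.arange(64) ** 1.2)
template /= np.linalg.norm(template)

# Inject the template at sample t0 into white noise at a chosen amplitude.
strain = rng.normal(size=n)
strain[t0:t0 + template.size] += 10.0 * template

# Matched filter against white noise: correlate the data with the template.
snr = np.correlate(strain, template, mode="valid")

peak = int(np.argmax(np.abs(snr)))  # peak should land at (or very near) t0
```

For colored detector noise, the same correlation is carried out in the frequency domain with the data and template whitened by the PSD.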
The SNR time series obtained by matched filtering the strains with the optimal templates, ρi_opt(t), and the optimal intrinsic parameters, ˆθi,opt_in, were then used as input to our network during the training process. For testing, the template parameters were sampled from the publicly available data of Sachdev et al. (2020). These parameters correspond to the parameters of the maximum likelihood or ‘best-matched’ signal template recovered by the GstLAL matched-filtering search pipeline. The values of ˆθi_in used during testing are therefore close to, but not exactly the same as, ˆθi,opt_in. Similarly, the SNR time series ρi(t) differs slightly from the optimal ρi_opt(t) and has a slightly lower peak amplitude, because of the small mismatch between the injection parameters and the best-matched template waveform parameters.
While our injections have the same parameter distribution as Sachdev et al. (2020), we only choose samples with network SNRs lying between 9 and 40 at each negative latency for this analysis. This is because when the network is trained on samples with a parameter distribution identical to the dataset from Sachdev et al. (2020), our model’s predictions on test samples with network SNRs > 40 tend to become spurious, with α and δ samples drawn from the predicted posterior distribution for these events having values outside their permissible ranges. The reason is that in the dataset from Sachdev et al. (2020), injection samples with SNR > 40 are much fewer in number than samples between SNR 9 and 40, so models trained on data with these parameters see very few training examples with SNR > 40 to learn from. Since normalizing flow models are known to fail at learning out-of-distribution data (Kirichenko et al. (2020)), our model fails to make accurate predictions in the high SNR limit.
Although this can potentially be solved by generating training sets with a uniform SNR distribution over the entire SNR range in Sachdev et al. (2020), which corresponds to a uniform distribution of sources in comoving volume up to a redshift of z = 0.2, this would require generating an unfeasibly large number of training samples for each negative latency. Moreover, events detected with SNR > 40 are expected to be exceptionally rare, even at the design sensitivities of advanced LIGO and Virgo, which is why we choose to ignore them for this study. We therefore generate samples with uniformly distributed SNRs between 9 and 40 for training, while our test samples have the same SNR distribution as Sachdev et al. (2020) between 9 and 40.
4. NETWORK ARCHITECTURE
In this section, we describe the architecture of the different components of our model. The MAF is implemented using a neural network designed to efficiently model conditional probability densities, called the Masked Autoencoder for Density Estimation (MADE) (Germain et al. (2015)). We stack 10 MADE blocks together to make a sufficiently expressive model, with each MADE block consisting of 5 layers with 256 neurons per layer. Between each pair of MADE networks, we use batch normalization to stabilize training. We use a ResNet-34 model (He et al. (2015)), constructed from 2D convolutional and MaxPooling layers with skip connections, to extract features from the SNR time series data. The real and imaginary parts of the SNR time series are stacked vertically to generate a two-dimensional input data stream for each training and test sample. The initial number of kernels for the convolutional layers of the ResNet model is chosen to be 32, and is doubled progressively through the network (He et al. (2015)).
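The key property a MADE block enforces, that each output may depend only on inputs that precede it in a chosen ordering, comes from masking its weight matrices. A minimal single-layer numpy sketch (toy sizes, hypothetical weights, not the 5-layer, 256-unit blocks used here) makes the masking explicit:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4  # toy dimensionality

# Degrees 1..d assigned to the inputs and outputs of one masked linear layer.
deg = np.arange(1, d + 1)

# A connection is allowed only where output degree > input degree (strict mask),
# so output i can never depend on input i or on any later input.
mask = (deg[:, None] > deg[None, :]).astype(float)
W = rng.normal(size=(d, d)) * mask

def masked_layer(x):
    # Linear layer for clarity; MADE applies nonlinearities between masked layers.
    return W @ x

x = rng.normal(size=d)
x_pert = x.copy()
x_pert[2] += 10.0  # perturb the third input (degree 3)

y, y_pert = masked_layer(x), masked_layer(x_pert)
```

Because of the mask, perturbing input 3 can only change outputs with a higher degree; this triangular dependence is what keeps the Jacobian determinant in Eq. (1) cheap for a MAF.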
The final vector of features obtained by the ResNet is combined with the features extracted from the intrinsic parameters ˆθi_in by the fully connected network, which consists of 5 hidden layers with 64 neurons in each hidden layer. The combined feature vector is then passed as a conditional input to the MAF, which learns the mapping between the base and target distributions during training.
5. RESULTS
In this section, we describe the results of the injection runs at each negative latency. Figures 2 (a) to (f) show histograms of the areas of the 90% credible intervals of the predicted posterior distributions from CBC-SkyNet (blue) and BAYESTAR (orange), evaluated on the injections in Sachdev et al. (2020) with network SNRs between 9 and 40. We observe that for most of the test sets, our model predicts smaller median 90% credible interval areas than BAYESTAR. BAYESTAR, however, shows much broader tails at < 100 deg2 than CBC-SkyNet, especially at 0 secs, 10 secs and 15 secs before merger (Figures 2 (a), (b) and (c)). These injections, with 90% areas < 100 deg2, typically have SNR > 25, which shows that although CBC-SkyNet produces smaller 90% contours on average, it fails to match BAYESTAR’s accuracy for high SNR cases. In particular, at 0 secs before merger (Figure 2 (a)), the area of the smallest 90% credible interval from CBC-SkyNet is 13 deg2, whereas for BAYESTAR it is around 1 deg2. The number of injections localized with a 90% credible interval area between 10 - 15 deg2 by CBC-SkyNet is also much lower than for BAYESTAR, although this effect is much less prominent for the other test sets.
Similar results are found for the searched area distributions at 0 secs before merger (Figure 3 (a)), although the searched area distributions from CBC-SkyNet and BAYESTAR for all other cases (Figures 3 (b) - (f)) are very similar.
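The 90% credible-interval area reported in these comparisons can be computed from any gridded sky posterior by ranking pixels by probability. A minimal sketch assuming equal-area pixels (the ∼3.3 deg2 resolution quoted below corresponds to HEALPix Nside = 32; the toy posterior here is invented for illustration):

```python
import numpy as np

def credible_area(pixel_probs, pixel_area_deg2, level=0.9):
    """Smallest sky area whose top-ranked pixels contain `level` of the posterior."""
    p = np.sort(pixel_probs)[::-1]
    p = p / p.sum()                      # normalize the gridded posterior
    n_pix = np.searchsorted(np.cumsum(p), level) + 1
    return n_pix * pixel_area_deg2

nside = 32
npix = 12 * nside**2                     # number of HEALPix pixels
pixel_area = 41252.96 / npix             # full sky in deg2 / number of pixels

# Toy posterior: essentially all probability on 10 pixels of equal weight.
probs = np.full(npix, 1e-12)
probs[:10] = 0.1
area90 = credible_area(probs, pixel_area)  # ~10 pixels, i.e. ~33.6 deg2
```

The searched area is computed the same way, except the ranking is accumulated down to the pixel containing the true source location.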
Figures 4 (a) and (b) show box and whisker plots of the 90% credible interval areas and searched areas obtained by CBC-SkyNet (blue) and BAYESTAR (pink), respectively. We observe that our median 90% areas (white horizontal lines) are smaller than BAYESTAR’s for most of the cases.
A possible explanation for these observations is as follows. BAYESTAR uses an adaptive sampling method (Singer & Price (2016)) to evaluate the densities, in which the posterior probability is first evaluated over Nside,0 = 16 HEALPix grids (Górski et al. (2005)), corresponding to a single sky grid area of 13.4 deg2. The highest probability grids are then adaptively subdivided into smaller grids over which the posterior is evaluated again. This process is repeated seven times, with the highest possible resolution at the end of the iteration being Nside = 2^11, with an area of ∼ 10^-3 deg2 for the smallest grid (Singer & Price (2016)).
This adaptive sampling process, however, takes much longer than a conventional evaluation at a uniform angular resolution in the sky. Since our primary aim is to improve the speed of pre-merger sky localization, we do not adopt the adaptive sampling process. Instead, we draw 5000 α and δ posterior samples each from our model’s predicted posterior and then apply a 2-D Kernel Density Estimate (KDE) over these samples. We then evaluate the KDE over Nside,0 = 32 HEALPix grids, corresponding to a single grid area of ∼ 3.3 deg2, to obtain our final result. Our chosen angular resolution therefore yields sky grids much larger than BAYESTAR’s smallest grids after adaptive refinement, and hence larger 90% contours and searched areas than BAYESTAR for high network SNR cases, where the angular resolution has a more significant impact on the overall result. The sampling process adopted by us may also explain why our median areas are smaller compared to BAYESTAR.
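The KDE-on-grid evaluation described above can be sketched directly in numpy. This is a hedged toy in flat 2-D (α, δ) coordinates with a fixed Gaussian bandwidth, ignoring the spherical geometry and HEALPix pixelization of the real analysis, and using fewer samples and a made-up source position:

```python
import numpy as np

rng = np.random.default_rng(2)

# Posterior samples of (alpha, delta), in radians, around a toy true location.
true_ra, true_dec = 1.0, 0.5
samples = rng.normal([true_ra, true_dec], 0.05, size=(2000, 2))

def kde_on_grid(samples, grid_pts, bw=0.03):
    # Fixed-bandwidth Gaussian KDE evaluated at each grid point.
    d2 = ((grid_pts[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    dens = np.exp(-0.5 * d2 / bw**2).sum(1)
    return dens / dens.sum()             # normalize over the grid

# Coarse evaluation grid around the samples (stands in for the HEALPix grid).
ra = np.linspace(0.7, 1.3, 30)
dec = np.linspace(0.2, 0.8, 30)
grid = np.stack(np.meshgrid(ra, dec), -1).reshape(-1, 2)
post = kde_on_grid(samples, grid)

best = grid[np.argmax(post)]             # grid cell with the highest density
```

Credible areas and searched areas then follow by ranking the grid cells of `post` exactly as in the pixel-ranking sketch earlier in this section.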
During inference, after sampling α and δ from the predicted posterior, we evaluate the KDE with a fixed bandwidth of 0.03, chosen by cross-validation. This may result in a narrower contour estimate, on average, than BAYESTAR’s sampling method.
Figures 5 (a) - (f) show P-P plots for a subset of injections at 0 secs, 10 secs, 15 secs, 28 secs, 44 secs and 58 secs before merger, respectively. To obtain the P-P plots, we compute the percentile scores of the true right ascension and declination parameters within their marginalized posteriors and obtain the cumulative distribution of these scores. For accurate posteriors, the distribution of the percentile scores should be uniform, which means the cumulative distribution should be diagonal, as is evident from the figures. We also perform Kolmogorov-Smirnov (KS) tests for each dataset to test the hypothesis that the percentile values for each set are uniformly distributed. The p-values from the KS tests, shown in the legends, are > 0.05 for each parameter, which means that at a 95% level of significance we cannot reject the null hypothesis that the percentile values are uniform, and hence our posteriors are consistent with the expected distribution.
Because of the low dimensionality of our input data, training our network takes less than an hour on an NVIDIA Tesla P100 GPU. Overall, the sampling and evaluation steps during inference take a few milliseconds for each injection on the same computational resource. Sample generation and matched filtering were implemented with a modified version of the code developed by Gebhard et al. (2019) that uses the PyCBC software (Nitz et al. (2021)). CBC-SkyNet was written in TensorFlow 2.4 (Abadi et al. (2016)) using the Python language.
6. DISCUSSION
In summary, we have reported the first deep learning based approach for pre-merger sky localization of BNS sources, capable of orders of magnitude faster inference than Bayesian methods.
Currently our model’s accuracy is similar to BAYESTAR’s on injections with network SNR between 9 and 40 at design sensitivity. The next step in this research would be to perform a similar analysis on real detector data, which has non-stationary noise and glitches that may corrupt the signal and affect detection and sky localization.
Figure 2. Top panel from (a) to (c): Histograms of the areas of the 90% credible intervals from CBC-SkyNet (blue) and BAYESTAR (orange) for 0 secs, 10 secs and 15 secs before merger are shown. Bottom panel from (d) to (f): Similar histograms for 28 secs, 44 secs and 58 secs before merger are shown.
A possible way to improve our model’s performance at high SNRs (> 25) would be to use a finer angular resolution in the sky for evaluating the posteriors. We can also train different versions of the model for different luminosity distance (and hence SNR) ranges. Our long-term goal is to construct an independent machine learning pipeline for pre-merger detection and localization of GW sources. The faster inference speed of machine learning models would be crucial for electromagnetic follow-up and observation of prompt and precursor emissions from compact binary mergers. This method is also scalable and can be applied to predicting the luminosity distance of the sources pre-merger, which would help obtain a volumetric localization of the source and potentially identify host galaxies of BNS mergers.
The authors would like to thank Dr. Foivos Diakogiannis, Kevin Vinsen, Prof. Amitava Datta and Damon Beveridge for useful comments on this work. This research was supported in part by the Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav, through Project No. CE170100004). This research was undertaken with the support of computational resources from the Pople high-performance computing cluster of the Faculty of Science at the University of Western Australia.
This work used the computer resources of the OzStar computer cluster at Swinburne University of Technology. The OzSTAR program receives funding in part from the Astronomy National Collaborative Research Infrastructure Strategy (NCRIS) allocation provided by the Australian Government. This research used data obtained from the Gravitational Wave Open Science Center (https://www.gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. LIGO is funded by the U.S. National Science Foundation. Virgo is funded by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and Hungarian institutes. This material is based upon work supported by NSF’s LIGO Laboratory, which is a major facility fully funded by the National Science Foundation.
REFERENCES
Aasi, J., Abbott, B. P., Abbott, R., et al. 2015, Classical and Quantum Gravity, 32, 074001, doi: 10.1088/0264-9381/32/7/074001
Abadi, M., Agarwal, A., Barham, P., et al. 2016, TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
Figure 3.
Top panel from (a) to (c): Histograms of the searched areas from CBC-SkyNet (blue) and BAYESTAR (orange) for 0 secs, 10 secs and 15 secs before merger are shown. Bottom panel from (d) to (f): Similar histograms for 28 secs, 44 secs and 58 secs before merger are shown.
Figure 4. (a) Box and whiskers plots showing the areas of the 90% credible intervals from CBC-SkyNet (blue) and BAYESTAR (pink) at 0 secs, 10 secs, 15 secs, 28 secs, 44 secs and 58 secs before merger. The boxes encompass 95% of the events and the whiskers extend to the rest. The white lines within the boxes represent the median values of the respective data sets. (b) A similar box and whiskers plot comparing searched areas from CBC-SkyNet (blue) and BAYESTAR (pink) at 0 secs, 10 secs, 15 secs, 28 secs, 44 secs and 58 secs before merger.
Figure 5. (a) to (f): P-P plots for a subset of the total number of test samples at 0 secs, 10 secs, 15 secs, 28 secs, 44 secs and 58 secs before merger.
We compute the percentile values (denoted as p) of the true right ascension and declination parameters within their 1D posteriors. The figure shows the cumulative distribution function of the percentile values, which should lie close to the diagonal if the network is performing properly. The p-value of the KS test for each run is shown in the legend.
Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2016, Phys. Rev. Lett., 116, 061102, doi: 10.1103/PhysRevLett.116.061102
—. 2017a, Phys. Rev. Lett., 119, 161101, doi: 10.1103/PhysRevLett.119.161101
—. 2017b, The Astrophysical Journal Letters, 848, L12, doi: 10.3847/2041-8213/aa91c9
—. 2017c, The Astrophysical Journal Letters, 848, L13, doi: 10.3847/2041-8213/aa920c
—. 2017d, Nature, 551, 85, doi: 10.1038/nature24471
—. 2019, Phys. Rev. X, 9, 011001, doi: 10.1103/PhysRevX.9.011001
Abbott, R., Abbott, T. D., Acernese, F., et al. 2021a, GWTC-3: Compact Binary Coalescences Observed by LIGO and Virgo During the Second Part of the Third Observing Run
Abbott, R., Abbott, T. D., Abraham, S., et al. 2021b, The Astrophysical Journal Letters, 915, L5, doi: 10.3847/2041-8213/ac082e
Acernese, F., Agathos, M., Agatsuma, K., et al.
2014, Classical and Quantum Gravity, 32, 024001, doi: 10.1088/0264-9381/32/2/024001
Akutsu, T., Ando, M., Arai, K., et al. 2019, Nature Astronomy, 3, 35, doi: 10.1038/s41550-018-0658-y
Aubin, F., Brighenti, F., Chierici, R., et al. 2021, Classical and Quantum Gravity, 38, 095004, doi: 10.1088/1361-6382/abe913
Bovard, L., Martin, D., Guercilena, F., et al. 2017, 96. https://www.osti.gov/pages/biblio/1415425
Cannon, K., Cariou, R., Chapman, A., et al. 2012, The Astrophysical Journal, 748, 136, doi: 10.1088/0004-637X/748/2/136
Chatterjee, C., Wen, L., Beveridge, D., Diakogiannis, F., & Vinsen, K. 2022, Rapid localization of gravitational wave sources from compact binary coalescences using deep learning
Chu, Q., Kovalam, M., Wen, L., et al. 2022, PhRvD, 105, 024023, doi: 10.1103/PhysRevD.105.024023
Chua, A. J. K., & Vallisneri, M. 2020, Phys. Rev. Lett., 124, 041102, doi: 10.1103/PhysRevLett.124.041102
Dax, M., Green, S. R., Gair, J., et al. 2021, Phys. Rev. Lett., 127, 241103, doi: 10.1103/PhysRevLett.127.241103
Dokuchaev, V. I., & Eroshenko, Y. N. 2017
Dyer, M. J., Ackley, K., Lyman, J., et al. 2022, in Proc. SPIE, Vol. 12182, The Gravitational-wave Optical Transient Observer (GOTO), 121821Y, doi: 10.1117/12.2629369
Gabbard, H., Messenger, C., Heng, I. S., Tonolini, F., & Murray-Smith, R. 2022, Nature Physics, 18, 112, doi: 10.1038/s41567-021-01425-7
Gebhard, T. D., Kilbertus, N., Harry, I., & Schölkopf, B. 2019, Phys. Rev. D, 100, 063015, doi: 10.1103/PhysRevD.100.063015
Germain, M., Gregor, K., Murray, I., & Larochelle, H. 2015
Greenberg, D., Nonnenmacher, M., & Macke, J. 2019, in Proceedings of Machine Learning Research, Vol. 97, Proceedings of the 36th International Conference on Machine Learning, ed. K. Chaudhuri & R. Salakhutdinov (PMLR), 2404-2414. https://proceedings.mlr.press/v97/greenberg19a.html
Górski, K. M., Hivon, E., Banday, A. J., et al.
2005, The Astrophysical Journal, 622, 759, doi: 10.1086/427976
Haas, R., Ott, C. D., Szilagyi, B., et al. 2016, Phys. Rev. D, 93, 124062, doi: 10.1103/PhysRevD.93.124062
He, K., Zhang, X., Ren, S., & Sun, J. 2015, Deep Residual Learning for Image Recognition
Hooper, S. 2013, PhD thesis
Kingma, D. P., Salimans, T., Jozefowicz, R., et al. 2016, Improving Variational Inference with Inverse Autoregressive Flow
Kirichenko, P., Izmailov, P., & Wilson, A. G. 2020, Why Normalizing Flows Fail to Detect Out-of-Distribution Data
Klimenko, S., Vedovato, G., Drago, M., et al. 2016, Phys. Rev. D, 93, 042004, doi: 10.1103/PhysRevD.93.042004
Kovalam, M., Patwary, M. A. K., Sreekumar, A. K., et al. 2022, The Astrophysical Journal Letters, 927, L9, doi: 10.3847/2041-8213/ac5687
Littenberg, T. B., & Cornish, N. J. 2015, Phys. Rev. D, 91, 084034, doi: 10.1103/PhysRevD.91.084034
Lueckmann, J.-M., Goncalves, P. J., Bassetto, G., et al. 2017, in Advances in Neural Information Processing Systems, ed. I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett, Vol. 30 (Curran Associates, Inc.). https://proceedings.neurips.cc/paper/2017/file/addfa9b7e234254d26e9c7f2af1005cb-Paper.pdf
Magee, R., Chatterjee, D., Singer, L. P., et al. 2021, The Astrophysical Journal Letters, 910, L21, doi: 10.3847/2041-8213/abed54
Metzger, B. D. 2017, Welcome to the Multi-Messenger Era! Lessons from a Neutron Star Merger and the Landscape Ahead
Metzger, B. D., & Piro, A. L. 2014, Monthly Notices of the Royal Astronomical Society, 439, 3916, doi: 10.1093/mnras/stu247
Most, E. R., & Philippov, A. A. 2020, The Astrophysical Journal Letters, 893, L6, doi: 10.3847/2041-8213/ab8196
Nicholl, M., Berger, E., Kasen, D., et al. 2017, The Astrophysical Journal Letters, 848, L18, doi: 10.3847/2041-8213/aa9029
Nissanke, S., Kasliwal, M., & Georgieva, A.
2013, The Astrophysical Journal, 767, 124, doi: 10.1088/0004-637X/767/2/124
Nitz, A., Harry, I., Brown, D., et al. 2021, gwastro/pycbc: 1.18.0 release of PyCBC, v1.18.0, Zenodo, doi: 10.5281/zenodo.4556907
Nitz, A. H., Schäfer, M., & Canton, T. D. 2020, The Astrophysical Journal Letters, 902, L29, doi: 10.3847/2041-8213/abbc10
Papamakarios, G., & Murray, I. 2016, Fast ϵ-free Inference of Simulation Models with Bayesian Conditional Density Estimation
Papamakarios, G., Pavlakou, T., & Murray, I. 2017, in Advances in Neural Information Processing Systems, ed. I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett, Vol. 30 (Curran Associates, Inc.). https://proceedings.neurips.cc/paper/2017/file/6c1da886822c67822bcf3679d04369fa-Paper.pdf
Rezende, D. J., & Mohamed, S. 2015, Variational Inference with Normalizing Flows
Sachdev, S., Caudill, S., Fong, H., et al. 2019, The GstLAL Search Analysis Methods for Compact Binary Mergers in Advanced LIGO’s Second and Advanced Virgo’s First Observing Runs
Sachdev, S., Magee, R., Hanna, C., et al. 2020, The Astrophysical Journal Letters, 905, L25, doi: 10.3847/2041-8213/abc753
Siegel, D. M., & Ciolfi, R. 2016, The Astrophysical Journal, 819, 14, doi: 10.3847/0004-637X/819/1/14
Singer, L. P., & Price, L. R. 2016, Phys. Rev. D, 93, 024013, doi: 10.1103/PhysRevD.93.024013
Sturani, R., Fischetti, S., Cadonati, L., et al. 2010, Phenomenological gravitational waveforms from spinning coalescing binaries
Totani, T. 2013, Publications of the Astronomical Society of Japan, 65, L12, doi: 10.1093/pasj/65.5.L12
Usman, S. A., Nitz, A. H., Harry, I. W., et al. 2016, Classical and Quantum Gravity, 33, 215004, doi: 10.1088/0264-9381/33/21/215004
Wang, J.-S., Yang, Y.-P., Wu, X.-F., Dai, Z.-G., & Wang, F.-Y.
2016, The Astrophysical Journal Letters, 822, L7, +doi: 10.3847/2041-8205/822/1/L7 + diff --git a/B9E1T4oBgHgl3EQf9gbH/content/tmp_files/load_file.txt b/B9E1T4oBgHgl3EQf9gbH/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..29035a7ffa7b4ecaae470c8208e50aadd6977a01 --- /dev/null +++ b/B9E1T4oBgHgl3EQf9gbH/content/tmp_files/load_file.txt @@ -0,0 +1,783 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf,len=782 +page_content='Draft version January 10,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2023 Typeset using LATEX default style in AASTeX631 Pre-merger sky localization of gravitational waves from binary neutron star mergers using deep learning Chayan Chatterjee 1 and Linqing Wen 1 1Department of Physics,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' OzGrav-UWA,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' The University of Western Australia,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 35 Stirling Hwy,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Crawley,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Western Australia 6009,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Australia ABSTRACT The simultaneous observation of gravitational waves (GW) and prompt electromagnetic counterparts from the merger of two neutron stars can help reveal the properties 
of extreme matter and gravity during and immediately after the final plunge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Rapid sky localization of these sources is crucial to facilitate such multi-messenger observations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Since GWs from binary neutron star (BNS) mergers can spend up to 10-15 mins in the frequency bands of the detectors at design sensitivity, early warning alerts and pre-merger sky localization can be achieved for sufficiently bright sources, as demonstrated in recent studies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' In this work, we present pre-merger BNS sky localization results using CBC-SkyNet, a deep learning model capable of inferring sky location posterior distributions of GW sources at orders of magnitude faster speeds than standard Markov Chain Monte Carlo methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' We test our model’s performance on a catalog of simulated injections from Sachdev et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' (2020), recovered at 0-60 secs before merger, and obtain comparable sky localization areas to the rapid localization tool BAYESTAR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' These results show the feasibility of our model for rapid pre-merger sky localization and the possibility of follow-up observations for precursor emissions from BNS mergers.' 
1. INTRODUCTION
The first direct detection of GWs from a merging binary black hole (BBH) system was made in 2015 (Abbott et al. (2016)), which heralded a new era in astronomy. Since then, the LIGO-Virgo-KAGRA (LVK) Collaboration (Aasi et al. (2015); Acernese et al. (2014); Akutsu et al. (2019)) has made more than 90 detections of GWs from merging compact binaries (Abbott et al. (2021a)), including two confirmed detections from merging binary neutron stars (BNS) and two from mergers of neutron star-black hole (NSBH) binaries (Abbott et al. (2021a,b)).
The first detection of GWs from a BNS merger on August 17th, 2017 (GW170817), along with its associated electromagnetic (EM) counterpart, revolutionized the field of multi-messenger astronomy (Abbott et al. (2017a)). This event involved the joint detection of the GW signal by LIGO and Virgo, and the prompt short gamma-ray burst (sGRB) observation by the Fermi-GBM and INTEGRAL space telescopes (Abbott et al. (2017b,c)) ∼ 2 secs after the merger. This joint observation of GWs and sGRB, along with the observations of EM emissions at all wavelengths for months after the event, had a tremendous impact on astronomy, leading to an independent measurement of the Hubble Constant (Abbott et al. (2017d)), new constraints on the neutron star equation of state (Abbott et al. (2019)), and confirmation of the speculated connection of sGRBs and kilonovae with BNS mergers (Abbott et al. (2017b)).
While more multi-messenger observations involving GWs are certainly desirable, the typical delay between a GW detection and the associated GCN alerts, which is of the order of a few minutes (Magee et al. (2021)), makes such joint discoveries extremely challenging. This is because the prompt EM emission lasts for just 1-2 secs after merger, which means an advance warning system with pre-merger sky localization of such events is essential to enable joint GW and EM observations by ground and space-based telescopes (Haas et al. (2016); Nissanke et al. (2013); Dyer et al. (2022)).
In recent years, several studies have shown that for a fraction of BNS events, it will be possible to issue alerts up to 60 secs before merger (Magee et al. (2021); Sachdev et al. (2020); Kovalam et al. (2022); Nitz et al. (2020)). Such early-warning detections, along with pre-merger sky localizations, will facilitate rapid EM follow-up of prompt emissions. The observations of optical and ultraviolet emissions prior to merger are necessary for understanding r-process nucleosynthesis (Nicholl et al. (2017)) and shock-heated ejecta (Metzger (2017)) post merger. Prompt X-ray emission can reveal the final state of the remnant (Metzger & Piro (2014); Bovard et al. (2017); Siegel & Ciolfi (2016)), and early radio observations can reveal pre-merger magnetosphere interactions (Most & Philippov (2020)) and help test theories connecting BNS mergers with fast radio bursts (Totani (2013); Wang et al. (2016);
Dokuchaev & Eroshenko (2017)).
In the last three LVK observation runs, five GW low-latency detection pipelines have processed data and sent out alerts in real time. These pipelines are GstLAL (Sachdev et al. (2019)), SPIIR (Chu et al. (2022)), PyCBC (Usman et al. (2016)), MBTA (Aubin et al. (2021)), and cWB (Klimenko et al. (2016)). Of these, the first four pipelines use the technique of matched filtering (Hooper (2013)) to identify real GW signals in detector data, while cWB uses a coherent analysis to search for burst signals in detector data streams. In 2020, an end-to-end mock data challenge (Magee et al. (2021)) was conducted by the GstLAL and SPIIR search pipelines and successfully demonstrated their feasibility to send pre-merger alerts. This study also estimated the expected rate of BNS mergers and their sky localization areas using the rapid localization tool BAYESTAR (Singer & Price (2016)), with a four-detector network consisting of LIGO Hanford (H1), LIGO Livingston (L1), Virgo (V1) and KAGRA at O4 detector sensitivity.
In a previous study, Sachdev et al. (2020) showed the early warning performance of the GstLAL pipeline over a month of simulated data with injections. Their study suggested that alerts could be issued 10 secs (60 secs) before merger for 24 (3) BNS systems over the course of one year of observations of a three-detector Advanced network operating at design sensitivity. These findings were in broad agreement with the estimates of Cannon et al. (2012) on the rates of early warning detections at design sensitivity. Sky localization was also obtained at various times before merger, using the online rapid sky localization software BAYESTAR (Singer & Price (2016)), with the indication that around one event will be both detected before merger and localized within 100 deg², based on current BNS merger rate estimates. The online search pipelines, however, experience additional latencies owing to data transfer, calibration and filtering processes, which contribute up to 7-8 secs of delay in the publication of early warning alerts (Kovalam et al. (2022); Sachdev et al. (2020)). For sky localization, BAYESTAR typically takes 8 secs to produce skymaps, which is expected to reduce to 1-2 secs in the third observation run.
This latency can, however, be potentially reduced further by the application of machine learning techniques, as demonstrated in Chatterjee et al. (2022).
In this Letter, we report pre-merger sky localization using deep learning for the first time. We obtain our results using CBC-SkyNet (Compact Binary Coalescence - Sky Localization Neural Network), a normalizing flow model (Rezende & Mohamed (2015); Kingma et al. (2016); Papamakarios et al. (2017)) for sky localization of all types of compact binary coalescence sources (Chatterjee et al. (2022)). We test our model on simulated BNS events from the injection catalog in Sachdev et al. (2020), which consists of signals detected at 0 to 60 secs before merger using the GstLAL search pipeline. We compare our sky localization performance with BAYESTAR and find that our localization contours have comparable sky areas to those of BAYESTAR, at an inference speed of just a few milliseconds on a P100 GPU.
The paper is divided as follows: we briefly describe our normalizing flow model in Section 2. In Section 3, we describe the details of the simulations used to generate the training and test sets. In Section 4, we describe the architecture of CBC-SkyNet. In Section 5, we discuss results obtained using our network on the dataset from Sachdev et al. (2020). Finally, we discuss future directions of this research in Section 6.
2. METHOD
Our neural network, CBC-SkyNet, is based on a class of deep neural density estimators called normalizing flows, the details of which are provided in Chatterjee et al. (2022). CBC-SkyNet consists of three main components: (i) the normalizing flow, specifically a Masked Autoregressive Flow (MAF) (Kingma et al. (2016); Papamakarios et al. (2017)) network, (ii) a ResNet-34 model (He et al.
(2015)) that extracts features from the complex signal-to-noise ratio (SNR) time series data, which is obtained by matched filtering GW strains with BNS template waveforms, and (iii) a fully connected neural network whose inputs are the intrinsic parameters (component masses and z-components of spins) of the templates used to generate the SNR time series by matched filtering. The architecture of our model is shown in Figure 1.
The features extracted by the ResNet-34 and fully connected networks from the SNR time series (ρ(t)) and best-matched intrinsic parameters (ˆθin), respectively, are combined into a single feature vector and passed as a conditional input to the MAF. The MAF is a normalizing flow with a specific architecture that transforms a simple base distribution (a multivariate Gaussian) z ∼ p(z) into a more complex target distribution x ∼ p(x), which in our case is the posterior distribution of the right ascension (α) and declination (δ) angles of the GW events given the SNR time series and intrinsic parameters, p(α, δ|ρ(t), ˆθin).

Figure 1. Architecture of our model, CBC-SkyNet. The input data, consisting of the SNR time series ρ(t) and the intrinsic parameters ˆθin, are provided to the network through two separate channels: the ResNet-34 channel (only one ResNet block is shown here) and the multi-layered fully connected (Dense) network, respectively. The features extracted from ρ(t) and ˆθin are then combined and provided as conditional input to the main component of CBC-SkyNet, the Masked Autoregressive Flow (MAF) network, denoted by f(z). The MAF draws samples z from a multivariate Gaussian and learns a mapping between z and (α, δ), the right ascension and declination angles of the GW events.

This mapping is learnt by the flow during training using the method of maximum likelihood, and can be expressed as

p(x) = π(z) |det(∂f(z)/∂z)|^(-1),   (1)

where z is a random sample drawn from the base distribution π(z), f is the invertible transformation parametrized by the normalizing flow, and x = f(z) is the new random variable obtained after the transformation. The transformation f can be made more flexible and expressive by stacking a chain of transformations together:

xk = fk ◦ ... ◦ f1(z0),   (2)

which helps the normalizing flow learn arbitrarily complex distributions, provided each of the transformations is invertible and the Jacobians are easy to evaluate. Neural posterior estimation (NPE) (Papamakarios & Murray (2016); Lueckmann et al. (2017); Greenberg et al. (2019)) techniques, including normalizing flows and conditional variational autoencoders, have been used to estimate posterior distributions of BBH source parameters with high accuracy and speed (Dax et al. (2021); Gabbard et al.
(2022); Chua & Vallisneri (2020)). Chatterjee et al. (2022) used a normalizing flow to demonstrate rapid inference of sky location posteriors for all CBC sources for the first time. This work shows the first application of deep learning for pre-merger BNS sky localization and is an extension of the model introduced in Chatterjee et al. (2022).
3. DATA GENERATION
We train six different versions of CBC-SkyNet with distinct training sets (ρi(t), ˆθi_in) for each "negative latency" i = 0, 10, 14, 28, 44, 58 secs before merger.
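Before detailing the training data, the change-of-variables rule of Eqs. (1) and (2) can be made concrete with a minimal pure-Python sketch. This is our own illustration, not the paper's MAF: a chain of two hypothetical one-dimensional affine maps acting on a standard-normal base density, with `flow_logpdf` an assumed helper name.

```python
import math

# Sketch of Eqs. (1)-(2): density of x = f2(f1(z)) for affine maps
# f_k(z) = a_k * z + b_k, with a standard-normal base density pi(z).

def base_logpdf(z):
    """Standard normal log-density pi(z)."""
    return -0.5 * z * z - 0.5 * math.log(2.0 * math.pi)

def flow_logpdf(x, transforms):
    """log p(x) = log pi(z) - sum_k log|det df_k/dz|, with z = f^{-1}(x)."""
    log_det = 0.0
    z = x
    for a, b in reversed(transforms):   # invert the chain: last map first
        z = (z - b) / a                 # f_k^{-1}
        log_det += math.log(abs(a))     # |df_k/dz| for an affine map is |a|
    return base_logpdf(z) - log_det

transforms = [(2.0, 1.0), (0.5, -3.0)]  # (a1, b1), (a2, b2)

# The composed map is x = 0.5*(2z + 1) - 3 = z - 2.5, so p(x) matches a
# unit-variance normal density centred at -2.5.
x = 0.7
expected = -0.5 * (x + 2.5) ** 2 - 0.5 * math.log(2.0 * math.pi)
assert abs(flow_logpdf(x, transforms) - expected) < 1e-12
```

Because the Jacobian of a composition is the product of the individual Jacobians, the log-determinants simply add across the chain, which is what makes stacked transformations like Eq. (2) cheap to evaluate.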
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Our training and test set injections parameters were sampled from the publicly available injection dataset used in Sachdev et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' (2020) Sachdev et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' These ˆθi in parameters were used to first simulate the BNS waveforms using the SpinTaylorT4 approximant (Sturani et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' (2010)) and then injected into Gaussian noise with advanced LIGO power spectral density (PSD) at design sensitivity (Littenberg & Cornish (2015)) to obtain the desired strains.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' The SNR time series, ρi(t), was then obtained by matched filtering the simulated BNS strains with template waveforms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' For generating the training sets, the template waveforms for matched filtering were simulated using the optimal parameters, which have the exact same values as the injection parameters used to generate the detector strains.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' The SNR time series obtained by matched filtering the strains with the optimal templates, ρi opt(t), and the optimal intrinsic parameters, ˆθi,opt in , were then used as input to our network during the training process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' For testing, the template parameters were sampled from publicly available data by Sachdev et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' (2020) (Sachdev et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' (2020)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' These parameters correspond to the parameters of the maximum likelihood or ‘best-matched’ signal template recovered by the GstLAL matched-filtering search pipeline.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Therefore the values of ˆθi in used during testing are close to, but is not the exact same as ˆθi,opt in .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Similarly, the SNR time series ρi(t) is not exactly similar to the optimal ρi opt(t), and has a slightly lower peak amplitude than the corresponding ρi opt(t) peak because of the small mismatch between the injection parameters and the best-matched template waveform parameters.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' While our injections have the same parameter distribution as (Sachdev et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' (2020)), we only choose samples with network SNRs lying between 9 and 40, at each negative latency, for this analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' This is because when the network is trained on samples with identical parameter distributions as the dataset from (Sachdev et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' (2020)), our model’s predictions on test samples with network SNRs > 40 tend to become spurious, with α and δ samples drawn from the predicted posterior distribution for these events having values outside their permissible ranges.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' This is because in the dataset from (Sachdev et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' (2020)), injection samples with SNR > 40 are much fewer in number compared to samples between SNR 9 and 40.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' This means for models trained on data with parameters from (Sachdev et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' (2020)), there exists very few training examples for SNR > 40 to learn from.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Since Normalizing Flow models are known to fail at learning out-of-distribution data, as described in (Kirichenko et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' (2020)), our model fails to make accurate predictions at the high SNR limit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Although this can potentially be solved by generating training sets with uniform SNR distribution over the entire existing SNR range in (Sachdev et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' (2020)), which corresponds to a uniform distribution of sources in comoving volume up to a redshift of z=0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='2, this would be require generating an unfeasibly large number of training samples for each negative latency.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Also, such events detected with SNR > 40 are expected to be exceptionally rare, even at design sensitivities of advanced LIGO and Virgo, which is why we choose to ignore them for this study.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' We therefore generate samples with uniformly distributed SNRs between 9 and 40 for training, while our test samples have the same SNR distribution as (Sachdev et al.' 
4. NETWORK ARCHITECTURE
In this section, we describe the architecture of the different components of our model. The MAF is implemented using a neural network designed to efficiently model conditional probability densities, the Masked Autoencoder for Density Estimation (MADE) (Germain et al. (2015)). We stack 10 MADE blocks together to make a sufficiently expressive model, with each MADE block consisting of 5 layers of 256 neurons each. Between each pair of MADE networks, we use batch normalization to stabilize training. We use a ResNet-34 model (He et al. (2015)), constructed from 2D convolutional and MaxPooling layers with skip connections, to extract features from the SNR time series data. The real and imaginary parts of the SNR time series are stacked vertically to generate a two-dimensional input data stream for each training and test sample. The initial number of kernels for the convolutional layers of the ResNet model is chosen to be 32, and is doubled progressively through the network. The final vector of features obtained by the ResNet is combined with the features extracted from the intrinsic parameters, θ̂^i_in, by a fully-connected network consisting of 5 hidden layers with 64 neurons in each hidden layer. The combined feature vector is then passed as a conditional input to the MAF, which learns the mapping between the base and target distributions during training.
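The invertible-transform machinery that the MAF stacks can be sketched with a single affine autoregressive layer. The code below is only a toy illustration, not our implementation: the fixed random linear maps stand in for the MADE sub-networks, and the two dimensions play the role of α and δ conditioned on a feature vector.

```python
import numpy as np

rng = np.random.default_rng(0)
C = 8  # size of the conditioning feature vector (hypothetical)

# Stand-ins for the MADE sub-networks: shift m_i and log-scale s_i per
# coordinate. m0, s0 see only the condition c; m1, s1 see c and x0. This
# autoregressive structure is what keeps the Jacobian triangular.
w_m0, w_s0 = rng.normal(size=C), rng.normal(size=C) * 0.1
w_m1, w_s1 = rng.normal(size=C + 1), rng.normal(size=C + 1) * 0.1

def forward(u, c):
    """Map base samples u ~ N(0, I) to target samples x, given condition c."""
    x0 = u[0] * np.exp(w_s0 @ c) + w_m0 @ c
    h = np.concatenate([c, [x0]])
    x1 = u[1] * np.exp(w_s1 @ h) + w_m1 @ h
    return np.array([x0, x1])

def inverse(x, c):
    """Exact inverse, used when evaluating densities during training."""
    u0 = (x[0] - w_m0 @ c) * np.exp(-(w_s0 @ c))
    h = np.concatenate([c, [x[0]]])
    u1 = (x[1] - w_m1 @ h) * np.exp(-(w_s1 @ h))
    return np.array([u0, u1])

c = rng.normal(size=C)
u = rng.normal(size=2)
x = forward(u, c)
assert np.allclose(inverse(x, c), u)  # the flow is exactly invertible
```

Stacking many such layers (with permutations or masks between them, and batch normalization as above) yields the expressive conditional density model.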
5. RESULTS
In this section, we describe the results of the injection runs at each negative latency. Figures 2 (a) to (f) show histograms of the areas of the 90% credible intervals of the predicted posterior distributions from CBC-SkyNet (blue) and BAYESTAR (orange), evaluated on the injections in Sachdev et al. (2020) with network SNRs between 9 and 40. We observe that for most of the test sets, our model predicts smaller median 90% credible interval areas than BAYESTAR. Also, BAYESTAR shows much broader tails at < 100 deg², compared to CBC-SkyNet, especially at 0 secs, 10 secs and 15 secs before merger (Figures 2 (a), (b) and (c)). The injections with 90% areas < 100 deg² typically have SNR > 25, which shows that although CBC-SkyNet produces smaller 90% contours on average, it fails to match BAYESTAR's accuracy for high-SNR cases. In particular, at 0 secs before merger (Figure 2 (a)), the area of the smallest 90% credible interval from CBC-SkyNet is 13 deg², whereas for BAYESTAR it is around 1 deg². The number of injections localized with a 90% credible interval area between 10 and 15 deg² by CBC-SkyNet is also much lower than for BAYESTAR, although this effect is much less prominent for the other test sets. Similar results are found for the searched-area distributions at 0 secs before merger (Figure 3 (a)), although the distributions of searched areas for all other cases (Figures 3 (b) to (f)) from CBC-SkyNet and BAYESTAR are very similar. Figures 4 (a) and (b) show box-and-whisker plots for the 90% credible interval areas and searched areas obtained by CBC-SkyNet (blue) and BAYESTAR (pink), respectively. We observe that our median 90% areas (white horizontal lines) are smaller than BAYESTAR's for most of the cases.
A possible explanation for these observations is as follows. BAYESTAR uses an adaptive sampling method (Singer & Price (2016)) to evaluate the densities, in which the posterior probability is first evaluated over Nside,0 = 16 HEALPix grids (Górski et al. (2005)), corresponding to a single sky grid area of 13.4 deg². The highest-probability grids are then adaptively subdivided into smaller grids over which the posterior is evaluated again. This process is repeated seven times, with the highest possible resolution at the end of the iteration being Nside = 2^11, with an area of ∼10⁻³ deg² for the smallest grid (Singer & Price (2016)). This adaptive sampling process, however, takes much longer to evaluate than conventional evaluation over a uniform angular resolution in the sky. For this reason we do not adopt the adaptive sampling process, since our primary aim is to improve the speed of pre-merger sky localization. Instead, we draw 5000 α and δ posterior samples each from our model's predicted posterior and then apply a 2D kernel density estimate (KDE) over these samples. We then evaluate the KDE over Nside,0 = 32 HEALPix grids, corresponding to a single grid area of ∼3.3 deg², to obtain our final result.
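The grid areas quoted above follow directly from the HEALPix pixelization, which divides the sphere into 12·Nside² equal-area pixels; a quick check (values computed here from that formula, not taken from BAYESTAR's code):

```python
import math

FULL_SKY_DEG2 = 4 * math.pi * (180 / math.pi) ** 2  # ~41252.96 deg^2

def healpix_grid_area(nside):
    """Area of one pixel: a HEALPix map has 12 * nside^2 equal-area pixels."""
    return FULL_SKY_DEG2 / (12 * nside ** 2)

print(f"{healpix_grid_area(16):.2f}")      # BAYESTAR's coarse grid: 13.43 deg^2
print(f"{healpix_grid_area(32):.2f}")      # grid used here: 3.36 (~3.3) deg^2
print(f"{healpix_grid_area(2 ** 11):.1e}") # finest adaptive grid: ~1e-3 deg^2
```

Seven rounds of subdivision from Nside = 16 indeed reach Nside = 16·2⁷ = 2¹¹, consistent with the smallest-grid area of ∼10⁻³ deg².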
Our chosen angular resolution therefore results in sky grids that are much larger than BAYESTAR's smallest sky grids after adaptive refinement, which leads to larger 90% contours and searched areas than BAYESTAR for high network SNR cases, where the angular resolution has a more significant impact on the overall result. The sampling process we adopt may also explain why our median areas are smaller than BAYESTAR's. During inference, after sampling α and δ from the predicted posterior, we evaluate the KDE with a fixed bandwidth of 0.03, chosen by cross-validation. This may result in a narrower contour estimate, on average, than BAYESTAR's sampling method. Figures 5 (a) to (f) show P-P plots for a subset of injections at 0 secs, 10 secs, 15 secs, 28 secs, 44 secs and 58 secs before merger, respectively.
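A minimal version of this KDE step, run on synthetic posterior samples rather than real flow output, might look as follows. Two assumptions to flag: SciPy's `bw_method` scalar is a multiplier on the sample standard deviation, so treating 0.03 as that factor is a guess at the paper's convention, and the flat-sky cell area below ignores the cos δ correction and uses a uniform grid instead of HEALPix cells.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Hypothetical posterior samples of right ascension and declination (radians),
# standing in for the 5000 draws taken from the flow during inference.
samples = rng.normal(loc=[3.0, 0.5], scale=0.05, size=(5000, 2)).T

kde = gaussian_kde(samples, bw_method=0.03)

# Evaluate the KDE on a uniform grid (the paper uses HEALPix Nside=32 cells).
ra = np.linspace(2.7, 3.3, 80)
dec = np.linspace(0.2, 0.8, 80)
RA, DEC = np.meshgrid(ra, dec)
density = kde(np.vstack([RA.ravel(), DEC.ravel()]))
p = density / density.sum()  # normalize to a probability per cell

# 90% credible region: accumulate highest-probability cells to 0.9 total mass.
order = np.argsort(p)[::-1]
n_cells = np.searchsorted(np.cumsum(p[order]), 0.9) + 1
cell_area_deg2 = np.degrees(ra[1] - ra[0]) * np.degrees(dec[1] - dec[0])
area90 = n_cells * cell_area_deg2
print(f"90% credible area ~ {area90:.1f} deg^2")
```

The searched area is computed the same way, except cells are accumulated in decreasing probability order until the cell containing the true sky position is reached.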
To obtain the P-P plots, we compute the percentile scores of the true right ascension and declination parameters within their marginalized posteriors and obtain the cumulative distribution of these scores. For accurate posteriors, the distribution of the percentile scores should be uniform, which means the cumulative distribution should be diagonal, as is evident from the figures. We also perform Kolmogorov-Smirnov (KS) tests for each dataset to test the hypothesis that the percentile values for each set are uniformly distributed. The p-values from the KS tests, shown in the legend, are > 0.05 for each parameter, which means that at a 95% level of significance we cannot reject the null hypothesis that the percentile values are uniform, and thereby our posteriors are consistent with the expected distribution. Because of the low dimensionality of our input data, training our network takes less than an hour on an NVIDIA Tesla P100 GPU.
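The calibration logic behind the P-P plot can be reproduced on a toy problem with a known posterior: for a unit-normal prior and unit-normal observation noise, the exact posterior is N(y/2, 1/2), and the percentile score of the truth is then uniform. This is an illustrative stand-in, not our injection set.

```python
import numpy as np
from scipy.stats import kstest, norm

rng = np.random.default_rng(2)

# Toy events: true parameter from the prior, then a noisy observation.
n_events = 500
theta = rng.normal(0, 1, n_events)      # true parameter per event
y = theta + rng.normal(0, 1, n_events)  # observed data

# Percentile of the truth within the exact posterior N(y/2, 1/2).
percentiles = norm.cdf(theta, loc=y / 2, scale=np.sqrt(0.5))

# For calibrated posteriors these scores are uniform on [0, 1]: the P-P plot
# (empirical CDF of the scores) is diagonal, and the KS test should not
# reject uniformity.
stat, p_value = kstest(percentiles, "uniform")
print(f"KS p-value: {p_value:.3f}")  # typically > 0.05 for calibrated posteriors
```

A miscalibrated posterior (e.g. one that is systematically too narrow) piles the scores near 0 and 1, bending the P-P curve away from the diagonal and driving the KS p-value toward zero.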
Overall, the sampling and evaluation steps during inference take a few milliseconds for each injection on the same computational resource. Sample generation and matched filtering were implemented with a modified version of the code developed by Gebhard et al. (2019) that uses the PyCBC software (Nitz et al. (2021)). CBC-SkyNet was written in TensorFlow 2.4 (Abadi et al. (2016)) using the Python language.
6. DISCUSSION
In summary, we have reported the first deep-learning-based approach for pre-merger sky localization of BNS sources, capable of orders of magnitude faster inference than Bayesian methods. Currently our model's accuracy is similar to BAYESTAR's on injections with network SNR between 9 and 40 at design sensitivity. The next step in this research would be to perform a similar analysis on real detector data, which has non-stationary noise and glitches that may corrupt the signal and affect detection and sky localization. A possible way to improve our model's performance at high SNRs (> 25) would be to use a finer angular resolution in the sky for evaluating the posteriors. We can also train different versions of the model for different luminosity distance (and hence SNR) ranges.
Figure 2. Top panel, (a) to (c): Histograms of the areas of the 90% credible intervals from CBC-SkyNet (blue) and BAYESTAR (orange) for 0 secs, 10 secs and 15 secs before merger. Bottom panel, (d) to (f): Similar histograms for 28 secs, 44 secs and 58 secs before merger.
Our long-term goal is to construct an independent machine learning pipeline for pre-merger detection and localization of GW sources. The faster inference speed of machine learning models would be crucial for electromagnetic follow-up and observation of prompt and precursor emissions from compact binary mergers. This method is also scalable and can be applied to predicting the luminosity distance of the sources pre-merger, which would help obtain volumetric localization of the source and potentially identify host galaxies of BNS mergers.
The authors would like to thank Dr. Foivois Diakogiannis, Kevin Vinsen, Prof. Amitava Datta and Damon Beveridge for useful comments on this work. This research was supported in part by the Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav, through Project No. CE170100004). This research was undertaken with the support of computational resources from the Pople high-performance computing cluster of the Faculty of Science at the University of Western Australia. This work used the computer resources of the OzStar computer cluster at Swinburne University of Technology. The OzSTAR program receives funding in part from the Astronomy National Collaborative Research Infrastructure Strategy (NCRIS) allocation provided by the Australian Government. This research used data obtained from the Gravitational Wave Open Science Center (https://www.gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. LIGO is funded by the U.S. National Science Foundation.
Virgo is funded by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and Hungarian institutes. This material is based upon work supported by NSF's LIGO Laboratory, which is a major facility fully funded by the National Science Foundation.
REFERENCES
Aasi, J., Abbott, B. P., Abbott, R., et al. 2015, Classical and Quantum Gravity, 32, 074001, doi: 10.1088/0264-9381/32/7/074001
Abadi, M., Agarwal, A., Barham, P., et al. 2016, TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
Figure 3. Top panel, (a) to (c): Histograms of the searched areas from CBC-SkyNet (blue) and BAYESTAR (orange) for 0 secs, 10 secs and 15 secs before merger. Bottom panel, (d) to (f): Similar histograms for 28 secs, 44 secs and 58 secs before merger.
Figure 4. (a) Box-and-whisker plots showing the areas of the 90% credible intervals from CBC-SkyNet (blue) and BAYESTAR (pink) at 0 secs, 10 secs, 15 secs, 28 secs, 44 secs and 58 secs before merger. The boxes encompass 95% of the events and the whiskers extend to the rest. The white lines within the boxes represent the median values of the respective data sets. (b) Similar box-and-whisker plot comparing searched areas from CBC-SkyNet (blue) and BAYESTAR (pink) at the same times before merger.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 4 2 0 2 4 Searched area in log (deg2)104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2 in deg area 103 90% credible interval 102 CBC-SkyNet 101 Bayestar 10 15 28 44 58 0 Time from merger (in secs)104 103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2 6 p Searched area in 102 101 S 100 CBC-SkyNet Bayestar 10 15 28 44 58 0 Time from merger (in secs)CBC-SkyNet 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='6 Bayestar 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='5 Density 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='1 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 4 2 0 2 4 Searched area in log (deg2)8 (a) (b) (c) (d) (e) (f) Figure 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' (a) to (f): P–P plots for a subset of the total number of test samples at 0 secs, 10 secs, 15 secs, 28 secs, 44 secs and 58 secs before merger.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' We compute the percentile values (denoted as p) of the true right ascension and declination parameters within their 1D posteriors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' The figure shows the cumulative distribution function of the percentile values, which should lie close to the diagonal if the network is performing properly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' The p-values of the KS test for each run is shown in the legend.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 RA(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='101) Dec (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0571) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='8 Cumulative distribution 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='8 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 p1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 RA (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='817) Dec (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='325) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='8 Cumulative distribution 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='8 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 p1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 RA (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='829) Dec (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='188) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='8 Cumulative distribution 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='8 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 p1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 RA(0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0891) Dec (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='441) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='8 Cumulative distribution 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='8 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 p1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 RA (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0745) Dec (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='122) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='8 Cumulative distribution 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='8 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 p1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 RA (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='338) Dec (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='147) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='8 Cumulative distribution 0.' 
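The calibration check described in the Figure 5 caption can be sketched as follows. This is not code from the paper; it is a minimal illustration, assuming simulated Gaussian posteriors in place of the network's RA/Dec posteriors: for each event, compute the percentile of the true parameter within its own 1D posterior samples, then test those percentiles against a uniform distribution with a KS test (equivalently, check that their CDF hugs the diagonal of a P–P plot).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate many events: a "true" value per event plus posterior samples.
# If the posteriors are well calibrated, the percentile of the truth
# within its own 1D posterior is uniform on [0, 1].
n_events, n_samples = 500, 1000
truths = rng.normal(size=n_events)
posteriors = truths[:, None] + rng.normal(size=(n_events, n_samples))

# Percentile of each true value within its posterior samples.
p_values = (posteriors < truths[:, None]).mean(axis=1)

# KS test against the uniform distribution: a small KS statistic (and a
# non-tiny p-value) means the empirical CDF of the percentiles lies
# close to the P-P plot diagonal.
ks_stat, ks_p = stats.kstest(p_values, "uniform")
print(f"KS statistic = {ks_stat:.3f}, p-value = {ks_p:.3f}")
```

A miscalibrated (e.g. overconfident) posterior would instead pile percentiles near 0 and 1, pulling the CDF away from the diagonal and driving the KS p-value toward zero.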
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Tonolini, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', & Murray-Smith, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2022, Nature Physics, 18, 112, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='1038/s41567-021-01425-7 Gebhard, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Kilbertus, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Harry, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', & Schölkopf, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2019, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' D, 100, 063015, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='1103/PhysRevD.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='100.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='063015 Germain, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Gregor, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Murray, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', & Larochelle, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2015 Greenberg, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Nonnenmacher, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', & Macke, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2019, in Proceedings of Machine Learning Research, Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 97, Proceedings of the 36th International Conference on Machine Learning, ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Chaudhuri & R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Salakhutdinov (PMLR), 2404–2414.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' https://proceedings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='mlr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='press/v97/greenberg19a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='html Górski, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Hivon, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Banday, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2005, The Astrophysical Journal, 622, 759, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='1086/427976 Haas, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Ott, C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Szilagyi, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2016, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' D, 93, 124062, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='1103/PhysRevD.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='124062 He, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Zhang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Ren, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', & Sun, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2015, Deep Residual Learning for Image Recognition Hooper, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2013, PhD thesis Kingma, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Salimans, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Jozefowicz, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2016, Improving Variational Inference with Inverse Autoregressive Flow Kirichenko, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Izmailov, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', & Wilson, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2020, Why Normalizing Flows Fail to Detect Out-of-Distribution Data Klimenko, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Vedovato, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Drago, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2016, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' D, 93, 042004, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='1103/PhysRevD.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='042004 Kovalam, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Patwary, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Sreekumar, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2022, The Astrophysical Journal Letters, 927, L9, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='3847/2041-8213/ac5687 Littenberg, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', & Cornish, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2015, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' D, 91, 084034, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='1103/PhysRevD.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='084034 Lueckmann, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Goncalves, P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Bassetto, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2017, in Advances in Neural Information Processing Systems, ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Guyon, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Luxburg, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Bengio, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Wallach, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Fergus, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Vishwanathan, & R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Garnett, Vol.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 30 (Curran Associates, Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=').' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' https://proceedings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='neurips.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='cc/paper/2017/file/ addfa9b7e234254d26e9c7f2af1005cb-Paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='pdf Magee, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Chatterjee, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Singer, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2021, The Astrophysical Journal Letters, 910, L21, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='3847/2041-8213/abed54 Metzger, B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2017, Welcome to the Multi-Messenger Era!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Lessons from a Neutron Star Merger and the Landscape Ahead Metzger, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', & Piro, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2014, Monthly Notices of the Royal Astronomical Society, 439, 3916, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='1093/mnras/stu247 Most, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', & Philippov, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2020, The Astrophysical Journal Letters, 893, L6, doi: 10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='3847/2041-8213/ab8196 10 Nicholl, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Berger, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Kasen, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2017, The Astrophysical Journal Letters, 848, L18, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='3847/2041-8213/aa9029 Nissanke, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Kasliwal, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', & Georgieva, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2013, The Astrophysical Journal, 767, 124, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='1088/0004-637X/767/2/124 Nitz, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Harry, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Brown, D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2021, gwastro/pycbc: 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='18.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0 release of PyCBC, v1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='18.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='0, Zenodo, doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='5281/zenodo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='4556907 Nitz, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Schäfer, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', & Canton, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2020, The Astrophysical Journal Letters, 902, L29, doi: 10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content='3847/2041-8213/abbc10 Papamakarios, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', & Murray, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2016, Fast ϵ-free Inference of Simulation Models with Bayesian Conditional Density Estimation Papamakarios, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', Pavlakou, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=', & Murray, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' 2017, in Advances in Neural Information Processing Systems, ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Guyon, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Luxburg, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Bengio, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E1T4oBgHgl3EQf9gbH/content/2301.03558v1.pdf'} +page_content=' Wallach, R.' 
diff --git a/DtAzT4oBgHgl3EQfGvt3/content/tmp_files/2301.01033v1.pdf.txt b/DtAzT4oBgHgl3EQfGvt3/content/tmp_files/2301.01033v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..30031dfe4d1141cefdd2e3194663f6b67654d323 --- /dev/null +++ b/DtAzT4oBgHgl3EQfGvt3/content/tmp_files/2301.01033v1.pdf.txt @@ -0,0 +1,4876 @@
+Ca’ Foscari University
+Department of Environmental Sciences, Informatics and Statistics
+Dissecting Continual Learning: a Structural and Data Analysis
+Ph.D. Thesis - Computer Science, XXXIV Cycle
+Submitted by: Pelosin Francesco
+Supervisor: Prof. Torsello Andrea
+Venice, Italy - March, 2022
+arXiv:2301.01033v1 [cs.CV] 3 Jan 2023
+Abstract
+Deep Learning aims to discover how artificial neural networks learn the rich internal representations required for difficult tasks such as recognizing objects or understanding language. This hard question is still unanswered, although we are constantly improving the performance of such systems on problems spanning from computer vision to natural language processing. Continual Learning (CL) is a field dedicated to devising algorithms able to achieve lifelong learning by overcoming the disruption of previously acquired knowledge, a phenomenon that affects deep learning architectures and goes by the name of catastrophic forgetting. Currently, deep learning methods can achieve outstanding results when the modeled data does not undergo a considerable distribution shift in subsequent learning sessions, but as soon as we expose the systems to such an incremental setting, performance drops abruptly due to catastrophic forgetting. As the amount of data generated in the world keeps growing, so does the demand to model such streams in a sequential fashion. Devising techniques to prevent knowledge corruption in neural networks is therefore fundamental.
Overcoming such limitations would allow us to build truly intelligent systems showing adaptability and human-like qualities. Secondly, it would allow us to overcome the limitation, and onerous aspect, of retraining the architectures from scratch with the updated data. This drawback stems from how deep neural networks learn: they require many parameter updates to learn any given concept. This is also the exact reason why catastrophic forgetting happens: as we learn new concepts, we overwrite old ones, while a truly intelligent system would exhibit an optimal stability-plasticity trade-off. In this thesis, we first describe the background needed to understand continual learning in the computer vision realm. We do so by introducing a notation and a formal description of the problem. Then, we introduce several CL setting variants and the main solution categories proposed in the literature, along with an analysis of the state-of-the-art. We then analyze one of the baseline approaches to continual learning and discover that in rehearsal-based techniques the quantity of stored data is a more important factor than its quality. This trade-off surprisingly holds even for impressively high compression rates of the data. Secondly, this thesis proposes one of the early works on the study of incremental learning on vision transformer architectures (ViTs). In particular, we compare functional, weight, and attention regularization approaches for the challenging rehearsal-free CL setting. We then propose an asymmetric loss variant inspired by PODNet, achieving good capabilities in terms of plasticity. Among these contributions, we propose a simple but effective baseline for off-the-shelf continual learning exploiting pretrained models, and discuss its extension to unsupervised continual learning, a topic that deserves further attention from the community.
As the final work, we introduce a novel algorithm able to explore the environment through unsupervised visual pattern discovery. We then provide a conclusion and discuss further developments and promising paths to be followed by CL research.
+Acknowledgments
+First, I would like to express my gratitude to my supervisor Andrea Torsello, for all the deep insights and for welcoming me to pursue this research with him.
+Secondly, I would like to thank all the people I encountered throughout these years, especially the colleagues and friends I met; a personal acknowledgment to Alessandro, Seyum, and Fatima, and also the friends I met in Spain: Héctor, Laura, and Albin. I would also like to thank everyone who loved me during this period: you gave me the strength to carry on this tough journey!
+Lastly, I would say that I learned a lot during these years, and the force that moved me to pursue a Ph.D. is the same force that allows us to expand and look for answers, to find meanings, and to unfold something beautiful.
+“You’re pretty good”
+Contents
+1 Introduction
+2 Background and Motivation
+2.1 Artificial vs Natural Intelligence
+2.2 What is Continual Learning?
+2.2.1 Stability-Plasticity Dilemma
+2.2.2 Catastrophic Forgetting
+2.2.3 A Visual Example
+3 Continual Learning Framework
+3.1 Definition and Settings
+3.1.1 Online CL vs Offline CL
+3.1.2 Task-Incremental vs Class-Incremental
+3.2 Baselines
+3.2.1 Cumulative
+3.2.2 Finetuning
+3.3 State-of-the-art
+3.3.1 Structural-based
+3.3.2 Regularization-based
+3.3.3 Rehearsal-based
+4 Works
+4.1 Smaller is Better: An Analysis of Instance Quantity/Quality Trade-off in Rehearsal-based Continual Learning
+4.2 Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization
+4.3 Simpler is Better: off-the-shelf Continual Learning through Pretrained Backbones
+4.4 Unsupervised Semantic Discovery through Visual Patterns detection
+5 Conclusions
+Chapter 1
+Introduction
+“The measure of intelligence is the ability to change”
+- Albert Einstein
+The interconnections among entities in our world are growing. Along with this fact, the ability to keep track of and record such data has increased accordingly. The need for systems that can cope with such phenomena is essential. Deep Learning (DL) has revealed itself to be a powerful weapon for modeling such complex streams, especially in the Computer Vision and Natural Language Processing fields. The advent of DL unlocked the ability to develop outstanding technologies that can directly impact our lives; self-driving cars are one example. Unfortunately, the impact is not always positive if not properly controlled.
Therefore, the need for systems that show generalization abilities and can cope with unexpected scenarios is nowadays essential. To this end, we also need responsive machines that can be trained to quickly learn new concepts with low resource consumption. In fact, what happens if the stream of data encountered by a deep learning model changes its quality over time? This particular question is tackled by Continual Learning (CL), whose aim is to develop lifelong learning machines, unlocking fast adaptability to new environments.
+Modern deep learning methods for computer vision adapt themselves only to the manifold they are trained on. Instead, we need to devise models that are plastic enough to generalize to distributional shifts in the data and do not require complete retraining. This challenge would be solved if training deep learning models were not such a delicate process affected by unexpected drawbacks. In fact, when we introduce the notion of learning through time and expose the system to incremental tasks of different nature, things can get really complicated.
+One of the drawbacks of learning incrementally is the so-called catastrophic forgetting, where the system is subject to an abrupt deterioration of past knowledge whenever asked to learn new concepts. This big limitation is broadly studied in continual learning. To approach this delicate subject, in this thesis we start by gently introducing some basic differences between artificial and natural intelligence. Here, we clarify some operative differences between artificial neural networks and some basic brain mechanisms arising from neuroscience. Then, we informally introduce the notion of continual learning and discuss the stability-plasticity dilemma along with the phenomenon of catastrophic forgetting in artificial neural networks. We proceed by introducing a more formal definition of incremental learning along with its fine-grained inclinations.
Before moving to the contributions, we introduce a brief overview of the state-of-the-art and define the main baselines, which act as lower and upper bounds for continual learning methodologies.
+We step into the major contributions by focusing on rehearsal systems, a family of methods that exploit cache memories to replay previous knowledge. Here, we study how the compression of stored rehearsal data impacts the performance of the model. Tackling the memory side of CL, we provide a quality/quantity analysis through the usage of several compression schemes. We also consider extreme compression rates, providing some insights. On top of that, we consider continual learning under low-resource constraints through the usage of random projections and, in particular, Extreme Learning Machines.
+To follow, as a second major contribution, we are among the first to investigate Vision Transformers in continual learning. In particular, we analyze several regularization schemes for ViTs, providing a first exploration of rehearsal-free CL. We consider weight, functional, and attentional regularization; since the latter was unexplored before, we carefully study the application of regularization to specific parts of the self-attention mechanism. As a side contribution, we introduce a new asymmetric loss variant inspired by a contemporary continual learning method (PODNet), principled by the observation that new attention should not penalize the acquisition of new knowledge.
+We then further clarify the usage of pretrained models in continual learning through an experimental segment. We compare fully pretrained CNNs and Vision Transformers on several incremental benchmarks. We provide a clear, simple baseline that requires a few KBytes to operate and does not perform parameter updates. Being simple and effective, we discuss its extension to the unsupervised realm, where we consider further extensions for future works.
+Along with these three contributions, we also study the ability of a system to autonomously discover new visual patterns, a notion embedded in an optimal incremental learner. We therefore provide a simple unsupervised pipeline able to discover semantic patterns at different visual scales. Finally, we conclude by wrapping up our perspectives on the main aforementioned challenges.
+As a final note, we hope this thesis finds a meaningful purpose in the CL community, contributing to the development of Continual Learning and Computer Vision research.
+Contribution Preface
+In this thesis we include some papers developed while pursuing the Ph.D. The main contributions are reported in Chapter 4. The chapter holds the outcome of several collaborations, and in the following list we report the names of the authors and the venues where the works have been submitted:
+• The work reported in Section 4.1 has been accepted as an oral poster at IJCNN 2022. The authors who contributed to the work are (in order): Francesco Pelosin and Andrea Torsello from Ca’ Foscari University of Venice.
+• The work reported in Section 4.2 is the outcome of the collaboration of the research period abroad and has been accepted as a poster at the Continual Learning Workshop of CVPR 2022. The authors who contributed to the work are (in order): Francesco Pelosin, Ca’ Foscari University of Venice (Equal Contrib); Saurav Jha, University of New South Wales, Australia (Equal Contrib); Andrea Torsello, Ca’ Foscari University of Venice, Italy; Bogdan Raducanu and Joost van de Weijer from Computer Vision Center, Spain.
+• The work reported in Section 4.3 has been accepted as a poster at the Transformers for Vision Workshop of CVPR 2022 and is single-authored by Francesco Pelosin.
+• The work reported in Section 4.4 has been accepted to S+SSPR 2020.
+The authors who contributed to the work are (in order): Francesco Pelosin, Andrea Gasparetto, Andrea Albarelli, and Andrea Torsello, Ca’ Foscari University of Venice, Italy.
+Chapter 2
+Background and Motivation
+2.1 Artificial vs Natural Intelligence
+Despite the recent developments and great achievements of the field of Artificial Intelligence, the fundamental nature of Artificial Neural Networks (ANNs) might still be a coarse approximation of how our biological brains work. With the mathematical formulation by [McCulloch and Pitts, 1943] and the introduction of the “Perceptron” by [Rosenblatt, 1958], which constitutes the smallest unit forming an ANN, we shaped our modeling of intelligence. An artificial neuron can be described as a cumulative summation of multiplications over some weights followed by a non-linear function.
+Then, after the introduction of the famous Multi-Layer Perceptrons (MLPs), the structure of ANNs has not changed much: we work in a connectionist paradigm where learning happens through distributed signal activity via connections among artificial neurons. In particular, learning occurs by modifying connection strengths based on experience; this modification procedure is the so-called backpropagation algorithm, whose discovery can be traced back to [Rumelhart et al., 1986], with some earlier work by [Linnainmaa, 1976] (as an M.Sc. Thesis), as pointed out in [Schmidhuber, 2014].
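The weighted-sum-plus-nonlinearity view of the artificial neuron, and the idea of learning as modifying connection strengths, can be made concrete with a short sketch. The function names, the sigmoid activation, the squared-error loss, and all constants below are illustrative assumptions for this sketch, not details taken from the cited works:

```python
import math

def neuron(x, w, b):
    # Cumulative summation of input-weight multiplications, plus a bias...
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    # ...followed by a non-linear function (here, a sigmoid).
    return 1.0 / (1.0 + math.exp(-z))

def update(x, y, w, b, lr=0.5):
    # One chain-rule gradient step on a squared-error loss: "modifying
    # connection strengths based on experience" in its simplest form.
    out = neuron(x, w, b)
    grad = (out - y) * out * (1.0 - out)   # dL/dz for L = (out - y)^2 / 2
    w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
    return w, b - lr * grad

x, target = [1.0, 0.5], 1.0                # one toy input and its target
w, b = [0.0, 0.0], 0.0                     # untrained neuron outputs 0.5
for _ in range(200):
    w, b = update(x, target, w, b)
# Repeated updates move the output from 0.5 toward the target.
```

Stacking many such units into layers and propagating the same chain-rule update through all of them is precisely what backpropagation does in the multi-layer case.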
+The success of connectionist models spans different fields: Convolutional Neural Networks (CNN) for Computer Vision (CV) [He et al., 2016], Language Models for Natural Language Processing (NLP) [Devlin et al., 2019], Deep Q-Learning Networks (DQN) for Reinforcement Learning [Agarwal et al., 2020], Generative Audio Models for audio [van den Oord et al., 2016], and Graph Convolutional Networks (GCN) for graph data [Kipf and Welling, 2017].
+Connectionist models are a composition of several layers of artificial neurons, each followed by a non-linearity. There are several types of layers, each with its own peculiarity. For example, the introduction of Batch Normalization [Ioffe and Szegedy, 2015] allowed networks to achieve faster training. The introduction of specialized units often allowed models to excel in particular fields, such as the convolution operation [LeCun et al., 1998] for Computer Vision tasks and the Self-Attention mechanism [Vaswani et al., 2017] in Natural Language Processing, although the attention mechanism has also achieved tremendous results in vision tasks thanks to [Dosovitskiy et al., 2021] and its introduction of Vision Transformers. Nowadays there is still no perfect mechanism or model for every scenario, because we are still in the process of discovering how learning happens. In the future, we might well see methodologies succeeding in fields other than the ones they were born in.
+Figure 2.1: Feature visualization of GoogLeNet [Szegedy et al., 2015], trained on the ImageNet [Russakovsky et al., 2015] dataset. Concepts in early layers are reported on the left, while concepts of the last layers are on the right.
The image is taken from [Olah et al., 2017].
+While attention-based models spread the knowledge, and feature representations, uniformly across the layers [Raghu et al., 2021], in classical convolution-based models (such as ResNets [He et al., 2016]) the knowledge is constructed in a bottom-up fashion. This is a well-known fact: abstract concepts are always the result of the composition of simpler concepts. For example, in the early layers of CNNs for CV tasks, each neuron specializes in the detection of low-level features, while, as we move towards the head, the network learns patterns with more semantic relevance for us humans. This can be seen thanks to the beautiful visualizations of [Olah et al., 2017] captured in Figure 2.1. This also reflects some neuroscientific discoveries, where hierarchies of more and more abstract concepts have been demonstrated repeatedly, especially in the visual brain areas [Riesenhuber and Poggio, 1999].
+While those resemblances are appealing for drawing a connection between artificial and biological brains, the difference is still striking. For example, quite often Deep Learning models are static, that is, they do not alter their architecture over time; but in our biological brains, new connections can appear while others cease to exist. This is the so-called neuroplasticity of our brains, whose first scientific evidence was reported by [Bennett et al., 1964]. As we will see, continual learning and a few other fields (e.g., dynamic routing, conditional computation, etc.) are the only ones going in this direction.
+On another note, time seems to be a major factor in both artificial and natural learning. Our current connectionist framework does not exploit the notion of time in learning. To accommodate such a factor we would need to redefine the current learning framework, because so far the models process data without being conditioned on when something is learned.
There have been some attempts in this direction by defining learning as a system of differential equations that takes time into account as a fundamental variable, with some attempts to implement it by [Betti et al., 2020], although the majority of works still operate in the classical scenario.
+Another clear distinction between artificial and biological neurons lies in how they decide to fire. The artificial neuron receives inputs and multiplies them by some weights that are adapted during learning. To fire, it uses an activation function (such as ReLU [Agarap, 2018]), but the reality of biological neurons is different. Each biological neuron has its own threshold, resulting from a complex chemical interaction. A class of models trying to bridge this gap is Spiking Neural Networks [MAA, 1997], where the firing of the neuron is determined by a threshold on the signal received. Note that even this simplified model mimics neither the creation nor the destruction of connections (dendrites or axons) between neurons, and it ignores signal timing. However, this restricted model alone is powerful enough to work on simple classification tasks.
+Another important difference is that biological circuits contain a myriad of additional details and complexity not translated to DL models, including diverse neural cell types [Tasic et al., 2018], with some recent attempts by [Doty et al., 2021] to bridge this gap by changing the activation function of each artificial neuron. Another attempt to introduce more complex structures has been proposed by [Sabour et al., 2017] with the introduction of Capsule Net models, a family of networks where the neurons are structured in hierarchies.
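A minimal leaky integrate-and-fire sketch illustrates the threshold-based firing just described; the threshold, leak factor, and input values below are invented for illustration and are not taken from any specific spiking-neuron model:

```python
def lif_run(inputs, threshold=1.0, leak=0.9):
    # The membrane potential accumulates the incoming signal and leaks over
    # time; the neuron emits a spike only when the potential crosses the
    # threshold, after which the potential resets.
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

weak = lif_run([0.05] * 20)    # weak drive: the potential never reaches 1.0
strong = lif_run([0.4] * 20)   # strong drive: the neuron spikes periodically
```

The output is thus a binary spike train rather than a continuous activation value, which is precisely where this family of models departs from the artificial neuron described above.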
The most widely known neuroscientific framework for the brain is the Complementary Learning Systems (CLS) framework [McClelland et al., 1995]. This framework explains why the brain requires two differentially specialized learning and memory systems, and it nicely specifies their central properties, i.e., the hippocampus as a sparse, pattern-separated system for rapidly learning episodic memories, and the neocortex as a distributed, overlapping system that gradually integrates experienced episodes and extracts latent semantic structures. Instead, most of the proposed artificial models are well-engineered pipelines crafted to excel at a particular task, such as Computer Vision, NLP, etc., and do not draw inspiration from such theories, although a very recent work proposed by [Arani et al., 2022] explored this direction. With some recent developments in the CL field, rehearsal systems [Parisi et al., 2019] (systems that replay old data through a buffer) can be recast from such a point of view. In fact, we can think of the rehearsal buffer (the part of the CL system dedicated to storing “old” patterns used in replay) as a long-term memory, while the other part of the architecture is the fast-paced learner of the intelligent agent, i.e. the hippocampus. Perhaps the key to continual learning will lie in the inspiration from neuroscientific models. Indeed, recently [McCaffary, 2021] proposed a systematic review of the approaches in CL, along with some insights into why we should pay more attention to neuroscientific theories.
+As we saw, the gap between artificial and biological models is still relevant, and the two fields nowadays show big differences in their understanding of intelligence.
However, one striking fact is that the artificial community has achieved impressive results without directly mimicking current neuroscientific theories, suggesting that, perhaps, several paradigms of intelligence exist.
+2.2 What is Continual Learning?
+“Every machine is built to make decisions, if it does not have the faculty to learn, it will act always in conformity to a mechanical scheme. We don’t have to let the machine decide about our conduct if we first have not studied the laws that rule its behavior, and made sure that such behavior will be based on principles that we can accept!”
+- Norbert Wiener
+Definition: The aforementioned quote is taken from “Introduction to Cybernetics”, and highlights the fact that the ability to continually learn is a fundamental skill that any intelligent system should possess. Although we are now able to devise powerful artificial systems achieving superhuman performance on some tasks, we, as humans, still exhibit a core ability that would be fundamental to replicate intelligence as we know it: the ability to learn new concepts without erasing past knowledge. These two aspects are the main objectives of Continual Learning. First, exhibiting the ability to assimilate new concepts incrementally. Secondly, showing the capability of memorization, i.e. not forgetting what has been previously learned. In a nutshell, Continual Learning studies how to develop systems that learn incrementally over time without forgetting previously acquired knowledge.
+History: Continual Learning has drawn a lot of interest from the research community only in later years, even though the question itself is very old. One of the early papers trying to tackle this phenomenon was proposed by [Carpenter and Grossberg, 1988], where the authors proposed a short-term and long-term memory pattern detector through the Adaptive Resonance Theory.
In fact, to the best of our knowledge, this seems to be the earliest such work. Later, as connectionist models paved the way for modern Artificial Intelligence, other attempts and several proposals were made. The work by [Ring et al., 1994] coined the term “Continual Learning”; the system proposed there aimed to construct hierarchies of knowledge within a neural network. Later, with the works by [Thrun, 1995a] and [Thrun and Mitchell, 1995], Continual Learning started to get attention, especially in the Robotics and Reinforcement Learning research communities.

Figure 2.2: Continual learning spectrum. The optimal algorithm should exhibit enough plasticity to learn new tasks while retaining enough stability to not forget the acquired knowledge.

Terms: When we say Continual Learning, there are two other equivalent terms: Incremental Learning and Lifelong Learning. These terms can be used interchangeably and denote the same setting. There are no clear distinctions, and the preference of one over another is probably just a matter of the research field we are in. For example, in the computer science field, it seems that continual learning and incremental learning are more common. Other terms are used but differ in the specific continual setting they study, for example Online Learning and Streaming Learning. These are very similar, and there is no clear distinction yet. They describe algorithms that learn by observing an example just one time and, sometimes, the latter can also refer to systems that can respond to queries in real time. We will introduce a more formal definition in the next chapter.

Subject of CL: As we previously discussed, the study of Continual Learning is strictly tied to the widespread usage of connectionist models.
In fact, before the advent of Artificial Neural Networks (ANNs), intelligence was usually modeled by a mixture of expert systems and clever algorithms. Posing the same “continual learning question” for these systems is still an interesting challenge, but the success of ANNs shifted the focus to connectionist models.

2.2.1 Stability-Plasticity Dilemma

Learning incrementally (or continually) with connectionist models requires one core ability: adapting to a changing environment. If the environment did not change over time, and we exposed a system to operate on it, we would just need to understand, model, and hard-code the environment’s rules into the system, and we would achieve perfect functionality. Unfortunately, the real world does not seem to behave in such a predictable way. Instead, our reality constantly changes, and we need to redefine our knowledge, reshape it in light of new facts, have room to constantly learn something new, and recombine previous knowledge to understand a novel concept. This is not the only necessary property for an intelligent system; the counterpart is also important. In fact, some things do not change in the world, old challenges might arise again, and, therefore, fundamental knowledge should not be forgotten. A truly intelligent system would behave consistently on past lessons: it would be able to detect and recognize past challenges, delivering correct solutions. Researchers gave this trade-off a name: the stability-plasticity dilemma. The long-term goal of Continual Learning is to create a system able to achieve a perfect balance between these two abilities, as depicted in Figure 2.2. As we will see, it is termed a “dilemma” since achieving the optimal trade-off is a very hard task.
On top of these considerations, dissecting new concepts and redefining them as a combination of old knowledge allows the forward transfer of intelligence. That is, when we learn, we can sometimes abstract the knowledge to solve a related problem. This is not uncommon: it is the mechanism of analogical thinking, where an “operational pattern” can be used to solve problems in apparently different domains. As an example, [Hill et al., 2019] investigates such a property of intelligence in artificial networks. On the other hand, continual learning should give the ability to better grasp past knowledge, improving the ability to solve past challenges. This is even more common, and we can think of this kind of ability as the “experience” that an agent accumulates in a certain field or in solving a certain category of tasks.

In a nutshell, the stability-plasticity dilemma can be considered the crux of intelligence. Showing adaptability to new environments while at the same time retaining knowledge of old environments seems to be the major quality of an intelligent agent.

Figure 2.3: The original images for each task. This image shows the ground truth relative to Figure 2.5.

2.2.2 Catastrophic Forgetting

One core aspect of deep neural networks lies in the fact that, if we do not introduce any mechanism to achieve the balance between stability and plasticity, the artificial network is naturally inclined to forget; that is, neural networks put much more emphasis on plasticity than on stability. From a neuroscientific point of view, this fact does not make much sense unless we think about neural networks as systems without any form of memory. The reality is that networks do have memory, but, by the nature of the learning algorithms, we overwrite such memory.
As the model incrementally learns, each parameter in the network is modified by the updates of the backpropagation algorithm. The optimal continual learning method would be able to modify the parameters without altering the performance on old tasks. This does not happen, and therefore neural networks are prone to the so-called catastrophic forgetting, the phenomenon where old knowledge is corrupted.

2.2.3 A Visual Example

To better grasp the phenomenon of catastrophic forgetting, we provide a visual example in the following section. As we discussed, catastrophic forgetting happens because the parameters tuned to solve a task (usually experienced earlier in time) are not suited for the currently experienced task. We hope to provide a clear visual example of the effects of catastrophic forgetting in a shallow architecture.

As the name suggests, Deep Learning refers to architectures with many layers stacked on top of each other. Because of this depth, computer vision (but not only this community) was able to achieve impressive results in the domain of pattern recognition. Unfortunately, we still do not fully control how knowledge is built inside a deep neural network, and if we want to counter forgetting we would need such information. To do so, we would need to keep track of each parameter variation as we learn new concepts in a continuous fashion, but doing so, especially in such models, is hard if not impossible. That said, on a small scale, we can still show what is going on inside a network. In the following toy example we try to track the forgetting of an autoencoder by dissecting the learning process per task. We will use a simple one-layer autoencoder model and try to incrementally learn the famous MNIST [LeCun et al., 1998] dataset, still used in the continual learning literature to validate proposed methods.
We will divide the dataset into 5 tasks and learn to compress and reconstruct images. By doing so, we will show the corruption of old images as we learn new tasks and connect it to the variation of the network’s parameters.

The MNIST dataset is a grayscale dataset of 28 × 28 images of handwritten digits going from the digit 0 to the digit 9. MNIST was constructed from NIST’s Special Database 3 and Special Database 1; the first was collected among Census Bureau employees and the second among high school students. It has a training set of 60,000 examples and a test set of 10,000 examples. We will divide the dataset into 5 tasks: task 1 is composed of the digits 0 and 1, task 2 of the digits 2 and 3, task 3 of the digits 4 and 5, task 4 of the digits 6 and 7, and finally task 5 of the digits 8 and 9.

Although the best practice for working with image data is to use CNNs, we will limit our toy example to a naive autoencoder model of linear layers. This choice allows us to better unfold and analyze the variation of the parameters thanks to its simplicity. The model is composed of a single-layer encoder φ that encodes an image into a latent vector and a single-layer decoder ψ that reconstructs the image. In particular, the single-layer encoder is a linear layer φ : R^784 → R^16 that receives in input a flattened (28 × 28 = 784) representation of the image and compresses it into a latent vector of size 16. The decoder then takes care of the reconstruction of the image by doing the reverse process, that is, ψ : R^16 → R^784, i.e., given a latent vector of size 16, it decompresses it to a flattened image.

More formally, an autoencoder can be represented in the following way:

x̂ = ψ(φ(x))

where x ∈ R^784 is the flattened representation of an original image coming from a task t, φ is the encoder network, ψ is the decoder network, and x̂ ∈ R^784 is the flattened representation of the reconstructed image.
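This toy model can be sketched in a few lines of NumPy (an illustrative stand-in; the thesis experiments presumably use a standard deep-learning framework, and the helper names `encode`, `decode`, and `train_step` are ours). It implements φ and ψ as single linear layers and trains them by plain gradient descent on the squared reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 784, 16  # flattened image size, latent size

# Single-layer encoder (phi) and decoder (psi): weights + biases.
We, be = rng.normal(0.0, 1e-3, (D, H)), np.zeros(H)
Wd, bd = rng.normal(0.0, 1e-3, (H, D)), np.zeros(D)

def encode(x):            # phi : R^784 -> R^16
    return x @ We + be

def decode(z):            # psi : R^16 -> R^784
    return z @ Wd + bd

def train_step(x, lr=0.005):
    """One gradient-descent step on the reconstruction error ||x - psi(phi(x))||^2."""
    global We, be, Wd, bd
    z = encode(x)
    err = decode(z) - x                          # gradient w.r.t. the reconstruction
    gWd, gbd = z.T @ err / len(x), err.mean(axis=0)
    dz = err @ Wd.T                              # backpropagate through the decoder
    gWe, gbe = x.T @ dz / len(x), dz.mean(axis=0)
    We, be = We - lr * gWe, be - lr * gbe
    Wd, bd = Wd - lr * gWd, bd - lr * gbd
    return float((err ** 2).mean())

# A stand-in batch of "task data": random vectors instead of MNIST digits.
x = rng.random((32, D))
losses = [train_step(x) for _ in range(300)]     # reconstruction error shrinks
```

Training this same pair of layers sequentially on five such batches (one per task) and re-evaluating the reconstruction of the first batch is exactly the experiment analyzed in the next paragraphs.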
The objective is to optimize, and in particular minimize, the mean squared error (MSE) between the original image and its reconstruction. More formally, we can define the objective function as:

min_{φΘ, ψΘ} L(x, x̂) = min_{φΘ, ψΘ} ‖x − x̂‖²

Here φΘ represents the set of encoder parameters to be optimized, while ψΘ is used for the decoder’s.

Figure 2.4: Variation of the parameters grouped by task. Each bar plot shows the distribution of the weights; we can see that each task modifies internal parameters. Each weight is computed as the sum of all the connections of the particular latent neuron.

By incrementally learning each task, we want to show the corruption of the ability to reconstruct previous tasks. The change in the parameters to accommodate the new task negatively impacts old tasks. In fact, if we try to retrieve old concepts we see catastrophic interference, that is, the network confuses old concepts with newly learned ones. From now on let us refer to Figure 2.5, where the complete incremental learning process and its effects are depicted. The grid reported encodes the performance of the autoencoder. Each row i refers to the model trained solely on data of task i but tested on all the other tasks. From the experiment, we can appreciate several effects. First, if we isolate the first column of the grid, we can visualize the performance on the original first task as time passes (we can think of it as the stability of the network, as we will discuss in Section 4.2). Here, one can clearly see that feeding new concepts corrupts old ones. On the other hand, if we focus on the upper-triangular section of the matrix, we see the ability of the model to generalize knowledge. This stresses the fact that generalization is a key component in continual learning.
Intuitively, more “general” models might experience less forgetting (further hints on this path can be found in Section 4.2 and Section 4.3). The connected change in the weights for each task is reported in Figure 2.4 (for both the encoder and the decoder). As we can see, even a small change in the parameters dramatically impacts the stability-plasticity trade-off. As a reference, in Figure 2.3 we report the ground truths.

Figure 2.5: Results of the incremental training and test of the autoencoder model on the MNIST dataset split into 5 tasks. Each row i of the grid reports the performance of the model when trained on task i (or time ti) and tested on both old (left) and future (right) tasks. Training on previous tasks might unlock the intrinsic possibility to solve future tasks. This latest phenomenon is highlighted with the blue boxes. Ground truth in Figure 2.3. (In-figure labels: Catastrophic Forgetting; Generalization Ability; Train and test on the same task, optimal plasticity.)

Chapter 3
Continual Learning Framework

3.1 Definition and Settings

Since Continual Learning is a relatively new discipline, the community unfortunately still does not fully agree on a formal setting. This is also corroborated by the fact that incremental learning is under the research spotlight of several communities. Among the most active communities, we have NLP, Computer Vision, Reinforcement Learning, Neuroscience, and Robotics.
Each of these communities has a well-established history and standard protocols; therefore, accommodating everyone on common ground is still an ongoing process. In the following, however, we will introduce the most common definitions and settings shared in the Computer Vision literature.

There have been some attempts to formalize a setting for continual learning [van de Ven and Tolias, 2019, Lomonaco and Maltoni, 2017] through the definition of learning protocols and new terminologies. We will see these different learning paradigms in the following sections, but the core feature underlying incremental learning is that the data experiences some distributional shift, that is, the distribution of the data changes over time. This is sufficient to abruptly cause forgetting in connectionist models, but some settings are more prone to cause this phenomenon, while others are simpler to overcome.

The typical continual learning setting in computer vision is built on a split dataset, where each (usually non-overlapping) split is considered an incremental task. Therefore, each task contains data from several classes. Although this is not the only way to define a continual learning scenario, it is the most prominent one, as pointed out in these surveys [Mai et al., 2022, Delange et al., 2021, Parisi et al., 2019]. Let us give a more formal definition:

Formal Definition: Given a dataset D containing (in our case) images, we want to split D into a sequence of n disjoint tasks that can be learned sequentially by our model:

T = [t1, t2, . . . , tn]    (3.1)

where each task ti = (Ci, Di) is represented by a set of classes Ci = {c^i_1, c^i_2, . . . , c^i_{ni}} and training data Di. We use Nt to represent the total number of classes in all tasks up to and including task t: Nt = Σ_{i=1}^{t} |Ci|.
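The partition of Eq. (3.1) can be sketched in a few lines of Python (the helper name `make_tasks` and the toy labels are ours, assuming a split-MNIST-style protocol where consecutive classes are grouped into disjoint tasks):

```python
from collections import defaultdict

def make_tasks(samples, n_tasks):
    """Split (x, y) pairs into n_tasks disjoint tasks T = [t_1, ..., t_n].

    Each task t_i = (C_i, D_i): a set of classes and its training data.
    Classes are assigned to tasks in label order, as in split-MNIST.
    """
    classes = sorted({y for _, y in samples})
    per_task = len(classes) // n_tasks
    by_class = defaultdict(list)
    for x, y in samples:
        by_class[y].append((x, y))
    tasks = []
    for i in range(n_tasks):
        C_i = classes[i * per_task:(i + 1) * per_task]
        D_i = [s for c in C_i for s in by_class[c]]
        tasks.append((set(C_i), D_i))
    return tasks

# Toy stream: 100 fake samples with labels 0..9, split into 5 tasks.
data = [(f"img{k}", k % 10) for k in range(100)]
tasks = make_tasks(data, n_tasks=5)   # tasks[0] holds classes {0, 1}, etc.
```

The class sets are pairwise disjoint, which is exactly the Ci ∩ Cj = ∅ assumption used later for the task-incremental setting.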
As a side note, in the literature one would usually use the notation t to point at the current task (the task at time t) and t − 1 to point at the task before the current one.

A continual learning algorithm aims to model each task as time passes, exposing the model to each task at training time in a sequential fashion. Operatively: first, the algorithm is trained with mini-batches of patterns coming from task 1, and we record the system performance. Then, the model is exposed to task 2 data, and the process continues until task n. A visual example can be seen in Figure 3.1, where the MNIST dataset is split into 5 tasks with 2 classes each¹.

The previously defined learning scenario assumes a distinct transition among tasks; in this particular case, we implicitly assume a reset signal between two tasks. When such a signal is not present, and the transition between tasks is smooth, the complexity of the continual learning problem increases. If in this setting we also query the system for real-time responses, we are talking about streaming learning [Hayes et al., 2019]. This setting is more challenging because the models are allowed much less time to consolidate previously seen knowledge and are therefore more prone to catastrophic forgetting. Since this thesis focuses on computer vision problems, throughout the work we will stick to the introduced setting.

Fine-Grained: So far we have limited the notion of a task to a split of a dataset, but what happens if in a new task we experience new instances of previously seen classes? To this end, more complete settings for continual learning benchmarking have been proposed. One example is constituted by [Lomonaco and Maltoni, 2017]. Here the authors, along with a new dedicated dataset, introduce three different settings by mixing the experience of old and new data.
Specifically, here we report the different scenarios:

• New Instances (NI): new training patterns of the same classes become available in subsequent tasks. Here the model can experience new instances of old, previously seen classes, with the possibility of seeing the same objects in new poses and conditions (illumination, background, occlusion, etc.). A good model is expected to incrementally consolidate its knowledge about the known classes without compromising what it has learned before.

• New Classes (NC): new training patterns belonging to different, never seen classes become available in subsequent tasks. This is the classic scenario (the one we formally introduced), and a model should be able to deal with the new classes without losing accuracy on the previous ones.

• New Instances and Classes (NIC): new training patterns belonging both to known and new classes become available in subsequent training tasks. A good model is expected to consolidate its knowledge about the known classes and learn the new ones. This is the most complete and difficult scenario, since the addition of new classes poses the challenge of having good plasticity, while the introduction of new instances of old patterns asks for stability.

¹This particular setting takes the name of MNIST-split.

In our opinion, this categorization is preferable since it provides a more complete description of a continual learning benchmark. In fact, if we assume, as an example, that each task’s data is generated by an independent source, the task data will be continually augmented with new information. This scenario is captured by the NIC setting and cannot be handled by the standard definition. Unfortunately, due to the recent development of the field, the NC scenario is usually assumed.
3.1.1 Online CL vs Offline CL

So far we have introduced the basic notation; now we discuss how a model can be trained to face a continual stream of tasks and name these scenarios. The continual learning literature distinguishes two options: online training and offline training.

Online: In the online continual learning protocol, the algorithm is required to perform a single parameter update per pattern (or one forward pass). This is a very restrictive setting and requires maximum performance in knowledge consolidation from the continual learner. In fact, this scenario is quite challenging because of the nature of Stochastic Gradient Descent, i.e., the learning algorithm at the core of connectionist models. Here the system might not have enough time to assimilate a concept, thereby weakening its understanding and subsequent stability.

Offline: In the offline learning protocol, instead, we are free to perform several parameter updates per pattern, i.e., we are allowed to see an image more than once. For an incremental learner, this setting is a double-edged sword: on one side, it favors the consolidation of concepts, since setting a large number of epochs guarantees the correct training of a model. On the other side, if we do not introduce any forgetting-prevention mechanism, it corrupts the old informational content of the network, i.e., the system is more exposed to catastrophic forgetting.

In the following paragraphs, we will introduce some of the settings that are now, de facto, shared among all the research communities studying continual learning.

Figure 3.1: Schematic representation of the split-MNIST task protocol. Taken from [van de Ven and Tolias, 2019].

3.1.2 Task-Incremental vs Class-Incremental

Assuming an NC-type task flow, two sub-settings have been widely adopted by the research community and are well-defined.
These are the Task Incremental (TI) setting and the Class Incremental (CI) setting.

Task-Incremental: In the Task Incremental scenario, sometimes also referred to as the multi-head or task-aware (TAw) scenario, the learning happens sequentially, but at test time the learner also has access to the task label. This scenario is known as multi-head because a typical learning system can potentially dedicate a particular subsystem to each task, which can be specifically queried at test time through the task-label knowledge. Typically the subsystem is a classifier head on top of a backbone.

More formally, we consider task-incremental classification problems where at training time the learner has access to:

Dt = {(x1, y1, z1), (x2, y2, z2), . . . , (xmt, ymt, zmt)}

while at test time the learner has access to:

Dt = {(x1, z1), (x2, z2), . . . , (xmt, zmt)}

where x are the input features of a sample, y ∈ {0, 1}^Nt is the one-hot class ground-truth label vector corresponding to x, and z ∈ {0, 1}^|T| is the one-hot task ground-truth label vector. In a nutshell, during training for task t the learner only has complete access to Dt; we then assume a reset signal among tasks, i.e., Ci ∩ Cj = ∅ if i ≠ j, and at test time the learner has access to patterns and their task labels.

Class-Incremental: In the class-incremental scenario, instead, also known as single-head or task-agnostic (TAg), the system has access to both task and class labels during training time, but at test time it only has raw data. This constitutes a harder problem, but also a more realistic scenario.
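The operational difference between the two settings can be sketched with stub prediction heads (the helper names and the hard-coded scores below are ours, purely illustrative): with the task label, only the dedicated head is queried; without it, the classes of every head compete.

```python
def predict_taw(heads, x, z):
    """Task-aware: the task label z selects the dedicated head."""
    scores = heads[z](x)                 # scores over that task's classes only
    return max(scores, key=scores.get)

def predict_tag(heads, x):
    """Task-agnostic: no task label, so all heads' classes compete."""
    merged = {}
    for head in heads.values():
        merged.update(head(x))           # union of all class scores
    return max(merged, key=merged.get)

# Two stub heads (task 0 covers classes 0/1, task 1 covers classes 2/3).
# The newer head is overconfident: a typical failure mode that makes the
# task-agnostic setting harder than the task-aware one.
heads = {
    0: lambda x: {0: 0.6, 1: 0.4},
    1: lambda x: {2: 0.9, 3: 0.1},
}

x = "some image"
taw_pred = predict_taw(heads, x, z=0)    # the right head is forced -> class 0
tag_pred = predict_tag(heads, x)         # the overconfident head wins -> class 2
```

The same input is classified correctly once the task label pins down the head, and misclassified when all heads compete, which is why the class-incremental (TAg) setting is considered the harder benchmark.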
More formally, we consider class-incremental classification problems where at training time the learner has access to:

Dt = {(x1, y1, z1), (x2, y2, z2), . . . , (xmt, ymt, zmt)}

while at test time the learner has access only to:

Dt = {x1, x2, . . . , xmt}

where x are the input features of a sample, and y ∈ {0, 1}^Nt and z ∈ {0, 1}^|T| are the one-hot class and task ground-truth label vectors, the same as in the TAw setting.

Although TAw scenarios are more interesting from a pure machine learning perspective, the TAg setting is more realistic. For example, let us draw an analogy and consider a baby as our incremental algorithm. We want to teach the baby to recognize elements coming from a particular environment, for example kitchen accessories; here the task label would be “kitchen”. After the learning process has successfully terminated, whenever we ask the baby to recognize a fork, we do not need to provide a hint about the task (kitchen). In fact, the information of where the concept was learned should be irrelevant. This is also important because several objects can appear in, and be part of, several environments (tasks). For example, scissors can be found in the kitchen, but also in a studio. Therefore, knowledge itself should be independent of the context where it is learned, and we think that the class-incremental setting provides the more useful challenge.

Figure 3.2: Depiction of the Cumulative/Joint approach for continual learning. The model is trained with all the data up to the current task ti. The updates flow in the backbone and in all the heads up to hi.

3.2 Baselines

In this section, we will see the principal naive approaches and give an overview of the state-of-the-art. In particular, we will introduce the cumulative and the finetuning
methods, which constitute, respectively, the upper and the lower bound for evaluating continual learning strategies. Moreover, we consider our model to be composed of a backbone (or feature extractor) and a dedicated classifier (head) for each task. We do so in light of the majority of works in continual learning and computer vision, which share this very structure.

3.2.1 Cumulative

To evaluate a continual learning algorithm we need an optimal method that acts as an upper bound. The cumulative strategy (also known as joint training) constitutes the optimal continual learning strategy, since it mimics a learner with perfect memory. Indeed, if we have perfect memory we can recall the past and not experience forgetting; to this end, a recent work by [Knoblauch et al., 2020] proved theoretically that optimal continual learning requires perfect memory and is NP-hard. To have optimal memory of the past, an algorithm should be able to save all the data that has been seen. This is a very inconvenient requirement, and it must be avoided when considering the development of real lifelong learning systems: as the pace of real-world data generation grows, such a constraint cannot be satisfied. Training from scratch on the whole dataset could act as an upper bound, but it does not break the upper bound down per incremental step. To this end, the cumulative strategy accumulates all the data seen up to a certain task and trains the network from scratch, therefore providing an incremental upper bound.

Figure 3.3: Depiction of the Finetuning approach for continual learning. The model is trained exclusively with the data coming from the current task ti. The updates flow in the backbone and only in the hi head.
More formally, for the cumulative approach, the data of task i is defined to be:

ti = ⋃_{j=0}^{i} tj

for i = 1 . . . n, to complete the incremental setting. At each time ti the model is trained on the cumulative data, and therefore we are able to define the upper-bound performance for each task i. One observation is that the cumulative performance on the last task is equivalent to the performance of the model trained with the whole data. In Figure 3.2 we depict a visual example of the cumulative approach. Here, for each task, the backbone is always updated along with the heads of competence. However, the updates of the heads can also be shared among all the tasks, that is, each task’s data alters all heads’ parameters. Of course, this design choice does not favor the prevention of forgetting; instead, it allows the disruption of consolidated knowledge, and we won’t consider this case².

3.2.2 Finetuning

We previously saw the upper bound for CL, that is, the optimal continual learning approach for a benchmark. Now we introduce the finetuning approach, which constitutes the lower-bound methodology. Although we can argue that a random classifier would be the true lower bound, in practice we consider finetuning, which is devoid of any forgetting-prevention mechanism. In fact, it is equal to the practice of transfer learning among subsequent tasks and measures the base resilience of the model against incremental scenarios. We can also consider it a baseline to assess the generalization capabilities of a model.

A depiction of the method is given in Figure 3.3. Here, the model is trained sequentially and each task head is updated with the data of its competence; as in the cumulative approach, the backbone is always updated.

²This is valid for finetuning too.
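The two baselines can be sketched side by side (a minimal sketch; `fit` is a stand-in for "train from scratch and evaluate", and here it simply reports how much data the learner saw at each step):

```python
def cumulative(tasks, fit):
    """Upper bound: at step i, retrain from scratch on the union t_0 U ... U t_i."""
    results = []
    for i in range(len(tasks)):
        pool = [x for t in tasks[:i + 1] for x in t]   # t_i = U_{j<=i} t_j
        results.append(fit(pool))
    return results

def finetuning(tasks, fit):
    """Lower bound: at step i, train only on t_i (no forgetting prevention)."""
    return [fit(t) for t in tasks]

# Stub 'fit' that just measures the training pool; 5 toy tasks of 10 samples.
fit = len
tasks = [[f"t{i}_x{j}" for j in range(10)] for i in range(5)]

cum_sizes = cumulative(tasks, fit)    # pool grows: 10, 20, 30, 40, 50
ft_sizes = finetuning(tasks, fit)     # pool is constant: 10 at every step
```

The growing pool is what makes the cumulative baseline expensive (and unrealistic as a deployed system), while the constant pool is what leaves finetuning fully exposed to forgetting.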
3.3 State-of-the-art

In the following sections, we will introduce the main categorizations of the approaches proposed by the community. In particular, we will explain the core mechanisms and show the pros and cons of each category. Although there is no absolute preferred solution, some approaches are more explored than others and show more promising results.

3.3.1 Structural-based

Structural-based approaches, also known as architectural approaches or parameter-isolation methods, fight forgetting by altering the structural composition of the network itself. In particular, structural approaches instantiate dedicated modules as they experience new tasks. The first work falling into this category is perhaps Progressive Neural Networks (PNN) [Rusu et al., 2016], where the network is augmented with new connections spanning both height-wise and width-wise.

Figure 3.4: Architectural approaches for Continual Learning alter the structural properties of the network itself.

In the task-aware setting, this approach constitutes a convenient and naive solution to fight catastrophic forgetting. In fact, having the task label at test time allows us to correctly determine a dedicated subnetwork. In the task-agnostic setting, instead, we would not be able to select such a submodule. Due to this limitation, we see very few structural-based approaches tackling the class-incremental setting [Lee et al., 2020, Rajasegaran et al., 2019]. That said, structural approaches can be subdivided into Fixed Architecture (FA) and Dynamic Architecture (DA). FA only activates relevant parameters for each task without modifying the architecture [Mallya and Lazebnik, 2018, Kirkpatrick et al., 2017], while DA adds new parameters for new tasks while keeping old parameters unchanged [Yoon et al., 2018, Rusu et al., 2016]. Although architectural methods are very intuitive, they are
bulky. In fact, the major drawbacks are the expansion of the parameters, which can result in a memory-intensive method (DA), or the architectural limitation of a fixed parameter budget that can be saturated (FA).

3.3.2 Regularization-based

In parameter-based approaches, also known as weight-regularization or data-regularization approaches, forgetting is handled with procedures that regularize the parameter updates. Among the most famous ones are Elastic Weight Consolidation (EWC) [Kirkpatrick et al., 2017] and Synaptic Intelligence (SI) [Zenke et al., 2017]. EWC was the first regularization-based approach using second-order information. In particular, the procedure regularizes the updates through the Fisher information, which is computed at each parameter update.

Figure 3.5: Regularization approaches for Continual Learning alter only the parameter properties of the network.

In this category we can also find Learning without Forgetting (LwF) [Li and Hoiem, 2017], which is one of the most influential methods in the continual learning literature. LwF uses Knowledge Distillation (KD) [Hinton et al., 2015] on the logits of the network. The main strength of LwF lies in the fact that it does not use previously stored examples while still being purely data-driven. In particular, by storing the old model at time (t − 1), the method can distill old knowledge by forwarding the current data to the old model. Since the introduction of LwF, KD has been widely adopted by the continual learning community as part of new methodologies; among these works, we report [Douillard et al., 2020, Rebuffi et al., 2017, Buzzega et al., 2020, Pourkeshavarz and Sabokrou, 2022, Joseph et al., 2021, Wu et al., 2019, Banerjee et al., 2021, Javed and Shafait, 2018, Ahn et al., 2021, Dhar et al., 2019], but we are aware of many others that we do not report for brevity.
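As an illustration of the weight-regularization idea, an EWC-style penalty can be sketched as follows (a minimal sketch, not the full method: the helper name is ours, and we assume a diagonal Fisher estimate F has already been computed after the previous task):

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """EWC-style quadratic penalty anchoring important parameters.

    L_total = L_task(theta) + (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2,
    where theta* are the parameters after the previous task and F_i is the
    (diagonal) Fisher information, i.e. how important parameter i was.
    """
    return 0.5 * lam * float(np.sum(fisher * (theta - theta_star) ** 2))

# After task A, parameters are anchored at theta_star; the Fisher marks the
# first parameter as important and the second as free to move.
theta_star = np.array([1.0, -2.0])
fisher = np.array([5.0, 0.0])

free_move = ewc_penalty(np.array([1.0, 3.0]), theta_star, fisher)
anchored_move = ewc_penalty(np.array([2.0, -2.0]), theta_star, fisher)
```

Moving the unimportant parameter costs nothing (`free_move` is 0), while moving the important one by 1 costs 0.5 · 5 · 1² = 2.5, which is exactly how the penalty steers updates away from parameters the old task relied on.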
The main strength of regularization-based approaches lies in their data/architecture constraint-free nature. In fact, they usually come with an underlying mathematical justification. This property surely allows a more principled continual learning strategy, but it can make the learning procedure cumbersome: computing second-order information or estimating gradient directions might slow down learning or even hinder it.

3.3.3 Rehearsal-based
In rehearsal-based approaches (or data-replay approaches), the main mechanism exploited to overcome forgetting lies in the usage of a replay buffer for old exemplars. The methods falling under this category dedicate a memory cache to store data examples encountered during the incremental training, i.e., the system samples and stores images experienced in previous tasks. We can think of the buffer as long-term memory. In fact, what typically happens is that the memory is queried to augment the task at hand; that is, we retrieve old examples and inject them into the current data batch. This mechanism prevents forgetting by allowing the network to directly recall past examples; a visual depiction can be seen in Figure 3.6.
Perhaps the most famous work among rehearsal-based approaches is Experience Replay (ER) [Rolnick et al., 2019]. Inspired by the Reinforcement Learning community, its strategy is to replay data by randomly selecting old examples. An evolution of ER, Maximally Interfered Retrieval (ER-MIR) [Aljundi et al., 2019a], proposed a controlled sampling of the replays. Specifically, it retrieves the samples which are most interfered with, i.e., whose prediction will be most negatively impacted by the foreseen parameter update. Another famous method is Gradient Episodic Memory (GEM) [Lopez-Paz and Ranzato, 2017], in which the authors devised a system where the gradient update of the replay examples should follow the original direction.
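The replay mechanism shared by ER-style methods can be sketched as a fixed-capacity buffer filled with reservoir sampling and queried to augment the current batch. This is an illustrative minimal form; published ER variants differ in how they select, store and retrieve examples.

```python
import random

class ReplayBuffer:
    """Minimal ER-style memory (illustrative sketch).

    Reservoir sampling keeps each streamed example in memory with equal
    probability, so the buffer approximates a uniform sample of the
    whole past stream regardless of its length.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def store(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example  # overwrite a random slot

    def sample(self, k):
        # Replayed examples are mixed into the current training batch.
        return random.sample(self.data, min(k, len(self.data)))
```

During training, each incoming example is passed to `store`, and every optimization step draws a small `sample` to append to the current task's batch.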
A closely related mechanism is generative replay (GEN) [Shin et al., 2017, van de Ven and Tolias, 2018, Wu et al., 2018]. In this approach, old data is recorded in a buffer and then compressed; after that, a generative model such as a GAN [Goodfellow et al., 2014] generates a synthetic version of the old distribution and augments the data of the current task. The main disadvantages of generative replay are that it takes a long time to train and that it does not constitute a viable option for more complex datasets, given the current state of deep generative models. Another approach, devised by [Liu et al., 2020a], tries to overcome such limitations by generating intermediate features instead of the original data, decreasing the computational complexity of the generation procedure.

Figure 3.6: Rehearsal approaches for Continual Learning store old patterns to augment the data of the current task.

The pros of rehearsal-based approaches are their simplicity and effectiveness. In fact, the best-performing methods in continual learning exploit exemplars, as shown in the challenge review [Lomonaco et al., 2022], where the top approaches all used exemplars. The drawback of rehearsal continual learning is the usage of a memory buffer, which can be saturated as the number of tasks to be learned grows. To overcome this drawback, some methods propose the usage of representative exemplars [Hayes et al., 2019] and herding [Liu et al., 2020b] techniques aimed at reducing the amount of memory required. Here, an interesting work (GDumb) proposed by [Prabhu et al., 2020] offers a simple baseline for rehearsal systems and, due to its outstanding performance, questions the advancements of continual learning research itself. Besides its performance, the system is very simple. In particular, the model samples data as it experiences the stream of incoming task data.
It does so until it fills a rehearsal buffer, taking care to balance the proportion among classes. When the task data stream ends, the dumb learner (a simple MLP or CNN) is trained only on the buffer data. GDumb achieves state-of-the-art performance.

Chapter 4
Works

4.1 Smaller is Better: An Analysis of Instance Quantity/Quality Trade-off in Rehearsal-based Continual Learning
We begin our dissection by focusing on rehearsal-based methods, i.e., solutions where the learner exploits a memory to revisit past data. Due to their prominent performance and widespread usage, rehearsal systems are nowadays one of the preferred countermeasures to fight catastrophic forgetting.
So far, the focus of the community has been on finding smart methodologies to improve the incremental performance. Instead, we ask ourselves what happens if we boost the capacity of the memory buffer. How much impact does altering the data storable in memory have? Indeed, in this study, we propose an analysis of the memory quantity/quality trade-off, adopting various data reduction approaches to increase the number of instances storable in memory. By applying complex instance compression techniques to the original data, such as deep encoders, but also trivial approaches such as image resizing and linear dimensionality reduction, we offer a simple study on the trade-off.
Then we introduce the usage of Random Projections as a compression scheme and, through Extreme Learning Machines, offer a simple pipeline for resource-constrained continual learning, an appealing scenario where computational and memory resources are limited.
Continual Learning (CL) is increasingly at the center of attention of the research community due to its promise of adapting to the dynamically changing environment resulting from the huge increase in size and heterogeneity of the data available to learning systems. It has found applications in several domains. Its prime application, and still its most active field, is computer vision, and in particular object detection [Gidaris and Komodakis, 2018, Thrun, 1995b, Parisi et al., 2019]; however, it has since found applications in several other domains, such as segmentation [Cermelli et al., 2020, Michieli and Zanuttigh, 2019, Yu et al., 2020a], where each segmented class has to be learned in an incremental fashion, as well as in other fields, among which we mention Reinforcement Learning (RL) [Xu and Zhu, 2018, Lomonaco et al., 2020] and Natural Language Processing (NLP) [Gupta et al., 2020, Sun et al., 2020, de Masson d’Autume et al., 2019].
Ideally, the behaviour of CL systems should resemble human intelligence in its ability to incrementally learn in a dynamical environment [Hadsell et al., 2020], with minimal waste of resources, spatial or computational. The main problem encountered by these systems resides in the famous stability-plasticity dilemma of neuroscience, resulting in the so-called catastrophic forgetting [McCloskey and Cohen, 1989], a phenomenon where new information dislodges or corrupts previously learned knowledge, resulting in the deterioration of the ability to solve previously learned tasks.
Solutions to this problem typically incur an increase in resource requirements [Lomonaco et al., 2022], both because of CL’s very nature (the more tasks arrive, the more data the agent needs to process) and because of the nature of the systems that try to solve it, both in the increased complexity of the typically deep learning models and in the time and space requirements of continuously learning multiple models.
This problem becomes particularly evident in rehearsal-based methods.
Rehearsal-based methods, i.e., approaches that leverage a memory buffer to cope with catastrophic forgetting, are emerging as the most effective methodology to tackle CL. Their performance, backed by extensive empirical evidence [Lomonaco et al., 2022], also finds a theoretical justification in Knoblauch and co-workers’ finding that optimally solving CL would require perfect memory of the past [Knoblauch et al., 2020]. In fact, if we were able to completely re-train a new system with all previous data every time a new task arrives, Continual Learning would not appear to be any different from any other learning problem. However, this approach is both spatially and computationally infeasible for most real-world problems, and we can argue it is precisely these memory and computational limitations that characterize CL and distinguish it from other learning problems.
Our investigation aims to analyze the trade-offs in limited-memory CL systems.

Figure 4.1: Our work analyzes the optimal instance quantity/quality trade-off in memory buffers of rehearsal-based Continual Learning systems. We carry out our analysis by applying several dimensionality reduction schemes to increase the quantity of storable data.

In particular, we focus on the quantity/quality trade-off for memory instances. We do so through the analysis of several dimensionality-reduction schemes applied to data instances that allow us to increase the number of examples storable in our fixed-capacity memory.
In particular, we adopted deep learning encoders, such as a variation of ResNet18 [He et al., 2016] and Variational Autoencoders (VAE) [Kingma and Welling, 2014], the simple yet surprisingly effective extreme resizing of image data, and, lastly, we explored Random Projections for dimensionality reduction. The latter scheme turns out to be very effective in low-memory scenarios while also reducing the model’s parameter complexity. Indeed, we will show that a variation of Extreme Learning Machines (ELM) offers a simple yet effective solution for resource-constrained CL systems.
Our analysis will focus on computer vision tasks and use GDumb [Prabhu et al., 2020] as a rehearsal baseline. GDumb is a model that has been proposed to question the community’s progress in CL thanks to the fact that, despite its outstanding simplicity, it was still able to provide state-of-the-art performance. Further, its simplicity also results in high versatility, as it proposes a general CL formulation comprising all task formulations in the literature. GDumb is fully rehearsal-based, and it is composed of a greedy sampler and a dumb learner; that is, the system does not introduce any particular strategy in the selection of replay data. Therefore, it represents the ideal candidate method to carry out our analysis.
The experimental findings highlighted in this study are multiple: first, we show that when the memory buffer is fixed and extreme resizing of instance data is applied, we can easily push the state of the art of CL rehearsal systems by a minimum of +6% to a maximum of +67% in terms of final accuracy. This surprising result suggests that the optimal trade-off between data quantity and quality is severely skewed toward the former and that, in general, the informational content required to

Figure 4.2: Depiction of the three main dimensionality reduction techniques analyzed.
In (a), random projection (RP): each image is vectorized (vi) and then orthogonally projected through a random matrix Q into v′i. In (b), the encoder φ outputs a latent vector v′i (as in VAEs) or a noise-free / shrunk image x′i (as in CutR). In (c), we adopt a simple image resizing strategy through standard bilinear interpolation.

correctly classify images in standard datasets is relatively low. Then, we analyze the consumption of resources of rehearsal CL systems as we saturate the rehearsal buffer, and show that ELM offers a clear solution for CL systems constrained to very low-resource environments.

Related Works
Following some recent surveys [Parisi et al., 2019, Hadsell et al., 2020, Mundt et al., 2020], we divide CL approaches into three main categories: regularization-based approaches, data rehearsal-based approaches and architectural-based approaches. Although a few novel theoretical frameworks based on meta-learning have been introduced recently [Hadsell et al., 2020], the majority still fall within these categories (or in a mixture of them).
Regularization-based approaches address catastrophic forgetting by controlling each parameter’s importance through the subsequent tasks, by means of the addition of a finely-tuned regularizing loss criterion. Elastic Weight Consolidation (EWC) [Kirkpatrick et al., 2017] was the first well-established approach of this class. It uses Fisher information to estimate each parameter’s importance while discouraging the update of parameters with the greatest task specificity. Learning without Forgetting (LwF) [Li and Hoiem, 2017] exploits the concept of “knowledge distillation” to preserve and regularize the output for old tasks.
More recently, Learning without Memorizing (LwM) [Dhar et al., 2019] adds to the loss an information-preserving penalty exploiting attention maps, Continual Bayesian Neural Networks (UCB) [Ebrahimi et al., 2020] adapts the learning rate according to the uncertainty defined in the probability distribution of the weights in the network, while Pomponi et al. [Pomponi et al., 2020] propose a regularization of the network’s latent embeddings.

Rehearsal-based    Rehearsal-based solutions allocate a memory buffer of a predefined size and devise some smart schemes to store previously used data to be replayed in the future, i.e., to be added to future training samples. One of the first methodologies developed is Experience Replay (ER) [Rolnick et al., 2019], which stores a small subset of previous samples and uses them to augment the incoming task data. Aljundi et al. [Aljundi et al., 2019a] propose an evolution of ER which takes into consideration Maximal Interfered Retrieval (ER-MIR). Their proposal lies between rehearsal and regularization methods; its strategy is to retrieve the samples that are most interfered, i.e., whose prediction will be most negatively impacted by the foreseen parameter update. Among other mixed approaches, Rebuffi et al. [Rebuffi et al., 2017] propose a method which simultaneously learns strong classifiers and a data representation (iCaRL). Gradient Episodic Memory (GEM) [Lopez-Paz and Ranzato, 2017] and its improved version Averaged-GEM (AGEM) [Chaudhry et al., 2019a] exploit the memory buffer to constrain the parameter updates and store the previous samples as trained points in the parameter space, while Gradient-based Sample Selection (GSS) [Aljundi et al., 2019a] diversifies/prioritizes the gradient of the examples stored in the replay memory. Finally, a recent method proposed by Shim et al.
[Shim et al., 2021] scores memory data samples according to their ability to preserve latent decision boundaries (ASER).

Architectural-based    Architectural methods alter their parameter space for each task. The most influential architectural-based approach is arguably Progressive Networks (PN) [Rusu et al., 2016], where a dedicated network is instantiated for each task, while Continual Learning with Adaptive Weights (CLAW) [Adel et al., 2020] grows a network that adaptively identifies which parts to share between tasks in a data-driven approach. Note that, in general, the approaches that use incremental modules suffer from the lack of task labels at test time, since there is no easy way to decide which module to adopt.

Method
Before introducing the dimensionality reduction approaches adopted in our quantity/quality analysis, we have to introduce the CL scenario considered and its task composition. Unfortunately, the community has not yet converged to a unique standard way to define a CL setting [van de Ven and Tolias, 2019]. Here we adopt GDumb’s formulation, which is the most general one and specifically resembles Lomonaco and Maltoni’s formulation [Lomonaco and Maltoni, 2017]. In particular, we focus on the new class (NC)-type scenario [Lomonaco and Maltoni, 2017], where each task Ti introduces data instances of CTi new, previously unseen, classes. More formally, a dataset benchmark D, containing examples from CD classes, is divided into n tasks. Each task Ti, with i = 1 . . . n, carries a set of examples Ti = {XTi, YTi} whose classes are previously unseen, i.e., YTj ∩ YTi = ∅ for j = 1 . . . i − 1, with YTi = {c1 . . . cTi}. In other words, the model experiences a shift in the distribution of data as we train on each new task. We also consider the more realistic class incremental scenario (CI), that is, we are not allowed to know task labels at test time.
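The NC scenario described above can be sketched in a few lines: the class set is partitioned into n disjoint groups, and the examples of each group form one task. `make_nc_tasks` is a hypothetical helper, shown under the simplifying assumption that the number of classes divides evenly among tasks.

```python
def make_nc_tasks(labels, n_tasks):
    """Split a dataset into new-class (NC) tasks.

    Each task introduces previously unseen classes, so the label sets
    of any two tasks are disjoint. `labels` holds the per-example class
    id; the result gives, per task, its class set and example indices.
    Assumes len(set(labels)) is divisible by n_tasks for simplicity.
    """
    classes = sorted(set(labels))
    per_task = len(classes) // n_tasks
    tasks = []
    for i in range(n_tasks):
        task_classes = set(classes[i * per_task:(i + 1) * per_task])
        idx = [k for k, y in enumerate(labels) if y in task_classes]
        tasks.append({"classes": task_classes, "indices": idx})
    return tasks
```

Training then iterates over `tasks` in order, which produces exactly the distribution shift described in the text: the data seen at step i never contains classes from step j ≠ i.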
As incremental approach, we use the recently proposed GDumb, which is composed of a simple learner and a greedy balancer. That is, given a fixed amount of memory M, each instance of task data is randomly sampled in order to balance class instances in the memory, so that, at the end of the task Ti experience, the memory contains an equal number of instances of all previously encountered classes, i.e., each class has ⌊M / (CD · i)⌋ instances in memory.
Besides providing state-of-the-art performance, GDumb has been proposed as a standard baseline to question our progress in continual learning research, since after experiencing a task, the simple learner (such as a ResNet18 [He et al., 2016] or an MLP) is trained only with memory data, making GDumb a fully rehearsal-based approach with random filtering of incoming data, and thus the ideal candidate to carry out our study. In the following paragraphs, we briefly describe all the strategies adopted for dimensionality reduction.

Random Projections (RP)
Extreme Learning Machines (ELM) [Huang et al., 2006] are a set of algorithms that exploit random projections as a dimensionality reduction technique to preserve computational and spatial resources while learning. ELM were introduced in 2006 and have recently found application in neuroscience [Qureshi et al., 2016, Lama et al., 2017] and in other problems such as molecular biology [Chen et al., 2020]. The idea can be roughly described as a composition of two modules, where the first one performs a random projection of the data, while the second one is a learning model. The appealing property of RP lies in the Johnson-Lindenstrauss lemma [Johnson, 1984], which states that given a set of points in a high-dimensional space, there is a linear map to a subspace that roughly preserves the distances between data points up to some approximation factor.
The Johnson-Lindenstrauss lemma guarantees that we can obtain a low-distortion dimensionality reduction by multiplying each instance vector by a semi-orthogonal random matrix Qm×n in the (m, n) Stiefel manifold. More formally, let xi be an image of the current task of width, height and number of channels w, h, and c respectively; then the size of xi is n = hwc. We can consider its vectorization vi ∈ Rn and its compressed representation

v′i = Q vi    s.t.    Q Q^T = Im    (4.1)

with v′i ∈ Rm.
The usage of ELM unexpectedly unlocks two main advantages: first, it allows us to exploit the dimensionality reduction by increasing the number of data instances storable in the memory buffer. Secondly and, more importantly, it allows us to use models with significantly fewer parameters. On the other hand, the approach loses coordinate contiguity and, with that, shift co-variance, rendering convolutional approaches inapplicable.
After the random projection, data instances will be forwarded to the greedy sampler of GDumb to fill the memory M. Then, we perform a rehearsal train with any MLP-like architecture, resulting in an order-of-magnitude reduction in the amount of parameters needed to process visual data, allowing the usage of CL rehearsal-based solutions in very low-resource scenarios.

Deep Encoders
Deep encoders are neural models φ that take as input an image xi and, depending on the structure of the model, can output either a latent vectorial representation v′i or a squared feature map, which we consider as a noise-free shrunk image x′i. Figure 4.2 (b) visually reports the two possible encoding scenarios. In this work, we adopt a Variational AutoEncoder (VAE) [Kingma and Welling, 2014] for the first case and a pretrained ResNet18 [He et al., 2016] cut up to a predefined block (CutR) as a prototype for the second.
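The random projection of Eq. (4.1) can be sketched as follows, assuming m < n: the QR decomposition of a Gaussian matrix yields a semi-orthogonal Q with orthonormal rows, so that Q Q^T = Im. This is an illustrative NumPy sketch, not the exact implementation used in the experiments.

```python
import numpy as np

def random_semi_orthogonal(m, n, seed=0):
    """Draw Q (m x n) with orthonormal rows, i.e. Q @ Q.T = I_m (m <= n)."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((n, m))
    q, _ = np.linalg.qr(g)   # reduced QR: q is (n, m), orthonormal columns
    return q.T               # transpose -> orthonormal rows

def project(x_image, Q):
    """Vectorize an image of shape (h, w, c) and compress it to R^m."""
    v = x_image.reshape(-1)  # v_i in R^n, n = h*w*c
    return Q @ v             # v'_i in R^m, as in Eq. (4.1)
```

Drawing Q once per run and applying `project` to every incoming image gives the compressed instances that are then handed to the greedy sampler.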
CIFAR10

Method                                  Acc@600KiB    Acc@1.5MiB    Acc@3MiB
EWC [Kirkpatrick et al., 2017]          17.9 ± 0.3    17.9 ± 0.3    17.9 ± 0.3
GEM [Lopez-Paz and Ranzato, 2017]       16.8 ± 1.1    17.1 ± 1.0    17.5 ± 1.6
AGEM [Chaudhry et al., 2019a]           22.7 ± 1.8    22.7 ± 1.9    22.6 ± 0.7
iCARL [Rebuffi et al., 2017]            28.6 ± 1.2    33.7 ± 1.6    32.4 ± 2.1
ER [Rolnick et al., 2019]               27.5 ± 1.2    33.1 ± 1.7    41.3 ± 1.9
ER-MIR [Aljundi et al., 2019a]          29.8 ± 1.1    40.0 ± 1.1    47.6 ± 1.1
ER5 [Aljundi et al., 2019a]             -             -             42.4 ± 1.1
ER-MIR5 [Aljundi et al., 2019a]         -             -             49.3 ± 0.1
GSS [Aljundi et al., 2019c]             26.9 ± 1.2    30.7 ± 1.2    40.1 ± 1.4
ASER [Shim et al., 2021]                27.8 ± 1.0    36.2 ± 1.1    43.1 ± 1.2
ASERµ [Shim et al., 2021]               26.4 ± 1.5    36.3 ± 1.2    43.5 ± 1.4
GDumb [Prabhu et al., 2020]             35.0 ± 0.6    45.8 ± 0.9    61.3 ± 1.7
Resize (8 × 8)                          55.5 ± 0.2    64.5 ± 0.2    73.1 ± 0.2
ELM (128)                               43.0 ± 0.3    47.1 ± 0.2    50.0 ± 0.2
CutR (8 × 8)                            54.4 ± 0.2    60.9 ± 0.2    71.6 ± 0.6

Table 4.1: CIFAR10 experiments (5 runs)

VAE    Variational Autoencoders [Kingma and Welling, 2014] have been introduced as an efficient approximation of the posterior for arbitrary probabilistic models. A VAE is essentially an autoencoder that is trained with a reconstruction error between the input and decoded data, plus a variational objective term attempting to impose a normal latent-space distribution. The variational loss is typically computed through a Kullback-Leibler divergence between the latent-space distribution and the standard Gaussian; the total loss can be summarized as follows:

L = Lr(xi, x̂i) + LKL(q(zi|xi), p(zi))    (4.2)

given an input data image xi, the conditional distribution q(zi|xi) of the encoder, the standard Gaussian distribution p(zi), and the reconstructed data x̂i.
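Eq. (4.2) can be sketched numerically for the common diagonal-Gaussian encoder, where the KL term has a closed form; here Lr is taken to be the mean squared error, one standard choice among several, as an illustrative assumption.

```python
import numpy as np

def vae_loss(x, x_hat, mu, logvar):
    """Sketch of Eq. (4.2) for a diagonal-Gaussian encoder.

    Reconstruction term Lr: squared error between input and decoding.
    KL term (closed form): KL(N(mu, exp(logvar)) || N(0, I))
        = -0.5 * sum(1 + logvar - mu^2 - exp(logvar))
    """
    rec = ((x - x_hat) ** 2).sum()
    kl = -0.5 * (1.0 + logvar - mu ** 2 - np.exp(logvar)).sum()
    return rec + kl
```

The loss vanishes only when the decoding is exact and the encoder posterior already matches the standard Gaussian prior; any deviation in either term pushes it above zero.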
We use the encoding part of a VAE pretrained on a dataset by feeding each incoming image and retrieving the vectorial output representation v′i; the data point is then forwarded to GDumb’s greedy sampler to feed M.

ImageNet100                             ImageNet100                 CIFAR100
Method                                  Acc@12MiB     Acc@24MiB     Acc@3MiB      Acc@6MiB
AGEM [Chaudhry et al., 2019a]           7.0 ± 0.4     7.1 ± 0.5     9.05 ± 0.4    9.3 ± 0.4
ER [Rolnick et al., 2019]               8.7 ± 0.4     11.8 ± 0.9    11.02 ± 0.4   14.6 ± 0.4
EWC [Kirkpatrick et al., 2017]          3.2 ± 0.3     3.1 ± 0.3     4.8 ± 0.2     4.8 ± 0.2
GSS [Aljundi et al., 2019c]             7.5 ± 0.5     10.7 ± 0.8    9.3 ± 0.2     10.9 ± 0.3
ER-MIR [Aljundi et al., 2019a]          8.1 ± 0.3     11.2 ± 0.7    11.2 ± 0.3    14.1 ± 0.2
ASER [Shim et al., 2021]                11.7 ± 0.7    14.4 ± 0.4    12.3 ± 0.4    14.7 ± 0.7
ASERµ [Shim et al., 2021]               12.2 ± 0.8    14.8 ± 1.1    14.0 ± 0.4    17.2 ± 0.5
GDumb [Prabhu et al., 2020]             13.0 ± 0.3    21.6 ± 0.3    17.1 ± 0.2    25.7 ± 0.7
Resize (8 × 8)                          33.6 ± 0.2    33.6 ± 0.3    38.5 ± 0.4    45.1 ± 0.2
ELM (128)                               13.3 ± 0.2    15.4 ± 0.4    22.4 ± 0.3    25.7 ± 0.3
CutR (8 × 8)                            36.25 ± 0.4*  36.27 ± 0.5*  32.6 ± 0.6    37.1 ± 0.2

Table 4.2: ImageNet100 and CIFAR100 experiments (5 runs); the first two accuracy columns refer to ImageNet100, the last two to CIFAR100.

CutR    As our second encoding approach, we use a pretrained ResNet18 [He et al., 2016] cut up to a predefined block. ResNet models are Convolutional Neural Networks (CNNs) introducing skip connections between convolutional blocks to alleviate the so-called vanishing gradient problem [Hochreiter, 1998] afflicting deep architectures. The idea is to use the cut ResNet18 as a filtering module that outputs a smaller feature map, giving us x′i. In fact, we cut the network towards the later blocks, since neurons in the last layers encode more structured semantics with respect to the early ones [Olah et al., 2017].
Therefore, we are able to extract semantic knowledge from unseen images leveraging transfer learning [Tan et al., 2018], that is, we exploit the ability of a model to generalize over unseen data. We refer to this method with the name CutR(esnet18). We use CutR instance encoding by feeding each image belonging to the current task and retrieving the shrunk output x′i, which is then forwarded to the greedy sampler module of GDumb to fill the memory M.
In our analysis, we adopted the less resource-hungry VAE scheme for datasets where shift co-variance is not as important, such as MNIST, in which the digits are centered in the image and thus most state-of-the-art approaches use an MLP as classifier. In all other instances, we used the CutR scheme.

Resizing
We also used the simplest instance reduction approach one can think of, i.e., resizing the images to very low resolution through standard bilinear interpolation. The resized images are then fed to the sampler of GDumb to balance the classes in M, and all training and prediction are performed on the lowered-resolution images.
Independently of the approach adopted, all data instances are reduced before storing them in memory M; then we use GDumb’s greedy sampler to select and balance class instances, and finally, we use a suitable learner to fit the memory data and assess the performance. In general, following GDumb, we adopt ResNet18 for large-scale image classification tasks for all approaches that maintain shift co-variance, reverting to a simple MLP for approaches without shift co-variance like RP.

Experiments
We performed our analysis on the following standard benchmarks:
• MNIST [LeCun et al., 1998]: the dataset is composed of 70000 28 × 28 grayscale images of handwritten digits divided into 60000 training and 10000 test images belonging to 10 classes.
• CIFAR10 [Krizhevsky, 2009]: consists of 60000 RGB images of objects and animals.
The size of each image is 32 × 32, divided into 10 classes with 6000 images per class. The dataset is split into 50000 training images and 10000 test images.
• CIFAR100 [Krizhevsky, 2009]: is composed of 60000 32 × 32 RGB images subdivided into 100 classes with 600 images each. The dataset is split into 50000 training images and 10000 test images.
• ImageNet100 [Deng et al., 2009]: the dataset is composed of 64 × 64 RGB images divided into 100 classes; it comprises 60000 images split into 50000 training and 10000 test.
• Core50 [Lomonaco and Maltoni, 2017]: the dataset is composed of 128 × 128 RGB images of domestic objects divided into 50 classes. The set consists of 164866 images split into 115366 training and 49500 test.
Following [Prabhu et al., 2020], we use final accuracy as the evaluation metric throughout the work. The metric is computed at the end of all tasks against a test set of never-before-seen images composed of an equal number of instances per class. This allows us to directly compare against the largest number of competitors in the literature.
All the experiments have been conducted on a machine with an Intel i7-4790K CPU, 32GB RAM and a 4GB GeForce GTX 980, running PyTorch 1.8.1+cu102.

Parameter Sensitivity
In the first experiment, we compared different dimensionality reduction strategies as we altered their parameters. The analysis was conducted on three different datasets: MNIST, CIFAR10 and ImageNet100. In this evaluation, we fixed the amount of memory buffer used for GDumb during rehearsal training, and we measured the final accuracy as the parameters varied for each dimensionality reduction method. In particular, we subdivided both the MNIST and CIFAR10 datasets into 5 tasks of 2 classes each, with a 600 KiB dedicated memory buffer, while ImageNet100 was divided into 10 tasks of 10 classes each, with a 12 MiB memory buffer.
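The head-room gained by reducing instance size is easy to quantify: with a fixed byte budget, the number of storable instances scales inversely with the per-instance footprint. This back-of-the-envelope sketch assumes uint8 storage (1 byte per channel value) with no per-instance overhead; the thesis's exact memory accounting may differ.

```python
def memory_slots(budget_kib, h, w, c=3, bytes_per_value=1):
    """How many raw images of shape (h, w, c) fit in a fixed budget,
    assuming 1 byte per channel value and no bookkeeping overhead."""
    return (budget_kib * 1024) // (h * w * c * bytes_per_value)

# 600 KiB CIFAR10 budget: full resolution vs. 8x8 resized instances.
full = memory_slots(600, 32, 32)   # 32*32*3 = 3072 bytes per image
tiny = memory_slots(600, 8, 8)     # 8*8*3   =  192 bytes per image
```

Under these assumptions, the 600 KiB buffer holds 200 full-resolution CIFAR10 images but 3200 of the 8 × 8 versions, a 16-fold increase in instance count for the same budget.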
Figure 4.3 plots the performance of the various schemes as we reduce the dimensionality of the instances and thus increase their number in the allocated memory. The orange line represents the performance of the resize scheme. For the MNIST dataset, we considered nine different target sizes1: x′i ∈ {27 × 27, 24 × 24, 20 × 20, 16 × 16, 12 × 12, 8 × 8, 4 × 4, 2 × 2, 1 × 1}. We performed the same resizing for CIFAR10 data. We did not report a CIFAR100 analysis, since the data format is the same as CIFAR10 and the results would be analogous. For ImageNet100, we resized each instance to x′i ∈ {32 × 32, 24 × 24, 16 × 16, 6 × 6, 4 × 4, 2 × 2}.
The green line of Figure 4.3 represents the deep encoders. In particular, for MNIST we used a VAE [Kingma and Welling, 2014] pretrained on KMNIST [Clanuwat et al., 2018] and analyzed the performance of GDumb with compressed instances as we altered the size of the latent embedding vector, v′i ∈ {128, 64, 32, 16}. On the other hand, for the CIFAR10 and ImageNet100 datasets we considered different parameters for CutR. In particular, we cut the ResNet18 up to the sixth layer to get a 4 × 4 output, up to the fifth to have an 8 × 8 encoding, and lastly up to the third block to get a 16 × 16 feature map.
The CutR ResNet18 has been pretrained on the complete ImageNet, thus the results on the ImageNet100 benchmark can be biased. We denote these biased results with CutR*.
Lastly, the blue line of Figure 4.3 reports the accuracy of Random Projection followed by an MLP classifier. We recall that this kind of architecture is a variation of an Extreme Learning Machine (ELM), and we will therefore refer to it with the term ELM. We analyzed the final accuracy as the size of the random projection changes; in particular, the embedding sizes considered are v′i ∈ {512, 256, 128, 64, 32, 16} for all the datasets.
For all the experiments on MNIST data, we used a 2-layer MLP with 400 hidden nodes as the learning module, while we used a ResNet18 [He et al., 2016] for all the other analyses, with the exception of the ELM scheme, which maintains the 2-layer MLP model throughout. We did not perform any hyperparameter tuning on the learning module, in accordance with the GDumb [Prabhu et al., 2020] experimental protocol. For completeness, we report the learning parameters: the system uses an SGD optimizer, a fixed batch size of 16, learning rates [0.05, 0.0005], an SGDR [Loshchilov and Hutter, 2017] schedule with T0 = 1, Tmult = 2 and a warm start of 1 epoch. Early stopping with a patience of 1 cycle of SGDR is used, along with standard data augmentation (normalization of data). GDumb uses cutmix [Yun et al., 2019] with p = 0.5 and α = 1.0 for regularization on all datasets except MNIST.

MNIST

Method                                  Acc@382KiB
GEN [Hsu et al., 2018]                  75.5 ± 1.3
GEN-MIR [Aljundi et al., 2019a]         81.6 ± 0.9
ER [Rolnick et al., 2019]               82.1 ± 1.5
GEM [Lopez-Paz and Ranzato, 2017]       86.3 ± 1.4
ER-MIR [Aljundi et al., 2019a]          87.6 ± 0.7
GDumb [Prabhu et al., 2020]             91.9 ± 0.5
Resize (8 × 8)                          97.2 ± 0.1
ELM (128)                               95.0 ± 0.4
VAE (32)                                94.6 ± 0.1

Table 4.3: MNIST final accuracy (5 runs) analysis as we vary the memory for all schemes considered.

1 Throughout the work, we omit the channel component for brevity.

As we can also see from Figure 4.3, all the strategies considered unlock performance greatly above GDumb, thus suggesting that the quantity/quality trade-off is severely skewed toward quantity, since each dimensionality reduction technique greatly increases the amount of data instances that can be stored in the memory buffer. It is also evident that the simple resizing strategy gives the best performance, improving GDumb by +6% on MNIST and roughly by +20% on both the CIFAR10 and ImageNet100 datasets.
Moreover, we chose to consider extreme levels of encoding.
We did so to find the level of compression that irreversibly corrupts spatial information and thus makes learning impossible. Surprisingly, it turns out that a 2 × 2 resizing still works on CIFAR10 data, with performances above GDumb, while a 1 × 1 resize is still better than a random classifier, whose performance would be 20% final accuracy. This is strong evidence that the amount of data storable in the memory buffer plays a central role, but also that the CIFAR10 dataset constitutes an unrealistic benchmark and should not be used to assess novel methodologies in the future.

Figure 4.3: At top-left, the accuracy analysis on the MNIST dataset; at top-right, the analysis on CIFAR10; at the bottom, ImageNet100. The state-of-the-art (SOTA) method is plain GDumb with an MLP as incremental learner in the MNIST experiment and a ResNet18 in the others. The number of instances in memory (i.e. the x axis) is in log scale. We report the results of 5 runs.

After choosing and fixing the optimal parameters for each compression scheme, we study the performance of the rehearsal system as we alter the quantity of memory allocated. In Tables 4.3 and 4.2 we compute the final accuracy for all the datasets previously considered, with the addition of CIFAR100, which shows an increase of 20% in performance. The amount of memory dedicated to the rehearsal buffer has been chosen to be consistent with several other methods and with GDumb, allowing us to compare GDumb's performance with optimized memory schemes against other methods. As we can see, all memory optimizations still provide huge advantages as the memory buffer varies, suggesting again that instance quantity plays a fundamental role in rehearsal systems, even with extreme encoding settings.
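The correspondence between a fixed byte budget and the number of memory slots is simple arithmetic; assuming one byte per channel value, the figures below reproduce the CIFAR10 row of Table 4.6 (600 KiB holds 200 full-resolution instances):

```python
def memory_slots(budget_kib, shape, bytes_per_value=1):
    """Number of instances of the given (H, W, C) shape fitting in budget_kib KiB."""
    h, w, c = shape
    return (budget_kib * 1024) // (h * w * c * bytes_per_value)

full = memory_slots(600, (32, 32, 3))  # original CIFAR10 resolution -> 200 slots
tiny = memory_slots(600, (8, 8, 3))    # the 8x8 resize stores 16x as many
```

This sixteenfold increase in stored instances is the quantity side of the trade-off discussed above.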
Finally, we note that the deep models used for classification have a large number of degrees of freedom and require a large number of instances to be properly trained to capture the complexity of the task at hand. Simpler, lower-dimensionality instances allow both for more instances and for simpler classifiers with fewer parameters, without losing a lot of informational content.

Figure 4.4: We show the total amount of KiB used by the whole CL system. We measure the consumption as we saturate the rehearsal memory plus the storage of model parameters. The x-axis is in log scale.

Resource Consumption

With the second experiment, we wanted to analyze performance versus the total memory requirement of each approach. Here, we increased the number of instances in the memory buffer and added to the total consumption the working memory used by the classifier to store (and train) its parameters.

We considered three different scenarios: first we used the plain GDumb CL system without dimensionality reduction (representing GDumb), then we used ELM (with a fixed embedding size of v′_i = 128), and lastly the resizing scheme (images resized to x′_i = 8 × 8).
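The fixed random-projection front-end of the ELM variant can be sketched as follows; the orthogonal matrix and the 128-dimensional target size follow the text above, while the concrete construction via QR decomposition is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal_projection(d_in, d_out):
    """Fixed (untrained) projection whose rows are orthonormal."""
    m = rng.standard_normal((d_in, d_out))
    q, _ = np.linalg.qr(m)   # q has orthonormal columns
    return q.T               # shape (d_out, d_in)

# Compress a flattened 32x32x3 image to the 128-dim encoding kept in the buffer;
# only the small MLP stacked on top of v is ever trained.
P = random_orthogonal_projection(32 * 32 * 3, 128)
x = rng.standard_normal(32 * 32 * 3)
v = P @ x
```

Because the projection is fixed, encodings stored early in the stream remain comparable with those produced later, which is what makes them safe to replay.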
We selected the best parameters resulting from the previous experiment.

We then assessed performance and resource usage on a new dataset, namely Core50 [Lomonaco and Maltoni, 2017]. The reason for using Core50 to validate our findings is twofold: first, we test again whether the quantity of extremely encoded data plays a central role in our rehearsal scheme; second, we measure the performance and resource usage of a CL system on a more complex set of tasks. We divided the dataset into 10 tasks of 5 classes each.

In Figure 4.4, we report the results of this experiment. We can see that extreme levels of resizing still provide optimal results on all the datasets considered. One striking finding is that on Core50 with extreme resizing, even though the size was not optimized for the dataset, the final accuracy is increased by +67% with respect to GDumb. Second, we note that ELMs constitute a viable solution in low-resource scenarios. Indeed, we can surpass the performance of GDumb in low-memory scenarios where even just the classifier used in other approaches could not fit in the allocated memory, much less the rehearsal buffer. This is clearly observed in the Core50 results: randomly projecting image data and learning in a low-resource scenario provides a boost of +34% in final accuracy.
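The accounting behind Figure 4.4 can be sketched as buffer size plus float32 parameter storage; the parameter counts below are rough, illustrative assumptions rather than measured values:

```python
def total_kib(n_params, buffer_kib):
    """Total footprint: float32 parameters (4 bytes each) plus rehearsal buffer."""
    return n_params * 4 / 1024 + buffer_kib

elm_mlp = total_kib(1_700_000, 600)    # small 2-layer MLP on 128-dim encodings
resnet18 = total_kib(11_200_000, 600)  # ResNet18-scale incremental learner
```

Even before counting the working memory needed for training, the classifier's parameters can dwarf a few-hundred-KiB buffer, which is the dissonance discussed next.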
Finally, it is worth noting that there is a striking dissonance in the literature on rehearsal-based methods: the narrative around buffer-memory sizes revolves around decisions among sizes of the order of 300 KiB to 600 KiB, while the very same systems adopt complex classifiers using several megabytes of memory just for the learned parameters, and on the order of gigabytes of working memory for learning. In a real constrained-memory scenario, a simpler classifier with more instances offers a clear advantage.

Conclusion

In this study, we analyzed the quantity/quality trade-off in rehearsal-based Continual Learning systems, adopting several dimensionality reduction schemes to increase the number of instances in memory at the cost of a possible loss of information. In particular, we used deep encoders, random projections, and a simple resizing scheme. What we found is that even simple but extremely compressed encodings of instance data provide a notable boost in performance with respect to the state of the art, suggesting that, in order to cope with catastrophic forgetting, the optimization of the memory buffer can play a central role. Notably, the performance boost of extreme instance compression suggests that the quality/quantity trade-off is severely biased toward data quantity over data quality. We suspect that part of the fault lies with the overly simplistic datasets adopted by the community, but mostly the deep models used for classification are well known to be data-hungry: the instances stored are not sufficient to properly train them, but can suffice for simpler classifiers with fewer parameters working on simplified instances.

It is worth noting that there is a striking dissonance in the literature on rehearsal-based methods.
The narrative around buffer-memory sizes revolves around decisions among sizes of the order of 300 KiB to 600 KiB, while the very same systems adopt complex classifiers using several megabytes of memory just for the learned parameters, and on the order of gigabytes of working memory for training. In a real constrained-memory scenario, a simpler classifier with more instances offers a clear advantage. Indeed, in a real low-resource scenario, deep convolutional systems using several megabytes of memory for the model parameters and gigabytes of working memory for learning are not a viable solution. In this case, a variation of Extreme Learning Machines offers a simple and effective alternative.

Other Experiments

Fixed Data Instances

With this experiment we aim to show more clearly that instance quantity is preferable over instance quality. We fixed the number of data slots in the memory buffer and analyzed the performance as we altered the encoding size. In particular, we tested two datasets, namely CIFAR10 and Core50. For CIFAR10 we fixed the buffer to 1000 data slots, while for the latter benchmark we fixed it to 8000 slots. What we can see from Figure 4.5 is that the improvement in performance is not given by the encoding's smoothing property and, again, we confirm that rehearsal systems are skewed toward data quantity.

Figure 4.5: Performance as we vary the parameters of each scheme on CIFAR10 and Core50. In the former benchmark, the memory buffer holds 1000 fixed instances, while in the latter it holds 8000.

ELM Width Analysis

As specified above, we used a variation of an Extreme Learning Machine. In particular, the architecture is composed of a random projection module and a learning module. The first is implemented through an orthogonal random matrix, while the second is a two-layer MLP.
Throughout the study we used 400 hidden units in the last layer before the output. We chose to do so to be consistent with the GDumb experimental settings. With this experiment we analyze the accuracy metric as we change the number of hidden units. We fixed the encoded size of the data to v′_i = 128. As memory buffer, we used a different number of data slots for different datasets: for MNIST and CIFAR10 we adopted 2400 slots (600 KiB), for ImageNet100 we used 48000 instances, i.e. 12 MiB, while for Core50 we used 8000 slots (2 MiB). In Figure 4.6 we can see that 100 hidden units are sufficient to achieve the maximum performance. This, again, shows that the deeper classifiers common in the CL rehearsal literature might need more data to be trained properly.

Figure 4.6: Analysis of final accuracy as we alter the number of hidden units in the ELM.

Experiments with other Rehearsal Systems

Throughout our study, we used GDumb to carry out our analysis. Although we extensively motivated this choice, we also tested two different rehearsal systems. In particular, we studied the performance of ER [Rolnick et al., 2019] and ER-MIR [Aljundi et al., 2019a] as we adapted them to work in a low-resource scenario. We simply substituted the original learner with our ELM proposal. In Table 4.4 we report the performance on CIFAR10 with a 600 KiB buffer memory and v′_i = 128 encoding. As validation metrics we used the final accuracy and the average forgetting [Chaudhry et al., 2018] (lower is better).
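One common formulation of these two metrics, stated over the matrix of per-task accuracies, can be sketched as follows (the exact averaging convention is an assumption consistent with Chaudhry et al. [2018]):

```python
import numpy as np

# acc[i, j] is the accuracy on task j after training on task i
# (entries with j > i are never used).
def final_accuracy(acc):
    """Mean accuracy over all tasks after the last training step."""
    return float(acc[-1].mean())

def average_forgetting(acc):
    """Mean drop from each task's best past accuracy to its final accuracy."""
    t = acc.shape[0]
    drops = [acc[j:t - 1, j].max() - acc[-1, j] for j in range(t - 1)]
    return float(np.mean(drops))
```

With three tasks, for example, a model that peaks at 0.9 on task 0 but ends at 0.6 contributes a drop of 0.3 to the forgetting average.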
In order to train the systems, we used the official implementations found at https://github.com/optimass/Maximally_Interfered_Retrieval without any alteration of the training hyperparameters. As we can see, the results again suggest that ELMs constitute a valid solution for low-resource CL systems and that rehearsal solutions are biased toward data quantity over data quality.

Table 4.4: Experiments on CIFAR10 (fixed memory of 600 KiB) with two different rehearsal systems in a low-resource scenario.

Method                            Accuracy (A)   Forgetting (F)   ELM (A)       ELM (F)
ER [Rolnick et al., 2019]         27.5 ± 1.20    48.0 ± 0.40      42.0 ± 0.10   41.2 ± 0.16
ER-MIR [Aljundi et al., 2019a]    29.8 ± 1.10    44.6 ± 0.48      45.6 ± 0.10   31.6 ± 0.01

Other Specifications

Resource Consumption

In Table 4.5 we report some summary statistics. In particular, we report GDumb's performance improvements for two encoding schemes, i.e. Resize (8 × 8) and ELM (v′_i = 128). We report only the accuracy obtained with the optimal parameters. We also add the compression factor C, the memory required to store the model's parameters Θ and the memory buffer M, and the amount of GPU memory used to train GDumb with each encoding scheme. We can see that there is a big gap in the training requirements and memory buffers.

Table 4.5: Performance summary and memory compression.

Scheme           MNIST    CIFAR10   CIFAR100   ImageNet100   Core50   Compression   Params + M   GPU Training
Resize (8 × 8)   (+6%)    (+21%)    (+20%)     (+20%)        (+67%)   253:1         60 MiB       2.2 GiB
ELM (128)        (+10%)   (+10%)    (+10%)     (+10%)        (+10%)   192:1         16 MiB       0.72 GiB

Datasets Specification

For completeness, we report in Table 4.6 some specifications of the considered datasets.
In particular, we provide the task subdivision for each dataset. As we can see, MNIST and CIFAR10 have been split into 5 tasks of 2 classes each; this splitting is also known in the literature as Split-MNIST and Split-CIFAR10. For the CIFAR100 and ImageNet100 benchmarks we used 10 tasks of 10 classes each, while for Core50 we shuffled all scenarios and created 10 tasks of 5 classes each. The majority of works fix the number of memory slots to define the memory buffer; in our case we used memory requirements expressed in KiB or MiB, so that we could alter the consumption of each slot. We provide a correspondence between memory requirements and memory slots for the case in which the original image sizes are considered, to ease future comparisons against our work.

Experimental Settings

Dataset       Image size   Memory Size   # Instances   Task Composition
MNIST         28×28×1      382 KiB       500           5 tasks, 2 classes
CIFAR10       32×32×3      600 KiB       200           5 tasks, 2 classes
                           1.5 MiB       500
                           3 MiB         1000
                           6 MiB         2000
CIFAR100      -            -             -             10 tasks, 10 classes
ImageNet100   64×64×3      12 MiB        1000          10 tasks, 10 classes
                           24 MiB        2000
Core50        128×128×3    15 MiB        312           10 tasks, 5 classes

Table 4.6: Dataset and memory statistics; in the CIFAR100 row we omit the 2nd, 3rd and 4th columns since they are equal to the CIFAR10 row.

4.2 Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization

While in the previous work we considered old data points as a pivotal instrument to investigate catastrophic forgetting, we now focus on the structural properties of the model considered. In particular, we ask how certain parts of a network, when properly regularized, impact the overall performance in an incremental scenario. We decided to investigate the continual learning of Vision Transformers (ViT) in the challenging exemplar-free scenario.
We opted to study ViTs since there are several works tackling CNNs, while virtually none has focused on ViTs yet, even though they are getting consistently better at vision tasks.

This work takes an initial step towards a surgical investigation of the self-attention mechanism (SAM) for designing coherent continual learning methods in ViTs. We first carry out an evaluation of established continual learning regularization techniques. We then examine the effect of regularization when applied to two key enablers of SAM: (a) the contextualized embedding layers, for their ability to capture well-scaled representations with respect to the values, and (b) the prescaled attention maps, for carrying value-independent global contextual information. We depict the perks of each distillation strategy on two image recognition benchmarks (CIFAR100 and ImageNet-32): while (a) leads to a better overall accuracy, (b) helps enhance rigidity while maintaining competitive performance. Furthermore, we identify a limitation imposed by the symmetric nature of regularization losses. To alleviate this, we propose an asymmetric variant and apply it to the pooled output distillation (POD) loss adapted for ViTs. As we will see throughout the section, our experiments confirm that introducing asymmetry to POD boosts its plasticity while retaining stability across (a) and (b). Moreover, we observe low forgetting measures for all the compared methods, indicating that ViTs might be naturally inclined continual learners.

Transformers have shown excellent results for a wide range of language tasks [Brown et al., 2020, Roy et al., 2021] over the course of the last couple of years. Influenced by their initial results, Dosovitskiy et al.
[Dosovitskiy et al., 2021] proposed Vision Transformers (ViTs) as the first firm yet competitive application of transformers within the computer vision community (by firmness, we refer to the non-reliance on convolutional operations). ViTs' applications have since spanned a range of vision tasks, including but not limited to image classification [Touvron et al., 2021], object recognition [Liu et al., 2021], and image segmentation [Wang et al., 2021]. The single most essential element of their architecture remains the self-attention mechanism (SAM), which allows the learning of long-range interdependence between the elements of a sequence (or the patches of an image). Another feature vital to their performance is the way they are pretrained, often in an unsupervised or self-supervised manner over a large amount of data. This is then followed by a finetuning stage, where they are adapted to a downstream task [Devlin et al., 2019].

For ViTs to be able to operate in real-world scenarios, they must exploit streaming data, i.e., the sequential availability of training data for each task (a task may encompass training data of one or more classes). Storage limitations or privacy constraints further imply restrictions on the storage of data from previous tasks. Task-incremental continual learning (CL) seeks to find solutions to such constraints by alleviating catastrophic forgetting, a phenomenon where the network suffers a dramatic drop in performance on data from previous tasks. Several solutions have been proposed to address forgetting, including regularization [Kirkpatrick et al., 2017, Aljundi et al., 2018, Zenke et al., 2017, Ritter et al., 2018], data replay [Chaudhry et al., 2019b, Aljundi et al., 2019a, Lopez-Paz and Ranzato, 2017] and parameter isolation [Mallya and Lazebnik, 2018, Rusu et al., 2016, Aljundi et al., 2017, Lee et al., 2020]. Most current works on CL study recurrent [Sodhani et al., 2020, Chiaro et al., 2020] and convolutional neural networks (CNNs) [Kirkpatrick et al., 2017].
However, little has been done to investigate different CL settings in the domain of ViTs. We therefore mark a first step for the domain by considering the further restrictive setting of exemplar-free CL, with zero overhead for storing any data from previous tasks. We consider this restriction for its real-world aptness to scenarios involving privacy regulations and/or data security considerations.

Given that regularization-based methods form one of the main techniques for exemplar-free CL, we conduct an in-depth analysis of these for ViTs. Regularization-based techniques are mainly organized along two branches: weight regularization methods (such as EWC [Kirkpatrick et al., 2017], SI [Zenke et al., 2017], MAS [Aljundi et al., 2018]) and functional regularization methods (such as LwF [Li and Hoiem, 2017], PODNet [Douillard et al., 2020]). As discussed above, the architectural novelty of transformers lies in the SAM, which builds a representation of a sequence by exhaustively learning relations among the query-key pairs of its elements [Vaswani et al., 2017]. We show that for ViTs (and, subsequently, all other architectures leveraging SAM) this property allows for a third form of regularization, which we coin Attention Regularization (see Figure 4.7). We ground our idea in the hypothesis that, when learning new tasks, the attention of the new model should still remain in the neighborhood of the attention of the previous model.

Figure 4.7: The self-attention mechanism comprising a vision transformer encoder. We compare attention-based approaches, computed prior to the softmax operation, and functional-based approaches, computed on the contextualized embeddings.
As another contribution, we question the temporal symmetry currently applied to regularization losses; that is, the fact that they penalize the forgetting of previous knowledge and the acquisition of new knowledge equally (see Figure 4.8). With the aim of countering forgetting while mitigating the loss of plasticity, we then propose an asymmetric regularization loss that penalizes the loss of previous knowledge but not the acquisition of new knowledge. We index the major contributions of our work below:

• We are the first to investigate continual learning in vision transformers in the more challenging exemplar-free setting. We perform a full analysis of regularization techniques to counter catastrophic forgetting.

• Given the distinct role of self-attention in modeling short- and long-range dependencies [Yang et al., 2021], we propose distilling the attention-level matrices of ViTs. Our findings show that such distillation offers accuracy scores on par with those of their more common functional counterpart, while offering superior plasticity and forgetting. Motivated by the work of Douillard et al. [Douillard et al., 2020], we pool spatiality-induced attention distillation across our network layers.

• We propose an asymmetric variant of functional and attention regularization which prevents forgetting while maintaining higher plasticity. Through our extensive experiments, we show that the proposed asymmetric loss surpasses its symmetric variant across a range of task-incremental settings.

Related Works

Continual learning has been gaining contributions from the deep learning research community during the last few years.
In the following, we list the most prominent families:

• Weight-based: these methods operate in the parameter space of the model through gradient updates. Elastic Weight Consolidation (EWC) [Kirkpatrick et al., 2017] and Synaptic Intelligence (SI) [Zenke et al., 2017] are two widely used methods in this family, the former being probably the best known. EWC uses Fisher information to identify the parameters important to individual tasks and penalizes their updates to preserve knowledge from older tasks. SI makes the neurons accumulate and exploit old task-specific knowledge to counteract forgetting.

• Functional-based: these methods rely on trading plasticity for stability by training either the current (new) model on older data or vice versa. Learning Without Forgetting (LwF) [Li and Hoiem, 2017] remains among the most widely known approaches in this family. It employs Knowledge Distillation [Hinton et al., 2015] upon the logits of the network.

• Parameter-isolation-based: also known as architectural approaches, these methods tackle catastrophic forgetting through a dynamic expansion of the network's parameters as the number of tasks grows. Among the first widely known methods in this family are Progressive Neural Networks (PNN) [Rusu et al., 2016], followed by Dynamically Expandable Networks (DEN) [Yoon et al., 2018] and Reinforced Continual Learning (RCL) [Xu and Zhu, 2018].

The majority of the aforementioned works target CL in CNNs, mainly due to their inductive bias allowing them to solve almost all problems that involve visual data. This can also be seen in several reviews [Mai et al., 2022, Biesialska et al., 2020, Delange et al., 2021, Parisi et al., 2019, Belouadah et al., 2021] reporting few approaches that consider architectures besides CNNs, despite the attempts to investigate CL in RNNs [Sodhani et al., 2020, Chiaro et al., 2020].
Only recently have some works analyzed catastrophic forgetting in transformers. Among the earliest remains that of Li et al. [Li et al., 2022], proposing the continual learning with transformers (COLT) framework for object detection in autonomous driving scenarios. Using the Swin Transformer [Liu et al., 2021] as the backbone of a CascadeRCNN detector, the authors show that the extracted features generalize better to unseen domains, hence achieving lower forgetting rates compared to ResNet50 and ResNet101 [He et al., 2016] backbones. In the case of ViTs, Yu et al. [Yu et al., 2021] show that their vanilla counterparts are more prone to forgetting when trained from scratch. Alongside heavy augmentations, they employ a set of techniques to mitigate forgetting: (a) knowledge distillation, (b) balanced re-training of the head on exemplars (inspired by LUCIR [Hou et al., 2019]), and (c) prepending a convolutional stem to improve the low-level feature extraction of ViTs.

In their work studying the impact of model architectures on CL, Mirzadeh et al. [Mirzadeh et al., 2022] also briefly experiment with ViTs (the rest of the work focuses mainly on CNNs). While they vary the number of attention heads of ViTs to show that this has little effect on the accuracy and forgetting scores, they further conclude that ViTs do offer more robustness to forgetting arising from distributional shifts when compared with CNN-based counterparts with an equivalent number of parameters. This conclusion remains in line with previous works [Paul and Chen, 2021]. Finally, [Douillard et al., 2021] attempt to overcome forgetting in ViTs through a parameter-isolation approach which dynamically expands the tokens processed by the last layer. For each task, they learn a new task-specific token per head. They then couple this approach with the usage of exemplars and knowledge distillation on backbone features.
It is worth noting that these works rely either on pretrained feature extractors [Li et al., 2022] or on rehearsal [Yu et al., 2021, Douillard et al., 2021] to defy forgetting. Thus, the challenging scenario of exemplar-free CL in ViTs remains unexplored.

Methodology

We start by briefly describing the two main existing regularization techniques for continual learning. We then propose attention regularization as an alternative approach tailored to ViTs. Lastly, we put forward an adaptation of functional and attention regularization designed to elevate plasticity while retaining stability.

Functional and Weight Regularization

Functional Regularization: We include LwF [Li and Hoiem, 2017] in this component since it constitutes one of the most prominent, and perhaps the most widely used, regularization methods acting on data. The appealing property of LwF lies in the fact that it is exemplar-free, i.e., it uses only the data of the current task and maintains only the model at task t − 1 to exploit Knowledge Distillation [Hinton et al., 2015]. Formally, LwF can be defined as:

L_LwF(θ) = λ_o · L_KD(Y_o, Ŷ_o) + L_CE(Y_n, Ŷ_n) + R(θ)        (4.3)

where L_KD is the knowledge distillation loss incorporated to impose stability on the outputs, Ŷ_o are the predictions of the old model on the current task data and Y_o the corresponding targets for such data. λ_o remains the temperature annealing factor for the softmax logits, while L_CE is the standard cross-entropy loss calculated on the new task examples.

Weight Regularization: These methods encourage the network to adapt to the current task data mainly by using those parameters of the network that are not considered important for previous tasks. As a representative method we select EWC [Kirkpatrick et al., 2017]. EWC exploits second-order information to estimate the importance of parameters for the current task.
The importance is approximated by the diagonal of the Fisher Information Matrix F:

L_EWC(θ) = L_X(θ) + Σ_j (λ/2) · F_j · (θ_j − θ*_Y,j)²        (4.4)

where L_X(θ) is the loss for task X, λ the regularization strength, and θ*_Y,j the optimal value of the j-th parameter after having learned task Y.

Figure 4.8: Visual illustration of the asymmetric loss. The image considers two generated attention maps, (a) and (b), while training task 2. In case (a), when previous knowledge is lost, both the symmetric and asymmetric regularizations work correctly. However, in case (b), when new knowledge is acquired, this is penalized by the symmetric loss but not by the asymmetric loss. The idea is that the asymmetric loss leads to higher plasticity without hurting stability.

Attention Regularization

Self-Attention Mechanism: The self-attention mechanism (SAM) [Vaswani et al., 2017] forms the core of Transformer-based models and can be defined as:

z = softmax(Q·Kᵀ / √d_e) · V        (4.5)

where Q, K, and V are respectively the projections of the Queries, Keys, and Values of the ℝ^d_e input embeddings, while z constitutes the new contextualized embeddings. Our novel attention-based regularization intervenes prior to the computation
+In particular, given a ViT model at incremental step t and an SAM head k of +layer l, we define the prescaled attention matrix At +kl prior to the softmax operation +as: +At +kl = QKT +√de +(4.6) +We denote the attention matrix corresponding to the model at time step (t − 1) +computed in a similar way as At−1 +kl . We employ this predecessor in the calculation of +knowledge distillation in what follows. +Pooled Attention Distillation: +Functional approaches leverage network’s submod- +ules typically to apply knowledge distillation [Hinton et al., 2015]. When the regular- +ization takes place in intermediate layers, the model can experience excessive stability, +therefore loosing in plasticity abilities [Douillard et al., 2020, Liu et al., 2020a, Yu +et al., 2020b]. Amongst these methods, PODNet [Douillard et al., 2020] clearly +identifies the problem of excessive stability. +We devise a regularization approach +which instead of regularizing functional submodules targets attention maps, the core +mechanisms of SAMs. +More formally, given the attention maps at steps t and (t − 1), we define +LPAD +� +At−1 +kl , At +kl +� +[Douillard et al., 2020] to be: +LPAD-width +� +At−1 +kl , At +kl +� ++ LPAD-height +� +At−1 +kl , At +kl +� +(4.7) +where LPAD-width +� +At−1 +kl , At +kl +� += +H +� +h=1 +DW +� +At−1 +kl , At +kl +� +, +LPAD-height +� +At−1 +kl , At +kl +� += +W +� +w=1 +DH +� +At−1 +kl , At +kl +� +, +(4.8) +DX +� +At−1 +kl , At +kl +� += +����� +X +� +x=1 +At−1 +kl,w,h − +X +� +x=1 +At +kl,w,h +����� +2 +(4.9) +where, W and H indicate the width and height dimensions of the attention maps, +and DX(a, b) is the sum total of the distance measure between maps a and b along +X-th dimension. 
As shown in Equation 4.9, the standard L_PAD uses the difference operator as the choice for D. We now point out the limitation of such a symmetric D and introduce, in the next section, the notion of asymmetry in our distance measure.

Figure 4.9: Mean and standard deviation of task-aware accuracy and forgetting scores for the CIFAR100/10 and ImageNet/6 settings (over 3 random runs). Asymmetric approaches show higher accuracy with respect to their symmetric counterparts. The low forgetting scores across all methods suggest an intrinsic forgetting resilience in vision transformer architectures.

As previously mentioned, Douillard et al. [Douillard et al., 2020] propose the pooled outputs distillation (POD) loss, which leverages the symmetric Euclidean distance between the L2-normalized outputs of the convolutional layers of the models at t and t − 1, after pooling them along specific dimension(s). They achieve their best results upon combining the pooling along the spatial width and height axes, which they term the POD-spatial loss. Given the generic correspondence among the various pooling variants in their paper, our work is particularly influenced by POD-spatial, as we pool the attention maps of ViTs along two dimensions. In fact, throughout the experiments, we also analyze this formulation when applied to the contextualized embeddings z resulting from a SAM operation. We would like to highlight that PAD differs from PODNet in two important factors: first, it is applied to the attention and not directly to the layer output; second, its marginalization is not over the spatial dimensions, due to the fact that z does not encode the spatial dimension.

Asymmetric Regularization

The proposed attention regularization prevents forgetting of previous tasks by ensuring that the old attention maps are retained while the model learns to attend to new regions over tasks.
However, the symmetric nature of D_X (with respect to the two attention maps) means that any difference between the older and the newly learned attention maps leads to increased loss values (see Equation 4.8). We agree that penalizing a loss in attention with respect to previous knowledge is crucial in addressing forgetting. However, also penalizing a gain in attention for newly learned knowledge is undesirable and may actually hurt the performance over subsequently learned tasks. In other words, punishing additional attention can be counterproductive. As a result, we propose using an asymmetric variant of D_X that can better retain previous knowledge:

D_X(A^{t-1}_{kl}, A^t_{kl}) = \left\| F_{\text{asym}}\!\left( \sum_{x=1}^{X} A^{t-1}_{kl,w,h} - \sum_{x=1}^{X} A^t_{kl,w,h} \right) \right\|^2 \qquad (4.10)

where F_{\text{asym}} is an asymmetric function. We experimented with ReLU [Nair and Hinton, 2010], ELU [Clevert et al., 2016] and Leaky ReLU [Maas et al., 2013] as choices for F_{\text{asym}} and found that, in general, ReLU performed the best across our settings. By introducing the ReLU function, new attention generated by the current model at task t is not penalized, while attention present at task t − 1 but missing in the current model is. An illustration of the functioning of the new loss is provided in Figure 4.8.

Based on our choice for D_X from equations 4.9 and 4.10, we classify our final PAD loss as symmetric \mathcal{L}_{\text{PAD-sym}} or asymmetric \mathcal{L}_{\text{PAD-asym}}, respectively. Each of these losses is computed separately for each SAM head and model layer.
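The ReLU-based asymmetric distance of equation 4.10 can be sketched as follows (a NumPy stand-in for the PyTorch implementation; function names are ours):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pad_distance_asym(a_prev, a_curr, axis):
    """Asymmetric D_X of Eq. 4.10: the ReLU keeps only attention that
    was present at step t-1 but is missing at step t, so attention newly
    gained by the current model is not penalized."""
    gap = a_prev.sum(axis=axis) - a_curr.sum(axis=axis)
    return np.sum(relu(gap) ** 2)
```

When the new map attends strictly more than the old one, the loss is zero; only lost attention is penalized.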
The final asymmetric variant can thus be stated as:

\mathcal{L}_{\text{PAD-asym}}(A^{t-1}_{kl}, A^t_{kl}) = \frac{1}{L} \sum_{l=1}^{L} \frac{1}{K} \sum_{k=1}^{K} \mathcal{L}_{\text{PAD}}(A^{t-1}_{kl}, A^t_{kl}) \qquad (4.11)

where K is the total number of heads per layer and L is the total number of layers of the model. Note that equation 4.11 can be adapted for \mathcal{L}_{\text{PAD-sym}} without loss of generality.

Overall loss: We augment the asymmetric and symmetric PAD losses from equation 4.11 with the knowledge distillation loss \mathcal{L}_{\text{LwF}} [Li and Hoiem, 2017] and the standard cross entropy loss \mathcal{L}_{\text{CE}}. The overall loss term takes the form:

\mathcal{L} = \mu \mathcal{L}_{\text{PAD-(a)sym}} + \lambda \mathcal{L}_{\text{LwF}} + \mathcal{L}_{\text{CE}} \qquad (4.12)

where µ, λ ∈ [0, 1] are two hyperparameters regulating the respective contributions. Note that when µ = 0, \mathcal{L} degenerates to baseline finetuning for λ = 0 and to LwF for λ = 1.

Figure 4.10: Mean and standard deviation of task-aware plasticity-stability scores for CIFAR100/10 and ImageNet/6 settings (over 3 random runs). Asymmetric approaches are more plastic compared to their symmetric counterparts while retaining competitive stability.

Stability-Plasticity Curves: Several measures have been proposed in the CL literature to assess the performance of an incremental learner. Besides the standard incremental accuracy, Lopez-Paz et al. [Lopez-Paz and Ranzato, 2017] introduce the notions of Backward Transfer (BWT) and Forward Transfer (FWT). BWT measures the ability of a system to propagate knowledge to past tasks, while FWT assesses the ability to generalize to future tasks. The CL community, however, still lacks consensus on a specific definition of the stability-plasticity dilemma. An elemental formulation for such quantification is thus desirable, allowing us to better grasp the balancing capabilities of an incremental learner at acquiring new knowledge without discarding previous concepts.
To this end, we introduce stability-plasticity curves computed using task accuracy matrices.

A task accuracy matrix M for an incremental learning setting composed of T tasks is defined to be a [0, 1]^{T×T} matrix whose entries are the accuracies computed at each incremental step.4 For instance, M_{i,j} constitutes the test accuracy of task j when the system is learning task i. Subsequently, the diagonal entries M_{i,i} give us the accuracies at the respective current tasks, while the entries below the diagonal, i.e., j < i, give the performance of the model on past tasks. A visual depiction can be seen in Figure 4.11.

We define the stability to be the performance on the first experienced task at any given time, and plasticity to be the ability of the model to adapt to the current task. Namely, these constitute the first column M_{:,0} and the diagonal of the matrix diag(M). We employ the curves derived from these definitions to better dissect the stability-plasticity dilemma of the methods analyzed in our work.

4This calls for M to be lower trapezoidal.

Figure 4.11: Illustration of a task accuracy matrix: we fix stability to be the performance of the first task across time steps while we define plasticity to be the performance at the current step.

Experiments

In this section, we compare regularization-based methods for exemplar-free continual learning. We evaluate the newly proposed attention regularization and compare it with the existing functional (LwF) and feature regularization methods.
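The stability and plasticity curves used in our evaluation can be read directly off the task accuracy matrix introduced above; a minimal NumPy sketch (helper name ours):

```python
import numpy as np

def stability_plasticity(M):
    """Given a lower-trapezoidal task accuracy matrix M, where
    M[i, j] is the test accuracy on task j while learning task i,
    stability is the first column (performance on the first task over
    time) and plasticity is the diagonal (performance on the task
    currently being learned)."""
    M = np.asarray(M)
    return M[:, 0], np.diag(M)
```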
We then ablate the usefulness of the newly proposed asymmetric loss as well as the importance of pooling before applying the regularization.

Experimental Setup

Setting: For our experiments, we adopt the variation of ViTs introduced by Xiao et al. [Xiao et al., 2021]. Here, the standard linear embedder of a ViT model is replaced by a smaller convolutional stem which helps build more resilient low-level features. Convolutional stems have previously been shown to improve performance and convergence speed in incremental learning settings [Yu et al., 2021]. We therefore define our architecture to be a lightweight variation of a ViT-Base by setting L = 12 layers, K = 12 heads per layer and an embedding size of d_e = 192. The choice of a small embedding size has been made to speed up the training procedure and unlock the ability to handle larger batch sizes (1024 for our work).

We analyze our task-incremental setting on two widely used image recognition datasets, namely CIFAR100 and ImageNet-32, with 100 and 300 classes, respectively. Both datasets host 32×32 images. On CIFAR100, we consider a split of 10 tasks (denoted as the CIFAR100/10 setting) where each incremental task is composed of a disjoint set of 10 classes. On ImageNet-32, we split into 6 tasks with a disjoint set of 50 classes each (denoted as ImageNet/6).5

Our total training epochs remain 200 (per task) for CIFAR100 and 100 for ImageNet-32, with an initial learning rate of 0.01 and patience set to 20 epochs. We report our scores averaged over 3 random runs. We apply a constant padding of size 4 across all our datasets. The train images are augmented using random crops of size 32 × 32 and random horizontal flips with a flipping probability of 50%. For test images, we only apply center crops of size 32 × 32.
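The train-time augmentations just described can be sketched as follows (a NumPy stand-in for the torchvision transforms used in practice; function name ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, pad=4, crop=32, p_flip=0.5):
    """Train-time recipe described above: constant padding of 4,
    a random 32x32 crop, and a horizontal flip with probability 0.5.
    `img` is an (H, W, C) array."""
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)))
    y = rng.integers(0, padded.shape[0] - crop + 1)
    x = rng.integers(0, padded.shape[1] - crop + 1)
    out = padded[y:y + crop, x:x + crop]
    if rng.random() < p_flip:
        out = out[:, ::-1]       # horizontal flip
    return out
```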
We compare the attentional and functional, symmetric and asymmetric versions of \mathcal{L}_{\text{PAD-(a)sym}}. We use LwF [Li and Hoiem, 2017] and EWC [Kirkpatrick et al., 2017] as our basic functional and weight regularization approaches. For all our experiments relying on PAD losses, we performed a hyperparameter search (using equation 4.12) for µ and λ by varying each in the range [0.5, 1.0] and found µ = λ = 1.0 to perform reasonably well. We thus stick to these values unless otherwise specified. For the sake of brevity, we indicate \mathcal{L}_{\text{PAD-asym}} with Asym att and \mathcal{L}_{\text{PAD-sym}} with Sym att. Note that these are both variations of equation 4.12. The functional approaches are analogous to their attentional counterparts except for the fact that they rely on the regularization of the contextualized embeddings rather than the attention matrix (see Figure 4.7). These correspond to Asym func and Sym func, respectively.

Results

We report accuracy as well as forgetting [Chaudhry et al., 2018] scores in the task-aware (taw) setting.6 We further report taw plasticity-stability curves (based on Figure 4.11) to provide insights into how well the different models handle the trade-off.

Accuracy and Forgetting: As seen in Figure 4.9, all asymmetric approaches show better performance with respect to their symmetric counterparts on CIFAR100/10, with Asym att offering the best accuracy of 57.3% on the last task. The trend continues for ImageNet/6, with the exception of the asymmetric functional approach, whose accuracy of 27.55% falls behind its symmetric counterpart by 0.44%. In general, the asymmetric and symmetric losses lead to improved accuracy scores with respect to the other methods. Moreover, we observe that all the methods depict good forgetting resilience, with their forgetting scores hovering around ≈ 0.01%, except for EWC. This suggests that vision transformers are better incremental learners but require more

5Refer to Section 4.2 for experiments on additional settings.
6The corresponding task-agnostic scores can be found in Figure 4.14, Section 4.2.

training and tuning efforts to achieve reasonable accuracies. This remark remains in accordance with prior studies [Mirzadeh et al., 2022, Paul and Chen, 2021]. In the particular case of EWC, we observe poor performance in terms of accuracy as well as forgetting, with the scores falling behind finetuning at times. We suspect that the method might be less suited for ViTs due to its reliance on exhaustive Fisher information estimation.

CIFAR100/10 (taw)        Asym Func   Sym Func   Asym Func   Sym Func   LwF
                         Spatial     Spatial    Intact      Intact
Average Incr. Accuracy   56.18%      55.67%     54.43%      53.12%     55.11%
Last Task Accuracy       57.26%      56.92%     56.04%      54.59%     55.93%

Table 4.7: Comparison of intact (no pooling), spatial (pooling along width and height), and LwF.

Plasticity-stability tradeoff: We compare the dilemma for the various methods in Figure 4.10. With no distillation, finetuning is prone to the worst trading of plasticity for stability. Meanwhile, our asymmetric losses can be seen to be more plastic with respect to their symmetric counterparts while depicting comparable stability scores. This confirms our hypothesis regarding the nature of the asymmetry: it keeps the model from discarding older attention while favoring the integration of new attention at the same time. Although LwF, with a last task score of 47.74% on CIFAR100/10 and 32.0% on ImageNet/6, reports the best plasticity among our approaches, it clearly lags behind the pooling-based approaches at retaining stability. On the contrary, the (a)symmetric attention losses and the symmetric functional loss perform similarly, with a last task stability score of ≈ 0.23 on ImageNet/6 and ≈ 53% on CIFAR100/10. EWC shows good plasticity but virtually zero stability. This trend is in line with our previous comment on the limitation of EWC in Figure 4.9.
Ablation study

Towards the end goal of evaluating the effectiveness of PAD losses, we ablate the contribution of pooling on the CIFAR100/10 setting. In particular, we consider distilling the attention maps when these are: (a) pooled along both dimensions, i.e., (A)sym Func Spatial (see Equation 4.7), and (b) not pooled at all, i.e., (A)sym Func Intact. Distilling the intact maps of the latter setting implies enhanced stability over their pooled counterparts. Our standard accuracy and plasticity-stability measures across tasks can therefore be deemed redundant in this setting. As a consequence, we choose to compare the task-aware average incremental accuracy [Rebuffi et al., 2017] and the last task accuracy across (a) and (b) while contrasting these with LwF as a strong baseline. For crisper observations, we limit our comparisons to the functional setting. As shown in Table 4.7, we find that Asym Func Spatial consistently performs the best across both metrics (with a gain of > 2% over Sym Func Intact in either metric). In general, distilling the intact attention maps can be seen to hurt the performance of the models, as their accuracies drop below that of the baseline LwF.

Figure 4.12: Mean and standard deviation of task-aware accuracy and forgetting scores for the additional CIFAR100/20 and CIFAR100/50 settings (over 3 random runs).

Conclusion

In this work, we adapted and analyzed several continual learning methods to counter forgetting in Vision Transformers, mainly with the help of regularization. We then
introduced a novel PODNet-inspired regularization based on the attention maps of self-attention mechanisms, which we termed Pooled Attention Distillation (PAD). Shedding light on its limitation at learning new attention, we devised its asymmetric version, which avoids penalizing the addition of new knowledge in the model. We validated the superior plasticity of the asymmetric loss on several benchmarks.

Besides the meticulous comparison of a range of regularization approaches, i.e., functional (LwF), weight (EWC), and the proposed attention-based regularization, we extended the application of PAD to the functional submodules of ViTs. To this end, we investigated regularization in the contextualized embeddings of ViTs. The latter exploration led us to discover that the regularization of functional submodules can help achieve the best overall performance, while the regularization of their attentional counterparts endows CL models with superior stability. Finally, we remarked on the low forgetting scores of vision transformers across the incremental tasks and concluded that their enhanced generalization capabilities may endow them with a natural inclination for incremental learning. By making our code open-source, we hope to open the doors for future research along the direction of efficient continual learning with transformer-based architectures.
Additional Settings

We experiment on two further CIFAR100 settings with distinct cardinalities of base task classes:

• CIFAR100/20 Base, with 20 base task classes followed by 8 incremental tasks with 10 classes each,

• CIFAR100/50 Base, with 50 base task classes followed by 5 incremental tasks with 10 classes each.

The task-aware accuracy and forgetting scores on these are shown in Figure 4.12. We find the PAD-based losses to consistently outperform the other regularization approaches, with LwF being the closest tie. Along the direction of the plasticity-stability tradeoff (see Figure 4.13), we observe that: (a) the attentional PAD losses retain better rigidity than their functional counterparts, and (b) the asymmetric variants of the PAD losses are more plastic than their symmetric counterparts across these settings. These trends further validate our hypotheses in sections 4.2 and 4.2, respectively.

Figure 4.13: Mean and standard deviation of task-aware plasticity-stability scores for the additional CIFAR100/20 and CIFAR100/50 settings (over 3 random runs).

Task Agnostic Results

Figure 4.14 depicts the task-agnostic accuracy and forgetting scores for the settings mentioned in the main section as well as in Section 4.2. Given the contradictory terms of resource-scarce exemplar-free CL and data-hungry ViTs, task-agnostic evaluations can be seen to be particularly challenging. The further avoidance of heavier data augmentations in our training settings can be seen to give rise to two major repercussions across the task-agnostic accuracies: (a) the scores remain consistently low, and (b) the models show smaller yet consistent variations in performance across all settings.

That said, we find the functional PAD losses to perform the best on all but the CIFAR100/50 setting.
The larger proportion of base task classes in the latter setting can be seen to greatly benefit the learning of LwF (the least parameterized loss term). Further, on the note of class proportions, we observe that an equal spread of classes across the tasks can be seen to have a smoothing effect on the variations of scores across the different methods.

On the contrary, the CIFAR100/50 setting leads to low variability of task-agnostic forgetting scores across the methods. This can again be attributed to the fact that a very large first task better leverages the generalization capabilities of ViTs, thus making them better at avoiding forgetting over the subsequent incremental steps. This further adds to our reasoning regarding the natural resilience of ViTs to incremental learning settings. When compared across methods, the attentional variants of the PAD losses can be seen to display the least amount of forgetting, followed by their functional counterparts.

Figure 4.14: Mean and standard deviation of task-agnostic accuracy and forgetting scores for CIFAR100/10, CIFAR100/20, CIFAR100/50, and ImageNet/6 settings (over 3 random runs). The larger proportion of base task classes (for example, CIFAR100/50) gives rise to higher variations of accuracies and lower variation of forgetting scores across methods, with the latter indicating the inclination of ViTs towards better generalization and preservation of knowledge.
4.3 Simpler is Better: off-the-shelf Continual Learning through Pretrained Backbones

In this section we propose a simple baseline for continual learning that leverages pretrained backbones. The devised approach is fast, since it requires no parameter updates, and has minimal memory requirements (on the order of KBytes). By providing such a simple baseline, and achieving strong performance on all the major benchmarks used in the literature, we follow up on the concerns raised in Section 4.1 about the simplicity of the benchmarks used. Secondly, we show that pretraining causes the network to generalize to a point where the incremental learning of new tasks is very simple.

In particular, the "training" phase reorders data and exploits the power of pretrained models to compute a class prototype and fill a memory bank. At inference time we match the closest prototype through a knn-like approach, which provides us with the prediction. We will see how this naive solution can act as an off-the-shelf continual learning system. In order to better consolidate our results, and merge the above two works, we use the devised pipeline with both CNNs and Vision Transformers. We will discover that the latter have the ability to produce features of higher quality. As a side note, we discuss some extensions to the unsupervised realm.

In a nutshell, this simple pipeline raises the same questions raised by previous works such as Prabhu et al.
[2020] on the effective progress made by the CL community, especially in the datasets considered and the usage of pretrained models.

Figure 4.15: Depiction of our simple baseline. Our pipeline does not perform parameter updates and consumes only a few KBytes as a memory bank.

Until now, the CL community has mainly focused on the analysis of catastrophic forgetting in Convolutional Neural Network (CNN) models. But, as can be seen in some recent works, Vision Transformers (ViTs) are asserting themselves as a valuable alternative to CNNs for computer vision tasks, sometimes achieving better performance with respect to CNNs Chen et al. [2022]. The power of ViTs lies in their lower inductive bias Morrison et al. [2021] and in their consequently better generalization ability. Thanks to this ability, ViTs are naturally inclined continual learners, as pointed out in Section 4.2.

In the transformer literature, the usage of pretrained backbones is becoming a must; in fact, training such systems requires an extensive amount of data and careful hyperparameter optimization. Using pretrained backbones is common also in the Computer Vision community, where CNNs are the main player. In the CL literature, pretraining is frequent, but not constant. It is typically carried out on half of the analyzed dataset or through a big initial task that has the objective of facilitating the learning of low level features. The very best results, however, have been achieved when we do not skip pretraining.
This can be confirmed by the CVPR 2020 Continual Learning Challenge summary report Lomonaco et al. [2022], where the authors noted that all the proposed methods leveraged pretrained backbones.

On top of that, simple baselines sometimes provide better results than overly engineered CL solutions; GDumb Prabhu et al. [2020] is one such example. In that work, the authors showed superior performance against several state-of-the-art methods through a system composed of just a random memory sampler and a simple learner (CNN or MLP). From a practical point of view, these methods often constitute a simple, clear, fast, intuitive and efficient solution.

Following these lines, we explore a knn-like method to perform off-the-shelf online continual learning leveraging the power of pretrained vision transformers. Our system constitutes a simple and memory-friendly architecture requiring zero parameter updates. Ours being one of the first works using ViTs in CL, we propose a robust baseline for future works and provide an extensive comparison against CNNs.

In brief, the contributions are the following:

• We devise a simple pipeline composed of a pretrained feature extractor and an incremental prototype bank. The latter is updated as new data is experienced. The overall cost of the method is the storage of a pretrained backbone and a few KBytes for the memory bank.

• We devise a baseline for future CL methodologies that exploit pretrained Vision Transformers or ResNets. The baseline is fast and does not require any parameter update, yet achieves robust results in 200 lines of Python, which also aids reproducibility.

• We provide a comparison for our pipeline between ResNets and Vision Transformers. We discover that Vision Transformers produce more discriminative features, which is appealing also for the CL setting.

• In light of such results, we raise the same questions, as GDumb Prabhu et al.
[2020] does, about the progress made by the CL community so far, specifically in the quality of the datasets and in the usage of pretrained backbones.

Algorithm 1 Off-the-shelf CL. "Training"
Require: T, φ, M
  for ti ∈ T do
    G = GroupByClass(ti)
    for g ∈ G do
      f = φ(g)          ▷ Extract features
      p = µ(f)          ▷ Compute mean feature
      M ← p             ▷ Store prototype in memory
  return M

Related Works

Only recently have a few works considered self-attention models in continual learning. Li et al. [2022] proposed a framework for object detection exploiting Swin Transformer Liu et al. [2021] as a pretrained backbone for a CascadeRCNN detector; the authors show that the extracted features generalize better to unseen domains, hence achieving lower forgetting rates compared to ResNet50 He et al. [2016] backbones. This also follows the conclusions made by Paul and Chen [2021] on the fact that vision transformers are more robust learners with respect to CNNs.

Several methods in CL use pretrained backbones as feature extractors, such as Hayes and Kanan [2020] or Aljundi et al. [2019b], Hocquet et al. [2020], and sometimes the pretraining is carried out on half (or a big portion) of the dataset considered, as in PODNet Douillard et al. [2020] or in Yu et al. [2021]. For a more complete review of CL methodologies we point out these recent surveys: Parisi et al. [2019], Hadsell et al. [2020], Mundt et al. [2020].

A similar study on pretraining for CL has been conducted by Mehta et al. [2021]. In particular, they study the impact on catastrophic forgetting that a linear layer might incur while using a pretrained backbone. Their study focuses only on ResNet18 for vision tasks, but they also include NLP tasks.

Method

Setting    Continual Learning characterizes the learning by introducing the notion of subsequent tasks.
In particular, the learning happens in an incremental fashion, that is, the model incrementally experiences different training sessions as time advances. Practically, a learning dataset is split into chunks, where each split is considered an incremental task containing data. CL being a relatively new field, the community is still converging to a common setting notation, but we focus on an online, task-agnostic, NC-type scenario. That is, the model forwards a pattern just once and does not have the task label at test time. More specifically, we follow the categorization of Lomonaco and Maltoni [2017] and use an NC-type scenario where each task contains a disjoint group of classes.

More formally, given a dataset D and a set of n disjoint tasks T that will be experienced sequentially:

T = [t_1, t_2, \ldots, t_n] \qquad (4.13)

each task t_i = (C_i, D_i) is represented by a set of classes C_i = \{c^i_1, c^i_2, \ldots, c^i_{n_i}\} and training data D_i (images). We assume that the classes of each task do not overlap, i.e. C_i \cap C_j = \emptyset if i \neq j.

"Training" Phase    In the training phase, given a task t_i ∈ T, a feature extractor φ and a memory bank as a dictionary M, the procedure does the following:

1. First, it performs batch reordering, that is, it groups the images of a given task by their class

2. After grouping, it forwards each new subset to the feature extractor φ

3. Given the feature representations of a group, it computes the mean of the features to create a class prototype

4. It updates the memory bank M by storing each computed prototype

At the end of the training procedure for a given task t_i, we have a representative prototype vector for each class contained in t_i. As we said, the prototype vector is computed as the mean feature representation of the patterns of the same class.
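The four steps above can be sketched as follows (a NumPy sketch with hypothetical names; the backbone φ stays frozen and no parameter is ever updated):

```python
import numpy as np

def build_prototypes(task_images, task_labels, phi, memory):
    """'Training' phase: group a task's images by class, extract
    features with the frozen backbone phi, and store each class's
    mean feature vector as its prototype in the memory bank."""
    for c in np.unique(task_labels):
        feats = phi(task_images[task_labels == c])   # (n_c, d) features
        memory[int(c)] = feats.mean(axis=0)          # class prototype
    return memory
```

The memory bank only grows by one d-dimensional vector per class, which is where the few-KBytes footprint comes from.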
A depiction of the "training" phase is reported in Figure 4.15, and we also provide pseudocode in Algorithm 1. We also point out that there is no formal "training" of the network; in fact, we do not perform any parameter update, we simply exploit the pretrained models and construct a knn-like memory system.

Memory     Params   Model       CIFAR100   CIFAR10   Core50   Oxford        Tiny
KiB/class                                                     Flowers102    ImgNet200
2 KiB      11.7M    resnet18    0.53       0.76      0.72     0.73          0.55
2 KiB      21.8M    resnet34    0.55       0.81      0.74     0.67          0.62
8 KiB      25.5M    resnet50    0.59       0.80      0.71     0.70          0.63
8 KiB      60.1M    resnet152   0.67       0.89      0.72     0.66          0.76
0.75 KiB   5.6M     ViT-T/16    0.36       0.63      0.49     0.54          0.24
3 KiB      86.4M    ViT-B/16    0.64       0.87      0.74     0.95          0.63
0.75 KiB   5.6M     DeiT-T/16   0.57       0.80      0.73     0.68          0.64
3 KiB      86.4M    DeiT-B/16   0.68       0.90      0.80     0.74          0.79

Table 4.8: Off-the-shelf accuracy performance on different dataset benchmarks, for both pretrained CNN and ViT models.

Test Phase    After completing the training phase for a task t_i, the memory bank M will be populated by the prototypes of the classes encountered so far. During the test phase, we simply use a knn-like approach. Given an image x, the updated memory bank M and the feature extractor φ, we devise the test phase as follows:

1. Forward the test image x to the feature extractor φ

2. Compute a distance between the feature representation of the image and all prototypes contained in M

3. Match the prototype with minimum distance and return its class

In a nutshell, we perform k-nn with k=1 over the feature representation of an image, matching the class of the closest prototype in the bank. If the selected class is the same as that of the test example we have a hit, and a miss otherwise. Figure 4.15 reports a visual depiction of the test procedure. As distance we use a simple l2, but several tests have been made with cosine similarity.
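The three test steps can be sketched as follows (a NumPy sketch; function name ours):

```python
import numpy as np

def predict(x, phi, memory):
    """Test phase: extract the features of a single image x and return
    the class of the l2-closest prototype, i.e. 1-nearest-neighbour
    over the prototype bank."""
    f = phi(x)
    classes = list(memory)
    dists = [np.linalg.norm(f - memory[c]) for c in classes]
    return classes[int(np.argmin(dists))]
```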
Although the results with the cosine similarity are better, we opt for l2 since it provides the best speedup in the implementation through PyTorch.

Experiments

It is suspected that Vision Transformers generalize better than CNN models. To this end, we compare CNN models and ViT models as feature extractors. We selected four CNN models to compare against four attention-based models. In particular, we selected DeiT-Base/16, DeiT-Tiny/16 Touvron et al. [2021], ViT-Base/16 and ViT-Tiny/16 Dosovitskiy et al. [2021] as vision transformers, while we opted for ResNet18/34/50/152 He et al. [2016] as CNN models. We used the timm Wightman [2019] library to fetch the pretrained models, where all the models have been trained on ImageNet Deng et al. [2009], and the continuum Douillard and Lesort [2021] library to create the incremental setting for 5 datasets, namely CIFAR10/100, Core50, OxfordFlowers102 and TinyImageNet200.

In all dataset benchmarks, we upscaled the images to 224 × 224 pixels in order to accommodate the vision transformers, which need such input dimensions. We apply the same transformation to the resnet data too, for a fair comparison. In order to match the closest prototype at test time, we used l2 as the preferred measure.

The main results are reported in Table 4.8. The pipeline is extremely simple, yet it achieves impressive performance as an off-the-shelf method, at the cost of a very small overhead to store the prototype memory. In fact, at the end of the training phase, the memory bank translates into only a few KBytes of storage. Although this preliminary work only considers the task-agnostic setting, we note that if at test time we are given the task label of the data, we can recast the method to work in the task-aware setting.
In this case, performing the test phase would be easier, since the comparison
of the test data would be carried out only on a subset of the prototypes. In the same vein,
one can see that in Table 4.8 we do not report each dataset task split. In fact, our
method works for any dataset split, since it just needs any partition of the datasets
that respects an NC protocol, i.e. as long as tasks are formed by images that can be
grouped in classes. We can also appreciate that transformer architectures work best
in all benchmarks, suggesting superior generalization capabilities with respect
to CNNs or, at least, more discriminative features.
Discussion
In light of these results, we think that this work may be extended to be considered as a
baseline to assess the performance of continual learning methodologies using pretrained
networks as feature extractors. In particular, a thorough investigation should be
carried out by substituting the k-nn approach with a linear classifier; this would
also allow a better comparison between ResNets and visual transformers. However, we
think that these preliminary results are of interest to the Vision Transformer and CL
research community.
We then raise some concerns with respect to the procedure and the benchmarks
Chapter 4
Works
Figure 4.16: Direct off-the-shelf extension of the baseline proposed to tackle unsuper-
vised continual learning.
used to assess new CL methodologies. As we can see, through a pretrained model, we
can achieve impressive results with respect to the current CL state-of-the-art Parisi
et al. [2019], Hadsell et al. [2020], Mundt et al. [2020]. This point has also been
raised by GDumb Prabhu et al. [2020], where the authors questioned the progress
by providing a very simple baseline.
Moreover, we can further extend this simple pipeline to be used in unsupervised
continual learning. Actually, the extension is straightforward.
In an unsupervised
scenario the batch reordering step cannot be performed, since we are not allowed to
know the class label of each datum. To cope with this lack of information, one can substitute
the step with any clustering algorithm such as K-means (we tried it, but with no luck)
or a more sophisticated approach such as autoencoders, self-organizing maps, etc.
The test phase of the unsupervised extension would be analogous to the supervised
counterpart.
Conclusion
In this short experimental segment we proposed a baseline for continual learning
methodologies that exploits pretrained Vision Transformers and ResNets. We tackle
the online NC-type class-incremental learning scenario, the most common one, even
though our pipeline can be extended to different scenarios. Our off-the-shelf method
Chapter 4
Works
is conceptually simple, yet gives strong results and can be implemented in 200 lines of
Python, therefore enhancing reproducibility. To assess the performance of different
backbones in our pipeline, we compared ResNet models against Vision Transformer
feature extractors pretrained on the same dataset, and showed that vision transformers
provide more powerful features. This suggests that Vision Transformers' ability to
encode knowledge is broader. We then raise some questions about CL research
progress and note that with a pretrained model and a simple pipeline one can achieve
strong results; therefore, new methodologies should drop the usage of pretrained
backbones when testing on such dataset benchmarks.
Chapter 4
Works
4.4
Unsupervised Semantic Discovery through Visual
Patterns Detection
So far, we directly investigated the impact on performance of altering structural and
data properties of object recognition frameworks.
If we step back a bit and consider a
broader vision of continual learning, we understand that, in order to adapt
to a changing environment, an artificial agent should also manifest the ability to
continuously discover new patterns, in our case visual patterns.
We propose a pipeline that is able to discover repetitive patterns in an
image by means of a threshold parameter. That is, by altering this specific parameter,
we are able to discover new semantic levels in a scene. This work goes in
a somewhat different direction from the dissection of current continual learning methodologies
treated in this thesis. Instead, it is a step towards the ability to build a system able
to incrementally explore.
To this end, we propose a new, fast, fully unsupervised method to discover se-
mantic patterns. Our algorithm is able to hierarchically find visual categories and
produce a segmentation mask. Through the modeling of what a visual pattern
in an image is, we introduce the notion of “semantic levels” and devise a conceptual
framework along with measures and a dedicated benchmark dataset for future com-
parisons. Our algorithm is composed of two phases: a filtering phase, which selects
semantic hotspots by means of an accumulator space, and a clustering phase,
which propagates the semantic properties of the hotspots on a superpixel basis. We
provide both qualitative and quantitative experimental validation, achieving optimal
results.
Chapter 4
Dissecting continual learning: a structural and data analysis
While the vast majority of supervised object detection and segmentation ap-
proaches leverage rich datasets with semantically labelled categories, unsupervised
methods cannot rely on such a luxury. Indeed, they are expected to infer from the
image content itself what a relevant object is and which are its boundaries.
This is a
daunting task, as relevance is totally domain-specific and also highly subjective, espe-
cially when taking into account human judgement, which exploits a lot of out-of-band
information that cannot be found in the sheer image data.
As a matter of fact, little effort has been put into investigating unsupervised auto-
matic approaches to detect and segment semantically relevant objects without any
additional information beyond the image or any a priori knowledge of the context. This
is due to the fact that a unique definition of what a relevant object is (or, as we
prefer to call it, a visual category) does not actually exist.
This is especially true if we are seeking to set a formal definition that can be
adopted across all domains in a manner consistent with human judge-
ment.
Within this section, we try to address this problem by considering as a visual category
each pattern whose appearance is consistent enough across the image. In other words,
we consider something to be a relevant object if it appears more than once, exhibiting
consistent visual features in different parts of the scene.
From a cognitive and perceptual point of view this makes a lot of sense. In fact,
it is easy to observe that if a human is presented with images representing several
different but recurring objects, even in a cluttered scene, he does not need to know
what the objects actually represent in order to be able to assign semantically
consistent labels to each of them. He would even be able to label each pixel, defining
the boundaries of the objects.
As an example, if someone takes a look at a large bin of different (but to some
extent repeated) mechanical parts he never saw before, he is still able to tell one part
from the other by exploiting their coherent visual and structural appearance. This
ability is also preserved under slight changes in scale, orientation or partial occlusion
of the objects.
Since this automatic assignment of recurrent objects to a visual category is both
well-defined and quite natural in humans, it is a very good candidate as a rule for
automatically detecting relevant objects in an unsupervised manner that has good
chances of being coherent with human judgement applied to the same image.
Chapter 4
Works
Figure 4.17: A real world example of unsupervised segmentation of a grocery shelf.
Our method can automatically discover both low-level coherent patterns (brands,
flavor images and logos) and high-level compound objects (multi-packs and bricks)
by controlling the semantical level of the detection and segmentation process.
To be fair, we must also underline the fact that, in order to define the boundaries
of a visual category and thus obtain a meaningful segmentation, the level of detail
must also be taken into account. As an example, if we present to a human an image of
a crowded road captured from the side, and we ask him to segment visual categories
according to recurrent patterns, we could get slightly different results from different
people depending on their attention to detail. Some people will segment cars and
trees. Others could consider the car body to be a different object from the wheels,
and branches from the tree trunk. The most picky could even separate tires from wheel
rims and segment out each single leaf. In practice, semantic consistency can occur
at different scales when dealing with compound objects that present internal
self-repetitions or are made up of single parts that are also present in other objects.
To address this aspect we also have to design a proper strategy to perform visual
category detection and interpretation at a particular scale, according to the level of
detail we want to express during the segmentation process. We define this level of
detail as the semantical level.
Semantical levels, of course, do not map directly on specific
high level concepts, such as whole objects, large parts or minute components. Rather,
the semantic level acts as a coarse degree of granularity of the segmentation
process, resulting in a hierarchical split of segments as it changes.
These two definitions of visual categories and semantical levels, which will be
developed throughout the remainder of the work, are the two key concepts driving
our novel segmentation method.
Chapter 4
Dissecting continual learning: a structural and data analysis
The ability of our approach to leverage repetitions to capture the internal rep-
resentation of the real world and then extrapolate visual categories at a specific
semantical level is actually achieved through the combination of a couple of standard
techniques, slightly modified for the specific task, and of a few key steps specifically
crafted to make the process work in a way consistent with the cognitive
process adopted by humans. This happens, for instance, by seeking highly rel-
evant repetitive structural patterns, called semantical hotspots, characterized by a
novel feature descriptor, called splash. We do this through a scale-invariant method
and with no continuous geometrical constraints on the visual pattern disposition.
We also do not constrain ourselves to find only one visual pattern, which is another
very common assumption in other approaches in the literature. Rather, our technique
is designed from the start to be able to detect several patterns at once, being able to
assign to each of them a different visual category label, corresponding to a different
real world object or object part, according to the selected semantical level.
Overall, with this study, we are offering to the community the following contri-
butions:
• A new pipeline, including the definition of a specially crafted feature descriptor,
to capture semantical categories with the ability to hierarchically span over
semantical levels;
• A specially crafted conceptual framework to evaluate unsupervised semantic-
driven segmentation methods through the introduction of the semantical levels
notion along with a new metric;
• A new dataset consisting of a few hundred labelled images that can be used
as a benchmark for visual repetition detection in general.
The remainder of the section is organized as follows. Section 4.4 describes the
related works with respect to feature extraction and automatic visual pattern de-
tection. Section 4.4 introduces our method, giving details on the overall pipeline and
on the implementation details. Section 4.4 presents an experimental evaluation and
comparison with similar approaches. Finally, the conclusions are found in Section
4.4.
Code, dataset and notebooks used in this study will be made available for public
use.
Chapter 4
Works
Related Works
Several works have been proposed to tackle visual pattern discovery and detection.
While the paper by Leung and Malik [Leung and Malik, 1996] could be consid-
ered seminal, many other works build on their basic approach, detecting
contiguous structures of similar patches by knowing the window size enclosing the
distinctive pattern.
One common procedure to describe what a pattern is consists in first
extracting descriptive features such as SIFT, performing a clustering in the feature
space, and then modeling the group disposition over the image by exploiting geometrical
constraints, as in [Pritts et al., 2014] and [Chum and Matas, 2010], or by relying
only on appearance, as in [Doubek et al., 2010, Liu and Liu, 2013, Torii et al., 2015].
The geometrical modeling of the repetitions is usually done by fitting a planar
2-D lattice, or a deformation of it [Park et al., 2009], through RANSAC procedures
as in [Schaffalitzky and Zisserman] [Pritts et al., 2014], or even by exploiting the
mathematical theory of crystallographic groups as in [Liu et al., 2004]. Shechtman
and Irani [Shechtman and Irani, 2007] also exploited an active learning environment
to detect visual patterns in a semi-supervised fashion. For example, Cheng et al.
[Cheng et al., 2010] use input scribbles performed by a human to guide detection
and extraction of such repeated elements, while Huberman and Fattal [Huberman
and Fattal, 2016] ask the user to detect an object instance, after which the detection
is performed by exploiting correlation of patches near the input area.
Recently, as a result of the new wave of AI-driven Computer Vision, a number of
Deep Learning based approaches emerged. In particular, Lettry et al. [Lettry et al.,
2017] argued that filter activations in a model such as AlexNet can be exploited in
order to find regions of repeated elements over the image, thanks to the fact that
filters over different layers show regularity in the activations when convolved with
the repeated elements of the image. On top of the latter work, Rodríguez-Pardo et
al. [Rodríguez-Pardo et al., 2019] proposed a modification to perform the texture
synthesis step.
A brief survey of visual pattern discovery in both video and image data, up to
2013, is given by Wang et al. [Wang et al., 2014]; unfortunately, after that it seems
that the computer vision community lost interest in this challenging problem. We
point out that all the aforementioned methods look for only one particular visual
repetition, except for [Liu and Liu, 2013], which can be considered the most direct
competitor and the main benchmark against which to compare our results.
Chapter 4
Dissecting continual learning: a structural and data analysis
Figure 4.18: (a) A splash in the image space with center in the keypoint c_j. (b)
H, with the superimposed splash at the center; note the different levels of
the vote, ordered by endpoint importance, i.e. descriptor similarity. (c) 3D projec-
tion showing the gaussian-like formations and the thresholding procedure of H. (d)
Backprojection through the set S.
Method Description
Features Localization and Extraction
We observe that any visual pattern is delimited by its contours. The first step of our
algorithm, in fact, consists in the extraction of a set C of contour keypoints, each indicating
a position c_j in the image. To extract keypoints, we opted for the Canny algorithm,
for its simplicity and efficiency, although more recent and better edge extractors could
be used [Liu et al., 2019] to improve the overall procedure.
A descriptor d_j is then computed for each selected c_j ∈ C, thus obtaining a
descriptor set D. In particular, we adopted the DAISY algorithm because of its
appealing dense matching properties, which nicely fit our scenario. Again, here we
can replace this module of the pipeline with something more advanced such as [Ono
et al., 2018], at the cost of some computational time.
Semantic Hot Spots Detection
In order to detect self-similar patterns in the image, we start by associating to each
descriptor d_j its k most similar descriptors. We can visualize this data structure
as a star subgraph with k endpoints, called a splash, “centered” on descriptor d_j. Figure
4.18 (a) shows one.
Chapter 4
Works
Splashes potentially encode repeated patterns in the image, and similar patterns
are then represented by similar splashes.
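The splash construction can be sketched as follows. This is a sketch under our own assumptions: the Canny and DAISY steps are omitted, and we assume `keypoints` and `descriptors` are already given as NumPy arrays (one descriptor per keypoint); the function name is ours.

```python
import numpy as np

def build_splashes(keypoints, descriptors, k):
    """For each descriptor d_j, find its k most similar descriptors: the
    'splash' is the star graph from keypoint j to the keypoints of its
    k nearest neighbours in descriptor space."""
    # pairwise l2 distances between all descriptors, shape (n, n)
    d = np.linalg.norm(descriptors[:, None, :] - descriptors[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)            # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]      # indices of the k most similar descriptors
    # splash j = (centre keypoint, array of its k endpoint keypoints)
    return [(keypoints[j], keypoints[nn[j]]) for j in range(len(keypoints))]
```

The brute-force pairwise distance is quadratic in the number of keypoints; a k-d tree or approximate nearest-neighbour index would be the natural replacement at scale.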
The next step consists in separating these
splashes from those that encode only noise; this is accomplished through an accu-
mulator space.
In particular, we consider a 2-D accumulator space H of twice the size of the image.
We then superimpose each splash on the space H and cast k votes, as shown in Figure
4.18 (b). In order to take into account the noise present in the splashes, we adopt
a gaussian vote-casting procedure g(·). Similar superimposed splashes contribute to
similar locations on the accumulator space, resulting in peak formations (Figure 4.18
(c)). We summarize the voting procedure as follows:

    H_w = H_w + g(w, h_i^(j))    (4.14)

where h_i^(j) is the i-th splash endpoint of descriptor d_j in accumulator coordinates, and
w is the size of the gaussian vote. We filter all the regions in H which are above a
certain threshold τ, to get a set S of the locations corresponding to the peaks in H.
The τ parameter acts as a coarse filter and is not a critical parameter of the overall
pipeline; a sufficient value is 0.05 · max(H). Lastly, in order to visualize
the semantic hotspots in the image plane, we map splash locations between H and
the image plane by means of a backtracking structure V.
In summary, the key insight here is that similar visual regions share similar splashes;
we discern noisy splashes from representative splashes through an auxiliary structure,
namely an accumulator. We then identify, and backtrack to the image plane, the
semantic hotspots that are candidate points of a visual repetition.
Semantic Categories Definition and Extraction
While the first part described above acts as a filter for noisy keypoints, allowing
us to obtain a good pool of candidates, we now transform the problem of finding visual
categories into a problem of dense subgraph extraction.
We enclose semantic hotspots in superpixels; this extends the semantic signifi-
cance of such identified points to a broader, but coherent, area.
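The vote-casting of Eq. (4.14) and the thresholding of H can be sketched as follows. This is a sketch with our own names and simplifications: endpoints are assumed to be already in accumulator coordinates and to lie at least w cells from the border, and the Gaussian window and σ are free parameters.

```python
import numpy as np

def cast_votes(H, endpoints, w, sigma=1.0):
    """Add a (2w+1) x (2w+1) gaussian vote around each splash endpoint."""
    ys, xs = np.mgrid[-w:w + 1, -w:w + 1]
    g = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))   # gaussian vote stamp, peak = 1
    for (r, c) in endpoints:
        # assumes each endpoint lies at least w cells away from the border
        H[r - w:r + w + 1, c - w:c + w + 1] += g
    return H

def peak_locations(H, tau_frac=0.05):
    """Set S: accumulator cells above tau = tau_frac * max(H)."""
    return np.argwhere(H > tau_frac * H.max())
```

Coincident splash endpoints stack their Gaussian stamps, so repeated patterns produce the peak formations that the thresholding step then extracts.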
For the enclosure we use
the SLIC [Achanta et al., 2012] algorithm, which is simple and one of the fastest
approaches to extract superpixels, as pointed out in a recent survey [Stutz et al.,
2018]. Then we choose the cardinality of the superpixels P to extract. This is the
second and most fundamental parameter, which will allow us to span over different
semantic levels.
Chapter 4
Dissecting continual learning: a structural and data analysis

Algorithm 2 Semantic categories extraction algorithm
Require: G weighted undirected graph
  i = 0
  s* = −inf
  K* = ∅
  while G_i is not fully disconnected do
      i = i + 1
      compute G_i by corroding each edge with the minimum edge weight
      extract the set K_i of all connected components in G_i
      s(G_i, K_i) = Σ_{k∈K_i} µ(k) − α·|K_i|
      if s(G_i, K_i) > s* then
          s* = s(G_i, K_i)
          K* = K_i
  return s*, K*

Once the superpixels have been extracted, let G be an undirected weighted graph
where each node corresponds to a superpixel p ∈ P. In order to put edges between
graph nodes (i.e. two superpixels), we exploit the splash origins and endpoints. In
particular, the strength of the connection between two vertices in G is calculated from
the number of splash endpoints falling between the two in a mutually coherent way:
to put a weight of 1 between two nodes we need exactly 2 splash endpoints,
with both origin and end point falling in the two candidate superpixels.
With this construction scheme, the graph has clear dense subgraph formations.
Therefore, the last part simply computes a partition of G where each connected
component corresponds to a cluster of similar superpixels. In order to achieve this
objective, we optimize a function that is maximized by such a partition.
To this end we define the following density score that, given G and a
set K of connected components, captures the optimality of the clustering:

    s(G, K) = Σ_{k∈K} µ(k) − α·|K|    (4.15)

where µ(k) is a function that computes the average edge weight in an undirected
weighted graph.
The first term in the score function assigns a high vote if each connected compo-
Chapter 4
Works
nent is dense, while the second term acts as a regulator for the number of connected
components. We also added a weighting factor α to better adjust the procedure. As
a proxy to maximize this function, we devised an iterative algorithm, reported in Algo-
rithm 2, based on graph corrosion and with time complexity O(|E|² + |E||V|).
At each step the procedure corrodes the graph edges by the minimum edge weight
of G. For each corroded version of the graph, which we call a partition, we compute s to
capture the density. Finally, the algorithm selects the corroded graph partition which
maximizes s and subsequently extracts the node groups.
In brief, we first enclose semantic hotspots in superpixels and consider each one
as a node of a weighted graph. We then put edges with weight proportional to the
number of splashes falling between two superpixels. This results in a graph with clear
dense subgraph formations that correspond to superpixel clusters, i.e. semantic
categories. The detection of semantic categories thus translates into the extraction of dense
subgraphs. To this end we devised an iterative algorithm based on graph corrosion,
where we let the procedure select the corroded graph partition that filters noisy edges
and lets dense subgraphs emerge. We do so by maximizing a score that captures the
density of each connected component.
Experiments
Dataset
As we introduced in Section 4.4, one of the aims of this work is to provide a better
comparative framework for visual pattern detection. To do so, we created a public
dataset by taking 104 pictures of store shelves.
Each picture was taken with a
5 Mpx camera under approximately the same visual conditions. We also rectified the
images to eliminate visual distortions.
We manually segmented and labeled each repeating product at two different se-
mantic levels. In the first semantic level, products made by the same company share
the same label. In the second semantic level, visual repetitions consist of exactly
identical products. In total, the dataset is composed of 208 ground-truth images,
half for the first level and the rest for the second one.
Chapter 4
Dissecting continual learning: a structural and data analysis
Figure 4.19: (top) Analysis of the measures as the number of superpixels |P| retrieved
varies. The rightmost figure shows the running time of the algorithm. We repeated
the experiments with the noisy version of the dataset but report only the mean, since
the variation is almost equal to the original one. (bottom) Distributions of the measures
for the two semantic levels, obtained by varying the two main parameters r and |P|.
µ-consistency
We devised a new measure that captures the semantic consistency of a detected
pattern, which is a proxy of the average precision of detection.
In fact, we want to be sure that all pattern instances fall on similar ground-truth
objects. First we introduce the concept of semantic consistency for a particular
pattern p. Let P be the set of patterns discovered by the algorithm. Each pattern
p contains several instances p_i. L is the set of ground-truth categories; each ground-
truth category l contains several object instances l_i. Let us define t_p as the vector
of ground-truth labels touched by all instances of p. We say that p is consistent if
all its instances p_i, i = 0 . . . |p|, fall on ground-truth regions sharing the same label.
In this case t_p would be uniform and we consider p a good detection. The worst
scenario is when, given a pattern p, every p_i falls on objects with a different label l, i.e.
all the values in t_p are different.
To get an estimate of the overall consistency of the proposed detection, we
average the consistency for each p ∈ P, giving us:

    µ-consistency = (1 / |P|) · Σ_{p∈P} |mode(t_p)| / |t_p|    (4.16)

Chapter 4
Works
Figure 4.20: Qualitative comparison between [Liu and Liu, 2013] [14], [Lettry et al.,
2017] [10] and our algorithm. Our method detects and segments more than one
pattern and does not constrain itself to a particular geometrical disposition.
Recall
The second measure is the classical recall over the objects retrieved by the algorithm.
Since our object detector outputs more than one pattern, we average the recall for
each ground-truth label by taking the best-fitting pattern:

    (1 / |L|) · Σ_{l∈L} max_{p∈P} recall(p, l)    (4.17)

The last measure is the total recall; here we consider a hit if any of the patterns
falls in a labeled region. In general we expect this to be higher than the recall.
We report the summary performances in Figure 4.20. As can be seen, the algo-
rithm achieves a very high µ-consistency while still being able to retrieve the majority of
the ground-truth patterns at both levels.
One can observe in Figure 4.19 an inverse behaviour between recall and con-
sistency as the number of superpixels retrieved grows.
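Eq. (4.16) translates directly into code. This is a sketch with our own variable names: each detected pattern is represented simply by the vector t_p of ground-truth labels its instances fall on.

```python
from collections import Counter

def mu_consistency(patterns):
    """patterns: list of t_p vectors, i.e. for each detected pattern the
    ground-truth labels touched by its instances.
    Returns the mean of |mode(t_p)| / |t_p| over all detected patterns."""
    scores = []
    for t_p in patterns:
        mode_count = Counter(t_p).most_common(1)[0][1]  # multiplicity of the mode
        scores.append(mode_count / len(t_p))
    return sum(scores) / len(patterns)
```

A pattern whose instances all touch the same label scores 1; a pattern whose instances touch all-different labels scores 1/|t_p|.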
This is expected, since fewer
superpixels mean bigger patterns, and it is therefore more likely to retrieve more ground-
truth patterns.
Chapter 4
Dissecting continual learning: a structural and data analysis
In order to study robustness, we repeated the same experiments with an altered
version of our dataset. In particular, for each image we applied one of the following
corruptions: Additive Gaussian Noise (scale = 0.1 · 255), Gaussian Blur (σ = 3),
Spline Distortions (grid affine), Brightness (+100), and Linear Contrast (1.5).
Qualitative Validation
Firstly, we begin the comparison by commenting on [Liu and Liu, 2013]. One can
observe that our approach has a significant advantage in terms of how the visual pat-
tern is modeled. While the authors model visual repetitions as geometrical artifacts
associating points, we output a higher-order representation of the visual pattern. In-
deed, the capability to provide a segmentation mask of the repeated instance region,
together with the ability to span over different levels, unlocks a wider range of use cases
and applications.
As a qualitative comparison we also added the latest (and only) deep learning based
methodology [Lettry et al., 2017] we found. This methodology is only able to find a
single instance of a visual pattern, namely the most frequent and most significant with
respect to the filter weights. This means that the detection strongly depends on
the training set of the CNN backbone, while our algorithm is fully unsupervised and
data agnostic.
Quantitative Validation
We compared our method quantitatively against [Liu and Liu, 2013], which constitutes,
to the best of our knowledge, the only existing work able to detect more than one
visual pattern. We recreated the experimental settings of the authors by using the
Face dataset [Li et al., 2007] as benchmark, achieving 1.00 precision vs. 0.98 of [Liu
and Liu, 2013], and 0.77 recall vs. 0.63.
We considered a miss on the object
retrieval task if more than 20% of a pattern's total area falls outside the ground
truth. The parameters used were |C| = 9000, k = 15, r = 30, τ = 5, |P| = 150. We
also fixed the window of the gaussian vote to be 11 × 11 pixels throughout all the
experiments.
Chapter 4
Works
Conclusions
With this study we introduced a fast, unsupervised method addressing the prob-
lem of finding semantic categories by detecting consistent visual pattern repetitions
at a given scale. The proposed pipeline hierarchically detects self-similar regions
represented by a segmentation mask.
As we demonstrated in the experimental evaluation, our approach retrieves more
than one pattern and achieves better performance with respect to competitor meth-
ods. We also introduce the concept of semantic levels, endowed with a dedicated
dataset and a new metric, to provide other researchers with tools to evaluate the con-
sistency of their approaches.
Acknowledgments
We would like to express our gratitude to Alessandro Torcinovich and Filippo Berga-
masco for their suggestions to improve the work. We also thank Mattia Mantoan
for his work to produce the dataset labeling.
Chapter 5
Conclusions
In this thesis, our contributions spanned the dissection of continual learning through
several structural and data analyses. First, we provided a gentle introduction
to the topic of continual learning, starting by highlighting the differences between
natural and artificial models. Among the differences we stressed the importance of
time, which is an essential component for developing lifelong learning machines.
Then, we informally introduced the main challenges that continual learning systems
must tackle.
In particular, catastrophic forgetting and the stability-plasticity dilemma.
To provide a better intuition about these topics, we gave a visual example of
catastrophic forgetting in an autoencoder model, showing how distributional shifts
in subsequent tasks result in abrupt damage to past knowledge.
Later, we moved on to a more formal definition of the continual learning settings
prominently adopted in the literature. We introduced the notions of class-incremental,
task-incremental and online/offline learning, along with a specification of other common
settings in the field. Before moving on to the contributions, we provided a small literature
review of the state-of-the-art by describing the main categories under which continual
learning methods have been grouped.
Finally, we moved on to the main contributions. First, we introduced a study on
the quality/quantity trade-off in rehearsal-based continual learning. Here, we se-
lected one of the most performant baselines, that is GDumb, and analyzed several
compression techniques applied to the replay buffer. We highlighted that the
quantity of data is a far more important factor when storing exemplars in the re-
play buffer. We did so by considering different compression schemes with extreme
rates. Then, we moved to the second major contribution, which considers Visual
Transformers in an incremental setting. Here, besides being one of the first works
on visual transformers for continual learning, we provided a surgical investigation of
regularization methods for ViTs in the challenging setting of rehearsal-free CL. We
compared functional, weight and attentional regularizations, with the latter being
a regularization on the matrix of the self-attention mechanism. Attentional regu-
larizations provide comparable performance with respect to the other methods. As
a second contribution we also introduced a loss inspired by a method nowadays in vogue
(PODNet) and devised an asymmetric variant.
We showed that the asymmetric variant grants the model more plasticity when applied to different parts of the self-attention mechanism. Then, we proposed a study on off-the-shelf continual learning exploiting fully pretrained networks and, in particular, a simple baseline composed of a feature extractor and a kNN-like prototype memory. The baseline is crafted to be performant in practical scenarios, achieving optimal results with a memory overhead of a few kilobytes. Moreover, we discussed its possible extension to the realm of unsupervised continual learning. We then linked this preliminary discussion with the exploration of visual categories. To do so, we introduced another work tackling unsupervised pattern discovery. In fact, the notion of discovery is naturally included in the notion of lifelong learning: an agent capable of lifelong learning should surely possess the ability to autonomously discover new knowledge. We addressed this by introducing a new unsupervised algorithm that performs semantic segmentation at different semantic scales.
Further Developments
With the several studies proposed, we want to highlight the directions where it might be most fruitful to investigate further in order to build better continual learning agents.
A first warning we raised regards the datasets used to assess the performance of CL algorithms. In particular, in Section 4.1 we saw that extreme levels of buffer data resizing still provide good results in rehearsal systems, suggesting that, perhaps, more realistic datasets should be included to devise more useful solutions. This finding is also supported by Section 4.3, which shows that tackling these benchmarks with a pretrained backbone is sufficient to quasi-optimally solve continual learning scenarios on 5 different datasets.
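A baseline of this kind can be sketched in a few lines; everything below (the class name, the running-mean update, and the Euclidean nearest-prototype rule) is an illustrative assumption layered on top of a frozen feature extractor, not the exact implementation evaluated in Section 4.3:

```python
import numpy as np

class PrototypeMemory:
    """kNN-like prototype memory over frozen features: one running mean per
    class, so the overhead is a single d-dimensional vector per class."""

    def __init__(self):
        self.sums, self.counts = {}, {}

    def update(self, feats: np.ndarray, labels: np.ndarray) -> None:
        # Incremental update: classes from new tasks can arrive at any time.
        for f, y in zip(feats, labels):
            y = int(y)
            self.sums[y] = self.sums.get(y, 0.0) + f
            self.counts[y] = self.counts.get(y, 0) + 1

    def predict(self, feats: np.ndarray) -> list:
        classes = sorted(self.sums)
        protos = np.stack([self.sums[c] / self.counts[c] for c in classes])
        # Euclidean distance of each feature to each class prototype.
        dists = np.linalg.norm(feats[:, None, :] - protos[None, :, :], axis=-1)
        return [classes[i] for i in dists.argmin(axis=1)]
```

The memory footprint is `num_classes × d` scalars, which for compact feature dimensions stays in the kilobyte range mentioned above.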
This also suggests that pretraining, through the generalization ability it gives the model, could be a great advantage when building new CL algorithms.
To tackle the aforementioned point, the community could focus more on unsupervised continual learning, which is a natural and more challenging extension of the problem. While keeping the same datasets, we can now also leverage pretrained backbones. Besides being appealing on its own, following this line is also greatly encouraged by the fact that there are virtually no works on the topic.
With the study proposed in Section 4.2 we showed that ViTs are natural continual learners. We suspect that the weaker inductive bias carried by such models might be the key that allows them to perform better in incremental scenarios. On the other hand, we saw that the results obtained without pretraining struggle to match CNN performance (compare the results of Section 4.1 and Section 4.2). This calls for less data-hungry models, in line with the world's fast-paced data generation. Within Section 4.2 we also proposed a new way to assess continual learning methods: we think the community still lacks a principled way to measure the stability-plasticity trade-off, and with the introduction of the two curves we made an initial attempt to monitor the performance of a system.
Last but not least, with the work of Section 4.4 we stressed that autonomously discovering new patterns should be a core ability of an intelligent system. In fact, if an agent can explore the real world and find hierarchies of knowledge without help, all it has to do to learn incrementally is to store such knowledge in some kind of long-term memory repository, which translates into a compression problem.
Bibliography
Wolfgang Maass. Networks of spiking neurons: The third generation of neural network models.
Neural Networks, 1997.
Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurélien Lucchi, Pascal Fua, and Sabine Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell., 2012.
Tameem Adel, Han Zhao, and Richard E. Turner. Continual learning with adaptive weights (CLAW). In International Conference on Learning Representations (ICLR), 2020.
Abien Fred Agarap. Deep learning using rectified linear units (ReLU). ArXiv, 2018.
Rishabh Agarwal, Dale Schuurmans, and Mohammad Norouzi. An optimistic perspective on offline reinforcement learning. In ICML, 2020.
Hongjoon Ahn, Jihwan Kwak, Su Fang Lim, Hyeonsu Bang, Hyojun Kim, and Taesup Moon. SS-IL: Separated softmax for incremental learning. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 824–833, 2021.
Rahaf Aljundi, Punarjay Chakravarty, and Tinne Tuytelaars. Expert gate: Lifelong learning with a network of experts. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV), 2018.
Rahaf Aljundi, Eugene Belilovsky, Tinne Tuytelaars, Laurent Charlin, Massimo Caccia, Min Lin, and Lucas Page-Caccia. Online continual learning with maximal interfered retrieval. In Advances in Neural Information Processing Systems (NeurIPS). Curran Associates, Inc., 2019a.
Rahaf Aljundi, Klaas Kelchtermans, and Tinne Tuytelaars. Task-free continual learning. In CVPR. Computer Vision Foundation / IEEE, 2019b.
Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. In Hanna M.
Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems (NeurIPS), 2019c.
Elahe Arani, Fahad Sarfraz, and Bahram Zonooz. Learning fast, learning slow: A general continual learning method based on complementary learning system. ICLR, 2022.
S. Banerjee, Vinay Kumar Verma, Toufiq Parag, Maneesh Kumar Singh, and Vinay P. Namboodiri. Class incremental online streaming learning. NeurIPS, 2021.
Eden Belouadah, Adrian Popescu, and Ioannis Kanellos. A comprehensive study of class incremental learning algorithms for visual tasks. Neural Networks, 2021.
Edward L. Bennett, Marian Cleeves Diamond, David Krech, and Mark Richard Rosenzweig. Chemical and anatomical plasticity of brain: Changes in brain through experience, demanded by learning theories, are found in experiments with rats. Science, 1964.
Alessandro Betti, Marco Gori, Simone Marullo, and Stefano Melacci. Developing constrained neural units over time. In International Joint Conference on Neural Networks, IJCNN. IEEE, 2020.
Magdalena Biesialska, Katarzyna Biesialska, and Marta R. Costa-jussà. Continual lifelong learning in natural language processing: A survey. In Proceedings of the 28th International Conference on Computational Linguistics. International Committee on Computational Linguistics, 2020.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners.
In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS, 2020.
Pietro Buzzega, Matteo Boschini, Angelo Porrello, Davide Abati, and Simone Calderara. Dark experience for general continual learning: a strong, simple baseline. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS, 2020.
Gail A. Carpenter and Stephen Grossberg. The ART of adaptive pattern recognition by a self-organizing neural network. Computer, 1988.
Fabio Cermelli, Massimiliano Mancini, Samuel Rota Bulo, Elisa Ricci, and Barbara Caputo. Modeling the background for incremental learning in semantic segmentation. In Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
Arslan Chaudhry, Puneet K. Dokania, Thalaiyasingam Ajanthan, and Philip H. S. Torr. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In Proceedings of the European Conference on Computer Vision (ECCV), 2018.
Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with A-GEM. In International Conference on Learning Representations (ICLR), 2019a.
Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K. Dokania, Philip H. S. Torr, and Marc'Aurelio Ranzato. On tiny episodic memories in continual learning, 2019b.
Hui Chen, Chao Tan, and Zan Lin. Ensemble of extreme learning machines for multivariate calibration of near-infrared spectroscopy. Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy, 2020.
Xiangning Chen, Cho-Jui Hsieh, and Boqing Gong.
When vision transformers outperform ResNets without pretraining or strong data augmentations. ICLR, 2022.
Ming-Ming Cheng, Fang-Lue Zhang, Niloy J. Mitra, Xiaolei Huang, and Shi-Min Hu. RepFinder: finding approximately repeated scene elements for image editing. ACM Trans. Graph., 2010.
Riccardo Del Chiaro, Bartlomiej Twardowski, Andrew D. Bagdanov, and Joost van de Weijer. RATT: recurrent attention to transient tasks for continual image captioning. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS, 2020.
Ondrej Chum and Jiri Matas. Unsupervised discovery of co-occurrence in sparse high dimensional data. In The Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2010, 2010.
Tarin Clanuwat, Mikel Bober-Irizar, A. Kitamoto, Alex Lamb, Kazuaki Yamamoto, and David Ha. Deep learning for classical Japanese literature. ArXiv, 2018.
Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). In Yoshua Bengio and Yann LeCun, editors, 4th International Conference on Learning Representations, ICLR, 2016.
Cyprien de Masson d'Autume, Sebastian Ruder, Lingpeng Kong, and Dani Yogatama. Episodic memory in lifelong language learning. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems (NeurIPS), 2019.
Matthias Delange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Greg Slabaugh, and Tinne Tuytelaars. A continual learning survey: Defying forgetting in classification tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. ImageNet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2009.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT. Association for Computational Linguistics, 2019.
Prithviraj Dhar, Rajat Vikram Singh, Kuan-Chuan Peng, Ziyan Wu, and Rama Chellappa. Learning without memorizing. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations, ICLR, 2021.
Briar Doty, Stefan Mihalas, Anton Arkhipov, and Alex T. Piet. Heterogeneous 'cell types' can improve performance of deep neural networks. bioRxiv, 2021.
Petr Doubek, Jiri Matas, Michal Perdoch, and Ondrej Chum. Image matching and retrieval by repetitive patterns. In 20th International Conference on Pattern Recognition, ICPR 2010, 2010.
Arthur Douillard and Timothée Lesort. Continuum: Simple management of complex continual learning scenarios, 2021.
Arthur Douillard, Matthieu Cord, Charles Ollion, Thomas Robert, and Eduardo Valle. PODNet: Pooled outputs distillation for small-tasks incremental learning. In Proceedings of the IEEE European Conference on Computer Vision (ECCV), 2020.
Arthur Douillard, Alexandre Ramé, Guillaume Couairon, and Matthieu Cord. DyTox: Transformers for continual learning with dynamic token expansion. CoRR, abs/2111.11326, 2021.
Sayna Ebrahimi, Mohamed Elhoseiny, Trevor Darrell, and Marcus Rohrbach. Uncertainty-guided continual learning with Bayesian neural networks. In International Conference on Learning Representations (ICLR), 2020.
Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018.
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
Pankaj Gupta, Yatin Chaudhary, Thomas A. Runkler, and Hinrich Schütze. Neural topic modeling with continual lifelong learning. In International Conference on Machine Learning (ICML). PMLR, 2020.
Raia Hadsell, D. Rao, Andrei A. Rusu, and Razvan Pascanu. Embracing change: Continual learning in deep neural networks. Trends in Cognitive Sciences, 2020.
Tyler L. Hayes and Christopher Kanan. Lifelong machine learning with deep streaming linear discriminant analysis. In CVPR, 2020.
Tyler L. Hayes, Nathan D. Cahill, and Christopher Kanan. Memory efficient experience replay for streaming learning. In International Conference on Robotics and Automation, ICRA 2019, Montreal, QC, Canada, May 20-24, 2019. IEEE, 2019.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016.
Felix Hill, Adam Santoro, David G. T. Barrett, Ari S. Morcos, and Timothy P. Lillicrap. Learning to make analogies by contrasting abstract relational structure. In 7th International Conference on Learning Representations, ICLR. OpenReview.net, 2019.
Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. ArXiv, 2015.
Sepp Hochreiter. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 1998.
Guillaume Hocquet, Olivier Bichler, and Damien Querlioz. OvA-INN: Continual learning with invertible neural networks. In 2020 International Joint Conference on Neural Networks (IJCNN), 2020.
Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin. Learning a unified classifier incrementally via rebalancing. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Yen-Chang Hsu, Y. Liu, and Z. Kira. Re-evaluating continual learning scenarios: A categorization and case for strong baselines. ArXiv, abs/1810.12488, 2018.
Guang-Bin Huang, Lei Chen, and Chee Kheong Siew. Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Trans. Neural Networks, 2006.
Inbar Huberman and Raanan Fattal. Detecting repeating objects using patch correlation analysis. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Francis R. Bach and David M. Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, ICML, JMLR Workshop and Conference Proceedings. JMLR.org, 2015.
Khurram Javed and Faisal Shafait. Revisiting distillation and incremental classifier learning. In ACCV, 2018.
W. Johnson. Extensions of Lipschitz mappings into Hilbert space. Contemporary mathematics, 1984.
K. J. Joseph, Jathushan Rajasegaran, Salman Hameed Khan, Fahad Shahbaz Khan, Vineeth N. Balasubramanian, and Ling Shao. Incremental object detection via meta-learning.
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021.
Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In Yoshua Bengio and Yann LeCun, editors, 2nd International Conference on Learning Representations (ICLR), 2014.
Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017. OpenReview.net, 2017.
James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 2017.
Jeremias Knoblauch, Hisham Husain, and Tom Diethe. Optimal continual learning has perfect memory and is NP-hard. In ICML, 2020.
Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
Ramesh Kumar Lama, Jeonghwan Gwak, Jeong-Seon Park, and Sang-Woong Lee. Diagnosis of Alzheimer's disease based on structural MRI images using a regularized extreme learning machine and PCA features. Journal of healthcare engineering, 2017.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
Soochan Lee, Junsoo Ha, Dongsu Zhang, and Gunhee Kim. A neural Dirichlet process mixture model for task-free continual learning. In 8th International Conference on Learning Representations, ICLR, 2020.
Louis Lettry, Michal Perdoch, Kenneth Vanhoey, and Luc Van Gool. Repeated pattern detection using CNN activations. In 2017 IEEE Winter Conference on Applications of Computer Vision, WACV 2017, 2017.
Thomas K. Leung and Jitendra Malik.
Detecting, localizing and grouping repeated scene elements from an image. In Computer Vision - ECCV'96, 4th European Conference on Computer Vision, Proceedings, Volume I. Springer, 1996.
Duo Li, Guimei Cao, Yunlu Xu, Zhanzhan Cheng, and Yi Niu. Technical report for ICCV 2021 challenge SSLAD-Track3B: Transformers are better continual learners. arXiv preprint arXiv:2201.04924, 2022.
Fei-Fei Li, Robert Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. Comput. Vis. Image Underst., 2007.
Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2017.
Seppo Linnainmaa. Taylor expansion of the accumulated rounding error. BIT Numerical Mathematics, 1976.
Jingchen Liu and Yanxi Liu. GRASP recurring patterns from a single view. In IEEE Conference on Computer Vision and Pattern Recognition, 2013.
Xialei Liu, Chenshen Wu, Mikel Menta, Luis Herranz, Bogdan Raducanu, Andrew D. Bagdanov, Shangling Jui, and Joost van de Weijer. Generative feature replay for class-incremental learning. In Conference on Computer Vision and Pattern Recognition Workshops, CVPRW, 2020a.
Yanxi Liu, Robert T. Collins, and Yanghai Tsin. A computational model for periodic pattern perception based on frieze and wallpaper groups. IEEE Trans. Pattern Anal. Mach. Intell., 2004.
Yaoyao Liu, Anan Liu, Yuting Su, Bernt Schiele, and Qianru Sun. Mnemonics training: Multi-class incremental learning without forgetting. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020b.
Yun Liu, Ming-Ming Cheng, Xiaowei Hu, Jia-Wang Bian, Le Zhang, Xiang Bai, and Jinhui Tang. Richer convolutional features for edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. Proc. ICCV, 2021.
Vincenzo Lomonaco and Davide Maltoni. CORe50: a new dataset and benchmark for continuous object recognition. In Sergey Levine, Vincent Vanhoucke, and Ken Goldberg, editors, Proceedings of the 1st Annual Conference on Robot Learning, Proceedings of Machine Learning Research. PMLR, 2017.
Vincenzo Lomonaco, Karan Desai, Eugenio Culurciello, and Davide Maltoni. Continual reinforcement learning in 3D non-stationary environments. In Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020.
Vincenzo Lomonaco, Lorenzo Pellegrini, Pau Rodríguez López, Massimo Caccia, Qi She, Yu Chen, Quentin Jodelet, Ruiping Wang, Zheda Mai, David Vázquez, German Ignacio Parisi, Nikhil Churamani, Marc Pickett, Issam H. Laradji, and Davide Maltoni. CVPR 2020 continual learning in computer vision competition: Approaches, results, current challenges and future directions. Artif. Intell., 2022.
David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems (NeurIPS), 2017.
Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with warm restarts. In International Conference on Learning Representations (ICLR), 2017.
Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML Workshop on Deep Learning for Audio, Speech and Language Processing, 2013.
Zheda Mai, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo J. Kim, and Scott Sanner.
Online continual learning in image classification: An empirical survey. Neurocomputing, 2022.
Arun Mallya and Svetlana Lazebnik. PackNet: Adding multiple tasks to a single network by iterative pruning. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR. Computer Vision Foundation / IEEE Computer Society, 2018.
David McCaffary. Towards continual task learning in artificial neural networks: current approaches and insights from neuroscience. ArXiv, 2021.
James L. McClelland, Bruce L. McNaughton, and Randall C. O'Reilly. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological review, 1995.
Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation. Elsevier, 1989.
Warren S. McCulloch and Walter Pitts. A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 1943.
Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, and Emma Strubell. An empirical investigation of the role of pre-training in lifelong learning. ArXiv, abs/2112.09153, 2021.
Umberto Michieli and Pietro Zanuttigh. Incremental learning techniques for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (ICCV-W), 2019.
Seyed Iman Mirzadeh, Arslan Chaudhry, Dong Yin, Timothy Nguyen, Razvan Pascanu, Dilan Gorur, and Mehrdad Farajtabar. Architecture matters in continual learning. arXiv preprint arXiv:2202.00275, 2022.
Katelyn Morrison, Benjamin Gilby, Colton Lipchak, Adam Mattioli, and Adriana Kovashka. Exploring corruption robustness: Inductive biases in vision transformers and MLP-Mixers. CoRR, abs/2106.13122, 2021.
Martin Mundt, Yong Won Hong, Iuliia Pliushch, and Visvanathan Ramesh. A wholistic view of continual learning with deep neural networks: Forgotten lessons and the bridge to active and open world learning. ArXiv, abs/2009.01797, 2020.
Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
Chris Olah, Alexander Mordvintsev, and Ludwig Schubert. Feature visualization. Distill, 2017. doi: 10.23915/distill.00007. https://distill.pub/2017/feature-visualization.
Yuki Ono, Eduard Trulls, Pascal Fua, and Kwang Moo Yi. LF-Net: Learning local features from images. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31, NeurIPS, 2018.
German Ignacio Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. Neural Networks, 2019.
Minwoo Park, Kyle Brocklehurst, Robert T. Collins, and Yanxi Liu. Deformed lattice detection in real-world images using mean-shift belief propagation. IEEE Trans. Pattern Anal. Mach. Intell., 2009.
Sayak Paul and Pin-Yu Chen. Vision transformers are robust learners. arXiv preprint arXiv:2105.07581, 2021.
Jary Pomponi, Simone Scardapane, Vincenzo Lomonaco, and Aurelio Uncini. Efficient continual learning in neural networks with embedding regularization. Neurocomputing, 2020.
Mozhgan Pourkeshavarz and M. Sabokrou. ZS-IL: Looking back on learned experiences for zero-shot incremental learning. ICLR, 2022.
Ameya Prabhu, Philip H. S. Torr, and Puneet K. Dokania. GDumb: A simple approach that questions our progress in continual learning. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision - ECCV, Lecture Notes in Computer Science. Springer, 2020.
James Pritts, Ondrej Chum, and Jiri Matas. Rectification and segmentation of coplanar repeated patterns. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2014.
Muhammad Naveed Iqbal Qureshi, Beomjun Min, Hang Joon Jo, and Boreom Lee. Multiclass classification for the differential diagnosis on the ADHD subtypes using recursive feature elimination and hierarchical extreme learning machine: structural MRI study. PloS one, 2016.
Maithra Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, and Alexey Dosovitskiy. Do vision transformers see like convolutional neural networks? ArXiv, 2021.
Jathushan Rajasegaran, Munawar Hayat, Salman H. Khan, Fahad Shahbaz Khan, and Ling Shao. Random path selection for continual learning. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS, 2019.
Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H. Lampert. iCaRL: Incremental classifier and representation learning. In Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017.
Maximilian Riesenhuber and Tomaso A. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 1999.
Mark Bishop Ring et al. Continual learning in reinforcement environments. PhD thesis, 1994.
Hippolyt Ritter, Aleksandar Botev, and David Barber. Online structured Laplace approximations for overcoming catastrophic forgetting. In Advances in Neural Information Processing Systems, 2018.
Carlos Rodríguez-Pardo, Sergio Suja, David Pascual, Jorge Lopez-Moreno, and Elena Garces. Automatic extraction and synthesis of regular repeatable patterns. Comput. Graph., 2019.
David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy P. Lillicrap, and Gregory Wayne. Experience replay for continual learning. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS, 2019.
Frank Rosenblatt. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological review, 1958.
Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. Efficient content-based sparse attention with routing transformers. Trans. Assoc. Comput. Linguistics, 2021.
David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 1986.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis., 2015.
Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. ArXiv, 2016.
Sara Sabour, Nicholas Frosst, and Geoffrey E. Hinton. Dynamic routing between capsules. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems NIPS, 2017.
Frederik Schaffalitzky and Andrew Zisserman. Geometric grouping of repeated elements within images. In John N. Carter and Mark S. Nixon, editors, Proceedings of the British Machine Vision Conference 1998, BMVC.
Jürgen Schmidhuber.
Deep learning in neural networks: An overview. CoRR, abs/1404.7828, 2014.
Eli Shechtman and Michal Irani. Matching local self-similarities across images and videos. In 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2007). IEEE Computer Society, 2007.
Dongsub Shim, Zheda Mai, Jihwan Jeong, Scott Sanner, Hyunwoo Kim, and Jongseong Jang. Online class-incremental continual learning with adversarial Shapley value. AAAI, 2021.
Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems NIPS, 2017.
Shagun Sodhani, Sarath Chandar, and Yoshua Bengio. Toward training recurrent neural networks for lifelong learning. Neural Computation, 2020.
David Stutz, Alexander Hermans, and Bastian Leibe. Superpixels: An evaluation of the state-of-the-art. Comput. Vis. Image Underst., 2018.
Fan-Keng Sun, Cheng-Hao Ho, and Hung-Yi Lee. LAMOL: language modeling for lifelong language learning. In International Conference on Learning Representations (ICLR), 2020.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2015.
Chuanqi Tan, Fuchun Sun, Tao Kong, Wenchang Zhang, Chao Yang, and Chunfang Liu. A survey on deep transfer learning. In Vera Kurková, Yannis Manolopoulos, Barbara Hammer, Lazaros S. Iliadis, and Ilias Maglogiannis, editors, International Conference on Artificial Neural Networks (ICANN), Lecture Notes in Computer Science. Springer, 2018.
Bosiljka Tasic, Zizhen Yao, Lucas T.
Graybuck, Kimberly A. Smith, Thuc Nghi +Nguyen, Darren Bertagnolli, Jeff Goldy, Emma Garren, Michael N. Economo, +Sarada Viswanathan, Osnat Penn, Trygve E Bakken, Vilas Menon, Jeremy A. +Miller, Olivia Fong, Karla E. Hirokawa, Kanan Lathia, Christine Rimorin, Michael +Tieu, Rachael Larsen, Tamara Casper, Eliza Barkan, Matthew Kroll, Sheana E. +Parry, Nadiya Shapovalova, Daniel Hirschstein, Julie Pendergraft, Heather Anne +Sullivan, Tae Kyung Kim, Aaron Szafer, Nick Dee, Peter A. Groblewski, Ian R. +Wickersham, Ali H. Cetin, Julie A. Harris, Boaz P. Levi, Susan M. Sunkin, Linda J. +Madisen, Tanya L. Daigle, Loren L. Looger, Amy Bernard, John W. Phillips, Ed S. +Lein, Michael J. Hawrylycz, Karel Svoboda, Allan R. Jones, Christof Koch, and +Hongkui Zeng. Shared and distinct transcriptomic cell types across neocortical +areas. Nature, 2018. +Sebastian Thrun. Explanation-based neural network learning a lifelong learning ap- +proach. PhD thesis, University of Bonn, Germany, 1995a. +Bibliography +115 + +Dissecting continual learning: a structural and data analysis +Sebastian Thrun. Is learning the n-th thing any easier than learning the first? +In +David S. Touretzky, Michael Mozer, and Michael E. Hasselmo, editors, Advances +in Neural Information Processing Systems, (NIPS). MIT Press, 1995b. +Sebastian Thrun and Tom M. Mitchell. Lifelong robot learning. Robotics Auton. +Syst., 15(1-2):25–46, 1995. +Akihiko Torii, Josef Sivic, Masatoshi Okutomi, and Tom´as Pajdla. +Visual place +recognition with repetitive structures. IEEE Trans. Pattern Anal. Mach. Intell., +2015. +Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablay- +rolles, and Herv´e J´egou. Training data-efficient image transformers & distillation +through attention. In Marina Meila and Tong Zhang, editors, ICML, 2021. +Gido M. van de Ven and A. Tolias. Three scenarios for continual learning. ArXiv, +2019. +Gido M. van de Ven and Andreas Savas Tolias. 
Generative replay with feedback +connections as a general strategy for continual learning. ArXiv, 2018. +A¨aron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, +Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. +Wavenet: A generative model for raw audio. In The 9th ISCA Speech Synthesis +Workshop. ISCA, 2016. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. +Gomez, Lukasz Kaiser, and Illia Polosukhin. +Attention is all you need. +In Is- +abelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, +S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Informa- +tion Processing Systems 30: Annual Conference on Neural Information Processing +Systems NIPS, 2017. +Hongxing Wang, Gangqiang Zhao, and Junsong Yuan. Visual pattern discovery in +image and video data: a brief survey. Wiley Interdiscip. Rev. Data Min. Knowl. +Discov., 2014. +Yuqing Wang, Zhaoliang Xu, Xinlong Wang, Chunhua Shen, Baoshan Cheng, Hao +Shen, and Huaxia Xia. End-to-end video instance segmentation with transform- +ers. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition +(CVPR), 2021. +Ross Wightman. +Pytorch image models. +https://github.com/rwightman/ +pytorch-image-models, 2019. +116 +Bibliography + +BIBLIOGRAPHY +Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, +Zhengyou Zhang, and Yun Raymond Fu. Incremental classifier learning with gen- +erative adversarial networks. ArXiv, 2018. +Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, and +Yun Raymond Fu. Large scale incremental learning. 2019 IEEE/CVF Conference +on Computer Vision and Pattern Recognition (CVPR), 2019. +Tete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Doll´ar, and Ross B. +Girshick. Early convolutions help transformers see better. CoRR, abs/2106.14881, +2021. +Ju Xu and Zhanxing Zhu. Reinforced continual learning. In Samy Bengio, Hanna M. 
+Wallach, Hugo Larochelle, Kristen Grauman, Nicol`o Cesa-Bianchi, and Roman +Garnett, editors, Advances in Neural Information Processing Systems 31: Annual +Conference on Neural Information Processing Systems, (NeurIPS), 2018. +Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan, and +Jianfeng Gao. Focal self-attention for local-global interactions in vision transform- +ers. ArXiv, abs/2107.00641, 2021. +Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong learning +with dynamically expandable networks. In 6th International Conference on Learning +Representations, ICLR. OpenReview.net, 2018. +Lu Yu, X. Liu, and J. Weijer. Self-training for class-incremental semantic segmenta- +tion. ArXiv, 2020a. +Lu Yu, Bartlomiej Twardowski, Xialei Liu, Luis Herranz, Kai Wang, Yongmei Cheng, +Shangling Jui, and Joost van de Weijer. Semantic drift compensation for class- +incremental learning. In Proceedings of the IEEE/CVF Conference on Computer +Vision and Pattern Recognition CVPR, 2020b. +Pei Yu, Yinpeng Chen, Ying Jin, and Zicheng Liu. Improving vision transformers for +incremental learning. ArXiv, abs/2112.06103, 2021. +Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Seong Joon Oh, Youngjoon Yoo, +and Junsuk Choe. Cutmix: Regularization strategy to train strong classifiers with +localizable features. In IEEE/CVF International Conference on Computer Vision, +(ICCV). IEEE, 2019. +Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synap- +tic intelligence. In Proceedings of the 34th International Conference on Machine +Learning, Proceedings of Machine Learning Research. PMLR, 2017. 
diff --git a/DtAzT4oBgHgl3EQfGvt3/content/tmp_files/load_file.txt b/DtAzT4oBgHgl3EQfGvt3/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..dc8fd8f7878fd64db215bb93624708360f7e0141 --- /dev/null +++ b/DtAzT4oBgHgl3EQfGvt3/content/tmp_files/load_file.txt @@ -0,0 +1,3374 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf,len=3373

Ca' Foscari University
Department of Environmental Sciences, Informatics and Statistics

Dissecting Continual Learning: a Structural and Data Analysis

Ph.D. Thesis - Computer Science, XXXIV Cycle
Submitted by: Pelosin Francesco
Supervisor: Prof. Torsello Andrea
Venice, Italy - March, 2022
arXiv:2301.01033v1 [cs.CV] 3 Jan 2023

Abstract

Deep Learning aims to discover how artificial neural networks learn the rich internal representations required for difficult tasks such as recognizing objects or understanding language.
This hard question is still unanswered, although we are constantly improving the performance of such systems, spanning from computer vision problems to natural language processing tasks. Continual Learning (CL) is a field dedicated to devising algorithms able to achieve lifelong learning by overcoming the knowledge disruption of previously acquired concepts, a phenomenon that affects deep learning architectures and that goes by the name of catastrophic forgetting. Currently, deep learning methods can achieve outstanding results when the data modeled does not undergo a considerable distribution shift in subsequent learning sessions, but as we expose the systems to such an incremental setting, performance abruptly drops due to catastrophic forgetting. As the data generated in the world is continuously increasing, so is the demand to model such streams in a sequential fashion, and devising techniques to prevent knowledge corruption in neural networks is therefore fundamental. Overcoming these limitations would allow us to build truly intelligent systems showing adaptability and human-like qualities. Secondly, it would allow us to overcome the onerous limitation of retraining architectures from scratch with updated data. This drawback comes from how deep neural networks learn: they require several parameter updates to learn any given concept. This is also the exact reason why catastrophic forgetting happens: as we learn new concepts we overwrite old ones, while a truly intelligent system would show an optimal stability-plasticity trade-off.

In this thesis, we first describe the background needed to understand continual learning in the computer vision realm. We do so by introducing a notation and a formal description of the problem. We then introduce several CL setting variants and the main solution categories proposed in the literature, along with an analysis of the state of the art. Next, we analyze one of the baseline approaches to continual learning and discover that in rehearsal-based techniques the quantity of stored data is a more important factor than its quality. This trade-off surprisingly holds even for impressively high compression rates of the data. Secondly, this thesis proposes one of the early works on the study of incremental learning with vision transformer architectures (ViTs). In particular, we compare functional, weight, and attention regularization approaches for the challenging rehearsal-free CL setting. We then propose an asymmetric loss variant inspired by PODNet, achieving good capabilities in terms of plasticity. Among these contributions, we propose a simple but effective baseline for off-the-shelf continual learning exploiting pretrained models and discuss its extension to unsupervised continual learning, a topic that deserves further attention from the community. As the final work, we introduce a novel algorithm able to explore the environment through unsupervised visual pattern discovery. We then provide a conclusion and discuss further developments and promising paths to be followed by CL research.
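The mechanism the abstract describes — new concepts overwriting old ones when a network is trained sequentially without rehearsal — can be reproduced with a toy experiment. The sketch below is illustrative only (it is not code from the thesis, and all names are hypothetical): a single shared linear softmax classifier is trained on a first task, then on a second task whose inputs come from the same distribution but carry brand-new labels, after which accuracy on the first task collapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_inputs(n=400):
    # 1-D inputs in two well-separated clusters around -1 and +1
    x = rng.choice([-1.0, 1.0], size=n) + 0.1 * rng.normal(size=n)
    return x.reshape(-1, 1)

# Task A labels the clusters as classes 0/1; task B draws the same kind of
# inputs but labels them as brand-new classes 2/3 (no rehearsal of task A).
Xa = make_inputs(); ya = (Xa[:, 0] > 0).astype(int)
Xb = make_inputs(); yb = (Xb[:, 0] > 0).astype(int) + 2

W = np.zeros((1, 4))   # one shared 4-way linear softmax head
b = np.zeros(4)

def train(X, y, epochs=300, lr=0.5):
    """Plain gradient descent on the softmax cross-entropy loss."""
    global W, b
    for _ in range(epochs):
        logits = X @ W + b
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(y)), y] -= 1.0   # gradient of cross-entropy w.r.t. logits
        W -= lr * (X.T @ p) / len(y)
        b -= lr * p.mean(axis=0)

def accuracy(X, y):
    return float(((X @ W + b).argmax(axis=1) == y).mean())

train(Xa, ya)
acc_before = accuracy(Xa, ya)   # task A is learned almost perfectly
train(Xb, yb)                   # same weights are then trained on task B only
acc_after = accuracy(Xa, ya)    # task A collapses: old classes were overwritten
print(acc_before, acc_after)
```

Because the second task reuses the same input region, minimizing its loss actively suppresses the logits of the old classes, which is exactly the stability-plasticity failure the thesis studies; rehearsal or regularization methods exist to counteract it.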
Acknowledgments

First, I would like to express my gratitude to my supervisor Andrea Torsello, for all the deep insights and for welcoming me to pursue this research with him. Secondly, I would like to thank all the people I encountered throughout these years, especially the colleagues and friends I met, with a personal acknowledgment to Alessandro, Seyum, Fatima, and the friends I met in Spain: Héctor, Laura, and Albin. I would also like to thank everyone who loved me during this period; you gave me the strength to carry on this tough journey! Lastly, I learned a lot during these years, and the force that moved me to pursue a Ph.D. is the same force that allows us to expand and look for answers, to find meanings, and to unfold something beautiful.

"You're pretty good"

Contents

1 Introduction
2 Background and Motivation
2.1 Artificial vs Natural Intelligence
2.2 What is Continual Learning?
2.2.1 Stability-Plasticity Dilemma
2.2.2 Catastrophic Forgetting
2.2.3 A Visual Example
3 Continual Learning Framework
3.1 Definition and Settings
3.1.1 Online CL vs Offline CL
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' 28 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='2 Task-Incremental vs Class-Incremental .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' 29 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='2 Baselines .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' 30 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='1 Cumulative .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' 31 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='2 Finetuning .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' 32 7 Dissecting continual learning: a structural and data analysis 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='3 State-of-the-art .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' 33 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='1 Structural-based .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' 33 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='2 Regularization-based .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' 34 3.' 
3.3 Rehearsal-based . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4 Works 37
4.1 Smaller is Better: An Analysis of Instance Quantity/Quality Trade-off in Rehearsal-based Continual Learning . . . . . . . . . . . . . . 38
4.2 Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization . . 58
4.3 Simpler is Better: off-the-shelf Continual Learning through Pretrained Backbones . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.4 Unsupervised Semantic Discovery through Visual Patterns detection 85
5 Conclusions 99

Dissecting continual learning: a structural and data analysis

Chapter 1
Introduction

"The measure of intelligence is the ability to change" (Albert Einstein)

The interconnections among entities in our world are growing. Along with this fact, the ability to keep track of and record such data has accordingly increased. The need for systems that can cope with such phenomena is essential.
Deep Learning (DL) has revealed itself to be a powerful tool for modeling such complex streams, especially in the Computer Vision and Natural Language Processing fields. The advent of DL unlocked the ability to develop outstanding technologies that can directly impact our lives; self-driving cars are one example. Unfortunately, the impact is not always positive if not properly controlled. Therefore, the need for systems that show generalization abilities and can cope with unexpected scenarios is nowadays essential. To this end, we also need responsive machines that can be trained to quickly learn new concepts with low resource consumption. In fact, what happens if the stream of data encountered by a deep learning model changes its quality over time? This question is tackled by Continual Learning (CL), whose aim is to develop lifelong learning machines, unlocking fast adaptability to new environments.
Modern deep learning methods for computer vision adapt themselves only to the manifold they are trained on. Instead, we need to devise models that are plastic enough to generalize to distributional shifts in the data and do not require complete retraining. This challenge would be solved if training Deep Learning models were not such a delicate process, affected by unexpected drawbacks. In fact, when we introduce the notion of learning through time and expose the system to incremental tasks of different nature, things can get really complicated. One of the drawbacks of incremental learning is the so-called catastrophic forgetting, where the system suffers an abrupt deterioration of past knowledge whenever asked to learn new concepts. This big limitation is broadly studied in continual learning. To approach this delicate subject, in this thesis we start by gently introducing some basic differences between artificial and natural intelligence.
Here, we clarify some operative differences between artificial neural networks and some basic brain mechanisms described by neuroscience. Then, we informally introduce the notion of continual learning and discuss the stability-plasticity dilemma along with the phenomenon of catastrophic forgetting in artificial neural networks. We proceed by introducing a more formal definition of incremental learning along with its fine-grained inclinations. Before moving to the contributions, we give a brief overview of the state of the art and define the main baselines, which act as lower and upper bounds for continual learning methodologies. We step into the major contributions by focusing on rehearsal systems, a family of methods that exploit cache memories to replay previous knowledge. Here, we study how the compression of stored rehearsal data impacts the performance of the model. Tackling the memory side of CL, we provide a quality/quantity analysis through the usage of several compression schemes.
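The rehearsal idea described above can be sketched as a small fixed-capacity memory filled by reservoir sampling. This is an illustrative sketch of the general technique, not the implementation studied in Chapter 4; the class name and parameters are hypothetical.

```python
import random

class RehearsalBuffer:
    """Fixed-size episodic memory filled by reservoir sampling.

    A small cache of past examples is kept and replayed alongside new
    data to mitigate catastrophic forgetting (hypothetical sketch).
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.seen = 0                  # total examples observed so far
        self.data = []                 # stored (example, label) pairs
        self.rng = random.Random(seed)

    def add(self, example, label):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((example, label))
        else:
            # Reservoir sampling: the new example replaces a stored one
            # with probability capacity / seen, so every example in the
            # stream is equally likely to remain in memory.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (example, label)

    def sample(self, k):
        # Mini-batch of stored examples to mix with the current task's batch.
        return self.rng.sample(self.data, min(k, len(self.data)))
```

Because each stored example is uniformly likely to survive, the buffer approximates an i.i.d. sample of the whole stream regardless of its length.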
We also consider extreme compression rates, providing some insights. On top of that, we consider continual learning under low-resource constraints through the usage of random projections and, in particular, Extreme Learning Machines. As a second major contribution, we are among the first to investigate Vision Transformers in continual learning. In particular, we analyze several regularization schemes for ViTs, providing a first envisioning of rehearsal-free CL. We consider weight, functional and attentional regularizations, the latter unexplored before, and we carefully study the application of regularizations to specific parts of the self-attention mechanism. As a side contribution, we introduce a new asymmetric loss variant inspired by a contemporary continual learning method (PODNet), principled by the observation that new attention should not penalize the acquisition of new knowledge. We then further clarify the usage of pretrained models in continual learning through an experimental segment.
We compare fully pretrained CNNs and Vision Transformers in several incremental benchmarks. We provide a clear, simple baseline that requires few KBytes to operate and does not perform parameter updates. Being simple and effective, we discuss its extension to the unsupervised realm, where we consider further extensions for future works. Along with these three contributions, we also study the ability of a system to autonomously discover new visual patterns, a notion embedded in an optimal incremental learner. We therefore provide a simple unsupervised pipeline able to discover semantic patterns at different visual scales. Finally, we conclude by wrapping up our perspectives on the main aforementioned challenges. As a final note, we hope this thesis finds a meaningful purpose in the CL community, contributing to the development of Continual Learning and Computer Vision research.
Contribution Preface

In this thesis we include some papers developed while pursuing the Ph.D. The main contributions are reported in Chapter 4. The chapter holds the outcome of several collaborations, and the following list reports the names of the authors and the venues to which the works have been submitted:

The work reported in Section 4.1 has been accepted as an oral poster at IJCNN 2022. The authors who contributed to the work are (in order): Francesco Pelosin and Andrea Torsello from Ca’ Foscari University of Venice.

The work reported in Section 4.2 is the outcome of a collaboration during the research period abroad and has been accepted as a poster at the Continual Learning Workshop of CVPR 2022.
The authors who contributed to the work are (in order): Francesco Pelosin, Ca’ Foscari University of Venice, Italy (equal contribution); Saurav Jha, University of New South Wales, Australia (equal contribution); Andrea Torsello, Ca’ Foscari University of Venice, Italy; Bogdan Raducanu and Joost van de Weijer, Computer Vision Center, Spain.

The work reported in Section 4.3 has been accepted as a poster at the Transformers for Vision Workshop of CVPR 2022 and is single-authored by Francesco Pelosin.

The work reported in Section 4.4 has been accepted at S+SSPR 2020.
The authors who contributed to the work are (in order): Francesco Pelosin, Andrea Gasparetto, Andrea Albarelli and Andrea Torsello, Ca’ Foscari University of Venice, Italy.

Chapter 2
Background and Motivation

2.1 Artificial vs Natural Intelligence

Despite the recent developments and great achievements of the field of Artificial Intelligence, the fundamental nature of Artificial Neural Networks (ANNs) might still be a coarse approximation of how our biological brains work. With the mathematical introduction by [McCulloch and Pitts, 1943] and the introduction of the "Perceptron" by [Rosenblatt, 1958], which constitutes the smallest unit that forms an ANN, we shaped our modeling of intelligence. An artificial neuron can be described as a cumulative summation of inputs multiplied by some weights, followed by a non-linear function.
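The neuron just described can be written in a few lines; a minimal sketch, assuming a sigmoid as the non-linearity (one common choice among many):

```python
import math

def perceptron(inputs, weights, bias):
    """Single artificial neuron: weighted sum followed by a non-linearity."""
    # Cumulative summation of inputs multiplied by weights, plus a bias.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Non-linear activation: the sigmoid squashes z into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))
```

With zero weights and bias the output is 0.5, the sigmoid's midpoint; strongly positive or negative pre-activations saturate towards 1 or 0 respectively.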
Then, after the introduction of the famous Multi-Layer Perceptrons (MLPs), the structure of ANNs has not changed much: we work in a connectionist paradigm where learning happens through distributed signal activity via connections among artificial neurons. In particular, learning occurs by modifying connection strengths based on experience; this modification procedure is the so-called backpropagation algorithm, whose discovery can be traced back to [Rumelhart et al., 1986], with some earlier work by [Linnainmaa, 1976] (as an M.Sc. thesis), as pointed out in [Schmidhuber, 2014]. The success of connectionist models spans different fields: Convolutional Neural Networks (CNNs) for Computer Vision (CV) [He et al., 2016], Language Models for Natural Language Processing (NLP) [Devlin et al., 2019], Deep Q-Learning Networks (DQNs) for Reinforcement Learning [Agarwal et al., 2020], Generative Audio Models for audio [van den Oord et al., 2016] and Graph Convolutional Networks (GCNs) for graph data [Kipf and Welling, 2017].

Connectionist models are a composition of several layers of artificial neurons, each followed by a non-linearity. There are several types of layers, each with its peculiarity. For example, the introduction of Batch Normalization [Ioffe and Szegedy, 2015] allowed networks to achieve faster training. The introduction of specialized units often allowed models to excel in particular fields, such as the convolutional operation [LeCun et al., 1998] for Computer Vision tasks and the self-attention mechanism in Natural Language Processing [Vaswani et al., 2017], although the attention mechanism has also achieved tremendous results in vision tasks thanks to [Dosovitskiy et al., 2021] and its introduction of Vision Transformers. Nowadays, there is still no perfect mechanism or model for each scenario, because we are still in the process of discovering how learning happens. In the future, we may well see methodologies working in fields other than those they were born in.

Figure 2.1: Feature visualization of GoogLeNet [Szegedy et al., 2015], trained on the ImageNet [Russakovsky et al., 2015] dataset. Concepts in early layers are reported on the left, while concepts of the last layers are on the right.
The image is taken from [Olah et al., 2017].

While attention-based models spread knowledge and feature representations uniformly across the layers [Raghu et al., 2021], in classical convolution-based models (such as ResNets [He et al., 2016]) the knowledge is constructed in a bottom-up fashion. This is a well-known fact. In particular, abstract concepts are always the result of the composition of simpler concepts. For example, in the early layers of CNNs for CV tasks, each neuron specializes in the detection of low-level features, while, as we move towards the head, the network learns patterns with more semantic relevance for us humans. This can be seen in the beautiful visualizations of [Olah et al., 2017], captured in Figure 2.1.

[Figure 2.1 panels: Edges (layer conv2d0), Textures (layer mixed3a), Parts (layers mixed4b & mixed4c).]

This also reflects neuroscientific discoveries, where hierarchies of more and more abstract concepts have been demonstrated repeatedly, especially in the visual brain areas [Riesenhuber and Poggio, 1999]. While these resemblances are appealing for drawing a connection between artificial and biological brains, the difference is still striking. For example, Deep Learning models are quite often static, that is, they do not alter their architecture over time; in our biological brains, however, new connections can appear while others cease to exist. This is the so-called neuroplasticity of our brains, whose first scientific evidence was reported by [Bennett et al., 1964]. As we will see, continual learning and a few other fields (e.g., dynamic routing, conditional computation, etc.) are the only ones going in this direction.

On another note, time seems to be a major factor in both artificial and natural learning. Our current connectionist framework does not exploit the notion of time in learning. To accommodate such a factor, we would need to redefine the current learning framework, because so far models process data without being conditioned on when something is learned. There have been some attempts in this direction, defining learning as a system of differential equations that takes time into consideration as a fundamental variable, with some implementations by [Betti et al., 2020], although the majority of works still operate in the classical scenario.
Another clear distinction between artificial and biological neurons lies in how they decide to fire. The artificial neuron receives inputs and multiplies them by weights that are adapted during learning; to fire, it uses an activation function (such as ReLU [Agarap, 2018]). The reality of biological neurons is different: each biological neuron has its own threshold, resulting from a complex chemical interaction. A class of models trying to bridge this gap is Spiking Neural Networks [MAA, 1997], where the firing of a neuron is determined by a threshold on the signal received. Note that even this simplified model mimics neither the creation nor the destruction of connections (dendrites or axons) between neurons, and ignores signal timing. However, this restricted model alone is powerful enough to work with simple classification tasks.
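To make the threshold-based firing concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the simplest spiking model of this kind. The parameter values (threshold, leak factor, reset potential) are purely illustrative, not taken from any specific SNN reference.

```python
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential leaks toward zero,
    integrates the incoming signal, and emits a spike on threshold crossing."""
    v = 0.0
    spikes = []
    for current in input_current:
        v = leak * v + current    # leak, then integrate the input
        if v >= threshold:        # threshold crossing -> fire
            spikes.append(1)
            v = v_reset           # reset the potential after the spike
        else:
            spikes.append(0)
    return spikes

# A constant weak input accumulates until the neuron fires periodically.
train = lif_neuron([0.3] * 10)
```

With this input the potential builds up over four steps, spikes, resets, and repeats, so the spike train is sparse even though the input is constant.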
Another important difference is that biological circuits contain a myriad of additional details and complexity not translated to DL models, including diverse neural cell types [Tasic et al., 2018], with some recent attempts by [Doty et al., 2021] to bridge this gap by changing the activation function of each artificial neuron. Another attempt to introduce more complex structures has been proposed by [Sabour et al., 2017] with the introduction of Capsule Net models, a family of networks where the neurons are structured in hierarchies.

The most widely known neuroscientific framework for the brain is the Complementary Learning Systems (CLS) theory [McClelland et al., 1995]. This framework explains why the brain requires two differentially specialized learning and memory systems, and it nicely specifies their central properties: the hippocampus as a sparse, pattern-separated system for rapidly learning episodic memories, and the neocortex as a distributed, overlapping system that gradually integrates experienced episodes and extracts latent semantic structures. Instead, most of the proposed artificial models are well-engineered pipelines crafted to excel in a particular task such as Computer Vision, NLP, etc., and do not draw inspiration from such theories, although a very recent work proposed by [Arani et al., 2022] explored this direction. With some recent developments in the CL field, rehearsal systems [Parisi et al., 2019] (systems that replay old data through a buffer) can be recast from such a point of view.
In fact, we can think of the rehearsal buffer (the part of the CL system dedicated to storing "old" patterns used in replay) as a long-term memory, while the other part of the architecture is the fast-paced learner of the intelligent agent, i.e. the hippocampus. Perhaps the key to continual learning will lie in the inspiration drawn from neuroscientific models. Indeed, [McCaffary, 2021] recently proposed a systematic review of the approaches in CL along with some insights into why we should pay more attention to neuroscientific theories. As we saw, the gap between artificial and biological models is still relevant, and the two fields nowadays show big differences in their understanding of intelligence. However, one striking fact is that the artificial community has achieved impressive results without directly mimicking the current neuroscientific theories, suggesting that, perhaps, several paradigms of intelligence exist.
2.2 What is Continual Learning?

"Every machine is built to make decisions; if it does not have the faculty to learn, it will act always in conformity to a mechanical scheme. We don't have to let the machine decide about our conduct if we first have not studied the laws that rule its behavior, and made sure that such behavior will be based on principles that we can accept!" — Norbert Wiener

Definition: The aforementioned quote is taken from "Introduction to Cybernetics" and highlights the fact that the ability to continually learn is a fundamental skill that any intelligent system should possess. Although we are now able to devise powerful artificial systems achieving superhuman performance in some tasks, we, as humans, still exhibit a core ability that would be fundamental to replicate intelligence as we know it: the ability to learn new concepts without erasing past knowledge.
These two aspects are the main objectives of Continual Learning: first, exhibiting the ability to assimilate new concepts incrementally; secondly, showing the capability of memorization, i.e. not forgetting what has been previously learned. In a nutshell, Continual Learning studies how to develop systems that learn incrementally over time without forgetting previously acquired knowledge.

History: Continual Learning has drawn a lot of interest from the research community only in later years, even though the question itself is very old. One of the early papers trying to tackle this phenomenon was proposed by [Carpenter and Grossberg, 1988], where the authors introduced a short-term and long-term memory pattern detector through the Adaptive Resonance Theory.
In fact, to the best of our knowledge, this seems to be the earliest work proposed. Later, as connectionist models paved the way for modern Artificial Intelligence, other attempts and several proposals were made. The work by [Ring et al., 1994] coined the term "Continual Learning"; the system proposed there aimed to construct hierarchies of knowledge within a neural network. Later, with the works by [Thrun, 1995a] and [Thrun and Mitchell, 1995], Continual Learning started to get attention, especially in the Robotics and Reinforcement Learning research communities.

Figure 2.2: Continual learning spectrum. The optimal algorithm should exhibit enough plasticity to learn new tasks while retaining enough stability to not forget the acquired knowledge.
Terms: Alongside Continual Learning, two other equivalent terms are in use: Incremental Learning and Lifelong Learning. These terms can be used interchangeably and denote the same setting. There are no clear distinctions, and the preference of one over another is probably just a matter of the research field we are in; in the computer science field, for example, continual learning and incremental learning seem more common. Other terms are used but differ in the specific continual setting they study, for example Online Learning and Streaming Learning. These are very similar, and there is no clear distinction between them yet. They describe algorithms that learn by observing each example just one time and, in the latter case, can also refer to systems that respond to queries in real-time.
We will introduce a more formal definition in the next chapter.

Subject of CL: As we previously discussed, the study of Continual Learning is strictly tied to the widespread usage of connectionist models. In fact, before the advent of Artificial Neural Networks (ANNs), intelligence was usually modeled by a mixture of expert systems and clever algorithms. Posing the same "continual learning question" for those systems is still an interesting challenge, but the success of ANNs shifted the focus to connectionist models.

2.2.1 Stability-Plasticity Dilemma

Learning incrementally (or continually) with connectionist models requires one core ability: to adapt to a changing environment.
If the environment did not change over time, and we exposed a system to operate in it, we would just need to understand, model, and hard-code the environment's rules into the system, and we would achieve perfect functionality. Unfortunately, the real world does not behave in such a predictable way. Instead, our reality constantly changes, and we need to redefine our knowledge, reshape it in light of new facts, have room to constantly learn something new, and recombine previous knowledge to understand novel concepts. This is not the only necessary property for an intelligent system; the counterpart is also important. In fact, some things do not change in the world, old challenges might present themselves again, and, therefore, fundamental knowledge should not be forgotten. A truly intelligent system would behave consistently on past lessons: it would be able to detect and recognize past challenges, delivering correct solutions.
Researchers gave a name to this trade-off: the stability-plasticity dilemma. The long-term goal of Continual Learning is to create a system able to achieve a perfect balance between these two abilities, as depicted in Figure 2.2. As we will see, it is termed a "dilemma" since achieving the optimal trade-off is a very hard task. On top of these considerations, dissecting new concepts and redefining them as a combination of old knowledge allows the forward transfer of intelligence: when we learn, we can sometimes abstract the knowledge to solve a related problem. This is not uncommon; it is the mechanism of analogical thinking, where an "operational pattern" can be used to solve problems in apparently different domains. As an example, [Hill et al., 2019] investigates such a property of intelligence in artificial networks. On the other hand, continual learning should give the ability to better grasp past knowledge, improving performance on past challenges. This is even more common, and we can think of this kind of ability as the "experience" that an agent accumulates in a certain field or in solving a certain category of tasks. In a nutshell, the stability-plasticity dilemma can be considered the crux of intelligence: showing adaptability to new environments while at the same time retaining knowledge of old environments seems to be the major quality of an intelligent agent.

Figure 2.3: The original images for each task. This image shows the ground truth relative to Figure 2.5.

2.2.2 Catastrophic Forgetting

One core aspect of deep neural networks lies in the fact that, if we do not introduce any mechanism to achieve the balance between stability and plasticity, the artificial network is naturally inclined to forget. That is, neural networks put much more emphasis on plasticity than on stability. From a neuroscientific point of view, this fact does not make much sense unless we think of neural networks as systems without any form of memory. The reality is that networks do have memory, but by the nature of the learning algorithms we overwrite such memory. As the model incrementally learns, each parameter in the network is modified by the updates of the backpropagation algorithm.
The optimal continual learning method would be able to modify the parameters without altering the performance on old tasks. This does not seem to happen, and therefore neural networks are prone to so-called catastrophic forgetting, the phenomenon whereby old knowledge is corrupted.

2.2.3 A Visual Example

To better grasp the phenomenon of catastrophic forgetting, we provide a visual example in the following section. As we discussed, catastrophic forgetting happens because the parameters tuned to solve a task (usually experienced earlier in time) are not suited for the currently experienced task. We hope to provide a clear visual example of the effects of catastrophic forgetting in a shallow architecture. As the name suggests, Deep Learning refers to architectures with many layers stacked on top of each other.
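The overwriting mechanism described above can be demonstrated in a few lines even without a neural network: any gradient-trained model shares one set of parameters across tasks, so fitting a second task drags the weights away from the first task's solution. The sketch below is a toy illustration with a linear model and two synthetic regression tasks of our own invention; it is not the thesis's MNIST experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, X, y, lr=0.1, steps=200):
    """Plain gradient descent on the MSE of a linear model y ~ X @ w.
    The same weight vector is overwritten by whichever task is trained last."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

X = rng.normal(size=(100, 3))
y_a = X @ np.array([1.0, 2.0, -1.0])   # "task A" target function
y_b = X @ np.array([-2.0, 0.5, 3.0])   # "task B" target function

w = np.zeros(3)
w = train(w, X, y_a)
loss_a_before = mse(w, X, y_a)   # near zero: task A is learned

w = train(w, X, y_b)             # sequential training on task B
loss_a_after = mse(w, X, y_a)    # task A error explodes: forgetting
```

After the second training phase the weights sit at task B's optimum, so the error on task A jumps from essentially zero to a large value — the same effect the autoencoder experiment below shows with images.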
Because of this depth, the computer vision community (but not only it) was able to achieve impressive results in the domain of pattern recognition. Unfortunately, we still do not fully control how knowledge is built inside a deep neural network, and if we want to counter forgetting we need such information. To do so, we would need to keep track of the variation of each parameter as we learn new concepts in a continuous fashion, but doing so, especially in such models, is hard if not impossible. That said, on a small scale we can still show what is going on inside a network. In the following toy example we try to track the forgetting of an autoencoder by dissecting the learning process per task. We will use a simple one-layer autoencoder model and try to incrementally learn the famous MNIST dataset [LeCun et al., 1998], still used in the continual learning literature to validate proposed methods.
We will divide the dataset into 5 tasks and learn to compress and reconstruct images. By doing so, we will show the corruption of old images as we learn new tasks and connect it to the variation of the network's parameters. The MNIST dataset is a grayscale dataset of 28 × 28 images of handwritten digits going from 0 to 9. It was constructed from NIST's Special Database 3 and Special Database 1; the first was collected among Census Bureau employees and the second among high school students. It has a training set of 60,000 examples and a test set of 10,000 examples. We divide the dataset into 5 tasks of two digit classes each. Although the best practice for image data is to use CNNs, we limit our toy example to a naive autoencoder model of linear layers. This choice allows us to better unfold and analyze the variation of the parameters thanks to its simplicity.
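A task split of this kind can be sketched as an index partition over the label array. The consecutive-pair grouping (task 1 → digits {0, 1}, task 2 → {2, 3}, and so on) is the common Split-MNIST convention and is assumed here, since the original figure showing the digit pairs is not reproduced; the toy label array stands in for MNIST's real 60,000-entry label vector.

```python
import numpy as np

def split_into_tasks(labels, n_tasks=5, n_classes=10):
    """Group example indices into tasks of consecutive digit classes,
    e.g. task 0 -> {0, 1}, task 1 -> {2, 3}, ... (Split-MNIST convention)."""
    per_task = n_classes // n_tasks
    tasks = []
    for t in range(n_tasks):
        classes = range(t * per_task, (t + 1) * per_task)
        idx = np.flatnonzero(np.isin(labels, list(classes)))
        tasks.append(idx)
    return tasks

# Toy labels standing in for MNIST's training labels.
labels = np.array([0, 7, 3, 1, 9, 2, 8, 4, 5, 6])
tasks = split_into_tasks(labels)
```

Each entry of `tasks` is then fed to the autoencoder in sequence, one task at a time, which is what produces the forgetting behaviour examined next.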
The model is composed of a single-layer encoder φ that encodes an image into a latent vector and a single-layer decoder ψ that reconstructs the image. In particular, the encoder is a linear layer φ : R784 → R16 that receives as input a flattened (28 × 28 = 784) representation of the image and compresses it into a latent vector of size 16. The decoder then takes care of reconstructing the image by the reverse process, ψ : R16 → R784, i.e. given a latent vector of size 16 it decompresses it to a flattened image. More formally, an autoencoder can be represented in the following way:

ˆx = ψ(φ(x))

where x ∈ R784 is the flattened representation of an original image coming from a task t, φ is the encoder network, ψ is the decoder network, and ˆx ∈ R784 is the flattened representation of the reconstructed image. The objective is to minimize the mean squared error (MSE) between the original image and the autoencoder's reconstruction.
More formally, we can define the objective function as:

min_{φΘ, ψΘ} L(x, x̂) = min_{φΘ, ψΘ} ‖x − x̂‖²

Here φΘ represents the set of encoder parameters to be optimized, while we use ψΘ for the decoder's.

Dissecting continual learning: a structural and data analysis

Figure 2.4: Variation of the parameters, grouped by task. Each bar plot shows the distribution of the weights; we can see that each task modifies the internal parameters. Each weight is computed as the sum of all the connections of the particular latent neuron.

By incrementally learning each task we want to show the corruption in the ability to reconstruct previous tasks. The change in the parameters needed to accommodate the new task negatively impacts old tasks. In fact, if we try to retrieve old concepts we see catastrophic interference, that is, the network confuses old concepts with newly learned ones.
From now on let us refer to Figure 2.5, which depicts the complete incremental learning and its effects. The grid reported encodes the performance of the autoencoder. Each row i refers to the model trained solely on data of task i but tested on all the other tasks. From the experiment, we can appreciate several effects. First, if we isolate the first column of the grid, we can visualize the performance on the original first task as time passes (we can think of it as the stability of the network, as we will discuss in Section 4.2). Here, one can clearly see that feeding new concepts corrupts old ones. On the other hand, if we focus on the upper triangular section of the matrix, we see the ability of the model to generalize knowledge.
This stresses the fact that generalization is a key component in continual learning. Intuitively, more "general" models might experience less forgetting (further hints on this path can be found in Section 4.2 and Section 4.3). The connected change in the weights for each task is reported in Figure 2.4 (for both the encoder and the decoder). As we can see, even a small change in the parameters dramatically impacts the stability-plasticity trade-off. As a reference, in Figure 2.3 we report the ground truths.
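The train-on-task-i, test-on-every-task grid described above can be reproduced with a short loop. The sketch below uses a deliberately forgetful toy model in place of the autoencoder; the `MeanBaseline` class and its `fit`/`mse_on` interface are illustrative assumptions:

```python
import numpy as np

class MeanBaseline:
    """Toy stand-in for the autoencoder: 'reconstructs' every input as the
    mean image of the task it was trained on last, so it forgets completely."""

    def __init__(self):
        self.mean = None

    def fit(self, data):
        self.mean = data.mean(axis=0)

    def mse_on(self, data):
        return float(np.mean((data - self.mean) ** 2))

def reconstruction_grid(model, tasks):
    """Row i: model trained sequentially up to task i, tested on every task j.
    Column 0 tracks stability; the upper triangle tracks generalization."""
    grid = np.zeros((len(tasks), len(tasks)))
    for i, train_data in enumerate(tasks):
        model.fit(train_data)
        for j, test_data in enumerate(tasks):
            grid[i, j] = model.mse_on(test_data)
    return grid
```

Reading down column 0 shows the reconstruction error on task 1 growing as later tasks overwrite the model, i.e. the catastrophic-forgetting pattern of the grid.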
Background and Motivation

Figure 2.5: Results of the incremental training and test of the autoencoder model on the MNIST dataset split into 5 tasks. Each row i of the grid reports the performance of the model when trained on task i (or time t_i) and tested on both old (left) and future (right) tasks. Training on previous tasks might unlock the intrinsic possibility to solve future tasks; this latter phenomenon is highlighted with the blue boxes. Ground truth in Figure 2.3.
(Figure 2.5 annotates the lower triangle of the grid as catastrophic forgetting, the upper triangle as generalization ability, and the diagonal as train and test on the same task: optimal plasticity.)

Chapter 3
Continual Learning Framework

3.1 Definition and Settings

Continual learning being a relatively new discipline, the community unfortunately still does not fully agree on a formal setting. This is also corroborated by the fact that incremental learning is under the research light of several communities. Among the most active communities we have NLP, Computer Vision, Reinforcement Learning, Neuroscience, and Robotics. Each of these communities has a well-established history and standard protocols; therefore, accommodating everyone on common ground is still an ongoing process. However, in the following, we will introduce the most common definitions and settings shared in the Computer Vision literature.
There have been some attempts to formalize a setting for continual learning [van de Ven and Tolias, 2019, Lomonaco and Maltoni, 2017] through the definition of learning protocols and new terminologies. We will see these different learning paradigms in the following sections, but the core feature underlying incremental learning is that the data experiences some distributional shift, that is, the distribution of the data changes over time. This is sufficient to abruptly cause forgetting in connectionist models, but we can define some settings which are more prone to cause such a phenomenon, while others are simpler to overcome. The typical continual learning setting in computer vision is composed of a split dataset, where each (usually non-overlapping) split is considered an incremental task. Therefore, each task contains data from several classes. Although this is not the only way to define a continual learning scenario, it is the most prominent one, as pointed out in these surveys [Mai et al., 2022, Delange et al., 2021, Parisi et al., 2019]. Let us give a more formal definition.

Formal Definition: Given a dataset D containing (in our case) images, we want to split D into a sequence of n disjoint tasks that can be learned sequentially by our model:

T = [t_1, t_2, . . . , t_n]    (3.1)

where each task t_i = (C_i, D_i) is represented by a set of classes C_t = {c_1^t, c_2^t, . . . , c_{n_t}^t} and training data D_t. We use N_t to represent the total number of classes in all tasks up to and including task t: N_t = Σ_{i=1}^{t} |C_i|.
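A sketch of this split for a class-labeled dataset, with two classes per task as in the MNIST-split protocol (the helper names are ours, not the thesis's):

```python
def split_into_tasks(labels, classes_per_task=2):
    """Partition a labeled dataset into disjoint tasks t_i = (C_i, D_i):
    C_i is a set of classes, D_i the indices of the samples in those classes."""
    all_classes = sorted(set(labels))
    tasks = []
    for start in range(0, len(all_classes), classes_per_task):
        C_i = set(all_classes[start:start + classes_per_task])
        D_i = [k for k, y in enumerate(labels) if y in C_i]
        tasks.append((C_i, D_i))
    return tasks

def total_classes_up_to(tasks, t):
    """N_t = sum_{i=1}^{t} |C_i|: classes seen up to and including task t."""
    return sum(len(C_i) for C_i, _ in tasks[:t])
```

On MNIST-like labels 0..9 this yields the five disjoint tasks of the text, with N_3 = 6 classes seen after three tasks.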
As a side note, in the literature one would usually use the notation t to point at the current task (the task at time t) and t − 1 to point at the task before the current one. A continual learning algorithm aims to model each task sequentially as time passes, exposing the model at training time to each task in a sequential fashion. Operatively: first, the algorithm is trained with mini-batches of patterns coming from task 1, and we record the system performance. Then, the model is exposed to task 2 data, and the process continues until task n. One visual example can be seen in Figure 3.1, where the MNIST dataset is split into 5 tasks with 2 classes each¹. The previously defined learning scenario takes into consideration a distinct transition among tasks.
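The operational protocol just described is a plain loop over the task sequence. The toy learner below, which simply memorizes the class labels it has seen, stands in for a real model; the `train_on`/`evaluate` interface is an illustrative assumption:

```python
class LabelMemorizer:
    """Toy learner: 'knows' a test pattern iff it has seen its class label."""

    def __init__(self):
        self.seen = set()

    def train_on(self, batch):
        self.seen.update(y for _, y in batch)

    def evaluate(self, test_set):
        return sum(y in self.seen for _, y in test_set) / len(test_set)

def run_stream(model, tasks):
    """Sequential protocol: train on each task in order and, after finishing
    task i, record the performance on task i's own test set."""
    history = []
    for train_set, test_set in tasks:
        for batch in train_set:          # mini-batches of patterns from task i
            model.train_on(batch)
        history.append(model.evaluate(test_set))
    return history
```

A real continual learner would replace `LabelMemorizer` with a trainable network; the loop structure, one task at a time with performance recorded along the way, is the same.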
In this particular case, we implicitly assume a reset signal between two tasks. When such a signal is not present and the transition between tasks is smooth, the complexity of the continual learning problem increases. If in this particular setting we query the system for real-time responses, we are talking about streaming learning [Hayes et al., 2019]. This setting is more challenging because the models are allowed much less time to consolidate previously seen knowledge, and are therefore more prone to experience catastrophic forgetting. Since this thesis focuses on computer vision problems, throughout the work we will stick to the introduced setting.

Fine-Grained
So far we limited the notion of a task to a split of a dataset, but what happens if in a new task we experience new instances of previously seen classes? To this end, more complete settings for continual learning benchmarking have been proposed.
One example is constituted by [Lomonaco and Maltoni, 2017]. Here the authors, along with a new dedicated dataset, introduce three different settings by mixing the experience of old and new data. Specifically, here we report the different scenarios:

New Instances (NI): new training patterns of the same classes become available in subsequent tasks. Here the model can experience new instances of old, previously seen, classes, with the possibility of seeing the same objects in new poses and conditions (illumination, background, occlusion, etc.). A good model is expected to incrementally consolidate its knowledge about the known classes without compromising what it has learned before.

New Classes (NC): new training patterns belonging to different, never seen, classes become available in subsequent tasks.
This is the classic scenario (the one we formally introduced), and a model should be able to deal with the new classes without losing accuracy on the previous ones.

New Instances and Classes (NIC): new training patterns belonging both to known and to new classes become available in subsequent training tasks. A good model is expected to consolidate its knowledge about the known classes and learn the new ones. This is the most complete and difficult scenario, since the addition of new classes poses the challenge of having good plasticity, while the introduction of new instances of old patterns asks for stability.

¹This particular setting takes the name of MNIST-split.

In our opinion, this categorization is preferable since it provides a more complete description of a continual learning benchmark. In fact, if we assume, as an example, that each task's data is generated by an independent source, the task data will be continually augmented with new information.
This scenario is captured by the NIC setting and cannot be handled by the standard definition. Unfortunately, due to the recent development of the field, we usually assume the NC scenario only.

3.1.1 Online CL vs Offline CL

So far we introduced a basic notation; now we discuss how a model can be trained to face a continual learning stream of tasks, and introduce the names of these scenarios. The continual learning literature distinguishes two options: online training and offline training.

Online
In the online continual learning protocol, the algorithm is required to make a single parameter update per pattern (or one forward pass). This is a very coercive setting and requires maximum performance in knowledge consolidation from the continual learner.
In fact, this scenario is quite challenging because of the nature of Stochastic Gradient Descent, i.e. the learning algorithm at the core of connectionist models. Here the system might not have enough time to assimilate a concept, thereby weakening its understanding and subsequent stability.

Offline
In the offline learning protocol, instead, we are free to perform several parameter updates per pattern, i.e. we are allowed to see an image more than once. For an incremental learner, this setting is a double-edged sword: on one hand it favors the consolidation of concepts, since setting a large number of epochs guarantees the correct training of a model.
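The contrast between the two protocols is simply how often a pattern may be revisited; schematically (the `update` callable is a hypothetical stand-in for one SGD step):

```python
def online_training(stream, update):
    """Online CL: exactly one parameter update per incoming pattern."""
    n_updates = 0
    for x in stream:
        update(x)
        n_updates += 1
    return n_updates

def offline_training(task_data, update, epochs=5):
    """Offline CL: the current task's data may be replayed for several
    epochs, so each pattern triggers several updates."""
    n_updates = 0
    for _ in range(epochs):
        for x in task_data:
            update(x)
            n_updates += 1
    return n_updates
```

For the same 10-pattern task, the online learner makes 10 updates while a 3-epoch offline learner makes 30, which is exactly the extra consolidation (and the extra exposure to forgetting) discussed in the text.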
On the other side, if we do not introduce any forgetting-prevention mechanism, this corrupts the old informational content of the network, i.e. the system is more exposed to catastrophic forgetting.

Figure 3.1: Schematic representation of the split-MNIST task protocol. Taken from [van de Ven and Tolias, 2019].

In the following paragraphs, we will introduce some of the settings that are now, de facto, shared among all the research communities researching continual learning.

3.1.2 Task-Incremental vs Class-Incremental

Assuming an NC-type task flow, two sub-settings have been widely adopted by the research community and are well-defined.
These are the Task-Incremental (TI) setting and the Class-Incremental (CI) setting.

Task-Incremental
In the Task-Incremental scenario, sometimes also referred to as the multi-head or task-aware (TAw) scenario, learning happens sequentially, but at test time the learner also has access to the task label. This scenario is known as multi-head because a typical learning system can potentially dedicate a particular subsystem to each task, which can be specifically queried at test time thanks to knowledge of the task label. Typically the subsystem is a classifier head on top of a backbone. More formally, we consider task-incremental classification problems where at training time the learner has access to:

D_t = {(x_1, y_1, z_1), (x_2, y_2, z_2), . . . , (x_{m_t}, y_{m_t}, z_{m_t})}

while at test time the learner has access to:

D_t = {(x_1, z_1), (x_2, z_2), . . . , (x_{m_t}, z_{m_t})}

where x_i are the input features of a training sample, y_i ∈ {0, 1}^{N_t} is a one-hot class ground-truth label vector corresponding to x_i, and z_i ∈ {0, 1}^{|T|} is a one-hot task ground-truth label vector. In a nutshell, during training for task t the learner only has complete access to D_t; we then assume a reset signal among tasks, i.e. C_i ∩ C_j = ∅ if i ≠ j, and at test time the learner has access to the patterns and their task labels.

Class-Incremental
Instead, in the class-incremental scenario, also known as single-head or task-agnostic (TAg), the system has access to both task and class labels during training time, but at test time it only has raw data.
This constitutes a harder problem, but also a more realistic scenario. More formally, we consider class-incremental classification problems where at training time the learner has access to:

D_t = {(x_1, y_1, z_1), (x_2, y_2, z_2), . . . , (x_{m_t}, y_{m_t}, z_{m_t})}

while at test time the learner has access only to:

D_t = {x_1, x_2, . . . , x_{m_t}}

where x, y, and z are defined as in the TAw setting above. Although TAw scenarios are more interesting from a pure machine learning perspective, the TAg setting is more realistic.
For example, let us draw an analogy and consider a baby as our incremental algorithm. We want to teach the baby to recognize elements coming from a particular environment, for example kitchen accessories. Here the task label would be 'kitchen'. After the learning process has successfully terminated, whenever we ask the baby to recognize a fork, we do not need to provide a hint about the task (kitchen); in fact, the information of where the concept was learned should be irrelevant. This is also important because several objects can appear in, and could be part of, several environments (tasks). For example, scissors can be found in the kitchen, but also in a studio. Knowledge itself should therefore be independent of the context where it is learned, and we think that the class-incremental setting provides a more useful challenge.
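The test-time difference between the two settings can be made concrete with per-task classifier heads; the head layout and score shapes below are illustrative assumptions:

```python
import numpy as np

def predict_task_aware(head_scores, z):
    """TAw / multi-head: the one-hot task label z selects a single head,
    and the prediction is an argmax over that head's classes only."""
    task = int(np.argmax(z))
    offset = sum(len(h) for h in head_scores[:task])   # global class index
    return offset + int(np.argmax(head_scores[task]))

def predict_task_agnostic(head_scores):
    """TAg / single-head: no task label at test time, so the argmax runs
    over all classes seen so far."""
    return int(np.argmax(np.concatenate(head_scores)))
```

With scores [0.1, 0.9] on the first head and [2.0, 0.2] on the second, a task label pointing at the first task forces the TAw prediction to class 1, while the TAg prediction picks class 2 from the other head; this freedom to confuse classes across tasks is exactly why the task-agnostic setting is harder.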
3.2 Baselines

In this chapter we will present the principal naive approaches and give an overview of the state of the art. In particular, we will introduce the cumulative and the finetuning methods, which constitute, respectively, the upper and the lower bound used to evaluate continual learning strategies.

Figure 3.2: Depiction of the Cumulative/Joint approach for continual learning. The model is trained with all the data up to the current task ti. The updates flow in the backbone and in all the heads up to hi.

Moreover, we consider our model to be composed of a backbone (or feature extractor) and a dedicated classifier (head) for each task.
We do so in light of the majority of the works in continual learning and computer vision, which share this very structure.

3.2.1 Cumulative

To evaluate a continual learning algorithm we need an optimal method that acts as an upper bound. The cumulative strategy (also known as joint training) constitutes the optimal continual learning strategy, since it mimics a learner with perfect memory. Indeed, if we have perfect memory we can recall the past and not experience forgetting; to this end, a recent work [Knoblauch et al., 2020] proved theoretically that optimal continual learning requires perfect memory and is NP-hard. To have optimal memory of the past, an algorithm should be able to save all the data it has seen.
This is a very inconvenient requirement and must be avoided when developing real lifelong learning systems. In fact, as the pace of real-world data generation grows, such a constraint cannot be satisfied. Training from scratch on the whole dataset could serve as an upper bound, but it does not provide an upper bound for each incremental step. To this end, the cumulative strategy accumulates all the data seen up to a certain task and trains the network from scratch, thereby providing an incremental upper bound.

Dissecting continual learning: a structural and data analysis

Figure 3.3: Depiction of the Finetuning approach for continual learning. The model is trained exclusively with the data coming from the current task ti.
The updates flow in the backbone and only in the head hi.

More formally, for the cumulative approach, the data of task i is defined to be:

ti = ⋃_{j=0}^{i} tj,  for i = 1, ..., n

which completes the incremental setting. At each time ti the model is trained on the cumulative data, and we are therefore able to define the upper-bound performance for each task i. One observation is that the cumulative performance on the last task is equivalent to the performance of a model trained with the whole data. In Figure 3.2 we depict a visual example of the cumulative approach. Here, for each task, the backbone is always updated along with the heads of competence.
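The cumulative data construction above can be sketched as follows (a toy illustration, not the thesis code; the data shapes are ours):

```python
# Cumulative/joint baseline: at step i the learner trains on the union
# t_0 ∪ t_1 ∪ ... ∪ t_i of all task datasets seen so far.

def cumulative_data(tasks, i):
    """Return the flat list of samples accumulated up to task i."""
    merged = []
    for t in tasks[: i + 1]:
        merged.extend(t)
    return merged

tasks = [[("x0", 0)], [("x1", 1)], [("x2", 2)]]   # one sample per task
assert cumulative_data(tasks, 0) == [("x0", 0)]
# The last step sees the whole dataset, matching joint training:
assert cumulative_data(tasks, 2) == [("x0", 0), ("x1", 1), ("x2", 2)]
```

Retraining from scratch on `cumulative_data(tasks, i)` at every step is what yields the per-step upper bound.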
However, the updates of the heads can also be shared among all the tasks, that is, each task's data alters the parameters of all heads. Of course, this design choice does not help prevent forgetting; instead, it allows the disruption of consolidated knowledge, and we will not consider this case (the same holds for finetuning).

3.2.2 Finetuning

We previously saw the upper bound for CL, that is, the optimal continual learning approach for a benchmark. Now we introduce the finetuning approach, which constitutes the lower-bound methodology. Although one can argue that a random classifier would be the true lower bound, in practice we consider finetuning, which lacks any forgetting-prevention mechanism.
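A minimal sketch of this lower bound (our toy illustration, not the thesis code), where a single shared parameter stands in for the network weights:

```python
# Finetuning baseline: each task's data overwrites the shared parameter,
# so the fit to earlier tasks degrades — catastrophic forgetting in miniature.

def fit_mean(xs):
    """Stand-in for gradient training: fit the data by its mean."""
    return sum(xs) / len(xs)

def finetuning(task_streams):
    w = 0.0
    for xs in task_streams:   # sequential stream, no replay, no regularizer
        w = fit_mean(xs)      # the update ignores all previous tasks
    return w

tasks = [[0.0, 0.0], [10.0, 10.0]]
w = finetuning(tasks)
assert w == 10.0              # fits the last task...
assert abs(w - 0.0) == 10.0   # ...but is now far from task 0's data
```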
In fact, it is equal to the practice of transfer learning among subsequent tasks, and it measures the base resilience of the model against incremental scenarios. We can also consider it a baseline to assess the generalization capabilities of a model. A depiction of the method is given in Figure 3.3. Here the model is trained sequentially: each task head is updated with the data of its competence and, as in the cumulative approach, the backbone is always updated.

3.3 State-of-the-art

In the following sections we will introduce the main categorizations of the approaches proposed by the community. In particular, we will explain the core mechanism and show the pros and cons of each category.
Although there is no absolute preferred solution, some approaches are more explored than others and show more promising results.

3.3.1 Structural-based

Structural-based approaches, also known as architectural approaches or parameter-isolation methods, fight forgetting by altering the structural composition of the network itself. In particular, structural approaches instantiate dedicated modules as they experience new tasks. The first work falling in this category is perhaps Progressive Neural Networks (PNN) [Rusu et al., 2016], where the network is augmented with new connections spanning both height-wise and width-wise. In the task-aware setting, this approach constitutes a convenient and naive solution to fight catastrophic forgetting.
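The reliance of parameter isolation on the task label can be sketched as follows (our toy illustration; the routing scheme is a deliberate simplification of methods like PNN):

```python
# Parameter isolation with one dedicated head per task: given the task
# label z at test time (TAW) the right sub-module is selected by lookup;
# without it (TAG) the lookup is impossible and another mechanism is needed.

heads = {}                        # task id -> dedicated classifier

def learn_task(task_id, classes):
    heads[task_id] = classes      # stand-in for training a new head

def predict(x, task_id=None):
    if task_id is None:           # TAG: no task label available
        raise ValueError("task-agnostic inference needs another mechanism")
    return heads[task_id][0]      # TAW: route to the dedicated head

learn_task(0, ["cat", "dog"])
learn_task(1, ["car", "truck"])
assert predict("img", task_id=1) == "car"
```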
In fact, having the task label at test time allows us to correctly determine a dedicated subnetwork. Instead, in the task-agnostic setting, we would not be able to select such a submodule. We see very few structural-based approaches tackling the class-incremental setting due to the aforementioned limitation [Lee et al., 2020, Rajasegaran et al., 2019]. That said, structural approaches can be subdivided into Fixed Architecture (FA) and Dynamic Architecture (DA). FA only activates relevant parameters for each task without modifying the architecture [Mallya and Lazebnik, 2018, Kirkpatrick et al., 2017], while DA adds new parameters for new tasks while keeping old parameters unchanged [Yoon et al., 2018, Rusu et al., 2016]. Although architectural methods are very intuitive, they are bulky. In fact, the major drawbacks are the expansion of the parameters, which can result in a memory-intensive method (DA), or the architectural limitation of a fixed number of parameters that can be saturated (FA).

Figure 3.4: Architectural approaches for Continual Learning alter the structural properties of the network itself.

3.3.2 Regularization-based

In parameter-based approaches, also known as weight-regularization or data-regularization approaches, forgetting is handled with procedures that regularize the parameter updates. Among the most famous are Elastic Weight Consolidation (EWC) [Kirkpatrick et al., 2017] and Synaptic Intelligence (SI) [Zenke et al., 2017]. EWC was the first regularization-based approach using second-order information. In particular, the procedure regularizes the updates through the Fisher information, which is computed at each parameter update.

Figure 3.5: Regularization approaches for Continual Learning alter only the parameter properties of the network.

In this category we can also find Learning without Forgetting (LwF) [Li and Hoiem, 2017], which is one of the most influential methods in the continual learning literature. LwF uses Knowledge Distillation [Hinton et al., 2015] on the logits of the network.
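An LwF-style distillation term can be sketched as follows (our simplified illustration of temperature-softened distillation on logits, not the authors' code; the temperature value is arbitrary):

```python
# Distillation on logits: the old model's outputs on the *current* data act
# as soft targets, pulling the new model toward its past behavior.

import math

def softmax(logits, T=2.0):
    """Temperature-softened softmax; T > 1 flattens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(new_logits, old_logits, T=2.0):
    """Cross-entropy between softened old (teacher) and new (student) outputs."""
    p_old = softmax(old_logits, T)
    p_new = softmax(new_logits, T)
    return -sum(po * math.log(pn) for po, pn in zip(p_old, p_new))

# Matching the old model's outputs gives a lower loss than drifting away:
same = distillation_loss([2.0, 0.5], [2.0, 0.5])
drift = distillation_loss([-2.0, 3.0], [2.0, 0.5])
assert same < drift
```

In practice this term is added to the ordinary classification loss on the new task's data, weighted by a trade-off coefficient.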
The main strength of LwF lies in the fact that it does not use previously stored examples while still being purely data-driven. In particular, by storing the old model at time (t − 1), the method can distill old knowledge by forwarding the current data to the old model. Since the introduction of LwF, KD has been widely adopted by the continual learning community as part of new methodologies; among the works we report [Douillard et al., 2020, Rebuffi et al., 2017, Buzzega et al., 2020, Pourkeshavarz and Sabokrou, 2022, Joseph et al., 2021, Wu et al., 2019, Banerjee et al., 2021, Javed and Shafait, 2018, Ahn et al., 2021, Dhar et al., 2019], but we are aware of many others that we do not report for brevity. The main strength of regularization-based approaches lies in their data/architecture constraint-free nature. In fact, they usually come with an underlying mathematical justification. This property surely allows a more principled continual learning strategy, but it can make the learning procedure cumbersome: computing second-order information or estimating gradient directions might slow down the learning while hindering it.

3.3.3 Rehearsal-based

In rehearsal-based approaches (or data-replay approaches) the main mechanism exploited to overcome forgetting lies in the usage of a replay buffer for old exemplars. The methods falling under this category dedicate a memory cache to storing data examples encountered during the incremental training, i.e. the system samples and stores images experienced in previous tasks. We can think of the buffer as long-term memory. In fact, what typically happens is that the memory is queried to augment the task at hand, that is, we retrieve and inject old examples into the current data batch. This mechanism prevents forgetting by allowing the network to directly recall past examples; a visual depiction can be seen in Figure 3.6. Perhaps the most famous work among rehearsal-based approaches is Experience Replay (ER) [Rolnick et al., 2019]; inspired by the Reinforcement Learning community, its strategy is to replay data by randomly selecting old examples. Its evolution, Maximally Interfered Retrieval (ER-MIR) [Aljundi et al., 2019a], proposed a controlled sampling of the replays. Specifically, they retrieve the samples which are most interfered with, i.e. those whose prediction would be most negatively impacted by the foreseen parameter update. Another famous method is Gradient Episodic Memory (GEM) [Lopez-Paz and Ranzato, 2017], in which the authors devised a system where the gradient update on the replay examples should follow the original direction. A closely related mechanism is generative replay (GEN) [Shin et al., 2017, van de Ven and Tolias, 2018, Wu et al., 2018].

Figure 3.6: Rehearsal approaches for Continual Learning store old patterns to augment the data of the current task.

In this approach, old data is recorded in a buffer and then compressed; after that, a generative model such as a GAN [Goodfellow et al., 2014] generates a synthetic version of the old distribution and augments the data of the current task. The main disadvantages of generative replay are that it takes a long time to train and that it does not constitute a viable option for more complex datasets, given the current state of deep generative models. Another approach, devised by [Liu et al., 2020a], tries to overcome such limitations by generating intermediate features instead of the original data, aiming to decrease the computational complexity of the generation procedure. The pros of rehearsal-based approaches are their simplicity and effectiveness. In fact, the methods with the best performances in continual learning exploit exemplars, as shown in the challenge review [Lomonaco et al., 2022], where the best approaches used exemplars. The drawback of rehearsal continual learning is the usage of a memory buffer, which can be saturated as the number of tasks to be learned grows. To overcome this drawback, some methods propose the usage of representative exemplars [Hayes et al., 2019] and herding [Liu et al., 2020b], techniques aimed at reducing the amount of memory required. Here, an interesting work (GDumb), proposed by [Prabhu et al., 2020], offers a simple baseline for rehearsal systems and questions the advancements of continual learning research itself due to its outstanding performance. Besides its performance, the system is very simple. In particular, the model samples data as it experiences the stream of incoming task data.
It does so until it fills a rehearsal buffer, taking care to balance the proportions among classes. When the task data stream ends, the dumb learner (a simple MLP or CNN) is trained only on the buffer data. GDumb achieves state-of-the-art performances.

Chapter 4

Works

4.1 Smaller is Better: An Analysis of Instance Quantity/Quality Trade-off in Rehearsal-based Continual Learning

We begin our dissection by focusing on rehearsal-based methods, i.e., solutions where the learner exploits memory to revisit past data. Due to their prominent performance and widespread usage, rehearsal systems are nowadays one of the preferred countermeasures to fight catastrophic forgetting.
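A class-balanced rehearsal buffer of the kind GDumb relies on can be sketched as follows (our simplified illustration, not the GDumb code; the eviction rule is an assumption for brevity):

```python
# Class-balanced buffer: samples flow in from the task stream; once the
# buffer is full, a sample from the most-represented class is evicted so
# every class keeps a similar share of the fixed-capacity memory.

from collections import defaultdict

def balanced_buffer(stream, capacity):
    buffer = defaultdict(list)             # class label -> stored samples
    for x, y in stream:
        size = sum(len(v) for v in buffer.values())
        if size < capacity:
            buffer[y].append(x)
        else:
            biggest = max(buffer, key=lambda c: len(buffer[c]))
            if len(buffer[biggest]) > len(buffer[y]) + 1:
                buffer[biggest].pop(0)     # make room for the rarer class
                buffer[y].append(x)
    return buffer

stream = [(i, 0) for i in range(8)] + [(i, 1) for i in range(8)]
buf = balanced_buffer(stream, capacity=4)
assert sum(len(v) for v in buf.values()) == 4
assert len(buf[0]) == len(buf[1]) == 2     # classes end up balanced
```

After the stream ends, the learner would be trained only on the contents of `buf`, which is what makes GDumb a "dumb" yet surprisingly strong baseline.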
So far, the community's focus has been on finding smart methodologies to improve incremental performance. Instead, we ask what happens if we boost the effective capacity of the memory buffer: how much impact does altering the data storable in memory have? In this study, we propose an analysis of the memory quantity/quality trade-off, adopting various data reduction approaches to increase the number of instances storable in memory. By applying complex instance compression techniques to the original data, such as deep encoders, but also trivial approaches such as image resizing and linear dimensionality reduction, we offer a simple study of the trade-off. We then introduce Random Projections as a compression scheme and offer a simple pipeline based on Extreme Learning Machines for resource-constrained continual learning, an appealing scenario where computational and memory resources are limited.
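A back-of-the-envelope computation shows why quantity can dominate quality under a fixed budget. The 600 KiB figure matches the smallest budget used in our CIFAR10 experiments; the 8×8 target size is an illustrative choice:

```python
def instances_per_budget(budget_bytes, h, w, c, bytes_per_value=1):
    """How many uint8 images of shape (h, w, c) fit in a fixed memory budget."""
    return budget_bytes // (h * w * c * bytes_per_value)

budget = 600 * 1024                             # 600 KiB buffer
full = instances_per_budget(budget, 32, 32, 3)  # raw CIFAR-sized images -> 200
tiny = instances_per_budget(budget, 8, 8, 3)    # extreme 8x8 resize     -> 3200
```

Shrinking each instance by a factor of 16 stores 16 times as many examples in the same buffer.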
Continual Learning (CL) is increasingly at the center of attention of the research community due to its promise of adapting to the dynamically changing environment resulting from the huge increase in size and heterogeneity of the data available to learning systems. It has found applications in several domains. Its prime application, and still its most active field, is computer vision, and in particular object detection [Gidaris and Komodakis, 2018, Thrun, 1995b, Parisi et al., 2019]; however, it has since found applications in several other domains, such as segmentation [Cermelli et al., 2020, Michieli and Zanuttigh, 2019, Yu et al., 2020a], where each segmented class has to be learned in an incremental fashion, as well as in other fields, among which we mention Reinforcement Learning (RL) [Xu and Zhu, 2018, Lomonaco et al., 2020] and Natural Language Processing (NLP) [Gupta et al., 2020, Sun et al., 2020, de Masson d'Autume et al., 2019]. Ideally, the behaviour of CL systems should resemble human intelligence in its ability to incrementally learn in a dynamical environment [Hadsell et al., 2020], with minimal waste of resources, spatial or computational. The main problem encountered by these systems resides in the famous stability-plasticity dilemma of neuroscience, resulting in so-called catastrophic forgetting [McCloskey and Cohen, 1989], a phenomenon where new information dislodges or corrupts previously learned knowledge, resulting in the deterioration of the ability to solve previously learned tasks. Solutions to this problem typically incur an increase in resource requirements [Lomonaco et al., 2022], both due to CL's very nature (the more tasks arrive, the more data the agent needs to process) and due to the nature of the systems that try to solve it, both in the increased complexity of the typically deep learning models and in the time and space requirements of continuously learning multiple models. This problem becomes particularly evident in rehearsal-based methods. Rehearsal-based methods, i.e., approaches that leverage a memory buffer to cope with catastrophic forgetting, are emerging as the most effective methodology to tackle CL. Their performance, backed by extensive empirical evidence [Lomonaco et al., 2022], also finds a theoretical justification in Knoblauch and co-workers' finding that optimally solving CL would require perfect memory of the past [Knoblauch et al., 2020].
In fact, if we were able to completely re-train a new system with all previous data every time a new task arrives, Continual Learning would not appear to be any different from any other learning problem. However, this approach is both spatially and computationally infeasible for most real-world problems, and we can argue that it is precisely these memory and computational limitations that characterize CL and distinguish it from other learning problems. Our investigation aims to analyze the trade-offs in limited-memory CL systems.

Figure 4.1: Our work analyzes the optimal instance quantity/quality trade-off in memory buffers of rehearsal-based Continual Learning systems. We carry out our analysis by applying several dimensionality reduction schemes to increase the quantity of storable data.

In particular, we focus on the quantity/quality trade-off for memory instances.
We do so through the analysis of several dimensionality-reduction schemes applied to data instances, which allows us to increase the number of examples storable in our fixed-capacity memory. In particular, we adopted deep learning encoders such as a variation of ResNet18 [He et al., 2016] and Variational Autoencoders (VAE) [Kingma and Welling, 2014], the simple yet surprisingly effective extreme resizing of image data, and, lastly, we explored Random Projections for dimensionality reduction. The latter scheme turns out to be very effective in low-memory scenarios while also reducing the model's parameter complexity. Indeed, we will show that a variation of Extreme Learning Machines (ELM) offers a simple yet effective solution for resource-constrained CL systems. Our analysis focuses on computer vision tasks and uses GDumb [Prabhu et al., 2020] as a rehearsal baseline.
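The extreme-resizing option can be sketched in a few lines. Here block-averaging stands in for the standard bilinear interpolation we use, as a dependency-free approximation rather than the exact resizing routine:

```python
import numpy as np

def shrink(img, factor):
    """Downsample an (H, W, C) uint8 image by block-averaging.
    A dependency-free stand-in for bilinear resizing used as memory compression."""
    h, w, c = img.shape
    assert h % factor == 0 and w % factor == 0
    blocks = img.reshape(h // factor, factor, w // factor, factor, c).astype(np.float32)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
tiny = shrink(img, 4)  # 32x32x3 -> 8x8x3: 16x more instances fit in the buffer
```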
GDumb is a model that has been proposed to question the community's progress in CL: despite its outstanding simplicity, it is still able to provide state-of-the-art performance. Further, its simplicity also results in high versatility, as it proposes a general CL formulation comprising all task formulations in the literature. GDumb is fully rehearsal-based, and it is composed of a greedy sampler and a dumb learner; that is, the system does not introduce any particular strategy in the selection of replay data. It therefore represents the ideal candidate method to carry out our analysis. The experimental findings highlighted in this study are multiple: first, we show that when the memory buffer is fixed and extreme resizing of instance data is applied, we can easily push the state of the art of CL rehearsal systems by a minimum of +6% up to a maximum of +67% in terms of final accuracy. This surprising result suggests that the optimal trade-off between data quantity and quality is severely skewed toward the former, and that in general the informational content required to correctly classify images in standard datasets is relatively low. Then, we analyze the resource consumption of rehearsal CL systems as we saturate the rehearsal buffer, and show that ELMs offer a clear solution for CL systems constrained to very-low-resource environments.

Figure 4.2: Depiction of the three main dimensionality reduction techniques analyzed. In (a), random projection (RP): each image is vectorized (v_i) and then orthogonally projected through a random matrix Q into v'_i. In (b), the encoder φ outputs a latent vector v'_i (such as in VAEs) or a noise-free / shrunken image x'_i (as in CutR). In (c), we adopt a simple image resizing strategy through standard bilinear interpolation.

Related Works

Following some recent surveys [Parisi et al., 2019, Hadsell et al., 2020, Mundt et al., 2020], we divide CL approaches into three main categories: regularization-based approaches, data rehearsal-based approaches, and architectural-based approaches. Although a few novel theoretical frameworks based on meta-learning have been introduced recently [Hadsell et al., 2020], the majority still fall within these categories (or in a mixture of them).

Regularization-based approaches address catastrophic forgetting by controlling each parameter's importance through the subsequent tasks, by means of the addition of a finely tuned regularizing loss criterion. Elastic Weight Consolidation (EWC) [Kirkpatrick et al., 2017] was the first well-established approach of this class. It uses Fisher information to estimate each parameter's importance while discouraging updates to the parameters with the greatest task specificity.
Learning without Forgetting (LwF) [Li and Hoiem, 2017] exploits the concept of "knowledge distillation" to preserve and regularize the output for old tasks. More recently, Learning without Memorizing (LwM) [Dhar et al., 2019] adds to the loss an information-preserving penalty exploiting attention maps, Continual Bayesian Neural Networks (UCB) [Ebrahimi et al., 2020] adapt the learning rate according to the uncertainty defined in the probability distribution of the weights in the network, and Pomponi et al. [Pomponi et al., 2020] propose a regularization of the network's latent embeddings.

Rehearsal-based

Rehearsal-based solutions allocate a memory buffer of a predefined size and devise smart schemes to store previously used data to be replayed in the future, i.e., to be added to future training samples. One of the first methodologies developed is Experience Replay (ER) [Rolnick et al., 2019], which stores a small subset of previous samples and uses them to augment the incoming task data. Aljundi et al. [Aljundi et al., 2019a] propose an evolution of ER which takes into consideration Maximal Interfered Retrieval (ER-MIR). Their proposal lies between rehearsal and regularization methods; its strategy is to retrieve the samples that are most interfered, i.e., those whose prediction will be most negatively impacted by the foreseen parameter update. Among other mixed approaches we have Rebuffi et al. [Rebuffi et al., 2017], who propose a method which simultaneously learns strong classifiers and a data representation (iCaRL). Gradient Episodic Memory (GEM) [Lopez-Paz and Ranzato, 2017] and its improved version Averaged-GEM (AGEM) [Chaudhry et al., 2019a] exploit the memory buffer to constrain the parameter updates and store the previous samples as trained points in the parameter space, while Gradient-based Sample Selection (GSS) [Aljundi et al., 2019a] diversifies/prioritizes the gradients of the examples stored in the replay memory. Finally, a recent method proposed by Shim et al. [Shim et al., 2021] scores memory data samples according to their ability to preserve latent decision boundaries (ASER).
Architectural-based

Architectural methods alter their parameter space for each task. The most influential architectural-based approach is arguably Progressive Networks (PN) [Rusu et al., 2016], where a dedicated network is instantiated for each task, while Continual Learning with Adaptive Weights (CLAW) [Adel et al., 2020] grows a network that adaptively identifies which parts to share between tasks in a data-driven approach. Note that, in general, approaches that use incremental modules suffer from the lack of task labels at test time, since there is no easy way to decide which module to adopt.

Method

Before introducing the dimensionality reduction approaches adopted in our quantity/quality analysis, we have to introduce the CL scenario considered and its task composition. Unfortunately, the community has not yet converged to a unique standard way to define a CL setting [van de Ven and Tolias, 2019].
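For concreteness, the task composition we consider (disjoint sets of new classes per task) can be sketched as follows; `make_nc_tasks` is a hypothetical helper, assuming the benchmark's classes divide evenly among tasks:

```python
def make_nc_tasks(labels, n_tasks):
    """Partition a benchmark's class set into n_tasks disjoint
    'new classes' (NC) tasks. Illustrative sketch: `labels` is the
    list of class ids over the whole dataset."""
    classes = sorted(set(labels))
    per_task = len(classes) // n_tasks
    return [set(classes[i * per_task:(i + 1) * per_task]) for i in range(n_tasks)]

tasks = make_nc_tasks(list(range(10)), n_tasks=5)  # e.g. CIFAR10 -> 5 tasks, 2 classes each
```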
Here we adopt GDumb's formulation, which is the most general one and specifically resembles Lomonaco and Maltoni's formulation [Lomonaco and Maltoni, 2017]. In particular, we focus on the new-class (NC)-type scenario [Lomonaco and Maltoni, 2017], where each task Ti introduces data instances of C_Ti new, previously unseen, classes. More formally, a dataset benchmark D, containing examples from C_D classes, is divided into n tasks. Each task Ti, with i = 1 . . . n, carries a set of examples Ti = {X_Ti, Y_Ti} whose classes are previously unseen, i.e., Y_Tj ∩ Y_Ti = ∅ for j = 1 . . . i − 1, with Y_Ti = {c_1 . . . c_Ti}. In other words, the model experiences a shift in the distribution of the data as we train on each new task. We also consider the more realistic class-incremental (CI) scenario, that is, we are not allowed to know task labels at test time.

As incremental approach we use the recently proposed GDumb, which is composed of a simple learner and a greedy balancer. That is, given a fixed amount of memory M, each instance of task data is randomly sampled in order to balance class instances in the memory, so that, at the end of the task experience Ti, the memory contains an equal number of instances of all previously encountered classes, i.e., each class has ⌊M/C_D∗i⌋ instances in memory, C_D∗i being the number of classes encountered up to task Ti. Besides providing state-of-the-art performance, GDumb has been proposed as a standard baseline to question our progress in continual learning research, since after experiencing a task the simple learner (such as a ResNet18 [He et al., 2016] or an MLP) is trained only with memory data, making GDumb a fully rehearsal-based approach with random filtering of incoming data, and thus the ideal candidate for carrying out our study. In the following paragraphs, we briefly describe all the strategies adopted for dimensionality reduction.

Random Projections (RP)

Extreme Learning Machines (ELM) [Huang et al., 2006] are a set of algorithms that exploit random projections as a dimensionality reduction technique to preserve computational and spatial resources while learning. ELM were introduced in 2006 and have recently found application in neuroscience [Qureshi et al., 2016, Lama et al., 2017] and in other problems such as molecular biology [Chen et al., 2020]. The idea can be roughly described as a composition of two modules, where the first performs a random projection of the data and the second is a learning model. The appealing property of RP lies in the Johnson-Lindenstrauss lemma [Johnson, 1984], which states that, given a set of points in a high-dimensional space, there is a linear map to a subspace that roughly preserves the distances between data points up to some approximation factor. The Johnson-Lindenstrauss lemma guarantees that we can obtain a low-distortion dimensionality reduction by multiplying each instance vector by a semi-orthogonal random matrix Q_{m×n} in the (m, n) Stiefel manifold. More formally, let x_i be an image of the current task with width, height and number of channels w, h, and c respectively; then the size of x_i is n = hwc.
We can consider its vectorization as v_i ∈ R^n and its compressed representation

    v'_i = Q v_i   s.t.   Q Q^T = I_m                                    (4.1)

with v'_i ∈ R^m. The usage of ELM unlocks two main advantages. First, it allows us to exploit the dimensionality reduction to increase the number of data instances storable in the memory buffer. Secondly, and more importantly, it allows us to use models with significantly fewer parameters. On the other hand, the approach loses coordinate contiguity and, with that, shift covariance, rendering convolutional approaches inapplicable. After the random projection, data instances are forwarded to the greedy sampler of GDumb to fill the memory M.
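As a minimal sketch of Eq. (4.1) (not the implementation used in our experiments), a semi-orthogonal Q with Q Q^T = I_m can be drawn via a QR decomposition of a Gaussian matrix; the function name and the m = 128 target dimension here are illustrative choices:

```python
import numpy as np

def random_semi_orthogonal(m, n, seed=0):
    """Draw a random m x n matrix Q with orthonormal rows (Q Q^T = I_m),
    i.e. a point on the (m, n) Stiefel manifold, via QR decomposition."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((n, m))      # tall Gaussian matrix
    Q_tall, _ = np.linalg.qr(G)          # n x m with orthonormal columns
    return Q_tall.T                      # m x n with orthonormal rows

# Compress a flattened 32x32x3 "image" (n = 3072) down to m = 128.
n, m = 32 * 32 * 3, 128
Q = random_semi_orthogonal(m, n)
v = np.random.default_rng(1).standard_normal(n)
v_compressed = Q @ v                     # v'_i = Q v_i, now in R^128

assert np.allclose(Q @ Q.T, np.eye(m), atol=1e-8)
```

Each compressed vector v'_i takes m floats instead of n, which is what frees memory-buffer slots in the scheme above.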
Then, we perform a rehearsal training with any MLP-like architecture, resulting in an order-of-magnitude reduction in the number of parameters needed to process visual data, allowing the usage of CL rehearsal-based solutions in very low resource scenarios.

Deep Encoders. Deep encoders are neural models φ that take as input an image x_i and, depending on the structure of the model, can output either a latent vectorial representation v'_i, or a squared feature map which we consider as a noise-free shrunken image x'_i. Figure 4.2 (b) visually reports the two possible encoding scenarios. In this work, we adopt a Variational AutoEncoder (VAE) [Kingma and Welling, 2014] for the first case and a pretrained ResNet18 [He et al., 2016] cut up to a predefined block (CutR) as a prototype for the second.

                                     CIFAR10
Method                               Acc@600KiB   Acc@1.5MiB   Acc@3MiB
EWC [Kirkpatrick et al., 2017]       17.9 ± 0.3   17.9 ± 0.3   17.9 ± 0.3
GEM [Lopez-Paz and Ranzato, 2017]    16.8 ± 1.1   17.1 ± 1.0   17.5 ± 1.6
AGEM [Chaudhry et al., 2019a]        22.7 ± 1.8   22.7 ± 1.9   22.6 ± 0.7
iCARL [Rebuffi et al., 2017]         28.6 ± 1.2   33.7 ± 1.6   32.4 ± 2.1
ER [Rolnick et al., 2019]            27.5 ± 1.2   33.1 ± 1.7   41.3 ± 1.9
ER-MIR [Aljundi et al., 2019a]       29.8 ± 1.1   40.0 ± 1.1   47.6 ± 1.1
ER5 [Aljundi et al., 2019a]          —            —            42.4 ± 1.1
ER-MIR5 [Aljundi et al., 2019a]      —            —            49.3 ± 0.1
GSS [Aljundi et al., 2019c]          26.9 ± 1.2   30.7 ± 1.2   40.1 ± 1.4
ASER [Shim et al., 2021]             27.8 ± 1.0   36.2 ± 1.1   43.1 ± 1.2
ASERµ [Shim et al., 2021]            26.4 ± 1.5   36.3 ± 1.2   43.5 ± 1.4
GDumb [Prabhu et al., 2020]          35.0 ± 0.6   45.8 ± 0.9   61.3 ± 1.7
Resize (8 × 8)                       55.5 ± 0.2   64.5 ± 0.2   73.1 ± 0.2
ELM (128)                            43.0 ± 0.3   47.1 ± 0.2   50.0 ± 0.2
CutR (8 × 8)                         54.4 ± 0.2   60.9 ± 0.2   71.6 ± 0.6

Table 4.1: CIFAR10 experiments (5 runs)

VAE. Variational Autoencoders [Kingma and Welling, 2014] have been introduced as an efficient approximation of the posterior for arbitrary probabilistic models.
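As a concrete illustration, the standard VAE objective combines a reconstruction term with a KL term pulling q(z|x) toward N(0, I); this numpy sketch assumes a Gaussian encoder with diagonal covariance (for which the KL has a closed form) and an MSE reconstruction loss, both common but not mandated choices:

```python
import numpy as np

def vae_loss(x, x_hat, mu, log_var):
    """Reconstruction + KL objective for a Gaussian encoder
    q(z|x) = N(mu, diag(exp(log_var))) and prior p(z) = N(0, I)."""
    rec = np.sum((x - x_hat) ** 2)                           # L_r(x_i, x_hat_i)
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return rec + kl

# When the encoder already outputs the prior (mu = 0, log_var = 0),
# the KL term vanishes and only the reconstruction error remains.
x = np.array([0.5, -0.2])
loss = vae_loss(x, x, np.zeros(3), np.zeros(3))   # -> 0.0
```

Any mismatch between q(z|x) and the standard Gaussian makes the KL term strictly positive, which is what shapes the latent space used for encoding.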
A VAE is essentially an autoencoder that is trained with a reconstruction error between the input and decoded data, plus a surplus loss that constitutes a variational objective term attempting to impose a normal latent-space distribution. The variational loss is typically computed through a Kullback-Leibler divergence between the latent-space distribution and the standard Gaussian; the total loss can be summarized as follows:

    L = L_r(x_i, x̂_i) + L_KL(q(z_i|x_i), p(z_i))                        (4.2)

given an input data image x_i, the conditional distribution q(z_i|x_i) of the encoder, the standard Gaussian distribution p(z_i), and the reconstructed data x̂_i. We use the encoding part of a VAE pretrained on a dataset by feeding each incoming image and retrieving the vectorial output representation v'_i; the data point is then forwarded to GDumb's greedy sampler to feed M.

CutR. As our second encoding approach, we use a pretrained ResNet18 [He et al., 2016] cut up to a predefined block.
ResNet models are Convolutional Neural Networks (CNNs) introducing skip connections between convolutional blocks to alleviate the so-called vanishing gradient problem [Hochreiter, 1998] afflicting deep architectures. The idea is to use the cut ResNet18 as a filtering module that outputs a smaller feature map, giving us x'_i. In fact, we cut the network towards later blocks, since neurons in the last layers encode more structured semantics with respect to the early ones [Olah et al., 2017]. Therefore, we are able to extract semantic knowledge from unseen images leveraging transfer learning [Tan et al., 2018], that is, we exploit the ability of a model to generalize over unseen data. We refer to this method with the name CutR(esnet18).

                                     ImageNet100                CIFAR100
Method                               Acc@12MiB    Acc@24MiB     Acc@3MiB      Acc@6MiB
AGEM [Chaudhry et al., 2019a]        7.0 ± 0.4    7.1 ± 0.5     9.05 ± 0.4    9.3 ± 0.4
ER [Rolnick et al., 2019]            8.7 ± 0.4    11.8 ± 0.9    11.02 ± 0.4   14.6 ± 0.4
EWC [Kirkpatrick et al., 2017]       3.2 ± 0.3    3.1 ± 0.3     4.8 ± 0.2     4.8 ± 0.2
GSS [Aljundi et al., 2019c]          7.5 ± 0.5    10.7 ± 0.8    9.3 ± 0.2     10.9 ± 0.3
ER-MIR [Aljundi et al., 2019a]       8.1 ± 0.3    11.2 ± 0.7    11.2 ± 0.3    14.1 ± 0.2
ASER [Shim et al., 2021]             11.7 ± 0.7   14.4 ± 0.4    12.3 ± 0.4    14.7 ± 0.7
ASERµ [Shim et al., 2021]            12.2 ± 0.8   14.8 ± 1.1    14.0 ± 0.4    17.2 ± 0.5
GDumb [Prabhu et al., 2020]          13.0 ± 0.3   21.6 ± 0.3    17.1 ± 0.2    25.7 ± 0.7
Resize (8 × 8)                       33.6 ± 0.2   33.6 ± 0.3    38.5 ± 0.4    45.1 ± 0.2
ELM (128)                            13.3 ± 0.2   15.4 ± 0.4    22.4 ± 0.3    25.7 ± 0.3
CutR (8 × 8)                         36.25 ± 0.4* 36.27 ± 0.5*  32.6 ± 0.6    37.1 ± 0.2

Table 4.2: ImageNet and CIFAR100 experiments (5 runs)
We use CutR instance encoding by feeding each image belonging to the current task and retrieving the shrunken output x′_i, which is then forwarded to the greedy sampler module of GDumb to fill the memory M. In our analysis, we adopted the less resource-hungry VAE scheme for datasets where shift covariance is not as important, such as MNIST, in which the digits are centered in the image and thus most state-of-the-art approaches use an MLP as classifier. In all other instances, we used the CutR scheme.
Resizing. We also used the simplest instance reduction approach one can think of, i.e., resizing the images to very low resolution through standard bilinear interpolation. The resized images are then fed to the sampler of GDumb to balance the classes in M, and all training and prediction is performed on the lowered-resolution images.
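The resizing scheme amounts to a plain bilinear downsample applied before an instance enters the buffer. A minimal pure-Python sketch for a single-channel image (the function name and list-of-rows representation are ours, not from the thesis):

```python
def bilinear_resize(img, new_h, new_w):
    """Downsample a single-channel image (list of rows) by bilinear interpolation."""
    h, w = len(img), len(img[0])
    out = [[0.0] * new_w for _ in range(new_h)]
    for i in range(new_h):
        # map the target row back into source coordinates
        y = i * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
        y0 = int(y)
        y1 = min(y0 + 1, h - 1)
        fy = y - y0
        for j in range(new_w):
            x = j * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
            x0 = int(x)
            x1 = min(x0 + 1, w - 1)
            fx = x - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[i][j] = top * (1 - fy) + bottom * fy
    return out

# e.g. shrink a 28x28 digit to the 8x8 target size used in the experiments
digit = [[0.0] * 28 for _ in range(28)]
tiny = bilinear_resize(digit, 8, 8)
```

Training and prediction then operate on `tiny` directly, so the classifier never sees the full-resolution instance.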
Independently of the approach adopted, all data instances are reduced before storing them in memory M; then we use GDumb's greedy sampler to select and balance class instances; and finally, we use a suitable learner to fit the memory data and assess the performance. In general, following GDumb, we adopt ResNet18 for large-scale image classification tasks for all approaches that maintain shift covariance, reverting to a simple MLP for approaches without shift covariance, like RP.

46 Chapter 4 Works

Experiments
We performed our analysis on the following standard benchmarks:
MNIST [LeCun et al., 1998]: the dataset is composed of 70000 28 × 28 grayscale images of handwritten digits, divided into 60000 training and 10000 test images belonging to 10 classes.
CIFAR10 [Krizhevsky, 2009]: consists of 60000 RGB images of objects and animals. The size of each image is 32 × 32, divided into 10 classes with 6000 images per class. The dataset is split into 50000 training images and 10000 test images.
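The reduce-then-balance pipeline can be sketched as follows. This is our own minimal reading of a GDumb-style greedy class-balanced buffer (class and method names are ours, not the reference implementation): new classes always get room, and the currently largest class is evicted to make space.

```python
import random
from collections import defaultdict

class GreedyBalancedMemory:
    """Greedy class-balanced rehearsal buffer in the spirit of GDumb's sampler."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = defaultdict(list)  # class label -> list of (reduced) instances

    def __len__(self):
        return sum(len(v) for v in self.store.values())

    def add(self, x, y):
        # target per-class share if y joins the set of seen classes
        per_class_cap = self.capacity // max(1, len(self.store) + (y not in self.store))
        if len(self) < self.capacity:
            self.store[y].append(x)          # free space: always accept
        elif len(self.store[y]) < per_class_cap:
            # full: evict a random sample from the largest class, then accept
            biggest = max(self.store, key=lambda k: len(self.store[k]))
            self.store[biggest].pop(random.randrange(len(self.store[biggest])))
            self.store[y].append(x)
```

A learner is then fit from scratch on `store`'s contents at evaluation time, as in GDumb's protocol.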
CIFAR100 [Krizhevsky, 2009]: is composed of 60000 32 × 32 RGB images subdivided into 100 classes with 600 images each. The dataset is split into 50000 training images and 10000 test images.
ImageNet100 [Deng et al., 2009]: the dataset is composed of 64 × 64 RGB images divided into 100 classes; it comprises 60000 images split into 50000 training and 10000 test.
Core50 [Lomonaco and Maltoni, 2017]: the dataset is composed of 128 × 128 RGB images of domestic objects divided into 50 classes. The set consists of 164866 images split into 115366 training and 49500 test.
Following [Prabhu et al., 2020], we use final accuracy as the evaluation metric throughout the work.
The metric is computed at the end of all tasks against a test set of never-before-seen images composed of an equal number of instances per class. This allows us to directly compare against the largest number of competitors in the literature. All the experiments have been conducted with an Intel i7-4790K CPU with 32 GB RAM and a 4 GB GeForce GTX 980 machine running PyTorch 1.8.1+cu102.
Parameter Sensitivity
In the first experiment, we compared different dimensionality reduction strategies as we altered their parameters. The analysis was conducted on three different datasets: MNIST, CIFAR10 and ImageNet100. In this evaluation we fixed the amount of memory buffer used for GDumb during rehearsal training, and we measured the final accuracy as the parameters varied for each dimensionality reduction method.

Chapter 4 47 Dissecting continual learning: a structural and data analysis
In particular, we subdivided both the MNIST and CIFAR10 datasets into 5 tasks of 2 classes each, with a 600 KiB dedicated memory buffer, while ImageNet100 was divided into 10 tasks of 10 classes each, with a 12 MiB memory buffer. Figure 4.3 plots the performance of the various schemes as we reduce the dimensionality of the instances and thus increase their number in the allocated memory. The orange line represents the performance of the resize scheme. For the MNIST dataset, we considered nine different target sizes¹ x′_i ∈ {27 × 27, 24 × 24, 20 × 20, 16 × 16, 12 × 12, 8 × 8, 4 × 4, 2 × 2, 1 × 1}. We performed the same resizing for CIFAR10 data. We did not report the CIFAR100 analysis since the data format is the same as CIFAR10 and the result would be analogous. For ImageNet100, we resized each instance to x′_i ∈ {32 × 32, 24 × 24, 16 × 16, 6 × 6, 4 × 4, 2 × 2}.
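These memory budgets translate directly into instance counts. Assuming images are stored as raw uint8 tensors (our simplifying assumption; the thesis does not spell out the storage format), the slot count for a given budget is a one-line computation:

```python
def memory_slots(budget_kib, h, w, channels=3, bytes_per_value=1):
    """How many raw uint8 images of shape (h, w, channels) fit in the buffer."""
    return (budget_kib * 1024) // (h * w * channels * bytes_per_value)

full_res = memory_slots(600, 32, 32)        # CIFAR10 at native resolution: 200 slots
resized = memory_slots(600, 8, 8)           # after the 8x8 resize: 3200 slots
imagenet = memory_slots(12 * 1024, 64, 64)  # 12 MiB of 64x64 ImageNet100 images: 1024 slots
```

The 8 × 8 resize thus multiplies the number of rehearsal instances by 16 for the same budget, which is the quantity side of the quantity/quality trade-off examined below.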
The green line of Figure 4.3 represents the deep encoders. In particular, for MNIST we used a VAE [Kingma and Welling, 2014] pretrained on KMNIST [Clanuwat et al., 2018] and analyzed the performance of GDumb with compressed instances as we altered the size of the latent embedding vector to v′_i ∈ {128, 64, 32, 16}. On the other hand, for the CIFAR10 and ImageNet100 datasets we considered different parameters for CutR. In particular, we cut the ResNet18 up to the sixth layer to get a 4 × 4 output, up to the fifth to have an 8 × 8 encoding, and lastly up to the third block to get a 16 × 16 feature map. The CutR ResNet18 has been pretrained on the complete ImageNet, thus the results on the ImageNet100 benchmark can be biased. We denote these biased results with CutR*.
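The layer cutting behind CutR can be illustrated schematically. The toy model below only tracks spatial resolution through a ResNet18-like stage layout; the stage names, downsampling factors, and the merging of conv/bn/relu into one "stem" stage are our simplification, so the indices do not match the thesis's layer counts exactly:

```python
class Stage:
    """Toy stand-in for a backbone stage; only tracks spatial downsampling."""
    def __init__(self, name, downsample):
        self.name = name
        self.downsample = downsample

    def __call__(self, hw):
        h, w = hw
        return h // self.downsample, w // self.downsample

def cut_backbone(stages, keep):
    """CutR-style truncation: keep only the first `keep` stages."""
    return stages[:keep]

def encode(hw, model):
    """Push a (height, width) shape through the (possibly truncated) backbone."""
    for stage in model:
        hw = stage(hw)
    return hw

# ResNet18-like layout (conv/bn/relu merged into one "stem" stage)
backbone = [Stage("stem", 2), Stage("maxpool", 2), Stage("layer1", 1),
            Stage("layer2", 2), Stage("layer3", 2), Stage("layer4", 2)]
```

Cutting earlier keeps a larger feature map (more spatial detail per instance) at the cost of fewer instances per budget, mirroring the 16 × 16 / 8 × 8 / 4 × 4 options above.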
Lastly, the blue line of Figure 4.3 reports the accuracy of Random Projection followed by an MLP classifier. We recall that this kind of architecture is a variation of an Extreme Learning Machine (ELM); therefore, we will refer to it with the term ELM. We analyzed the final accuracy as the size of the random projection changes; in particular, the embedding sizes considered are v′_i ∈ {512, 256, 128, 64, 32, 16} for all the datasets. For all the experiments on MNIST data, we used a 2-layer MLP with 400 hidden nodes as the learning module, while we used a ResNet18 [He et al., 2016] for all the other analyses, with the exception of the ELM scheme, which maintains the 2-layer MLP model throughout. We did not perform any hyperparameter tuning on the learning module, in accordance with the GDumb [Prabhu et al., 2020] experimental protocol.
¹Throughout the work we omit to write the channel component for brevity.

MNIST
Method                               Acc@382KiB
GEN [Hsu et al., 2018]               75.5 ± 1.3
GEN-MIR [Aljundi et al., 2019a]      81.6 ± 0.9
ER [Rolnick et al., 2019]            82.1 ± 1.5
GEM [Lopez-Paz and Ranzato, 2017]    86.3 ± 1.4
ER-MIR [Aljundi et al., 2019a]       87.6 ± 0.7
GDumb [Prabhu et al., 2020]          91.9 ± 0.5
Resize (8 × 8)                       97.2 ± 0.1
ELM (128)                            95.0 ± 0.4
VAE (32)                             94.6 ± 0.1
Table 4.3: MNIST final accuracy (5 runs) analysis as we vary the memory for all schemes considered.
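A random projection of this kind is generated once and never trained; only the MLP on top learns. A minimal sketch (the dimensions come from the text; the Gaussian 1/√d scaling and function names are our choice):

```python
import math
import random

def random_projection_matrix(in_dim, out_dim, seed=0):
    """Fixed Gaussian projection: drawn once, never trained (ELM-style)."""
    rng = random.Random(seed)
    scale = 1.0 / math.sqrt(in_dim)
    return [[rng.gauss(0.0, scale) for _ in range(in_dim)] for _ in range(out_dim)]

def project(x, weights):
    """Embed a flattened instance x into the low-dimensional rehearsal format."""
    return [sum(w * v for w, v in zip(row, x)) for row in weights]
```

Because the projection is fixed and seeded, every instance stored early in the stream is embedded in exactly the same space as instances arriving later, which is what makes the compressed buffer usable for rehearsal.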
For completeness we report the learning parameters: the system uses an SGD optimizer, a fixed batch size of 16, learning rates [0.05, 0.0005], and an SGDR [Loshchilov and Hutter, 2017] schedule with T0 = 1, Tmult = 2 and a warm start of 1 epoch. Early stopping with a patience of 1 cycle of SGDR, along with standard data augmentation (normalization of data), is used. GDumb uses cutmix [Yun et al., 2019] with p = 0.5 and α = 1.0 for regularization on all datasets except MNIST.
As we can also see from Figure 4.3, all the strategies considered unlock performance greatly above GDumb, thus suggesting that the quantity/quality trade-off is severely skewed toward quantity, since each dimensionality reduction technique greatly increases the number of data instances that can be stored in the memory buffer. It is also evident that the simple resizing strategy gives the best performance, improving GDumb by +6% on MNIST and roughly by +20% on both the CIFAR10 and ImageNet100 datasets. Moreover, we chose to consider extreme levels of encoding. We did so to find the level of compression that irreversibly corrupts spatial information and thus makes learning impossible. Surprisingly, it turns out that a 2 × 2 resizing still works on CIFAR10 data, with performance above GDumb, while a 1 × 1 resize is still better than a random classifier, whose performance would be 20% final accuracy. This is strong evidence that the amount of data storable in the memory buffer plays a central role, but also that the CIFAR10 dataset constitutes an unrealistic benchmark and should not be considered to assess novel methodologies in the future.
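The SGDR settings quoted above (learning rates [0.05, 0.0005], T0 = 1, Tmult = 2) correspond to cosine annealing with warm restarts; a compact sketch of the schedule value at a given fractional epoch (a hedged reimplementation, not GDumb's code):

```python
import math

def sgdr_lr(epoch_frac, lr_max=0.05, lr_min=0.0005, t0=1, t_mult=2):
    """Cosine-annealed learning rate with warm restarts (SGDR).

    epoch_frac is the fractional epoch since training began; cycle lengths
    are t0, t0*t_mult, t0*t_mult**2, ... so restarts fall at epochs 1, 3, 7, ...
    """
    t, t_i = epoch_frac, t0
    while t >= t_i:          # locate the current restart cycle
        t -= t_i
        t_i *= t_mult
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / t_i))
```

At each restart the rate jumps back to lr_max and decays along a cosine to lr_min over a cycle twice as long as the previous one.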
After choosing and fixing the optimal parameters for each compression scheme, we study the performance of the rehearsal system as we alter the quantity of memory allocated. In Tables 4.3 and 4.2 we compute the final accuracy for all the datasets previously considered, with the addition of CIFAR100, with an increase of 20% in performance. The amount of memory dedicated to the rehearsal buffer has been chosen in order to be consistent with several other methods, as in GDumb, allowing us to compare GDumb's performance on optimized memory schemes against other methods. As we can see, all memory optimizations still provide huge advantages as the memory buffer varies, suggesting again that instance quantity plays a fundamental role in rehearsal systems even with extreme encoding settings. Finally, we note that the deep models used for classification have a large number of degrees of freedom and require a large number of instances to be properly trained to capture the complexity of the task at hand. Simpler, lower-dimensionality instances allow both for more instances and for simpler classifiers with fewer parameters, without losing a lot of informational content.
Figure 4.3: At top-left, the accuracy analysis of the MNIST dataset. At top-right we have the analysis of CIFAR10, and at bottom we have ImageNet100. The state-of-the-art (SOTA) method is plain GDumb with an MLP as incremental learner in the MNIST experiment and ResNet18 in the others. The number of instances in memory (i.e., the x axis) is in log scale. We report the results of 5 runs. (Panel titles: "MNIST, Fixed 382 KiB Memory", "CIFAR10, Fixed 600 KiB Memory", "ImageNet100, Fixed 12000 KiB Memory"; curves: RP+MLP (ELM), Resize+MLP/ResNet, VAE+MLP, CutR+ResNet, SOTA.)
Figure 4.4: We show the total amount of KiB used by the whole CL system. We measure the consumption as we saturate the rehearsal memory plus the storage of model parameters. The x axis is in log scale.
Resource Consumption
With the second experiment, we wanted to analyze the performance versus the total memory requirement of each approach. Here, we increased the number of instances in the memory buffer and added to the total consumption the working memory used by the classifier to store (and train) the parameters. We considered three different scenarios: first we used the plain GDumb CL system without dimensionality reduction (representing GDumb), then we used ELM (with a fixed embedding size of v′_i = 128), and lastly the resizing scheme (images resized to x′_i = 8 × 8).
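Charging the learner's parameters to the budget can be made concrete. As a hedged example (float32 parameter storage is our assumption; the thesis only fixes the architecture), the 2-layer MLP with 400 hidden nodes used on MNIST costs:

```python
def mlp_param_bytes(in_dim=784, hidden=400, out_dim=10, bytes_per_param=4):
    """Storage for the 2-layer MLP learner: weights and biases in float32."""
    n_params = in_dim * hidden + hidden + hidden * out_dim + out_dim
    return n_params * bytes_per_param

def total_kib(buffer_kib, param_bytes):
    """Total footprint charged to the CL system: rehearsal buffer + model."""
    return buffer_kib + param_bytes / 1024
```

Under these assumptions the MLP alone occupies roughly 1.2 MiB, i.e. several times the 382 KiB MNIST rehearsal buffer, which is why the parameter storage cannot be ignored when comparing total consumption.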
We selected the best parameters resulting from the previous experiment. We then assessed the performance and resource usage on a new dataset, namely Core50 [Lomonaco and Maltoni, 2017]. The reason behind the use of Core50 to validate our findings is twofold: first, we test again whether the quantity of extremely encoded data plays a central role in our rehearsal scheme. Secondly, we measure the performance and the resource usage of a CL system on a more complex set of tasks.

Chapter 4 51

[Figure 4.4 panels: accuracy versus memory budget in KiB (log scale) for MNIST, CIFAR10, CIFAR100, ImageNet100, and Core50; curves: ELM, GDumb, Resize.]

Dissecting continual learning: a structural and data analysis

We divided the dataset into 10 tasks of 5 classes each. In Figure 4.4, we report the results of this experiment. We can see that extreme levels of resizing still provide optimal results on all the datasets considered. One striking finding is that on Core50 with extreme resizing, even if the size was not optimized for the dataset, the final accuracy is increased by +67% with respect to GDumb. Second, we note that ELMs constitute a viable solution in low-resource scenarios. Indeed, we can surpass the performance of GDumb in low-memory scenarios where even just the classifier used in other approaches could not fit in the allocated memory, much less the rehearsal buffer.
This is clearly observed from the Core50 results. We can appreciate that randomly projecting image data and learning in a low-resource scenario provides a boost of +34% in the final accuracy. Finally, it is worth noting that there is a striking dissonance in the literature of rehearsal-based methods: the narrative around buffer-memory sizes revolves around decisions among sizes of the order of 300 KiB to 600 KiB, while the same systems adopt complex classifiers using several megabytes of memory just for the learned parameters, and of the order of gigabytes of working memory for learning. In a real constrained-memory scenario, a simpler classifier with more instances offers a clear advantage.

Conclusion

In this study, we analyzed the quantity/quality trade-off in rehearsal-based Continual Learning systems, adopting several dimensionality reduction schemes to increase the number of instances in memory at the cost of a possible loss of information. In particular, we used deep encoders, random projections, and a simple resizing scheme. What we found is that even simple but extremely compressed encodings of instance data provide a notable boost in performance with respect to the state of the art, suggesting that, in order to cope with catastrophic forgetting, the optimization of the memory buffer can play a central role. Notably, the performance boost of extreme instance compression suggests that the quality/quantity trade-off is severely biased toward data quantity over data quality. We suspect that some fault might lie in the overly simplistic datasets adopted by the community, but mostly the deep models used for classification are well known to be data-hungry, and the instances stored are not sufficient to properly train them, while they can suffice for simpler classifiers with fewer parameters working on simplified instances. It is worth noting that there is a striking dissonance in the literature of rehearsal-based methods. The narrative on buffer-memory sizes revolves around decisions among sizes of the order of 300 KiB to 600 KiB, while the same systems adopt complex classifiers using several megabytes of memory just for the learned parameters, and of the order of gigabytes of working memory for training.
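To make that dissonance concrete, one can compare a 600 KiB buffer of raw CIFAR10 images against the parameters of a typical rehearsal classifier. A back-of-the-envelope sketch; the ~11.7M parameter count (a ResNet-18-class model at fp32) is our own assumption, not a figure from the text:

```python
def footprint_kib(n_slots, bytes_per_slot, n_params, bytes_per_param=4):
    """Split the memory of a rehearsal system into buffer and parameter KiB."""
    return n_slots * bytes_per_slot / 1024, n_params * bytes_per_param / 1024

# 200 raw 32x32x3 CIFAR10 images vs. an ~11.7M-parameter classifier at fp32
buf_kib, par_kib = footprint_kib(200, 32 * 32 * 3, 11_700_000)
# buffer: 600.0 KiB; parameters: ~45,703 KiB, dwarfing the buffer by over 70x
```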
In a real constrained-memory scenario, a simpler classifier with more instances offers a clear advantage. In fact, in a real low-resources scenario, deep convolutional systems using several megabytes of memory for the model parameters and gigabytes of working memory for learning are not a viable solution. In this case, a variation of Extreme Learning Machines offers a simple and effective solution.

Other Experiments

Fixed Data Instances

With this experiment, we aim to better show that instance quantity is preferable over instance quality. We fixed the number of data slots in the memory buffer, and we analyzed the performance as we alter the encoding size. In particular, we tested two datasets, namely CIFAR10 and Core50. In CIFAR10 we fixed the buffer to 1000 data slots, while in the latter benchmark we fixed it to 8000 slots. What we can see from Figure 4.5 is that the improvement in performance is not given by the encoding's smoothing property, and, again, we confirm that rehearsal systems are skewed towards data quantity.

Figure 4.5: Performance as we vary the parameters for each scheme on CIFAR10 and Core50. In the former benchmark, the memory buffer is of 1000 fixed instances, while in the latter it is of 8000.

ELM Width Analysis

As we specified in the work, we used a variation of an Extreme Learning Machine. In particular, the architecture is composed of a random projection module and a learning module. The first is implemented through an orthogonal random matrix, while the second is a two-layer MLP.
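Read literally, that two-module architecture can be sketched as follows. This is a forward pass only; the class and variable names are ours, and obtaining the frozen orthogonal matrix via a reduced QR factorization is one possible choice, not necessarily the thesis implementation:

```python
import numpy as np

class ELMClassifier:
    """Frozen orthogonal random projection followed by a trainable two-layer MLP.
    Sizes follow the text (128-d code, 400 hidden units); training loop omitted."""

    def __init__(self, in_dim, code_dim=128, hidden=400, n_classes=10, seed=0):
        rng = np.random.default_rng(seed)
        q, _ = np.linalg.qr(rng.standard_normal((in_dim, code_dim)))
        self.proj = q                                              # fixed, never updated
        self.w1 = rng.standard_normal((code_dim, hidden)) * 0.01   # trainable
        self.w2 = rng.standard_normal((hidden, n_classes)) * 0.01  # trainable

    def forward(self, x):                  # x: (batch, in_dim)
        z = x @ self.proj                  # random encoding
        h = np.maximum(z @ self.w1, 0.0)   # ReLU hidden layer
        return h @ self.w2                 # class logits
```

Only `w1` and `w2` would be updated during rehearsal training; the projection stays fixed, so the stored 128-d codes remain valid across tasks.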
Throughout the study, we used 400 hidden units in the last layer before the output. We chose to do so to be consistent with the GDumb experimental settings. With this experiment, we analyze the accuracy metric as we change the number of hidden units. We fixed the encoded size of the data to be v′i = 128. As memory buffer, we used a different number of data slots for different datasets. That is, for MNIST and CIFAR10 we adopted 2400 slots (600 KiB), for ImageNet100 we used 48000 instances, i.e. 12 MiB, while for Core50 we used 8000 slots (2 MiB). In Figure 4.6 we can see that 100 hidden units are sufficient to achieve the maximum performance. This, again, shows that the deeper classifiers which are common in the CL rehearsal literature might need more data to be trained properly.

[Figure 4.5 panels: accuracy versus encoding size for CIFAR10 and Core50; ELM sizes 16 to 512, Resize sizes 4×4 to 24×24.]

Figure 4.6: Analysis of final accuracy as we alter the number of hidden units in ELM.
Experiments with other Rehearsal Systems

Throughout our study, we used GDumb to carry out our analysis. Although we extensively motivated this choice, we also tested two different rehearsal systems. In particular, we studied the performance of ER [Rolnick et al., 2019] and ER-MIR [Aljundi et al., 2019a] as we adapt them to work in a low-resource scenario. We simply substitute the original learner with our ELM proposal. In Table 4.4 we report the performance on CIFAR10 with 600 KiB of buffer memory and v′i = 128 encoding. As validation metrics, we used the final accuracy and the average forgetting [Chaudhry et al., 2018] (lower is better).
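The forgetting metric compares each task's best past accuracy with its accuracy after training on the final task. A sketch of one common formulation of the metric (our own rendering, not code from the thesis):

```python
def average_forgetting(acc):
    """acc[i][j]: accuracy on task j after training on task i.
    Forgetting of task j: best accuracy reached on j before the last
    task minus the accuracy on j after the last task, averaged over
    all but the final task (Chaudhry et al., 2018; lower is better)."""
    T = len(acc)
    drops = [max(acc[i][j] for i in range(T - 1)) - acc[T - 1][j]
             for j in range(T - 1)]
    return sum(drops) / len(drops)

acc = [[0.90, 0.00, 0.00],
       [0.70, 0.80, 0.00],
       [0.60, 0.70, 0.90]]
# task 0 dropped 0.90 -> 0.60, task 1 dropped 0.80 -> 0.70,
# so the average forgetting is 0.2
```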
In order to train the systems, we used the official implementations found at https://github.com/optimass/Maximally Interfered Retrieval without any alteration of the training hyperparameters. As we can see, the results suggest again that ELMs constitute a valid solution for low-resource CL systems and that rehearsal solutions are biased toward data quantity over data quality.

[Figure 4.6 panels: ELM width analysis; final accuracy versus number of hidden nodes (0 to 350) for MNIST, CIFAR10, ImageNet100, and Core50.]

CIFAR10, Fixed Memory 600 KiB
Method                           Accuracy (A)   Forgetting (F)   ELM (A)       ELM (F)
ER [Rolnick et al., 2019]        27.5 ± 1.20    48.0 ± 0.40      42.0 ± 0.10   41.2 ± 0.16
ER-MIR [Aljundi et al., 2019a]   29.8 ± 1.10    44.6 ± 0.48      45.6 ± 0.10   31.6 ± 0.01

Table 4.4: Experiments on CIFAR10 with two different rehearsal systems in a low-resource scenario.

Other Specifications

Resource Consumption

In Table 4.5 we report some summary statistics. In particular, we report GDumb's performance improvements for two encoding schemes, i.e. Resize (8 × 8) and ELM (v′i = 128). We report only the accuracy according to the optimal parameters. We also add the compression factor C, the requirements to store the model's parameters Θ, and the memory buffer M. We also report the quantity of GPU memory used to train GDumb for each encoding scheme. We can see that there is a big gap in the training requirements and memory buffers.

                 MNIST    CIFAR10   CIFAR100   ImageNet100   Core50    Compression   Params + M   GPU Training
Resize (8 × 8)   (+6%)    (+21%)    (+20%)     (+20%)        (+67%)    253:1         60 MiB       2.2 GiB
ELM (128)        (+10%)   (+10%)    (+10%)     (+10%)        (+10%)    192:1         16 MiB       0.72 GiB

Table 4.5: Performance summary and memory compression.

Datasets Specification

For completeness, we report in Table 4.6 some specifications for the considered datasets. In particular, we provide the task subdivision for each dataset. As we can see, MNIST and CIFAR10 have been split into 5 tasks of 2 classes each. This splitting is also known in the literature as Split-CIFAR10 and Split-MNIST. For the CIFAR100 and ImageNet100 benchmarks we used 10 tasks of 10 classes each, while for Core50 we shuffled all scenarios and created 10 tasks of 5 classes each. The majority of the works fix the memory slots to define the memory buffer. In our case, we used memory requirements expressed in KiB or MiB so that we could alter each slot's consumption. We provide a correspondence between memory requirements and memory slots in the case where we consider original image sizes; we do so to ease future comparisons against our work.
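That correspondence is simply the raw image size in bytes times the number of slots. A sketch that reproduces the Table 4.6 figures; the function name is ours, and one byte per pixel channel is assumed:

```python
def buffer_kib(n_slots, h, w, c, bytes_per_px=1):
    """Memory, in KiB, needed to store n_slots raw h x w x c images."""
    return n_slots * h * w * c * bytes_per_px / 1024

# MNIST:   500 slots of 28x28x1   -> ~382 KiB
# CIFAR10: 200 slots of 32x32x3   -> 600 KiB
# Core50:  312 slots of 128x128x3 -> ~14.6 MiB (reported as 15 MiB)
```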
Experimental Settings

Dataset       Image size   Memory Size       # Instances       Task Composition
MNIST         28x28x1      382 KiB           500               5 tasks, 2 classes
CIFAR10       32x32x3      600 KiB           200               5 tasks, 2 classes
                           1.5 MiB           500
                           3 MiB             1000
                           6 MiB             2000
CIFAR100      (as CIFAR10) (as CIFAR10)      (as CIFAR10)      10 tasks, 10 classes
ImageNet100   64x64x3      12 MiB            1000              10 tasks, 10 classes
                           24 MiB            2000
Core50        128x128x3    15 MiB            312               10 tasks, 5 classes

Table 4.6: Dataset and memory statistics; in the CIFAR100 row we omit the 2nd, 3rd, and 4th columns since they are equal to the CIFAR10 row.

4.2 Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization

While in the previous work we considered old data points as a pivotal instrument to investigate catastrophic forgetting, we now focus on the structural properties of the model considered. In particular, we ask ourselves how some parts of a network, when properly regularized, impact the overall performance in an incremental scenario.
We decided to investigate the continual learning of Vision Transformers (ViT) in the challenging exemplar-free scenario. We opted to study ViTs since there are several works tackling CNNs while virtually no one has focused on ViTs yet, although they are getting consistently better at vision tasks. This work takes an initial step towards a surgical investigation of the self-attention mechanism (SAM) for designing coherent continual learning methods in ViTs. We first carry out an evaluation of established continual learning regularization techniques. We then examine the effect of regularization when applied to two key enablers of SAM: (a) the contextualized embedding layers, for their ability to capture well-scaled representations with respect to the values, and (b) the prescaled attention maps, for carrying value-independent global contextual information. We depict the perks of each distillation strategy on two image recognition benchmarks (CIFAR100 and ImageNet-32): while (a) leads to a better overall accuracy, (b) helps enhance the rigidity by maintaining competitive performance.
Furthermore, we identify the limitation imposed by the symmetric nature of regularization losses. To alleviate this, we propose an asymmetric variant and apply it to the pooled output distillation (POD) loss adapted for ViTs. As we will see through the section, our experiments confirm that introducing asymmetry to POD boosts its plasticity while retaining stability across (a) and (b). Moreover, we acknowledge low forgetting measures for all the compared methods, indicating that ViTs might be naturally inclined continual learners.

Transformers have shown excellent results for a wide range of language tasks [Brown et al., 2020, Roy et al., 2021] over the course of the last couple of years. Influenced by their initial results, Dosovitskiy et al. [Dosovitskiy et al., 2021] proposed Vision Transformers (ViTs) as the first firm yet competitive application of transformers within the computer vision community [2]. ViTs' applications have since spanned a range of vision tasks, including, but not limited to, image classification [Touvron et al., 2021], object recognition [Liu et al., 2021], and image segmentation [Wang et al., 2021]. The single most essential element of their architecture remains the self-attention mechanism (SAM), which allows the learning of long-range interdependence between the elements of a sequence (or patches of an image). Another feature vital to their performance is the way they are pretrained, often in an unsupervised or self-supervised manner over a large amount of data. This is then followed by the finetuning stage, where they are adapted to a downstream task [Devlin et al., 2019].

For ViTs to be able to operate in real-world scenarios, they must exploit streaming data, i.e., sequential availability of training data for each task [3]. Storage limitations or privacy constraints further imply restrictions on the storage of data from previous tasks. Task-incremental continual learning (CL) seeks to find solutions to such constraints by alleviating the event of catastrophic forgetting, a phenomenon where the network has a dramatic drop in performance on data from previous tasks. Several solutions have been proposed to address forgetting, including regularization [Kirkpatrick et al., 2017, Aljundi et al., 2018, Zenke et al., 2017, Ritter et al., 2018], data replay [Chaudhry et al., 2019b, Aljundi et al., 2019a, Lopez-Paz and Ranzato, 2017] and parameter isolation [Mallya and Lazebnik, 2018, Rusu et al., 2016, Aljundi et al., 2017, Lee et al., 2020]. Most present-day works on CL study recurrent [Sodhani et al., 2020, Chiaro et al., 2020] and convolutional neural networks (CNNs) [Kirkpatrick et al., 2017].
However, little has been done to investigate different CL settings in the domain of ViTs. We, therefore, mark the first step for the domain by considering the further restrictive setting of exemplar-free CL with zero overhead of storing any data from previous tasks. We consider this restriction for its real-world aptness to scenarios involving privacy regulations and/or data security considerations. Given that regularization-based methods form one of the main techniques for exemplar-free CL, we consider an in-depth analysis of these for ViTs. Regularization-based techniques are mainly organized along two branches: weight regularization methods (such as EWC [Kirkpatrick et al., 2017], SI [Zenke et al., 2017], MAS [Aljundi et al., 2018]) and functional regularization methods (such as LwF [Li and Hoiem, 2017], PODNET [Douillard et al., 2020]).

[2] By firmness, we refer to the non-reliance on convolutional operations.
[3] A task may encompass training data of one or more classes.

Figure 4.7: Self-attention mechanism comprising a vision transformer encoder. We compare attention-based approaches, computed prior to the softmax operation, and functional-based approaches, computed on the contextualized embeddings.

As discussed above, the architectural novelty of transformers lies in the SAM building a representation of a sequence by exhaustively learning relations among query-key pairs of its elements [Vaswani et al., 2017]. We show that for ViTs (and subsequently, all other architectures leveraging SAM), this property allows for a third form of regularization, which we coin Attention Regularization (see Figure 4.7). We ground our idea in the hypothesis that when learning new tasks, the attention of the new model should still remain in the neighborhood of the attention of the previous model. As another contribution, we question the temporal symmetry currently applied to regularization losses, referring to the fact that they penalize the forgetting of previous knowledge and the acquiring of new knowledge equally (see Figure 4.8). With the aim of countering forgetting while mitigating the loss of plasticity, we then propose an asymmetric regularization loss that penalizes the loss of previous knowledge but not the acquiring of new knowledge.
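This asymmetry can be sketched numerically. Below, a symmetric penalty punishes any deviation of the new attention from the old, while an asymmetric variant (one plausible formulation, using a clamped difference; the exact form applied to the POD loss in this work may differ) punishes only where the new attention drops below the old, i.e., only forgetting:

```python
# Toy attention vectors standing in for one row of an attention map.
def symmetric_penalty(a_old, a_new):
    # penalizes both lost and newly gained attention
    return sum((o - n) ** 2 for o, n in zip(a_old, a_new)) / len(a_old)

def asymmetric_penalty(a_old, a_new):
    # penalizes only attention that falls below its previous value (forgetting)
    return sum(max(0.0, o - n) ** 2 for o, n in zip(a_old, a_new)) / len(a_old)

old = [0.8, 0.1, 0.1]
lost = [0.2, 0.1, 0.1]    # case (a) of Figure 4.8: previous attention is lost
gained = [0.8, 0.6, 0.1]  # case (b): only new attention is added

print(symmetric_penalty(old, lost), asymmetric_penalty(old, lost))      # both > 0
print(symmetric_penalty(old, gained), asymmetric_penalty(old, gained))  # > 0 vs 0.0
```

In case (b) the asymmetric penalty is exactly zero, which is the mechanism by which plasticity is preserved without touching the stability term of case (a).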
We index the major contributions of our work below:
- We are the first to investigate continual learning in vision transformers in the more challenging exemplar-free setting. We perform a full analysis of regularization techniques to counter catastrophic forgetting.
- Given the distinct role of self-attention in modeling short- and long-range dependencies [Yang et al., 2021], we propose distilling the attention-level matrices of ViTs. Our findings show that such distillation offers accuracy scores on par with those of their more common functional counterpart while offering superior plasticity and forgetting. Motivated by the work of Douillard et al. [Douillard et al., 2020], we pool spatiality-induced attention distillation across our network layers.
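The pooling idea borrowed from Douillard et al. can be sketched as follows (a simplified illustration under our own assumptions, not the thesis' exact loss): instead of matching two maps element-wise, POD-style distillation matches their width-pooled and height-pooled marginals, which tolerates small spatial rearrangements while still anchoring the overall activation pattern.

```python
def pod_loss(f_old, f_new):
    """POD-style pooled distillation between two equally sized 2D maps."""
    h, w = len(f_old), len(f_old[0])
    width_old = [sum(row) for row in f_old]         # pool over width: one value per row
    width_new = [sum(row) for row in f_new]
    height_old = [sum(col) for col in zip(*f_old)]  # pool over height: one value per column
    height_new = [sum(col) for col in zip(*f_new)]
    l2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return (l2(width_old, width_new) + l2(height_old, height_new)) / (h + w)

f_prev = [[1.0, 0.0], [0.0, 1.0]]
f_curr = [[0.0, 1.0], [1.0, 0.0]]  # permuted within rows: pooled marginals unchanged
print(pod_loss(f_prev, f_curr))    # 0.0, although element-wise MSE would be 1.0
```

The contrast with element-wise matching (which would heavily penalize `f_curr` here) is the whole point of pooling before distilling.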
- We propose an asymmetric variant of functional and attention regularization which prevents forgetting while maintaining higher plasticity. Through our extensive experiments, we show that the proposed asymmetric loss surpasses its symmetric variant across a range of task-incremental settings.

Related Works

Continual learning has been gaining contributions from the deep learning research community during the last few years. In the following, we list the most prominent families:

Weight-based: these methods operate in the parameter space of the model through gradient updates. Elastic Weight Consolidation (EWC) [Kirkpatrick et al., 2017] and Synaptic Intelligence (SI) [Zenke et al., 2017] are two widely used methods in this family, with the former being probably the most well-known. EWC uses Fisher information to identify the parameters important to individual tasks and penalizes their updates to preserve knowledge from older tasks. SI makes the neurons accumulate and exploit old task-specific knowledge to counter forgetting.

Functional-based: these methods rely upon trading plasticity for stability by training either the current (new) model on older data or vice versa. Learning Without Forgetting (LwF) [Li and Hoiem, 2017] remains among the most widely known approaches in this family. It employs Knowledge Distillation [Hinton et al., 2015] upon the logits of the network.

Parameter Isolation-based: also known as architectural approaches, these methods tackle catastrophic forgetting through a dynamic expansion of the network's parameters as the number of tasks grows.
Among the first widely known methods in this family remain Progressive Neural Networks (PNN) [Rusu et al., 2016], followed by Dynamically Expandable Networks (DEN) [Yoon et al., 2018] and Reinforced Continual Learning (RCL) [Xu and Zhu, 2018].

The majority of the aforementioned works target CL in CNNs, mainly due to their inductive bias allowing them to solve almost all problems that involve visual data. This can also be seen in several reviews [Mai et al., 2022, Biesialska et al., 2020, Delange et al., 2021, Parisi et al., 2019, Belouadah et al., 2021] reporting few approaches that consider architectures besides CNNs, despite the attempts to investigate CL in RNNs [Sodhani et al., 2020, Chiaro et al., 2020]. Only recently have some works analyzed catastrophic forgetting in transformers. Among the earliest to do so remains that of Li et al. [Li et al., 2022], proposing the continual learning with transformers (COLT) framework for object detection in autonomous driving scenarios. Using the Swin Transformer [Liu et al., 2021] as the backbone for a CascadeRCNN detector, the authors show that the extracted features generalize better to unseen domains, hence achieving lower forgetting rates compared to ResNet50 and ResNet101 [He et al., 2016] backbones. In the case of ViTs, Yu et al. [Yu et al., 2021] show that their vanilla counterparts are more prone to forgetting when trained from scratch. Alongside heavy augmentations, they employ a set of techniques to mitigate forgetting: (a) knowledge distillation, (b) balanced re-training of the head on exemplars (inspired by LUCIR [Hou et al., 2019]), and (c) prepending a convolutional stem to improve low-level feature extraction of ViTs. In their work studying the impact of model architectures in CL, Mirzadeh et al. [Mirzadeh et al., 2022] also experiment with ViTs in brief (with the rest of the work focusing mainly on CNNs).
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' While they vary the number of attention heads of ViTs to show that this has little effect on the accuracy and forgetting scores, they further conclude that ViTs do offer more robustness to forgetting arising from distributional shifts when compared with their CNN-based counterparts with an equivalent num- ber of parameters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' The conclusion remains in line with previous works [Paul and Chen, 2021].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Finally, [Douillard et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=', 2021] attempt to overcome forgetting in ViTs through a parameter-isolation approach which dynamically expands the tokens pro- cessed by the last layer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' For each task, they learn a new task-specific token per head.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' They then couple such approach through the usage of exemplars and knowledge dis- 62 Chapter 4 Works tillation on backbone features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' It is worth noting that these works rely either on pretrained feature extractors [Li et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=', 2022] or rehearsal [Yu et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=', 2021, Douillard et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=', 2021] to defy forgetting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Thus the challenging scenario of exemplar-free CL in ViTs remains unmarked.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Methodology We start by shortly describing the two main existing regularization techniques for continual learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' We then propose attention regularization as an alternative ap- proach tailored for ViTs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Lastly, we put forward an adaptation for functional and attention regularization which is designed to elevate plasticity while retaining stability levels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Functional and Weight Regularization Functional Regularization: We include LwF [Li and Hoiem, 2017] in this compo- nent since it constitutes one of the most prominent, and perhaps the most widely used regularization method acting on data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' The appealing property of LwF lies in the fact it is exemplar-free, i.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=', it uses only the data of the current task and maintains only the model at task t − 1 to exploit Knowledge Distillation [Hinton et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=', 2015].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Formally, LwF can be defined as: LLwF(θ) = λoLKD � Yo, ˆYo � + LCE � Yn, ˆYn � + R(θ) (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='3) where LKD is the knowledge distillation loss incorporated to impose stability on the outputs, ˆYo the predictions on the current task data from the old model and ˆYo the ground truth of such data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' λo remains the temperature annealing factor for softmax logits while LCE is the standard cross entropy loss calculated upon the new task examples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Weight Regularization: These methods encourage the network to adapt to the current task data mainly by using those parameters of the network that are not considered important for previous tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' As representative method we select EWC [Kirkpatrick et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=', 2017].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' EWC exploits second-order information to estimate the importance of parameters for the current task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' The importance is approximated by Chapter 4 63 Dissecting continual learning: a structural and data analysis Figure 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='8: Visual illustration of the asymmetric loss.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' The image considers two generated attention maps (a) and (b) while training task 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In case (a), when previous knowledge is lost, both the symmetric and assymetric regularization work correctly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' However, in case (b), when new knowledge is acquired, this is penalized by the symmetric loss but not by the assymetric loss.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' The idea is that the assymetric loss leads to higher plasticity without hurting stability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' the diagonal of the Fisher Information Matrix F: LEWC(θ) = LX(θ) + � j λ 2Fj � θj − θ∗ Y,j �2 (4.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='4) where LX(θ) is the loss for task X, λ the regularization strength, and θ∗ Y,j the optimal value of jth parameter after having learned task Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Attention Regularization Self-Attention Mechanism: The self-attention mechanism (SAM) [Vaswani et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=', 2017] forms the core of Transformer-based models and can be defined as: z = softmax �QKT √de � V (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='5) where Q, K, and V are respectively the projections of the Query, Key, and Values of the Rde input embeddings while z constitutes the new contextualized embed- dings.' 
Our novel attention-based regularization intervenes prior to the computation of the softmax operation of the standard self-attention mechanism, as illustrated in Figure 4.7. In particular, given a ViT model at incremental step t and an SAM head k of layer l, we define the prescaled attention matrix A^t_kl prior to the softmax operation as:

A^t_kl = QKᵀ / √d_e   (4.6)

We denote the attention matrix corresponding to the model at time step (t − 1), computed in a similar way, as A^{t−1}_kl. We employ this predecessor in the calculation of knowledge distillation in what follows.

Pooled Attention Distillation: Functional approaches typically leverage a network's submodules to apply knowledge distillation [Hinton et al., 2015]. When the regularization takes place in intermediate layers, the model can experience excessive stability, therefore losing plasticity [Douillard et al., 2020, Liu et al., 2020a, Yu et al., 2020b]. Amongst these methods, PODNet [Douillard et al., 2020] clearly identifies the problem of excessive stability. We devise a regularization approach which, instead of regularizing functional submodules, targets attention maps, the core mechanism of SAMs. More formally, given the attention maps at steps t and (t − 1), we define L_PAD(A^{t−1}_kl, A^t_kl) [Douillard et al., 2020] to be:

L_PAD(A^{t−1}_kl, A^t_kl) = L_PAD-width(A^{t−1}_kl, A^t_kl) + L_PAD-height(A^{t−1}_kl, A^t_kl)   (4.7)

where

L_PAD-width(A^{t−1}_kl, A^t_kl) = Σ_{h=1}^{H} D_W(A^{t−1}_kl, A^t_kl),
L_PAD-height(A^{t−1}_kl, A^t_kl) = Σ_{w=1}^{W} D_H(A^{t−1}_kl, A^t_kl),   (4.8)

D_X(A^{t−1}_kl, A^t_kl) = ‖ Σ_{x=1}^{X} A^{t−1}_{kl,w,h} − Σ_{x=1}^{X} A^t_{kl,w,h} ‖²   (4.9)

where W and H indicate the width and height dimensions of the attention maps, and D_X(a, b) is the sum of the distance measure between maps a and b along the X-th dimension. As shown in Equation 4.9, the standard L_PAD uses the difference operator as the choice for D.

Figure 4.9: Mean and standard deviation of task-aware accuracy and forgetting scores for the CIFAR100/10 and ImageNet/6 settings (over 3 random runs). Asymmetric approaches depict higher accuracy with respect to their symmetric counterparts. The low forgetting scores across all methods suggest an intrinsic forgetting resilience in vision transformer architectures.
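A minimal sketch of the pooled distillation of Eqs. 4.7-4.9, assuming that each attention map is pooled by summation along one spatial axis and the two pooled profiles are compared with a squared L2 distance (function names are illustrative, not from the thesis code):

```python
import numpy as np

def pad_sym(A_prev, A_cur):
    # Eq. 4.7: L_PAD = L_PAD-width + L_PAD-height.
    # Each term pools the (H x W) attention map along one axis
    # (Eqs. 4.8-4.9) and compares the old and new pooled profiles.
    width_prev, width_cur = A_prev.sum(axis=0), A_cur.sum(axis=0)
    height_prev, height_cur = A_prev.sum(axis=1), A_cur.sum(axis=1)
    return (np.sum((width_prev - width_cur) ** 2)
            + np.sum((height_prev - height_cur) ** 2))

A_old = np.eye(4)
loss_same = pad_sym(A_old, A_old)          # identical maps: no penalty
loss_diff = pad_sym(A_old, np.ones((4, 4)))  # any pooled difference is penalized
```

Because the distance is symmetric, `loss_diff` is positive even though the new map only *gains* attention, which is the limitation addressed next.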
We now point out the limitation of such a symmetric D and introduce in the next section the notion of asymmetry into our distance measure.

As previously mentioned, Douillard et al. [Douillard et al., 2020] propose the pooled outputs distillation (PODNet) loss, which leverages the symmetric Euclidean distance between the L2-normalized outputs of the convolutional layers of the models at t and (t − 1) after pooling them along specific dimension(s). They achieve the best results upon combining the pooling along the spatial width and height axes, which they term the POD-spatial loss. Given the generic correspondence among the various pooling variants in their paper, our work is particularly influenced by POD-spatial, as we pool attention maps of ViTs along two dimensions. In fact, throughout the experiments, we analyze this formulation when applied to the contextualized embeddings z resulting from a SAM operation. We would like to highlight that PAD differs from PODNet in two important factors: first, it is applied to the attention and not directly to the layer output; secondly, its marginalization is not over the spatial dimensions, since z does not encode the spatial dimension.

Asymmetric Regularization

The proposed attention regularization prevents forgetting of previous tasks by ensuring that the old attention maps are retained while the model learns to attend to new regions over tasks. However, the symmetric nature of D_X (with respect to the two attention maps) means that any difference between the older and the newly learned attention maps leads to increased loss values (see Equation 4.8). We agree that penalizing a loss in attention with respect to previous knowledge is crucial in addressing forgetting. However, also penalizing a gain in attention for newly learned knowledge is undesirable and may actually hurt the performance on subsequently learned tasks. In other words, punishing additional attention can be counterproductive. As a result, we propose using an asymmetric variant of D_X that can better retain previous knowledge:

D_X(A^{t−1}_kl, A^t_kl) = ‖ F_asym( Σ_{x=1}^{X} A^{t−1}_{kl,w,h} − Σ_{x=1}^{X} A^t_{kl,w,h} ) ‖²   (4.10)

where F_asym is an asymmetric function. We experimented with ReLU [Nair and Hinton, 2010], ELU [Clevert et al., 2016], and Leaky ReLU [Maas et al., 2013] as choices for F_asym and found that, in general, ReLU performed the best across our settings. By introducing the ReLU function, new attention generated by the current model at task t is not penalized, while attention present at task t − 1 but missing in the current model t is penalized. An illustration of the functioning of the new loss is provided in Figure 4.8. Based on our choice for D_X from Equations 4.9 and 4.10, we classify our final PAD loss as symmetric (L_PAD-sym) or asymmetric (L_PAD-asym), respectively. Each of these losses is computed separately for each SAM head and model layer.
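A sketch of the asymmetric distance of Eq. 4.10 with F_asym = ReLU, again assuming pooling by summation along one spatial axis (helper names are ours):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pad_asym(A_prev, A_cur):
    # Eq. 4.10: only attention the old model had and the new model lost
    # survives the ReLU; newly gained attention (negative difference)
    # is clipped to zero and thus not penalized.
    dw = relu(A_prev.sum(axis=0) - A_cur.sum(axis=0))
    dh = relu(A_prev.sum(axis=1) - A_cur.sum(axis=1))
    return np.sum(dw ** 2) + np.sum(dh ** 2)

A_old = np.ones((4, 4))
gain_loss = pad_asym(A_old, A_old + 1.0)  # gained attention: no penalty
drop_loss = pad_asym(A_old, A_old - 0.5)  # lost attention: penalized
```

This reproduces the asymmetry of Figure 4.8: `gain_loss` is zero while `drop_loss` is positive.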
The final asymmetric variant can thus be stated as:

L_PAD-asym = (1/L) Σ_{l=1}^{L} (1/K) Σ_{k=1}^{K} L_PAD(A^{t−1}_kl, A^t_kl)   (4.11)

where K is the total number of heads per layer and L is the total number of layers of the model. Note that Equation 4.11 can be adapted for L_PAD-sym without loss of generality.

Overall loss: We augment the asymmetric and symmetric PAD losses from Equation 4.11 with the knowledge distillation loss L_LwF [Li and Hoiem, 2017] and the standard cross-entropy loss L_CE. The overall loss term takes the form:

L = µ L_PAD-(a)sym + λ L_LwF + L_CE   (4.12)

where µ, λ ∈ [0, 1] are two hyperparameters regulating the respective contributions. Note that when µ = 0, L degenerates to baseline finetuning for λ = 0 and to LwF for λ = 1.
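The averaging over layers and heads (Eq. 4.11) and the final combination (Eq. 4.12) amount to the following schematic, here with placeholder per-head loss values rather than real model outputs:

```python
def overall_loss(pad_per_layer_head, lwf_loss, ce_loss, mu=0.5, lam=1.0):
    # Eq. 4.11: average the per-(layer, head) PAD terms over L layers
    # and K heads per layer.
    L = len(pad_per_layer_head)
    K = len(pad_per_layer_head[0])
    pad = sum(sum(heads) for heads in pad_per_layer_head) / (L * K)
    # Eq. 4.12: L = mu * L_PAD-(a)sym + lam * L_LwF + L_CE.
    return mu * pad + lam * lwf_loss + ce_loss

pads = [[1.0, 3.0], [2.0, 2.0]]  # L = 2 layers, K = 2 heads each
full = overall_loss(pads, lwf_loss=0.5, ce_loss=0.25, mu=1.0, lam=1.0)
finetune = overall_loss(pads, lwf_loss=0.5, ce_loss=0.25, mu=0.0, lam=0.0)
```

With µ = λ = 0 the loss reduces to plain cross-entropy finetuning, exactly as stated in the text.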
Figure 4.10: Mean and standard deviation of task-aware plasticity-stability scores for the CIFAR100/10 and ImageNet/6 settings (over 3 random runs). Asymmetric approaches are more plastic than their symmetric counterparts while retaining competitive stability.

Stability-Plasticity Curves: Several measures have been proposed in the CL literature to assess the performance of an incremental learner. Besides the standard incremental accuracy, Lopez-Paz et al. [Lopez-Paz and Ranzato, 2017] introduce the notions of Backward Transfer (BWT) and Forward Transfer (FWT). BWT measures the ability of a system to propagate knowledge to past tasks, while FWT assesses the ability to generalize to future tasks. The CL community, however, still lacks consensus on a specific definition of the stability-plasticity dilemma. An elemental formulation for such a quantification is thus desirable, allowing us to better grasp the balancing capabilities of an incremental learner at acquiring new knowledge without discarding previous concepts. To this end, we introduce stability-plasticity curves computed using task accuracy matrices. A task accuracy matrix M for an incremental learning setting composed of T tasks is defined to be a [0, 1]^{T×T} matrix whose entries are the accuracies computed at each incremental step.⁴ For instance, M_{i,j} constitutes the test accuracy on task j when the system is learning task i. Subsequently, the diagonal entries M_{i,i} give the accuracies on the respective current tasks, while the entries below the diagonal, i.e., j < i, give the performance of the model on past tasks. A visual depiction can be seen in Figure 4.11. We define stability to be the performance on the first experienced task at any given time and plasticity to be the ability of the model to adapt to the current task. Namely, these constitute the first column M_{:,0} and the diagonal diag(M) of the matrix. We employ the curves derived from these definitions to better dissect the stability-plasticity dilemma of the methods analyzed in our work.

⁴This calls for M to be lower trapezoidal.
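Extracting the two curves from a task accuracy matrix M is direct; a small sketch with a hypothetical 3-task lower-trapezoidal matrix:

```python
import numpy as np

def stability_plasticity(M):
    # M[i, j] is the test accuracy on task j while the model learns task i
    # (lower trapezoidal, entries in [0, 1]).
    M = np.asarray(M, dtype=float)
    stability = M[:, 0]       # first column: task-0 performance over time
    plasticity = np.diag(M)   # diagonal: performance on the current task
    return stability, plasticity

# Hypothetical accuracy matrix for a 3-task run.
M = [[0.9, 0.0, 0.0],
     [0.7, 0.8, 0.0],
     [0.6, 0.7, 0.8]]
stab, plas = stability_plasticity(M)
```

Plotting `stab` and `plas` against the task index yields exactly the stability-plasticity curves of Figures 4.10 and 4.11.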
Figure 4.11: Illustration of a task accuracy matrix: we fix stability to be the performance on the first task across time steps, while we define plasticity to be the performance at the current step.

Experiments

In this section, we compare regularization-based methods for exemplar-free continual learning. We evaluate the newly proposed attention regularization and compare it with the existing functional (LwF) and feature regularization methods. We then ablate the usefulness of the newly proposed asymmetric loss as well as the importance of pooling before applying the regularization.

Experimental Setup

Setting: For our experiments, we adopt the variation of ViTs introduced by Xiao et al.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' [Xiao et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=', 2021].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Here, the standard linear embedder of a ViT model is replaced by a smaller convolutional stem which helps build more resilient low-level features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Convolutional stems have previously been shown to improve performance and convergence speed in incremental learning settings [Yu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=', 2021].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' We there- fore define our architecture to be a lightweight variation of a ViT-Base by setting L = 12 layers, K = 12 heads per layer and a de = 192 embedding size.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' The choice of a small embedding size has been made to speed up the training procedure and unlock the ability to handle larger batch sizes (1024 for our work).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' We analyze our task-incremental setting on two widely used image recognition datasets - namely CIFAR100 and ImageNet-32 with 100, and 300 classes each.' 
Dissecting continual learning: a structural and data analysis

Both datasets host 32×32 images. On CIFAR100, we consider a split of 10 tasks (denoted as the CIFAR100/10 setting), where each incremental task is composed of a disjoint set of 10 classes. On ImageNet-32, we split 6 tasks with a disjoint set of 50 classes each (denoted as ImageNet/6).5 Our total training budget remains 200 epochs (per task) for CIFAR100 and 100 for ImageNet-32, with an initial learning rate of 0.01 and a patience of 20 epochs. We report our scores averaged over 3 random runs. We apply a constant padding of size 4 across all our datasets. The train images are augmented using random crops of size 32 × 32 and random horizontal flips with a flipping probability of 50%.
For test images, we only apply center crops of size 32 × 32. We compare the attentional and functional, symmetric and asymmetric versions of LPAD-(a)sym. We use LwF [Li and Hoiem, 2017] and EWC [Kirkpatrick et al., 2017] as our basic functional and weight regularization approaches. For all our experiments relying on PAD losses, we performed a hyperparameter search (using Equation 4.12) for µ and λ by varying each in the range [0.5, 1.0] and found µ = λ = 1.0 to perform reasonably well. We thus stick to these values unless otherwise specified.
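The train and test transforms described above (constant padding of 4, random 32 × 32 crops, horizontal flips with p = 0.5, and center crops at test time) can be sketched directly in NumPy; this is an illustrative re-implementation, not the authors' code:

```python
import numpy as np

def augment_train(img, rng, pad=4, crop=32):
    """Train transform: constant-pad by `pad`, random-crop to `crop`, flip with p = 0.5."""
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="constant")
    y = rng.integers(0, padded.shape[0] - crop + 1)
    x = rng.integers(0, padded.shape[1] - crop + 1)
    out = padded[y:y + crop, x:x + crop]
    if rng.random() < 0.5:
        out = out[:, ::-1]  # horizontal flip
    return out

def transform_test(img, crop=32):
    """Test transform: center crop only."""
    y = (img.shape[0] - crop) // 2
    x = (img.shape[1] - crop) // 2
    return img[y:y + crop, x:x + crop]
```

On 32 × 32 inputs the center crop is a no-op; the padded random crop is what injects translation jitter during training.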
For the sake of brevity, we indicate LPAD-asym with Asym att and LPAD-sym with Sym att. Note that these are both variations of Equation 4.12. The functional approaches are analogous to their attentional counterparts except for the fact that they rely on the regularization of the contextualized embeddings rather than the attention matrix (see Figure 4.7). The latter correspond to Asym func and Sym func accordingly.

Results

We report accuracy as well as forgetting [Chaudhry et al., 2018] scores in the task-aware (taw) setting.6 We further report taw plasticity-stability curves (based on Figure 4.11) to provide insights into how well the different models handle the trade-off.

Accuracy and Forgetting: As seen in Figure 4.9, all asymmetric approaches show better performances with respect to their symmetric counterparts on CIFAR100/10, with Asym att offering the best accuracy of 57.3% on the last task. The trend continues for ImageNet/6, with the exception of the asymmetric functional approach, whose accuracy of 27.55% falls behind its symmetric counterpart by 0.44%. In general, the asymmetric and symmetric losses lead to improved accuracy scores with respect to other methods. Moreover, we observe that all the methods depict good forgetting resilience, with their forgetting scores running around ≈0.01%, except for EWC. This suggests that vision transformers are better incremental learners but require more training and tuning efforts to achieve reasonable accuracies. This remark remains in accordance with prior studies [Mirzadeh et al., 2022, Paul and Chen, 2021].

CIFAR100/10 (taw)       Asym Func Spatial  Sym Func Spatial  Asym Func Intact  Sym Func Intact  LwF
Average Incr. Accuracy  56.18%             55.67%            54.43%            53.12%           55.11%
Last Task Accuracy      57.26%             56.92%            56.04%            54.59%           55.93%

Table 4.7: Comparison of intact (no pooling), spatial (pooling along width and height), and LwF.

5 Refer to Section 4.2 for experiments on additional settings.
6 The corresponding task-agnostic scores can be found in Figure 4.14, Section 4.2.
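To make the reported quantities concrete, here is one plausible reading of the metrics used above (stability and plasticity as defined in Figure 4.11, forgetting in the spirit of [Chaudhry et al., 2018], and average incremental accuracy in the spirit of [Rebuffi et al., 2017]) over a lower-triangular accuracy matrix acc[t][k], the accuracy on task k after training through task t. The exact formulas in the thesis may differ:

```python
def stability_curve(acc):
    """First-task accuracy across training steps (stability, per Figure 4.11)."""
    return [row[0] for row in acc]

def plasticity_curve(acc):
    """Current-task accuracy at each step (plasticity, per Figure 4.11)."""
    return [acc[t][t] for t in range(len(acc))]

def forgetting(acc):
    """Mean drop from each old task's best earlier accuracy to its final accuracy."""
    T = len(acc)
    drops = [max(acc[t][k] for t in range(T - 1)) - acc[T - 1][k]
             for k in range(T - 1)]
    return sum(drops) / len(drops)

def avg_incremental_accuracy(acc):
    """Mean over steps of the mean accuracy on all tasks seen so far."""
    T = len(acc)
    return sum(sum(acc[t][:t + 1]) / (t + 1) for t in range(T)) / T

acc = [[0.8, 0.0, 0.0],   # after task 1
       [0.6, 0.7, 0.0],   # after task 2
       [0.5, 0.6, 0.9]]   # after task 3
```

A stable-but-rigid model keeps the first column flat while the diagonal sags; a plastic-but-unstable one does the opposite, which is exactly the trade-off the curves visualize.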
In the particular case of EWC, we observe poor compatibility in terms of accuracy as well as forgetting, with the scores falling behind finetuning at times. We suspect that the method might be less suited for ViTs due to its reliance on exhaustive Fisher information estimation.

Plasticity-stability tradeoff: We compare the dilemma for various methods in Figure 4.10. With no distillation, finetuning is prone to the worst trading of plasticity for stability. Meanwhile, our asymmetric losses can be seen to be more plastic with respect to their symmetric counterparts while depicting comparable stability scores. This confirms our hypothesis regarding the nature of the asymmetry: it keeps the model from discarding older attention while favoring the integration of new attention at the same time. Although LwF, with a last task score of 47.74% on CIFAR100/10 and 32.0% on ImageNet/6, reports the best plasticity among our approaches, it clearly lags behind the pooling-based approaches at retaining stability. On the contrary, the (a)symmetric attention losses and the symmetric functional loss perform similarly, with a last task stability score of ≈ 0.23% on ImageNet/6 and ≈ 53% on CIFAR100/10. EWC shows good plasticity but virtually zero stability. This trend is in line with our previous comment on the limitation of EWC in Figure 4.9.

Ablation study

Towards the end goal of evaluating the effectiveness of PAD losses, we ablate the contribution of pooling on the CIFAR100/10 setting. In particular, we consider distilling the attention maps when these are: (a) pooled along both dimensions, i.e., (A)sym Func Spatial (see Equation 4.7), and (b) not pooled at all, i.e., (A)sym Func Intact. Distilling the intact maps of the latter setting implies enhanced stability over their pooled counterparts. Our standard accuracy and plasticity-stability measures across tasks can therefore be deemed redundant in this setting.

Figure 4.12: Mean and standard deviation of task-aware accuracy and forgetting scores for the additional CIFAR100/20 and CIFAR100/50 settings (over 3 random runs).
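The two axes of the ablation, spatial pooling versus intact maps and symmetric versus asymmetric penalties, can be illustrated with a minimal sketch. This is our schematic reading of PAD rather than the thesis' Equation 4.7/4.12: spatial pooling sums an attention map along each axis (PODNet-style), the symmetric loss penalizes any deviation from the old model's maps, and the asymmetric loss penalizes only where new attention falls below the old, leaving newly added attention unpenalized:

```python
import numpy as np

def spatial_pool(att):
    """PODNet-style pooling: sum an N x N attention map along each axis, then concatenate."""
    return np.concatenate([att.sum(axis=0), att.sum(axis=1)])

def pad_loss(att_old, att_new, asymmetric=True, pooled=True):
    """Schematic Pooled Attention Distillation between old- and new-model attention maps."""
    a_old = spatial_pool(att_old) if pooled else att_old.ravel()
    a_new = spatial_pool(att_new) if pooled else att_new.ravel()
    diff = a_old - a_new
    if asymmetric:
        diff = np.maximum(diff, 0.0)  # only penalize attention that was *lost*
    return float(np.sum(diff ** 2))
```

Growing attention (att_new above att_old everywhere) incurs zero asymmetric loss but a non-zero symmetric one, which matches the intuition above that the asymmetry favors integrating new attention while protecting the old.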
As a consequence, we choose to compare the task-aware average incremental accuracy [Rebuffi et al., 2017] and the last task accuracy across (a) and (b), while contrasting these with LwF as a strong baseline. For further crisper observations, we limit our comparisons to the functional setting. As shown in Table 4.7, we find that Asym Func Spatial consistently performs the best across both metrics (with a gain of > 2% over Sym Func Intact in either metric). In general, distilling the intact attention maps can be seen to hurt the performance of the models, as their accuracy drops below that of the baseline LwF.

Conclusion

In this work, we adapted and analyzed several continual learning methods to counter forgetting in Vision Transformers, mainly with the help of regularization. We then introduced a novel PODNet-inspired regularization based on the attention maps of self-attention mechanisms, which we termed Pooled Attention Distillation (PAD). Shedding light on its limitation at learning new attention, we devised its asymmetric version that avoids penalizing the addition of new knowledge in the model.
We validated the superior plasticity of the asymmetric loss on several benchmarks. Besides the meticulous comparison of a range of regularization approaches, i.e., functional (LwF), weight (EWC), and the proposed attention-based regularization, we extended the application of PAD to the functional submodules of ViTs. To this end, we investigated regularization in the contextualized embeddings of ViTs. The latter exploration led us to discover that the regularization of functional submodules can help achieve the best overall performances, while the regularization of their attentional counterparts endows CL models with superior stability. Finally, we remarked the low forgetting scores of vision transformers across the incremental tasks and concluded that their enhanced generalization capabilities may endow them with a natural inclination for incremental learning.
By making our code open-source, we hope to open the doors for future research along the direction of efficient continual learning with transformer-based architectures.

Additional Settings

We experiment on two further CIFAR100 settings with distinct cardinalities of base task classes: CIFAR100/20 Base, with 20 base task classes followed by 8 incremental tasks of 10 classes each; and CIFAR100/50 Base, with 50 base task classes followed by 5 incremental tasks of 10 classes each. The task-aware accuracy and forgetting scores on these are shown in Figure 4.12. We find the PAD-based losses to consistently outperform other regularization approaches, with LwF being the closest tie. Along the direction of the plasticity-stability tradeoff (see Figure 4.13), we observe that: (a) the attentional PAD losses retain better rigidity than their functional counterparts, and (b) the asymmetric variants of PAD losses are more plastic than their symmetric counterparts across these settings.
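The class partitions behind these settings can be generated with a few lines; the helper name and layout are ours, but the task counts follow directly from the description above:

```python
def make_splits(num_classes, base, incr):
    """Partition class ids into one base task followed by equal-sized incremental tasks."""
    classes = list(range(num_classes))
    return [classes[:base]] + [classes[i:i + incr]
                               for i in range(base, num_classes, incr)]

cifar100_20base = make_splits(100, base=20, incr=10)  # 1 base task + 8 incremental tasks
cifar100_50base = make_splits(100, base=50, incr=10)  # 1 base task + 5 incremental tasks
```

In practice the class ids would be shuffled with a fixed seed before partitioning, so that each of the 3 random runs sees a different assignment of classes to tasks.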
These trends further validate our hypotheses in Sections 4.2 and 4.2, respectively.

Figure 4.13: Mean and standard deviation of task-aware plasticity-stability scores for the additional CIFAR100/20 and CIFAR100/50 settings (over 3 random runs).

Task Agnostic Results

Figure 4.14 depicts the task-agnostic accuracy and forgetting scores for the settings mentioned in the main section as well as in Section 4.2. Given the contradictory terms of resource-scarce exemplar-free CL and data-hungry ViTs, task-agnostic evaluations can be seen to be particularly challenging.
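The difference between the task-aware (taw) and task-agnostic (tag) protocols reduces to whether predictions are restricted to the current task's classes; a minimal sketch in our own formulation:

```python
import numpy as np

def predict(logits, task_classes=None):
    """Task-agnostic: argmax over all classes. Task-aware: argmax within `task_classes`."""
    if task_classes is None:
        return logits.argmax(axis=1)
    task_classes = np.asarray(task_classes)
    # Restrict to the columns of the known task, then map back to global class ids.
    return task_classes[logits[:, task_classes].argmax(axis=1)]
```

Under the task-agnostic protocol the model must implicitly infer which task a sample belongs to on top of classifying it, which is consistent with the lower tag scores discussed here.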
The further avoidance of heavier data augmentations in our training settings can be seen to give rise to two major repercussions across the task-agnostic accuracies: (a) the scores remain consistently low, and (b) the models show smaller yet consistent variations in performances across all settings. That said, we find the functional PAD losses to perform the best on all but the CIFAR100/50 setting. The larger proportion of base task classes in the latter setting can be seen to greatly benefit the learning of LwF (the least parameterized loss term). Further on the note of class proportions, we observe that an equal spread of classes across the tasks has a smoothing effect on the variations of scores across different methods. On the contrary, the CIFAR100/50 setting leads to low variability of task-agnostic forgetting scores across the methods. This can again be attributed to the fact that a very large first task better leverages the generalization capabilities of ViTs, thus making them better at avoiding forgetting over the subsequent incremental steps.
This further adds to our reasoning regarding the natural resilience of ViTs to incremental learning settings. When compared across methods, the attentional variants of the PAD losses display the least amount of forgetting, followed by their functional counterparts.

Figure 4.14: Mean and standard deviation of task-agnostic accuracy and forgetting scores for the CIFAR100/10, CIFAR100/20, CIFAR100/50, and ImageNet/6 settings (over 3 random runs). The larger proportion of base task classes (for example, CIFAR100/50) gives rise to higher variation of accuracies and lower variation of forgetting scores across methods, with the latter indicating the inclination of ViTs towards better generalization and preservation of knowledge.

[Figure: task-agnostic accuracy (top) and forgetting (bottom) per task, averaged over 3 seeds, for CIFAR100/50, CIFAR100/20, CIFAR100/10, and ImageNet/6; methods compared: finetuning, LwF, EWC, ATT(sym/asym), FUNC(sym/asym).]

4.3 Simpler is Better: off-the-shelf Continual Learning through Pretrained Backbones

In this section we propose a simple baseline for continual learning that leverages pretrained backbones. The approach is fast, since it requires no parameter updates, and has minimal memory requirements (on the order of KBytes). By providing such a simple baseline, and achieving strong performance on all the major benchmarks used in the literature, we follow up on the concerns raised in Section 4.1 about the simplicity of those benchmarks. Secondly, we show that pretraining causes the network to generalize to a point where the incremental learning of new tasks becomes very simple. In particular, the “training” phase reorders data and exploits the power of pretrained models to compute a class prototype and fill a memory bank. At inference time we match the closest prototype through a kNN-like approach, which provides the prediction.
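The test-time prototype matching just described can be sketched as follows. This is a minimal NumPy illustration, not the thesis code: it assumes the prototypes are stacked as rows of an array, and `predict` and its arguments are illustrative names.

```python
import numpy as np

def predict(feat, prototypes, labels):
    """Return the label of the prototype closest to `feat`.

    feat:       (d,) feature vector of a test image
    prototypes: (C, d) array, one row per class prototype
    labels:     list of C class labels, aligned with the rows
    """
    # kNN-like matching with k = 1: pick the nearest prototype
    dists = np.linalg.norm(prototypes - feat, axis=1)
    return labels[int(np.argmin(dists))]

# toy usage: two 3-d prototypes
protos = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(predict(np.array([0.9, 0.1, 0.0]), protos, ["cat", "dog"]))  # cat
```

With real backbones, `feat` would be the output of the pretrained extractor on the test image; the matching itself stays this simple.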
We will see how this naive solution can act as an off-the-shelf continual learning system. In order to consolidate our results and merge the above two works, we use the devised pipeline with both CNNs and Vision Transformers. We will discover that the latter have the ability to produce features of higher quality. As a side note, we discuss some extensions to the unsupervised realm. In a nutshell, this simple pipeline raises the same questions raised by previous works such as Prabhu et al. [2020] on the effective progress made by the CL community, especially regarding the datasets considered and the usage of pretrained models.

Figure 4.15: Depiction of our simple baseline. Our pipeline does not perform parameter updates and consumes a few KBytes as memory bank.
Until now, the CL community has mainly focused on the analysis of catastrophic forgetting in Convolutional Neural Network (CNN) models. However, as recent works show, Vision Transformers (ViTs) are asserting themselves as a valuable alternative to CNNs for computer vision tasks, sometimes achieving better performance than CNNs Chen et al. [2022]. The power of ViTs lies in their weaker inductive bias Morrison et al. [2021] and in their consequently better generalization ability. Thanks to this ability, ViTs are naturally inclined continual learners, as pointed out in Section 4.2. In the transformer literature, the usage of pretrained backbones is becoming a must; in fact, training such systems requires extensive amounts of data and careful hyper-parameter optimization.
[Figure 4.15 diagram: Training — 1. Batch Reordering, 2. Feature Extraction, 3. Prototype Creation: the class groups of each task are forwarded through the Visual Transformer and their mean features p1, …, p6 are stored in the Memory Bank. Test — 1. Feature extraction, 2. Match closest prototype.]

Using pretrained backbones is also common in the Computer Vision community, where CNNs are the main player. In the CL literature, pretraining is frequent, but not constant. It is typically carried out on half of the analyzed dataset or through a big initial task that has the objective of facilitating the learning of low-level features. The very best results, however, have been achieved when pretraining is not skipped. This is confirmed by the CVPR 2020 Continual Learning Challenge summary report Lomonaco et al. [2022], where the authors noted that all the proposed methods leveraged pretrained backbones. On top of that, simple baselines sometimes provide better results than overly engineered CL solutions; GDumb Prabhu et al. [2020] is such an example.
In that work, the authors showed superior performance against several state-of-the-art methods through a system composed of just a random memory sampler and a simple learner (CNN or MLP). From a practical point of view, these methods often constitute a simple, clear, fast, intuitive and efficient solution. Following these lines, we explore a kNN-like method to perform off-the-shelf online continual learning leveraging the power of pretrained vision transformers. Our system constitutes a simple and memory-friendly architecture requiring zero parameter updates. As one of the first works using ViTs in CL, we propose a robust baseline for future works and provide an extensive comparison against CNNs. In brief, the contributions are the following:

- We devise a simple pipeline composed of a pretrained feature extractor and an incremental prototype bank. The latter is updated as new data is experienced. The overall cost of the method is the storage of a pretrained backbone and a few KBytes for the memory bank.

- We devise a baseline for future CL methodologies that will exploit pretrained Vision Transformers or ResNets. The baseline is fast and does not require any parameter update, yet achieves robust results in 200 lines of Python, unlocking reproducibility too.

- We provide a comparison for our pipeline between ResNets and Visual Transformers. We discover that Vision Transformers produce more discriminative features, which is appealing also for the CL setting.

- In light of such results, we raise the same questions as GDumb Prabhu et al. [2020] does on the progress made by the CL community so far, specifically regarding the quality of the datasets and the usage of pretrained backbones.

Algorithm 1 Off-the-shelf CL.
“Training”
Require: T, φ, M
for ti ∈ T do
    G = GroupByClass(ti)
    for g ∈ G do
        f = φ(g)        ▷ Extract features
        p = µ(f)        ▷ Compute mean feature
        M ← p           ▷ Store prototype in memory
return M

Related Works

Only recently have a few works considered self-attention models in continual learning. Li et al. [2022] proposed a framework for object detection exploiting the Swin Transformer Liu et al. [2021] as a pretrained backbone for a CascadeRCNN detector; the authors show that the extracted features generalize better to unseen domains, hence achieving lower forgetting rates compared to ResNet50 He et al. [2016] backbones. This also follows the conclusions of Paul and Chen [2021] that vision transformers are more robust learners than CNNs. Several methods in CL use pretrained backbones as feature extractors, such as Hayes and Kanan [2020], Aljundi et al. [2019b] and Hocquet et al. [2020], and sometimes the pretraining is carried out on half (or a big portion) of the dataset considered, as in PODNet Douillard et al. [2020] or in Yu et al. [2021]. For a more complete review of CL methodologies we point to these recent surveys: Parisi et al. [2019], Hadsell et al. [2020], Mundt et al. [2020]. A similar study on pretraining for CL has been conducted by Mehta et al. [2021]. In particular, they study the impact on catastrophic forgetting that a linear layer might incur while using a pretrained backbone. Their study focuses only on ResNet18 for vision tasks, but they also include NLP tasks.

Method

Setting

Continual Learning characterizes the learning by introducing the notion of subsequent tasks. In particular, the learning happens in an incremental fashion, that is, the model incrementally experiences different training sessions as time advances. Practically, a learning dataset is split into chunks where each split is considered an incremental task containing data. CL being a relatively new field, the community is still converging to a common setting notation, but we focus on an online, task-agnostic NC-type scenario.
That is, the model forwards a pattern just once and does not have the task label at test time. More specifically, we follow the Lomonaco and Maltoni [2017] categorization and use an NC-type scenario where each task contains a disjoint group of classes. More formally, given a dataset D and a set of n disjoint tasks T that will be experienced sequentially:

T = [t1, t2, . . . , tn]    (4.13)

each task ti = (Ci, Di) is represented by a set of classes Ci = {ci,1, ci,2, . . . , ci,ni} and training data Di (images).
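The disjoint split above can be made concrete with a small sketch. The helper below is illustrative and not from the thesis: it partitions the class set into contiguous groups and assigns each sample to the task owning its class, so the class sets of any two tasks are disjoint.

```python
def make_nc_tasks(samples, n_tasks):
    """Split (x, class_id) pairs into n disjoint NC-type tasks.

    Classes are partitioned into contiguous groups; each task ti
    receives only the samples whose class belongs to its group,
    so Ci ∩ Cj = ∅ for i != j.
    """
    classes = sorted({c for _, c in samples})
    per_task = -(-len(classes) // n_tasks)  # ceiling division
    groups = [classes[i * per_task:(i + 1) * per_task] for i in range(n_tasks)]
    task_of = {c: i for i, g in enumerate(groups) for c in g}
    tasks = [[] for _ in range(n_tasks)]
    for x, c in samples:
        tasks[task_of[c]].append((x, c))
    return tasks

# toy usage: 4 classes split into 2 tasks of 2 classes each
data = [("img0", 0), ("img1", 1), ("img2", 2), ("img3", 3)]
t1, t2 = make_nc_tasks(data, 2)
print([c for _, c in t1], [c for _, c in t2])  # [0, 1] [2, 3]
```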
We assume that the classes of each task do not overlap, i.e. Ci ∩ Cj = ∅ if i ≠ j.

“Training” Phase

In the training phase, given a task ti ∈ T, a feature extractor φ and a memory bank as a dictionary M, the procedure does the following:

1. First, it performs batch reordering, that is, it groups the images of a given task by their class.
2. After grouping, it forwards each new subset to the feature extractor φ.
3. Given the feature representations of a group, it computes the mean of the features to create a class prototype.
4. It updates the memory bank M by storing each computed prototype.

At the end of the training procedure for a given task ti, we have a representative prototype vector for each class contained in ti. As we said, the prototype vector is computed as the mean feature representation of the patterns of the same class.
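The four steps above can be sketched in a few lines. This is a minimal illustration under the assumption of a generic feature extractor `phi` (any pretrained backbone mapping an image to a 1-d feature array); the function name and signature are ours, not the thesis code.

```python
from collections import defaultdict
import numpy as np

def train_task(task, phi, M):
    """One incremental "training" step: no parameter updates.

    task: iterable of (image, class_id) pairs
    phi:  pretrained feature extractor, image -> 1-d feature array
    M:    dict memory bank, class_id -> prototype vector
    """
    groups = defaultdict(list)
    for x, c in task:                             # 1. batch reordering: group by class
        groups[c].append(x)
    for c, imgs in groups.items():
        feats = np.stack([phi(x) for x in imgs])  # 2. feature extraction
        M[c] = feats.mean(axis=0)                 # 3-4. mean prototype, stored in M
    return M

# toy usage with a fake 2-d "backbone"
phi = lambda x: np.array([float(x), 1.0])
M = train_task([(1, "a"), (3, "a"), (5, "b")], phi, {})
print(M["a"])  # [2. 1.]
```

Because M only grows by one d-dimensional vector per new class, the memory cost stays in the KiB range regardless of how many samples each task contains.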
A depiction of the “training” phase is reported in Figure 4.15; we also provide pseudocode in Algorithm 1. We also point out that there is no formal “training” of the network; in fact, we do not perform any parameter update, we simply exploit the pretrained models and construct a kNN-like memory system.

Memory KiB/class  Params  Model      CIFAR100  CIFAR10  Core50  Oxford Flowers102  Tiny ImgNet200
2 KiB             11.7M   resnet18   0.53      0.76     0.72    0.73               0.55
2 KiB             21.8M   resnet34   0.55      0.81     0.74    0.67               0.62
8 KiB             25.5M   resnet50   0.59      0.80     0.71    0.70               0.63
8 KiB             60.1M   resnet152  0.67      0.89     0.72    0.66               0.76
0.75 KiB          5.6M    ViT-T/16   0.36      0.63     0.49    0.54               0.24
3 KiB             86.4M   ViT-B/16   0.64      0.87     0.74    0.95               0.63
0.75 KiB          5.6M    DeiT-T/16  0.57      0.80     0.73    0.68               0.64
3 KiB             86.4M   DeiT-B/16  0.68      0.90     0.80    0.74               0.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='79 Table 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='8: Off-the-shelf accuracy performance on different dataset benchmarks, we both analyzed a CNN model and a ViT pretrained models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Test Phase After completing the training phase for a task ti the memory bank M will be populated by the prototypes of the classes encoundered so far.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' During this test phase, we simply use a knn-like approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Given an image x, the updated memory bank M and the feature extractor φ we devise the test phase as follows: 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Forward the test image x to the feature extractor φ 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Compute a distance between the feature representation of the image and all prototypes contained in M 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' We match the prototype with minimum distance and return its class In a nutshell, we perform k-nn with k=1 over the feature representation of an image, matching the class of the closes prototype in the bank.' 
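The two phases above can be sketched in a few lines. The following is a minimal NumPy illustration under our own assumptions (a real backbone would supply the features; the function and array names and the toy data are ours): the memory bank holds one mean-feature prototype per class, and prediction is 1-NN over the prototypes with the l2 distance.

```python
import numpy as np

def build_memory(features, labels):
    """Training phase: one prototype (mean feature vector) per class."""
    classes = np.unique(labels)
    protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return protos, classes

def predict(memory, classes, query):
    """Test phase: 1-NN over prototypes with the l2 distance."""
    dists = np.linalg.norm(memory - query, axis=1)  # distance to every prototype
    return classes[np.argmin(dists)]                # class of the closest one

# Toy example with 4-dimensional "features" standing in for backbone outputs.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0, 0.1, (10, 4)), rng.normal(1, 0.1, (10, 4))])
labels = np.array([0] * 10 + [1] * 10)
memory, classes = build_memory(feats, labels)
print(predict(memory, classes, np.full(4, 0.9)))  # close to the class-1 prototype
```

Storing one float32 prototype per class is also consistent with the per-class memory column of Table 4.8: the 512-dimensional resnet18/34 features take 512 × 4 B = 2 KiB, the 2048-dimensional resnet50/152 features 8 KiB, the 192-dimensional ViT-T/DeiT-T features 0.75 KiB, and the 768-dimensional ViT-B/DeiT-B features 3 KiB.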
If the selected class is the same as that of the test example, we have a hit, and a miss otherwise. Figure 4.15 reports a visual depiction of the test procedure. As distance we use a simple l2, but several tests have also been made with cosine similarity. Although the results with cosine similarity are better, we opt for l2 since it provides the best speedup in the implementation through PyTorch.

Experiments

It is suspected that Visual Transformers generalize better with respect to CNN models. To this end, we compare CNN models and ViT models as feature extractors. We selected four CNN models to compare against four attention-based models.

Dissecting continual learning: a structural and data analysis

In particular, we selected DeiT-Base/16 and DeiT-Tiny/16 Touvron et al. [2021], and ViT-Base/16 and ViT-Tiny/16 Dosovitskiy et al. [2021] as visual transformers, while we opted for Resnet18/34/50/152 He et al. [2016] as CNN models. We used the timm Wightman [2019] library to fetch the pretrained models, where all the models have been trained on ImageNet Deng et al. [2009], and the continuum Douillard and Lesort [2021] library to create the incremental setting for 5 datasets, namely CIFAR10/100, Core50, OxfordFlowers102 and TinyImageNet200. In all dataset benchmarks, we upscaled the images to 224 × 224 pixels in order to accommodate the visual transformers, which need such an input dimension. We apply the same transformation to the resnet data too, for a fair comparison. In order to match the closest prototype at test time, we used l2 as the preferred measure.
The main results are reported in Table 4.8. The pipeline is extremely simple, yet it achieves impressive performance as an off-the-shelf method, at the cost of a very small overhead to store the prototype memory. In fact, at the end of the training phase, the memory bank translates into only a few KBytes of storage. Although this preliminary work considers only the task-agnostic setting, we note that if at test time we are given the task label of the data, we can recast the method to work in the task-aware setting. In this case, performing the test phase would be easier, since the comparison of the test data would be carried out only on a subset of the prototypes. Along the same line, one can see that in Table 4.8 we do not report each dataset task split. In fact, our method works for any dataset split, since it just needs any partition of the datasets that respects an NC protocol, i.e. as long as tasks are formed by images that can be grouped in classes. We can also appreciate that transformer architectures work best in all benchmarks, suggesting superior generalization capabilities with respect to CNNs or, at least, more discriminative features.

Discussion

In light of these results, we think that this work may be extended to be considered as a baseline to assess the performance of continual learning methodologies using pretrained networks as feature extractors. In particular, a thorough investigation should be carried out by substituting the k-nn approach with a linear classifier; this would also allow a better comparison between resnets and visual transformers. However, we think that these preliminary results are of interest to the Vision Transformer and CL research community.
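As a hedged sketch of the linear-classifier variant suggested above (our own illustration, not an experiment from this work), one can fit a simple linear probe on frozen features; here a least-squares classifier stands in for the more common logistic-regression probe, and random blobs stand in for backbone features:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two separable blobs standing in for frozen-backbone features of two classes.
feats = np.concatenate([rng.normal(0.0, 0.1, (20, 8)), rng.normal(1.0, 0.1, (20, 8))])
labels = np.array([0] * 20 + [1] * 20)

# Linear probe on frozen features: one-hot targets, least-squares weights.
X = np.hstack([feats, np.ones((len(feats), 1))])  # append a bias column
Y = np.eye(2)[labels]                             # one-hot class targets
W, *_ = np.linalg.lstsq(X, Y, rcond=None)         # fit the linear classifier
pred = np.argmax(X @ W, axis=1)                   # predicted class per sample
print((pred == labels).mean())                    # separable toy data: 1.0
```

Unlike the 1-NN matching, such a probe requires an extra fitting step per task, but it produces a decision boundary that weighs every feature dimension, which makes backbone comparisons less sensitive to the raw feature scale.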
We then raise some concerns with respect to the procedure and the benchmarks used to assess new CL methodologies. As we can see, through a pretrained model we can achieve impressive results with respect to the current CL state-of-the-art Parisi et al. [2019], Hadsell et al. [2020], Mundt et al. [2020]. This point has also been raised by GDumb Prabhu et al. [2020], where the authors questioned the progress by providing a very simple baseline.

Figure 4.16: Direct off-the-shelf extension of the baseline proposed to tackle unsupervised continual learning.

Moreover, we can further extend this simple pipeline to be used in unsupervised continual learning. Actually, the extension is straightforward. In an unsupervised scenario the batch reordering step cannot be performed, since we are not allowed to know each data point's class label. To cope with this lack of information, one can substitute that step with any clustering algorithm, such as K-means (we tried it, but with no luck), or a more sophisticated approach such as autoencoders, self-organizing maps, etc. The test phase of the unsupervised extension would be analogous to the supervised counterpart.

Conclusion

In this short experimental segment we proposed a baseline for continual learning methodologies that exploits pretrained Vision Transformers and Resnets. We tackle the online NC-type class-incremental learning scenario, the most common one, even though our pipeline can be extended to different scenarios.
Our off-the-shelf method is conceptually simple, yet gives strong results and can be implemented in 200 lines of Python, thereby enhancing reproducibility. To assess the performance of different backbones in our pipeline, we compared Resnet models against Vision Transformer feature extractors pretrained on the same dataset, and show that vision transformers provide more powerful features. This suggests that the Vision Transformers' ability to encode knowledge is broader. We then raise some questions about CL research progress and note that, with a pretrained model and a simple pipeline, one can achieve strong results; therefore, new methodologies should drop the usage of pretrained backbones when testing on such dataset benchmarks.

4.4 Unsupervised Semantic Discovery through Visual Patterns detection

So far, we directly investigated the impact on performance of altering structural and data properties of object recognition frameworks. If we step back a bit and consider a broader vision of continual learning, we understand that, in order to adapt to a changing environment, an artificial agent should also manifest the ability to continuously discover new patterns, in our case visual patterns. We propose a smart pipeline that is able to discover repetitive patterns in an image by means of a threshold parameter. That is, if we alter this specific parameter, we are able to discover new semantic levels in a scene. This work goes a bit in another direction from the dissection of current continual learning methodologies treated in this thesis. Instead, it is a step towards the ability to build a system able to incrementally explore.

To this end, we propose a new, fast, fully unsupervised method to discover semantic patterns. Our algorithm is able to hierarchically find visual categories and produce a segmentation mask. Through the modeling of what is a visual pattern in an image, we introduce the notion of “semantic levels” and devise a conceptual framework along with measures and a dedicated benchmark dataset for future comparisons. Our algorithm is composed of two phases: a filtering phase, which selects semantical hotspots by means of an accumulator space, and a clustering phase, which propagates the semantic properties of the hotspots on a superpixel basis. We provide both qualitative and quantitative experimental validation, achieving optimal results.

While the vast majority of supervised object detection and segmentation approaches leverage rich datasets with semantically labelled categories, unsupervised methods cannot rely on such a luxury. Indeed, they are expected to infer from the image content itself what is a relevant object and which are its boundaries.
This is a daunting task, as relevance is totally domain-specific and also highly subjective, especially when taking into account human judgement, which exploits a lot of out-of-band information that cannot be found in the sheer image data. As a matter of fact, little effort has been put into investigating unsupervised automatic approaches to detect and segment semantically relevant objects without any additional information than the image itself or any a priori knowledge of the context. This is due to the fact that a unique definition of what is a relevant object (or, as we prefer to call it, a visual category) does not actually exist. This is especially true if we are seeking to set a formal definition that can be adopted across all domains in a manner consistent with human judgement. Within this section, we try to address this problem by considering as a visual category each pattern whose appearance is consistent enough across the image. In other words, we consider something to be a relevant object if it appears more than once, exhibiting consistent visual features in different parts of the scene.

From a cognitive and perceptual point of view this makes a lot of sense. In fact, it is easy to observe that if a human is presented with images representing several different but recurring objects, even in a cluttered scene, he does not need to know what the objects actually represent in order to be able to assign semantically-consistent labels to each of them. He would even be able to label each pixel, defining the boundaries of the objects. As an example, if someone takes a look at a large bin of different (but to some extent repeated) mechanical parts he never saw before, he is still able to tell one part from the other by exploiting their coherent visual and structural appearance. This ability is also preserved under slight changes in scale, orientation or partial occlusion of the objects. Since this automatic assignment of recurrent objects to a visual category is both well-defined and quite natural in humans, it is a very good candidate as a rule for automatically detecting relevant objects in an unsupervised manner that has good chances of being coherent with human judgement applied to the same image.
Figure 4.17: A real world example of unsupervised segmentation of a grocery shelf. Our method can automatically discover both low-level coherent patterns (brands, flavor images and logos) and high-level compound objects (multi-packs and bricks) by controlling the semantical level of the detection and segmentation process.

To be fair, we must also underline the fact that, in order to define the boundaries of a visual category and thus obtain a meaningful segmentation, the level of detail must also be taken into account. As an example, if we present to a human an image of a crowded road captured from the side, and we ask him to segment visual categories according to recurrent patterns, we could get slightly different results from different people, depending on their attention to detail. Some people will segment cars and trees. Others could consider the car body to be a different object from the wheels, and branches from the tree trunk. The pickiest could even separate tires from wheel rims and segment out each single leaf. In practice, semantic consistency can happen at different scales when dealing with compound objects that present internal self-repetitions or are made up of single parts also present in other objects. To address this aspect, we also have to design a proper strategy to perform visual category detection and interpretation at a particular scale, according to the level of detail we want to express during the segmentation process. We define this level of detail as the semantical level. Semantical levels, of course, do not map directly onto specific high-level concepts, such as whole objects, large parts or minute components. Rather, the semantical level acts as a coarse degree of granularity of the segmentation process, resulting in a hierarchical split of segments as it changes.
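The method itself is developed in the following sections; purely as a toy illustration of how a single distance threshold can act as such a granularity knob, consider agglomerative clustering of a handful of 2D points, where loosening the threshold merges fine groups into coarser ones (the data and threshold values are our own, not part of this work):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Three tight pairs of points; the first two pairs lie near each other,
# the third pair is far away.
pts = np.array([[0.0, 0], [0.1, 0], [0.9, 0], [1.0, 0], [5.0, 0], [5.1, 0]])
Z = linkage(pts, method="single")  # hierarchy of merges by nearest distance

# Coarse "semantic level": a loose threshold keeps only two big groups.
coarse = fcluster(Z, t=2.0, criterion="distance")
# Fine "semantic level": a tight threshold keeps each pair separate.
fine = fcluster(Z, t=0.5, criterion="distance")
print(len(set(coarse)), len(set(fine)))  # 2 3
```

Sweeping the threshold from loose to tight thus yields a hierarchy of nested groupings, which is the same qualitative behaviour the semantical-level parameter is meant to expose.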
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' These two definitions of visual categories and semantical levels, that will be developed throughout the remainder of the work, are the two key concepts driving our novel segmentation method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Chapter 4 87 Yoga Yogo Optimum Optimun Optimum Optimum OptimumDissecting continual learning: a structural and data analysis The ability of our approach to leverage repetitions to capture the internal rep- resentation in the real world and then extrapolates visual categories at a specific semantical level is actually achieved through the combination of a couple of standard techniques,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' slightly modified for the specific task,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' and of a few key steps specifically crafted to make the process work in a consistent way with respect to the cognitive process adopted by humans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' This happens, for instance, by seeking for highly rel- evant repetitive structural patterns, called semantical hotspots, characterized by a novel feature descriptor, called splash.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' We do this through a scale-invariant method and with no continuous geometrical constraints on the visual pattern disposition.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' We also do not constrain ourselves to find only one visual pattern, which is another very common assumption with other approaches in literature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Rather our technique is designed from the start to be able to detect more patterns at once, being able to assign to each of them a different visual category label, corresponding to a different real world object or object part, according to the selected semantical level.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Overall, with this study, we are offering to the community the following contri- butions: A new pipeline, including the definition of a specially crafted feature descriptor, to capture semantical categories with the ability to hierarchically span over semantical levels;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' A specially crafted conceptual framework to evaluate unsupervised semantic- driven segmentation methods through the introduction of the semantical levels notion along with a new metric;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' A new dataset consisting of a few hundredths labelled images that can be used as a benchmark for visual repetition detection in general.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' The remainder of the section is organized as follows.' 
Section 4.4 describes the related works with respect to feature extraction and automatic visual pattern detection. Section 4.4 introduces our method, giving details on the overall pipeline and on the implementation. Section 4.4 presents an experimental evaluation and a comparison with similar approaches. Finally, the conclusions are found in Section 4.4. Code, dataset and notebooks used in this study will be made available for public use.

Related Works

Several works have been proposed to tackle visual pattern discovery and detection.
While the paper by Leung and Malik [Leung and Malik, 1996] could be considered seminal, many other works build on their basic approach, detecting contiguous structures of similar patches by knowing the window size enclosing the distinctive pattern. One common procedure to describe what a pattern is consists of first extracting descriptive features such as SIFT, performing a clustering in the feature space, and then modelling the group disposition over the image by exploiting geometrical constraints, as in [Pritts et al., 2014] and [Chum and Matas, 2010], or by relying only on appearance, as in [Doubek et al., 2010, Liu and Liu, 2013, Torii et al., 2015]. The geometrical modelling of the repetitions is usually done by fitting a planar 2-D lattice, or a deformation of it [Park et al., 2009], through RANSAC procedures as in [Schaffalitzky and Zisserman], [Pritts et al., 2014], or even by exploiting the mathematical theory of crystallographic groups as in [Liu et al., 2004]. Shechtman and Irani [Shechtman and Irani, 2007] also exploited an active learning environment to detect visual patterns in a semi-supervised fashion. For example, Cheng et al. [Cheng et al., 2010] use input scribbles performed by a human to guide the detection and extraction of such repeated elements, while Huberman and Fattal [Huberman and Fattal, 2016] ask the user to detect an object instance, after which the detection is performed by exploiting the correlation of patches near the input area.

Recently, as a result of the new wave of AI-driven Computer Vision, a number of Deep Learning based approaches have emerged. In particular, Lettry et al. [Lettry et al., 2017] argued that filter activations in a model such as AlexNet can be exploited to find regions of repeated elements over the image, thanks to the fact that filters over different layers show regularity in their activations when convolved with the repeated elements of the image. On top of the latter work, Rodríguez-Pardo et al. [Rodríguez-Pardo et al., 2019] proposed a modification to perform the texture synthesis step. A brief survey of visual pattern discovery in both video and image data, up to 2013, is given by Wang et al. [Wang et al., 2014]; unfortunately, after that it seems that the computer vision community lost interest in this challenging problem. We point out that all the aforementioned methods look for only one particular visual repetition, except for [Liu and Liu, 2013], which can be considered the most direct competitor and the main benchmark against which to compare our results.
Figure 4.18: (a) A splash in the image space with center in the keypoint ⃗cj. (b) H, with the superimposed splash at the center; note the different levels of the vote, ordered by endpoint importance, i.e. descriptor similarity. (c) 3-D projection showing the gaussian-like formations and the thresholding procedure of H. (d) Backprojection through the set S.

Method Description

Features Localization and Extraction

We observe that any visual pattern is delimited by its contours. The first step of our algorithm, in fact, consists in the extraction of a set C of contour keypoints, each indicating a position ⃗cj in the image. To extract keypoints we opted for the Canny algorithm, for its simplicity and efficiency, although a more recent and better edge extractor [Liu et al., 2019] could be used to obtain a better overall procedure. A descriptor ⃗dj is then computed for each selected ⃗cj ∈ C, thus obtaining a descriptor set D. In particular, we adopted the DAISY algorithm because of its appealing dense-matching properties, which nicely fit our scenario. Again, this module of the pipeline could be replaced with something more advanced, such as [Ono et al., 2018], at the cost of some computational time.

Semantic Hot Spots Detection

In order to detect self-similar patterns in the image, we start by associating the k most similar descriptors to each descriptor ⃗dj. We can visualize this data structure as a star subgraph with k endpoints, called a splash, "centered" on the descriptor ⃗dj.
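To make the splash construction concrete, here is a minimal sketch (our own illustration, not the thesis code): a brute-force k-nearest-neighbour query in descriptor space. The function name `build_splashes` and the toy keypoints/descriptors are hypothetical.

```python
from math import dist

def build_splashes(keypoints, descriptors, k=3):
    """For each descriptor d_j, find the k most similar descriptors and
    return the splash as a (center_keypoint, [endpoint_keypoints]) pair."""
    splashes = []
    for j, d_j in enumerate(descriptors):
        # Brute-force descriptor distances to every other keypoint.
        neighbours = sorted(
            (i for i in range(len(descriptors)) if i != j),
            key=lambda i: dist(descriptors[i], d_j),
        )[:k]
        splashes.append((keypoints[j], [keypoints[i] for i in neighbours]))
    return splashes

# Toy example: 4 contour keypoints with 2-D descriptors; the two
# visually similar pairs are (0, 1) and (2, 3).
kps = [(0, 0), (10, 0), (0, 10), (10, 10)]
descs = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
splashes = build_splashes(kps, descs, k=1)
```

A real pipeline would replace the brute-force loop with a k-d tree or similar index, since every keypoint queries every other one.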
Figure 4.18 (a) shows one.

Splashes potentially encode repeated patterns in the image, and similar patterns are then represented by similar splashes. The next step consists in separating these splashes from those that encode noise only; this is accomplished through an accumulator space. In particular, we consider a 2-D accumulator space H of size double the image. We then superimpose each splash on the space H and cast k votes, as shown in Figure 4.18 (b). In order to take into account the noise present in the splashes, we adopt a gaussian vote-casting procedure g(·). Similar superimposed splashes contribute to similar locations on the accumulator space, resulting in peak formations (Figure 4.18 (c)). We summarize the voting procedure as follows:

H_⃗w = H_⃗w + g(⃗w, ⃗h_i^(j))    (4.14)

where ⃗h_i^(j) is the i-th splash endpoint of descriptor ⃗dj in accumulator coordinates, and ⃗w is the size of the gaussian vote. We filter all the regions in H which are above a certain threshold τ, obtaining a set S of the locations corresponding to the peaks in H. The τ parameter acts as a coarse filter and is not a critical parameter of the overall pipeline; a sufficient value is τ = 0.05 · max(H). Lastly, in order to visualize the semantic hotspots in the image plane, we map splash locations between H and the image plane by means of a backtracking structure V.
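The voting procedure of Eq. (4.14) can be sketched as follows. This is a simplified illustration under our own assumptions: a sparse dict-based accumulator indexed by endpoint coordinates relative to the splash center (the actual pipeline votes into a dense 2-D array of double the image size), and a square gaussian window of half-size w.

```python
from math import exp

def cast_votes(splashes, w=1, sigma=1.0):
    """Accumulate gaussian votes (Eq. 4.14): each splash endpoint,
    expressed relative to its center, votes into the accumulator H,
    so similar splashes pile up at similar locations."""
    H = {}
    for center, endpoints in splashes:
        for ex, ey in endpoints:
            hx, hy = ex - center[0], ey - center[1]  # accumulator coords
            for dx in range(-w, w + 1):
                for dy in range(-w, w + 1):
                    g = exp(-(dx * dx + dy * dy) / (2 * sigma ** 2))
                    H[(hx + dx, hy + dy)] = H.get((hx + dx, hy + dy), 0.0) + g
    return H

def peaks(H, tau_ratio=0.05):
    """Keep accumulator locations above tau = tau_ratio * max(H)."""
    tau = tau_ratio * max(H.values())
    return {loc for loc, v in H.items() if v >= tau}

# Two identical splashes (same relative endpoint) vote twice at (5, 0).
H = cast_votes([((0, 0), [(5, 0)]), ((7, 3), [(12, 3)])], w=0)
```

With w = 0 the gaussian degenerates to a single unit vote, which makes the peak formation easy to inspect by hand.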
In summary, the key insight here is that similar visual regions share similar splashes; we discern noisy splashes from representative ones through an auxiliary structure, namely the accumulator. We then identify, and backtrack in the image plane, the semantic hotspots that are candidate points of a visual repetition.

Semantic Categories Definition and Extraction

While the first part described above acts as a filter for noisy keypoints, allowing us to obtain a good pool of candidates, we now transform the problem of finding visual categories into a problem of dense subgraph extraction. We enclose semantic hotspots in superpixels; this extends the semantic significance of the identified points to a broader, but coherent, area. To do so we use the SLIC [Achanta et al., 2012] algorithm, which is one of the simplest and fastest approaches to extract superpixels, as pointed out in a recent survey [Stutz et al., 2018].
Then we choose the cardinality |P| of the superpixels to extract. This is the second and most fundamental parameter, which allows us to span over different semantical levels.

Algorithm 2: Semantic categories extraction
Require: G weighted undirected graph
  i = 0
  s* = −inf
  K* = ∅
  while Gi is not fully disconnected do
    i = i + 1
    Compute Gi by corroding each edge with the minimum edge weight
    Extract the set Ki of all connected components in Gi
    s(Gi, Ki) = Σ_{k∈Ki} µ(k) − α |Ki|
    if s(Gi, Ki) > s* then
      s* = s(Gi, Ki)
      K* = Ki
  return s*, K*

Once the superpixels have been extracted, let G be an undirected weighted graph where each node corresponds to a superpixel p ∈ P. In order to put edges between graph nodes (i.e. two superpixels), we exploit the splash origins and endpoints. In particular, the strength of the connection between two vertices of G is given by the number of splash endpoints falling between the two in a mutually coherent way: to put a weight of 1 between two nodes we need exactly 2 splashes falling with both origin and endpoint in the two candidate superpixels. With this construction scheme, the graph shows clear dense subgraph formations. Therefore, the last part simply computes a partition of G where each connected component corresponds to a cluster of similar superpixels. To achieve this we optimize a function that is maximized when the partition reflects these dense formations. To this end we define the following density score which, given G and a set K of connected components, captures the optimality of the clustering:

s(G, K) = Σ_{k∈K} µ(k) − α |K|    (4.15)

where µ(k) is a function that computes the average edge weight of an undirected weighted graph. The first term of the score assigns a high vote if each connected component is dense, while the second term acts as a regularizer for the number of connected components; we also added a weighting factor α to better adjust the procedure. As a proxy to maximize this function we devised the iterative algorithm reported in Algorithm 2, based on graph corrosion and with temporal complexity O(|E|² + |E| |V|). At each step the procedure corrodes the graph edges by the minimum edge weight of G. For each corroded version of the graph, which we call a partition, we compute s to capture its density. Finally, the algorithm selects the corroded graph partition that maximizes s and subsequently extracts the node groups.
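Algorithm 2 and the score of Eq. (4.15) can be prototyped on a plain edge-weight dictionary. The sketch below is our simplified rendition (helper names such as `corrode` are hypothetical): each iteration subtracts the minimum edge weight, extracts the connected components of the corroded graph, and keeps the partition maximizing s(G, K) = Σ_k µ(k) − α|K|.

```python
def connected_components(edges, nodes):
    """Connected components of an undirected graph via union-find."""
    parent = {n: n for n in nodes}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for u, v in edges:
        parent[find(u)] = find(v)
    comps = {}
    for n in nodes:
        comps.setdefault(find(n), set()).add(n)
    return list(comps.values())

def corrode(graph, alpha=0.5):
    """Iterative graph corrosion in the spirit of Algorithm 2:
    graph is {(u, v): weight}; returns (best score, best partition)."""
    weights = dict(graph)
    nodes = {n for edge in graph for n in edge}
    best_score, best_partition = float("-inf"), None
    while weights:
        # Corrode every edge by the current minimum weight; drop dead edges.
        m = min(weights.values())
        weights = {e: w - m for e, w in weights.items() if w - m > 0}
        comps = connected_components(list(weights), nodes)
        # mu(k): average weight of the edges internal to component k.
        def mu(k):
            internal = [w for (u, v), w in weights.items() if u in k and v in k]
            return sum(internal) / len(internal) if internal else 0.0
        score = sum(mu(k) for k in comps) - alpha * len(comps)  # Eq. (4.15)
        if score > best_score:
            best_score, best_partition = score, comps
    return best_score, best_partition

# Two dense pairs joined by one weak edge: corrosion removes the weak
# edge first, and the two-component partition wins the score.
graph = {("a", "b"): 3.0, ("c", "d"): 3.0, ("b", "c"): 1.0}
best_score, best_partition = corrode(graph, alpha=0.5)
```

On this toy graph the first corrosion step removes the weak ("b", "c") edge, leaving two components of average internal weight 2.0, so the best score is 2.0 + 2.0 − 0.5·2 = 3.0.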
In brief, we first enclose semantic hotspots in superpixels and consider each one as a node of a weighted graph. We then put edges with weight proportional to the number of splashes falling between two superpixels. This results in a graph with clear dense subgraph formations that correspond to superpixel clusters, i.e. semantic categories. The detection of semantic categories thus translates into the extraction of dense subgraphs. To this end we devised an iterative algorithm based on graph corrosion, in which we let the procedure select the corroded graph partition that filters noisy edges and lets dense subgraphs emerge. We do so by maximizing a score that captures the density of each connected component.

Experiments

Dataset

As we introduced in Section 4.4, one of the aims of this work is to provide a better comparative framework for visual pattern detection. To do so we created a public dataset by taking 104 pictures of store shelves. Each picture was taken with a 5-Mpx camera under approximately the same visual conditions. We also rectified the images to eliminate visual distortions. We manually segmented and labeled each repeating product at two different semantic levels. In the first semantic level, products made by the same company share the same label. In the second semantic level, visual repetitions consist of exactly identical products. In total the dataset is composed of 208 ground-truth images, half for the first level and the rest for the second one.
Figure 4.19: (top) Analysis of the measures as the number of superpixels |P| varies. The rightmost figure shows the running time of the algorithm. We repeated the experiments with the noisy version of the dataset but report only the mean, since the variation is almost equal to the original one. (bottom) Distributions of the measures for the two semantic levels, obtained by varying the two main parameters r and |P|.

µ-consistency

We devised a new measure that captures the semantic consistency of a detected pattern, acting as a proxy of the average precision of detection. In fact, we want to be sure that all pattern instances fall on similar ground-truth objects. First we introduce the concept of semantic consistency for a particular pattern ⃗p. Let ⃗P be the set of patterns discovered by the algorithm; each pattern ⃗p contains several instances ⃗pi. Let ⃗L be the set of ground-truth categories, where each category ⃗l contains several object instances ⃗li. Let us define ⃗tp as the vector of ground-truth labels touched by all the instances of ⃗p. We say that ⃗p is consistent if all its instances ⃗pi, i = 0 . . . |⃗p|, fall on ground-truth regions sharing the same label. In this case ⃗tp would be uniform, and we consider ⃗p a good detection. The worst scenario is when, given a pattern ⃗p, every ⃗pi falls on objects with a different label, i.e. all the values in ⃗tp are different. To get an estimate of the overall consistency of the proposed detection, we average the consistency for each ⃗p ∈ ⃗P.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='85 cal total 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='70 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='80 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='65 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='60 First Level First Level First Level First Level + noise First Level + noise 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='60 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='70 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='55 Second Level Second Level Second Level Second Level + noise Second Level + noise Second Level + noise All Levels 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='55 mm mm 000 0 Superpixels Superpixels Superpixels Superpixels Measures Distributions 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='6 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='2 First Level 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='0 Second Level μ-consistency recall total recallWorks Figure 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='20: Qualitative comparison between [Liu and Liu, 2013] [14], [Lettry et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=', 2017] [10] and our algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Our method detects and segments more than one pattern and does not constrain itself to a particular geometrical disposition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' µ-consistency = 1 ���⃗P ��� � ⃗p∈⃗P ��mode �⃗tp ��� ��⃗tp �� (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='16) Recall The second measure is the classical recall over the objects retrieved by the algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Since our object detector outputs more than one pattern we average the recall for each ground truth label by taking the best fitting pattern.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' 1 ���⃗L ��� � ⃗l∈⃗L max⃗p∈⃗P recall (⃗p,⃗l) (4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='17) The last measure is the total recall, here we consider a hit if any of the pattern falls in a labeled region.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In general we expect this to be higher than the recall.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' We report the summary performances in Figure 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' As can be seen the algo- rithm achieves a very high µ-consistency while still able to retrieve the majority of the ground truth patterns in both levels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' One can observe in Figure 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='19 an inverse behaviour between recall and con- sistency as the number of superpixels retrieved grows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' This is expected since less superpixels means bigger patterns, therefore it is more likely to retrieve more ground truth patterns.' 
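To make the three measures concrete, here is a minimal Python sketch (the container types are hypothetical, not the thesis code): each pattern is represented by its vector t_p of touched ground-truth labels, and `recall_fn` stands in for any per-pattern recall computation.

```python
from collections import Counter

def mu_consistency(touched):
    # touched: one list t_p per pattern, holding the ground-truth labels
    # hit by the instances of that pattern (Eq. 4.16).
    return sum(Counter(t).most_common(1)[0][1] / len(t)
               for t in touched) / len(touched)

def averaged_recall(patterns, labels, recall_fn):
    # Eq. 4.17: for every ground-truth category keep the best-fitting pattern.
    return sum(max(recall_fn(p, l) for p in patterns)
               for l in labels) / len(labels)

def total_recall(touched, labels):
    # total recall: a ground-truth category is a hit if ANY pattern touches it.
    hit = set().union(*map(set, touched))
    return sum(l in hit for l in labels) / len(labels)
```

A perfectly consistent pattern contributes 1 to µ-consistency, while a pattern whose instances all land on different categories contributes only 1/|t_p|.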
In order to study the robustness, we repeated the same experiments with an altered version of our dataset. In particular, for each image we applied one of the following corruptions: Additive Gaussian Noise (scale = 0.1 · 255), Gaussian Blur (σ = 3), Spline Distortions (grid affine), Brightness (+100), and Linear Contrast (1.5).

Qualitative Validation

Firstly we begin the comparison by commenting on [Liu and Liu, 2013]. One can observe that our approach has a significant advantage in terms of how the visual pattern is modeled. While the authors model visual repetitions as geometrical artifacts associating points, we output a higher-order representation of the visual pattern. Indeed, the capability to provide a segmentation mask of the repeated-instance region, together with the ability to span over different levels, unlocks a wider range of use cases and applications. As a qualitative comparison we also added the latest (and only) deep-learning-based methodology [Lettry et al., 2017] we found. This methodology is only able to find a single instance of a visual pattern, namely the most frequent and most significant with respect to the filter weights. This means that the detection strongly depends on the training set of the CNN backbone, while our algorithm is fully unsupervised and data agnostic.

Quantitative Validation

We compared our method quantitatively against [Liu and Liu, 2013], which constitutes, to the best of our knowledge, the only work able to detect more than one visual pattern. We recreated the experimental settings of the authors by using the Face dataset [Li et al., 2007] as benchmark, achieving 1.00 precision vs. 0.98 of [Liu and Liu, 2013], and 0.77 recall vs. 0.63. We considered a miss on the object retrieval task if more than 20% of a pattern's total area falls outside the ground truth. The parameters used were |C| = 9000, k = 15, r = 30, τ = 5, |P| = 150. We also fixed the window of the Gaussian vote to 11 × 11 pixels throughout all the experiments.
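The 20% miss criterion above can be sketched as follows (an illustrative assumption, not the thesis implementation: pattern and ground-truth regions are given as sets of pixel coordinates):

```python
def is_miss(pattern_pixels, gt_pixels, threshold=0.2):
    """A detection counts as a miss when more than `threshold` (here 20%)
    of the pattern's total area falls outside the ground-truth region."""
    outside = len(pattern_pixels - gt_pixels)
    return outside / len(pattern_pixels) > threshold
```

For example, a pattern with half of its pixels off the annotated region is a miss, while one fully contained in it is a hit.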
Conclusions

With this study we introduced a fast and unsupervised method addressing the problem of finding semantic categories by detecting consistent visual pattern repetitions at a given scale. The proposed pipeline hierarchically detects self-similar regions represented by a segmentation mask. As we demonstrated in the experimental evaluation, our approach retrieves more than one pattern and achieves better performances with respect to competitor methods. We also introduced the concept of semantic levels, endowed with a dedicated dataset and a new metric, to provide other researchers with tools to evaluate the consistency of their approaches.

Acknowledgments

We would like to express our gratitude to Alessandro Torcinovich and Filippo Bergamasco for their suggestions to improve the work. We also thank Mattia Mantoan for his work to produce the dataset labeling.
Chapter 5 Conclusions

In this thesis, our contributions spanned the dissection of continual learning through several structural and data analyses. First, we provided a gentle introduction to the topic of continual learning, starting by highlighting the difference between natural and artificial models. Among the differences, we stressed the importance of time, an essential component for developing lifelong learning machines. Then, we informally introduced the main challenges that continual learning systems must tackle, in particular catastrophic forgetting and the stability-plasticity dilemma. To provide a better intuition about these topics, we gave a visual example of catastrophic forgetting in an autoencoder model, showing how distributional shifts in the subsequent tasks result in the abrupt damage of past knowledge. Later, we moved on by giving a more formal definition of the continual learning settings prominently adopted in the literature. We introduced the notions of class-incremental, task-incremental, and online/offline learning, along with a specification of other common settings in the field. Before moving on to the contributions, we provided a small review of the state-of-the-art literature by describing the main categories under which continual learning methods have been grouped.

Finally, we moved on to the main contributions. First, we introduced a study on the quality/quantity trade-off in rehearsal-based continual learning. Here, we selected one of the most performant baselines, GDumb, and analyzed several compression techniques applied to the replay buffer. We highlighted that the quantity of data is a far more important factor when storing exemplars in the replay buffer, and we did so by considering different compression schemes with extreme rates. Then, we moved on to the second major contribution, which considers Visual Transformers in an incremental setting. Here, besides being one of the first works on visual transformers for continual learning, we provided a surgical investigation of regularization methods for ViTs in the challenging setting of rehearsal-free CL. We compared functional, weight, and attentional regularizations, the latter being a regularization on the matrix of the self-attention mechanism. Attentional regularizations provide comparable performance with respect to the other methods. As a second contribution we also introduced a loss inspired by a method nowadays in vogue (PODNet) and devised an asymmetric variant. We showed that the asymmetric variant achieves more plasticity when applied to different parts of the self-attention mechanism.
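As an illustration only (the exact form of the thesis losses is not reproduced here; the flattened tensor layout and the asymmetry rule are assumptions), an attentional regularization of this kind can be sketched as an L2 distillation between the old and new models' attention weights, with an asymmetric variant that penalizes only attention the new model has lost, leaving it free to attend to new content:

```python
def attn_distill(a_old, a_new, asymmetric=False):
    # a_old, a_new: flattened self-attention weights of the frozen old
    # model and the current model (hypothetical representation).
    diffs = [o - n for o, n in zip(a_old, a_new)]
    if asymmetric:
        # keep only positive gaps: punish forgotten attention while
        # allowing newly grown attention (more plasticity).
        diffs = [max(d, 0.0) for d in diffs]
    return sum(d * d for d in diffs) / len(diffs)
```

In this sketch, the symmetric loss penalizes any drift of the attention matrix, while the asymmetric one constrains only the directions where old attention would be erased.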
Then, we proposed a study on off-the-shelf continual learning exploiting fully pretrained networks, and in particular we proposed a simple baseline composed of a feature extractor and a kNN-like prototype memory. The baseline is crafted to be performant in practical scenarios, achieving optimal results with a memory overhead of a few KBytes. Moreover, we discussed its possible extension to the realm of unsupervised continual learning. We then linked this preliminary discussion with the exploration of visual categories. To do so, we introduced another work tackling unsupervised pattern discovery. In fact, the notion of discovery is naturally included in the notion of lifelong learning: an agent capable of lifelong learning surely should possess the ability to autonomously discover new knowledge.
We did so by introducing a new unsupervised algorithm to perform unsupervised semantic segmentation at different semantic scales.

Further Developments

With the several studies proposed, we want to highlight the directions where it might be most fruitful to investigate further in order to build better Continual Learning agents. A first warning we raised regards the dataset usage to assess the performance of CL algorithms. In particular, in Section 4.1 we see that extreme levels of buffer data resizing still provide good results in rehearsal systems, suggesting that, perhaps, more realistic datasets should be included to devise more useful solutions. This finding is also supported by Section 4.3, which shows that tackling these benchmarks with a pretrained backbone is sufficient to solve continual learning scenarios quasi-optimally on 5 different datasets. This also suggests that pretraining could be a great advantage, in terms of the generalization ability of the model, when building new CL algorithms. To tackle the aforementioned point, the community can focus more on unsupervised continual learning, which is a natural and more challenging extension of the problem. While keeping the same datasets, we can now also leverage pretrained backbones. While appealing on its own, following this line is also greatly encouraged by the fact that there are virtually no works on such a topic.

With the study proposed in Section 4.2 we showed that ViTs are naturally inclined continual learners. We suspect that the lower inductive bias carried by such models might be the key that allows them to perform better in incremental scenarios. On the other side, we see that the results obtained without pretraining have difficulty matching CNN performances (compare the results of Section 4.1 and Section 4.2). This calls for the need to build less data-hungry models, in line with the world's fast-paced data generation. Within Section 4.2 we also proposed a new way to assess Continual Learning methods. We think that the community still lacks a principled way to measure the stability-plasticity trade-off; with our introduction of the two curves, we proposed an initial attempt to monitor the performance of a system. Last but not least, with the work of Section 4.4 we stress that autonomously discovering new patterns should be a core ability of an intelligent system. In fact, if an agent can explore the real world and find hierarchies of knowledge without help, all it has to do to learn incrementally is to store such knowledge in some kind of long-term memory repository, which translates into a compression problem.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Namboodiri.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Class incremental online streaming learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' NeurIPS, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Eden Belouadah, Adrian Popescu, and Ioannis Kanellos.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' A comprehensive study of class incremental learning algorithms for visual tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Neural Networks, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Edward L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Bennett, Marian Cleeves Diamond, David Krech, and Mark Richard Rosen- zweig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Chemical and anatomical plasticity of brain changes in brain through expe- rience, demanded by learning theories, are found in experiments with rats.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Science, 1964.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Alessandro Betti, Marco Gori, Simone Marullo, and Stefano Melacci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Developing constrained neural units over time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In International Joint Conference on Neural Networks, IJCNN.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' IEEE, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Magdalena Biesialska, Katarzyna Biesialska, and Marta R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Costa-juss`a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Continual lifelong learning in natural language processing: A survey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Proceedings of the 28th International Conference on Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' International Com- mittee on Computational Linguistics, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Tom B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Pra- fulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Ziegler, Jeffrey Wu, Clemens Winter, Christo- pher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Language models are few-shot learners.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Confer- ence on Neural Information Processing Systems 2020, NeurIPS, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Pietro Buzzega, Matteo Boschini, Angelo Porrello, Davide Abati, and Simone Calder- ara.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Dark experience for general continual learning: a strong, simple baseline.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: 104 Bibliography BIBLIOGRAPHY Annual Conference on Neural Information Processing Systems 2020, NeurIPS, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Gail A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Carpenter and Stephen Grossberg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' The art of adaptive pattern recognition by a self-organizing neural network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Computer, 1988.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Fabio Cermelli, Massimiliano Mancini, Samuel Rota Bulo, Elisa Ricci, and Barbara Caputo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Modeling the background for incremental learning in semantic segmenta- tion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Conference on Computer Vision and Pattern Recognition (CVPR), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Arslan Chaudhry, Puneet K Dokania, Thalaiyasingam Ajanthan, and Philip HS Torr.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Riemannian walk for incremental learning: Understanding forgetting and intransi- gence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Proceedings of the European Conference on Computer Vision (ECCV), 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Arslan Chaudhry, Marc’Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Efficient lifelong learning with A-GEM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In International Conference on Learning Representations, (ICLR), 2019a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Dokania, Philip H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Torr, and Marc’Aurelio Ranzato.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' On tiny episodic memories in continual learning, 2019b.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Hui Chen, Chao Tan, and Zan Lin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Ensemble of extreme learning machines for multivariate calibration of near-infrared spectroscopy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Xiangning Chen, Cho-Jui Hsieh, and Boqing Gong.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' When vision transformers out- perform resnets without pretraining or strong data augmentations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' ICLR, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Ming-Ming Cheng, Fang-Lue Zhang, Niloy J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Mitra, Xiaolei Huang, and Shi-Min Hu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Repfinder: finding approximately repeated scene elements for image editing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' ACM Trans.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=', 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Riccardo Del Chiaro, Bartlomiej Twardowski, Andrew D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Bagdanov, and Joost van de Weijer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' RATT: recurrent attention to transient tasks for continual image cap- tioning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Ondrej Chum and Jiri Matas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Unsupervised discovery of co-occurrence in sparse high dimensional data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In The Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2010, 2010.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Bibliography 105 Dissecting continual learning: a structural and data analysis Tarin Clanuwat, Mikel Bober-Irizar, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Kitamoto, Alex Lamb, Kazuaki Yamamoto, and David Ha.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Deep learning for classical japanese literature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' ArXiv, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Djork-Arn´e Clevert, Thomas Unterthiner, and Sepp Hochreiter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Fast and accurate deep network learning by exponential linear units (elus).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Yoshua Bengio and Yann LeCun, editors, 4th International Conference on Learning Representations, ICLR, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Cyprien de Masson d’Autume, Sebastian Ruder, Lingpeng Kong, and Dani Yogatama.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Episodic memory in lifelong language learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Hanna M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alch´e-Buc, Emily B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems (NeurIPS), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Matthias Delange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Ales Leonardis, Greg Slabaugh, and Tinne Tuytelaars.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' A continual learning survey: Defying forgetting in classification tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Imagenet: A large-scale hierarchical image database.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Society Conference on Computer Vision and Pattern Recognition (CVPR).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' IEEE, 2009.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' BERT: pre- training of deep bidirectional transformers for language understanding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Association for Com- putational Linguistics, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Prithviraj Dhar, Rajat Vikram Singh, Kuan-Chuan Peng, Ziyan Wu, and Rama Chel- lappa.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Learning without memorizing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' An image is worth 16x16 words: Transformers for image recognition at scale.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In 9th International Conference on Learning Representations, ICLR, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Briar Doty, Stefan Mihalas, Anton Arkhipov, and Alex T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Piet.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Heterogeneous ‘cell types’ can improve performance of deep neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' bioRxiv, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' 106 Bibliography BIBLIOGRAPHY Petr Doubek, Jiri Matas, Michal Perdoch, and Ondrej Chum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Image matching and retrieval by repetitive patterns.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In 20th International Conference on Pattern Recog- nition, ICPR 2010, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Arthur Douillard and Timoth´ee Lesort.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Continuum: Simple management of complex continual learning scenarios, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Arthur Douillard, Matthieu Cord, Charles Ollion, Thomas Robert, and Eduardo Valle.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Podnet: Pooled outputs distillation for small-tasks incremental learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Pro- ceedings of the IEEE European Conference on Computer Vision (ECCV), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Arthur Douillard, Alexandre Ram´e, Guillaume Couairon, and Matthieu Cord.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Dy- tox: Transformers for continual learning with dynamic token expansion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' CoRR, abs/2111.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='11326, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Sayna Ebrahimi, Mohamed Elhoseiny, Trevor Darrell, and Marcus Rohrbach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Uncertainty-guided continual learning with bayesian neural networks.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Inter- national Conference on Learning Representations, (ICLR), 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Spyros Gidaris and Nikos Komodakis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Dynamic few-shot visual learning without forgetting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Conference on Computer Vision and Pattern Recognition, (CVPR).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' IEEE, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Ian J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Courville, and Yoshua Bengio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Generative adversarial nets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In NIPS, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Pankaj Gupta, Yatin Chaudhary, Thomas A.' 
Bibliography — Dissecting continual learning: a structural and data analysis

Runkler, and Hinrich Schütze. Neural topic modeling with continual lifelong learning. In International Conference on Machine Learning (ICML). PMLR, 2020.
Raia Hadsell, D. Rao, Andrei A. Rusu, and Razvan Pascanu. Embracing change: Continual learning in deep neural networks. Trends in Cognitive Sciences, 2020.
Tyler L. Hayes and Christopher Kanan. Lifelong machine learning with deep streaming linear discriminant analysis. In CVPR, 2020.
Tyler L. Hayes, Nathan D. Cahill, and Christopher Kanan. Memory efficient experience replay for streaming learning. In International Conference on Robotics and Automation (ICRA 2019), Montreal, QC, Canada, May 20-24, 2019. IEEE, 2019.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016.
Felix Hill, Adam Santoro, David G. T. Barrett, Ari S. Morcos, and Timothy P. Lillicrap. Learning to make analogies by contrasting abstract relational structure. In 7th International Conference on Learning Representations (ICLR). OpenReview.net, 2019.
Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. ArXiv, 2015.
Sepp Hochreiter. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 1998.
Guillaume Hocquet, Olivier Bichler, and Damien Querlioz. OvA-INN: Continual learning with invertible neural networks. In 2020 International Joint Conference on Neural Networks (IJCNN), 2020.
Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin. Learning a unified classifier incrementally via rebalancing. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Yen-Chang Hsu, Y. Liu, and Z. Kira. Re-evaluating continual learning scenarios: A categorization and case for strong baselines. ArXiv, abs/1810.12488, 2018.
Guang-Bin Huang, Lei Chen, and Chee Kheong Siew. Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Trans. Neural Networks, 2006.
Inbar Huberman and Raanan Fattal. Detecting repeating objects using patch correlation analysis. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Francis R. Bach and David M. Blei, editors, Proceedings of the 32nd International Conference on Machine Learning (ICML), JMLR Workshop and Conference Proceedings. JMLR.org, 2015.
Khurram Javed and Faisal Shafait. Revisiting distillation and incremental classifier learning. In ACCV, 2018.
W. Johnson. Extensions of Lipschitz mappings into Hilbert space. Contemporary Mathematics, 1984.
K. J. Joseph, Jathushan Rajasegaran, Salman Hameed Khan, Fahad Shahbaz Khan, Vineeth N. Balasubramanian, and Ling Shao. Incremental object detection via meta-learning. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021.
Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In Yoshua Bengio and Yann LeCun, editors, 2nd International Conference on Learning Representations (ICLR), 2014.
Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations (ICLR 2017). OpenReview.net, 2017.
James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 2017.
Jeremias Knoblauch, Hisham Husain, and Tom Diethe. Optimal continual learning has perfect memory and is NP-hard. In ICML, 2020.
Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
Ramesh Kumar Lama, Jeonghwan Gwak, Jeong-Seon Park, and Sang-Woong Lee. Diagnosis of Alzheimer's disease based on structural MRI images using a regularized extreme learning machine and PCA features. Journal of Healthcare Engineering, 2017.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
Soochan Lee, Junsoo Ha, Dongsu Zhang, and Gunhee Kim. A neural Dirichlet process mixture model for task-free continual learning. In 8th International Conference on Learning Representations (ICLR), 2020.
Louis Lettry, Michal Perdoch, Kenneth Vanhoey, and Luc Van Gool. Repeated pattern detection using CNN activations. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV 2017), 2017.
Thomas K. Leung and Jitendra Malik. Detecting, localizing and grouping repeated scene elements from an image. In Computer Vision - ECCV'96, 4th European Conference on Computer Vision, Proceedings, Volume I. Springer, 1996.
Duo Li, Guimei Cao, Yunlu Xu, Zhanzhan Cheng, and Yi Niu. Technical report for ICCV 2021 challenge SSLAD-Track3B: Transformers are better continual learners. arXiv preprint arXiv:2201.04924, 2022.
Fei-Fei Li, Robert Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 2007.
Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2017.
Seppo Linnainmaa. Taylor expansion of the accumulated rounding error. BIT Numerical Mathematics, 1976.
Jingchen Liu and Yanxi Liu. GRASP recurring patterns from a single view. In IEEE Conference on Computer Vision and Pattern Recognition, 2013.
Xialei Liu, Chenshen Wu, Mikel Menta, Luis Herranz, Bogdan Raducanu, Andrew D. Bagdanov, Shangling Jui, and Joost van de Weijer. Generative feature replay for class-incremental learning. In Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2020a.
Yanxi Liu, Robert T. Collins, and Yanghai Tsin. A computational model for periodic pattern perception based on frieze and wallpaper groups. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004.
Yaoyao Liu, Anan Liu, Yuting Su, Bernt Schiele, and Qianru Sun. Mnemonics training: Multi-class incremental learning without forgetting. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020b.
Yun Liu, Ming-Ming Cheng, Xiaowei Hu, Jia-Wang Bian, Le Zhang, Xiang Bai, and Jinhui Tang. Richer convolutional features for edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proc. ICCV, 2021.
Vincenzo Lomonaco and Davide Maltoni. CORe50: A new dataset and benchmark for continuous object recognition. In Sergey Levine, Vincent Vanhoucke, and Ken Goldberg, editors, Proceedings of the 1st Annual Conference on Robot Learning, Proceedings of Machine Learning Research. PMLR, 2017.
Vincenzo Lomonaco, Karan Desai, Eugenio Culurciello, and Davide Maltoni. Continual reinforcement learning in 3D non-stationary environments. In Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020.
Vincenzo Lomonaco, Lorenzo Pellegrini, Pau Rodríguez López, Massimo Caccia, Qi She, Yu Chen, Quentin Jodelet, Ruiping Wang, Zheda Mai, David Vázquez, German Ignacio Parisi, Nikhil Churamani, Marc Pickett, Issam H. Laradji, and Davide Maltoni. CVPR 2020 continual learning in computer vision competition: Approaches, results, current challenges and future directions. Artificial Intelligence, 2022.
David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems (NeurIPS), 2017.
Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations (ICLR), 2017.
Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML Workshop on Deep Learning for Audio, Speech and Language Processing, 2013.
Zheda Mai, Ruiwen Li, Jihwan Jeong, David Quispe, Hyunwoo J. Kim, and Scott Sanner. Online continual learning in image classification: An empirical survey. Neurocomputing, 2022.
Arun Mallya and Svetlana Lazebnik. PackNet: Adding multiple tasks to a single network by iterative pruning. In 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Computer Vision Foundation / IEEE Computer Society, 2018.
David McCaffary. Towards continual task learning in artificial neural networks: Current approaches and insights from neuroscience. ArXiv, 2021.
James L. McClelland, Bruce L. McNaughton, and Randall C. O'Reilly. Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 1995.
Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of Learning and Motivation. Elsevier, 1989.
Warren S. McCulloch and Walter Pitts. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 1943.
Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, and Emma Strubell. An empirical investigation of the role of pre-training in lifelong learning. ArXiv, abs/2112.09153, 2021.
Umberto Michieli and Pietro Zanuttigh. Incremental learning techniques for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (ICCV-W), 2019.
Seyed Iman Mirzadeh, Arslan Chaudhry, Dong Yin, Timothy Nguyen, Razvan Pascanu, Dilan Gorur, and Mehrdad Farajtabar. Architecture matters in continual learning. arXiv preprint arXiv:2202.00275, 2022.
Katelyn Morrison, Benjamin Gilby, Colton Lipchak, Adam Mattioli, and Adriana Kovashka.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Exploring corruption robustness: Inductive biases in vision transformers and mlp-mixers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' CoRR, abs/2106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='13122, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Martin Mundt, Yong Won Hong, Iuliia Pliushch, and Visvanathan Ramesh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' A wholis- tic view of continual learning with deep neural networks: Forgotten lessons and the bridge to active and open world learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' ArXiv, abs/2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='01797, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Vinod Nair and Geoffrey E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Hinton.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Rectified linear units improve restricted boltz- mann machines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In ICML, 2010.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Chris Olah, Alexander Mordvintsev, and Ludwig Schubert.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Feature visualization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Distill, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='23915/distill.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='00007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' https://distill.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='pub/2017/feature- visualization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Yuki Ono, Eduard Trulls, Pascal Fua, and Kwang Moo Yi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Lf-net: Learning local features from images.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Samy Bengio, Hanna M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Wallach, Hugo Larochelle, Kristen Grauman, Nicol`o Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31, NeurIPS, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' German Ignacio Parisi, Ronald Kemker, Jose L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Part, Christopher Kanan, and Stefan Wermter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Continual lifelong learning with neural networks: A review.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Neural Networks, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Minwoo Park, Kyle Brocklehurst, Robert T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Collins, and Yanxi Liu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Deformed lattice detection in real-world images using mean-shift belief propagation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Pattern Anal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Mach.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Intell.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=', 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Sayak Paul and Pin-Yu Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Vision transformers are robust learners.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' arXiv preprint arXiv:2105.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='07581, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Jary Pomponi, Simone Scardapane, Vincenzo Lomonaco, and Aurelio Uncini.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Effi- cient continual learning in neural networks with embedding regularization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Neuro- computing, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Mozhgan Pourkeshavarz and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Sabokrou.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Zs-il: Looking back on learned experi- ences for zero-shot incremental learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' ICLR, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' 112 Bibliography BIBLIOGRAPHY Ameya Prabhu, Philip H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Torr, and Puneet K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Dokania.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Gdumb: A simple approach that questions our progress in continual learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision - (ECCV), Lec- ture Notes in Computer Science.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Springer, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' James Pritts, Ondrej Chum, and Jiri Matas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Rectification, and segmentation of coplanar repeated patterns.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Muhammad Naveed Iqbal Qureshi, Beomjun Min, Hang Joon Jo, and Boreom Lee.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Multiclass classification for the differential diagnosis on the adhd subtypes using recursive feature elimination and hierarchical extreme learning machine: structural mri study.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' PloS one, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Maithra Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, and Alexey Dosovitskiy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Do vision transformers see like convolutional neural networks?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' ArXiv, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Jathushan Rajasegaran, Munawar Hayat, Salman H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Khan, Fahad Shahbaz Khan, and Ling Shao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Random path selection for continual learning.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Hanna M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alch´e-Buc, Emily B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Lam- pert.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' icarl: Incremental classifier and representation learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Conference on Computer Vision and Pattern Recognition, (CVPR).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' IEEE, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Maximilian Riesenhuber and Tomaso A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Poggio.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Hierarchical models of object recog- nition in cortex.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Nature Neuroscience, 1999.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Mark Bishop Ring et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Continual learning in reinforcement environments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' PhD thesis, 1994.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Hippolyt Ritter, Aleksandar Botev, and David Barber.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Online structured laplace approximations for overcoming catastrophic forgetting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Advances in Neural Information Processing Systems, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Carlos Rodr´ıguez-Pardo, Sergio Suja, David Pascual, Jorge Lopez-Moreno, and Elena Garces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Automatic extraction and synthesis of regular repeatable patterns.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Com- put.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=', 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Bibliography 113 Dissecting continual learning: a structural and data analysis David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Lillicrap, and Gregory Wayne.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Experience replay for continual learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Hanna M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alch´e-Buc, Emily B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Frank Rosenblatt.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' The perceptron: a probabilistic model for information storage and organization in the brain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Psychological review, 1958.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Efficient content-based sparse attention with routing transformers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Assoc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Com- put.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Linguistics, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' David E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Rumelhart, Geoffrey E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Hinton, and Ronald J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Williams.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Learning represen- tations by back-propagating errors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Nature, 1986.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Bernstein, Alexan- der C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Berg, and Li Fei-Fei.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Imagenet large scale visual recognition challenge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=', 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Andrei A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Rusu, Neil C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Progressive neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' ArXiv, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Sara Sabour, Nicholas Frosst, and Geoffrey E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Hinton.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Dynamic routing between capsules.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Wal- lach, Rob Fergus, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' N.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural In- formation Processing Systems NIPS, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Frederik Schaffalitzky and Andrew Zisserman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Geometric grouping of repeated ele- ments within images.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' In John N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Carter and Mark S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Nixon, editors, Proceedings of the British Machine Vision Conference 1998, BMVC.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' J¨urgen Schmidhuber.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Deep learning in neural networks: An overview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' CoRR, abs/1404.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content='7828, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtAzT4oBgHgl3EQfGvt3/content/2301.01033v1.pdf'} +page_content=' Eli Shechtman and Michal Irani.' 
These simulators claim to provide the closest possible real-world simulation, but their environment objects,
+such as pedestrians and the other vehicles around the agent vehicle, are programmed in advance. They can only
+move along pre-set trajectories, or their movements are determined by random numbers. What happens when all
+environment objects are also controlled by artificial intelligence, so that their behaviors resemble real people or
+the natural reactions of other drivers? This question is a blind spot for most simulation applications, and they
+cannot solve it easily. The Neurorobotics Platform from the TUM team of Prof. Alois Knoll introduces the concepts
+of "Engines" and "Transceiver Functions" to solve this multi-agent problem. This report starts with a short study
+of the Neurorobotics Platform and analyzes the potential of developing a new simulator that achieves the goal of
+true real-world simulation. Based on the NRP-Core platform, this initial development then aims to construct a
+first demo experiment. The report begins with basic knowledge of NRP-Core and its installation, then explains
+the components required for a simulation experiment, and finally details the construction of the autonomous
+driving system, which integrates an object detection function and an autonomous driving control function. The
+report closes with a discussion of the remaining shortcomings of this autonomous driving system and possible
+improvements.
+Keywords— Simulation, Neurorobotics Platform, NRP-Core, Engines, Transceiver Functions, Autonomous Driving,
+Object Detection, PID Trajectory Control
+1 Introduction
+1.1 Motivation
+At present, many different artificial intelligence (AI) algorithms are used for autonomous driving. Some algorithms
+are used to perceive the environment, such as object detection and semantic/instance segmentation.
Some algorithms are
+dedicated to making the best trajectory and control decisions based on the road environment. Others contribute
+to further applications, e.g. path planning and parking. Simulation is the most cost-effective way to develop these
+algorithms before they are actually deployed on vehicles or robots, so the performance of a simulation platform
+directly influences the performance of the AI algorithms developed on it. The present market already offers many
+"real-world" simulation applications, such as CARLA [1] for simulating autonomous driving algorithms, AirSim [2]
+from Microsoft for autonomous vehicles and quadrotors, and PTV Vissim [3] from the German PTV Group for
+flexible traffic simulation.
+Although these simulators are dedicated to "real-world" simulation, they all behave "unreal" in some respects.
+For example, besides the problem of unrealistic 3-D models and environments, these simulators share an obvious
+limitation: the AI algorithms are deployed only on the target experimental subjects, i.e. the vehicles or robots
+under test. The environment subjects, such as other vehicles, motorbikes, and pedestrians, look very close to
+their real counterparts, but they are pre-programmed and follow fixed motion trails. The core problem is that
+most simulators handle only basic information transmission: they transfer just the essential traffic information to
+the agent subject in the simulation, and this transmission is one-way. Could the other subjects in the simulation
+run their own AI algorithms at the same time, so that they can react to the agent's behavior? In the future world
+there will not be only one vehicle running one algorithm from one company; vehicles will
+have to interact with many other agents.
The interaction between different algorithms also feeds back
+into these algorithms, and this feedback is likewise a blind spot for many simulators.
+This large-scale interaction between many agents is the main problem that simulation applications should
+address, and the existing applications have no efficient way to solve it. A simulation platform that truly resembles
+the real world, whose environment is not just a fixed, pre-defined program, and in which the environment objects
+interact objectively with the vehicles running the autonomous driving algorithms under test so that both sides
+influence each other, is an intractable goal for the construction of a simulation platform. The Neurorobotics
+Platform (NRP) from the TUM team of Prof. Alois Knoll provides a potential idea for solving this interaction
+problem. This research project focuses on a preliminary implementation and explores the possibility of solving
+the interaction problem mentioned above.
+arXiv:2301.00089v1 [cs.RO] 31 Dec 2022
+1.2 Neurorobotics Platform (NRP)
+Figure 1.1 The base model of the Neurorobotics Platform (NRP)
+The Neurorobotics Platform [4] is an open-source integrative simulation framework developed by the Chair of
+Robotics, Artificial Intelligence and Real-Time Systems of the Technical University of Munich in the context of
+the Human Brain Project, a FET Flagship funded by the European Commission. The original purpose of the
+platform is to let users choose and test different brain models (ranging from spiking neural networks to deep
+networks) for robots. The platform builds an efficient information transmission framework that lets simulated
+agents interact with their virtual environment.
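The closed loop of Figure 1.1, in which "Engines" advance step by step and "Transceiver Functions" carry data between them, can be illustrated with a small self-contained Python model. All class and function names below are hypothetical; this is a conceptual sketch of the idea, not the real NRP-Core API.

```python
# Conceptual model of the NRP closed loop (hypothetical names, NOT the
# real NRP-Core API): Engines hold their own simulation state and are
# stepped in lockstep; Transceiver Functions copy data between Engines.

class Engine:
    def __init__(self, name):
        self.name = name
        self.datapacks = {}          # data published by this engine

    def step(self, dt):              # advance this engine by one time step
        raise NotImplementedError

class CameraEngine(Engine):
    def step(self, dt):
        self.datapacks["image"] = f"frame@{dt}"   # pretend to grab a frame

class VehicleEngine(Engine):
    def __init__(self, name):
        super().__init__(name)
        self.last_image = None       # filled in by a Transceiver Function

    def step(self, dt):
        # a real engine would compute its controls from self.last_image here
        self.datapacks["pose"] = (0.0, 0.0)

def camera_to_vehicle_tf(camera, vehicle):
    """Transceiver Function: forward the camera image to the vehicle."""
    vehicle.last_image = camera.datapacks.get("image")

def run_loop(engines, tfs, steps, dt=0.01):
    for _ in range(steps):
        for engine in engines:       # synchronous, blocking engine steps
            engine.step(dt)
        for tf in tfs:               # exchange datapacks between engines
            tf()

cam, car = CameraEngine("camera"), VehicleEngine("car")
run_loop([cam, car], [lambda: camera_to_vehicle_tf(cam, car)], steps=3)
print(car.last_image)                # prints "frame@0.01"
```

Adding further participants, such as a weather model or pedestrians, would in this picture simply mean adding more Engine objects and more Transceiver Functions between them, which is the multi-agent idea pursued in this report.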
The new version of NRP, called NRP-Core, introduces a new idea: every participant in the simulation system is regarded as an "Engine", much like an object in C++ or Python. The properties of a simulation participant such as a robot, an autonomous driving car, the weather, or a pedestrian, together with its "behaviors", are completely constructed in its own Engine object. Every participant thus becomes a "real" object that can influence the others in the simulated world, instead of being a fixed, predefined program. The most important transport medium between these engines is the Transceiver Function. It transmits information, for example taking the image from a camera and sending it to the autonomous driving car, and at the same time sends other information to other engines over different transfer protocols such as JSON or the ROS system. The transmission of information is therefore highly real-time, which brings the simulated world very close to the real world and gives the platform high simulation potency: the platform sends the image information to the autonomous driving car, which assesses the situation and makes a rational decision; at the same moment the environment cars, the "drivers", receive the location of the autonomous driving car and make their own decisions, such as driving further or changing velocity and lanes; and all cars are simultaneously influenced by the weather, e.g. on rainy days the braking time of a car grows, which makes decision-making and object detection even more significant.
NRP-core is mostly written in C++, with the Transceiver Function framework relying on Python for better usability. It guarantees a fully deterministic execution of the simulation, provided every simulator used is itself deterministic and works on the basis of controlled progression through time steps.
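The time-stepped, blocking execution model described above can be illustrated with a minimal sketch. The `Engine` class and engine names below are purely illustrative stand-ins, not the actual NRP-Core API; the point is that a synchronous fixed-timestep loop advances every engine by the same step, so wall-clock time is dominated by the slowest engine.

```python
# Toy fixed-timestep simulation loop illustrating NRP-Core-style synchronization.
# Engine names and the Engine class are illustrative, not the real NRP-Core API.

class Engine:
    def __init__(self, name, cost_per_step):
        self.name = name
        self.cost_per_step = cost_per_step  # simulated wall-clock cost of one step
        self.sim_time = 0.0

    def advance(self, dt):
        """Advance the engine's internal simulation by dt seconds."""
        self.sim_time += dt
        return self.cost_per_step  # report how long the step "took"

def run_simulation(engines, dt, steps):
    """Blocking, synchronous loop: each global step waits for every engine."""
    total_wall_time = 0.0
    for _ in range(steps):
        # All calls block, so each step costs as much as the slowest engine.
        total_wall_time += max(e.advance(dt) for e in engines)
    return total_wall_time

engines = [Engine("gazebo", 0.050), Engine("yolo_detector", 0.120), Engine("control", 0.010)]
wall = run_simulation(engines, dt=0.01, steps=100)
print(round(wall, 6))               # dominated by the slowest engine
print(round(engines[0].sim_time, 6))  # all engines share the same simulated time
```

Because every call is blocking, replacing the 0.120 s engine with a faster one is the only way to speed up the loop, which is exactly the "slowest simulator" caveat discussed next.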
Note that event-based simulators may therefore not be suitable for integration in NRP-core (to be analyzed on a case-by-case basis). Communications to and from NRP-core are synchronous, and function calls are blocking; as such, the actual execution time of a simulation based on NRP-core will critically depend on the slowest simulator integrated in it. This feature of the NRP-Core platform is essential for building multiple objects that interact with other agents during the simulation progress, and it keeps the simulation close to the real world.
2 NRP-Core configurations for simulation progress
NRP-Core supports many application scenarios with different simulation demands, and for a specific purpose the NRP-Core model can differ widely. This development of the autonomous driving benchmark focuses on the actually suggested development progress, i.e. on the construction of the simulation application. The details of the operation mechanism of NRP-Core are not discussed in depth in this development documentation; the principles of the operation mechanism can be found on the NRP-Core homepage.
2.1 Installation of NRP-Core and setting environment
For the complete installation, refer to the homepage of the NRP-Core Platform, "Getting Started", under the page "Installation Instructions". This section lists only the requirements for applying the autonomous driving simulator and benchmark.
WARNING: Previous versions of the NRP install forked versions of several libraries, notably NEST and Gazebo. Installing NRP-core on a system where a previous version of NRP is installed is known to cause conflicts. It is therefore strongly recommended not to install both versions at the same time.
Operating system: Ubuntu 20.04 is recommended.
Setting the installation environment: to properly set the environment to run experiments with NRP-core, please make sure that the lines below are added to your ~/.bashrc file.

    # Start setting environment
    export NRP_INSTALL_DIR="/home/${USER}/.local/nrp" # The installation directory, which was given before
    export NRP_DEPS_INSTALL_DIR="/home/${USER}/.local/nrp_deps"
    export PYTHONPATH="${NRP_INSTALL_DIR}"/lib/python3.8/site-packages:"${NRP_DEPS_INSTALL_DIR}"/lib/python3.8/site-packages:$PYTHONPATH
    export LD_LIBRARY_PATH="${NRP_INSTALL_DIR}"/lib:"${NRP_DEPS_INSTALL_DIR}"/lib:${NRP_INSTALL_DIR}/lib/nrp_gazebo_plugins:$LD_LIBRARY_PATH
    export PATH=$PATH:"${NRP_INSTALL_DIR}"/bin:"${NRP_DEPS_INSTALL_DIR}"/bin
    export GAZEBO_PLUGIN_PATH=${NRP_INSTALL_DIR}/lib/nrp_gazebo_plugins:${GAZEBO_PLUGIN_PATH}
    . /usr/share/gazebo-11/setup.sh
    . /opt/ros/noetic/setup.bash
    . ${CATKIN_WS}/devel/setup.bash
    # End of setting environment

Dependency installation:

    # Start of dependencies installation
    # Pistache REST Server
    sudo add-apt-repository ppa:pistache+team/unstable

    # Gazebo repository
    sudo sh -c 'echo "deb http://packages.osrfoundation.org/gazebo/ubuntu-stable `lsb_release -cs` main" > /etc/apt/sources.list.d/gazebo-stable.list'
    wget https://packages.osrfoundation.org/gazebo.key -O - | sudo apt-key add -

    sudo apt update
    sudo apt install git cmake libpistache-dev libboost-python-dev libboost-filesystem-dev libboost-numpy-dev libcurl4-openssl-dev nlohmann-json3-dev libzip-dev cython3 python3-numpy libgrpc++-dev protobuf-compiler-grpc libprotobuf-dev doxygen libgsl-dev libopencv-dev python3-opencv python3-pil python3-pip libgmock-dev

    # required by gazebo engine
    sudo apt install libgazebo11-dev gazebo11 gazebo11-plugin-base

    # Remove flask if it was installed, to ensure it is installed from pip
    sudo apt remove python3-flask python3-flask-cors
    # required by Python engine
    # If you are planning to use The Virtual Brain framework, you will most likely have to use flask version 1.1.4.
    # With flask version 1.1.4, the markupsafe library (included with flask) has to be downgraded to version 2.0.1 to run properly with gunicorn
    # You can install that version with
    # pip install flask==1.1.4 gunicorn markupsafe==2.0.1
    pip install flask gunicorn

    # required by nest-server (which is built and installed along with nrp-core)
    sudo apt install python3-restrictedpython uwsgi-core uwsgi-plugin-python3
    pip install flask_cors mpi4py docopt

    # required by nrp-server, which uses gRPC python bindings
    pip install grpcio-tools pytest psutil docker

    # Required for using docker with ssh
    pip install paramiko

    # ROS; when not needed, jump to the next step

    # Install ROS: follow the installation instructions: http://wiki.ros.org/noetic/Installation/Ubuntu. To enable ros support in nrp, only 'ros-noetic-ros-base' is required.
    # Tell nrpcore where your catkin workspace is located: export a variable CATKIN_WS pointing to an existing catkin workspace root folder. If the variable does not exist, a new catkin workspace will be created at '${HOME}/catkin_ws'.

    # MQTT, if needed: see the homepage of NRP-Core

    # End of dependencies installation

NRP installation:

    # Start of installation
    git clone https://bitbucket.org/hbpneurorobotics/nrp-core.git
    cd nrp-core
    mkdir build
    cd build
    # See the section "Common NRP-core CMake options" in the documentation for additional ways to configure the project with CMake
    cmake .. -DCMAKE_INSTALL_PREFIX="${NRP_INSTALL_DIR}" -DNRP_DEP_CMAKE_INSTALL_PREFIX="${NRP_DEPS_INSTALL_DIR}"
    mkdir -p "${NRP_INSTALL_DIR}"
    # the installation process might take some time, as it downloads and compiles Nest as well.
    # If you haven't installed MQTT libraries, add the ENABLE_MQTT=OFF definition to cmake (-DENABLE_MQTT=OFF).
    make
    make install
    # In case you want to build the documentation (it can then be found in a new doxygen folder):
    make nrp_doxygen
    # End of installation

Common NRP-core CMake options: here is the list of the CMake options that help modify the project configuration (turning the support of some components and features on and off).
• Developers options:
– COVERAGE enables the generation of code coverage reports during testing
– BUILD_RST enables the generation of reStructuredText source files from the Doxygen documentation
• Communication protocols options:
– ENABLE_ROS enables compilation with ROS support;
– ENABLE_MQTT enables compilation with MQTT support.
• ENABLE_SIMULATOR and BUILD_SIMULATOR_ENGINE_SERVER options:
– ENABLE_NEST and BUILD_NEST_ENGINE_SERVER;
– ENABLE_GAZEBO and BUILD_GAZEBO_ENGINE_SERVER.
The ENABLE_SIMULATOR and BUILD_SIMULATOR_ENGINE_SERVER flags allow disabling the compilation of those parts of nrp-core that depend on or install a specific simulator (e.g. Gazebo, NEST).
The expected behavior for each of these pairs of flags is as follows:
• the NRPCoreSim is always built, regardless of any of the flag values.
• if ENABLE_SIMULATOR is set to OFF:
– the related simulator won't be assumed to be installed on the system, i.e. make won't fail if it isn't. It also won't be installed during the compilation process if this possibility is available (as in the case of NEST)
– the engines connected with this simulator won't be built (neither client nor server components)
– tests that would fail if the related simulator is not available won't be built
• if ENABLE_SIMULATOR is set to ON and BUILD_SIMULATOR_ENGINE_SERVER is set to OFF: same as above, but:
– the engine clients connected to this simulator will be built. This means they should not depend on or link to any specific simulator
– the engine server-side components might or might not be built, depending on whether the related simulator is required at compilation time
• if both flags are set to ON, the simulator is assumed to be installed, or it will be installed from source if this option is available. All targets connected with this simulator will be built.
This flag system allows configuring the resulting NRP-Core depending on which simulators are available on the system, both to avoid potential dependency conflicts between simulators and to enforce modularity, opening the possibility of having specific engine servers running on a different machine or inside containers.
2.2 Introduction of basic components of simulation by NRP
The important elements for constructing a simulation example on the NRP platform are: Engines, Transceiver Functions (TF) and Preprocessing Functions (PF), the simulation configuration JSON file, simulation model files, and DataPacks. These are the basic components of the simulation progress. This section lists and explains their definition, content and implementation.
2.2.1 Engine
Engines are a core aspect of the NRP-core framework. They run the actual simulation software (which can be comprised of any number of heterogeneous modules), with the Simulation Loop and TransceiverFunctions merely being a way to synchronize and exchange data between them. The data exchange is carried out through an engine client (see the paragraph below). An Engine can run any type of software, from physics engines to brain simulators. The only requirement is that it must be able to progress through time with fixed-duration time steps.
Several engines are already implemented in NRP-Core:
• NEST: two different implementations that integrate the NEST Simulator into NRP-core.
• Gazebo: engine implementation for the Gazebo physics simulator.
• PySim: engine implementation based on the Python JSON Engine, wrapping different simulators (Mujoco, Opensim, and OpenAI) with a Python API.
• The Virtual Brain: engine implementation based on the Python JSON Engine and the TVB Python API.
These implementations are tied to the specific simulators and are of primary interest to users researching spiking neural networks and the like. The platform also provides the Python JSON Engine: this versatile engine lets users execute a user-defined Python script as an engine server, ensuring synchronization and enabling DataPack data transfer with the Simulation Loop process. It can be used to integrate any simulator with a Python API into an NRP-core experiment.
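The shape of such a user-defined engine script can be sketched as follows. This is a pure-Python stand-in with hypothetical class and method names, not the actual Python JSON Engine API; it only illustrates the initialize / run-loop / shutdown lifecycle that a time-stepped engine must expose.

```python
# Minimal stand-in for a Python-based engine script: initialize once,
# advance in fixed-duration time steps, then shut down. Names are
# illustrative, not the real NRP-core Python JSON Engine API.

class ToyEngineScript:
    def initialize(self):
        self.time = 0.0
        self.state = {"position": 0.0, "velocity": 1.0}

    def run_loop(self, timestep):
        # One fixed-duration step of the wrapped simulator.
        self.state["position"] += self.state["velocity"] * timestep
        self.time += timestep

    def shutdown(self):
        self.state = None

engine = ToyEngineScript()
engine.initialize()
for _ in range(10):
    engine.run_loop(0.02)   # 10 steps of 20 ms
final_time = round(engine.time, 6)
final_position = round(engine.state["position"], 6)
engine.shutdown()
print(final_time, final_position)  # 0.2 0.2
```

The Simulation Loop would call the equivalent of `run_loop` once per global time step, which is what keeps a Python-wrapped simulator synchronized with the other engines.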
This feature lets users develop experiment agents modularly in the constructed simulation world and flexibly manage several objects with different behaviors and characters.
2.2.2 DataPack and construction format
The carrier of information transported between engines, which lets engines communicate with each other, is the DataPack. NRP supports three types of DataPack, all of them simple objects that wrap arbitrary data structures: the JSON DataPack, the Protobuf DataPack and the ROS msg DataPack. They provide the necessary abstract interface, which is understood by all components of NRP-Core, while still allowing data to be passed in various formats. A DataPack is also an important feature and property of a specific Engine, meaning the parameters and data format of a specific DataPack are declared in the Engine (for an example see section 3.4.2).
A DataPack consists of two parts:
• DataPack ID: allows unique identification of the object.
• DataPack data: the data stored by the DataPack, which can in principle be of any type.
DataPacks are mainly used by Transceiver Functions to relay data between engines. Each engine type is designed to accept only DataPacks of a certain type and structure.
Every DataPack contains a DataPackIdentifier, which uniquely identifies the DataPack object and allows routing of the data between transceiver functions, engine clients and engine servers. A DataPack identifier consists of three fields:
• name - the name of the DataPack. It must be unique.
• type - string representation of the DataPack data type. This field will most probably be of no concern for users. It is set and used internally and is not in human-readable form.
• engine name - the name of the engine to which the DataPack is bound.
A DataPack is a template class with a single template parameter, which specifies the type of data contained by the DataPack.
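The identifier-plus-data structure just described can be sketched in a few lines of Python. This is a hypothetical stand-in mirroring the described fields and the `isEmpty` behavior, not the actual NRP-core classes (which are C++ templates with Python bindings).

```python
# Toy model of a DataPack: an identifier (name, type, engine name) plus
# optional data. Illustrative only; not the real NRP-core implementation.

class DataPackIdentifier:
    def __init__(self, name, engine_name, type_name=""):
        self.name = name                # unique DataPack name
        self.engine_name = engine_name  # engine the DataPack is bound to
        self.type = type_name           # set internally, not human-readable

class ToyDataPack:
    def __init__(self, name, engine_name, data=None):
        self.id = DataPackIdentifier(name, engine_name)
        self._data = data

    def isEmpty(self):
        return self._data is None

    @property
    def data(self):
        # Accessing the data of an empty DataPack raises, as in NRP-core.
        if self.isEmpty():
            raise RuntimeError("attempted to access data of an empty DataPack")
        return self._data

full = ToyDataPack("camera", "gazebo", data={"width": 736, "height": 480})
empty = ToyDataPack("lidar", "gazebo")
print(full.data["width"])  # 736
print(empty.isEmpty())     # True
```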
DataPack data can in principle be of any type. In practice there are some limitations, since DataPacks, which are C++ objects, must be accessible from TransceiverFunctions, which are written in Python. Therefore the only DataPack data types that can actually be used in NRP-core are those for which Python bindings are provided. It is possible for a DataPack to contain no data. This is useful, for example, when an Engine is asked for a certain DataPack but is not able to provide it. In this case, the Engine can return an empty DataPack, which contains only a DataPack identifier and no data. Attempting to retrieve the data from an empty DataPack results in an exception. A method "isEmpty" is provided to check whether a DataPack is empty before attempting to access its data:

    if(not datapack.isEmpty()):
        # It's safe to get the data
        print(datapack.data)
    else:
        # This will raise an exception
        print(datapack.data)

• The format for getting a DataPack from a particular Engine:

    # Declare a datapack with name "datapack_name" from engine "engine_name" as input using the @EngineDataPack decorator
    # The transceiver function must accept an argument with the same name as "keyword" in the datapack decorator

    @EngineDataPack(keyword="datapack", id=DataPackIdentifier("datapack_name", "engine_name"))
    @TransceiverFunction("engine_name")
    def transceiver_function(datapack):
        print(datapack.data)

    # Multiple input datapacks from different engines can be declared
    @EngineDataPack(keyword="datapack1", id=DataPackIdentifier("datapack_name1", "engine_name1"))
    @EngineDataPack(keyword="datapack2", id=DataPackIdentifier("datapack_name2", "engine_name2"))
    @TransceiverFunction("engine_name1")
    def transceiver_function(datapack1, datapack2):
        print(datapack1.data)
        print(datapack2.data)

PS: The details of the two Decorators of
TransceiverFunction are described below in section 2.2.3.
• The format for setting information in a DataPack and sending it to a particular Engine:

    # NRP-Core expects transceiver functions to always return a list of datapacks
    @TransceiverFunction("engine_name")
    def transceiver_function():
        datapack = JsonDataPack("datapack_name", "engine_name")
        return [ datapack ]

    # Multiple datapacks can be returned
    @TransceiverFunction("engine_name")
    def transceiver_function():
        datapack1 = JsonDataPack("datapack_name1", "engine_name")
        datapack2 = JsonDataPack("datapack_name2", "engine_name")
        return [ datapack1, datapack2 ]

2.2.3 Transceiver Function and Preprocessing Function
1. Transceiver Function
Transceiver Functions are user-defined Python functions that take the role of transmitting DataPacks between engines. They are used in the architecture to convert, transform or combine data from one or multiple engines and relay it to another. The definition of a Transceiver Function must use a decorator before the user-defined transceiver function, which sends the DataPack to the target Engine:

    @TransceiverFunction("engine_name")

To request DataPacks from engines, additional decorators can be prepended to the Transceiver Function, with the form (attention: the receive decorator must come before @TransceiverFunction):

    @EngineDataPack(keyword_datapack, id_datapack)

• keyword_datapack: user-defined name for the DataPack; this keyword is used as input to the Transceiver Function.
• id_datapack: the id of the DataPack received from a particular Engine, "DataPack ID" = "DataPack name" + "Engine name" (examples see 2.2.2)
2. Preprocessing Function
A Preprocessing Function is very similar to a Transceiver Function but has a different usage. Preprocessing Functions are introduced to optimize expensive computations on DataPacks attached to a single engine.
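The decorator mechanics shown above can be imitated with a tiny pure-Python stand-in: the decorators attach metadata to the function and register it, and a dispatcher injects the requested DataPacks by keyword. All names and signatures here are hypothetical simplifications, not NRP-core's implementation.

```python
# Toy imitation of @TransceiverFunction / @EngineDataPack: decorators collect
# metadata and a tiny dispatcher injects the requested datapacks by keyword.
# Illustrative only; not the real NRP-core implementation.

REGISTRY = []  # registered transceiver functions

def TransceiverFunction(target_engine):
    def wrap(func):
        func.target_engine = target_engine
        func.inputs = getattr(func, "inputs", [])
        REGISTRY.append(func)
        return func
    return wrap

def EngineDataPack(keyword, datapack_name, engine_name):
    def wrap(func):
        # Prepending this decorator adds one more requested input datapack.
        func.inputs = getattr(func, "inputs", []) + [(keyword, datapack_name, engine_name)]
        return func
    return wrap

@EngineDataPack(keyword="camera", datapack_name="smart_camera", engine_name="gazebo")
@TransceiverFunction("yolo_engine")
def detect_tf(camera):
    # A TF returns a list of datapacks; here a plain dict stands in for one.
    return [{"engine": "yolo_engine", "boxes": len(camera["pixels"])}]

def run_loop_once(datapack_cache):
    """Dispatch every registered TF with its requested datapacks injected."""
    outputs = []
    for tf in REGISTRY:
        kwargs = {kw: datapack_cache[(name, eng)] for kw, name, eng in tf.inputs}
        outputs.extend(tf(**kwargs))
    return outputs

cache = {("smart_camera", "gazebo"): {"pixels": [0, 1, 2]}}
result = run_loop_once(cache)
print(result)  # [{'engine': 'yolo_engine', 'boxes': 3}]
```

The keyword in the datapack decorator must match the function's argument name for exactly this reason: the dispatcher builds the call by keyword.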
In some cases it might be necessary to apply the same operations on a particular DataPack in multiple Transceiver Functions; an example of this might be applying a filter to a DataPack containing an image from a physics simulator. To allow this operation to execute just once and let other TFs access the processed DataPack data, Preprocessing Functions (PFs) are introduced. They show two main differences with respect to Transceiver Functions:
• their output DataPacks are not sent to the corresponding Engines; they are kept in a local DataPack cache and can be used as input in Transceiver Functions
• PFs can take input DataPacks only from the Engine they are linked to
The format of a Preprocessing Function is similar to that of a Transceiver Function:

    @PreprocessingFunction("engine_name")
    @PreprocessedDataPack(keyword_datapack, id_datapack)

The decorators "@PreprocessingFunction" and "@PreprocessedDataPack" must be used in Preprocessing Functions. Since the output of a Preprocessing Function is stored in the local cache and does not need to be processed on the Engine server side, a Preprocessing Function can return any type of DataPack without restrictions.
2.2.4 Simulation configuration JSON file
The details of the configuration of any simulation with Engines and Transceiver Functions are stored in a single JSON file. This file contains the engine objects and Transceiver Functions, together with the parameters necessary to initialize and execute a simulation. It is usually written in the "example_simulation.json" file.
The JSON format used here is a JSON schema, which is highly readable and offers capabilities similar to XML Schema. Its composability and inheritance allow the simulation to use reference keywords to define an agent and to validate inheritance by referring to other schemas.
That means the same engine base can create several agents or objects at the same time, distinguished only by their identifying IDs.
1. Simulation parameters
For details, see appendix Table A.1: Simulation configuration parameter.
2. Example form

    {
        "SimulationName": "example_simulation",
        "SimulationDescription": "Launch two python engines.",
        "SimulationTimeout": 1,
        "EngineConfigs":
        [
            {
                "EngineType": "python_json",
                "EngineName": "python_1",
                "PythonFileName": "engine_1.py"
            },
            {
                "EngineType": "python_json",
                "EngineName": "python_2",
                "PythonFileName": "engine_2.py"
            }
        ],
        "DataPackProcessingFunctions":
        [
            {
                "Name": "tf_1",
                "FileName": "tf_1.py"
            }
        ]
    }

• EngineConfigs: this section lists all the engines participating in the simulation progress. Some important parameters should be declared:
– EngineType: the engine type used for this engine, e.g. gazebo engine or python JSON engine
– EngineName: user-defined identification name for the engine
– Other parameters: these should be declared according to the engine type (details see appendix Table A.2: Engine Base Parameter)
∗ Python JSON engine: "PythonFileName" - the base Python script referenced by the engine
∗ Gazebo engine: see the corresponding section
• DataPackProcessingFunctions: this section lists all the Transceiver Functions used in the simulation progress. Usually two parameters should be declared:
– Name: user-defined identification name for the Transceiver Function
– FileName: the base Python script file used to validate the Transceiver Function
• Other simulation parameters: see section 2.2.4 – 1.
Simulation parameters
• Launching a simulation: the simulation configuration JSON file is also the launch file, and a simulation experiment is started with the NRP command:

    NRPCoreSim -c user_defined_simulation_config.json

Tip: a user-defined simulation folder can contain many differently named configuration JSON files at the same time. This is very useful for configuring which engines and Transceiver Functions the user wants to launch and test. To start the target simulation experiment, just choose the corresponding configuration file.
2.2.5 Simulation model file
In this experiment for autonomous driving on the NRP platform, the Gazebo physics simulator [5] is the world description simulator. The simulated world can be constructed with an "SDF" file based on the XML format, which describes all the necessary information about the 3D models in one file, e.g. sunlight, environment, friction, wind, landform, robots, vehicles, and other physics objects. This file can describe in detail the static or dynamic information of a robot, relative position and motion information, the declaration of sensor or control plugins, and so on. Gazebo is closely related to the ROS system and provides simulation components for ROS, so the ROS documentation describes many similar details about the construction of SDF files [6].
The components of the simulation world are described with XML-format labels, which also construct the dependence relationships of these components:
• World label

    <world name="default">
        ........
    </world>

All the components and their labels should be placed under the <world> label.
• Model labels

    <model name="...">
        <pose>0 0 0 0 -0 0</pose>
        <link name="...">
            .........
        </link>
    </model>

The description of a model sits under the <model> label and, importantly, if the user wants to use a plugin such as a control plugin or a sensor plugin (camera or lidar), this plugin label must be set under the corresponding <link> label.
The <link> label describes the model physics features such as <inertial>, <collision>, <visual>, and so on.
• 3-D models – mesh files
Gazebo requires that mesh files be formatted as STL, Collada, or OBJ, with Collada and OBJ being the preferred formats. Below are the file suffixes for the corresponding mesh file formats:
Collada - .dae
OBJ - .obj
STL - .stl
Tip: the Collada and OBJ file formats allow users to attach materials to the meshes. Use this mechanism to improve the visual appearance of meshes.
A mesh file should be declared under the label that needs it, such as <visual> or <collision>, with the layer structure <geometry> - <mesh> - <uri> (the URI can be an absolute or relative file path):

    <geometry>
        <mesh>
            <uri>xxxx/xxxx.dae</uri>
        </mesh>
    </geometry>

3 Simulation Construction on NRP-Core
Based on the steps for configuring a simulation on the NRP-Core platform, the autonomous driving benchmark can now be implemented with the components mentioned above, from 3D models to communication mechanisms. This section first introduces the requirements of the autonomous driving application, then analyzes the corresponding components and their functions, and finally presents the concrete implementation of these requirements.
In addition, this project researches the possibility of achieving modular development for multi-agents on the NRP platform, compares it with other existing and widely used systems, and analyzes the simulation performance according to the progress results.
3.1 Analysis of requirements for autonomous driving application
An application for testing the performance of autonomous driving algorithms can refer to different aspects, because autonomous driving integrates different algorithms such as computer vision, object detection, decision-making and trajectory planning, vehicle control, and simultaneous localization and mapping.
The concept and final goal of the application is to build a real-world simulation that integrates multiple agents, different algorithms, and a corresponding evaluation system for the performance of the autonomous driving vehicle. But this first requires many available, mature, and feasible algorithms; second, the construction of the world's 3D models is a big project; and last, the evaluation system depends on the successful operation of the simulation. So the initial construction of the application focuses on the base model of the communication mechanism, to first achieve communication between a single agent and the object-detection algorithm under the progress of NRP-Core. As for a vehicle control algorithm that reacts logically to the object detection and generates feasible control commands, this project skips that step and instead prescribes a specific trajectory along which the vehicle moves.
Requirements of the implementation:
• Construction of the base model frame for communication between the Gazebo simulator, the object-detection algorithm, and the control unit
• Selection of a feasible object-detection algorithm
• A simple control system for autonomous movement of a high-accuracy physical vehicle model
3.2 Object detection algorithm and YOLOv5 Detector Python class
According to the above analysis of the application requirements, an appropriate existing object detection algorithm should be chosen as the example with which to verify the communication mechanism of the NRP platform and, at the same time, to optimize performance.
Surveying existing object detection algorithms, from the basic AlexNet for image classification [7] and convolutional neural networks (CNNs) for image recognition [8], through the optimized ResNet [9] and the SSD multi-box detector [10], to the YOLOv5 neural network [11]: YOLOv5 offers high object-detection performance, and its efficient handling of frame images in real time also makes it meaningful as a reference for testing other object-detection algorithms. Considering the requirements of autonomous driving, YOLOv5 is therefore a suitable choice as the experimental object-detection algorithm to integrate into the NRP platform.
Table notes:
• All checkpoints are trained to 300 epochs with default settings and hyperparameters.
• mAPval values are for single-model single-scale on the COCO val2017 dataset. Reproduce with: python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65
• Speed is averaged over COCO val images using an AWS p3.2xlarge instance. NMS times (~1 ms/img) are not included. Reproduce with: python val.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45
• TTA (Test Time Augmentation) includes reflection and scale augmentations. Reproduce with: python val.py --data coco.yaml --img 1536 --iou 0.7 --augment
Requirements and environment for YOLOv5:
• Quick link to the YOLOv5 documentation: YOLOv5 Docs [12]
• Environment requirements: Python >= 3.7.0 and PyTorch [13] >= 1.7
• Integration of the originally trained YOLOv5 neural network parameters; the main backbone has no changes compared to the initial version
Based on the original executable Python file "detect.py", another Python file "Yolov5Detector.py" with a self-defined Yolov5Detector class interface is written in the "YOLOv5" package. To use YOLOv5, the main progress should instantiate the YOLOv5 class and then use the warm-up function "detectorWarmUp()" to initialize the neural network.
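The conf/iou thresholds in the reproduction commands above belong to YOLO's post-processing stage. As a hedged illustration of what that stage does (a simplified pure-Python sketch, not the actual YOLOv5 code), confidence filtering and greedy non-maximum suppression over boxes in [x1, y1, x2, y2, score] format look like this:

```python
# Simplified confidence filter + greedy NMS over boxes in [x1, y1, x2, y2, score]
# format. Illustrative sketch of YOLO-style post-processing, not YOLOv5's code.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, conf_thres=0.25, iou_thres=0.45):
    """Keep high-confidence boxes, greedily suppressing heavy overlaps."""
    candidates = sorted((b for b in boxes if b[4] >= conf_thres),
                        key=lambda b: b[4], reverse=True)
    kept = []
    for box in candidates:
        if all(iou(box, k) < iou_thres for k in kept):
            kept.append(box)
    return kept

detections = [
    [10, 10, 50, 50, 0.90],      # car
    [12, 12, 52, 52, 0.80],      # overlapping duplicate of the same car
    [100, 100, 140, 150, 0.60],  # pedestrian
    [0, 0, 5, 5, 0.10],          # below confidence threshold
]
print(nms(detections))  # two boxes survive
```

Lowering conf_thres (as in the mAP reproduction command, 0.001) keeps more candidate boxes for evaluation, while the higher 0.25 default is meant for deployment.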
"detectImage()" is the function that sends the frame image to the main predict-detection function and finally returns the detected image, with bounding boxes, in NumPy format.
3.3 3D models for the Gazebo simulation world
Given the performance of Gazebo, the base environment world should not use a large map. Based on tests with maps of the Garching area in different sizes, the recommended environment world model encircles the area of the Parkring in Garching-Hochbrück. This map model is generated from high-accuracy satellite data and is very similar to the original location. During the simulation progress the experimental vehicle moves around the main road of the Parkring.
The experimental vehicle is likewise a highly detailed vehicle model, with independently controllable steering for the diversion control of the two front wheels, free front and rear wheels, and a high-definition camera. To rebuild these models, the belonging relationship of each model must be declared in the SDF file. There, these models, including base chassis, steerings, wheels, and camera, are "links" of the car "model" under the <model> label, each with a user-defined unique name. Attention: the names of models and links must be specific; no two objects may share a name.
(a) Parkring Garching Hochbrück high-accuracy map model (b) Experimental vehicle for simulation
Figure 3.1
The following shows the base architecture frame describing the physical relationship of the whole vehicle in the SDF file:

    <model name="...">
        <link name="base_link">
            .......
        </link>
        <link name="eye_vision_camera">
            .......
        </link>
        <joint name="..." type="...">
            <parent>base_link</parent>
            <child>eye_vision_camera</child>
            ......
        </joint>
        <joint name="..." type="...">
            .......
            <parent>base_link</parent>
            <child>front_left_steering_link</child>
            .......
        </joint>
        ......
    </model>

1.
Description of labels [6]:
• <link> - the corresponding model as a component of the entire model
• <joint> - description of the relationship between link components
• type - the type of the joint:
– revolute - a hinge joint that rotates along the axis and has a limited range specified by the upper and lower limits.
– continuous - a continuous hinge joint that rotates around the axis and has no upper and lower limits.
– prismatic - a sliding joint that slides along the axis, and has a limited range specified by the upper and lower limits.
– fixed - this is not a joint because it cannot move. All degrees of freedom are locked. This type of joint does not require the <axis>, <calibration>, <dynamics>, or <limit> labels.
– floating - this joint allows motion in all 6 degrees of freedom.
– planar - this joint allows motion in a plane perpendicular to the axis.
• <parent> / <child> - secondary labels as elements of the <joint> label; they declare the belonging relationship of the referred "links"
The mesh file "vehicle_body.dae" (shown in Fig. 3.1b as the blue car body) is used for the base chassis of the experimental vehicle under the <visual> label. The mesh file "wheel.dae" is used for the rotatable vehicle wheels under <visual> and the other three similar link labels. For the steering models, <cylinder> labels are used to simply generate cylinders of length 0.01 m and radius 0.1 m as the joint elements between wheels and chassis.
2. Sensor label:
To activate the camera function in the Gazebo simulator, the camera model must declare, under the "camera link" label, a new secondary sensor label <sensor> with "name" and "type=camera" elements. The detailed construction of the camera sensor is shown in the following script:

    <sensor name="camera" type="camera">
        <pose>0 0 0.132 0 -0.174 0</pose>
        <topic>/smart/camera</topic>
        <camera>
            <horizontal_fov>1.57</horizontal_fov>
            <image>
                <width>736</width>
                <height>480</height>
            </image>
            <clip>
                <near>0.1</near>
                <far>100</far>
            </clip>
            <noise>
                <type>gaussian</type>
                <mean>0</mean>
                <stddev>0.007</stddev>
            </noise>
        </camera>
        <always_on>1</always_on>
        <update_rate>30</update_rate>
        <visualize>1</visualize>
    </sensor>

• <image> - this label defines the camera resolution ratio, which is taken as the size of the frame image sent to the YOLO detector engine.
According to the requirements of the YOLO detection algorithm, the camera width and height should be set to integral multiples of 32.

3.4 Construction of Engines and Transceiver Functions

(Figure 3.2: the autonomous driving system on the NRP.)

The whole project is organized as an experiment on the NRP platform; as such, the whole autonomous driving benchmark package lives under the "nrp-core" path in the examples folder. Following the NRP components introduced earlier for a simulation experiment, the application is developed modularly according to the requirements of an autonomous driving benchmark application. The overall system is shown in Fig. 3.2. The simulation is built around two branches:

• A closed loop that obtains the vehicle's location from the Gazebo engine via Gazebo DataPacks (Protobuf DataPacks), sends it to the vehicle control engine, and sends the resulting joint control commands back to the Gazebo engine.
• An open loop that obtains camera data from the Gazebo engine, sends it to the YOLO detector engine, and finally uses OpenCV to show the detected frame image in a monitor window.

3.4.1 Gazebo plugins

Before the different kinds of information can be acquired, the corresponding plugins must be declared in the SDF file. These plugin labels tell Gazebo which information and parameters should be sent, received, and assigned. A set of plugins is provided to integrate Gazebo into an NRP-Core simulation.
NRPGazeboCommunicationPlugin registers the engine with the SimulationManager and handles control requests for advancing the Gazebo simulation or shutting it down; its use is mandatory in order to run the engine. Two implementations of the Gazebo engine are provided: one based on JSON over REST and one on Protobuf over gRPC. The latter performs much better and is recommended. The gRPC implementation uses Protobuf objects to encapsulate the data exchanged between the engine and TFs, whereas the JSON implementation uses nlohmann::json objects. Apart from this, both engines are very similar in configuration and behavior; the rest of this documentation implicitly refers to the gRPC implementation, although in most cases the JSON implementation shows no differences. The plugins below are likewise based on Protobuf over gRPC. Four plugins are applied in the SDF model/world file (in the listings, tags lost in extraction are reconstructed and "..." marks omitted content):

• World communication plugin – NRPGazeboGrpcCommunicationPlugin
This is the main communication plugin; it sets up a gRPC server and waits for NRP commands. It must be declared under the <world> label in the SDF file:

    <world name="...">
      ...
      <plugin name="..." filename="NRPGazeboGrpcCommunicationPlugin.so"/>
      ...
    </world>

• Activation of the camera sensor plugin – NRPGazeboGrpcCameraPlugin
This plugin adds a GazeboCameraDataPack datapack. In the SDF file the plugin is named "smart_camera" (user-defined); TransceiverFunctions can access the corresponding information under this name. It must be declared as a <plugin> label inside the camera <sensor> label:

    <sensor name="camera" type="camera">
      ...
      <plugin name="smart_camera" filename="NRPGazeboGrpcCameraPlugin.so"/>
      ...
    </sensor>

• Joint control and messages – NRPGazeboGrpcJointPlugin
This plugin registers GazeboJointDataPack DataPacks. Only joints explicitly named in the plugin are registered and made controllable from NRP; each joint's name must be unique and must be declared once again inside the plugin.
In contrast to the other plugins described above and below, when using NRPGazeboGrpcJointPlugin, DataPacks can be used to set a target state for the referenced joint; the plugin integrates a PID controller that can be tuned per joint for better control performance. It must be declared under the corresponding <model> label, at the same level as the <link> labels. Four joints are controlled here: the rear left and right wheel joints and the front left and right steering joints. The PID parameters, chosen after small tests with the physical model of the experimental vehicle in Gazebo, are set per joint inside the plugin (the attribute layout follows the NRP examples; the numeric values were lost in extraction):

    <model name="smart_car">
      ...
      <plugin name="smart_car_joint_plugin" filename="NRPGazeboGrpcJointPlugin.so">
        <rear_left_wheel_joint P="..." I="..." D="..." Type="velocity"/>
        <rear_right_wheel_joint P="..." I="..." D="..." Type="velocity"/>
        <front_left_steering_joint P="..." I="..." D="..." Type="position"/>
        <front_right_steering_joint P="..." I="..." D="..." Type="position"/>
      </plugin>
      ...
    </model>

Attention: Gazebo supports two target types, Position and Velocity. For the rear left and right wheels the type "Velocity" is recommended, and for the front left and right steering the type "Position", because the rear wheels are best controlled by velocity while the steering is described by a turning angle.

• Gazebo link information – NRPGazeboGrpcLinkPlugin
This plugin registers a GazeboLinkDataPack for each link of the experimental vehicle. Like the sensor plugin, it must be declared under the <model> label at the same level as the <link> labels, and only once:

    <model name="smart_car">
      ...
      <plugin name="smart_car_link_plugin" filename="NRPGazeboGrpcLinkPlugin.so"/>
      ...
    </model>

3.4.2 State Transceiver Function "state_tf.py"

The state transceiver function acquires the location information from the Gazebo engine and transmits it to the vehicle control engine, which computes the next control commands.
The vehicle's location coordinates are received via the corresponding DataPack from Gazebo. This DataPack is already encapsulated in NRP; the decoder only needs to indicate which link's information should be loaded into the DataPack:

    @EngineDataPack(keyword='state_gazebo',
                    id=DataPackIdentifier('smart_car_link_plugin::base_link', 'gazebo'))
    @TransceiverFunction("car_ctl_engine")
    def car_control(state_gazebo):

The location coordinate used in the experiment is that of the base chassis "base_link", addressed with the C++-style scope notation together with the plugin name declared in the SDF file. The received DataPack, with the user-defined keyword "state_gazebo", is passed into the transceiver function "car_control()".

Attention: to guarantee that link information is received from Gazebo, it is recommended to add the following declaration at the top of the script, so that NRP can communicate correctly with Gazebo:

    from nrp_core.data.nrp_protobuf import GazeboLinkDataPack

The link-information DataPack in NRP is called GazeboLinkDataPack; its attributes are listed in Table 3.1. In this project the "position" and "rotation" information is selected and written into the JSON DataPack defined by the "car_ctl_engine" engine, which is finally returned to that engine. The "JsonDataPack" function fetches the DataPack form defined by the other engine, and the corresponding parameters are assigned from the information received from Gazebo:

    car_state = JsonDataPack("state_location", "car_ctl_engine")

    car_state.data['location_x'] = state_gazebo.data.position[0]
    car_state.data['location_y'] = state_gazebo.data.position[1]
    car_state.data['qtn_x'] = state_gazebo.data.rotation[0]
    car_state.data['qtn_y'] = state_gazebo.data.rotation[1]
    car_state.data['qtn_z'] = state_gazebo.data.rotation[2]
    car_state.data['qtn_w'] = state_gazebo.data.rotation[3]

Tip: the z-direction coordinate is not necessary.
So only the x- and y-direction coordinates are included in the DataPack, which keeps the JSON DataPack smaller and makes transmission more efficient.

    Attribute   Description                    Python Type                     C Type
    pos         Link Position                  numpy.array(3, numpy.float32)   std::array<float,3>
    rot         Link Rotation as quaternion    numpy.array(4, numpy.float32)   std::array<float,4>
    lin_vel     Link Linear Velocity           numpy.array(3, numpy.float32)   std::array<float,3>
    ang_vel     Link Angular Velocity          numpy.array(3, numpy.float32)   std::array<float,3>

    Table 3.1 GazeboLinkDataPack attributes.

Tip: the rotation information from Gazebo is a quaternion, and its four parameters are ordered "x, y, z, w".

3.4.3 Vehicle Control Engine "car_ctl_engine.py"

The vehicle control engine is written in the form of a Python JSON engine. The structure of a Python JSON engine resembles a Python class file, including attributes such as parameters or initialization, and functions. The class must declare that it inherits from "EngineScript" so that NRP recognizes the file as a Python JSON engine to compute and execute. A Python JSON engine can thus be divided into three main blocks: def initialize(self), def runLoop(self, timestep_ns), and def shutdown(self).

• The initialize block defines the initial parameters and functions for the simulation. In this block, the DataPacks belonging to the specific engine should also be defined, using the "self._registerDataPack()" and "self._setDataPack()" functions:

    self._registerDataPack("actors")
    self._setDataPack("actors", {"angular_L": 0, "angular_R": 0,
                                 "linear_L": 0, "linear_R": 0})
    self._registerDataPack("state_location")
    self._setDataPack("state_location", {"location_x": 0, "location_y": 0,
                                         "qtn_x": 0, "qtn_y": 0,
                                         "qtn_z": 0, "qtn_w": 0})

– _registerDataPack(): registers the user-defined DataPack with the corresponding engine.
– _setDataPack(): given the name of a registered DataPack, sets the parameters, form, and values of that DataPack.

The generated actor control commands and the location coordinates of the vehicle are properties of DataPacks belonging to the "car_ctl_engine" engine.

• The runLoop block is the main block, looped continuously during the simulation; any computation that depends on time and must be updated continuously belongs here. The "car_ctl_engine" engine always fetches the latest information from the Gazebo engine with the "self._getDataPack()" function:

    state = self._getDataPack("state_location")

– _getDataPack(): given the user-defined name of the DataPack.
Attention: the name must match the name under which the transceiver function sends the chosen DataPack back to the engine.

After the control commands for the vehicle have been computed, "_setDataPack()" is called once again to store the command information in the corresponding "actors" DataPack, where it waits for a transceiver function to pick it up:

    self._setDataPack("actors", {"angular_L": steerL_angle, "angular_R": steerR_angle,
                                 "linear_L": rearL_omiga, "linear_R": rearR_omiga})

• The shutdown block is only called when the simulation is shutting down or when the engine raises an error during execution.

3.4.4 Package for Euler-Angle-Quaternion Transform and Trajectory

• Euler-angle and quaternion transform
The rotation information received from Gazebo is a quaternion. It has to be converted into Euler angles to conveniently compute the desired steering angle relative to the pre-defined trajectory. This package is called "euler_from_quaternion.py" and is imported into the "car_ctl_engine" engine.

• Trajectory and computation of the target-relative steering angle
The pre-defined trajectory consists of many waypoint coordinates at equal proportional spacing.
By comparing the present location coordinate with the target coordinate, the package obtains the distance and steering angle needed to decide whether the vehicle has arrived at the target. If the vehicle comes within a radius of 0.8 m of the target location point, it is considered to have reached the present destination, and the index jumps to the next destination coordinate, up to the final destination. This package is called "relateAngle_computation.py".

3.4.5 Actors "Motor" Setting Transceiver Function "motor_set_tf.py"

This transceiver function is a communication medium similar to the state transceiver function, but the direction of the data is now from the "car_ctl_engine" engine to the Gazebo engine. The data acquired from the "car_ctl_engine" engine is the DataPack "actors", with the keyword "actors":

    @EngineDataPack(keyword='actors',
                    id=DataPackIdentifier('actors', 'car_ctl_engine'))
    @TransceiverFunction("gazebo")
    def car_control(actors):

The DataPacks for the Gazebo joints must also be created in this transceiver function with the "GazeboJointDataPack()" function.
This function is provided specifically for controlling Gazebo joints; its parameters are the corresponding joint name (scoped with the NRPGazeboGrpcJointPlugin plugin name declared in the SDF file) and the target engine ("gazebo"). Attention: each joint must be registered as its own joint DataPack:

    rear_left_wheel_joint = GazeboJointDataPack("smart_car_joint_plugin::rear_left_wheel_joint", "gazebo")
    rear_right_wheel_joint = GazeboJointDataPack("smart_car_joint_plugin::rear_right_wheel_joint", "gazebo")
    front_left_steering_joint = GazeboJointDataPack("smart_car_joint_plugin::front_left_steering_joint", "gazebo")
    front_right_steering_joint = GazeboJointDataPack("smart_car_joint_plugin::front_right_steering_joint", "gazebo")

The joint control DataPack is GazeboJointDataPack; its attributes are listed in Table 3.2:

    Attribute   Description                       Python Type   C Type
    position    Joint angle position (in rad)     float         float
    velocity    Joint angle velocity (in rad/s)   float         float
    effort      Joint angle effort (in N)         float         float

    Table 3.2 GazeboJointDataPack attributes.

Attention: to guarantee that joint information is sent to Gazebo, it is recommended to add the following declaration at the top of the script:

    from nrp_core.data.nrp_protobuf import GazeboJointDataPack

3.4.6 Camera Frame-Image Transceiver Function "camera_tf.py"

The camera frame-image transceiver function acquires the single frame image gathered by Gazebo's internal camera plugin and sends it to the YOLO v5 engine "yolo_detector". The image is received via the camera DataPack from Gazebo, called "GazeboCameraDataPack".
To get the data, the decorator must declare the corresponding sensor name, indicate the "gazebo" engine, and assign a new keyword for the following transceiver function:

    @EngineDataPack(keyword='camera',
                    id=DataPackIdentifier('smart_camera::camera', 'gazebo'))
    @TransceiverFunction("yolo_detector")
    def detect_img(camera):

Attention: to guarantee that camera information is received from Gazebo, it is recommended to add the following declaration at the top of the script to import GazeboCameraDataPack:

    from nrp_core.data.nrp_protobuf import GazeboCameraDataPack

The received image information consists of four parameters: height, width, depth, and image data. The attributes of GazeboCameraDataPack are listed in Table 3.3:

    Attribute      Description                                Python Type                      C Type
    image_height   Camera image height                        uint32                           uint32
    image_width    Camera image width                         uint32                           uint32
    image_depth    Camera image depth (bytes per pixel)       uint8                            uint32
    image_data     Camera image data (1-D array of pixels)    numpy.array(image_height *
                                                              image_width * image_depth,
                                                              numpy.uint8)                     std::vector

    Table 3.3 GazeboCameraDataPack attributes.

The image data received from Gazebo is a 1-D array of pixels in unsigned-int-8 form, with the 3 channel values of each pixel in sequence.
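As an illustration of that layout (a toy frame, not the project's code): each pixel contributes three consecutive bytes, so height, width, and depth are enough to recover the image:

```python
import numpy as np

# Toy 2x2 RGB "frame" in the flat byte layout described above:
# 3 bytes per pixel, pixels in row-major order.
height, width, depth = 2, 2, 3
raw = bytes(range(height * width * depth))  # stand-in for image_data

flat = np.frombuffer(raw, np.uint8)           # 1-D array of pixel bytes
frame = flat.reshape((height, width, depth))  # back to H x W x C

assert frame.shape == (2, 2, 3)
assert frame[1, 0, 2] == 8  # row 1, column 0, third channel
```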
So this transceiver function pre-processes the data with NumPy's "frombuffer()" function, which transforms the 1-D byte array into NumPy form:

    imgData = np.frombuffer(trans_imgData_bytes, np.uint8)

Finally, the JSON DataPack of the YOLO v5 engine is created, all information is set in the DataPack, and it is returned to the YOLO v5 engine:

    processed_image = JsonDataPack("camera_img", "yolo_detector")

    processed_image.data['c_imageHeight'] = trans_imgHeight
    processed_image.data['c_imageWidth'] = trans_imgWidth
    processed_image.data['current_image_frame'] = imgData

3.4.7 YOLO v5 Engine for Object Detection "yolo_detector_engine.py"

The YOLO v5 engine acquires the camera frame image from Gazebo through the camera transceiver function and detects objects in the current frame image; in the end, the result is shown in a separate window via the OpenCV package. The YOLO v5 engine is also based on the Python JSON engine model and is similar to the vehicle control engine of Section 3.4.3. Its structure is divided into the same three main blocks, with an additional step to import the YOLO v5 package.

• Initialization of the engine, establishing the "camera_img" DataPack and creating the YOLO v5 detector object with its specific preparation step "detectorWarmUp()":

    self._registerDataPack("camera_img")
    self._setDataPack("camera_img", {"c_imageHeight": 0, "c_imageWidth": 0,
                                     "current_image_frame": [240, 320, 3]})
    self.image_np = 0

    self.detector = Yolov5.Yolov5Detector()
    stride, names, pt, jit, onnx, engine, imgsz, device = self.detector.detectorInit()
    self.detector.detectorWarmUp()

• In the main loop, the first step is to acquire the camera image with the "_getDataPack()" function. The image data extracted from the JSON DataPack of the camera transceiver function has again become a 1-D "list", so a further step is needed to reshape the image data into the form required by OpenCV.
The first step is to convert the 1-D array into a NumPy ndarray and reshape it according to the acquired height and width information. OpenCV's default image format is "BGR", while the image from Gazebo is "RGB", so an extra step converts the "RGB"-shaped NumPy ndarray [14]. Finally, the NumPy array-shaped image and the OpenCV-shaped image are sent together into the detect function, which returns an OpenCV-shaped image with object bounding boxes; this ndarray can be displayed directly with OpenCV's window function:

    # Image conversion
    img_frame = np.array(img_list, dtype=np.uint8)
    cv_image = img_frame.reshape((img_height, img_width, 3))
    cv_image = cv_image[:, :, ::-1] - np.zeros_like(cv_image)  # RGB -> BGR, forced into a new array
    np_image = cv_image.transpose(2, 0, 1)

    # Image detection by YOLO v5
    cv_ImgRet, detect, _ = self.detector.detectImage(np_image, cv_image, needProcess=True)

    # Display of the detected image through OpenCV
    cv2.imshow('detected image', cv_ImgRet)
    cv2.waitKey(1)

4 Simulation Result and Analysis of Performance

(Figure 4.1: object detection by YOLO v5 on the NRP platform; right: another frame.)

The final goal of the autonomous driving benchmark platform is a real-world simulation platform on which different AI algorithms integrated into vehicles can be trained, researched, tested, and validated, then benchmarked and evaluated so that the algorithms can be adjusted and eventually installed on a real vehicle. This project, "Autonomous Driving Simulator and Benchmark on Neurorobotics Platform", is a basic, tentative concept and foundation for researching the feasibility of a multi-agent simulator on the NRP-Core platform. With the single vehicle agent constructed above, the autonomous driving simulation experiment has been finished.
This section discusses the results and gives suggestions based on the performance of the simulation on the NRP-Core platform and the Gazebo simulator.

4.1 Simulation Results for Object Detection and Autonomous Driving

4.1.1 Object Detection through YOLOv5 on NRP

Object detection uses the visual camera of the Gazebo simulator and the YOLO v5 algorithm; NRP-Core is the transmission medium between Gazebo and the YOLO v5 detector. The simulation result is shown in Fig. 4.1.

Regarding detection quality, the result meets the standard and performs well: most of the objects in the camera frame image are detected, although in some frames detected objects are unstable and temporarily become "undetected". Although most objects are correctly detected with a high confidence coefficient (e.g., between 80% and 93% for the person), there are a few detection errors: flowering shrubs are detected as a car or a potted plant, a bush is detected as an umbrella, and the bus in front of the vehicle is detected as a suitcase. Finally, even though YOLO works on the NRP platform, the performance is not smooth: the frame rate in the Gazebo simulator is very low, around 10-13 frames per second, and in more complex situations only about 5 frames per second. This makes the simulation in Gazebo very slow and gives it a stuttering feel, and the larger the camera size and resolution, the worse the stutter becomes.
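For reference, the geometry behind the trajectory following evaluated in the next subsection (described in Sec. 3.4.4 but not listed in this report) can be sketched in a few lines; the helper names here are ours, not those of the project's euler_from_quaternion.py or relateAngle_computation.py:

```python
import math

def yaw_from_quaternion(x, y, z, w):
    """Extract the heading (yaw, in rad) from an x, y, z, w quaternion."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

def relative_steering(pos, quat, target, arrive_radius=0.8):
    """Distance and steering angle from the current pose to a target waypoint.

    Returns (arrived, distance, angle); the angle is relative to the
    vehicle's current heading, wrapped to [-pi, pi]. The 0.8 m arrival
    radius is the value used in the project.
    """
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    angle = math.atan2(dy, dx) - yaw_from_quaternion(*quat)
    angle = (angle + math.pi) % (2.0 * math.pi) - math.pi
    return dist < arrive_radius, dist, angle
```

With the identity quaternion (0, 0, 0, 1) the vehicle faces along +x, so a waypoint at (1, 1) yields a relative steering angle of pi/4.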
4.1.2 Autonomous Driving along a Pre-defined Trajectory

Autonomous driving along a pre-defined trajectory works well; the simulation runs smoothly, and the FPS (frames per second) stays between 20 and 40, which is within the tolerance of a real-world simulation. Part of the experimental vehicle's trajectory is shown in Fig. 4.2; the vehicle can drive around the Parkring and complete a full circle. In the original conception of the experiment, the vehicle would make control decisions based on the detection results, accelerating, braking, or turning to evade obstacles. Since no suitable autonomous driving algorithm is available for this project yet, a pre-defined trajectory consisting of many point coordinates is used instead; the speed of the vehicle is fixed, and a PID controller is used to achieve the simulated autonomous driving.

All 3-D models are proportional to the real size of the objects. After many tests with maps of different sizes, the size of the Parkring turned out to be close to the limit of Gazebo, even though the complexity of the map is not high. For larger maps the FPS drops noticeably, and the simulation eventually stutters and feels detached.

(Figure 4.2: simulation trajectory of autonomous driving.)

4.1.3 Multi-Engine Simulation

The final experiment starts the YOLO v5 engine and the autonomous driving control engine together. The experiments above load only one engine each and perform relatively well; the goal of this project, however, is also to investigate the possibility of multi-agent simulation. The multi-engine simulation does work: the YOLO v5 engine detects the image and shows it in a window while, at the same time, the vehicle drives automatically along the trajectory.
But the simulation performance is poor: the FPS only stays between 9 and 11, the vehicle in Gazebo moves very slowly and unevenly, and the simulation time deviates enormously from real time.

4.2 Analysis of Simulation Performance and Discussion

4.2.1 YOLOv5 Detection Rate and Accuracy

Most of the objects near the vehicle in the camera's field of view are detected with high confidence, but some errors also appear: some objects are detected as the wrong class, some far objects are detected, and some obvious close objects are not. The reasons fall into three aspects:

1. The integrated YOLO v5 algorithm is the original version; it is not aimed at the specific purpose of this autonomous driving project and has not been trained for this specific use. Its network parameters and object classes are the original ones, and no project-specific dataset was used, which creates a considerable gap between the detected result and the expected performance. This explains the detection errors described in Section 4.1.1.

2. The accuracy and realism of the 3-D models and the environment. Object detection depends strongly on the quality of the input image; here quality does not mean resolution but the "realism" of the objects in the image. The original YOLO v5 was trained on real-world images, but the camera images from Gazebo are far removed from real-world images. The 3-D models and environment in the Gazebo simulator are comparatively rough, almost cartoon-like, and differ greatly from real-world objects in lighting, surface texture and reflection, and model accuracy.
For example, in Gazebo the bus has such poor texture and reflection that it looks like a black box and is hard to recognize; the YOLO engine actually detected it as a suitcase. The environment in Gazebo is also not built with much detail: the shrubs and bushes on the roadside have a rough appearance with coarse triangles and obvious polygon shapes. Such artifacts cause substantial mistakes and degrade the accuracy of the algorithms under test.

(Figure 4.3: distance between a real-world image and the virtual camera image.)

3. The properties of the Gazebo simulator itself. Gazebo is perhaps best suited for small-scene simulations, such as a room, a fuel station, or a factory. Compared with other simulators on the market, such as Unity or Unreal, the advantage of Gazebo is the quick start-up for reproducing a situation and environment. But the upper limit of Gazebo's rendering quality is not very close to the real world; one immediately recognizes the simulation as virtual, which strongly affects the training of object-detection algorithms. Building virtual worlds in Gazebo is also difficult and requires supporting applications such as Blender [15]; even if the world looks very realistic in Blender, after the transfer to Gazebo the rendering quality becomes poor.

In fact, although the detection shows some mistakes and errors, the overall result and performance are in line with the forecast that the YOLO v5 algorithm performs excellently.

4.2.2 Multi-Engine Operation and the Non-smooth Simulation Phenomenon

The simulation with the YOLO engine loaded, alone or together with the other engines, shows poor performance in the movement of the vehicle and a low overall FPS, whereas the simulation with only the vehicle control engine loaded works well and runs smoothly.
A comparison experiment shows that the main reason for the poor performance is the underlying transmission mechanism between Python JSON engines on the NRP platform. In the simulation with only the vehicle control engine loaded, transmission from Gazebo uses the Protobuf-gRPC protocol and transmission back to Gazebo uses the JSON protocol, but the transmitted data consists only of control commands such as "linear velocity" and "angular velocity"; these take little transmission capacity, so the JSON protocol carries a negligible penalty compared with Protobuf. The image transmission from Gazebo to the transceiver function is likewise based on Protobuf-gRPC. The transmission of an image from the transceiver function to the YOLO engine via the JSON protocol, however, is very slow, because an image is hundreds of times more data; following the simulation loop in NRP, this creates a block in the simulation process and forces the system to wait for the image transfer to finish. The transfer efficiency of the JSON protocol is simply too low for real-time use and becomes the choke point of the transmission; according to the tests, only reducing the camera resolution makes the simulation speed acceptable.

4.3 Improvement Advice and Prospects

The autonomous driving simulator and application on NRP-Core achieve the first goal of building a concept and foundation for multi-agent simulation. At the same time, the model is still imperfect and has several shortcomings to be improved. The possibility of a full real-world simulator on NRP-Core was also discussed; NRP-Core has great potential to achieve complete simulation and online cooperation with other platforms.
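The JSON overhead discussed above is easy to quantify independently of NRP: serializing a camera frame as a JSON array inflates every pixel byte into decimal text plus separators, several times the raw size for the 736x480 frame used in this project:

```python
import json

import numpy as np

# One 736x480 RGB frame, matching the project's camera settings;
# the uniform pixel value 127 is arbitrary.
frame = np.full((480, 736, 3), 127, dtype=np.uint8)

raw_size = frame.nbytes                                # payload size as raw bytes
json_size = len(json.dumps(frame.flatten().tolist()))  # payload size as a JSON array

print(raw_size, json_size)  # the JSON form is several times larger
```

This back-of-the-envelope check supports the choice of a binary protocol for image transport.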
Some directions and advice for the further development of this application on NRP follow.

4.3.1 Unhindered Simulation with a Different Communication Protocol

As mentioned above, the problem with JSON-based communication is that the simulation is currently not smooth and performs poorly with the YOLO engine, while transmission via the Protobuf protocol, as used between Gazebo and the transceiver functions, performs far beyond expectations compared with the JSON protocol. The development group of NRP-Core has been integrating the Protobuf-gRPC [16] communication mechanism into the NRP-Core platform to solve the big-data transmission problem. To use YOLO or other object-detection engines, it is therefore recommended to replace the existing communication protocol with Protobuf-gRPC. Protobuf is a free and open-source, cross-platform data format for serializing structured data, developed by Google; see the official website for details [16].

4.3.2 Selection of a Base Simulator with Better Performance

Because of the limits of Gazebo's performance and feature set, many applications are hard to realize in Gazebo, such as weather and its changes, and the accuracy and realism of the 3-D models are limited as well. High-accuracy models make the load on Gazebo heavier because of the simulator's dated optimization. There are in fact many excellent simulators that also provide application development packages which shorten the development cycle, such as Unity3D [17] or the Unreal Engine [18]. Within the autonomous driving simulator and benchmark team there is an application demo on the Unity3D simulator, and Fig. 4.4 shows the difference between Gazebo and Unity3D.
Construction and simulation in Unity3D have far better, near-real-world rendering quality than Gazebo, and the simulation FPS stays above 30 or even 60. For the YOLOv5 detection results, following the analysis in Section 4.2.1, Unity3D outperforms the Gazebo simulator thanks to more precise 3-D models and better rendering quality (see Fig. 4.5). As the base simulator and world renderer for further development, Unity3D or another game engine is therefore recommended. NRP-Core will in fact release a new version that integrates interfaces to Unity3D and can use the Protobuf protocol, ensuring better performance for real-world simulation.

4.3.3 Comparison with Other Communication Systems and Frameworks

Many communication frameworks and systems are widely used in academia and industry for robot development, above all ROS (Robot Operating System), which already has many applications. ROS is widely used for robot development with different algorithms: detection and computer vision, SLAM (Simultaneous Localization and Mapping), motion control, and so on. ROS provides relatively mature and stable methods for transmitting the necessary data from sensors to the robot's algorithms and sending the corresponding control commands to the robot body or actuators. The reason NRP-Core was chosen as the communication system nevertheless lies in its concepts of engines and transceiver functions.
Compared with ROS and other frameworks, the NRP platform has many advantages: it is very easy to build multiple agents in a simulation and to conveniently add them to or remove them from the simulation configuration; the management of information is easier to follow than in the ROS topic system; and the transmission of information is theoretically more efficient and more modular, while the platform can still run ROS in parallel as an additional transmission method to match and adapt to other systems or simulations. From this viewpoint, the NRP platform generalizes data transmission and extends the boundaries of robot development, making development more modular and efficient. ROS can also realize joint multi-agent simulation, but it is inconvenient to manage on the basis of the "topic" system; at present ROS is better suited to a single agent and its simulation environment. As mentioned before, a genuinely interacting environment is not easy to realize, but NRP-Core has the potential, because it can run the ROS system at the same time and let agents developed on ROS easily join the simulation. This makes further development on the NRP-Core platform worthwhile.

(a) Sunny (b) Foggy (c) Raining (d) Snowy
Figure 4.4 Construction of the simulation world in Unity3D with the weather application

(a) Detection by YOLOv5 on Gazebo (b) Detection by YOLOv5 on Unity3D
Figure 4.5 Comparison of the detection results on different platforms

5 Conclusion and Epilogue
This project focuses on the first construction of a basic framework on the Neurorobotics Platform for an autonomous driving simulator and benchmark. Most of the functions, including the template of the autonomous driving function and the object-detection functions, have been realized. The benchmark part remains open, because there are no suitable standards yet and its further development is a large project in its own right, regarded as future work for completing the application.
This project started with research into the basic characteristics needed to build a simulation experiment on the NRP-Core platform. The requirements for constructing the simulation were then listed, and each necessary component and object of NRP-Core was given basic attention and explanation. The next step, following the NRP-Core framework, was the construction of the autonomous driving simulator application: first establishing the physical model of the vehicle and the corresponding environment in the SDF file, then building the "closed loop" of autonomous driving based on PID control along a pre-defined trajectory, and finally the "open loop" of object detection based on the YOLOv5 algorithm, successfully reaching the goal of displaying the detected current camera frame in a window operated as a camera monitor. Finally, the current problems and points for improvement were listed and discussed in this development document.

At the same time, many problems remain to be optimized and solved. At present the simulation application can only be regarded as research into the feasibility of multi-agent simulation. The performance of the scripts leaves much room for improvement, and it is recommended to select a high-performance simulator as the carrier of the real-world simulation. The NRP-Core platform has shown great potential for constructing a simulation world in which every object has its own interacting function, together with high efficiency in controlling and managing the whole simulation project. In conclusion, the NRP-Core platform has great potential for achieving a multi-agent simulation world.

References
[1] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. CARLA: An open urban driving simulator.
In Proceedings of the 1st Annual Conference on Robot Learning, pages 1–16, 2017.
[2] Shital Shah, Debadeepta Dey, Chris Lovett, and Ashish Kapoor. AirSim: High-fidelity visual and physical simulation for autonomous vehicles. In Field and Service Robotics, 2017.
[3] PTV Group. PTV Vissim. https://www.ptvgroup.com/en/solutionsproducts/ptv-vissim/.
[4] Human Brain Project. Neurorobotics Platform. https://neurorobotics.net/.
[5] Nathan Koenig and Andrew Howard. Design and use paradigms for Gazebo, an open-source multi-robot simulator. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566), volume 3, pages 2149–2154. IEEE, 2004.
[6] ROS Wiki. urdf/XML. https://wiki.ros.org/urdf/XML.
[7] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90, 2017.
[8] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[10] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: Single shot multibox detector. In European Conference on Computer Vision, pages 21–37. Springer, 2016.
[11] G. Jocher, K. Nishimura, T. Mineeva, and R. Vilarino. YOLOv5 by Ultralytics. Available at: https://github.com/ultralytics/yolov5, 2020.
[12] YOLOv5 documentation. https://docs.ultralytics.com/.
[13] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019.
[14] G. Bradski. The OpenCV Library. Dr. Dobb's Journal of Software Tools, 2000.
[15] Blender Online Community. Blender - a 3D modelling and rendering package. Blender Foundation, Stichting Blender Foundation, Amsterdam, 2018.
[16] Kenton Varda. Protocol buffers: Google's data interchange format. Technical report, Google, June 2008.
[17] Unity Technologies. Real-time 3D tools and more. https://unity.com/.
[18] Epic Games. Unreal Engine. https://www.unrealengine.com/.

A Appendix

Name | Description | Type | Default | Array | Values
SimulationLoop | Type of simulation loop used in the experiment | enum | "FTILoop" | | "FTILoop", "EventLoop"
SimulationTimeout | Experiment timeout (in seconds). It refers to simulation time | integer | 0 | |
SimulationTimestep | Time in seconds the simulation advances in each Simulation Loop. It refers to simulation time | number | 0.01 | |
ProcessLauncherType | ProcessLauncher type to be used for launching engine processes | string | Basic | |
EngineConfigs | Engines that will be started in the experiment | EngineBase | | X |
DataPackProcessor | Framework used to process and relay datapack data between engines. Available options are the TF framework (tf) and Computation Graph (cg) | enum | "tf" | | "tf", "cg"
DataPackProcessingFunctions | Transceiver and Preprocessing functions that will be used in the experiment | TransceiverFunction | | X |
StatusFunction | Status Function that can be used to exchange data between NRP Python Client and Engines | StatusFunction | | |
ComputationalGraph | List of filenames defining the ComputationalGraph that will be used in the experiment | string | | X |
EventLoopTimeout | Event loop timeout (in seconds). 0 means no timeout. If not specified 'SimulationTimeout' is used instead | integer | 0 | |
EventLoopTimestep | Time in seconds the event loop advances in each loop. If not specified 'SimulationTimestep' is used instead | number | 0.01 | |
ExternalProcesses | Additional processes that will be started in the experiment | ProcessLauncher | | X |
ConnectROS | If this parameter is present a ROS node is started by NRPCoreSim | ROSNode | | |
ConnectMQTT | If this parameter is present an MQTT client is instantiated and connected | MQTTClient | | |
Table A.1 Simulation configuration

Name | Description | Type | Default | Required | Array
EngineName | Name of the engine | string | | X |
EngineType | Engine type. Used by EngineLauncherManager to select the correct engine launcher | string | | X |
EngineProcCmd | Engine Process Launch command | string | | |
EngineProcStartParams | Engine Process Start Parameters | string | [ ] | | X
EngineEnvParams | Engine Process Environment Parameters | string | [ ] | | X
EngineLaunchCommand | LaunchCommand with parameters that will be used to launch the engine process | object | "LaunchType": "BasicFork" | |
EngineTimestep | Engine Timestep in seconds | number | 0.01 | |
EngineCommandTimeout | Engine Timeout (in seconds). It tells how long to wait for the completion of the engine runStep. 0 or negative values are interpreted as no timeout | number | 0.0 | |
Table A.2 Engine Base Parameter

diff --git a/FNAyT4oBgHgl3EQfSffh/content/tmp_files/load_file.txt b/FNAyT4oBgHgl3EQfSffh/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..6e4bd7fa300f6cd61c84e8e99a641b448b48e0ed
--- /dev/null
+++ b/FNAyT4oBgHgl3EQfSffh/content/tmp_files/load_file.txt
@@ -0,0 +1,934 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf,len=933

Chair of Robotics, Artificial Intelligence and Real-Time Systems
TUM School of Computation, Information and Technology
Technical University of Munich

Autonomous Driving Simulator based on Neurorobotics Platform
Wei Cao, Liguo Zhou, Yuhong Huang, and Alois Knoll
Chair of Robotics, Artificial Intelligence and Real-Time Systems, Technical University of Munich
Corresponding author: liguo.zhou@tum.de

Abstract — There are many artificial intelligence algorithms for autonomous driving on the present market, but directly installing these algorithms on vehicles is unrealistic and expensive. At the same time, many of these algorithms need an environment in which to train and optimize. Simulation is a valuable and meaningful solution with both training and testing functions, and it can be said that simulation is a critical link in the autonomous driving world.
There are also many different simulation applications and systems from companies and academia, such as SVL and CARLA. These simulators advertise the closest possible real-world simulation, but their environment objects, such as pedestrians and other vehicles around the agent vehicle, are pre-programmed: they can only move along preset trajectories, or their movements are determined by random numbers. What happens when all environmental objects are also driven by artificial intelligence, or when their behaviors resemble real people or the natural reactions of other drivers? This question is a blind spot for most simulation applications, and they cannot easily solve it. The Neurorobotics Platform from the TUM team of Prof. Alois Knoll introduces the ideas of "Engines" and "Transceiver Functions" to solve this multi-agent problem. This report starts with some research on the Neurorobotics Platform and analyzes the potential of developing a new simulator to achieve the goal of true real-world simulation. Then, based on the NRP-Core platform, this initial development aims to construct a first demo experiment. The report begins with basic knowledge of NRP-Core and its installation, then focuses on the explanation of the necessary components of a simulation experiment, and finally details the construction of the autonomous driving system, which integrates an object-detection function and an autonomous driving control function. At the end, the existing disadvantages and possible improvements of this autonomous driving system are discussed.

Keywords — Simulation, Neurorobotics Platform, NRP-Core, Engines, Transceiver Functions, Autonomous Driving, Object Detection, PID Trajectory Control

1 Introduction

1.1 Motivation
At present, there are many different artificial intelligence (AI) algorithms used for autonomous driving.
Some algorithms are used to perceive the environment, such as object detection and semantic/instance segmentation. Some are dedicated to making the best trajectory and control decisions based on the road environment. Others contribute to further applications, e.g. path planning and parking. Simulation is the most cost-effective way to develop these algorithms before they are actually deployed to vehicles or robots, so the performance of a simulation platform influences the performance of the AI algorithms. On the present market there are already many different "real-world" simulation applications, such as CARLA [1] for simulating autonomous driving algorithms, AirSim [2] from Microsoft for autonomous vehicles and quadrotors, and PTV Vissim [3] from the German PTV Group for flexible traffic simulation. Although these simulators are dedicated to "real-world" simulation, they all exhibit more or less "unreal" behavior in some respects. Besides the problem of unrealistic 3-D models and environments, they share an obvious feature: the AI algorithms are deployed only on the target experimental subject, a vehicle or robot, while the environment, other vehicles, motorbikes, and pedestrians, looks very close to the real one but is in fact pre-programmed with fixed motion trails. The core problem is that most of them provide only basic, one-way information transmission: they merely transfer the essential traffic information to the agent subject in the simulation. Given this situation, could the other subjects in the simulation run their own AI algorithms at the same time, so that they can react to the agent's behavior?
arXiv:2301.00089v1 [cs.RO] 31 Dec 2022

In the future world there will not be only one vehicle running one algorithm from one company; vehicles will have to interact with many other agents. The influence that this interaction between different algorithms has on the algorithms themselves is also a blind spot for many simulators. This large-scale interaction between many agents is the main problem these applications should pay attention to, and the existing applications have no efficient way to solve it. A simulation platform that is truly like the real world, whose environment is not just a fixed pre-defined program and whose environment objects can interact objectively with the vehicles running the autonomous driving algorithms under test and influence each other: this goal is an intractable problem for the construction of a simulation platform. The Neurorobotics Platform (NRP) from the TUM team of Prof. Alois Knoll provides a potential idea for solving this interaction problem. This research project focuses on a preliminary implementation and explores the possibility of solving the previously mentioned interaction problem.

1.2 Neurorobotics Platform (NRP)

Figure 1.1 The base model of the Neurorobotics Platform (NRP)

The Neurorobotics Platform [4] is an open-source integrative simulation framework developed by the group of the Chair of Robotics, Artificial Intelligence and Real-Time Systems of the Technical University of Munich in the context of the Human Brain Project, a FET Flagship funded by the European Commission. The basic goal of this platform is to enable choosing and testing different brain models (ranging from spiking neural networks to deep networks) for robots. The platform builds an efficient information transmission framework that lets simulated agents interact with their virtual environment.
The new version of the NRP, called NRP-Core, provides a new idea: it regards every participant in the simulation system as an "Engine", much like an object in C++/Python. The properties of a simulation participant, such as a robot, an autonomous driving car, the weather, or a pedestrian, together with its "behaviors", are constructed entirely within its own Engine object. All participants thus become "real" objects that can influence each other in the simulation world instead of being fixed, predefined programs. The NRP platform is the transport medium between these Engines, through the so-called Transceiver Functions. It transmits information, such as an image from a camera, to the autonomous driving car, and at the same time sends other information to other Engines over different transfer protocols such as JSON or the ROS system. The transmission of information is therefore highly real-time, which brings the simulation world very close to the real world and gives it high simulation potency. For example, the platform sends image information to the autonomous driving car, which evaluates the situation and makes a rational decision; at the same moment, the environment cars or "drivers" receive the location of the autonomous driving car and make their own decisions, such as driving on or changing speed and lanes. These cars are in turn influenced by the weather: on rainy days the braking time of a car becomes longer, which makes decision making and object detection even more significant.
NRP-Core is mostly written in C++, with the Transceiver Function framework relying on Python for better usability. It guarantees a fully deterministic execution of the simulation, provided every simulator used is itself deterministic and works on the basis of controlled progression through time steps. Users should thus take note that event-based simulators may not be suitable for integration in NRP-Core (to be analyzed on a case-by-case basis). Communications to and from NRP-Core are synchronous, and function calls are blocking; as such, the actual execution time of a simulation based on NRP-Core will critically depend on the slowest simulator integrated therein. These features of NRP-Core are essential for building multiple objects that interact with other agents during the simulation, bringing the simulation close to the real world.

2 NRP-Core configurations for simulation progress

NRP-Core has many application scenarios for the different demands of simulation situations, and for a specific purpose the model of NRP-Core can differ widely. This development of the autonomous driving benchmark focuses on the actual suggested development process. It concentrates on the construction of the simulation application; the details of the operation mechanism of NRP-Core are not discussed in depth in this development documentation, and the principles of the operation mechanism can be found on the NRP-Core homepage.

2.1 Installation of NRP-Core and setting the environment
For the complete installation, refer to the homepage of the NRP-Core platform under "Getting Started", page "Installation Instructions". This section lists only the requirements for running the autonomous driving simulator and benchmark.

WARNING: Previous versions of the NRP install forked versions of several libraries, notably NEST and Gazebo.
Installing NRP-Core on a system where a previous version of the NRP is installed is known to cause conflicts; it is strongly recommended not to have both installed at the same time.

Operating system: Ubuntu 20.04 is recommended.

Setting the installation environment: to properly set the environment for running experiments with NRP-Core, make sure the lines below are added to your ~/.bashrc file.

# Start setting environment
export NRP_INSTALL_DIR="/home/${USER}/.local/nrp" # The installation directory, which was given before
export NRP_DEPS_INSTALL_DIR="/home/${USER}/.local/nrp_deps"
export PYTHONPATH="${NRP_INSTALL_DIR}"/lib/python3.8/site-packages:"${NRP_DEPS_INSTALL_DIR}"/lib/python3.8/site-packages:$PYTHONPATH
export LD_LIBRARY_PATH="${NRP_INSTALL_DIR}"/lib:"${NRP_DEPS_INSTALL_DIR}"/lib:${NRP_INSTALL_DIR}/lib/nrp_gazebo_plugins:$LD_LIBRARY_PATH
export PATH=$PATH:"${NRP_INSTALL_DIR}"/bin:"${NRP_DEPS_INSTALL_DIR}"/bin
export GAZEBO_PLUGIN_PATH=${NRP_INSTALL_DIR}/lib/nrp_gazebo_plugins:${GAZEBO_PLUGIN_PATH}
. /usr/share/gazebo-11/setup.sh
. /opt/ros/noetic/setup.bash
. ${CATKIN_WS}/devel/setup.bash
# End of setting environment

Dependency installation:

# Start of dependencies installation
# Pistache REST Server
sudo add-apt-repository ppa:pistache+team/unstable

# Gazebo repository
sudo sh -c 'echo "deb http://packages.osrfoundation.org/gazebo/ubuntu-stable `lsb_release -cs` main" > /etc/apt/sources.list.d/gazebo-stable.list'
wget https://packages.osrfoundation.org/gazebo.key -O - | sudo apt-key add -

sudo apt update
sudo apt install git cmake libpistache-dev libboost-python-dev libboost-filesystem-dev libboost-numpy-dev libcurl4-openssl-dev nlohmann-json3-dev libzip-dev cython3 python3-numpy libgrpc++-dev protobuf-compiler-grpc libprotobuf-dev doxygen libgsl-dev libopencv-dev python3-opencv python3-pil python3-pip libgmock-dev

# required by gazebo engine
sudo apt
+page_content='install ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='libgazebo11 -dev ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='gazebo11 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='gazebo11 -plugin -base ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='14 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='15 # Remove the flask if it was ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='installed to ensure it is installed ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='from pip ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='16 sudo apt remove ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='python3 -flask python3 -flask -cors ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='17 # required by Python ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='engine ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='18 # If you are ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} 
+page_content='planning to use The ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='Virtual ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='Brain ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='framework ,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' you will most likely have to use flask version 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' 19 # By installing flask version 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='4 markupsafe library (included with flask) has to be downgraded to version 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='1 to run properly with gunicorn 20 # You can install that version with 21 # pip install flask ==1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='1.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='4 gunicorn markupsafe ==2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='1 22 pip install flask gunicorn 23 24 # required by nest -server (which is built and installed along with nrp -core) 4 25 sudo apt install python3 - restrictedpython uwsgi -core uwsgi -plugin -python3 26 pip install flask_cors mpi4py docopt 27 28 # required by nrp -server ,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' which uses gRPC python bindings 29 pip install grpcio -tools pytest psutil docker 30 31 # Required for using docker with ssh 32 pip install paramiko 33 34 # ROS ,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' when not needed ,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' can jump to the next step 35 36 # Install ROS: follow the installation instructions: http :// wiki.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='ros.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='org/noetic Installation/Ubuntu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' To enable ros support in nrp on ‘ros -noetic -ros -base ‘ is required.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' 37 38 #Tell nrpcore where your catkin workspace is located: export a variable CATKIN_WS pointing to an existing catkin workspace root folder.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' If the variable does not exist , a new catkin workspace will be created at ‘${HOME }/ catkin_ws ‘.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' 39 40 # MQTT , if needed , see the homepage of NRP -Core 41 42 # End of dependencies installation NRP installation: 1 # Start of installation 2 git clone https :// bitbucket.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='org/ hbpneurorobotics /nrp -core.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='git 3 cd nrp -core 4 mkdir build 5 cd build 6 # See the section "Common NRP -core CMake options" in the documentation for the additional ways to configure the project with CMake 7 cmake .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='. -DCMAKE_INSTALL_PREFIX ="${ NRP_INSTALL_DIR }" - DNRP_DEP_CMAKE_INSTALL_PREFIX ="${ NRP_DEPS_INSTALL_DIR }" 8 mkdir -p "${ NRP_INSTALL_DIR }" 9 # the installation process might take some time , as it downloads and compiles Nest as well.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' 10 # If you haven ’t installed MQTT libraries , add ENABLE_MQTT=OFF definition to cmake (-DENABLE_MQTT=OFF).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' 11 make 12 make install 13 # Just in case of wanting to build the documentation .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' Documentation can then be found in a new doxygen folder 14 make nrp_doxygen 15 # End of installation Common NRP-core CMake options: Here is the list of the CMake options that can help modify the project configu- ration (turn on and off the support of some components and features).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' Developers options: – COVERAGE enables the generation of the code coverage reports during the testing – BUILD_RST enables the generation of the reStructuredText source files from the Doxygen documentation Communication protocols options: – ENABLE_ROS enables compilation with ROS support;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' – ENABLE_MQTT enables compilation with the MQTT support.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' ENABLE_SIMULATOR and BUILD_SIMULATOR_ENGINE_SERVER options: – ENABLE_NEST and BUILD_NEST_ENGINE_SERVER;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' – ENABLE_GAZEBO and BUILD_GAZEBO_ENGINE_SERVER.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' The ENABLE_SIMULATOR and BUILD_SIMULATOR_ENGINE_SERVER flags allow disabling the compilation of those parts of nrp-core that depend on or install a specific simulator (eg.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' gazebo, nest).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' The expected behavior for each of these pairs of flags is as follows: 5 the NRPCoreSim is always built regardless of any of the flags values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' if ENABLE_SIMULATOR is set to OFF: – the related simulator won’t be assumed to be installed in the system, ie.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' make won’t fail if it isn’t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' Also it won’t be installed in the compilation process if this possibility is available (as in the case of nest) – The engines connected with this simulator won’t be built (nor client nor server components) – tests that would fail if the related simulator is not available won’t be built if the ENABLE_SIMULATOR is set to ON and BUILD_SIMULATOR_ENGINE_SERVER is set to OFF: Same as above, but: – the engine clients connected to this simulator will be built.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' This means that they should not depend on or link to any specific simulator – the engine server-side components might or might not be built, depending on if the related simulator is required at compilation time if both flags are set to ON the simulator is assumed to be installed or it will be installed from the source if this option is available.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' All targets connected with this simulator will be built.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' This flag system allows configuring the resulting NRP-Core depending on which simulators are available on the system, both for avoiding potential dependency conflicts between simulators and enforcing modularity, opening the possibility of having specific engine servers running on a different machine or inside containers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='2 Introduction of basic components of simulation by NRP Some important elements for constructing a simulation example by the NRP platform are: Engines, Transceiver Function (TF) + Preprocessing Function (PF), Simulation Configuration JSON file, Simulation model file and DataPack, which are basic components of simulation progress.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' In this section, list and declare their definition, content and implementation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='1 Engine Engines are a core aspect of the NRP-core framework.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' They run the actual simulation software (which can be comprised of any number of heterogeneous modules), with the Simulation Loop and TransceiverFunctions merely being a way to synchronize and exchange data between them.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' The data exchange is carried out through an engine client (see paragraph below).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' An Engine can run any type of software, from physics engines to brain simulators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' The only requirement is that they should be able to manage progressing through time with fixed-duration time steps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' There are different engines already implemented in NRP-Core: Nest: two different implementations that integrate the NEST Simulator into NRP-core.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' Gazebo: engine implementation for the Gazebo physics simulator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' PySim: engine implementation based on the Python JSON Engine wrapping different simulators (Mujoco, Opensim, and OpenAI) with a python API.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' The Virtual Brain: engine implementation based on the Python JSON Engine and TVB Python API.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' and so on are provided by NRP and as the first user-interested engines for research Spiking neural Networks and the like.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' These applications are distributed to the specific simulator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' This platform provides also Python JSON Engine, this versatile engine enables users to execute a user-defined python script as an engine server, thus ensuring synchronization and enabling DataPack data transfer with the Simulation Loop process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' It can be used to integrate any simulator with a Python API in an NRP-core experiment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' This feature allows users to modular develop experiment agents in constructed simulation world and is flexible to manage plural objects with different behaviors and characters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='2.' 
2.2.2 DataPack and construction format

The carrier of information transported between engines, which lets engines communicate with each other, is the DataPack. NRP supports three types of DataPack, all of them simple objects which wrap around arbitrary data structures: the JSON DataPack, the Protobuf DataPack, and the ROS msg DataPack. They provide the necessary abstract interface, which is understood by all components of NRP-Core, while still allowing the passing of data in various formats. A DataPack is also an important property of a specific Engine, meaning that the parameters and data format of a specific DataPack are declared in that Engine (for an example, see section 3.4.2).

A DataPack consists of two parts:
– DataPack ID, which allows the object to be uniquely identified;
– DataPack data, the data stored by the DataPack, which can in principle be of any type.

DataPacks are mainly used by Transceiver Functions to relay data between engines. Each engine type is designed to accept only DataPacks of a certain type and structure. Every DataPack contains a DataPackIdentifier, which uniquely identifies the DataPack object and allows for the routing of the data between transceiver functions, engine clients and engine servers. A DataPack identifier consists of three fields:
– name: the name of the DataPack; it must be unique;
– type: a string representation of the DataPack data type; this field will most probably be of no concern for users, as it is set and used internally and is not in human-readable form;
– engine name: the name of the engine to which the DataPack is bound.

DataPack is a template class with a single template parameter, which specifies the type of data contained by the DataPack. This DataPack data can in principle be of any type. In practice there are some limitations, though: since DataPacks are C++ objects, they must be accessible from TransceiverFunctions, which are written in Python. Therefore the only DataPack data types which can actually be used in NRP-core are those for which Python bindings are provided.

It is possible for a DataPack to contain no data. This is useful, for example, when an Engine is asked for a certain DataPack but is not able to provide it. In this case, the Engine can return an empty DataPack. This type of DataPack contains only a DataPack identifier and no data.
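The identifier fields and the empty-DataPack behavior described above can be modeled in a few lines of plain Python. This is only an illustrative sketch (the class bodies below are invented for the illustration); the real DataPack is a C++ template class with Python bindings, not this code:

```python
# Illustrative Python model of a DataPack: an identifier (name, type, engine name)
# plus optional data. NOT the real NRP-core class, which is a C++ template.
class DataPackIdentifier:
    def __init__(self, name, type_name, engine_name):
        self.name = name                 # unique DataPack name
        self.type = type_name            # internal string form of the data type
        self.engine_name = engine_name   # engine to which the DataPack is bound

class DataPack:
    def __init__(self, identifier, data=None):
        self.id = identifier
        self._data = data

    def isEmpty(self):
        # An empty DataPack carries only its identifier and no data
        return self._data is None

    @property
    def data(self):
        if self.isEmpty():
            # mirrors the behavior described in the text: accessing the
            # data of an empty DataPack raises an exception
            raise RuntimeError("DataPack is empty")
        return self._data

ident = DataPackIdentifier("camera", "JsonDataPack", "gazebo")
empty = DataPack(ident)              # identifier only, no data
full = DataPack(ident, {"x": 1})     # identifier plus data
```

In this model, as in NRP-core, an empty DataPack is a legitimate object that can be routed by its identifier, and only the attempt to read its data fails.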
Attempting to retrieve the data from an empty DataPack will result in an exception. A method "isEmpty" is provided to check whether a DataPack is empty before attempting to access its data:

    if not datapack.isEmpty():
        # It's safe to get the data
        print(datapack.data)
    else:
        # This will raise an exception
        print(datapack.data)

The format for getting a DataPack from a particular Engine:

    # Declare a datapack with name "datapack_name" from engine "engine_name" as input,
    # using the @EngineDataPack decorator. The transceiver function must accept an
    # argument with the same name as "keyword" in the datapack decorator.
    @EngineDataPack(keyword="datapack", id=DataPackIdentifier("datapack_name", "engine_name"))
    @TransceiverFunction("engine_name")
    def transceiver_function(datapack):
        print(datapack.data)

    # Multiple input datapacks from different engines can be declared
    @EngineDataPack(keyword="datapack1", id=DataPackIdentifier("datapack_name1", "engine_name1"))
    @EngineDataPack(keyword="datapack2", id=DataPackIdentifier("datapack_name2", "engine_name2"))
    @TransceiverFunction("engine_name1")
    def transceiver_function(datapack1, datapack2):
        print(datapack1.data)
        print(datapack2.data)

PS: the details of the two decorators of a Transceiver Function are given below in section 2.2.3.
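The keyword mechanism used above, where a decorator binds a named datapack to a function argument, can be mimicked in a small self-contained sketch. The names engine_datapack and SimpleDataPack below are invented for this illustration; this is a simplified model of the binding idea, not the actual nrp_core implementation:

```python
# Simplified model of keyword-based datapack injection, NOT the real nrp_core code.
class SimpleDataPack:
    """Minimal stand-in for a datapack: a name, an engine, and some data."""
    def __init__(self, name, engine_name, data=None):
        self.name = name
        self.engine_name = engine_name
        self.data = data

def engine_datapack(keyword, datapack):
    """Decorator factory: inject `datapack` as the keyword argument `keyword`."""
    def decorator(func):
        def wrapper(**kwargs):
            kwargs[keyword] = datapack   # bind the datapack to the declared keyword
            return func(**kwargs)
        return wrapper
    return decorator

camera = SimpleDataPack("camera", "gazebo", data=[0, 1, 2])

@engine_datapack(keyword="datapack", datapack=camera)
def transceiver_function(datapack):
    return len(datapack.data)
```

Calling transceiver_function() with no arguments now receives the bound datapack, which is the essence of declaring inputs through decorators as shown in the real API above.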
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' The Format of setting information in DataPack and sending to particular Engine: 1 # NRP -Core expects transceiver functions to always return a list of datapacks 2 @TransceiverFunction ("engine_name") 3 def transceiver_function (): 4 datapack = JsonDataPack(" datapack_name ",' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' "engine_name") 5 return [ datapack ] 6 7 7 # Multiple datapacks can be returned 8 9 @TransceiverFunction ("engine_name") 10 def transceiver_function (): 11 datapack1 = JsonDataPack(" datapack_name1 ",' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' "engine_name") 12 datapack2 = JsonDataPack(" datapack_name2 ",' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' "engine_name") 13 14 return [ datapack1 ,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' datapack2 ] 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content='3 Transceiver Function and Preprocessing Function 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FNAyT4oBgHgl3EQfSffh/content/2301.00089v1.pdf'} +page_content=' Transceiver Function Transceiver Functions are user-defined Python functions that take the role of transmitting DataPacks between engines.' 
They are used in the architecture to convert, transform, or combine data from one or multiple engines and relay it to another. The definition of a Transceiver Function must place a decorator before the user-defined "def" of the function, meaning:

Sending the DataPack to the target Engine:

```python
@TransceiverFunction("engine_name")
```

To request datapacks from engines, additional decorators can be prepended to the Transceiver Function, with the following form (attention: the receive decorator must be placed in front of @TransceiverFunction):

```python
@EngineDataPack(keyword_datapack, id_datapack)
```

– keyword_datapack: a user-defined name for the DataPack; this keyword is used as the input argument of the Transceiver Function.
– id_datapack: the ID of the DataPack received from a particular Engine, where "DataPack ID" = "DataPack Name" + "Engine Name" (for examples, see section 2.2.2).

2. Preprocessing Function

A Preprocessing Function is very similar to a Transceiver Function but has a different purpose. Preprocessing Functions are introduced to optimize expensive computations on DataPacks attached to a single engine. In some cases it is necessary to apply the same operation to a particular DataPack in multiple Transceiver Functions; an example might be applying a filter to a DataPack containing an image from a physics simulator. To execute such an operation just once and let other TFs access the processed DataPack data, Preprocessing Functions (PFs) are introduced.
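The decorator mechanics of Transceiver Functions can be sketched stand-alone. The stubs below are toy stand-ins, not the real NRP-Core implementations: in the framework, id_datapack is a DataPack identifier resolved by the simulation loop, while here a datapack object is passed directly for brevity. The sketch only illustrates how @EngineDataPack injects a requested DataPack under the chosen keyword and how the returned list is tagged for the target engine:

```python
class JsonDataPack:
    """Toy stand-in for NRP-Core's JsonDataPack (name, owning engine, payload)."""
    def __init__(self, name, engine_name, data=None):
        self.name = name
        self.engine_name = engine_name
        self.data = data if data is not None else {}

def TransceiverFunction(engine_name):
    """Stub: tag the wrapped function with the engine its output is sent to."""
    def wrap(fn):
        fn.target_engine = engine_name
        return fn
    return wrap

def EngineDataPack(keyword_datapack, id_datapack):
    """Stub: inject the requested datapack as a keyword argument."""
    def wrap(fn):
        def inner():
            return fn(**{keyword_datapack: id_datapack})
        inner.target_engine = getattr(fn, "target_engine", None)
        return inner
    return wrap

# A datapack as it might arrive from a simulator engine (illustrative values).
camera = JsonDataPack("camera", "gazebo", {"image": [0, 1, 2]})

@EngineDataPack("camera_dp", camera)   # receive decorator first
@TransceiverFunction("python_1")       # then the send decorator
def forward_camera(camera_dp):
    # Repackage the received data for the target engine.
    return [JsonDataPack("processed_camera", "python_1", camera_dp.data)]

result = forward_camera()
```

Note the decorator order: because @EngineDataPack sits on top, it wraps the already-tagged function, which mirrors the rule that receive decorators must precede @TransceiverFunction.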
Preprocessing Functions show two main differences with respect to Transceiver Functions:

– Their output datapacks are not sent to the corresponding Engines; they are kept in a local datapack cache and can be used as input in Transceiver Functions.
– PFs can only take input DataPacks from the Engine they are linked to.

The format of a Preprocessing Function is similar to that of a Transceiver Function:

```python
@PreprocessingFunction("engine_name")
@PreprocessedDataPack(keyword_datapack, id_datapack)
```

The decorators @PreprocessingFunction and @PreprocessedDataPack must be used in Preprocessing Functions. Since the output of a Preprocessing Function is stored in the local cache and does not need to be processed on the Engine Server side, a Preprocessing Function can return any type of DataPack without restrictions.

2.2.4 Simulation Configuration JSON file

The configuration details for any simulation, with its Engines and Transceiver Functions, are stored in a single JSON file. This file contains the engine objects, the Transceiver Functions, and the parameters necessary to initialize and execute a simulation; it is usually written as an "example_simulation.json" file. The format used here is a JSON schema, which is highly readable and offers capabilities similar to XML Schema. Its composability and inheritance allow the simulation to use reference keywords to define an agent and to validate inheritance by referring to other schemas. This means that the same engine base can create several agents or objects at the same time, differing only in their identifying IDs.

1. Simulation Parameters

For details, see appendix Table A.1: Simulation configuration parameters.

2. Example form

```json
{
  "SimulationName": "example_simulation",
  "SimulationDescription": "Launch two python engines.",
  "SimulationTimeout": 1,
  "EngineConfigs":
  [
    {
      "EngineType": "python_json",
      "EngineName": "python_1",
      "PythonFileName": "engine_1.py"
    },
    {
      "EngineType": "python_json",
      "EngineName": "python_2",
      "PythonFileName": "engine_2.py"
    }
  ],
  "DataPackProcessingFunctions":
  [
    {
      "Name": "tf_1",
      "FileName": "tf_1.py"
    }
  ]
}
```

EngineConfigs: this section lists all the engines participating in the simulation. Some important parameters must be declared:

– EngineType: the type of the validated engine, e.g. gazebo engine or python JSON engine.
– EngineName: a user-defined identification name for the validated engine.
– Other parameters: declared according to the engine type (for details, see appendix Table A.2: Engine base parameters).
  ∗ Python JSON engine: "PythonFileName", the base Python script for the validated engine.
  ∗ Gazebo engine: see the corresponding section.

DataPackProcessingFunctions: this section lists all the Transceiver Functions validated in the simulation. Usually two parameters must be declared:

– Name: a user-defined identification name for the validated Transceiver Function.
– FileName: the base Python script used to validate the Transceiver Function.

Other simulation parameters: see "1. Simulation Parameters" in section 2.2.4.

Launching a simulation: the simulation configuration JSON file is also the launch file, and a simulation experiment is started with the NRP command:

```shell
NRPCoreSim -c user_defined_simulation_config.json
```

Tip: a user-defined simulation folder can contain many differently named configuration JSON files at the same time. This is very useful for configuring exactly which engines or Transceiver Functions the user wants to launch and test: to start the target simulation experiment, simply choose the corresponding configuration file.

2.2.5 Simulation model file

In this autonomous-driving experiment on the NRP platform, the Gazebo physics simulator [5] serves as the world-description simulator. To construct the simulation world, an "SDF" file in XML format can describe all the necessary information about the 3D models in a single file, e.g. sunlight, environment, friction, wind, landform, robots, vehicles, and other physics objects.
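As a minimal, purely illustrative sketch (not the experiment's actual model file), such an SDF world skeleton can be assembled and inspected with Python's standard XML tooling; the tag names follow the SDF convention, while the concrete values are assumptions:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal SDF world: a directional sun light and one box model.
sdf_text = """
<sdf version="1.7">
  <world name="example_world">
    <light name="sun" type="directional"/>
    <model name="box">
      <pose>0 0 0.5 0 0 0</pose>
    </model>
  </world>
</sdf>
"""

root = ET.fromstring(sdf_text)
world = root.find("world")
# Collect the models declared in the world, as a loader would.
model_names = [m.get("name") for m in world.findall("model")]
```

The nesting mirrors the label hierarchy the text describes: the world label contains the lights, models, and other physics objects, and each model carries its own pose and plugin declarations.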
This file can describe in detail the static and dynamic information of a robot, its relative position and motion, the declaration of sensor or control plugins, and so on. Gazebo is a simulator closely tied to the ROS system and provides simulation components for ROS, so the ROS documentation describes many similar details about the construction of SDF files [6]. XML-format labels describe the components of the simulation world and establish the dependency relationships between them:

World Label