arXiv:2301.00507v1 [math.DG] 2 Jan 2023

On Geodesics of Sprays and Projective Completeness

Guojun Yang

Abstract

Geodesics, which play an important role in spray-Finsler geometry, are integral curves of a spray vector field on a manifold. Comparison theorems and rigidity results have been established concerning the completeness of geodesics of a spray or a Finsler metric. In this paper, projectively flat sprays with weak Ricci constancy (resp. constant curvature) are classified at the level of geodesics. Further, a geodesic method is introduced to determine an n-dimensional spray from a family of curves, depending on $2(n-1)$ free constant parameters, taken as its geodesics. Finally, it is shown that a spray is projectively complete under a certain condition on the domain of the geodesic parameter of its geodesics.

Keywords: Spray, Geodesic, Completeness, Path Space, Finsler Metric

MR(2000) subject classification: 53B40, 53C60
1  Introduction

Spray geometry studies the properties of sprays on a manifold, and it is closely related to Finsler geometry. Every Finsler metric induces a natural spray, but there are many sprays which are not Finsler-metrizable (i.e., not induced by any Finsler metric) ([3, 5, 10]). So a popular topic is to investigate whether a given spray is metrizable or not, and, more importantly, to give necessary and sufficient conditions for certain classes of sprays to be metrizable ([2, 11, 12]). It is also important to investigate the properties of some special classes of sprays, for example, (locally) projectively flat sprays, Berwald sprays, sprays of scalar (resp. isotropic, constant) curvature, and Hamel (resp. Funk) sprays ([2, 6, 11, 12]).

A spray $G$ on a manifold $M$ defines a special vector field on a conical region $C$ of $TM\setminus\{0\}$, which naturally determines its integral curves; the projections of the integral curves onto the manifold $M$ are called geodesics. Geodesics play an important role in the study of comparison theorems and rigidity issues on spray or Finsler manifolds. In [8], Z. Shen studies two pointwise projectively related Einstein Finsler metrics and determines the metrics along geodesics. In [10], the present author obtains a comparison theorem on the Ricci curvatures of a spray and a Finsler metric which are pointwise projectively related, with an estimate of the corresponding projective factor. In [1], R. Bryant proves that a geodesically reversible Finsler metric on $S^2$ with positive constant flag curvature is a Riemannian metric. In [7], C. Robles classifies geodesics of Randers metrics of constant flag curvature. In [4], L. Huang and X. Mo obtain the relation between the geodesics of two Finsler metrics $F$ and $\tilde{F}$, where $\tilde{F}$ is defined by the navigation data $(F, V)$ with $V$ a homothetic vector field of $F$. In this paper, we study projectively flat sprays with weak Ricci constancy, the construction of sprays by a geodesic method, and the projective completeness of sprays.

In [11], sprays of constant curvature are introduced, and a spray $G$ of constant curvature is weakly Ricci constant (the Ricci curvature is constant along any geodesic of $G$). Two pointwise projectively related sprays have the same geodesics as point sets, and their geodesic parameters are closely related by the projective factor. Starting from this fact, we can determine a projectively flat spray with weak Ricci constancy at the level of geodesics.
We consider a projectively flat spray manifold $(G, M)$, that is,
$$G^i = \tilde{G}^i + Py^i, \qquad (1)$$
where $\tilde{G}$ is a locally Minkowskian spray on $M$. We have the following theorem.

Theorem 1.1  If the spray $G$ in (1) is weakly Ricci constant ($Ric_{;0} = 0$) or of constant curvature, then along any geodesic $x = x(s)$ of $G$, $P(s) := P(x(s), x'(s))$ is given by one of the following cases:
$$P(s) = \frac{1}{s + \kappa}, \qquad P(s) = -c\tan(cs + \kappa), \qquad P(s) = \frac{-c(1 - \kappa e^{2cs})}{1 + \kappa e^{2cs}}, \qquad (2)$$
where $c, \kappa$ are constants. Further, if $G$ is complete, then $P(s)$ is given by
$$P(s) = \frac{-c(1 - \kappa e^{2cs})}{1 + \kappa e^{2cs}}. \qquad (3)$$

In Theorem 1.1, we can further give the relation between the geodesic parameters of $G$ and $\tilde{G}$ by (2) (see Proposition 3.1, Corollary 3.3).
The family of geodesics of an n-dimensional spray, considered as point sets or paths, depends on $2(n-1)$ free constant parameters. A path space is a family of curves satisfying certain conditions (Definition 4.1). We can freely construct many interesting path spaces, especially in dimension two. Starting from a path space, we can construct its corresponding spray.

Theorem 1.2  In an n-dimensional path space $\mathcal{G}$, all paths in a local coordinate system $(x^i)$ can be parameterized by a variable $t$ with $2(n-1)$ free constant parameters $u, v$ as follows:
$$x = x(t) = \sigma(t; u, v), \qquad (u, v \in R^{n-1}). \qquad (4)$$
Further, the parametric equation (4) induces a spray $G$ whose geodesics are given by (4) with $t$ as its geodesic parameter, and if a new variable $s = s(t) = s(t; u, v)$ with $s'(t) > 0$ is given, then it gives a spray $\bar{G} \in Proj(G)$ with $s$ as its geodesic parameter.

If a family of curves can be parameterized in the form (4), then, with an auxiliary parameter $c > 0$ multiplying $t$ in (4), we can obtain the corresponding spray by eliminating the parameters $u, v, c, t$. We give some examples to show how to solve for the sprays determined by given path spaces (see Examples 4.5-4.8).
In the study of rigidity issues on a Finsler or spray manifold, it is important to assume that the (Finsler) spray under consideration is (positively/negatively) complete. A given spray is not necessarily (positively/negatively) complete. So a natural problem is whether a spray can be made projectively (positively/negatively) complete or not. We solve this problem under certain conditions in the following result.

Theorem 1.3  Let $G$ be a spray on a manifold $M$ such that each of its geodesics $x = x(t)$ is defined on the maximal interval $I$ given by one of the following cases:
$$I = (a, b), \quad \text{or} \quad (a, +\infty), \quad \text{or} \quad (-\infty, b), \qquad (5)$$
where $a = a(u, v) < 0$ and $b = b(u, v) > 0$, with $u = x(0)$, $v = x'(0)$, are $C^\infty$ functions on a conical region $C$ of $TM\setminus\{0\}$. Then $G$ is projectively (positively/negatively) complete on $C$.

In Theorem 1.3, we can usually also take $u, v$ as in (4) (see Example 5.5). If (5) is not satisfied, it is uncertain whether $G$ is projectively complete (cf. Example 5.5). We give Examples 5.2-5.5 as applications of Theorem 1.3. A Finsler metric is not necessarily projectively (positively/negatively) complete; namely, if $G$ in Theorem 1.3 is a Finsler spray, the spray projectively related to $G$ may not be a Finsler spray.
2  Geodesic parameters in projective relations

A spray on $M$, in our consideration, is a smooth vector field $G$ on a conical region $C$ of $TM\setminus\{0\}$ (an important case is $C = TM\setminus\{0\}$), expressed in a local coordinate system $(x^i, y^i)$ of $TM$ as follows:
$$G = y^i\frac{\partial}{\partial x^i} - 2G^i\frac{\partial}{\partial y^i},$$
where the $G^i$ are local homogeneous functions satisfying $G^i(x, \lambda y) = \lambda^2 G^i(x, y)$ for $\lambda > 0$. If $C = TM\setminus\{0\}$, $G$ is called regular; otherwise, it is called singular.

The integral curves of $G$ projected onto $M$ are the geodesics of $G$. Let $x = x(s)$ be a geodesic of $G$. Then it satisfies the following ODE:
$$\frac{d^2x^i}{ds^2} + 2G^i\Big(x, \frac{dx}{ds}\Big) = 0,$$
where $s$ is called a geodesic parameter of the geodesic $x = x(s)$. Reparameterizing a geodesic $x = x(s)$ by a general parameter $t$ with $ds/dt > 0$, we have
$$\frac{d^2x^i}{dt^2} + 2G^i\Big(x, \frac{dx}{dt}\Big) = \gamma(t)\frac{dx^i}{dt}, \qquad (6)$$
where $\gamma(t)$ is given by
$$\gamma(t) = \frac{d^2s}{dt^2}\Big/\frac{ds}{dt} = -\frac{d^2t}{ds^2}\Big/\Big(\frac{dt}{ds}\Big)^2. \qquad (7)$$

Let $G, \bar{G}$ be two sprays pointwise projectively related by $\bar{G}^i = G^i + Py^i$. Let $x = x(t)$ be a geodesic of $G$ or $\bar{G}$, as a point set, with a general parameter $t$. Then along the geodesic $x = x(t)$, it follows from (6) and (7) that
$$2P(t) = \frac{\bar{s}''(t)}{\bar{s}'(t)} - \frac{s''(t)}{s'(t)}, \qquad \big(P(t) := P(x(t), x'(t))\big), \qquad (8)$$
where $s, \bar{s}$ are the geodesic parameters of the curve $x = x(t)$ in $G, \bar{G}$ respectively. In particular, along a geodesic $x = x(s)$ of $G$, it follows from (8) that
$$2P(s) = \frac{\bar{s}''(s)}{\bar{s}'(s)}, \qquad \big(P(s) := P(x(s), x'(s))\big). \qquad (9)$$
If we express the geodesic $x = x(s)$ of $G$ as the geodesic $x = x(\bar{s})$ of $\bar{G}$, then by (9) we have
$$2P(\bar{s}) = 2P(x(s), x'(s))\frac{ds}{d\bar{s}} = \frac{\bar{s}''(s)}{\big(\bar{s}'(s)\big)^2}, \qquad \big(P(\bar{s}) := P(x(\bar{s}), x'(\bar{s}))\big). \qquad (10)$$
So if $P(s)$ or $P(\bar{s})$ is known, the relation $\bar{s} = \bar{s}(s)$ can be obtained from (9) or (10).
Example 2.1  Let $F$ be the Funk metric on a strongly convex domain $\Omega \subset R^n$. Define a projectively flat spray $G$ by
$$G^i = Py^i, \qquad P := cF,$$
where $c$ is a constant. Any geodesic $x = x(t)$ (as a point set) of $G$ is given by
$$x = x(t) = vt + u, \qquad \Big(-\frac{1}{F(u, -v)} < t < \frac{1}{F(u, v)}\Big),$$
where $u, v \in R^n$ are constant vectors. We have
$$F(vt + u, v) = \frac{F(u, v)}{1 - tF(u, v)}. \qquad (11)$$
Let $s$ be a geodesic parameter of $G$. Then by (9) and (11) we have
$$\frac{s''(t)}{s'(t)} = 2cF(vt + u, v) = \frac{2cF(u, v)}{1 - tF(u, v)}, \qquad (12)$$
integration of which with $s(0) = 0$ gives
$$s = s(t) = \begin{cases} \kappa\ln\big(1 - tF(u, v)\big), & (c = \tfrac{1}{2}),\\[1mm] \kappa\Big[1 - \big(1 - tF(u, v)\big)^{1-2c}\Big], & (c \ne \tfrac{1}{2}), \end{cases} \qquad (13)$$
where $\kappa$ is a constant with $\kappa < 0$ for $c \ge 1/2$, and $\kappa > 0$ for $c < 1/2$. Thus the spray is positively complete for $c \ge 1/2$, and any geodesic is defined on a finite open interval for $c < 1/2$. Besides, the spray $G$ is (locally) metrizable if and only if $c = 0, 1, 1/2$ (see [10]).
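The integration from (12) to (13) is easy to confirm with a computer algebra system. The following sympy sketch (added here purely as a verification, not part of the original argument) treats the value $F(u, v)$ along the fixed line as a positive constant $A$, which (11) justifies, and checks both branches of (13):

```python
import sympy as sp

# Check that both branches of (13) solve s''/s' = 2cA/(1 - tA)  (Eq. (12)),
# where A stands for the constant value F(u, v) along the fixed line.
t, A, kappa, c = sp.symbols('t A kappa c', positive=True)

# Branch c = 1/2: s = kappa*ln(1 - tA); here 2c*A/(1 - tA) = A/(1 - tA).
s_half = kappa * sp.log(1 - t*A)
ratio_half = sp.simplify(sp.diff(s_half, t, 2) / sp.diff(s_half, t))
assert sp.simplify(ratio_half - A/(1 - t*A)) == 0

# Branch c != 1/2: s = kappa*(1 - (1 - tA)**(1 - 2c)).
s_gen = kappa * (1 - (1 - t*A)**(1 - 2*c))
ratio_gen = sp.simplify(sp.diff(s_gen, t, 2) / sp.diff(s_gen, t))
assert sp.simplify(ratio_gen - 2*c*A/(1 - t*A)) == 0
```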
Example 2.2  In Example 2.1, if the spray $G$ is given by
$$G^i(y) := Py^i, \qquad P := c\big(F(y) - F(-y)\big),$$
then by (11) and
$$F(vt + u, -v) = \frac{F(u, -v)}{1 + tF(u, -v)}, \qquad (14)$$
it follows from (9) that
$$\frac{s''(t)}{s'(t)} = \frac{2cF(u, v)}{1 - tF(u, v)} - \frac{2cF(u, -v)}{1 + tF(u, -v)},$$
integration of which with $s(0) = 0$ gives
$$s = s(t) = \kappa\int_0^t \Big[\big(1 - tF(u, v)\big)\big(1 + tF(u, -v)\big)\Big]^{-2c} dt, \qquad (15)$$
where $\kappa > 0$ is a constant. From (15) it is clear that $G$ is complete if $c \ge 1/2$, and $s$ is bounded, defined on a finite open interval, if $c < 1/2$.
3  Projectively flat sprays with weak Ricci constancy

For a spray $G$, the Riemann curvature tensor $R^i_k$ is defined by
$$R^i_k := 2\partial_k G^i - y^j\partial_j\dot{\partial}_k G^i + 2G^j\dot{\partial}_j\dot{\partial}_k G^i - (\dot{\partial}_j G^i)(\dot{\partial}_k G^j),$$
where we define $\partial_k := \partial/\partial x^k$, $\dot{\partial}_k := \partial/\partial y^k$. The trace of $R^i_k$ is called the Ricci curvature, $Ric := R^i_i$. For a spray tensor $T = T_i dx^i$, as an example, the horizontal and vertical derivatives of $T$ with respect to the Berwald connection are given by
$$T_{i;j} = \delta_j T_i - T_r G^r_{ij}, \qquad T_{i.j} = \dot{\partial}_j T_i, \qquad \big(\delta_i := \partial_i - G^r_i\dot{\partial}_r, \quad G^k_{ir} := \dot{\partial}_r\dot{\partial}_i G^k\big).$$

A spray is called weakly Ricci constant if $Ric_{;0} := Ric_{;r}y^r = 0$. A spray $G$ is said to be of constant curvature if $R^i_k$ is given by $R^i_k = R\delta^i_k - \tau_k y^i$ with ([11])
$$\tau_{i;k} = 0 \quad \big(\Leftrightarrow\ R = \tau_k = 0,\ \text{or}\ R_{;i} = 0\ (R \ne 0)\big).$$
By definition, it is clear that a spray of constant curvature is weakly Ricci constant. For two pointwise projectively related sprays $G, \bar{G}$ with $\bar{G}^i = G^i + Py^i$, their Ricci curvatures $Ric, \overline{Ric}$ are related by
$$\overline{Ric} = Ric - (n-1)(P_{;0} - P^2). \qquad (16)$$

We consider a projectively flat spray manifold $(G, M)$ given by (1), that is,
$$G^i = \tilde{G}^i + Py^i,$$
where $\tilde{G}$ is a locally Minkowskian spray on $M$ ($\tilde{G}$ has local straight lines as geodesics). If $G$ is weakly Ricci constant, then we can determine the projective factor $P$ along geodesics, as shown in Theorem 1.1.

Proof of Theorem 1.1: By (16) and $\tilde{G}^i = G^i - Py^i$, the Ricci curvature $Ric$ of $G$ is given by
$$Ric = -(n-1)(P^2 + P_{;0}).$$
Therefore, $Ric_{;0} = 0$ is equivalent to $P_{;0;0} + 2PP_{;0} = 0$. Then along a geodesic $x = x(s)$ of $G$, we have
$$P''(s) + 2P(s)P'(s) = 0,$$
whose solution is given by one of the three cases in (2). Further, if $G$ is complete, it is clear that (3) follows from (2).  Q.E.D.
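The solution step can be double-checked symbolically. The following sympy sketch (added here for verification) confirms that each of the three candidate factors in (2) satisfies $P'' + 2PP' = 0$:

```python
import sympy as sp

# Verify that each form of P(s) in (2) solves P''(s) + 2 P(s) P'(s) = 0,
# equivalently d/ds (P' + P^2) = 0.
s, c, kappa = sp.symbols('s c kappa', positive=True)

candidates = [
    1/(s + kappa),                                            # first form in (2)
    -c*sp.tan(c*s + kappa),                                   # second form in (2)
    -c*(1 - kappa*sp.exp(2*c*s))/(1 + kappa*sp.exp(2*c*s)),   # third form in (2)
]
residues = [sp.simplify(sp.diff(P, s, 2) + 2*P*sp.diff(P, s)) for P in candidates]
assert residues == [0, 0, 0]
```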
If the spray $G$ in (1) is weakly Ricci constant ($Ric_{;0} = 0$), then, applying (2) and (10), we obtain the following proposition.

Proposition 3.1  Let the spray $G$ in (1) be weakly Ricci constant (esp. of constant curvature). For any geodesic $\sigma$, let $s$ and $t$ be the geodesic parameters of $\sigma$ with respect to $G$ and $\tilde{G}$ respectively. Then $s = s(t)$ is given by one of the following cases:
$$s = at,\ (a > 0); \qquad s = b\ln(1 + at),\ (ab > 0); \qquad (17)$$
$$s = \frac{bt}{1 + at},\ (a \ne 0,\ b > 0); \qquad s = c\big(\arctan(at + b) - \arctan b\big),\ (ac > 0); \qquad (18)$$
$$s = c\ln\frac{1 + bt}{1 + at}, \qquad \big((b - a)c > 0,\ ab \ne 0\big), \qquad (19)$$
where $a, b, c$ are constants, and in (19) it is further required that $s'(t) > 0$ (see Remark 3.2).

Proof: By (10) we need to solve the following ODE with initial conditions:
$$\frac{s''(t)}{s'(t)} = 2P(s)s'(t), \qquad \big(s(0) = 0,\ s'(t) > 0\big),$$
integration of which gives
$$s'(t) = ae^{2\int P(s)ds}, \qquad \int e^{-2\int P(s)ds}\, ds = at + b, \qquad (20)$$
where $a, b$ are two constants. Now $P(s)$ is given by (2) from Theorem 1.1, and thus we can obtain $s = s(t)$ by plugging $P(s)$ into (20).

If $P(s) = 0$, then (20) gives $s = at + b$. Since $s(0) = 0$, $s'(t) > 0$, we obtain $s = at$ $(a > 0)$, which gives the first formula in (17).

If $P(s) = c \ne 0$ is constant, then (20) gives the second formula in (17) with $ab > 0$.

If $P(s)$ is given by the first formula in (2), then (20) gives
$$s = -\kappa + \frac{1}{at + b},$$
which can be rewritten in the form of the first formula in (18) by $s(0) = 0$, $s'(t) > 0$.

If $P(s)$ is given by the second formula in (2) $(c \ne 0)$, then (20) gives
$$s = \frac{-\kappa - \arctan(at + b)}{c},$$
which can be rewritten as the second formula in (18) by $s(0) = 0$, $s'(t) > 0$.

If $P(s)$ is given by the third formula in (2) $(c\kappa \ne 0)$, then (20) gives
$$s = \frac{1}{2c}\ln\Big(\frac{1}{\kappa}\Big(\frac{1}{at + b} - 1\Big)\Big),$$
which can be rewritten as the formula in (19) by $s(0) = 0$, $s'(t) > 0$.  Q.E.D.
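As a cross-check of this correspondence (added here as a verification sketch), one can recover $P$ from a given family $s = s(t)$ via (10), i.e. $2P = s''(t)/s'(t)^2$, and compare with (2). For the first formula in (18):

```python
import sympy as sp

# For s = b*t/(1 + a*t) (first formula in (18)), recover the projective
# factor from (10), 2P = s''(t)/s'(t)**2, rewrite it in the geodesic
# parameter s, and check it takes the first form 1/(s + kappa) of (2).
t, a, b, w = sp.symbols('t a b w', positive=True)   # w plays the role of s

s_t = b*t/(1 + a*t)
P_t = sp.simplify(sp.diff(s_t, t, 2) / (2*sp.diff(s_t, t)**2))

t_of_s = sp.solve(sp.Eq(w, s_t), t)[0]              # invert s = s(t)
P_s = sp.simplify(P_t.subs(t, t_of_s))
assert sp.simplify(P_s - 1/(w - b/a)) == 0          # i.e. kappa = -b/a
```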
In (19), by $s'(t) > 0$, we have a further restriction on the constant parameters $a, b, c$, which is shown in the following remark.

Remark 3.2  In (19), let $t$ be defined on the maximal interval $(\kappa_1, \kappa_2)$ with $\kappa_1 < 0 < \kappa_2$. It is easy to conclude the following cases from $s'(t) > 0$:
$$a > 0,\ b > 0: \quad t \in (\kappa_1, \kappa_2) \subset \big(-\tfrac{1}{a}, +\infty\big)\ (b < a); \qquad t \in (\kappa_1, \kappa_2) \subset \big(-\tfrac{1}{b}, +\infty\big)\ (b > a);$$
$$a < 0,\ b < 0: \quad t \in (\kappa_1, \kappa_2) \subset \big(-\infty, -\tfrac{1}{a}\big)\ (b > a); \qquad t \in (\kappa_1, \kappa_2) \subset \big(-\infty, -\tfrac{1}{b}\big)\ (b < a);$$
$$a > 0,\ b < 0: \quad t \in (\kappa_1, \kappa_2) \subset \big(-\tfrac{1}{a}, -\tfrac{1}{b}\big);$$
$$a < 0,\ b > 0: \quad t \in (\kappa_1, \kappa_2) \subset \big(-\tfrac{1}{b}, -\tfrac{1}{a}\big).$$
By Proposition 3.1 and Remark 3.2, we directly obtain the following corollary.

Corollary 3.3  If the spray $G$ in Proposition 3.1 $(P \ne 0)$ is complete, then $s = s(t)$ is given by one of the following two cases:
$$s = b\ln(1 + at),\ (ab > 0), \qquad (21)$$
$$s = c\ln\frac{1 + bt}{1 + at}, \qquad \big((b - a)c > 0,\ ab < 0\big), \qquad (22)$$
where in (21) and (22), respectively, we have
$$t \in \big(-\infty, -\tfrac{1}{a}\big)\ \text{if}\ a < 0, \quad \text{and} \quad t \in \big(-\tfrac{1}{a}, +\infty\big)\ \text{if}\ a > 0;$$
$$t \in \big(-\tfrac{1}{a}, -\tfrac{1}{b}\big)\ \text{if}\ a > 0,\ b < 0, \quad \text{and} \quad t \in \big(-\tfrac{1}{b}, -\tfrac{1}{a}\big)\ \text{if}\ a < 0,\ b > 0.$$
Now, in the following, we give some projectively flat sprays to verify the above results on the projective factors and the geodesic parameters.

Example 3.4  Consider the spray $G$ in Example 2.1. A direct computation shows that $G$ is weakly Ricci constant or of constant curvature if and only if $c = 0, 1, 1/2$. Let $x = x(t) = vt + u$ be a geodesic (as a point set) of $G$, and let the geodesic parameter $s$ in $G$ be given by (13). Then it follows from (10) and (13) that $P = cF$ is given by
$$P(s) = \begin{cases} -\dfrac{1}{2\kappa}, & (c = \tfrac{1}{2}),\\[2mm] \dfrac{c}{2c-1}\cdot\dfrac{1}{s - \kappa}, & (c \ne \tfrac{1}{2}). \end{cases} \qquad (23)$$
It is clear from (23) that $P(s)$ is in one of the forms in (2) if and only if $c = 0, 1, 1/2$. Meanwhile, $s = s(t)$ is as given in Proposition 3.1 if and only if $c = 0, 1/2, 1$, and in this case, $s = s(t)$ takes the respective forms shown in (17) and in the first formula of (18).

Example 3.5  Let $G$ be the spray in Example 2.2 with $c = 1/2$. $G$ is complete, and it is of constant curvature. Then it follows from (15) that
$$s = \kappa\ln\frac{1 + tF(u, -v)}{1 - tF(u, v)}, \qquad (24)$$
where $\kappa > 0$ is a constant. In this case, $s = s(t)$ is of the form (22), and it is easy to verify that $P(s)$ is in the form of the third formula in (2) by plugging (24) into (10).

Example 3.6  Let $G$ be a spray on $R^n$ defined by
$$G^i := Py^i, \qquad P := -\frac{\langle x, y\rangle}{1 + |x|^2}.$$
$G$ is metrizable, and it is of constant curvature. Let $x = x(t) = vt + u$ be a geodesic (as a point set) of $G$; by (9), the geodesic parameter $s$ of $G$ satisfies
$$\frac{s''(t)}{s'(t)} = -\frac{2(|v|^2 t + \langle u, v\rangle)}{1 + |u|^2 + 2\langle u, v\rangle t + |v|^2 t^2}.$$
Solving the ODE, we obtain
$$s = s(t) = \kappa_1 + \kappa_2\arctan\frac{|v|^2 t + \langle u, v\rangle}{\sqrt{(1 + |u|^2)|v|^2 - \langle u, v\rangle^2}},$$
where $\kappa_1, \kappa_2$ are constants. It is clear that $s = s(t)$ is in the form of the second formula in (18) if $s(0) = 0$, and $P(s)$ is in the form of the second formula in (2) by plugging the above $s = s(t)$ into (10).
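The closed form above can be verified directly; here is a sympy sketch (an added check) using scalar stand-ins $A = 1 + |u|^2$, $B = \langle u, v\rangle$, $C = |v|^2$, for which $AC - B^2 > 0$ by the Cauchy-Schwarz inequality:

```python
import sympy as sp

# With A = 1 + |u|^2, B = <u, v>, C = |v|^2, check that
# s(t) = k1 + k2*arctan((C*t + B)/sqrt(A*C - B**2)) satisfies
# s''/s' = -2(C*t + B)/(A + 2*B*t + C*t**2), as stated in Example 3.6.
t, A, B, C, k1, k2 = sp.symbols('t A B C kappa_1 kappa_2', positive=True)

D = sp.sqrt(A*C - B**2)                 # real under A*C > B^2
s_t = k1 + k2*sp.atan((C*t + B)/D)
Q = A + 2*B*t + C*t**2
res = sp.simplify(sp.diff(s_t, t, 2)/sp.diff(s_t, t) + 2*(C*t + B)/Q)
assert res == 0
```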
4  Construction of sprays from geodesics

Given a family of curves $\mathcal{G}$ on a manifold $M$, if $\mathcal{G}$ can constitute the geodesics of a spray $G$ on $M$, how can we solve for $G$ (at least locally)? A spray induces a (local) semispray, and two pointwise projectively related sprays induce the same semispray (cf. [9]). A semispray can also be considered as a special parameterized family of curves, which forms a path space. In this section, we start from a path space and introduce some ways to construct sprays based on a path space and its parameterization. We call this the geodesic method of constructing sprays.

Similarly to a spray, a path space $\mathcal{G}$ on a manifold $M$ is usually defined on a conical region $C$ of $TM\setminus\{0\}$ (see Definition 4.1), and $\mathcal{G}$ is called singular if $C \ne TM\setminus\{0\}$.

Definition 4.1  Let $\mathcal{G}$ be a family of $C^\infty$ parameterized curves (called paths) on an n-dimensional manifold $M$. $\mathcal{G}$ or $(M, \mathcal{G})$ is called an n-dimensional path space if on a conical region $C$ of $TM$ it satisfies:
(i) for $y \in C_x$, there is a curve $\sigma: (-\epsilon, \epsilon) \to M$ in $\mathcal{G}$ with $\sigma'(0) = y$;
(ii) for any $\sigma, \tau$ in $\mathcal{G}$ with $\sigma'(0) = \tau'(0)$, $\sigma$ and $\tau$ coincide in a small interval around $0$;
(iii) if a curve $\sigma$ is in $\mathcal{G}$, then for any constants $\lambda > 0$ and $t_o$, the curve $\eta$ is also in $\mathcal{G}$, where $\eta$ is defined by $\eta(t) := \sigma(\lambda t + t_o)$.

An equivalent version of Definition 4.1 in the regular case is referred to [9] (p. 52).

Example 4.2  Consider a set $\mathcal{G}$ of a family of curves $x = x(s)$ on $R^2$ of the form
$$x(s) = \sigma(s; x_o, y_o), \qquad \big(x(0) = x_o = (a, b),\ x'(0) = y_o = (u, v)\big),$$
$$\sigma(s; x_o, y_o) := (a, b) + (u, v)s - (0, 1)\Big(\frac{1}{3}u^3 s^3 + au^2 s^2\Big),$$
where $a, b, u, v$ are arbitrary parameters. It can be directly verified that $\mathcal{G}$ is a path space on $R^2$, since Definition 4.1 (i)(ii) automatically hold, and Definition 4.1 (iii) follows from
$$\sigma(\lambda s + s_o; x_o, y_o) = \sigma(s; \hat{x}_o, \hat{y}_o),$$
where we define
$$x_o = (a, b), \quad y_o = (u, v), \quad \hat{x}_o = (\hat{a}, \hat{b}), \quad \hat{y}_o = (\hat{u}, \hat{v}),$$
$$\hat{a} := a + us_o, \qquad \hat{b} := b + vs_o - \frac{1}{3}u^2(3a + us_o)s_o^2,$$
$$\hat{u} := \lambda u, \qquad \hat{v} := \lambda v - \lambda u^2 s_o(2a + us_o).$$
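The identity claimed for Definition 4.1 (iii) can be verified mechanically; the sympy sketch below (added as a check) expands both sides of $\sigma(\lambda s + s_o; x_o, y_o) = \sigma(s; \hat{x}_o, \hat{y}_o)$ for the nontrivial second component:

```python
import sympy as sp

# Verify sigma(lambda*s + s_o; x_o, y_o) = sigma(s; xhat_o, yhat_o) in
# Example 4.2. Only the second component needs checking: the first
# component a + u*s transforms trivially.
s, so, lam, a, b, u, v = sp.symbols('s s_o lambda a b u v')

def sigma2(s, a, b, u, v):
    # second component of sigma(s; (a,b), (u,v))
    return b + v*s - (sp.Rational(1, 3)*u**3*s**3 + a*u**2*s**2)

a_h = a + u*so
b_h = b + v*so - sp.Rational(1, 3)*u**2*(3*a + u*so)*so**2
u_h = lam*u
v_h = lam*v - lam*u**2*so*(2*a + u*so)

diff2 = sp.expand(sigma2(lam*s + so, a, b, u, v) - sigma2(s, a_h, b_h, u_h, v_h))
assert diff2 == 0
```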
For a path space $\mathcal{G}$, there are different ways to parameterize the paths in $\mathcal{G}$ by a parametric variable and some constant parameters (see Theorem 1.2 and Lemma 4.3). Example 4.2 satisfies (25) and (26) in the following Lemma 4.3 with
$$f(s; x_o, y_o) := -(0, 1)\Big(\frac{1}{3}u^3 s^3 + au^2 s^2\Big), \qquad x_o = (a, b),\ y_o = (u, v).$$

Lemma 4.3  An n-dimensional path space $(M, \mathcal{G})$ is locally expressed as the following family of curves $x = x(s)$ with arbitrary constant parameters $x_o, y_o \in R^n$:
$$x(s) = \sigma(s; x_o, y_o) = x_o + y_o s + f(s; x_o, y_o), \qquad (25)$$
where $f$ is a smooth function satisfying $f(0; x_o, y_o) = f'(0; x_o, y_o) = 0$ and
$$f(s; \hat{x}_o, \hat{y}_o) = f(\lambda s + s_o; x_o, y_o) - f(s_o; x_o, y_o) - \lambda f'(s_o; x_o, y_o)s, \qquad (26)$$
$$\big(\hat{x}_o := x_o + y_o s_o + f(s_o; x_o, y_o), \qquad \hat{y}_o := \lambda y_o + \lambda f'(s_o; x_o, y_o)\big).$$

It is clear that the collection of geodesics of a spray naturally forms a path space. Shen proves the converse in the following lemma ([9]). We also give the proof for convenience.

Lemma 4.4  A path space $\mathcal{G}$ induces a spray $G$ whose set of geodesics is $\mathcal{G}$.

Proof: Let $(\mathcal{G}, M)$ be a path space on a conical region $C$. For a given $y \in C_x$, there is a curve $\sigma: (-\epsilon, \epsilon) \to M$ in $\mathcal{G}$ with $\sigma(0) = x$, $\sigma'(0) = y$ by Definition 4.1 (i). Define
$$G^i(y) := -\frac{1}{2}\frac{d^2\sigma^i}{ds^2}(0),$$
which is independent of the choice of $\sigma$ by Definition 4.1 (ii). We are going to verify that $G$ is a spray. For any constant $\lambda > 0$, let $\eta(s) := \sigma(\lambda s) \in \mathcal{G}$ (see Definition 4.1 (iii)). Then we have
$$G^i(\lambda y) = -\frac{1}{2}\frac{d^2\eta^i}{ds^2}(0) = -\frac{1}{2}\lambda^2\frac{d^2\sigma^i}{ds^2}(0) = \lambda^2 G^i(y),$$
which implies that $G^i$ is positively homogeneous of degree two. Further, for any $\eta: (a, b) \to M$ in $\mathcal{G}$ and any fixed $t \in (a, b)$, define $\gamma(s) := \eta(s + t)$. Then we have
$$\eta'(t) = \gamma'(0), \qquad \eta''(t) = \gamma''(0).$$
So by the definition of $G^i$, we get
$$G^i\big(\eta'(t)\big) = G^i\big(\gamma'(0)\big) = -\frac{1}{2}\frac{d^2\gamma^i}{ds^2}(0) = -\frac{1}{2}\frac{d^2\eta^i}{ds^2}(t),$$
which implies that $\eta$ satisfies the following ODE:
$$\frac{d^2\eta^i}{ds^2} + 2G^i\Big(\frac{d\eta}{ds}\Big) = 0.$$
Therefore, $G$ is a spray, and the set of geodesics of $G$ coincides with $\mathcal{G}$.  Q.E.D.
In Lemma 4.4, different choices of the parametric variables (locally) induce a projective class $Proj(G)$ of $G$, each member of which is projective to $G$.

For a given path space $\mathcal{G}$, Lemma 4.4 induces a spray $G$. Then $G$ defines a semispray $\hat{G}$ (see [9]: p. 37), and the geodesics of $G$ and $\hat{G}$ are closely related (see [9]: Lemma 3.1.1). Therefore, in $\mathcal{G}$, any path can be locally expressed as
$$x^a = x^a(x^1; u, v), \qquad (u, v \in R^{n-1},\ 2 \le a \le n),$$
where $u, v$ are free constant parameters such that the Jacobian determinant is nonzero, namely,
$$\det\begin{pmatrix} \partial x^a/\partial u & \partial x^a/\partial v\\ \partial y^a/\partial u & \partial y^a/\partial v \end{pmatrix} \ne 0, \qquad \Big(y^a := \frac{dx^a}{dx^1}\Big).$$
So all paths in an n-dimensional path space depend only on $2(n-1)$ free constant parameters. Then we obtain Theorem 1.2 for the construction of sprays based on the parametric equations of path spaces.

If we write (4) in the form
$$x(t) = \sigma(\lambda t + \mu; u, v), \qquad (27)$$
where $\lambda, \mu$ are constants, then, with the $2n$ constant parameters $\lambda, \mu, u, v$, this family of curves satisfies Definition 4.1 (i)(ii)(iii). For instance, the 2-dimensional path space in Example 4.2 can be written as the following family of curves:
$$x(s) = \tau(\lambda s + \mu; b_o, v_o) = (0, b_o) + (1, v_o)(\lambda s + \mu) - \frac{1}{3}(0, 1)(\lambda s + \mu)^3,$$
$$\Big(\lambda := u, \quad \mu := a, \quad v_o := \frac{v}{u} + a^2, \quad b_o := b - av_o + \frac{1}{3}a^3\Big).$$

By Theorem 1.2, if the set $A$ of a family of curves on an n-dimensional manifold defines a path space, then $A$ depends on exactly $2(n-1)$ free constant parameters. For example, in $R^n$, all circles with a fixed radius cannot define a path space when $n \ge 3$, because in this case the circles depend on more than $2(n-1)$ free constant parameters.
608
+ Now we introduce a method of constructing a spray G determined by a path space
609
+ considered as the geodesics of G, which is similar to Okubo’s method for the construction of
610
+ a Finlser metric from a hypersurface as its indicatrix. We can start from a family of curves
611
+ given by (25) or (4) to determine a corresponding spray.
612
+ Method (I): For a family of curves given by (25) satisfying (26), actually we can reduce
613
+ one constant parameter since (25) can be written as (if y1
614
+ o ̸= 0)
615
+ x(t) = σ(t; xo, ¯yo) = xo + ¯yot + f(t; xo, ¯yo),
616
+
617
+ t := y1
618
+ os, ¯ya
619
+ o := ya
620
+ o/y1
621
+ o, ¯yo := (¯ya
622
+ o)
623
+
624
+ .
625
+ Let a path space be determined by (25) and we put
+ x = x_o + y_o s + f(s; x_o, y_o),   y (= dx/ds) = y_o + f′(s; x_o, y_o),   (28)
+ G^i := −(1/2) d^2x^i/ds^2 = −(1/2) f″^i(s; x_o, y_o).   (29)
+ Then we obtain a spray G from (29) by eliminating x_o, y_o, s in (29) from (28), where s is a
+ geodesic parameter of the spray G.
+ Method (II): Suppose that a family of curves is given by the parametric equation (4) with
+ 2(n − 1) free constant parameters u, v. This case is more convenient for constructing sprays.
+ With an auxiliary parameter c > 0, we put
+ x = σ(cs; u, v),   y = dx/ds = c (dσ/dŝ)(cs; u, v),   ŝ := cs.   (30)
+ Theoretically, we can express c, s, u, v as functions of x, y from (30). Then plugging them
+ into the following
+ G^i := −(1/2) d^2x^i/ds^2 = −(c^2/2) (d^2σ^i/dŝ^2)(cs; u, v),   (31)
+ we obtain a spray G given by (31), where s is a geodesic parameter of the spray G.
+ Now in the following Examples 4.5-4.8, we use Method (I) or Method (II) to show how
+ we construct sprays from given path spaces by eliminating the corresponding parameters.
+ Example 4.5 Consider a set G of a family of curves on R^3:
+ x(s) = (a, b, c) + (u, v, w)s − (0, 1, 0)h(s),   ( h(s) := (1/3)(u^3 + w^3)s^3 + (au^2 + cw^2)s^2 ),
+ where a, b, c, u, v, w are constant parameters. G is a path space. By (28) we get
+ x^1 = a + us,   x^3 = c + ws,   y^1 = u,   y^3 = w.   (32)
+ By (29), the induced spray G is given by
+ G^1 = −(1/2) d^2x^1/ds^2 = 0,   G^3 = −(1/2) d^2x^3/ds^2 = 0,
+ G^2 = −(1/2) d^2x^2/ds^2 = (u^3 + w^3)s + (au^2 + cw^2)
+     = (u^3 + w^3)s + [ (x^1 − us)u^2 + (x^3 − ws)w^2 ]   ( by (32) )
+     = x^1 u^2 + x^3 w^2 = x^1 (y^1)^2 + x^3 (y^3)^2   ( by (32) ).
+ G has zero Riemann curvature and so it is metrizable (a Finsler spray) ([11]).
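As a quick sanity check of Method (I) here, the family in Example 4.5 can be verified numerically to solve the geodesic equations of the displayed spray. This is an editorial sketch in plain Python with arbitrarily chosen parameter values (and with h(s) taken with the signs that make the displayed G^2 consistent with G^i = −(1/2) d^2x^i/ds^2):

```python
# Check d^2x^2/ds^2 + 2 G^2 = 0 along a curve of Example 4.5,
# where G^2 = x^1 (y^1)^2 + x^3 (y^3)^2 and G^1 = G^3 = 0.
a, b, c, u, v, w = 0.2, -0.5, 0.7, 1.1, 0.4, -0.6   # arbitrary parameters

h  = lambda s: (u**3 + w**3)*s**3/3 + (a*u**2 + c*w**2)*s**2
x2 = lambda s: b + v*s - h(s)

def d2(f, s, eps=1e-4):
    # central second difference
    return (f(s + eps) - 2*f(s) + f(s - eps))/eps**2

s = 0.3
x1, x3 = a + u*s, c + w*s        # from (32)
y1, y3 = u, w
G2 = x1*y1**2 + x3*y3**2
assert abs(d2(x2, s) + 2*G2) < 1e-5
```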
+ Example 4.6 Let G be the set of all circles with fixed radius r on R^2. We parameterize G
+ by
+ x^1(s) = a + r cos s,   x^2(s) = b + r sin s,
+ where a, b are arbitrary constant parameters. G depends on just two free constant parameters.
+ By Theorem 1.2, G defines a spray G on R^2 with s as a geodesic parameter of G. We show
+ the spray as follows. With an auxiliary parameter c > 0, it follows from (30) that
+ x^1 = a + r cos cs,   x^2 = b + r sin cs,   y^1 = −cr sin cs,   y^2 = cr cos cs.   (33)
+ Then plugging the latter two formulas of (33) into (31) yields a spray G given by
+ G^1 = (1/2) c^2 r cos cs = (1/(2r)) y^2 √( (y^1)^2 + (y^2)^2 ),
+ G^2 = (1/2) c^2 r sin cs = −(1/(2r)) y^1 √( (y^1)^2 + (y^2)^2 ).
+ This circle spray first appears in [9] (P49), and even locally it is not metrizable ([11]).
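The coefficients of this circle spray can be checked numerically. Below is an editorial sketch in plain Python (arbitrary a, b, r, and the unit-speed case c = 1) confirming that circles of radius r solve the geodesic equations of the spray with G^1 = y^2 |y| / (2r) and G^2 = −y^1 |y| / (2r), as dictated by G^i = −(1/2) d^2x^i/ds^2:

```python
import math

a, b, r = 0.5, -0.3, 2.0                 # arbitrary circle of radius r
x1 = lambda s: a + r*math.cos(s)         # c = 1 parameterization
x2 = lambda s: b + r*math.sin(s)

def d1(f, s, e=1e-6): return (f(s + e) - f(s - e))/(2*e)
def d2(f, s, e=1e-4): return (f(s + e) - 2*f(s) + f(s - e))/e**2

s = 0.9
y1, y2 = d1(x1, s), d1(x2, s)
n = math.hypot(y1, y2)                   # |y| (= r here)
G1 = y2*n/(2*r)
G2 = -y1*n/(2*r)
assert abs(d2(x1, s) + 2*G1) < 1e-5      # geodesic equations
assert abs(d2(x2, s) + 2*G2) < 1e-5
```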
+ Example 4.7 Consider a family of semicircles G on the positive semi-plane R^2_+ with center
+ on the x^1-axis and arbitrary radius. Note that G is singular in the direction parallel to the
+ x^2-axis. We can parameterize G by
+ x^1 = a + b cos s,   x^2 = b sin s,   (x^2 > 0, b ≥ 0),
+ where a, b are arbitrary constant parameters. G depends on just two free constant parameters.
+ By Theorem 1.2, G defines a spray G on R^2_+ with s as a geodesic parameter of G. With an
+ auxiliary parameter c > 0, by (30) we get
+ x^1 = a + b cos cs,   x^2 = b sin cs,   y^1 = −bc sin cs,   y^2 = bc cos cs.   (34)
+ Then similarly, by the elimination of the parameters a, b, c, s in (31) from (34), the spray G
+ with s being a geodesic parameter is given by
+ G^1 = −y^1 y^2 / (2x^2),   G^2 = (y^1)^2 / (2x^2).   (35)
+ The spray G is regular on R^2_+ (any straight lines parallel to the x^2-axis are geodesics of G). G
+ is of isotropic curvature, and locally it is not metrizable by the method in [2, 11].
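The elimination leading to (35) can be spot-checked numerically; this is an editorial sketch in plain Python with arbitrarily chosen a, b, c, verifying that the semicircles (34) satisfy the geodesic equations d^2x^i/ds^2 + 2G^i(x, dx/ds) = 0 of the spray (35):

```python
import math

a, b, c = 0.3, 1.2, 0.8                  # an arbitrary semicircle, c > 0
x1 = lambda s: a + b*math.cos(c*s)
x2 = lambda s: b*math.sin(c*s)

def d1(f, s, e=1e-6): return (f(s + e) - f(s - e))/(2*e)
def d2(f, s, e=1e-4): return (f(s + e) - 2*f(s) + f(s - e))/e**2

s = 0.5                                  # any s with x^2(s) > 0
y1, y2 = d1(x1, s), d1(x2, s)
G1 = -y1*y2/(2*x2(s))                    # spray (35)
G2 = y1**2/(2*x2(s))
assert abs(d2(x1, s) + 2*G1) < 1e-5
assert abs(d2(x2, s) + 2*G2) < 1e-5
```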
+ Example 4.8 Let B^n be the unit ball in R^n and G be all circle arcs in B^n which are
+ perpendicular to the boundary S^{n−1} = ∂B^n. Let s be the arc-length parameter of a circle arc
+ induced by the Euclidean metric. What is the spray G induced by G with s being a geodesic
+ parameter of G (see Example 4.1.4 in [9])? We will show that G is given by
+ G^i = [ ⟨x, y⟩ y^i − |y|^2 x^i ] / (1 − |x|^2),   (36)
+ which is not metrizable by [11]. Now for arbitrarily given p, q ∈ S^{n−1}, there is a circle arc
+ γ in G, in which γ is perpendicular to S^{n−1} at p, q. Let C be the circle with γ ⊂ C. The
+ center and radius of C are respectively given by
+ τ(p + q),   |p − τ(p + q)|,   ( τ := (1 + pq)^{−1} ),
+ where pq is the Euclidean inner product of p, q. Then γ is parameterized by the equation
+ x(s) = x(s; p, q) = [p − τ(p + q)] cos s + |p − τ(p + q)| p sin s + τ(p + q).   (37)
+ Since there are just 2(n − 1) free constant parameters in (37), the family of curves in the
+ form (37) defines a path space by Theorem 1.2. Now based on (30) and (31), we can give
+ the spray G from (37) with s being a geodesic parameter of G. With an auxiliary parameter
+ c > 0, by (30) we put
+ x = [p − τ(p + q)] cos cs + |p − τ(p + q)| p sin cs + τ(p + q),   (38)
+ y = −c[p − τ(p + q)] sin cs + c|p − τ(p + q)| p cos cs.   (39)
+ By (31) we have
+ 2G^i := c^2 [ [p − τ(p + q)]^i cos cs + |p − τ(p + q)| p^i sin cs ].   (40)
+ By a direct lengthy computation, we can eliminate the parameters p, q, c, s in (40) from (38)
+ and (39) (the details are omitted). Finally, the spray G is given by (36).
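Although the elimination is omitted above, (36) can be spot-checked numerically on a concrete arc. Below is an editorial sketch in plain Python for n = 2 with p = (1, 0), q = (0, 1) (so τ = 1, center (1, 1), radius 1, and (37) reduces to x(s) = (1 + sin s, 1 − cos s)), verifying the geodesic equations of (36) at an interior point of the arc:

```python
import math

x1 = lambda s: 1 + math.sin(s)           # arc (37) for p=(1,0), q=(0,1)
x2 = lambda s: 1 - math.cos(s)

def d1(f, s, e=1e-6): return (f(s + e) - f(s - e))/(2*e)
def d2(f, s, e=1e-4): return (f(s + e) - 2*f(s) + f(s - e))/e**2

s = -0.7                                  # a point of the arc inside B^2
x = (x1(s), x2(s))
y = (d1(x1, s), d1(x2, s))
xy = x[0]*y[0] + x[1]*y[1]                # <x, y>
yy = y[0]**2 + y[1]**2                    # |y|^2
xx = x[0]**2 + x[1]**2                    # |x|^2
G = tuple((xy*y[i] - yy*x[i])/(1 - xx) for i in range(2))   # spray (36)
assert abs(d2(x1, s) + 2*G[0]) < 1e-5
assert abs(d2(x2, s) + 2*G[1]) < 1e-5
```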
+ 5  Projectively complete sprays
+ For a given spray G, if we know the general solutions of all geodesics of G, then under another
+ parameter as a geodesic parameter, we can determine a corresponding spray projectively
+ related to G. Now suppose that the general solutions of geodesics of G are locally given by
+ x = σ(t) = σ(t; u, v),   ( u, v ∈ R^{n−1} ),   (41)
+ where t is a geodesic parameter of G and u, v are free constant parameters. Sometimes, it is
+ also convenient to put u = σ(0), v = σ′(0) for the elimination of parameters. Make a change
+ of the variables from t to s with
+ t = t(s) = t(s; u, v),   dt/ds > 0.   (42)
+ With an auxiliary parameter c > 0, we put
+ x = σ(t(cs); u, v),   y = dx/ds = c (dσ/dt)(dt/ds),   (43)
+ where dt/ds, as a function of s, takes the value at cs. Further, we have
+ d^2x^i/ds^2 = c^2 (d^2σ^i/dt^2)(dt/ds)^2 + c^2 (dσ^i/dt)(d^2t/ds^2)
+             = −2G^i(x, dσ/dt) c^2 (dt/ds)^2 + c^2 (dσ^i/dt) [d(dt/ds)/dt] (dt/ds)
+             = −2G^i(x, y) + c [d(dt/ds)/dt] y^i.   (44)
+ Expressing c, t in terms of x, y from (43), and then plugging c, t into (44), we obtain a spray
+ ¯G given by
+ ¯G^i = G^i − (1/2) [d(dt/ds)/dt] c y^i = G^i + P y^i,   (45)
+ ( P = P(x, y) := −(1/2) [d(dt/ds)/dt] c ),
+ with s being a geodesic parameter of ¯G.
+ Lemma 5.1 Suppose that the general solutions of geodesics of a spray G are given by (41).
+ Let s be another parameter related to t by (42). Then a spray ¯G projective to G with s being
+ its geodesic parameter is given by (45), where c, t are determined by (43).
+ Under certain conditions, a spray can be made projectively (positively/negatively) complete,
+ which is shown in Theorem 1.3. Now we give the proof of Theorem 1.3.
+ Proof of Theorem 1.3 : Let G be a spray on a manifold M. For an arbitrary geodesic
+ x = x(t), suppose that t belongs to the maximal interval I given by (5).
+ If I = (a, +∞) or I = (−∞, b), we respectively make a change of the variables from t to
+ s by
+ s = ln(1 − t/a),   or   s = −ln(1 − t/b),   (46)
+ either of which gives s(0) = 0, s′(t) > 0 and the maximal interval of s with s ∈ (−∞, +∞).
+ If I = (a, b) and we make a change by (46), then we respectively have
+ s ∈ ( −∞, ln(1 − b/a) ),   or   s ∈ ( −ln(1 − a/b), +∞ ).
+ If I = (a, b), make a change of the variables from t to s by
+ s = ln [ (1 − t/a) / (1 − t/b) ],   or   s = tan [ (π/(b − a)) (t − (a + b)/2) ] + tan [ ((b + a)/(b − a)) (π/2) ],   (47)
+ either of which gives s(0) = 0, s′(t) > 0 and the maximal interval of s with s ∈ (−∞, +∞).
+ Therefore, by the change (46) or (47), we obtain a (positively/negatively) complete spray
+ which is projective to G. This completes the proof.
+ Q.E.D.
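The interval arithmetic behind (46) and (47) is easy to check numerically. Here is an editorial sketch in plain Python, with hypothetical endpoints a = −1, b = 2, confirming that the first change of variables in (47) satisfies s(0) = 0, is strictly increasing, and exhausts (−∞, +∞) as t runs over (a, b):

```python
import math

a, b = -1.0, 2.0
s = lambda t: math.log((1 - t/a) / (1 - t/b))   # first change in (47)

assert abs(s(0.0)) < 1e-12                      # s(0) = 0
ts = [a + (b - a)*k/1000 for k in range(1, 1000)]
vals = [s(t) for t in ts]
# strictly increasing on (a, b) ...
assert all(v2 > v1 for v1, v2 in zip(vals, vals[1:]))
# ... and unbounded towards both ends of the interval
assert vals[0] < -5 and vals[-1] > 5
```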
+ As an application of Theorem 1.3, we give the following Examples 5.2-5.4 to show the
+ construction of the (positively/negatively) complete sprays projective to given sprays.
+ Example 5.2 Let F be the Funk metric on a strongly convex domain Ω ⊂ R^n. The
+ Minkowski spray G = 0 on Ω has its geodesics given by
+ x(t) = vt + u,   ( −1/F(u, −v) < t < 1/F(u, v) ),
+ where u, v ∈ R^n are arbitrary constant vectors. By (46), put t = t(s) as
+ s = −ln [ 1 − tF(u, v) ].   (48)
+ With s being a geodesic parameter, we obtain a projectively flat and positively complete spray
+ ¯G, which will be shown to be the Finsler spray induced by F, namely,
+ ¯G^i = (1/2) F y^i.   (49)
+ Actually, it follows from (2) and (11) that
+ dt/ds = 1/F(u, v) − t = 1/F(vt + u, v).   (50)
+ Then (43) gives
+ x = vt + u,   y = cv (dt/ds).   (51)
+ It is clear from (51) and (50) that
+ F(x, y) = cF(vt + u, v)(dt/ds) = c.   (52)
+ Therefore, by (45), (50) and (52), the spray ¯G is given by (49).
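For the simplest concrete instance, the Funk metric of the interval (−1, 1) ⊂ R^1 is F(x, y) = y/(1 − x) for y > 0, and (48)-(49) can be checked numerically. This is an editorial sketch in plain Python (arbitrary u, v) verifying that the reparameterized line solves the geodesic equation x″ + 2¯G = x″ + F(x, x′) x′ = 0 of the Funk spray (49):

```python
import math

F = lambda x, y: y/(1 - x)               # Funk metric of (-1, 1), y > 0
u, v = 0.2, 1.0
t = lambda s: (1 - math.exp(-s))/F(u, v) # inverts (48): s = -ln(1 - t F(u, v))
x = lambda s: u + v*t(s)

def d1(f, s, e=1e-6): return (f(s + e) - f(s - e))/(2*e)
def d2(f, s, e=1e-4): return (f(s + e) - 2*f(s) + f(s - e))/e**2

s = 1.3
y = d1(x, s)
# geodesic equation of \bar G = (1/2) F(x, y) y
assert abs(d2(x, s) + F(x(s), y)*y) < 1e-5
```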
+ Example 5.3 In Example 5.2, by (47), put t = t(s) as
+ s = ln [ (1 + tF(u, −v)) / (1 − tF(u, v)) ].   (53)
+ With s being a geodesic parameter, we obtain a projectively flat and complete spray ¯G, which
+ will be shown to be the Finsler spray induced by the Klein metric ¯F(x, y) := [ F(x, y) + F(x, −y) ]/2,
+ namely,
+ ¯G^i(x, y) = (1/2) [ F(x, y) − F(x, −y) ] y^i.   (54)
+ Firstly, by (53), we get
+ dt/ds = [ 1 − tF(u, v) ][ 1 + tF(u, −v) ] / [ F(u, v) + F(u, −v) ],   (55)
+ d(dt/ds)/dt = [ F(u, −v) − F(u, v) − 2tF(u, v)F(u, −v) ] / [ F(u, v) + F(u, −v) ].   (56)
+ Secondly, (43) gives
+ x = vt + u,   y = cv (dt/ds),
+ from which we have
+ F(x, y) = F(vt + u, v) c (dt/ds),   F(x, −y) = F(vt + u, −v) c (dt/ds).   (57)
+ Plugging (11), (14) and (55) into (57), we obtain
+ c = F(x, y) + F(x, −y),   t = [ F(x, y)F(u, −v) − F(x, −y)F(u, v) ] / [ F(u, v)F(u, −v)(F(x, y) + F(x, −y)) ].   (58)
+ Finally, by (58) and (56), it follows from (45) that the spray ¯G is given by (54).
+ Example 5.4 For the spray G in Example 5.2, we will introduce a different way from that
+ in Example 5.3 to make G complete, which is actually to use (46) to make complete the
+ Finsler spray induced by the Funk metric F. For a geodesic x(t) = vt + u of G, put
+ s = ln [ 1 − ln(1 − tF(u, v))/a ],   a := ln [ 1 + F(u, v)/F(u, −v) ].
+ In a similar way to that for the computation in Example 5.3, we obtain a projectively flat
+ and complete spray ¯G given as follows:
+ ¯G^i(y) = G_F^i(y) + (1/2) [ F(y) / ln( F(−y)/(F(y) + F(−y)) ) ] y^i,   ( G_F^i(y) := (1/2) F(y) y^i ).
+ ¯G is of scalar curvature and actually we can verify that ¯G is not metrizable by using the
+ method in [2].
+ Example 5.5 For the family of semicircles G on R^2_+ as shown in Example 4.7, we can
+ parameterize them in the following form
+ x^1 = u − v sin t,   x^2 = v cos t,   (x^2 > 0, v ≥ 0),   (59)
+ where u, v are arbitrary constant parameters. We have shown in Example 4.7 that the spray
+ G determined by G is given by (35), that is,
+ G^1 = −y^1 y^2 / (2x^2),   G^2 = (y^1)^2 / (2x^2).   (60)
+ We can make G projectively complete on the conical region C with the direction (0, 1)
+ being deleted from TR^2_+ \ {0}. Since −π/2 < t < π/2 for any u, v in (59), by (47), we let
+ s = tan t.
+ Then by (45), we get a complete spray ¯G projective to G with the projective factor P being
+ given by
+ P = c [ −(1/2) d(dt/ds)/dt ]_{t=cs} = c [ cos t sin t ]_{t=cs} = c^2 s / (1 + c^2 s^2).   (61)
+ Now it follows from (43) that
+ x^1 = u − vcs/√(1 + c^2 s^2),   x^2 = v/√(1 + c^2 s^2),
+ y^1 = −vc/(1 + c^2 s^2)^{3/2},   y^2 = −vc^2 s/(1 + c^2 s^2)^{3/2},
+ from which we get
+ s = −x^2 y^2 / [ (y^1)^2 + (y^2)^2 ],   c = −[ (y^1)^2 + (y^2)^2 ] / (x^2 y^1).
+ Plugging s, c in the above into (61) yields P = −y^2/x^2. Thus the spray ¯G is given by
+ ¯G^1 = G^1 + P y^1 = −3y^1 y^2 / (2x^2),   ¯G^2 = G^2 + P y^2 = [ (y^1)^2 − 2(y^2)^2 ] / (2x^2).
+ ¯G is complete on the conical region C but not complete in the direction (0, 1). We don’t know
+ whether the spray G in (60) can be projectively complete or not on TR^2_+ \ {0}. Besides, ¯G
+ is of isotropic curvature, and locally it is not metrizable by the method in [2, 11].
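The projective change in this example can also be spot-checked numerically. Below is an editorial sketch in plain Python with arbitrarily chosen u, v, c, verifying that the semicircles of (59), reparameterized through s = tan t as above, solve the geodesic equations of the displayed spray ¯G:

```python
import math

u, v, c = 1.5, 0.9, 1.2
t  = lambda s: math.atan(c*s)            # s = tan t, evaluated at cs as in (43)
x1 = lambda s: u - v*math.sin(t(s))
x2 = lambda s: v*math.cos(t(s))

def d1(f, s, e=1e-6): return (f(s + e) - f(s - e))/(2*e)
def d2(f, s, e=1e-4): return (f(s + e) - 2*f(s) + f(s - e))/e**2

s = 0.4
y1, y2 = d1(x1, s), d1(x2, s)
Gb1 = -3*y1*y2/(2*x2(s))                 # \bar G^1 of Example 5.5
Gb2 = (y1**2 - 2*y2**2)/(2*x2(s))        # \bar G^2 of Example 5.5
assert abs(d2(x1, s) + 2*Gb1) < 1e-5
assert abs(d2(x2, s) + 2*Gb2) < 1e-5
```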
+ References
+ [1] R. Bryant, Geodesically reversible Finsler 2-spheres of constant curvature, Inspired by
+ S. S. Chern, 95-111, Nankai Tracts Math., 11, World Sci. Publ., Hackensack, NJ, 2006.
+ [2] I. Bucataru and Z. Muzsnay, Finsler metrizable isotropic sprays and Hilbert’s Fourth
+ Problem, J. Aust. Math. Soc. 97 (2014), 27-47.
+ [3] S. G. Elgendi and Z. Muzsnay, Metrizability of holonomy invariant projective
+ deformation of sprays, Canad. Math. Bull., 2020.
+ [4] L. Huang and X. Mo, On geodesics of Finsler metrics via navigation problem, Proc.
+ Amer. Math. Soc., 139 (8) (2011), 3015-3024.
+ [5] Y. Li, X. Mo and Y. Yu, Inverse problem of sprays with scalar curvature, Intern. J.
+ Math. 30(6), 2019.
+ [6] B. Li and Z. Shen, Sprays of isotropic curvature, Intern. J. Math., 2019.
+ [7] C. Robles, Geodesics in Randers spaces of constant curvature, Trans. Amer. Math.
+ Soc. 359 (2007), 1633-1651.
+ [8] Z. Shen, On projectively related Einstein metrics in Riemann-Finsler geometry, Math.
+ Ann., 320 (2001), 625-647.
+ [9] Z. Shen, Differential Geometry of Spray and Finsler Spaces, Kluwer Academic
+ Publishers, Dordrecht, 2001.
+ [10] G. Yang, Some classes of sprays in projective spray geometry, Diff. Geom. Appl., 29
+ (2011), 606-614.
+ [11] G. Yang, On sprays of scalar curvature and metrizability, J. Geom. Anal., 2022 (in
+ press).
+ [12] G. Yang, Sprays on Hamel-Funk functions model, preprint.
+ Guojun Yang
+ Department of Mathematics
+ Sichuan University
+ Chengdu 610064, P. R. China
+ yangguojun@scu.edu.cn
1NAyT4oBgHgl3EQfofhG/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
2tAzT4oBgHgl3EQfRvvX/content/tmp_files/2301.01222v1.pdf.txt ADDED
@@ -0,0 +1,1230 @@
 
+ A Multi-Source Information Learning Framework
+ for Airbnb Price Prediction
+ Lu Jiang1, Yuanhan Li1, Na Luo1, Jianan Wang2,∗, Qiao Ning3,∗
+ 1Information Science and Technology, Northeast Normal University, Changchun
+ 2College of Physics, Northeast Normal University, Changchun
+ 3Information Science and Technology, Dalian Maritime University, Dalian
+ {jiangl761, liyh447, luon110, wangjn}@nenu.edu.cn, ningq669@dlmu.edu.cn
+ ∗Corresponding author
+ Abstract—With the development of technology and the sharing
+ economy, Airbnb, as a famous short-term rental platform, has
+ become the first choice for many young people. The issue of
+ Airbnb’s pricing has always been a problem worth studying.
+ While previous studies achieve promising results, deficiencies
+ remain to be solved, such as: (1) the feature attributes of
+ rentals are not rich enough; (2) the research on rental text
+ information is not deep enough; (3) there are few studies on
+ predicting the rental price combined with the points of
+ interest (POI) around the house. To address the above challenges,
+ we propose a multi-source information embedding (MSIE) model
+ to predict the rental price of Airbnb. Specifically, we first select
+ statistical features to embed the original rental data. Secondly,
+ we generate word feature vectors and emotional scores from
+ three different kinds of text information to form the text
+ feature embedding. Thirdly, we use the points of interest (POI)
+ around each rental house to generate a variety of spatial
+ network graphs, and learn the embedding of the networks to
+ obtain the spatial feature embedding. Finally, we combine
+ the three modules into multi-source rental representations, and
+ use a fully connected neural network to predict the price. The
+ analysis of the experimental results shows the effectiveness of
+ our proposed model.
+ I. INTRODUCTION
+ Accommodation sharing systems are being introduced to
+ more and more cities recently, and therefore they have gener-
+ ated huge amounts of data. Airbnb is an online marketplace
+ for sharing homes and experiences which suffers from a
+ chaotic pricing problem. Tenants need to know the reasonable
+ price of a rental house to prevent being deceived. The
+ homeowner needs to customize a reasonable price for their
+ short-term rental house to attract more customers. Therefore,
+ Airbnb price prediction plays a key role in accommodation
+ sharing systems. However, the rapid increase in the number
+ of tenants and homeowners makes traditional manual-based
+ methods [1] time-consuming and inefficient. Computational
+ methods have received more attention for accurate Airbnb
+ price prediction [2].
+ Computational methods for price prediction can be mainly
+ divided into two categories: (1) feature-based methods [3], and
+ (2) deep learning methods [4, 5]. In feature-based methods,
+ various types of feature extraction strategies are utilized to
+ extract price-correlated features for tenants and homeowners.
+ Feature-based methods transform price prediction into a ma-
+ chine learning task, using methods such as support vector
+ machines (SVM) and random forests. For instance, in order to
+ distinguish from the traditional method of formulating prices,
+ Li et al. select rough set (RS) and SVM algorithms to establish
+ a new mathematical model of pricing on the basis of hedonic
+ price [6]. Kalehbasti et al. proposed a price prediction model
+ using machine learning, deep learning, and natural language
+ processing techniques to embed the features of the rentals,
+ owner characteristics, and the customer reviews [7]. Deep
+ learning methods use multi-layer neural networks to
+ map the correlation between input features and output results.
+ For instance, Chen et al. applied an autoregressive integrated
+ moving average model to generate the baseline while using
+ LSTM networks to build the prediction model [8].
+ However, research on Airbnb price prediction based on
+ feature-based methods considers a single feature in most
+ cases. With the development of representation learning [9, 10],
+ spatial embedding [11, 12] has received more attention.
+ There has been work to model the statistical features, text
+ features and spatial features related to housing prices, but there
+ is no unified framework to integrate the above features. Based
+ on the above observations, we propose a prediction model
+ based on multi-source information embedding to study the
+ Airbnb price problem. The major contributions are summa-
+ rized below.
+ • Firstly, in order to obtain the best feature set, this paper
+ selects the features of the house itself to obtain statistical
+ information features.
+ • Secondly, the text information in this paper is divided into
+ three categories, and the house description and landlord
+ introduction are converted into feature matrices. The tenant
+ reviews are then converted into sentiment scores for
+ each house.
+ • Then, we use different types of point-of-interest (POI)
+ data and houses to form various spatial network graphs
+ and learn their network embeddings to obtain spatial
+ information features.
+ • Finally, the three types of feature embeddings are combined
+ into multi-source housing features as input, and the neural
+ network constructed in this paper is used for price
+ prediction. The effectiveness of our model is demonstrated
+ on two real datasets.
+ arXiv:2301.01222v1 [cs.LG] 1 Jan 2023
+
+ II. PRELIMINARY
+ We first introduce some key definitions and the problem
+ definition. Then, we present the overview of the proposed
+ method.
+ A. Definitions and Problem Statement
+ Definition 1. Statistic Feature The statistic feature con-
+ structed by our model is S = (s1, s2, ..., sn), where si
+ is the preprocessed listing feature vector, including ’host since’,
+ ’host is superhost’, ’verification’, etc.
+ Definition 2. Text Feature There are three types of text
+ features: listing description, host introduction, and tenant
+ review. We convert the listing description and host introduction
+ into feature vectors L = (l1, l2, ..., ln) and H = (h1, h2, ..., hn),
+ and transform the tenant reviews into sentiment scores R
+ = (r1, r2, ..., rn). Thus, we define the text features as T =
+ (L, H, R).
+ Definition 3. Spatial Feature We first combine each rental
+ house and the POIs within 1,000 m around it into a spatial
+ network G = (V, E, W). Then, we learn the network embedding
+ through SDNE [13], and get the spatial feature matrix P
+ = (p1, p2, ..., pn).
+ Definition 4. Problem Statement In this paper, we study the
+ problem of Airbnb price prediction. We formulate the problem
+ as a multi-source feature embedding task. Formally, we aim
+ to find a mapping function f : (S, T, P) → V that takes
+ the statistic feature S, text feature T, and spatial feature P as
+ input, and outputs a unified vectorized representation V for
+ predicting the specific listing price.
+ B. Framework Overview
+ Figure 1 shows the overall framework of the multi-source
+ feature embedding. Specifically, we embed the original data
+ from three aspects. (1) For the statistical feature embedding,
+ we use Lasso CV to select the feature set from the rental
+ house features. (2) For the text feature embedding, we divide
+ the text features into three categories, including house descrip-
+ tion, landlord introduction and tenant comments. Through a
+ negative-sampling CBOW model, the house description and
+ landlord introduction are converted into word feature vectors,
+ and a Bayesian model based on the naive Bayes principle is
+ used to convert tenant comments into emotional scores; we
+ combine them as the text feature. (3) For the spatial feature
+ embedding, we collect different types of POIs, and combine
+ the POIs of each house and the surrounding area within
+ 1,000 m into a spatial network. The spatial feature is learned
+ through the SDNE model. (4) The three different features are
+ combined into a multi-source feature and input into the neural
+ network to obtain the final rental price.
+ III. MULTI-SOURCE INFORMATION LEARNING
+ In this section, we introduce the core architecture of our
+ framework as follows: (1) statistic feature embedding; (2) text
+ feature embedding; (3) spatial feature embedding.
+ A. Statistics Feature Embedding
+ Each house’s statistic feature is represented by a 245-
+ dimensional vector which describes the listing of a house,
+ including listing id, host id, host since, host response rate,
+ host is superhost, host has profile pic, host identity verified,
+ bathrooms, bedrooms, latitude, longitude, accommodates,
+ security deposit, guests included, verification, etc.
+ We use Lasso CV to do the feature set selection. The
+ loss function is defined as follows:
+ obj = (1/2) Σ_{i=1}^{n} ( y_i − w^T x_i )^2 + α Σ_{j=1}^{m} |w_j|   (1)
+ where n is the number of houses, m is the number of
+ parameters, α is the regularization coefficient, α Σ_{j=1}^{m} |w_j| is
+ the L1 regularization term, y_i is the rental price, x_i is the
+ statistical feature vector of a rental house, w is the coefficient
+ matrix of rental house features, and x_i = s_i. The statistical
+ feature matrix is S = (s1, s2, ..., sn). Lasso CV can compress
+ the coefficients of unimportant features to 0, realizing the
+ purpose of feature selection, and ultimately leaving the
+ important statistical feature set that this paper wants.
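The selection step in (1) can be sketched with scikit-learn's LassoCV. This is a hedged illustration on synthetic data (the random matrix X stands in for the 245-dimensional listing features, and w_true is a made-up ground truth), showing how coefficients of uninformative features are driven to 0:

```python
# Sketch: L1-regularized feature selection as in (1), assuming scikit-learn.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # stand-in for listing features
w_true = np.array([3.0, 0, 0, -2.0, 0, 0, 0, 1.5, 0, 0])
y = X @ w_true + 0.01*rng.normal(size=200)   # stand-in for prices

model = LassoCV(cv=5).fit(X, y)          # alpha chosen by cross-validation
selected = np.flatnonzero(model.coef_ != 0)  # surviving feature indices
assert {0, 3, 7} <= set(int(i) for i in selected)
```

The informative columns (0, 3, 7 here) keep nonzero coefficients, while most noise columns are zeroed out.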
+ B. Text Feature Embedding
+ We extract three types of text data from the original data:
+ listing description, host introduction and tenant review.
+ The listing description mainly introduces the location of the
+ house, the surrounding environment, the indoor layout and the
+ housing regulations, etc. The host introduction mainly describes
+ the age, height, occupation, hobbies and personality of the
+ host, and the tenant reviews express the tenants’ feelings about
+ the housing rentals and their evaluation of the host’s attitude.
+ Since tenant reviews contain emotional value, we use two
+ different methods to model the text features. We first use the
+ CBOW [14] model to embed the text features of the listing
+ description and host introduction. We select the preprocessed
+ Wikipedia Chinese thesaurus as the training corpus W; the
+ objective function is defined as follows:
+ L = Σ_{c∈W} [ log σ( x_c^T θ^c ) + Σ_{u∈NEG(c)} log σ( −x_c^T θ^u ) ]   (2)
+ Then the above objective function is optimized by using the
+ stochastic gradient ascent method to obtain:
+ L(c, u) = L^c(u) log [ σ( x_c^T θ^u ) ] + [1 − L^c(u)] log [ 1 − σ( x_c^T θ^u ) ]   (3)
+ Then calculate the gradient of L(c, u) to obtain:
+ v(c̃) := v(c̃) + η Σ_{u∈{c}∪NEG(c)} ∂L(c, u)/∂x_c   (4)
+ Fig. 1: Framework Overview. (The original listing data are preprocessed and embedded from
+ three sources: statistics information embedding via feature selection; text information embedding,
+ which turns the listing information and host introduction into feature vectors and the tenant
+ reviews into a sentiment score; and geospatial information embedding of listings and POIs. The
+ embeddings are merged into a multi-source listing feature used to predict the price.)
+ In this paper, we set the dimension of the word vectors to
+ 100. l and h represent the embeddings of the listing description
+ and host introduction, respectively:
+ l = (1/Z) Σ_{i=1}^{Z} v(c̃_l)   (5)
+ h = (1/z) Σ_{i=1}^{z} v(c̃_h)   (6)
+ Therefore, we get the text information feature matrices of the
+ listing description and host introduction: L = (l1, l2, ..., ln) and
+ H = (h1, h2, ..., hn).
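The averaging in (5)/(6) can be sketched directly. Below is a hedged toy illustration (the 4-dimensional vectors and the words are made up, standing in for 100-dimensional CBOW vectors v(c̃)) of how one description embedding l is obtained:

```python
# Sketch of (5): a description embedding as the mean of its word vectors.
import numpy as np

word_vec = {                      # assumed pre-trained CBOW word vectors
    "quiet": np.array([0.1, 0.4, 0.0, 0.2]),
    "room":  np.array([0.3, 0.0, 0.1, 0.1]),
    "near":  np.array([0.0, 0.2, 0.5, 0.1]),
    "metro": np.array([0.2, 0.2, 0.1, 0.4]),
}

def text_embedding(tokens):
    # l = (1/Z) * sum_i v(c~_l) over the Z in-vocabulary tokens
    vecs = [word_vec[t] for t in tokens if t in word_vec]
    return np.mean(vecs, axis=0)

l = text_embedding(["quiet", "room", "near", "metro"])
assert np.allclose(l, [0.15, 0.2, 0.175, 0.2])
```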
313
+ For the tenant review embedding, since reviews contain strong emotional expression, and in order to reflect whether a tenant's evaluation of a house is positive or negative, we use the naive Bayes method to generate a sentiment score r ∈ [0, 1] for each house, where 0 represents negative and 1 represents positive. Specifically, the probability that a tenant review text belongs to the positive class can be expressed as:
+ P(pos | c1, ..., cd) = P(c1, ..., cd | pos) P(pos) / P(c1, ..., cd)        (7)
+ After simplifying the above formula, we obtain:
+ P(pos | c1, ..., cd) = 1 / (1 + γ)        (8)
+ In this work, a text is a single tenant review, and a listing has many reviews, so the sentiment score of a listing can be expressed as:
+ r = (1/q) ∑_{i=1}^{q} P(pos | c1, ..., cd)        (9)
+ where q is the number of reviews of a rental, d is the total number of words in a review, and P(pos | c1, ..., cd) is the probability that the review belongs to the positive class. Therefore, the sentiment score vector of tenant reviews can be expressed as R = (r1, r2, ..., rn).
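Eqs. (7)-(9) can be sketched in log space to avoid underflow. The class-conditional word probabilities below are hand-picked illustrative assumptions (a real naive Bayes model would estimate them from labeled reviews):

```python
import math

def review_pos_prob(words, logp_w_pos, logp_w_neg, logp_pos, logp_neg):
    """P(pos | c1..cd) in the 1/(1+gamma) form of Eq. (8)."""
    log_pos = logp_pos + sum(logp_w_pos.get(w, math.log(1e-6)) for w in words)
    log_neg = logp_neg + sum(logp_w_neg.get(w, math.log(1e-6)) for w in words)
    gamma = math.exp(log_neg - log_pos)  # odds of the negative class
    return 1.0 / (1.0 + gamma)

def listing_sentiment(reviews, params):
    """Eq. (9): average the per-review positive probabilities."""
    return sum(review_pos_prob(r, **params) for r in reviews) / len(reviews)

# toy class-conditional word probabilities, illustrative only
params = {
    "logp_w_pos": {"clean": math.log(0.9), "dirty": math.log(0.1)},
    "logp_w_neg": {"clean": math.log(0.1), "dirty": math.log(0.9)},
    "logp_pos": math.log(0.5),
    "logp_neg": math.log(0.5),
}
r = listing_sentiment([["clean"], ["dirty"]], params)
```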
345
+ C. Spatial Feature Embedding
+ We propose a method to learn spatial embeddings. First, POIs are divided into 8 different types, and the rental houses together with the surrounding POIs of each type form spatial networks. We then learn network embeddings of these spatial graphs with the SDNE model. This method can accurately capture the spatial features related to important POIs such as scenic spots and railway stations.
353
+ We use the great-circle distance to calculate the weight W between a house and a POI as follows:
+ W = R · arccos(dis) · π/180        (10)
+ where dis = sin(LatA) sin(LatB) + cos(LatA) cos(LatB) cos(LonA − LonB). The two types of nodes, A and B,
359
+ represent the rental houses and POIs, respectively. LonA and
374
+ LonB are their longitudes, LatA and LatB are their latitudes, and R is the average radius of the Earth, taken as 6371.004 km.
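The listing-POI edge weight of Eq. (10) can be sketched as below. This version converts degree inputs to radians up front, which is equivalent to the paper's π/180 scaling, and clamps the cosine to guard against floating-point rounding:

```python
import math

EARTH_RADIUS_KM = 6371.004  # average Earth radius used in the paper

def great_circle_km(lat_a, lon_a, lat_b, lon_b):
    """Spherical distance between a listing and a POI, cf. Eq. (10).

    Inputs are latitude/longitude in degrees; the central angle is
    computed on the unit sphere and scaled by the Earth radius.
    """
    la, lb = math.radians(lat_a), math.radians(lat_b)
    dlon = math.radians(lon_a - lon_b)
    cos_c = math.sin(la) * math.sin(lb) + math.cos(la) * math.cos(lb) * math.cos(dlon)
    cos_c = max(-1.0, min(1.0, cos_c))  # guard against rounding outside [-1, 1]
    return EARTH_RADIUS_KM * math.acos(cos_c)
```

For example, two points a quarter of the equator apart yield R·π/2 ≈ 10007.5 km.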
377
+ In the SDNE model, the encoder maps xi to yi(k) and the decoder maps yi(k) back to x̂i, where yi(k) is the node embedding of vi; in this paper, yi(k) = pi. The encoder is given by:
+ yi(k) = σ(W(k) yi(k−1) + b(k))        (11)
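A minimal sketch of the stacked encoder of Eq. (11), with y(0) set to the node's input vector xi. The weights, dimensions, and sigmoid nonlinearity below are illustrative assumptions (SDNE is normally trained jointly with its decoder):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sdne_encode(x, weights, biases):
    """Stack of Eq. (11) layers: y(k) = sigma(W(k) y(k-1) + b(k)), y(0) = x."""
    y = x
    for W, b in zip(weights, biases):
        y = sigmoid(W @ y + b)
    return y

# toy dimensions: 4-dimensional adjacency row -> 2-dimensional embedding p_i
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((3, 4)), rng.standard_normal((2, 3))]
bs = [np.zeros(3), np.zeros(2)]
p_i = sdne_encode(np.array([0.0, 1.0, 1.0, 0.0]), Ws, bs)
```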
396
+ Therefore, the spatial embedding can be expressed as P = (p1, p2, ..., pn). Having obtained the statistics embedding, the text embedding, and the spatial embedding, we combine them into a multi-source feature M = (S, T, P) and use a fully connected neural network to predict the rental price. We take the multi-source feature matrix M = (m1, m2, ..., mn) as the input of the neural network and compute:
+ y = wT m + b,    A = σ(y)        (12)
+ where y is the rental price, m is the multi-source feature, w is the parameter matrix, and b is the bias term. σ is the activation function; we use the ReLU function. Finally, the output layer uses a single neuron to produce the predicted price ŷi.
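The forward pass of Eq. (12) can be sketched as below. The layer sizes and random weights are illustrative assumptions; in the paper the network is trained in PyTorch, whereas this sketch only shows the inference computation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def predict_price(m, hidden_weights, hidden_biases, w_out, b_out):
    """Eq. (12): linear layers with ReLU activations, then one output neuron."""
    a = m
    for W, b in zip(hidden_weights, hidden_biases):
        a = relu(W @ a + b)
    return float(w_out @ a + b_out)

# toy example: 5-dimensional multi-source feature, one hidden layer of width 3
rng = np.random.default_rng(1)
m = rng.standard_normal(5)
W1, b1 = rng.standard_normal((3, 5)), np.zeros(3)
w_out, b_out = rng.standard_normal(3), 0.0
y_hat = predict_price(m, [W1], [b1], w_out, b_out)
```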
411
+ IV. EXPERIMENT
+ In this section, we first introduce two real datasets and the evaluation metrics. Then, we design experiments to answer the following three questions:
+ • Q1. How does our proposed MSIE perform on the Airbnb price prediction task?
+ • Q2. How do different feature combinations affect the price prediction performance?
+ • Q3. What are the key influences on the Airbnb price?
421
+ We collect the dataset from an open online airbnb website.
422
+ Table I shows the statistics of our two real airbnb datasets
423
+ from two cities: Beijing and Shanghai after preprocess.
424
+ TABLE I: Statistics of the data
425
+ City
426
+ # Houses
427
+ # Reviews
428
+ Time Period
429
+ Beijing
430
+ 10779
431
+ 191876
432
+ 01/2017-06/2019
433
+ Shanghai
434
+ 8638
435
+ 159069
436
+ 01/2020-07/2021
437
+ Besides, we also collect the POIs of Beijing and Shanghai, summarized in Table II. We divide them into 8 categories: Education, Entertainment, Food, Beverage Shopping, Tourist, Transportation, Medical Service, and Public Service.
441
+ B. Evaluation Metrics
+ We evaluate the model performance in terms of the following metrics.
444
+ TABLE II: POI Categories
+ Number   POI Category Name    #Beijing   #Shanghai
+ 1        Education            8711       2635
+ 2        Entertainment        6501       2607
+ 3        Food                 5744       6301
+ 4        Beverage Shopping    6601       5632
+ 5        Tourist              6713       4176
+ 6        Transportation       4322       1753
+ 7        Medical Service      3660       2862
+ 8        Public Service       5976       3699
481
+ (1) Mean Absolute Error (MAE) represents the average of the absolute value of the error between the predicted value and the true value:
+ MAE = (1/n) ∑_{i=1}^{n} |ŷi − yi|        (13)
491
+ (2) Mean Squared Error (MSE) is a measure of the closeness of the predicted value relative to the actual value:
+ MSE = (1/n) ∑_{i=1}^{n} (ŷi − yi)^2        (14)
500
+ (3) Root Mean Squared Error (RMSE) is defined as follows:
+ RMSE = sqrt( (1/n) ∑_{i=1}^{n} (ŷi − yi)^2 )        (15)
513
+ where ŷ is the predicted price from the regression and y is the actual price. The lower the RMSE, the better the method.
+ (4) The coefficient of determination (R2) converts the prediction results into an accuracy measure; the results lie in [0, 1]:
+ R2 = 1 − ∑_{i=1}^{n} (ŷi − yi)^2 / ∑_{i=1}^{n} (ȳ − yi)^2        (16)
+ The higher the value of R2, the more accurate the estimation method.
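The four metrics of Eqs. (13)-(16) can be computed in a few lines; a minimal sketch:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE and R2 as in Eqs. (13)-(16)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))                       # Eq. (13)
    mse = np.mean(err ** 2)                          # Eq. (14)
    rmse = np.sqrt(mse)                              # Eq. (15)
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                       # Eq. (16)
    return mae, mse, rmse, r2
```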
525
+ C. Baseline Algorithms
+ To prove the effectiveness of our model, we compare our method with the following algorithms.
+ (1) XGBOOST [15]: Extreme Gradient Boosting is an improvement on the boosting algorithm based on the Gradient Boosting Decision Tree that makes it faster and more efficient.
+ (2) RF [16]: Random Forest integrates multiple trees through the idea of ensemble learning; its basic unit is the decision tree.
+ (3) SVR [17]: Support Vector Regression applies the support vector machine to regression problems.
+ (4) TAPE [18]: TAPE analyzes the relationship between the description of each rental and its price, and adds a geographical factor component to recommend a reasonable price for each new rental of the landlord.
540
+ Fig. 2: Overall comparison on Beijing dataset. (Bar charts of (a) MAE, (b) MSE, (c) RMSE, and (d) R2 for XGB, RF, SVR, TAPE, GSNE, and MSIE.)
598
+ Fig. 3: Overall comparison on Shanghai dataset. (Bar charts of (a) MAE, (b) MSE, (c) RMSE, and (d) R2 for XGB, RF, SVR, TAPE, GSNE, and MSIE.)
657
+ (5) GSNE [19]: GSNE is a geospatial embedding framework that can accurately capture the geospatial neighborhood relationship between houses and surrounding POIs. Essentially, it learns low-dimensional Gaussian embeddings of the geospatial network nodes and can be combined with a regression method, which benefits house price prediction.
+ Besides, our proposed model has three variants with different feature-set combinations: (1) MSIE-S, which utilizes the statistics features; (2) MSIE-ST, which utilizes the statistics and text features; (3) MSIE-STP, which utilizes the statistics, text, and spatial features. We evaluate these three variants alongside our full model.
670
+ In the experiment, we split the dataset into two non-overlapping sets: for all records, the earliest 80% form the training set and the remaining 20% the test set. We implement the model in PyTorch and run the code on Windows 10 with an Intel(R) Core(TM) i7-7700HQ @ 2.80 GHz and 8 GB of memory.
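The chronological 80/20 split described above can be sketched as follows; the record structure and field names are illustrative assumptions:

```python
def chronological_split(records, train_frac=0.8):
    """Sort records by timestamp; the earliest fraction becomes the
    training set and the remainder the test set."""
    ordered = sorted(records, key=lambda r: r["time"])
    cut = int(len(ordered) * train_frac)
    return ordered[:cut], ordered[cut:]

# toy records with out-of-order timestamps
records = [{"time": t, "price": p}
           for t, p in zip([3, 1, 2, 5, 4], [30, 10, 20, 50, 40])]
train, test = chronological_split(records)
```

Splitting by time rather than at random avoids leaking future listings into the training set.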
676
+ D. Overall Performances
+ We present the results for MAE, MSE, RMSE, and R2, compared with the baseline algorithms. Figure 2 and Figure 3 show that our proposed method MSIE outperforms the baselines on both the Beijing and Shanghai datasets. Lower values of MAE, MSE, and RMSE, and a higher value of R2, indicate better performance. In all cases, we observe an improvement with respect to the baseline algorithms, especially on MSE and R2. One interesting observation is that the traditional machine learning methods (XGB, RF, and SVR) perform better than TAPE. We attribute this to the fact that the feature engineering proposed in this paper is well designed and generalizes across methods.
+ Fig. 4: The loss curve on the two datasets ((a) Beijing, (b) Shanghai).
692
+ Besides, we use the fully connected neural network constructed above as the prediction model. To prevent overfitting, we set 128 neurons in the input layer, use 2 hidden layers, and set the number of epochs to 120 and the batch size to 256. Figure 4 shows the loss curves of neural network training on the two datasets.
698
+ E. Robustness Check
+ We evaluate the contribution of each feature embedding to the learned representations. As control groups, we develop variants of the proposed MSIE, namely MSIE-S, MSIE-ST, and MSIE-STP, which take different combinations of feature sets as input while all other components remain the same. Table III and Table IV show the comparison results. We can observe
707
+ TABLE III: The feature combination on Beijing dataset.
+ Feature set   MAE      MSE      RMSE     R2
+ MSIE-S        0.3652   0.2341   0.4839   0.5545
+ MSIE-ST       0.2941   0.1688   0.4109   0.6786
+ MSIE-STP      0.2905   0.1669   0.4086   0.6824
760
+ TABLE IV: The feature combination on Shanghai dataset.
+ Feature set   MAE      MSE      RMSE     R2
+ MSIE-S        0.4003   0.2852   0.5340   0.5824
+ MSIE-ST       0.3512   0.2371   0.4869   0.6527
+ MSIE-STP      0.3310   0.2065   0.4544   0.6977
781
+ that MSIE-STP outperforms MSIE-S and MSIE-ST on all four metrics over both datasets. The results validate that integrating the text and spatial features indeed enhances the modeling of price prediction.
786
+ F. Analysis of Key Influence
+ In order to analyze the key feature influences on price, following previous studies, we use three feature selection methods: manual selection, P-value [20], and Lasso CV [21]. We use R2 as the indicator for this analysis, and the results are shown in Figure 5. The best result is obtained by using Lasso CV to select features from the original data.
+ Fig. 5: The R2 with different feature selection methods ((a) Beijing, (b) Shanghai).
796
+ Then, we select the 20 features with the highest correlation for rental price prediction according to the P-value, and use Lasso CV to select features to obtain the statistics information used as input for prediction. Figure 6 shows the 10 features most correlated with rental price in the two datasets.
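A minimal sketch of ranking features by their correlation with price, a simple stand-in for the P-value ranking described above (the synthetic data and feature names are illustrative assumptions, not the paper's data):

```python
import numpy as np

def top_correlated_features(X, y, names, k=10):
    """Rank columns of X by absolute Pearson correlation with y, keep top k."""
    corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    order = np.argsort(corrs)[::-1][:k]
    return [names[j] for j in order]

# synthetic data: price is driven almost entirely by "accommodates"
rng = np.random.default_rng(2)
n = 200
accommodates = rng.integers(1, 8, n).astype(float)
noise = rng.standard_normal(n)
y = 3.0 * accommodates + 0.1 * noise
X = np.column_stack([accommodates, noise])
ranked = top_correlated_features(X, y, ["accommodates", "noise"], k=2)
```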
801
+ V. RELATED WORK
+ In this work, we propose a model to predict the price of listings on Airbnb. Some studies have used sentiment analysis to study Airbnb. Martinez R. D. et al. studied the relationship between an Airbnb host's listing description and occupancy rates through sentiment analysis [22]. Zhang et al. proposed a text analytics framework to study the relationship among self-description, trust perception, and purchase behavior on Airbnb; they used text mining to extract sentiment intensity and regression methods to identify the impact of linguistic and semantic features on trust perception [23].
+ Fig. 6: The features most relevant to price ((a) Beijing, (b) Shanghai).
815
+ Kalehbasti P. R. et al. used sentiment analysis and machine learning to predict the price of listings on Airbnb [24].
+ Many prior works have studied price using listing information. Wang et al. studied the relationship between a price and its determinants through various listing attributes (e.g., host identity verified, accommodates, wireless Internet, amenities and services, and free parking) [25]. P. Choudhary et al. analyzed Airbnb listings in San Francisco to better understand how different attributes (e.g., bedrooms, location, and listing type) can be used to accurately predict the price of a new listing that is optimal in terms of the host's profitability yet affordable to their guests [26]. Shen et al. analyzed the relationship between the description of each listing and its price, and proposed a text-based model to recommend a reasonable price for newly added listings [27]. Tang et al. labeled text information for nine handpicked classes, extracted image-related features, and finally used all features to predict a listing's neighborhood and its price [28].
834
+ VI. CONCLUSION
+ In this paper, we propose a prediction model based on multi-source information embedding to study the Airbnb price problem. Specifically, in order to obtain the best feature set, we first select the features of the house itself to obtain statistics information features. Second, the text information is divided into three categories: the house description and landlord introduction are converted into feature matrices, and the tenant reviews are converted into a sentiment score for each house. Then, we use different types of point-of-interest (POI) data together with the houses to form various spatial network graphs and learn their network embeddings to obtain spatial information features. Finally, we combine these three types of feature embeddings into a multi-source housing feature used as input, and the neural network constructed in this paper is used for price prediction. The effectiveness of our model is demonstrated on two real datasets. For future work, we plan to combine some heuristic methods [29, 30] to further improve performance.
853
+ ACKNOWLEDGMENTS
1121
+ This work is supported by the Natural Science Research Foundation of Jilin Province of China under Grant No. YDZJ202201ZYTS415, the Fundamental Research Funds for the Central Universities 2412019ZD013, and NSFC (under Grant Nos. 61976050 and 61972384).
1126
+ REFERENCES
+ [1] S. Rosen, "Hedonic prices and implicit markets: Product differentiation in pure competition," Journal of Political Economy, vol. 82, no. 1, pp. 34-55, Jan.-Feb. 1974.
+ [2] P. Morano and F. Tajani, "Bare ownership evaluation. Hedonic price model vs. artificial neural network," Int. J. Bus. Intell. Data Min., vol. 8, no. 4, pp. 340-362, 2013.
+ [3] X. Xu, Z. Huang, J. Wu, Y. Fu, N. Luo, W. Chen, J. Wang, and M. Yin, "Finding the key influences on the house price by finite mixture model based on the real estate data in Changchun," in DASFAA, vol. 11448, 2019, pp. 378-382.
+ [4] X. Xu, Y. Fu, J. Wu, Y. Wang, Z. Huang, Z. Fu, and M. Yin, "Adaptive weighted finite mixture model: Identifying the feature-influence of real estate," Trans. Data Sci., vol. 1, no. 3, pp. 20:1-20:16, 2020.
+ [5] Y. Fu, Y. Ge, Y. Zheng, Z. Yao, Y. Liu, H. Xiong, and J. Yuan, "Sparse real estate ranking with online user reviews and offline moving behaviors," in ICDM, 2014, pp. 120-129.
+ [6] Y.-q. Li, T. Wang, and S.-f. Zhao, "Application of SVM based on rough set in real estate prices prediction," in WiCOM, 2008, pp. 1-4.
+ [7] P. R. Kalehbasti, L. Nikolenko, and H. Rezaei, "Airbnb price prediction using machine learning and sentiment analysis," in Lecture Notes in Computer Science, 2021, pp. 173-184.
+ [8] X. Chen, L. Wei, and J. Xu, "House price prediction using LSTM," CoRR, vol. abs/1709.08432, 2017.
+ [9] P. Wang, Y. Fu, J. Zhang, X. Li, and D. Lin, "Learning urban community structures: A collective embedding perspective with periodic spatial-temporal mobility graphs," ACM Trans. Intell. Syst. Technol., vol. 9, no. 6, pp. 63:1-63:28, 2018.
+ [10] P. Wang, Y. Fu, H. Xiong, and X. Li, "Adversarial substructured representation learning for mobile user profiling," in SIGKDD. ACM, 2019, pp. 130-138.
+ [11] P. Wang, K. Liu, L. Jiang, X. Li, and Y. Fu, "Incremental mobile user profiling: Reinforcement learning with spatial knowledge graph for modeling event streams," in KDD, 2020.
+ [12] P. Wang, Y. Fu, J. Zhang, P. Wang, Y. Zheng, and C. C. Aggarwal, "You are how you drive: Peer and temporal-aware representation learning for driving behavior analysis," in SIGKDD, 2018, pp. 2457-2466.
+ [13] D. Wang, P. Cui, and W. Zhu, "Structural deep network embedding," in SIGKDD, 2016, pp. 1225-1234.
+ [14] Q. Luo, W. Xu, and J. Guo, "A study on the CBOW model's overfitting and stability," in Web-KR@CIKM. ACM, 2014, pp. 9-12.
+ [15] T. Chen and C. Guestrin, "Xgboost: A scalable tree boosting system," in KDD, 2016.
+ [16] L. Breiman, "Random forests," Mach. Learn., vol. 45, no. 1, pp. 5-32, 2001.
+ [17] M. Awad and R. Khanna, Support Vector Regression. Berkeley, CA: Apress, 2015, pp. 67-80.
+ [18] L. Shen, Q. Liu, G. Chen, and S. Ji, "Text-based price recommendation system for online rental houses," Big Data Min. Anal., vol. 3, no. 2, pp. 143-152, 2020.
+ [19] S. S. S. Das, M. E. Ali, Y. Li, Y. Kang, and T. Sellis, "Boosting house price predictions using geo-spatial network embedding," Data Min. Knowl. Discov., vol. 35, no. 6, pp. 2221-2250, 2021.
+ [20] R. Feise, "Do multiple outcome measures require p-value adjustment?" BMC Med Res Methodol., vol. 2, no. 8, 2002.
+ [21] H. Zou, "The adaptive lasso and its oracle properties," Journal of the American Statistical Association, vol. 101, no. 476, pp. 1418-1429, 2006.
+ [22] R. D. Martinez, A. Carrington, T. Kuo, L. Tarhuni, and N. Abdel-Motaal, "The impact of an Airbnb host's listing description 'sentiment' and length on occupancy rates," 2017.
+ [23] L. Zhang, Q. Yan, and L. Zhang, "A text analytics framework for understanding the relationships among host self-description, trust perception and purchase behavior on Airbnb," Decision Support Systems, vol. 133, p. 113288, 2020.
+ [24] P. R. Kalehbasti, L. Nikolenko, and H. Rezaei, "Airbnb price prediction using machine learning and sentiment analysis," 2019.
+ [25] D. Wang and J. L. Nicolau, "Price determinants of sharing economy based accommodation rental: A study of listings from 33 cities on airbnb.com," International Journal of Hospitality Management, vol. 62, pp. 120-131, 2017.
+ [26] P. Choudhary, A. Jain, and R. Baijal, "Unravelling Airbnb: Predicting price for new listing," Papers, 2018.
+ [27] L. Shen, Q. Liu, G. Chen, and S. Ji, "Text-based price recommendation system for online rental houses," Big Data Mining and Analytics, vol. 3, no. 2, pp. 143-152, 2020.
+ [28] E. Tang and K. Sangani, "Neighborhood and price prediction for San Francisco Airbnb listings," 2015.
+ [29] Y. Wang, S. Cai, J. Chen, and M. Yin, "SCCWalk: An efficient local search algorithm and its improvements for maximum weight clique problem," Artif. Intell., vol. 280, p. 103230, 2020.
+ [30] S. Pan, Y. Ma, Y. Wang, Z. Zhou, J. Ji, M. Yin, and S. Hu, "An improved master-apprentice evolutionary algorithm for minimum independent dominating set problem," Frontiers of Computer Science, vol. 17, no. 4, pp. 1-14, 2023.
1230
+
2tAzT4oBgHgl3EQfRvvX/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
49FIT4oBgHgl3EQf7St_/content/tmp_files/2301.11397v1.pdf.txt ADDED
@@ -0,0 +1,1532 @@
+ IEEE TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS, MANUSCRIPT ID
+ Automating Knowledge-Driven Model Recommendation: Methodology, Evaluation, and Key Challenges
+ Adam A. Butchy, Cheryl A. Telmer, and Natasa Miskov-Zivanov
+ Abstract—There is significant interest in using existing repositories of biological entities, relationships, and models to automate biological model assembly and extension. Current methods aggregate human-curated biological information into executable, simulatable models, but these models do not resemble human-curated models and do not recapitulate experimental results. Here, we outline the process of automated model assembly and extension, while demonstrating it on both synthetic models and human-curated models of biological signaling networks. We begin with an iterative, greedy, and combinatoric approach to automated assembly and demonstrate the key difficulties inherent to contextless assembly. We publicly release the software used in this paper to enable further exploration of this problem.
+ Index Terms—Automatic Model Creation; Biological Networks; Extending Biological Networks; Model Construction; Network Reconstruction.
+ 1 INTRODUCTION
19
+ omputational approaches to modeling large complex
20
+ systems standardize the representation of knowledge,
21
+ while simulation of computational models illuminates the
22
+ dynamics of systems, allowing for discoveries and theoret-
23
+ ical advances [1]. Due to the complexity and redundancy
24
+ of biological systems, computational models are difficult
25
+ and laborious to create and update. There are two main ap-
26
+ proaches to modeling these systems, bottom-up and top-
27
+ down [2]. In a bottom-up approach, known molecular in-
28
+ teractions are assembled into a model to help explain the
29
+ system’s behavior and predict how the system will re-
30
+ spond to new stimuli or inputs. This method has been used
31
+ extensively by biologists, biochemists, and molecular biol-
32
+ ogists to manually create models based on the interactions
33
+ within cells involved in signaling that are supported by sci-
34
+ entific literature. In a top-down approach, experimental
35
+ data—usually collected with high-throughput methods—
36
+ is used to infer correlations between element behavior and
37
+ determine causal relationships. Top-down approaches em-
38
+ ploy many different methods such as Bayesian Inference
39
+ [3], ANOVA calculations [4], and Fuzzy Logic [5]. In both
40
+ the mechanistic bottom-up approach and the data-driven
41
+ top-down approach, the model is used to predict the be-
42
+ havior of individual elements in the network [6, 7]. Re-
43
+ cently, there has been a push to integrate the two methods,
44
+ using experimental data to inform the bottom-up ap-
45
+ proach, and incorporating prior knowledge into the top-
46
+ down approach to reduce the number of potential models
47
+ [8-12]. Despite these hybrid approaches, this problem re-
48
+ mains a combinatoric one, with large, complex systems be-
49
+ ing prohibitively difficult to investigate and model manu-
50
+ ally.
51
+ It is a direct result of these factors that systems and computational biologists have endeavored to automate the
+ process of model creation and extension. To automatically create models, information can be extracted from
+ literature, queried from databases, or taken from existing pathways and models. Public databases such as Reactome
+ [13], MetaCyc [14], OmniPath [15], and STRING [16] offer easy access to millions of interactions. Additionally,
+ there exist a number of publicly available model databases with published models, such as the Nature Pathway
+ Interaction Database [17], WikiPathways [18], BioModels [19], the Cell Collective [20], and KEGG pathways [21].
+ These databases contain highly targeted, curated published and unpublished models that were created for a specific
+ biological context and may not generalize to explain other phenomena. When new interactions are discovered and
+ described in a scientific publication, state-of-the-art machine reading engines such as REACH [22], TRIPS [23], and
+ EVEX [24] can extract them, together with other relevant information. These automated readers are able to extract
+ tens of thousands of biological entity interactions from hundreds of papers in a few hours and produce a
+ machine-readable, structured output [22]. Despite this abundance of available interactions, there is still no
+ efficient way to assemble them into accurate models that correctly reflect the system under investigation and its
+ biological context and that recapitulate the observed experimental behavior.
+ Recently, a few tools, such as Path2Models [25] and INDRA [26, 27], have been created to help modelers collect
+ biological interactions, assemble a model, and perform
+ • A.A. Butchy is with the Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213. E-mail: adam.butchy@pitt.edu.
+ • C.A. Telmer is with the Department of Biological Sciences, Carnegie Mellon University, Pittsburgh, PA 15213. E-mail: ctelmer@cmu.edu.
+ • N. Miskov-Zivanov is with the Departments of Electrical and Computer Engineering, Bioengineering, and Computational Biology, University of Pittsburgh, Pittsburgh, PA 15213. E-mail: nmzivanov@pitt.edu.
+ IEEE TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS, MANUSCRIPT ID
+ simulations. These tools assemble quantitative and qualitative models using available pathway information; however,
+ the quality of the assembled models depends on the modeling approach and on the granularity of the information they
+ are given. These techniques rely on accurate information, and their performance suffers when the interaction
+ information is incomplete, from a different biological context, or erroneous. Other methods have been proposed to
+ automatically expand, test, and select the best model with respect to a given performance metric. These approaches
+ integrate stochastic model simulations with statistical model checking alone [28], or additionally incorporate
+ Markov clustering [29] or a genetic algorithm [30], and therefore have different strengths and weaknesses. The
+ Markov clustering approach to model extension is well suited to the combinatorial explosion in the number of
+ possible model extensions, whereas the genetic algorithm approach is overwhelmed by a large number of extensions.
+ Markov clustering prioritizes strongly connected components at the expense of interactions involving nodes of low
+ degree. The genetic algorithm explores the effect of single extensions distributed throughout the network.
+ In this work, we examine the complexities inherent to automatic model assembly and extension. We use two novel
+ algorithms, Breadth First Addition (BFA) and Depth First Addition (DFA), which utilize the same principles as the
+ breadth-first search and depth-first search algorithms in network studies [31], to illustrate the key limitations
+ of iterative model assembly and extension. In contrast to previous work [28-30], these methods not only represent a
+ new approach to bottom-up model assembly but are also used to demonstrate the existence of key biological
+ properties that hinder automated modeling of biological systems. We demonstrate these properties using both
+ synthetic networks, Erdős-Rényi random networks (ER) [32] and Barabási-Albert scale-free networks (BA) [33], as
+ well as two published, expert-curated and validated models: a T cell large granular lymphocyte (TLGL) leukemia
+ model [34] and a model of naïve T cell differentiation (Tcell) [35]. By using different network structures, we are
+ able to explore automated model assembly more comprehensively and identify the main difficulties with the BFA and
+ DFA approaches.
+ 2 METHODS
+ 2.1 Discrete Models and Simulations
+ The underlying structure of the models we study here is a network G(V, E), where V is a set of nodes (model
+ elements) and E is a set of directed edges (regulatory influences between elements). A few toy examples of such
+ networks are shown in Figure 1 (A). Model elements usually represent proteins, genes, chemicals, or biological
+ processes. For each model element v_i ∈ V (i = 1..N, where N = |V|), we define an update rule
+ v_i = f_{v_i}(v_1, v_2, ..., v_N), which can either be a constant (for input nodes in network G) or depend on a
+ subset of elements from V. In the latter case, for each element v_i this subset is often referred to as the
+ influence set of v_i, and it consists of its positive (activating) and negative (inhibiting) regulators. Positive
+ regulators of v_i comprise the set V_pos^i and are represented with regular arrowheads in Figure 1 (A). Negative
+ regulators of v_i comprise the set V_neg^i and are represented with blunt arrowheads in Figure 1 (A).
+ The high-throughput retrieval of interaction information from literature typically includes only the sign of an
+ influence (positive or negative) and rarely additional information about the relationships between regulators. In
+ such cases, logic functions and elements with two levels, 0 (low) and 1 (high), have been found most suitable. To
+ broaden the application beyond Boolean functions to cases where interactions are enriched through manual curation
+ or more specific information retrieval, we assume that each element v_i can have L_i discrete levels. While the
+ choice of function does not affect the main algorithms described in Section 2.2, in order to simulate models and
+ closely approximate different functions, including Boolean ones, we adopted the common approach that computes a
+ (weighted) sum of regulator values to determine element update values. The general form of this function is:
+
+ g_{v_i} = f_{v_i}(v_1, v_2, ..., v_N) = Σ_{v_p ∈ V_pos^i} w_p v_p − Σ_{v_n ∈ V_neg^i} w_n v_n    (1)
+
+ The weighting factors w_p and w_n can be used to account for different influence strengths of regulators. To remain
+ within the boundaries of the allowed levels for element v_i (0..L_i − 1), the function g_{v_i} is then used to
+ determine a suitable increment/decrement for v_i, δ_{v_i} = f(g_{v_i}), such that:
+
+ v_{i,next} = { 0,               if v_i + δ_{v_i} ≤ 0
+              { v_i + δ_{v_i},   if 0 < v_i + δ_{v_i} < L_i − 1    (2)
+              { L_i − 1,         if v_i + δ_{v_i} ≥ L_i − 1
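As an illustration, one discrete update step of the form given in Equations 1 and 2 can be sketched in a few lines of Python. The sign-based choice of the increment δ is an assumption of this sketch; the text only requires δ to be some function of g:

```python
def next_value(v, pos, neg, levels, pos_w=None, neg_w=None):
    """One discrete update step for an element (Equations 1-2).

    v            -- current value of the element
    pos, neg     -- current values of its positive / negative regulators
    levels       -- number of allowed discrete levels, L_i
    pos_w, neg_w -- optional influence weights (default: all 1)
    """
    pos_w = pos_w or [1] * len(pos)
    neg_w = neg_w or [1] * len(neg)
    # Equation 1: weighted sum of activators minus weighted sum of inhibitors
    g = sum(w * x for w, x in zip(pos_w, pos)) - sum(w * x for w, x in zip(neg_w, neg))
    # One possible choice of delta = f(g): the sign of g (an assumption here)
    delta = (g > 0) - (g < 0)
    # Equation 2: clamp the updated value to the range 0 .. L_i - 1
    return max(0, min(levels - 1, v + delta))
```

For example, an element at level 1 with two active positive regulators and no active inhibition steps up to level 2, while an element at 0 under pure inhibition stays at 0.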
+ Together, the set of model elements V, the element influences forming the set E, and the set of element update
+ rules F comprise an Executable Model, M(V, E, F), a model that includes all the necessary information for
+ simulation and dynamic analysis.
+ We use the Discrete, Stochastic, Heterogeneous simulator (DiSH) [36], which allows for simulations of discrete
+ models with various types of update functions and provides several different simulation schemes, either
+ deterministic or stochastic. For the analysis conducted here, we used the USB-RSQ simulation scheme in DiSH
+ (uniform, step-based, random-order, sequential update scheme, described in detail in [36]). It has been shown
+ previously [36, 37] that, by taking into account the randomness in the timing of signaling events, the USB-RSQ
+ simulation scheme is able to recapitulate the network dynamics within cells. DiSH simulates a model starting from
+ an initial state q_{M,0} = (s_{v_1,0}, s_{v_2,0}, ..., s_{v_N,0}) (assigned before simulation), where s_{v_i,0}
+ denotes the state value of element v_i at time point t = 0, for a pre-defined number of time steps, T (e.g., until
+ the steady state is reached). Each such simulation run, r, yields for every model element v_i ∈ V a trajectory of
+ values, s_{v_i}^r = (s_{v_i,1}^r, s_{v_i,2}^r, ..., s_{v_i,T}^r), where s_{v_i,t}^r is the state value of element
+ v_i at time point t (t = 1, .., T) within run r. Due to the randomness of the update scheme, element trajectories
+ may vary across multiple runs that
+ start with the same initial state. Therefore, for the same time step t, following the approach from [36], we
+ compute the mean of the values s_{v_i,t}^r across different runs to obtain average trajectories for all elements.
+ More formally, we compute the average element trajectory of element v_i as:
+
+ s̄_{v_i} = (1/R) Σ_{r=1}^{R} s_{v_i}^r = (1/R) Σ_{r=1}^{R} (s_{v_i,1}^r, s_{v_i,2}^r, ..., s_{v_i,T}^r) = (s̄_{v_i,1}, s̄_{v_i,2}, ..., s̄_{v_i,T})    (3)
+ where R is the overall number of conducted simulation runs. For example, in Figure 1 (B), we illustrate simulation
+ trajectories for elements of the toy models in Figure 1 (A). We denote the average model state for model M(V, E, F)
+ at time step t as a vector of the average element states at time step t:
+
+ q_{M,t}^avg = (s̄_{v_1,t}, s̄_{v_2,t}, ..., s̄_{v_N,t})    (4)
+
+ We define the model behavior resulting from a specific initial model state q_{M,0} = (S_1, S_2, ..., S_N) as:
+
+ Q_M = (q_{M,0}, q_{M,1}^avg, ..., q_{M,T}^avg)    (5)
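Equations 3 and 4 amount to element-wise averaging over runs; a minimal sketch, where each run is assumed to be a dictionary mapping element names to value trajectories:

```python
def average_trajectories(runs):
    """Element-wise mean over R simulation runs (Equation 3).

    runs -- list of R runs, each mapping element name -> list of T values.
    Returns element name -> averaged trajectory of length T.
    """
    R = len(runs)
    return {
        v: [sum(run[v][t] for run in runs) / R for t in range(len(runs[0][v]))]
        for v in runs[0]
    }

def average_state(avg_traj, t):
    """Average model state at time step t (Equation 4), ordered by element name."""
    return tuple(avg_traj[v][t] for v in sorted(avg_traj))
```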
+ 2.2 Extension method inputs
+ We define here the inputs used by the extension methods and by our evaluation methodology: the Baseline Model, the
+ Golden Model, and the Candidate Knowledge.
+ Existing models of a system of interest are often leveraged and contextualized for a specific purpose. The Baseline
+ Model is the existing, high-confidence model before updating with extensions. As a special case, we can also assume
+ that the Baseline Model is an empty network with no nodes or edges. The Golden Model is assumed to contain all
+ relevant knowledge about the system, including accurate element relationships and update functions. The Candidate
+ Knowledge is a set of directed edges, including their source and target nodes, which are candidates for addition to
+ the Baseline Model.
+ Given the Golden Model knowledge, through simulations for different initial states representing different
+ conditions and scenarios, we can obtain the Golden Model behavior, Q_GM, as in Equation 5. Q_GM represents the true
+ expected behavior of the system being modeled. As part of Q_GM, we also obtain the average Golden Model state at
+ the final simulation time step T (e.g., the steady state), q_{GM,T}^avg.
+ The above definition of the Golden Model is important for the rest of our discussion, since the Golden Model is
+ used as an input to our evaluation methodology. However, in real scenarios, the Golden Model is usually not known
+ in advance. Instead, the goal of model assembly and extension algorithms is to discover the Golden Model, while
+ only the real system behavior, i.e., measured state values for system components, may be available. The system
+ state data can be used to form the target behavior Q̂. Ideally, the Golden Model behavior is identical to the
+ target behavior, Q_GM = Q̂. The target state at time T is part of the target behavior and is denoted q̂_T.
+ As detailed in the following sub-sections, extension algorithms start with the Baseline Model, for which
+ Q_BM ≠ Q̂. Next, they add selected edges from the Candidate Knowledge to create new models, called Candidate
+ Models, which are then iteratively updated and simulated to obtain Q_CM in each iteration, and to ultimately find a
+ model that most closely reproduces the target behavior Q̂.
+ Figure 1. A toy example illustrating directed cyclic network models explored in this work and the flow of the
+ proposed methodology for evaluating extension algorithms. (A) (top) An example Golden Model used in evaluation;
+ (middle) example input graphs, Candidate Knowledge, and Baseline Model, used in extension methods ([28-30] and this
+ work); (bottom) an example Candidate Model recommended by extension methods. (B) Average element trajectories
+ obtained from stochastic simulation for the three example models (Golden, Baseline, and Candidate). (C) An example
+ iterative procedure that uses the Total Model Error (TME) metric to evaluate each intermediate Candidate Model.
+ 2.3 Model evaluation metric
+ Given two models, M_1 and M_2, if they have the same element sets, V_M1 ≡ V_M2 ≡ V (N = |V|), and if we simulate
+ them starting from the same initial state, q_{M1,0} = q_{M2,0} = (s_{v_1,0}, s_{v_2,0}, ..., s_{v_N,0}), to obtain
+ their behaviors, Q_M1 and Q_M2, respectively, we can compute the difference between the two model behaviors,
+ Δ_t(Q_M1, Q_M2), at any simulation time step t as:
+
+ Δ_t(Q_M1, Q_M2) = Σ_{i=1}^{N} |s̄_{v_i,t}^{M1} − s̄_{v_i,t}^{M2}|    (6)
+
+ In other words, Δ_t finds the absolute difference between an element's average state at time step t in model M_1
+ (s̄_{v_i,t}^{M1}) and in model M_2 (s̄_{v_i,t}^{M2}) and sums these differences across all model elements.
+ From (6), we derive the Total Model Error (TME) metric as Δ_T, when t = T, between a Candidate Model behavior Q_CM
+ and a known target behavior Q̂:
+
+ TME(Q_CM, Q̂) = Δ_T(Q_CM, Q̂) = Σ_{i=1}^{N} |s̄_{v_i,T}^{CM} − ŝ_{v_i,T}|    (7)
+
+ Or, in the case when a Golden Model is used:
+
+ TME(Q_CM, Q_GM) = Δ_T(Q_CM, Q_GM) = Σ_{i=1}^{N} |s̄_{v_i,T}^{CM} − s̄_{v_i,T}^{GM}|    (8)
+
+ Besides the above defined Δ_t, other types of functions could be used to compute the difference between two models,
+ such as the squared error, or more statistics-based evaluation methods like the Chi-squared test to compare the
+ Figure 2. The Breadth First and Depth First Addition (BFA and DFA, respectively) algorithms. Top: the pseudocode
+ for the two algorithms. Bottom: an example illustrating the Candidate Knowledge and Baseline Model inputs and the
+ steps of the BFA and DFA algorithms. (A, D) The inputs to the BFA and DFA algorithms. (B) In the BFA extension
+ process, the Baseline Model is extended with single interactions from the Candidate Knowledge and the TME is
+ calculated for each Candidate Model. The Candidate Model with the lowest TME is selected and becomes the Baseline
+ Model for the next iteration. (E) In the DFA extension process, the Baseline Model is extended with a single
+ interaction from the Candidate Knowledge and the TME is calculated to determine whether the Candidate Model has a
+ lower TME than the Baseline Model. As soon as the TME decreases, that edge of Candidate Knowledge is incorporated
+ into the Candidate Model, which becomes the Baseline Model for the next iteration. (C, F) For both algorithms, the
+ process is repeated with the remaining Candidate Knowledge until all edges are added back, the TME reaches zero, or
+ there are no edges that reduce the TME below its current lowest value.
+ Algorithm: Breadth First Addition (BFA)
+ Input: baseline model (M_BM), list of edges (E_NEW), TME of the baseline model (TME_BM), expected behavior of the golden model (Q_GM)
+ Output: extended baseline model that minimizes the TME
+ 1:  while (TME_BM != 0) and (E_NEW != []):
+ 2:      initialize scores = []
+ 3:      for edge in E_NEW:
+ 4:          M_CM = candidate model created by adding the edge to M_BM
+ 5:          simulate M_CM
+ 6:          TME_CM = TME(Q_GM, Q_CM)   // compare the candidate model to the
+ 7:                                     // expected behavior of the golden model
+ 8:          append TME_CM to the scores list
+ 9:      end for
+ 10:     index = argmin(scores)
+ 11:     TME_CM = scores[index]
+ 12:     if TME_CM < TME_BM:
+ 13:         M_BM = candidate model created by adding edge E_NEW[index] to M_BM
+ 14:         E_NEW.delete(index)
+ 15:         TME_BM = TME_CM
+ 16:     else:
+ 17:         return M_BM
+ 18:     end if
+ 19: end while
+ 20: return M_BM
+
+ Algorithm: Depth First Addition (DFA)
+ Input: baseline model (M_BM), list of edges (E_NEW), expected behavior of the golden model (Q_GM), current TME (TME_BM)
+ Output: extended baseline model that minimizes the TME
+ 1:  while (TME_BM != 0) and (E_NEW != []):
+ 2:      E_ADDED = FALSE
+ 3:      for edge in E_NEW:
+ 4:          M_CM = candidate model created by adding the edge to M_BM
+ 5:          simulate M_CM
+ 6:          TME_CM = TME(Q_GM, Q_CM)
+ 7:          if TME_CM < TME_BM:
+ 8:              M_BM = M_CM
+ 9:              E_NEW.delete(edge)
+ 10:             TME_BM = TME_CM
+ 11:             E_ADDED = TRUE
+ 12:             exit for loop
+ 13:         end if
+ 14:     end for
+ 15:     if (E_ADDED == FALSE):
+ 16:         return M_BM
+ 17:     end if
+ 18: end while
+ 19: return M_BM
+ distribution of model states at time step t. We use the absolute difference of the models' end states (t = T) for
+ a few reasons: it does not exaggerate the effect of large differences (as the squared error would); it is less
+ computationally expensive than the Chi-squared test; and it more accurately matches how computational biologists
+ compare computational model simulations against sparse biological measurements, where the full time course of the
+ model elements is often unknown.
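As a concrete reference, the TME of Equations 7 and 8 reduces to a sum of absolute end-state differences; a minimal sketch, where end states are assumed to be dictionaries mapping element names to average values:

```python
def tme(candidate_end, target_end):
    """Total Model Error (Equations 7-8): the sum of absolute differences
    between average end-state values over all model elements."""
    return sum(abs(candidate_end[v] - target_end[v]) for v in target_end)
```

A candidate model matching the target on every element yields a TME of zero.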
+ 2.4 Methodology for evaluating model extension
+ In this work, we are interested in evaluating automated model extension, that is, the limitations of automatically
+ extending the Baseline Model with behavior Q_BM to achieve the target or Golden Model behavior Q_GM. Therefore, in
+ our studies we assume that the Golden Model is known, and to obtain Baseline Models we use the procedure
+ illustrated in Figure 1 (A) and described as follows. For a given Golden Model, we create multiple Baseline Models
+ by removing edges from the Golden Model, in order to disrupt its behavior and to determine whether the extension
+ algorithms are able to recover the Golden Model from a range of Baseline Models. The removed edges form the
+ Candidate Knowledge sets (Figure 1 (A)). The extension algorithms are given the Baseline Model and the Candidate
+ Knowledge and tasked with extending the Baseline Model using edges from the Candidate Knowledge, to create
+ Candidate Models (Figure 1 (A)) and reproduce the Golden Model behavior.
+ Using the DiSH simulator, we simulate the Golden, Baseline, and Candidate Models to observe how the elements of
+ each model behave over time, and to obtain the model behaviors Q_GM, Q_BM, and Q_CM, respectively (Figure 1 (B)).
+ The goal of this procedure is to find Candidate Model(s) with behavior similar to the Golden Model behavior. By
+ tracking the TME (Equation 8) across consecutive extension iterations, we can add Candidate Knowledge to the
+ Baseline Model to form new Candidate Models and determine whether these new models behave more closely to the
+ Golden Model (Equation 8). If the TME decreases, the Candidate Model is considered an improvement over the Baseline
+ Model. If the TME increases, the Candidate Model is considered worse than the model from the previous iteration,
+ and the incorporated Candidate Knowledge is removed from the model. At each iteration, the Candidate Knowledge is
+ added one interaction at a time and the TME is calculated; the Candidate Knowledge with the largest decrease in TME
+ is incorporated.
+ 2.5 The Breadth First and Depth First Algorithms
+ In this analysis, we employ two algorithms to illustrate two different philosophies of automated assembly and
+ extension; namely, (i) incorporating the least amount of information necessary, selecting only what best improves
+ the model, and (ii) incorporating the most information possible, as long as it relates to and improves the model.
+ These algorithms are: (i) the Breadth First Addition (BFA) algorithm, which compares all potential additions
+ against each other to add only the best-supported information at any one time, and (ii) the Depth First Addition
+ (DFA) algorithm, which incorporates any new information that improves the model. The pseudocode for the two
+ algorithms is shown in Figure 2 (top), and we depict example demonstrations of both algorithms in Figure 2
+ (bottom).
+ The Breadth First Addition (BFA) algorithm starts by evaluating the contribution of each new edge to decreasing the
+ TME; that is, it simulates the model that consists of the original Baseline Model and a selected new edge, and then
+ computes the TME of that extended model according to Equation 8. Next, it permanently incorporates the new edge
+ that leads to the largest decrease in the original TME, and then it repeats these steps with the new extended
+ model, i.e., similar to what was done with the original model, it evaluates the addition of the remaining edges to
+ this new model by computing their TME values. This process is repeated until at least one of the following
+ conditions is satisfied: (i) the extended model matches the expected end values of the Golden Model; (ii) there are
+ no more edges to evaluate; (iii) no edge can be added to the Baseline Model without increasing the TME. The
+ pseudocode and the toy example for the BFA algorithm are shown in Figure 2 (left).
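A compact sketch of the BFA loop described above; the `score` callback is an assumption of this illustration and stands in for simulating a candidate model and computing its TME:

```python
def breadth_first_addition(baseline, candidates, score):
    """Greedy BFA loop: each round, score every remaining candidate edge,
    keep the single edge that lowers the error the most, and stop when no
    edge improves the model or the error reaches zero.

    baseline   -- set of edges in the Baseline Model
    candidates -- list of candidate edges (the Candidate Knowledge)
    score      -- function(set_of_edges) -> TME of the resulting model
    """
    model = set(baseline)
    remaining = list(candidates)
    best = score(model)
    while remaining and best > 0:
        scored = [(score(model | {e}), e) for e in remaining]
        new_best, edge = min(scored)
        if new_best >= best:          # no edge reduces the error any further
            break
        model.add(edge)
        remaining.remove(edge)
        best = new_best
    return model, best
```

With a toy score such as the symmetric difference from a known edge set, the loop recovers the missing edges one per round.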
+ The Depth First Addition (DFA) algorithm, similar to
+ Figure 3. Network structure illustration, standard graph attributes, and node degree distribution histograms for
+ the different network types: Erdős-Rényi random networks, Barabási-Albert scale-free networks, and two
+ human-curated published biological networks, TLGL and Tcell.
+
+ Network Type                | Erdős-Rényi Network | Barabási-Albert Network | Published TLGL Model | Published Tcell Model
+ Number of Nodes             | 48.4 ± 1.4          | 50.0 ± 0.0              | 87                   | 80
+ Number of Edges             | 85.5 ± 8.9          | 96.0 ± 0.0              | 171                  | 122
+ Model Density               | 0.04 ± 0.00         | 0.04 ± 0.00             | 0.024                | 0.019
+ Model Average Degree        | 3.53 ± 0.31         | 3.84 ± 0.00             | 4.07                 | 3.05
+ Undirected Model Clustering | 0.06 ± 0.03         | 0.20 ± 0.06             | 0.28                 | 0.0
+ Undirected Model Diameter   | 6.9 ± 0.7           | 5.0 ± 0.4               | 4.0                  | 12.0
+ Number of Models Used       | 50                  | 50                      | 1                    | 1
+
+ the BFA algorithm, starts with an evaluation of the edges by computing their contribution to decreasing the TME of
+ the Baseline Model. Unlike BFA, as soon as it finds an edge that leads to a TME lower than the current TME, it adds
+ that edge to the Baseline Model. These steps are then repeated using the new extended model and the remaining
+ edges. As with the BFA algorithm, the DFA algorithm stops when at least one of the three conditions (i)-(iii) above
+ is satisfied. The pseudocode and the toy example for the DFA algorithm are shown in Figure 2 (right).
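The DFA variant differs only in that it commits to the first improving edge rather than the best one; a sketch under the same assumed `score` callback:

```python
def depth_first_addition(baseline, candidates, score):
    """DFA loop: scan the candidate edges and permanently add the *first*
    edge that lowers the error, then rescan with the extended model; stop
    when a full pass adds nothing or the error reaches zero."""
    model = set(baseline)
    remaining = list(candidates)
    best = score(model)
    improved = True
    while improved and best > 0:
        improved = False
        for edge in remaining:
            new = score(model | {edge})
            if new < best:
                model.add(edge)
                remaining.remove(edge)
                best = new
                improved = True
                break                  # restart the scan with the new model
    return model, best
```

Because it accepts the first improvement, DFA can trade solution quality for fewer candidate-model simulations per round.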
+ 3 RESULTS
+ We describe here our experimental setup, including the set of benchmarks that we created (Sections 3.1 and 3.2),
+ and we follow with a discussion of the outcomes of our study (Sections 3.3-3.5).
+ 3.1 Benchmarks: Synthetic and Curated Models
+ In this analysis, we explore how the BFA and DFA algorithms perform in the automated assembly and extension of two
+ types of synthetic networks and two manually curated, published biological signaling pathway networks.
+ The Erdős-Rényi (ER) network type is a random graph and does not share many similarities with biological networks.
+ The Barabási-Albert (BA) network type is a scale-free network that shares many characteristics with biological
+ networks (most notably its node degree distribution). Since we generated the ER and BA networks in a random manner,
+ we created 50 models for each network type. We employed the Python package NetworkX [38] to create all synthetic
+ networks.
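The paper generates its synthetic networks with NetworkX; for illustration, the two generative processes can be sketched with the standard library alone (this is not the NetworkX implementation):

```python
import random

def erdos_renyi(n, p, seed=0):
    """G(n, p): each possible node pair becomes an edge with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

def barabasi_albert(n, m, seed=0):
    """Scale-free graph by preferential attachment: each new node attaches
    m edges to existing nodes picked with probability proportional to degree."""
    rng = random.Random(seed)
    edges = []
    targets = list(range(m))   # the initial seed nodes
    repeated = []              # node list in which each node appears once per degree unit
    for new in range(m, n):
        edges += [(new, t) for t in targets]
        repeated += targets + [new] * m
        # degree-biased sampling of m distinct targets for the next node
        targets = []
        while len(targets) < m:
            t = rng.choice(repeated)
            if t not in targets:
                targets.append(t)
    return edges
```

Preferential attachment is what produces the heavy-tailed degree distribution that makes BA networks resemble biological ones.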
+ The last two networks we used in our studies are the human-curated biological model of T cell large granular
+ lymphocyte (TLGL) leukemia [34] and the biological model of naïve T cell differentiation (Tcell) [35]. The TLGL
+ model has been used previously [39, 40] to perform structural and dynamic analysis in order to identify potential
+ therapeutic targets, while the Tcell model was created to explore the control circuitry of naïve T cell
+ differentiation [41, 42].
+ In Figure 3, we show example networks illustrating the different structures of these models. We also list several
+ descriptive statistics for the networks to demonstrate the similarities and differences between the network types.
+ Model Density is the fraction of edges present out of all possible edges between nodes. Model Average Degree is the
+ sum of each node's degree across all model nodes (the degree being the number of edges incident to the node),
+ divided by the number of nodes in the graph. Undirected Model Clustering [43] is a measure of the degree to which
+ nodes in a graph tend to cluster together in groups of local triangles. Undirected Model Diameter is the maximum
+ distance from any node in the network to any other node. In the last row of Figure 3, we provide histograms of the
+ Node Degree Distribution metric. In the case of the ER and BA networks, the histograms show average values over the
+ 50 generated models.
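The density and average-degree values in Figure 3 appear consistent with the directed-graph conventions density = |E| / (N(N − 1)) and average degree = 2|E| / N (e.g., 96 edges over 50 nodes gives 0.04 and 3.84); a minimal sketch under that assumption:

```python
def density(n, edges):
    """Directed-graph density: realized edges over all N*(N-1) ordered pairs."""
    return len(edges) / (n * (n - 1))

def average_degree(n, edges):
    """Sum of node degrees divided by the number of nodes; every edge adds one
    to the degree of each of its two endpoints, hence the factor of two."""
    return 2 * len(edges) / n
```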
+ 3.2 Experimental Setup
+ For the purposes of the evaluation discussed here, we assume that each model element v_i ∈ V (i = 1..N, where
+ N = |V|) can be in one of three states: OFF (value 0), LOW activity (value 1), and HIGH activity (value 2). This
+ assumption makes the synthetic networks comparable to the published biological models. We randomly initialized the
+ synthetic networks (as they are not based on human-curated or biological knowledge), while we initialized the Tcell
+ [35] and TLGL [34] models based on the values listed in their corresponding publications. As nodes and edges are
+ added back into a model, we assume that the initial state value of each model element v_i is s_{v_i,0} = 1. For
+ each created model, we conducted R = 100 simulation runs. We simulated the synthetic ER and BA models each with
+ T = 2,500 time steps, while we simulated the human-curated models—TLGL and Tcell—for T = 5,000 time steps. The
+ simulation length was governed by how long each network type required to reach a steady state.
+ 3.3 Network structure and baseline information complicate model assembly
+ For each Golden Model, we used five different removal probabilities, p_removal ∈ [0.10, 0.25, 0.50, 0.75, 1.00],
+ to randomly select edges for removal from the Golden Model. The removed edges formed the Candidate Knowledge, and
+ the remaining edges formed the Baseline Model. When p_removal = 1.00, the Baseline Model is empty (no edges) and
+ both the BFA and DFA algorithms attempt to reassemble the biological networks from the Candidate Knowledge alone.
+ In all conducted studies (p_removal ∈ [0.10, 0.25, 0.50, 0.75, 1.00]), both the BFA and DFA algorithms were given
+ exactly the same Baseline Models and Candidate Knowledge and were tasked with reconstructing the Golden Model. The
+ recall—the ratio of edges returned to the Baseline Model out of all removed edges—is shown in Figure 4 for each
+ network type (Erdős-Rényi - blue, Barabási-
+ Figure 4. Recall distributions for all explored scenarios, for each network type (Erdős-Rényi - blue,
+ Barabási-Albert - red, TLGL - green, Tcell - purple) and at different edge removal probabilities,
+ p_removal ∈ [0.10, 0.25, 0.50, 0.75, 1.00]. (A) BFA algorithm results and (B) DFA algorithm results.
+ Albert - red, TLGL - green, Tcell - purple) and each algo-
868
+ rithm (BFA – part A, DFA – part B).
869
+ In general, network type drastically affects recall rates, and for the
+ most part, each network's recall trends down with higher p_removal. This
+ makes intuitive sense: the more edges that are removed from each network,
+ the more information there is to add back, and therefore the recall has a
+ larger denominator (i.e., the size of the Candidate Knowledge set). Even
+ with many missing edges, both BFA and DFA can still converge on local
+ minima as long as each edge reduces TME. Both the ER and Tcell network
+ types correspond to higher recall rates than BA and TLGL. As both BFA and
+ DFA add edges back based on each edge's effect on TME, this points to ER
+ and Tcell networks having more edges which tangibly reduce TME. BA
+ networks are noted for their hub-and-spoke structure, with a small number
+ of highly connected nodes and a large number of sparsely connected nodes.
+ These networks are known for their redundancy, with the removal of an
+ edge often compensated for by the rest of the network, a behavior that is
+ observed in our results (Figure 4).
+ 3.4 Model performance is difficult to encapsulate into one metric to optimize
+ We also examined the relationship between the selected p_removal and TME.
+ We expected the TME to be proportional to the amount of information
+ removed from the model (i.e., the number of edges in the Candidate
+ Knowledge). To explore the effect of network structure on automated
+ assembly and extension, we evaluated the starting TME of each Baseline
+ Model. For each Baseline Model of each network type, we calculated the
+ actual percentage of edges removed based on the p_removal. This
+ percentage was termed the "Percent Removed". For each network type, we
+ plotted the Percent Removed from the Golden Model against the TME before
+ extension started. Next, starting with a Golden Model of each network
+ type, we removed every combination of two edges and calculated the TME of
+ the resultant Baseline Models. The results of these two analyses are
+ shown in Figure 5.
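The "Percent Removed" statistic is straightforward to compute; a minimal sketch (the function name is ours):

```python
def percent_removed(golden_edge_count, candidate_edge_count):
    """Actual percentage of Golden Model edges that ended up in the
    Candidate Knowledge for a given draw of p_removal."""
    return 100.0 * candidate_edge_count / golden_edge_count

# e.g., 50 of 200 edges removed corresponds to 25% Percent Removed
p = percent_removed(200, 50)
```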
+ We observed from our analysis that TME is not proportional to missing
+ information and that the contribution of different edges to the model's
+ TME can vary. At higher levels of Percent Removed, the relationship to
+ TME is not linear; even with only a few edges missing, a model can have
+ quite high TME. We found that while TME does generally increase with more
+ information removed, this increase is not directly proportional or
+ consistent with the information removed. TME functions as a simplified
+ error function that approximates the Baseline Model's deviation from
+ Golden Model behavior, but it does not completely reflect how much
+ information is missing from the Baseline Model or indicate how much
+ information the algorithm must add back.
+ Additionally, network type has a large influence on the TME response to
+ missing information. Networks like the Barabasi-Albert networks appear
+ more robust to information removal, with no single edge resulting in
+ large changes in TME. This same behavior is not observed in the
+ Erdos-Renyi or human-curated models, where only a few edges can strongly
+ affect TME. Indeed, returning to Figure 4, it appears that BA networks
+ are some of the hardest to assemble and extend with automated methods
+ relying on error evaluation, due to each edge contributing only a little
+ to TME. A more comprehensive error function would require more
+ information about the Golden Model's network structure and dynamics;
+ however, this proves elusive, as the more information about the Golden
+ Model there is, the easier this problem becomes.
+ Figure 5. (A) Percent Removed plotted against TME for the ER, BA, TLGL,
+ and Tcell network types. (B) The effect of the removal of pairs of edges
+ from the network, with the first edge index indicated by the x-axis value
+ and the second edge index indicated by the y-axis value. The TME values
+ are represented with shades of blue, from the minimum observed (i.e., no
+ error, TME = 0, shown in white) to the maximum observed (TME = 50, shown
+ in blue). Solid blue lines show the importance of particular edges to
+ model performance and TME. [Plot panels omitted.]
+ IEEE TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS, MANUSCRIPT ID
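The pairwise-removal analysis behind Figure 5B can be sketched as below. The trajectory simulator and the TME definition here are deliberately simplified stand-ins; the paper defines TME elsewhere, and the toy simulator exists only to make the sketch runnable:

```python
from itertools import combinations

def tme(golden_traj, model_traj):
    """Stand-in Total Model Error: summed absolute deviation between the
    Golden Model trajectory and a Baseline Model trajectory."""
    return sum(abs(g - m) for g, m in zip(golden_traj, model_traj))

def pairwise_removal_scan(edges, simulate):
    """TME of every Baseline Model obtained by deleting one pair of edges."""
    golden_traj = simulate(edges)
    scores = {}
    for i, j in combinations(range(len(edges)), 2):
        kept = [e for k, e in enumerate(edges) if k not in (i, j)]
        scores[(i, j)] = tme(golden_traj, simulate(kept))
    return scores

# Toy simulator: the "trajectory" is just the edge count repeated 3 times.
toy_sim = lambda edges: [len(edges)] * 3
scores = pairwise_removal_scan([("A", "B"), ("B", "C"), ("C", "D")], toy_sim)
```

In the real study, each cell of the Figure 5B heatmap corresponds to one `scores[(i, j)]` entry computed from full simulations.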
+ 3.5 Initialization values play a small but important role in network assembly
+ Finally, when adding Candidate Knowledge back into the model, if a new
+ node is introduced into the Baseline Model, there is no information about
+ how it should be initialized. In Figure 6, we show the effects of
+ different initialization assumptions when adding Candidate Knowledge back
+ into the model. Each network type was extended with the BFA and DFA
+ algorithms using one of five different initialization schemes:
+ initializing new model elements with a fixed value (0, 1, or 2),
+ initializing the model with the correct initialization used in the Golden
+ Model, and randomly assigning an initial value. In general,
+ initialization does not play a large role in automated assembly or
+ extension. In Figure 6, there appears to be little difference between
+ initialization types for the ER and BA network types and the TLGL model.
+ Although the human-curated models (TLGL and Tcell) do diverge slightly
+ from this trend, this is much more prominent for the Tcell model, which
+ is an outlier, with automated model assembly and extension suffering due
+ to the focused nature of the model. This is not to say that
+ initialization is a problem that can be disregarded in model assembly;
+ rather, it is to be considered after the correct structure of the model
+ has been identified. This is particularly true in the case of logical
+ models, where initialization can impact downstream model elements
+ depending upon the nature of the logic functions that are used. For
+ example, initializing to 0 a model element involved in many "AND"
+ operations will affect downstream model elements. As described in Section
+ 2.1, we used summation functions in this analysis. This choice likely
+ made the role of initialization less important, as the inclusion of a new
+ edge (and thus, a new regulator for some element in the model) would not
+ impact the effect of other regulators in such a substantial way as it
+ would with logic update functions.
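The five schemes compared in Figure 6 can be sketched as a single dispatch function. The scheme names and the discrete levels {0, 1, 2} mirror the figure; the function itself is our illustration:

```python
import random

def init_value(element, scheme, golden_init, rng):
    """Starting value for a newly introduced element under one of the five
    schemes: fixed "0"/"1"/"2", "Golden" (copy the Golden Model's value),
    or "Random" (uniform over the discrete levels {0, 1, 2})."""
    if scheme in ("0", "1", "2"):
        return int(scheme)
    if scheme == "Golden":
        return golden_init[element]
    if scheme == "Random":
        return rng.choice([0, 1, 2])
    raise ValueError(f"unknown scheme: {scheme}")

rng = random.Random(0)
golden_init = {"X": 2}
values = {s: init_value("X", s, golden_init, rng)
          for s in ("0", "1", "2", "Golden", "Random")}
```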
+ Taken together, the discussion in Sections 3.3-3.5 and Figures 4-6
+ demonstrates the key difficulties of automated model assembly and
+ extension. Several methods exist which create such automated pipelines
+ but do not focus on how they incorporate biological information into
+ executable models [27, 44, 45]. To date, only a few methods have been
+ proposed to automatically assemble and extend models while also
+ evaluating the available information and its impact on the created
+ executable model [28-30]. Still, even these methods do not fully assess
+ the structural and dynamic impacts of adding new biological information
+ to an executable model, and therefore do not address the complexities of
+ this problem.
+ 4 CONCLUSION
+ In this paper, we have presented an automated assembly and extension
+ pipeline to depict the types and magnitudes of the problems facing
+ computational and systems biologists as they work to solve automated
+ model assembly. Through the largest assembly and extension analysis of
+ synthetic and human-curated models to date, we have characterized the
+ complexities of the automated model assembly problem. Our findings
+ demonstrate that iterative model assembly, devoid of context, lacking
+ starting structural information in the form of a baseline model, and
+ without robust dynamic information describing the golden model's
+ behavior, is intractable. More often, model assembly creates models which
+ perform similarly in dynamics but do not represent the full information
+ of a full "Golden" network.
+ In this paper, we have demonstrated that particular focus must be paid to
+ a model's structure and baseline information, as these can complicate
+ model assembly. In picking a metric to optimize during model assembly, we
+ have illustrated that a single metric more often serves to simplify the
+ golden model rather than recapitulate it. Lastly, we have shown that
+ initializing model elements plays only a small role in network assembly.
+ Figure 6. The ten networks for each network type were disassembled and
+ then reassembled (either through BFA or DFA) under different
+ initialization schemes. In "0", new model elements are initialized with a
+ starting value of 0; "1" and "2" follow similar schemes. "Golden"
+ initializes the model element as it would be seen in the Golden Model,
+ while "Random" randomly initializes the model element. In each assembly,
+ the number of edges added back was recorded and used to calculate each
+ assembly method's recall. [Plot panels omitted: recall for BFA and DFA
+ under each scheme for the ER, BA, TLGL, and Tcell network types, at 50%
+ and 100% removal.]
+ In future work, we plan to further investigate the effect of network
+ type, additional parametrization of update functions (e.g., timing
+ effects), methods to determine the initial state for simulations, and
+ other error functions on the quality of recommended Candidate Models. We
+ will also explore the effect of erroneous Candidate Knowledge on
+ extension methods.
+ ACKNOWLEDGMENT
+ NMZ is the corresponding author. This work was funded in part by DARPA
+ award W911NF-17-1-0135. The authors would like to thank Kai-Wen Liang for
+ his instrumental work in the implementation of the BFA algorithm.
+ REFERENCES
+ [1] J. M. Epstein, "Why Model?," 2008. [Online]. Available: http://jasss.soc.surrey.ac.uk/11/4/12.html.
+ [2] E. A. Sobie, Y.-S. Lee, S. L. Jenkins, and R. Iyengar, "Systems biology - biomedical modeling," Sci. Signal., vol. 4, no. 190, pp. tr2-tr2, 2011.
+ [3] J. Schäfer and K. Strimmer, "An empirical Bayes approach to inferring large-scale gene association networks," Bioinformatics, vol. 21, no. 6, pp. 754-764, 2004.
+ [4] R. Küffner, T. Petri, P. Tavakkolkhah, L. Windhager, and R. Zimmer, "Inferring gene regulatory networks by ANOVA," Bioinformatics, vol. 28, no. 10, pp. 1376-1382, 2012.
+ [5] K. Raza, "Fuzzy logic based approaches for gene regulatory network inference," Artificial Intelligence in Medicine, 2018.
+ [6] P. B. Madhamshettiwar, S. R. Maetschke, M. J. Davis, A. Reverter, and M. A. Ragan, "Gene regulatory network inference: evaluation and application to ovarian cancer allows the prioritization of drug targets," Genome Medicine, vol. 4, no. 5, p. 41, 2012.
+ [7] P. D'haeseleer, S. Liang, and R. Somogyi, "Genetic network inference: from co-expression clustering to reverse engineering," Bioinformatics, vol. 16, no. 8, pp. 707-726, 2000.
+ [8] J. Linde, S. Schulze, S. G. Henkel, and R. Guthke, "Data- and knowledge-based modeling of gene regulatory networks: an update," EXCLI Journal, vol. 14, p. 346, 2015.
+ [9] N. Wani and K. Raza, "Integrative Approaches to Reconstruct Regulatory Networks From Multi-Omics Data: A Review of State-of-the-Art Methods," 2018.
+ [10] M. Hecker, S. Lambeck, S. Toepfer, E. Van Someren, and R. Guthke, "Gene regulatory network inference: data integration in dynamic models - a review," Biosystems, vol. 96, no. 1, pp. 86-103, 2009.
+ [11] M. Banf and S. Y. Rhee, "Enhancing gene regulatory network inference through data integration with markov random fields," Scientific Reports, vol. 7, p. 41174, 2017.
+ [12] M. Recamonde-Mendoza, A. V. Werhli, and A. Biolo, "Systems biology approach identifies key regulators and the interplay between miRNAs and transcription factors for pathological cardiac hypertrophy," Gene, Mar 4 2019, doi: 10.1016/j.gene.2019.02.056.
+ [13] A. Fabregat et al., "The Reactome Pathway Knowledgebase," Nucleic Acids Research, vol. 46, no. D1, pp. D649-D655, 2018, doi: 10.1093/nar/gkx1132.
+ [14] P. D. Karp, M. Riley, S. M. Paley, and A. Pellegrini-Toole, "The MetaCyc Database," Nucleic Acids Research, vol. 30, no. 1, pp. 59-61, 2002. [Online]. Available: http://www.ncbi.nlm.nih.gov/pubmed/11752254.
+ [15] D. Türei, T. Korcsmáros, and J. Saez-Rodriguez, "OmniPath: guidelines and gateway for literature-curated signaling pathway resources," Nature Methods, vol. 13, no. 12, pp. 966-967, 2016, doi: 10.1038/nmeth.4077.
+ [16] D. Szklarczyk et al., "The STRING database in 2017: quality-controlled protein-protein association networks, made broadly accessible," Nucleic Acids Research, vol. 45, no. D1, pp. D362-D368, 2017, doi: 10.1093/nar/gkw937.
+ [17] C. F. Schaefer et al., "PID: the pathway interaction database," Nucleic Acids Research, vol. 37, 2009, doi: 10.1093/nar/gkn653.
+ [18] D. N. Slenter et al., "WikiPathways: a multifaceted pathway database bridging metabolomics to other omics research," Nucleic Acids Research, vol. 46, no. D1, pp. D661-D667, 2018, doi: 10.1093/nar/gkx1064.
+ [19] V. Chelliah et al., "BioModels: ten-year anniversary," Nucleic Acids Research, vol. 43, no. D1, pp. D542-D548, 2015, doi: 10.1093/nar/gku1181.
+ [20] T. Helikar et al., "The Cell Collective: toward an open and collaborative approach to systems biology," BMC Systems Biology, vol. 6, 2012, doi: 10.1186/1752-0509-6-96.
+ [21] M. Kanehisa, M. Furumichi, M. Tanabe, Y. Sato, and K. Morishima, "KEGG: new perspectives on genomes, pathways, diseases and drugs," Nucleic Acids Research, vol. 45, no. D1, pp. D353-D361, 2017, doi: 10.1093/nar/gkw1092.
+ [22] M. A. Valenzuela-Escárcega, G. Hahn-Powell, and M. Surdeanu, "Description of the Odin Event Extraction Framework and Rule Language," 2015. [Online]. Available: http://arxiv.org/abs/1509.07513.
+ [23] G. Ferguson and J. F. Allen, "TRIPS: An integrated intelligent problem-solving assistant," AAAI Press, 1998, pp. 567-572. [Online]. Available: https://dl.acm.org/citation.cfm?id=295737.
+ [24] K. Hakala, S. Van Landeghem, T. Salakoski, Y. Van de Peer, and F. Ginter, "Application of the EVEX resource to event extraction and network construction: Shared Task entry and result analysis," BMC Bioinformatics, vol. 16, no. Suppl 16, pp. S3-S3, 2015, doi: 10.1186/1471-2105-16-S16-S3.
+ [25] F. Büchel et al., "Path2Models: large-scale generation of computational models from biochemical pathway maps," BMC Systems Biology, vol. 7, no. 1, p. 116, 2013, doi: 10.1186/1752-0509-7-116.
+ [26] B. M. Gyori, J. A. Bachman, K. Subramanian, J. L. Muhlich, L. Galescu, and P. K. Sorger, "From word models to executable models of signaling networks using automated assembly," Molecular Systems Biology, vol. 13, no. 11, p. 954, 2017, doi: 10.15252/msb.20177651.
+ [27] R. Sharp et al., "Eidos, INDRA, & Delphi: From free text to executable causal models," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), 2019, pp. 42-47.
+ [28] K.-W. Liang, Q. Wang, C. Telmer, D. Ravichandran, P. Spirtes, and N. Miskov-Zivanov, "Methods to Expand Cell Signaling Models Using Automated Reading and Model Checking," Springer, Cham, 2017, pp. 145-159.
+ [29] Y. Ahmed, C. Telmer, and N. Miskov-Zivanov, "ACCORDION: Clustering and Selecting Relevant Data for Guided Network Extension and Query Answering," arXiv preprint arXiv:2002.05748, 2020.
+ [30] K. Sayed, K. N. Bocan, and N. Miskov-Zivanov, "Automated Extension of Cell Signaling Models with Genetic Algorithm," IEEE, Jul. 2018, pp. 5030-5033, doi: 10.1109/EMBC.2018.8513431. [Online]. Available: https://ieeexplore.ieee.org/document/8513431/.
+ [31] D. C. Kozen, "Depth-First and Breadth-First Search," in The Design and Analysis of Algorithms. New York, NY: Springer New York, 1992, pp. 19-24.
+ [32] P. Erdős and A. Rényi, "On the Evolution of Random Graphs." [Online]. Available: http://leonidzhukov.net/hse/2014/socialnetworks/papers/erdos-1960-10.pdf.
+ [33] A.-L. Barabasi and R. Albert, "Emergence of scaling in random networks," Science, vol. 286, no. 5439, pp. 509-512, 1999, doi: 10.1126/SCIENCE.286.5439.509.
+ [34] R. Zhang et al., "Network model of survival signaling in large granular lymphocyte leukemia," Proc Natl Acad Sci U S A, vol. 105, no. 42, pp. 16308-16313, Oct 21 2008, doi: 10.1073/pnas.0806447105.
+ [35] N. Miskov-Zivanov, M. S. Turner, L. P. Kane, P. A. Morel, and J. R. Faeder, "The duration of T cell stimulation is a critical determinant of cell fate and plasticity," Science Signaling, vol. 6, no. 300, p. ra97, Nov 5 2013, doi: 10.1126/scisignal.2004217.
+ [36] K. Sayed, Y.-H. Kuo, A. Kulkarni, and N. Miskov-Zivanov, "DiSH simulator: capturing dynamics of cellular signaling with heterogeneous knowledge," presented at the Proceedings of the 2017 Winter Simulation Conference, Las Vegas, Nevada, 2017. https://github.com/pitt-miskov-zivanov-lab/dyse_wm.
+ [37] S. M. Assmann and R. Albert, "Discrete Dynamic Modeling with Asynchronous Update, or How to Model Complex Systems in the Absence of Quantitative Information," in Plant Systems Biology, D. A. Belostotsky, Ed. Totowa, NJ: Humana Press, 2009, pp. 207-225.
+ [38] "Exploring Network Structure, Dynamics, and Function using NetworkX," in Proceedings of the Python in Science Conference (SciPy), 2008. [Online]. Available: http://conference.scipy.org/proceedings/scipy2008/paper_2/.
+ [39] Y. Ahmed, C. A. Telmer, and N. Miskov-Zivanov, "CLARINET: Efficient learning of dynamic network models from literature," Bioinformatics Advances, vol. 1, no. 1, p. vbab006, 2021.
+ [40] A. Saadatpour et al., "Dynamical and structural analysis of a T cell survival network identifies novel candidate therapeutic targets for large granular lymphocyte leukemia," PLoS Computational Biology, vol. 7, no. 11, p. e1002267, 2011.
+ [41] W. F. Hawse et al., "Cutting edge: differential regulation of PTEN by TCR, Akt, and FoxO1 controls CD4+ T cell fate decisions," The Journal of Immunology, vol. 194, no. 10, pp. 4615-4619, 2015.
+ [42] N. Miskov-Zivanov, M. Turner, L. Kane, P. Morel, and J. Faeder, "Model Predicts Duration of T Cell Stimulation is a Critical Determinant of Cell Fate and Plasticity, under submission," 2013.
+ [43] J. Saramäki, M. Kivelä, J.-P. Onnela, K. Kaski, and J. Kertesz, "Generalizations of the clustering coefficient to weighted complex networks," Physical Review E, vol. 75, no. 2, p. 027105, 2007.
+ [44] F. Büchel et al., "Path2Models: large-scale generation of computational models from biochemical pathway maps," BMC Systems Biology, vol. 7, no. 1, pp. 116-116, 2013, doi: 10.1186/1752-0509-7-116.
+ [45] B. M. Gyori, J. A. Bachman, K. Subramanian, J. L. Muhlich, L. Galescu, and P. K. Sorger, "From word models to executable models of signaling networks using automated assembly," Molecular Systems Biology, vol. 13, no. 11, 2017.
+
1490
+ Adam A. Butchy. Adam is a PhD candi-
1491
+ date in the Bioengineering Department
1492
+ at the University of Pittsburgh. Adam
1493
+ completed a B.S. in Chemical Engineer-
1494
+ ing and a B.S. in Biochemistry at Villa-
1495
+ nova University. He is working on Dis-
1496
+ crete Modeling of Macrophage Activa-
1497
+ tion and its role in the cancer microenvi-
1498
+ ronment and lung.
1499
+
1500
+ Cheryl A. Telmer Dr. Telmer is a Re-
1501
+ search Biologist at Carnegie Mellon Uni-
1502
+ versity. Cheryl and Natasa began work-
1503
+ ing together as iGEM advisors in 2013
1504
+ and have expanded their collaborations
1505
+ through the DARPA Big Mechanism and
1506
+ World Modelers programs. Biologists
1507
+ are constantly trying new tools that have
1508
+ the potential to improve our understand-
1509
+ ing of complex systems, and the standardized representation and
1510
+ computational modeling approaches being developed by the Melody
1511
+ Lab are a great contribution.
1512
+
1513
+ Natasa Miskov-Zivanov Dr. Miskov-Zi-
1514
+ vanov is an Assistant Professor of Elec-
1515
+ trical and Computer Engineering, Bioen-
1516
+ gineering, and Computational and Sys-
1517
+ tems Biology at the University of Pitts-
1518
+ burgh. She received a B.Sc. degree in
1519
+ electrical engineering and computer sci-
1520
+ ence from University of Novi Sad, Serbia
1521
+ and M.Sc. and Ph.D. degrees in electri-
1522
+ cal and computer engineering from Carnegie Mellon University. Be-
1523
+ fore joining University of Pittsburgh as a faculty, she spent several
1524
+ years as a postdoctoral researcher in Computational and Systems Bi-
1525
+ ology at the University of Pittsburgh, and as research scientist and
1526
+ instructor in Computer Science and in Electrical and Computer Engi-
1527
+ neering at Carnegie Mellon University. Dr. Miskov-Zivanov’s research
1528
+ interests include hybrid, knowledge-driven and data-driven, model
1529
+ recommendation and reasoning for complex systems with applica-
1530
+ tions in systems and synthetic biology.
1531
+
1532
+
49FIT4oBgHgl3EQf7St_/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
4dFQT4oBgHgl3EQf4Ta7/content/tmp_files/2301.13431v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
4dFQT4oBgHgl3EQf4Ta7/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
5NE3T4oBgHgl3EQfQgli/content/tmp_files/2301.04413v1.pdf.txt ADDED
@@ -0,0 +1,1246 @@
 
1
+ arXiv:2301.04413v1 [cs.IR] 11 Jan 2023
2
+ CoSPLADE: Contextualizing SPLADE for
3
+ Conversational Information Retrieval
4
+ Nam Le Hai1[0000−0002−9020−8790], Thomas Gerald2, Thibault Formal1,3,
5
+ Jian-Yun Nie4, Benjamin Piwowarski1[0000−0001−6792−3262], and Laure
6
+ Soulier1,2[0000−0001−9827−7400]
7
+ 1 Sorbonne Université, CNRS, ISIR, F-75005 Paris, France
8
+ first.last @sorbonne-universite.fr
9
+ 2 Université Paris-Saclay, CNRS, LISN, 91405 Orsay France first.last @lisn.fr
10
+ 3 Naver Labs Europe, Meylan, France first.last @naverlabs.com
11
+ 4 University of Montreal, Montreal, Canada nie@iro.umontreal.ca
12
+ Abstract. Conversational search is a difficult task as it aims at retriev-
13
+ ing documents based not only on the current user query but also on the
14
+ full conversation history. Most of the previous methods have focused on
15
+ a multi-stage ranking approach relying on query reformulation, a criti-
16
+ cal intermediate step that might lead to a sub-optimal retrieval. Other
17
+ approaches have tried to use a fully neural IR first-stage, but are ei-
18
+ ther zero-shot or rely on full learning-to-rank based on a dataset with
19
+ pseudo-labels. In this work, leveraging the CANARD dataset, we propose
20
+ an innovative lightweight learning technique to train a first-stage ranker
21
+ based on SPLADE. By relying on SPLADE sparse representations, we
22
+ show that, when combined with a second-stage ranker based on T5Mono,
23
+ the results are competitive on the TREC CAsT 2020 and 2021 tracks.
24
+ Keywords: information retrieval · conversational search · first-stage
25
+ ranking.
26
+ 1
27
+ Introduction
28
+ With the introduction of conversational assistants like Siri, Alexa or Cortana,
29
+ conversational Information Retrieval, a variant of adhoc IR, has emerged as
30
+ an important research domain [4,6]. In conversational IR, a search is conducted
31
+ within a session, and the user’s information need is expressed through a sequence
32
+ of queries, similarly to natural conversations – thus introducing complex inter-
33
+ dependencies between queries and responses.
34
+ Not surprisingly, neural IR models have been shown to perform the best on
35
+ conversational IR [5,7]. Most prior works rely on a Historical Query Expansion
36
+ step [34], i.e. a query expansion mechanism that takes into account all past
37
+ queries and their associated answers. Such query expansion model is learned on
38
+ the CANARD dataset [8], which is composed of a series of questions and their
39
+ associated answers, together with a disambiguated query, referred to as gold
40
+ query in this paper. However, relying on a reformulation step is computationally
41
+
42
+ 2
43
+ Le Hai et al.
44
+ costly and might be sub-optimal as underlined in [13,16]. Krasakis et al. [13]
45
+ proposed to use ColBERT [12] in a zero-shot manner, replacing the query by the
46
+ sequence of queries, without any training of the model. Lin et al. [16] proposed
47
+ to learn a dense contextualized representation of the query history, optimizing
48
+ a learning-to-rank loss over a dataset composed of weak labels. This makes the
49
+ training process complex (labels are not reliable) and long.
50
+ In this work, we follow this direction of research but propose a much lighter
51
+ training process for the first-stage ranker, where we focus on queries and do not
52
+ make use of any passage – and thus avoid any learning-to-rank training. It moreover
53
+ sidesteps the problem of having to derive weak labels from the CANARD dataset.
54
+ Given this strong supervision, we can consider more context – i.e. we use the
55
+ answers provided by the system the user is interacting with, which allows us to
56
+ better contextualize the query, as shown in our experiments. The training loss we
57
+ propose leverages the sparse representation of queries and documents provided
58
+ by the SPLADE model [9]. In a nutshell, we require that the representation of
59
+ the query matches that of the disambiguated query (i.e. the gold query). Our
60
+ first-stage ranker achieves high performance, especially on recall – the most
61
+ important measure in a multi-stage approach, comparable to the best systems
62
+ in TREC CAsT [7], but also on precision-oriented measures – which shows the
63
+ potential of our methodology.
64
+ Finally, to perform well, the second-stage ranker (i.e. re-ranker) needs to
65
+ consider the conversation as well, which might require a set of heuristics to select
66
+ some content and/or query reformulation such as those used in [18]. Leveraging
67
+ the fact that our first-stage ranker outputs weights over the (BERT) vocabulary,
68
+ we propose a simple mechanism that provides a conversational context to the
69
+ re-ranker in the form of keywords selected by SPLADE.
70
+ In summary, our contributions are the following:
71
+ 1. We propose a new loss to optimize a first-stage ranker resulting in a lightweight
72
+ training strategy and state-of-the-art results in terms of recall;
73
+ 2. We show that, when combined with a second-stage ranker based on a context
74
+ derived from the SPLADE query representation of the first stage, we obtain
75
+ results on par with the best approaches in TREC CAsT 2020 and 2021.
76
+ 2 Related Works
78
+ The first edition [5] of the TREC Conversational Assistance Track (CAsT) was
79
+ implemented in 2019, providing a new challenge on Conversational Search. The
80
+ principle is the following: a user queries the system with questions in natural
81
+ language, and each time gets a response from the system. The challenge differs
82
+ from classical search systems as involving previous utterances (either queries
83
+ or answers) is key to better comprehending the user intent. In conversational
84
+ IR, and in TREC CAsT [6,5,7] in particular, the sheer size of the document
85
+ collection makes it necessary to design an efficient (and effective) search system.
86
+ Conversational IR is closely related to conversational Question-Answering
87
+ [25,27,26] in the sense that they both include interaction turns in natural lan-
88
+ guage. However, the objective is intrinsically different. While the topic or the
89
+
90
+ CoSPLADE: Contextualizing SPLADE for Conversational IR
91
+ 3
92
+ context (i.e., the passage containing answers) is known in conversational QA,
93
+ conversational IR aims to search among a huge collection of documents with po-
94
+ tentially more exploratory topics. With this in mind, in the following, we focus
95
+ on the literature review of conversational IR.
96
+ We can distinguish two lines of work in conversational search. The first one
97
+ [29,30,32,3] focuses on a Contextual Query Reformulation (CQR) to produce
98
+ a (plain or bag-of-words) query, representing ideally the information need free
99
+ of context, which is fed into a search model. One strategy of CQR consists in
100
+ selecting keywords from previous utterances by relying on a graph weighted by
101
+ either word2vec similarity [29], term-based importance using BM25 [19], or clas-
102
+ sification models [30]. Other approaches [14,19,18,33,28] leverage the potential
103
+ of generative language models (e.g., GPT2 or T5) to rewrite the query. Such
104
+ approaches are particularly effective, reaching top performances in the TREC
105
+ CAsT 2020 edition [5]. Query reformulation models also differ in the selected
106
+ evidence sources. Models either focus on the early stage of the conversation [1],
107
+ on a set of the queries filtered either heuristically [2] or by a classification model
108
+ [21], or on both previous queries and documents [31]. Finally, to avoid the prob-
109
+ lem of generating a single query, [14,20] have proposed to use different generated
110
+ queries and aggregate the returned documents.
111
+ The reformulation step is however a bottleneck since there is no guarantee
112
+ that the “gold query” is optimal and thus generalizes well [16,13]. Moreover,
113
+ generating text is time-consuming. To avoid these problems, the second line of
114
+ work aims to directly integrate the conversation history into the retrieval model,
115
+ bypassing the query reformulation step. As far as we know, only a few studies
116
+ followed this path in conversational search. Qu et al. [24] compute a query
+ representation using the k last queries in the dialogue [15]. Similarly, Lin et al. [16]
+ average contextualized token embeddings over the whole query history. The
119
+ representation is learned by optimizing a learning-to-rank loss over a collection
120
+ with weak labels, which requires much care to ensure good generalization. Fi-
121
+ nally, Krasakis et al. [13] use a more lexical neural model, i.e. ColBERT [12],
122
+ to encode the query with its context – but they do not finetune it at all. In
123
+ this work, we go further by using a sparse model SPLADE [9], using a novel
124
+ loss tailored to such sparse representations, and by using a lightweight train-
125
+ ing procedure that does not rely on passages, but only on a dataset containing
126
+ reformulated queries.
127
+ 3 Model
129
+ In TREC CAsT [5,7], each retrieval session contains around 10 turns of
+ exchange. Each turn corresponds to a query, and its associated canonical answer5
+ is provided as context for future queries. Let us now introduce some notation
132
+ that we use to describe our model. For each turn n ≤ N, where N is the last
133
+ turn of the conversation, we denote by qn and an respectively the corresponding
134
+ query and its response. Finally, the context of a query qn at turn n corresponds
135
+ 5 Selected by the organizer as the most relevant answer of a baseline system.
136
+
137
139
+ to all the previous queries and answers, i.e. q1, a1, q2, a2, ..., qn−1, an−1. The
140
+ main objective of the TREC CAsT challenges is to retrieve, for each query qn
141
+ and its context, the relevant passages.
142
+ In the next sections, we present our first-stage ranker and second-stage re-
143
+ ranker, along with their training procedure, both based, directly or indirectly,
144
+ on the SPLADE (v2) model described in [9]. SPLADE has shown results on
145
+ par with dense approaches on in-domain collections while exhibiting stronger
146
+ abilities to generalize in a zero-shot setting [9]. It outputs a sparse representation
147
+ of a document or a query in the BERT vocabulary, which is key to our model
148
+ during training and inference. The SPLADE model we use includes a contextual
149
+ encoding function, followed by some aggregation steps: ReLU, log saturation,
150
+ and max pooling over each token in the text. The output of SPLADE is a sparse
151
+ vector with only positive or zero components in the BERT vocabulary space R|V |.
152
+ In this work, we use several sets of parameters for the same SPLADE architecture
153
+ and distinguish each version by its parameters θ, and the corresponding model
154
+ by SPLADE(. . . ; θ).
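The aggregation steps described above can be sketched as follows, with toy nested lists standing in for the per-token logit matrix (this is an illustrative re-implementation of the described pipeline, not the actual SPLADE code):

```python
import math

# Sketch of the SPLADE aggregation: given per-token logits over the vocabulary
# (a |tokens| x |V| matrix, here toy nested lists), apply ReLU, log saturation,
# and max pooling over the input tokens. The result is a sparse non-negative
# vector in the vocabulary space R^|V|.

def splade_aggregate(token_logits):
    n_vocab = len(token_logits[0])
    rep = [0.0] * n_vocab
    for row in token_logits:
        for j, w in enumerate(row):
            # ReLU then log(1 + x) saturation, max-pooled over tokens.
            rep[j] = max(rep[j], math.log(1.0 + max(w, 0.0)))
    return rep
```

Components corresponding to vocabulary terms never activated by any token stay at exactly zero, which is what makes the representation sparse.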
155
+ 3.1 First stage
157
+ The original SPLADE model [9] scores a document using the dot product be-
158
+ tween the sparse representation of a document ( ˆd) and of a query (ˆq):
159
+ s(ˆq, ˆd) = ˆq · ˆd
160
+ (1)
161
+ In our work, like in [16], we suppose that the document representation has
162
+ been sufficiently well-tuned on the standard ad-hoc IR task. The document
163
+ embedding ˆd is thus obtained using the pre-trained SPLADE model, i.e. ˆd =
+ SPLADE([CLS] d; θ_SPLADE), where θ_SPLADE are the original SPLADE
+ parameters obtained from HuggingFace6. These parameters are not fine-tuned during
166
+ the training process. We can thus use standard indices built from the original
167
+ SPLADE document representations to retrieve efficiently the top-k documents.
168
+ In the following, we present how to contextualize the query representation using
169
+ the conversation history. Then, we detail the training loss aiming at reducing
170
+ the gap between the representation of the gold query and the contextualized
171
+ representation.
172
+ Query representation. Like state-of-the-art approaches for first-stage conversa-
173
+ tional ranking [16,13], we contextualize the query with the previous ones. Going
174
+ further, we propose to include the answers in the query representation process,
175
+ which is easier to do thanks to our lightweight training.
176
+ To leverage both contexts, we use a simple model where the contextual query
+ representation at turn n, denoted by ˆq_{n,k}, is the combination of two
+ representations: ˆq^queries_n, which encodes the current query in the context of
+ all the previous queries, and ˆq^answers_{n,k}, which encodes the current query
+ in the context of the k past answers7. Formally, the contextualized query
+ representation ˆq_{n,k} is:
+ ˆq_{n,k} = ˆq^queries_n + ˆq^answers_{n,k}        (2)
+ where we use two versions of SPLADE parameterized by θ_queries for the full
+ query history and θ_answers,k for the answers. These parameters are learned by
+ optimizing the loss defined in Eq. (8).
+ Following [16], we define ˆq^queries_n to be the query representation produced by
+ encoding the concatenation of the current query and all the previous ones:
+ ˆq^queries_n = SPLADE([CLS] q_n [SEP] q_1 [SEP] . . . [SEP] q_{n−1}; θ_queries)        (3)
+ using a set of specific parameters θ_queries.
+ 6 The weights can be found at https://huggingface.co/naver/splade-cocondenser-ensembledistil
206
+ To take into account the answers that the user had access to, we need to
+ include them in the representation. Following prior work [2], we can consider a
+ varying number of answers k; in particular, we can either choose k = 1 (the
+ last answer) or k = n−1 (all the previous answers). Formally, the representation
+ ˆq^answers_{n,k} is computed as:
+ ˆq^answers_{n,k} = (1/k) Σ_{i=n−k}^{n−1} SPLADE(q_n [SEP] a_i; θ_answers,k)        (4)
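The three equations above can be sketched together as follows, assuming a hypothetical `splade_encode(text, params)` helper that returns a sparse {token: weight} dict (a stand-in for the actual SPLADE forward pass):

```python
# Sketch of Eqs. (2)-(4): the contextual query representation is the sum of a
# query-history encoding and an average of query+answer encodings.
# `splade_encode` is a hypothetical callback, not the real model.

def contextual_query_rep(queries, answers, k, theta_q, theta_a, splade_encode):
    qn = queries[-1]
    # Eq. (3): encode the current query concatenated with all previous ones.
    q_queries = splade_encode("[CLS] " + " [SEP] ".join([qn] + queries[:-1]), theta_q)
    # Eq. (4): average the encodings of the current query paired with each
    # of the k last answers.
    last_k = answers[-k:]
    q_answers = {}
    for a in last_k:
        rep = splade_encode(qn + " [SEP] " + a, theta_a)
        for tok, w in rep.items():
            q_answers[tok] = q_answers.get(tok, 0.0) + w / len(last_k)
    # Eq. (2): the final representation is the sum of the two parts.
    return {tok: q_queries.get(tok, 0.0) + q_answers.get(tok, 0.0)
            for tok in set(q_queries) | set(q_answers)}
```

Swapping k between 1 and n−1 reproduces the "LastAnswer" and "AllAnswers" variants compared in the experiments.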
222
+ Training. Based on the above, training aims at obtaining a good representation
+ ˆq_n for the last issued query q_n, i.e. to contextualize q_n using the previous
+ queries and answers. To do so, we can leverage the gold query q*_n, that is, a
+ (hopefully) contextualized and unambiguous query. We can compute the
+ representation ˆq*_n of this query using the original SPLADE model, i.e.
+ ˆq*_n = SPLADE(q*_n; θ_SPLADE)        (5)
233
+ For example, for a query "How old is he?" the matching gold query could be
234
+ "How old is Obama?". The representation of the latter given by SPLADE would
235
+ be as follows:
236
+ [(”Obama”, 1.5), (”Barack”, 1.2), (”age”, 1.2), (”old”, 1.0), (”president”, 0.8), ...]
237
+ where the terms “Obama” and “Barack” clearly appear alongside other words
238
+ related to the current query (“old” and the semantically related “age”).
239
+ We can now define the goal of the training, which is to reduce the difference
+ between the gold query representation ˆq*_n and the representation ˆq_{n,k}
+ computed by our model. An obvious choice of loss function is to match the
+ predicted and gold representations using a cosine loss (since the ranking is
+ invariant when scaling the query). However, as shown in the result section, we
+ experimentally found better results with a modified MSE loss, whose first
+ component is the standard MSE loss:
+ Loss_MSE(ˆq_{n,k}, ˆq*_n) = MSE(ˆq_{n,k}, ˆq*_n)        (6)
+ 7 In the experiments, we also explore an alternative model where answers and queries
+ are considered at once.
256
+ In our experiments, we observed that models trained with the direct MSE do
+ not capture well words from the context, especially words from the answers.
+ The reason is that the manually reformulated gold query usually contains only
+ a few additional words from the previous turns that are directly implied by the
+ last query. Other potentially useful words from the answers may not be
+ included. This is a conservative expansion strategy which may not be the best
+ example to follow for an automatic query rewriting process. We thus added an
+ asymmetric MSE, designed to encourage term expansion from past answers
+ while avoiding the introduction of noise by restricting the terms to those
+ present in the gold query q*_n. Formally, our asymmetric loss is:
+ Loss_asym(ˆq^answers_{n,k}, ˆq*_n) = ∥max(ˆq*_n − ˆq^answers_{n,k}, 0)∥²        (7)
+ where the maximum is component-wise. This loss thus pushes the answer-biased
+ representation ˆq^answers_{n,k} to include tokens from the gold query. Contrarily
+ to MSE, it does not (directly) impose an upper bound on the components of the
+ ˆq^answers_{n,k} representation – this is done indirectly through the final loss
+ function described below.
287
+ The final loss we optimize is a simple linear combination of the losses defined
+ above, and only relies on computing two query representations:
+ Loss(ˆq_{n,k}, ˆq*_n) = Loss_MSE(ˆq_{n,k}, ˆq*_n) + Loss_asym(ˆq^answers_{n,k}, ˆq*_n)        (8)
+ There is an interplay between the two components of the global loss. More
+ precisely, Loss_asym pushes the ˆq^answers_{n,k} representation to match the gold
+ query representation ˆq*_n if it can, and Loss_MSE pushes the queries-biased
+ representation ˆq_{n,k} to compensate if not. It thus puts a strong focus on
+ extracting information from past answers, which is shown to be beneficial in
+ our experiments.
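A minimal pure-Python sketch of the three losses, over an explicit vocabulary and {token: weight} dicts (illustrative only; the actual implementation operates on dense tensors in the BERT vocabulary space):

```python
# Sketch of the training objective (Eqs. 6-8). `gold` stands in for the SPLADE
# encoding of the gold query q*_n, `pred` for the full contextual representation
# and `pred_answers` for the answer-based part.

def mse_loss(pred, gold, vocab):
    # Eq. (6): standard MSE between predicted and gold representations.
    return sum((pred.get(t, 0.0) - gold.get(t, 0.0)) ** 2 for t in vocab) / len(vocab)

def asym_loss(pred_answers, gold, vocab):
    # Eq. (7): only penalize gold-query terms that the answer-based
    # representation fails to reach; max(gold - pred, 0) is component-wise,
    # so overshooting a gold term is never punished here.
    return sum(max(gold.get(t, 0.0) - pred_answers.get(t, 0.0), 0.0) ** 2 for t in vocab)

def total_loss(pred, pred_answers, gold, vocab):
    # Eq. (8): simple linear combination of the two terms.
    return mse_loss(pred, gold, vocab) + asym_loss(pred_answers, gold, vocab)
```

Note how the asymmetry appears directly in the clipping: terms where the answer-based representation exceeds the gold query contribute nothing to `asym_loss`, matching the one-sided behaviour described above.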
304
+ Implementation details. For the first-stage, we initialize both encoders (one en-
305
+ coding the queries, and the other encoding the previous answer) with pre-trained
306
+ weights from the SPLADE model for adhoc retrieval. We use the ADAM optimizer
+ with a batch size of 16 and a learning rate of 2e-5 for the first encoder and 3e-5 for the
308
+ second. We fine-tune for only 1 epoch over the CANARD dataset.
309
+ 3.2 Reranking
311
+ We perform reranking using a T5Mono [22] approach, where we enrich the raw
+ query q_n with keywords identified by the first-stage ranker. Our motivation is
+ that these words should capture the information needed to contextualize the raw
+ query. The enriched query q+_n for conversational turn n is as follows:
+ q+_n = q_n. Context: q_1 q_2 . . . q_{n−1}. Keywords: w_1, w_2, ..., w_K        (9)
+ where the w_i are the top-K most important words, selected by leveraging
+ the first-stage ranker as follows. First, to reduce noise, we only consider words
+ that appear either in any query q_i or in the associated answers a_i (for i ≤ n−1).
+ Second, we order words by the maximum SPLADE weight over the tokens
+ that compose the word.8
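The keyword-selection heuristic can be sketched as below, where `query_rep` stands in for the first-stage sparse query representation and `word_tokens` is a hypothetical word-to-wordpiece splitter (both are assumptions, not the actual implementation):

```python
# Sketch of the Eq. (9) enrichment: pick the top-K context words by maximum
# SPLADE token weight, then restore their order of appearance.

def select_keywords(query_rep, context_words, K, word_tokens):
    scored = []
    for pos, word in enumerate(context_words):
        # Score a word by the max SPLADE weight over its subword tokens.
        weight = max((query_rep.get(t, 0.0) for t in word_tokens(word)), default=0.0)
        if weight > 0.0:
            scored.append((weight, pos, word))
    top = sorted(scored, key=lambda x: -x[0])[:K]
    # Keep keywords in their order of appearance in the context, as in the paper.
    return [w for _, pos, w in sorted(top, key=lambda x: x[1])]

def enriched_query(qn, prev_queries, keywords):
    # Eq. (9): raw query, then previous queries, then selected keywords.
    return f"{qn}. Context: {' '.join(prev_queries)}. Keywords: {', '.join(keywords)}"
```

Restricting candidates to words that actually occur in past queries or answers is what keeps hallucinated expansion terms out of the re-ranker prompt.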
327
+ We denote the T5 model fine-tuned for this input as T5+. As in the original
+ paper [22], the relevance score of a document d for the query q_n is the
+ probability of generating the token “true” given a prompt pt(q+_n, d) =
+ “Query: q+_n. Document: d. Relevant:”:
+ score(q+_n, d; θ) = p_T5(true | pt(q+_n, d); θ) / [p_T5(true | pt(q+_n, d); θ) + p_T5(false | pt(q+_n, d); θ)]        (10)
+ where θ are the parameters of the T5Mono model.
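Eq. (10) amounts to a two-way softmax over the logits of the tokens "true" and "false"; a minimal sketch (the logit values are placeholders for actual model outputs):

```python
import math

# Sketch of Eq. (10): normalize the "true"/"false" logits of the T5 model into
# a relevance probability. `logit_true` and `logit_false` stand in for the raw
# (pre-softmax) scores of the two tokens.

def t5mono_score(logit_true, logit_false):
    p_true = math.exp(logit_true)
    p_false = math.exp(logit_false)
    return p_true / (p_true + p_false)  # a softmax restricted to the two tokens
```

Because only the relative difference of the two logits matters, the score is a monotone function of `logit_true - logit_false`, which is why it can be used directly for ranking.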
342
+ Unlike the first-stage training, we fine-tune the ranker by aligning the
+ scores of the documents, not the weights of a query (which is obviously not
+ possible with the T5 model). Here the “gold” score of a document is computed
+ using the original T5Mono with the gold query q*_n. The T5 model is initialized
+ with weights made public by the original authors9, denoted θ_T5. More precisely,
+ we fine-tune the pre-trained T5Mono model using the MSE-Margin loss [11]. The
+ loss function for the re-ranker (at conversation turn n, given documents d1 and
+ d2) is calculated as follows:
+ L_R = [(s(q+_n, d1; θ_T5+) − s(q+_n, d2; θ_T5+)) − (s(q*_n, d1; θ_T5) − s(q*_n, d2; θ_T5))]²
+ We optimize the θ_T5+ parameters while keeping the original θ_T5 to evaluate the
+ scores for gold queries.
363
+ Implementation details. We initialize θ_T5+ as θ_T5, and fine-tune for 3 epochs,
+ with a batch size of 8 and a learning rate of 1e-4. We sample pairs (d1, d2) using the
+ first-stage top-1000 documents: d1 is sampled among the top-3, and d2 among
+ the remaining 997 to push the model to focus on important differences in scores.
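A sketch of this distillation setup, with hypothetical `student_score`/`teacher_score` callbacks standing in for the fine-tuned and original T5Mono models:

```python
import random

# Sketch of the MSE-Margin objective [11] and the paper's pair sampling:
# the student's margin on the enriched query q+ should match the teacher's
# margin on the gold query q*.

def margin_mse(student_score, teacher_score, q_plus, q_gold, d1, d2):
    student_margin = student_score(q_plus, d1) - student_score(q_plus, d2)
    teacher_margin = teacher_score(q_gold, d1) - teacher_score(q_gold, d2)
    return (student_margin - teacher_margin) ** 2

def sample_pair(top1000, rng=random):
    # d1 from the first-stage top-3, d2 from the remaining 997, as in the paper.
    return rng.choice(top1000[:3]), rng.choice(top1000[3:])
```

Sampling d1 near the top and d2 from the tail concentrates the supervision on the score gaps that actually change the final ranking.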
367
+ 4 Experimental Protocol
369
+ We designed the evaluation protocol to satisfy two main evaluation objectives:
370
+ (i) Evaluating separately the effectiveness of the first-stage and the second-stage
371
+ ranking components of our CoSPLADE model; (ii) Comparing the effectiveness
372
+ of our CoSPLADE model with TREC CAsT 2020 and 2021 participants.
373
+ 8 To improve coherence, we chose to make keywords follow their order of appearance
374
+ in the context, but did not vary this experimental setting.
375
+ 9 We used the Huggingface checkpoint https://huggingface.co/castorini/monot5-base-msmarco
376
+
377
379
+ 4.1 Datasets
381
+ To train our model, we used the CANARD corpus10, a conversational dataset fo-
382
+ cusing on context-based query rewriting. More specifically, the CANARD dataset
383
+ is a list of conversation histories, each being composed of a series of queries, short
384
+ answers (human written) and reformulated queries (contextualised). The train-
385
+ ing, development, and test sets include respectively 31.538, 3.418, and 5.571
386
+ contextual and reformulated queries.
387
+ To evaluate our model, we used the TREC CAsT 2020 and 2021 datasets
388
+ which include respectively 25 and 26 information needs (topics) and a document
389
+ collection composed of the MS MARCO dataset, an updated dump of Wikipedia
390
+ from the KILT benchmark, and the Washington Post V4 collection. For each
391
+ topic, a conversation is available, alternating questions and responses (manually
392
+ selected passages from the collection, aka canonical answers). For each question
393
+ (216 and 239 in total), the dataset provides its manually rewritten form as well
394
+ as a set of about 20 relevant documents. We use the former to define an upper-
395
+ bound baseline (Splade_GoldQuery).
396
+ 4.2 Metrics and baselines
398
+ We used the official evaluation metrics considered in the TREC CAsT 2020 and
399
+ 2021, namely nDCG@3, MRR, Recall@X, MAP@X, nDCG@X, where the cut-off
400
+ is set to 1000 for the CAsT 2020 and 500 for the CAsT 2021. For each metric,
401
+ we calculate the mean and variance of performance across the different queries
402
+ in the dataset. With this in mind, we present below the different baselines and
403
+ scenarios used to evaluate each component of our model.
404
+ First-stage ranking scenarios. To evaluate the effectiveness of our first-stage
405
+ ranking model (Section 3.1), we compare our approach CoSPLADE, based on
406
+ the query representation of Eq. (2) with different variants (the document en-
407
+ coder is set to the original SPLADE encoder throughout our experiments):
408
+ SPLADE_rawQuery (lower bound): SPLADE [10] using only the original
409
+ and ambiguous user queries qn; SPLADE_goldQuery (kind of upper bound):
410
+ SPLADE using the manually rewritten query q*_n; CQE [16], a state-of-the-art
412
+ dense contextualized query representation learned using learning-to-rank on a
413
+ dataset with pseudo-labels.
414
+ To model answers when representing the query using ˆq^answers_{n,k}, we used two
417
+ historical ranges (“All” with k = n−1 answers and “Last” where we use only the
418
+ last one, i.e. k = 1) and three types of answer inputs: Answer in which answers
419
+ are the canonical answers; Answer-Short in which sentences are filtered as
420
+ in the best performing TREC CAsT approach [18]. This allows for consistent
421
+ input length, at the expense of losing information; Answer-Long As answers
422
+ from CANARD are short (a few sentences extracted from Wikipedia – contrarily
423
+ to CAsT ones), we expand them to reduce the discrepancy between training and
424
+ 10 https://sites.google.com/view/qanta/projects/canard
425
+
426
428
+ inference. For each sentence, we find the Wikipedia passage it appears in (if it
429
+ exists in ORConvQA [23]), and sample a short snippet of 3 adjacent sentences
430
+ from it.
431
+ Finally, we also conducted ablation studies (on the best of the above vari-
432
+ ants) by modifying either the way to use the historical context or the training
433
+ loss: flatContext a one-encoder version of our SPLADE approach in which we
434
+ concatenate all information of the context to apply SPLADE to obtain a single
435
+ representation of the query (instead of two representations ˆqqueries
436
+ n
437
+ and ˆqanswers
438
+ n,k
439
+ as in Equations 2 and 3) trained using a MSE loss function (Eq. 6) since there is
440
+ no more two representations. MSE the version of our SPLADE approach trained
441
+ with the MSE loss (Eq. 6) instead of the proposed one (Eq. 8); cosine the ver-
442
+ sion of our SPLADE approach trained with a cosine loss instead of the proposed
443
+ loss (Eq. 8). The cosine loss is interesting because it is invariant to the scaling
444
+ factor that preserves the document ordering (Eq.1).
445
+ Second-stage ranking scenarios. We consider different scenarios for our second-
446
+ stage ranking model: T5Mono_RawQuery the T5Mono ranking model [22]
447
+ applied on raw queries; T5Mono_GoldQuery the T5Mono ranking model
448
+ applied on gold queries; T5Mono_CQR the T5Mono ranking model applied
449
+ on query reformulation generated with a pre-trained T5 (using the CANARD
450
+ dataset); CoSPLADE_[context]_[number] : different versions of our second-
451
+ stage ranking model input (Eq. 9), varying 1) the number K of keywords identi-
452
+ fied as relevant by the first-stage ranker: 5, 10, 20, and 2) the presence or absence
453
+ of the past queries within the reformulation.
454
+ TREC participant baselines. For each evaluation campaign (2020 and 2021),
455
+ we also compare our model with the best, the median and the lowest TREC
456
+ CAsT participants presented in the two overviews [5,7], where participants are
457
+ ranked according to the nDCG@3 metric.
458
+ 5 Results
460
+ 5.1 First-stage ranking effectiveness
462
+ In this section, we focus on the first-stage ranking component of our CoSPLADE
463
+ model. To do so, we experiment different scenarios aiming at evaluating the
464
+ impact of the designed loss (Eq. 8) and the modeling/utility of evidence sources
465
+ (Equations 3 and 4). Results of these different baselines and scenarios on the
466
+ TREC CAsT 2021 dataset are provided in Table 1. Similar trends are observed
467
+ on CAsT 2020, but are not reported due to space limit.
468
+ In general, one can see that all variants of our approach (CoSPLADE_*
469
+ models) outperform the scenario applying the initial version of SPLADE on raw
470
+ and, more importantly, gold queries. This is very encouraging since this latter
471
+ scenario might be considered an oracle, i.e. the query is manually disambiguated.
472
+ Finally, we improve the results over CQE [16] for all the metrics – showing that
473
+
474
476
+ Model                     Recall@500  MAP@500   MRR       nDCG@500  nDCG@3
+ Baselines
+ SPLADE_rawQuery           30.8±2.7    5.5±0.9   21.3±2.9  17.8±1.8  13.1±2.1
+ SPLADE_goldQuery          68.8±2.0    16.1±1.2  55.5±3.3  42.8±1.7  38.3±2.8
+ CQE [17] from [7]         79.1        28.9      60.3      55.7      43.8
+ Effect of answer processing: CoSPLADE_. . .
+ AllAnswers                79.5±2.2    28.8±1.7  61.7±3.1  55.3±2.0  46.5±2.9
+ AllAnswers-short          72.8±2.6    25.7±1.9  54.4±3.3  49.5±2.3  40.1±3.0
+ AllAnswers-long           80.4±2.1    29.3±1.8  62.0±3.2  55.6±2.1  48.9±3.0
+ LastAnswer                83.4±2.0    31.2±1.8  61.8±3.1  58.1±2.0  47.4±3.0
+ LastAnswer-short          79.2±2.2    28.1±1.8  61.4±3.3  54.3±2.1  46.4±3.0
+ LastAnswer-long           85.2±1.8    32.0±1.7  64.3±3.0  59.4±1.9  48.6±3.0
+ CoSPLADE_LastAnswer-long variants
+ flatContext               77.0±2.0    26.0±2.0  55.0±3.0  52.0±2.0  42.0±3.0
+ MSE loss                  70.9±2.4    21.6±1.7  48.7±3.4  45.2±2.3  39.6±3.1
+ cosine loss               70.4±2.5    22.6±1.7  52.5±3.3  46.9±2.2  39.0±3.0
+ Table 1. Effectiveness of different scenarios of our first-stage ranking model on the
+ TREC CAsT 2021.
554
+ our simple learning mechanism, combined with SPLADE, allows for achieving
555
+ SOTA performance.
556
+ Leveraging queries and answers history better contextualizes the current query.
557
+ The results of the flatContext scenario w.r.t. the SPLADE_goldQuery allow
558
+ comparing the impact of evidence sources related to the conversation since they
559
+ both use the same architecture (SPLADE). We can observe that it obtains better
560
+ results than SPLADE_goldQuery (e.g., 77 vs. 68.8 for the Recall@500 metric),
561
+ highlighting the usefulness of context to better understand the information need.
562
+ More detailed answers perform better. Since answers are more verbose than ques-
563
+ tions, including them is more complex, and we need to study the different pos-
564
+ sibilities (CoSPLADE_AllAnswers* and CoSPLADE_LastAnswer*). One can
565
+ see that: 1) trimming answers (*-short) into a few keywords is less effective than
566
+ considering canonical answers, but 2) it might be somehow effective when com-
567
+ bined with the associated Wikipedia passage (*-long). Moreover, it seems more
568
+ effective to consider only the last answer rather than the whole set of answers
569
+ in the conversation history11. Taking all together, these observations highlight
570
+ the importance of the way to incorporate information from answers into the
571
+ reformulation process.
572
+ Dual query representation with asymmetric loss leverages sparse query represen-
573
+ tations. The results of the flatContext scenario show that considering past
+ queries and answers at once performs better (compared to the MSE loss scenario
575
+ which is directly comparable). However, if we separate the representations and
576
+ 11 This might be due to the simple way to use past answers, i.e. Eq. 4, but all the other
577
+ variations we tried did not perform better.
578
+
579
581
+ Model                      Recall@500  MAP@500   MRR       nDCG@500  nDCG@3
+ Baselines
+ T5Mono_RawQuery            78.4±2.3    21.0±1.8  39.6±3.2  45.9±2.1  28.4±3.0
+ T5Mono_GoldQuery           86.1±1.7    44.1±1.9  78.7±2.7  68.5±1.8  64.6±2.8
+ T5Mono_CQR                 80.4±2.2    30.0±1.9  58.2±3.4  55.3±2.1  44.6±3.2
+ CoSPLADE-based second stage variants
+ CoSPLADE_NoContext_5       84.3±1.8    31.7±2.0  61.6±3.3  58.1±2.0  45.9±3.1
+ CoSPLADE_NoContext_10      83.1±1.9    32.0±1.7  66.0±3.1  59.1±1.9  49.8±2.9
+ CoSPLADE_NoContext_20      84.8±1.7    33.4±1.8  66.0±3.0  60.4±1.8  49.6±2.9
+ CoSPLADE_Context_5         85.0±1.7    35.0±1.8  68.4±3.0  61.7±1.9  51.5±2.9
+ CoSPLADE_Context_10        84.8±1.7    36.5±1.9  67.8±3.1  63.0±1.9  53.3±3.1
+ CoSPLADE_Context_20        84.9±1.7    35.5±1.8  69.8±3.0  62.2±1.9  54.4±2.9
+ Table 2. Effectiveness of different scenarios of our second-stage ranking model on
+ TREC CAsT 2021.
640
+ use an asymmetric loss function, the conclusion changes. Moreover, the compar-
641
+ ison of our best scenario CoSPLADE_LastAnswer-long with a similar scenario
642
+ trained by simply using a MSE or a cosine losses reveals the effectiveness of our
643
+ asymmetric MSE (Equation 7). Remember that this asymmetric loss encourages
644
+ the consideration of previous answers in the query encoding. This reinforces our
645
+ intuition that the conversation context, and particularly verbose answers, is im-
646
+ portant for the conversational search task. It also reveals that the context should
647
+ be included at different levels in the architecture (input and loss).
648
+ 5.2 Second-stage ranking effectiveness
650
+ In this section, we rely on the CoSPLADE_LastAnswer-long model as a first
651
+ stage ranker, and evaluate different variants of the second-stage ranking method
652
+ relying on the T5Mono model. For fair comparison, we also mention results ob-
653
+ tained by a T5Mono ranking model applied on raw and gold queries, as well as
654
+ query reformulated using a T5 generative model. Results on the TREC CAsT
655
+ 2021 dataset are presented in Table 2.
656
+ The analysis of the CoSPLADE model variants allows us to highlight different
657
+ observations regarding the usability of the context and the number of keywords
658
+ added to the query. First, adding the previous questions to the current query
659
+ in the prompt (i.e., “Context”) seems to improve the query understanding and,
660
+ therefore, positively impacts the retrieval effectiveness. For instance, when 5
661
+ keywords are added, the context allows reaching 51.5% for the nDCG@3 against
662
+ 45.9% without context. Second, the effectiveness metrics tend to increase with
663
+ the number of additional keywords, particularly for scenarios without context,
664
+ which is sensible. This trend is less noticeable for the scenarios with context since
665
+ the best metrics are alternatively obtained by the scenario adding either 10 or
666
+ 20 keywords. It is worth noting however that adding 10 or 20 keywords is more
667
+ valuable than adding only 5 (e.g. 54.4% vs. 51.5% for the nDCG@3 metric). It
668
+ thus seems that 1) keywords help to reformulate the initial information need,
669
+
670
672
+ 2) but they can lead to saturation when they are both numerous and combined
673
+ with other information.
674
By comparing the best model scenarios with the more basic scenarios applying the T5Mono second-stage ranker to raw and gold queries, we observe that our method improves retrieval effectiveness over the initial queries but is not sufficient to reach the performance of T5Mono_GoldQuery. However, the results obtained when applying T5Mono to queries reformulated by T5 highlight that the contextualization of an initial query is a difficult task. Indeed, the T5Mono_CQR scenario is less effective than the T5Mono_GoldQuery one, with a difference of between 6 and 20 points depending on the metric.
Moreover, it is interesting to note that the SPLADE model applied to raw and gold queries (first-stage ranking in Table 1) obtains lower results than the T5Mono model on the same data (second-stage ranking in Table 2). This can be explained by the different purposes of these two architectures: SPLADE is a sparse model focusing on query/document expansion, while T5Mono is particularly devoted to increasing precision. However, it is worth noting that combining SPLADE and T5Mono as first- and second-stage rankers reaches the highest effectiveness in our experimental evaluation. This shows the ability of CoSPLADE to both contextualize queries and effectively rank documents.
5.3 Effectiveness compared to TREC CAsT participants

We finally compare our approach with the TREC CAsT participants of the 2020 and 2021 evaluation campaigns. For both years, we obtain effectiveness metrics that are very close to or higher than those reached by the best participants. Indeed, CoSPLADE surpasses the best TREC participant of the 2020 evaluation campaign on Recall@1000 and nDCG@1000. For 2021, our model obtains better results than the best participant on the MRR and nDCG@3 metrics. For both years, the best participant is the h2oloo team [18,7], which uses query reformulation techniques based on either AllenAI or T5. Our results suggest that our approach, focusing on a sparse first-stage ranking model, allows combining the benefits of query expansion and document ranking in a single model that eventually helps the final reranking step. In other words, simply rewriting the query without jointly learning document ranking can hinder the overall performance of the search task.
6 Conclusion

In this paper, we have shown how a sparse retrieval neural IR model, namely SPLADE [9], can be leveraged together with a lightweight learning process to obtain a state-of-the-art first-stage ranker. We further showed that this first-stage ranker can be used to provide context to the second-stage ranker, leading to results comparable with those of the best-performing systems. Future work may explore strategies to better capture the information from the context or to explicitly handle the user feedback present in the evaluation dataset.

CoSPLADE: Contextualizing SPLADE for Conversational IR
TREC CAsT 2020                 Recall@1000  MAP@1000  MRR       nDCG@1000  nDCG@3
TREC Participant (best)        63.3         30.2      59.3      52.6       45.8
TREC Participant (median)      52.1         15.1      42.2      36.4       30.4
TREC Participant (low)         27.9         1.0       5.9       11.1       2.2
CoSPLADE                       82.4±2.0     26.9±1.5  58.1±2.9  54.2±1.8   44.0±2.7

TREC CAsT 2021                 Recall@500   MAP@500   MRR       nDCG@500   nDCG@3
TREC Participants 1 (best)     85.0         37.6      67.9      63.6       52.6
TREC Participants 2 (median)   36.4         17.6      53.4      33.6       37.7
TREC Participants 3 (low)      58.9         7.6       27.0      31.4       15.4
CoSPLADE                       84.9±1.7     35.5±1.8  69.8±3    62.2±1.9   54.4±2.9

Table 3. TREC CAsT 2020 and 2021 performances compared to the participants.
References

1. Aliannejadi, M., Chakraborty, M., Ríssola, E.A., Crestani, F.: Harnessing evolution of multi-turn conversations for effective answer retrieval. pp. 33-42. https://doi.org/10.1145/3343413.3377968, http://arxiv.org/abs/1912.10554
2. Arabzadeh, N., Clarke, C.L.A.: WaterlooClarke at the TREC 2020 conversational assistant track (2020)
3. Clarke, C.L.A.: WaterlooClarke at the TREC 2019 conversational assistant track. In: Voorhees, E.M., Ellis, A. (eds.) Proceedings of the Twenty-Eighth Text REtrieval Conference, TREC 2019, Gaithersburg, Maryland, USA, November 13-15, 2019. NIST Special Publication, vol. 1250. National Institute of Standards and Technology (NIST) (2019), https://trec.nist.gov/pubs/trec28/papers/WaterlooClarke.C.pdf
4. Culpepper, J.S., Diaz, F., Smucker, M.D.: Research frontiers in information retrieval: Report from the third strategic workshop on information retrieval in Lorne (SWIRL 2018). SIGIR Forum 52(1), 34-90 (2018). https://doi.org/10.1145/3274784.3274788
5. Dalton, J., Xiong, C., Callan, J.: CAsT 2020: The conversational assistance track overview. p. 10
6. Dalton, J., Xiong, C., Callan, J.: TREC CAsT 2019: The conversational assistance track overview. http://arxiv.org/abs/2003.13624
7. Dalton, J., Xiong, C., Callan, J.: TREC CAsT 2021: The conversational assistance track overview. p. 7 (2021)
8. Elgohary, A., Peskov, D., Boyd-Graber, J.: Can you unpack that? Learning to rewrite questions-in-context. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). pp. 5918-5924. Association for Computational Linguistics, Hong Kong, China (Nov 2019). https://doi.org/10.18653/v1/D19-1605, https://aclanthology.org/D19-1605
9. Formal, T., Lassance, C., Piwowarski, B., Clinchant, S.: From distillation to hard negative sampling: Making sparse neural IR models more effective. In: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 2353-2359. SIGIR '22, Association for Computing Machinery, New York, NY, USA (Jul 2022). https://doi.org/10.1145/3477495.3531857
10. Formal, T., Piwowarski, B., Clinchant, S.: SPLADE: Sparse lexical and expansion model for first stage ranking. In: Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 2288-2292. SIGIR '21, Association for Computing Machinery, New York, NY, USA (Jul 2021). https://doi.org/10.1145/3404835.3463098
11. Hofstätter, S., Althammer, S., Schröder, M., Sertkan, M., Hanbury, A.: Improving efficient neural ranking models with cross-architecture knowledge distillation. ArXiv abs/2010.02666 (2020)
12. Khattab, O., Zaharia, M.: ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. http://arxiv.org/abs/2004.12832
13. Krasakis, A.M., Yates, A., Kanoulas, E.: Zero-shot query contextualization for conversational search. In: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 1880-1884. SIGIR '22, Association for Computing Machinery, New York, NY, USA (Jul 2022). https://doi.org/10.1145/3477495.3531769
14. Kumar, V., Callan, J.: Making information seeking easier: An improved pipeline for conversational search. p. 10
15. Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., Soricut, R.: ALBERT: A lite BERT for self-supervised learning of language representations. In: 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net (2020), https://openreview.net/forum?id=H1eA7AEtvS
16. Lin, S.C., Yang, J.H., Lin, J.: Contextualized query embeddings for conversational search. http://arxiv.org/abs/2104.08707
17. Lin, S.C., Yang, J.H., Lin, J.: In-batch negatives for knowledge distillation with tightly-coupled teachers for dense retrieval. In: Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021). pp. 163-173. Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.repl4nlp-1.17, https://aclanthology.org/2021.repl4nlp-1.17
18. Lin, S.C., Yang, J.H., Lin, J.: TREC 2020 notebook: CAsT track. Tech. rep., TREC (Dec 2021)
19. Lin, S.C., Yang, J.H., Nogueira, R., Tsai, M.F., Wang, C.J., Lin, J.: Multi-stage conversational passage retrieval: An approach to fusing term importance estimation and neural query rewriting. http://arxiv.org/abs/2005.02230
20. Lin, S., Yang, J., Nogueira, R., Tsai, M., Wang, C., Lin, J.: Query reformulation using query history for passage retrieval in conversational search. CoRR abs/2005.02230 (2020), https://arxiv.org/abs/2005.02230
21. Mele, I., Muntean, C.I., Nardini, F.M., Perego, R., Tonellotto, N.: Finding context through utterance dependencies in search conversations. Tech. rep. (2021)
22. Nogueira, R., Jiang, Z., Pradeep, R., Lin, J.: Document ranking with a pretrained sequence-to-sequence model. In: Findings of the Association for Computational Linguistics: EMNLP 2020. pp. 708-718. Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp.63, https://www.aclweb.org/anthology/2020.findings-emnlp.63
23. Qu, C., Yang, L., Chen, C., Qiu, M., Croft, W.B., Iyyer, M.: Open-retrieval conversational question answering. pp. 539-548. https://doi.org/10.1145/3397271.3401110, http://arxiv.org/abs/2005.11364
24. Qu, C., Yang, L., Chen, C., Qiu, M., Croft, W.B., Iyyer, M.: Open-retrieval conversational question answering. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 539-548. SIGIR '20, Association for Computing Machinery, New York, NY, USA (2020). https://doi.org/10.1145/3397271.3401110
25. Qu, C., Yang, L., Qiu, M., Croft, W.B., Zhang, Y., Iyyer, M.: BERT with history answer embedding for conversational question answering. In: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 1133-1136. SIGIR '19, Association for Computing Machinery, New York, NY, USA (2019). https://doi.org/10.1145/3331184.3331341
26. Qu, C., Yang, L., Qiu, M., Zhang, Y., Chen, C., Croft, W.B., Iyyer, M.: Attentive history selection for conversational question answering. In: Proceedings of the 28th ACM International Conference on Information and Knowledge Management. pp. 1391-1400 (2019)
27. Reddy, S., Chen, D., Manning, C.D.: CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics 7, 249-266 (2019). https://doi.org/10.1162/tacl_a_00266, https://aclanthology.org/Q19-1016
28. Vakulenko, S., Longpre, S., Tu, Z., Anantha, R.: Question rewriting for conversational question answering. In: Proceedings of the 14th ACM International Conference on Web Search and Data Mining. pp. 355-363. ACM. https://doi.org/10.1145/3437963.3441748
29. Voskarides, N., Li, D., Panteli, A., Ren, P.: ILPS at TREC 2019 conversational assistant track. p. 4
30. Voskarides, N., Li, D., Ren, P., Kanoulas, E., de Rijke, M.: Query resolution for conversational search with limited supervision. pp. 921-930. https://doi.org/10.1145/3397271.3401130, http://arxiv.org/abs/2005.11723
31. Yan, X., Clarke, C.L.A., Arabzadeh, N.: WaterlooClarke at the TREC 2021 conversational assistant track (2021)
32. Yang, J.H., Lin, S.C., Wang, C.J., Lin, J.J., Tsai, M.F.: Query and answer expansion from conversation history. In: TREC (2019)
33. Yu, S., Liu, J., Yang, J., Xiong, C., Bennett, P., Gao, J., Liu, Z.: Few-shot generative conversational query rewriting. http://arxiv.org/abs/2006.05009
34. Zamani, H., Trippas, J.R., Dalton, J., Radlinski, F.: Conversational information seeking (Jan 2022). https://doi.org/10.48550/arXiv.2201.08808, arXiv:2201.08808 [cs]
+
5NE3T4oBgHgl3EQfQgli/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
5tE2T4oBgHgl3EQfkQe0/content/tmp_files/2301.03977v1.pdf.txt ADDED
@@ -0,0 +1,1182 @@
Service Differentiation and Fair Sharing in Distributed Quantum Computing

Claudio Cicconetti*, Marco Conti, Andrea Passarella
IIT, National Research Council, Pisa, Italy

Abstract

In the future, quantum computers will become widespread and a network of quantum repeaters will provide them with end-to-end entanglement of remote quantum bits. As a result, a pervasive quantum computation infrastructure will emerge, which will unlock several novel applications, including distributed quantum computing, that is, the pooling of resources on multiple computation nodes to address problem instances that are unattainable by any individual quantum computer. In this paper, we first investigate the issue of service differentiation in this new environment. Then, we define the problem of how to select which computation nodes should participate in each pool, so as to achieve a fair share of the quantum network resources available. The analysis is performed via an open source simulator and the results are fully and readily available.

Keywords: Distributed Quantum Computing, Quantum Internet, Quantum Routing
1. Introduction

Quantum Computing (QC) exploits the properties of matter at very small scale to solve some problems much faster than a classical counterpart. Even though QC was theorized 40 years ago [1], only recently have the technology evolution and a spur of investments made it possible to obtain practical results and speculate about approaching mass deployments [2]. QC is already being used in the chemical and pharmaceutical industry, while new applications are being progressively unlocked in material science, Machine Learning (ML) and engineering optimization, production and logistics, and post-quantum security [3]. Essentially, the computational advantage of QC stems from the properties of superposition and entanglement of the qubits (i.e., the "quantum bits"): (i) superposition, which means that a qubit can be in a combination of multiple states at the same time; and (ii) entanglement, which is a property exhibited by a set of qubits that maintain their correlation even when separated in space or time.

*Corresponding author
Email addresses: c.cicconetti@iit.cnr.it (Claudio Cicconetti), m.conti@iit.cnr.it (Marco Conti), a.passarella@iit.cnr.it (Andrea Passarella)
Preprint submitted to Elsevier, January 11, 2023
arXiv:2301.03977v1 [quant-ph] 10 Jan 2023
We can expect that the computational power of a single QC will remain relatively limited in the near future, due to scalability issues in maintaining a very stable and controlled environment to cope with the flimsy nature of qubits. On the other hand, the realization of the Quantum Internet is progressing steadily [4], with the long-term goal of enabling the entanglement of qubits that reside in QCs across geographical distances. With the diffusion of QCs and their gradual interconnection via quantum networks, a pervasive infrastructure will therefore materialize, with the potential to opportunistically combine resources from multiple QCs for the execution of specialized algorithms in a distributed fashion. A general framework for such distributed quantum computing has been proposed in [5], where the authors propose practical examples, e.g., a quantum version of k-means clustering, which is used in unsupervised ML.
A preliminary analysis of the allocation of resources among multiple quantum computers based on the characteristics of the underlying quantum network was presented in [6]. In the same work, we also proposed a practical solution inspired by a well-known algorithm in classical data networks, i.e., Deficit Round Robin (DRR) [7], which we evaluated through simulations. We found that some fundamental properties of quantum networks immensely impact the provisioning of resources, which calls for new research in this area. This is especially manifest when considering networks of first-generation (1G) quantum repeaters [8], which do not have error-correction capabilities and are expected to be next in line for industrialization and mass deployment in the following years [9].
The contribution of this paper is twofold.

1. We evaluate the performance of the resource allocation algorithm proposed in [6] with differentiated services coexisting within the same quantum network. Furthermore, we do so by comparing its performance with two alternative algorithms, inspired by equivalents in classical problems with similar settings. This extends and completes the preliminary analysis in our previous work.

2. We define a new problem related to the fair sharing of resources in a quantum network among multiple applications wishing to perform distributed QC: how to best choose the peers among those available? After introducing a mathematical formulation of the problem, we propose a greedy approximation algorithm, which is then evaluated thoroughly and compared to two alternatives.

All the experiments in the paper are carried out via simulations, which are fully reproducible and publicly available on GitHub, including the simulation software source code, the scripts to run the analysis, and the artifacts and plots.

The rest of this paper is structured as follows. We summarize the system model assumptions and findings of [6] in Sec. 2. We then review the related work on routing in quantum networks in Sec. 3. The main contributions are reported in Sec. 4, where we study service differentiation, and in Sec. 5, where we tackle the problem of fair sharing of resources. Sec. 6 concludes the paper and identifies the most important open research directions in this context.
2. System Model

In this section, we briefly describe the quantum network abstract model adopted in the paper (Sec. 2.1), the resource allocation algorithm proposed in [6] (Sec. 2.2), and the simulation methodology and tool (Sec. 2.3). For more details, we refer the reader to [6], in particular Sections II and IV, and references therein.

[Figure 1: only the caption is recoverable from the extracted text.]
Figure 1: Quantum network model. End-to-end entanglement can be established between two nodes s and d for which there is a path in G(V, E), where the intermediate nodes perform entanglement swapping. F̄ is the fidelity with which the local link EPR pairs are generated; q is the measurement success probability, which affects the entanglement swapping procedure; C_ij is the capacity of edge e_ij, in EPR-pairs/s.
2.1. Quantum network model

The quantum network model is illustrated in Fig. 1 as a graph G(V, E), where nodes represent quantum devices (repeaters or computers) and edges represent direct quantum communication links between them [10]. We assume that maximally entangled EPR (Einstein-Podolsky-Rosen) pairs, e.g., |Φ+⟩, are generated periodically at each link, with initial fidelity equal to F̄ ∈ [0.5, 1]. The fidelity is a measure of how close a given quantum state is to a reference state, with 1 meaning that they are identical. Every edge e_ij ∈ E has a given capacity C_ij, which is the rate of generation of EPR-pairs and in turn depends on the physical characteristics of the quantum network devices and links. Quantum networks will offer the capability to produce entangled EPR-pairs between remote nodes, i.e., nodes that are interconnected through intermediate hops, which perform the procedure of entanglement swapping to this purpose. Such a procedure is stochastic in nature in 1G quantum repeaters, and we assume that it succeeds with probability q [11]. An end-to-end EPR-pair can only be used in a meaningful manner if all the entanglement swaps along the path have succeeded, which leads to the following formula for the maximum net rate that can be used by a quantum application consuming resources along a path p = {(s, v_1), ..., (v_N, d)}:

    r(p) ≤ q^(|p|−1) · min_{e_ij ∈ p} C_ij,    (1)

where |p| is the length of the path p, in number of edges. Furthermore, entanglement swapping reduces the fidelity of the end-to-end entangled EPR-pair according to the following formula [12]:

    F(p) = 1/4 + (3/4) · ((4F̄ − 1)/3)^|p|.    (2)

We can say that r(p) and F(p) define the effective rate at which two end-points can transfer EPR-pairs, which is the logical equivalent of throughput in classical networks.
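The two formulas above can be sketched as small helper functions; this is an illustration, not code from the paper's simulator.

```python
# Net EPR-pair rate and end-to-end fidelity of a path, per Eq. (1) and Eq. (2).

def net_rate(capacities, q):
    """Eq. (1): r(p) <= q^(|p|-1) * min over edges of C_ij.

    capacities: per-edge EPR-pair generation rates along the path
    q: entanglement swapping success probability
    """
    hops = len(capacities)
    return min(capacities) * q ** (hops - 1)

def end_to_end_fidelity(f_initial, hops):
    """Eq. (2): fidelity after |p| = hops entanglement swaps."""
    return 0.25 + 0.75 * ((4.0 * f_initial - 1.0) / 3.0) ** hops

# Example: a 3-hop path with link capacities in EPR-pairs/s
r = net_rate([100.0, 50.0, 200.0], q=0.5)   # 50 * 0.5^2 = 12.5
f = end_to_end_fidelity(0.95, hops=3)       # below the initial 0.95
```

Note how the net rate decays geometrically with the path length, which is why shorter paths are preferred by the allocation algorithm of Sec. 2.2.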
In our previous work [6], we classified quantum applications into two categories:

– Flows. They are characterized by the need for two specific nodes to exchange a constant flow of EPR-pairs in a point-to-point manner for the whole duration of a session. If the rate of EPR-pairs falls below the requested amount, then the application Quality of Service (QoS) degrades. Examples of such applications are clock synchronization and Quantum Key Distribution (QKD).

– Apps. In this category we find distributed quantum computing applications, each characterized by a given quantum computer (host) running an algorithm that pools the resources of a number of other quantum computers (peers or workers). There is no required EPR-pair rate, but the application wishes to consume as many EPR-pairs as possible to complete the execution faster. This kind of service is equivalent to best-effort or elastic traffic in a classical data network. Since the network operator might provide a differentiated service, we foresee that each app is also assigned a weight (ρ), which is a relative indication of how much throughput (in EPR-pairs/s) it should be given in the long term compared to another app with a different weight.

For both categories, we foresee that a minimum fidelity (F^min) can also be a user requirement. Since in this work we focus on distributed quantum computing applications, whose traffic is better modeled by apps than flows, in the rest of the paper we do not consider the latter further.
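The two categories can be captured by a simple data model; the field names below are ours, chosen for illustration, not taken from the paper's code.

```python
# Illustrative data model for the two application categories of Sec. 2.1.
from dataclasses import dataclass, field

@dataclass
class Flow:
    """Rate-constrained point-to-point application (e.g., QKD)."""
    src: int
    dst: int
    rate: float          # required EPR-pairs/s
    f_min: float = 0.0   # minimum acceptable fidelity

@dataclass
class App:
    """Elastic distributed-QC application."""
    host: int                                   # quantum computer running it
    peers: list = field(default_factory=list)   # candidate workers
    weight: float = 1.0                         # rho: relative elastic share
    f_min: float = 0.0
```

For instance, `App(host=0, peers=[1, 2], weight=2.0)` models an app that should receive, in the long term, twice the throughput of a weight-1 app.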
2.2. QDRR resource allocation algorithm

We report below a recap of the apps' resource allocation algorithm in [6], which in the following will be called Quantum DRR (QDRR), as it was inspired by the well-known DRR algorithm [7]. The basic idea of QDRR is to provide all applications with a fair chance to be allocated a fraction of the quantum network resources. This is enforced by visiting the applications in round robin: at each visit, the application can be allocated capacity across multiple paths towards its peers, up to a given amount that is proportional to the application's weight. Shortest paths are always preferred to longer ones, because they are more efficient. A more detailed explanation follows.

The algorithm has two system parameters, which are set based on our previous results: k = 4 and the round size φ = 10 EPR-pairs/s. For a set of apps i ∈ A, each defined by a host node h_i and a set of candidate peers W_i, QDRR consists of the following steps:

0. ∀i ∈ A, ∀j ∈ W_i: find the k shortest paths from h_i to w_ij and add them to P_i; at the end, each P_i contains up to k paths for each possible peer of i. Initialize the active list of apps L with the identifiers of all the apps A. Copy the graph G(V, E) into a temporary copy G′.

1. If L = ∅, terminate. Otherwise, let a be the next application to be visited in L in round robin order.

2. Set the residual capacity that can be used by the current app a in this round to δ_a = φ · ρ_a / Σ_{i∈A} ρ_i, i.e., a fraction of the round size φ proportional to its priority.

3. Select the shortest path p ∈ P_a. The shortest path is the one that requires the least amount of resources among those available for the current app, according to Eq. (1), and gives the maximum fidelity, according to Eq. (2). If p is not feasible anymore because it contains edges that have been removed from G′, discard it and move to the next shortest path. If P_a = ∅, remove a from the active list L and continue from Step 1.

4. Determine the gross rate to be assigned to the current application a along path p at this round as R = min{δ_a, min_{e∈p} C_e} ≥ 0. The corresponding net rate will be r = R · q^(|p|−1), as per Eq. (1).

5. Subtract R from the capacity of all the edges along the path p. Remove from G′ all the vanishing edges.

6. Update δ_a ← δ_a − R. If δ_a = 0, restart from Step 1; otherwise continue from Step 3.
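The steps above can be condensed into a minimal Python sketch. This is not the paper's C++ simulator: Step 0 (k-shortest-path computation) and the fidelity constraint are omitted, and `paths` and `capacity` are hypothetical data structures (precomputed candidate paths sorted shortest-first, and a mutable map of residual edge capacities).

```python
# Simplified QDRR loop (Steps 1-6 of Sec. 2.2).
from collections import deque

EPS = 1e-9  # capacities below this are treated as vanished edges

def qdrr(apps, paths, capacity, q, phi=10.0):
    """apps: app_id -> weight (rho); paths: app_id -> list of paths, each a
    list of edge keys, sorted shortest-first; capacity: edge -> residual
    EPR-pairs/s (mutated in place). Returns the net rate per app."""
    total_w = sum(apps.values())
    net = {a: 0.0 for a in apps}
    active = deque(apps)                             # round-robin list L
    while active:                                    # Step 1
        a = active.popleft()
        delta = phi * apps[a] / total_w              # Step 2: round quantum
        progressed = False
        for p in paths[a]:                           # Step 3: shortest-first
            if delta <= EPS:
                break
            if any(capacity.get(e, 0.0) <= EPS for e in p):
                continue                             # path no longer feasible
            gross = min([delta] + [capacity[e] for e in p])  # Step 4
            net[a] += gross * q ** (len(p) - 1)      # net rate, Eq. (1)
            for e in p:                              # Step 5: consume edges
                capacity[e] -= gross
            delta -= gross                           # Step 6
            progressed = True
        if progressed:
            active.append(a)                         # revisit in next round
    return net
```

For example, a single app over a two-edge path with 20 EPR-pairs/s per edge and q = 0.5 ends up with a net rate of 10, i.e., the full gross capacity discounted by one swap.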
+ 2.3. Simulation methodology and tool
+ We conclude the section by describing the methodology and tool adopted
+ for the performance evaluation in Sec. 4 and Sec. 5.
+ Like in [13], we use a Poisson Point Process (PPP) to generate the position
+ of an average of µ nodes in a flat square grid with edge size 60 km; a link is
+ added between two nodes with probability plink = 0.5 if their Euclidean distance
+ is smaller than a threshold τ. The capacity of each link is drawn from a r.v.
+ uniformly distributed between 1 Bell pair/s and 400 Bell pairs/s, as in [10]. The
+ initial fidelity of Bell pairs is F̄ = 0.95, which is widely used in the literature,
+ and the entanglement swapping success probability is q = 0.5, which is the
+ best value that can be obtained with linear optics components [11]. Based on
+ previous results in [6] we have selected two representative topologies:
+ – dense: µ = 100, τ = 20 km;
+ – sparse: µ = 50, τ = 15 km.
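The random topology generation described above can be sketched as follows; a hedged Python example (function and variable names are ours, and the Poisson draw uses simple inversion since the standard library has no Poisson sampler).

```python
import math
import random

# Sketch of the random topology generation described in the text (names ours):
# a Poisson-distributed number of nodes placed uniformly in a 60 km x 60 km
# square, links added with probability p_link between nodes closer than tau km,
# and uniform link capacities in [1, 400] Bell pairs/s.

def poisson(mu, rng):
    """Draw from a Poisson(mu) distribution via CDF inversion."""
    x, p, s, u = 0, math.exp(-mu), math.exp(-mu), rng.random()
    while u > s:
        x += 1
        p *= mu / x
        s += p
    return x

def make_topology(mu, tau, edge_km=60.0, p_link=0.5, seed=42):
    rng = random.Random(seed)
    n = poisson(mu, rng)
    pos = [(rng.uniform(0, edge_km), rng.uniform(0, edge_km)) for _ in range(n)]
    links = {}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pos[i], pos[j]) < tau and rng.random() < p_link:
                links[(i, j)] = rng.uniform(1.0, 400.0)  # capacity, Bell pairs/s
    return pos, links

# dense: mu = 100, tau = 20 km; sparse: mu = 50, tau = 15 km
```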
+ The following metrics are used to evaluate the performance:
+ – The net rate of the apps, i.e., the number of EPR-pairs that the end-points
+ can consume in the unit of time, which is a direct operational measure from
+ the point of view of the end users;
+ – The max-min fairness, which is the difference between the highest and lowest
+ net rates assigned to the apps.
+ – The fidelity, weighted for each app by the net rate assigned to the corresponding
+ peer, which impacts the accuracy and convergence of the distributed
+ QC applications.
+ – The inter-class unfairness index, for a class of apps with R priority weights
+ ρj, provided that each app is allocated net rate ri, defined as:
+ Σ_{i=2}^{R} ( ri/r1 − ρi/ρ1 )^2    (3)
+ This measures the distance of the net allocations with respect to an ideal case
+ where the proportions between rates are exactly the same as the proportions
+ between priorities.
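As a concrete reading of Eq. (3), a minimal helper (names ours) that returns 0 for an allocation exactly proportional to the priorities:

```python
# The inter-class unfairness index of Eq. (3) as a small helper (names ours).
# rates[i] and weights[i] refer to app i; app 0 plays the role of the reference
# app "1" in Eq. (3). An allocation exactly proportional to the weights gives 0;
# larger values mean the rate proportions drift away from the priority proportions.

def interclass_unfairness(rates, weights):
    r1, rho1 = rates[0], weights[0]
    return sum((ri / r1 - rhoi / rho1) ** 2
               for ri, rhoi in zip(rates[1:], weights[1:]))
```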
+ We used a Monte Carlo approach: for any combination of the parameters
+ under study, we simulated 6,000 drops with randomly generated networks and
+ random workload. Statistical significance has been verified for all the metrics
+ in the experiments performed, but we seldom include error bars in plots for
+ better readability. The simulation tool used is a custom simulator, developed
+ in C++ and using the Boost Graph Library, available as open source under an
+ MIT license on GitHub:
+ https://github.com/ccicconetti/quantum-routing/
+ For full reproducibility, the repository also includes the scripts to run the
+ experiments, as well as the artifacts obtained and the Gnuplot files to produce
+ the plots: see tag v1.5, experiments labeled 004 and 005.
+ 3. Related Work
+ The literature on quantum networking and distributed QC is not vast: even
+ though the basic ingredients have been known for a long time (consider for
+ instance the seminal paper by Bouwmeester et al. on quantum teleportation
+ [14], published in Nature in 1997), only recently have there been investments
+ of a magnitude sufficient for the technology to take off. This revamped interest
+ has triggered new research activities in this area, briefly reviewed below.
+ In general terms, the problem of quantum routing is formulated as follows:
+ given a network of quantum nodes (repeaters or computers) and a set of traffic
+ flows identified by their sources, destinations, and application requirements (e.g.,
+ the minimum fidelity), find the "best" paths that fulfill the constraints. Some
+ works have studied the problem by reusing the findings in the area of routing
+ in classical networks. Van Meter et al. proposed a quantum version of the
+ famous Dijkstra's shortest path algorithm, which was shown to give very good
+ performance with an appropriate selection of the routing metric that considers
+ the specific properties of quantum networks [15]. More recently, Caleffi et al.
+ have proposed a slightly less efficient variation of Dijkstra's algorithm that can
+ work with non-isotonic routing metrics, which they have advocated to provide
+ superior performance in selected use cases [16]. Dijkstra's algorithm is also
+ the subject of [17], where the authors lay some mathematical foundations that
+ allow them to derive upper bounds of performance in specific network topologies,
+ including grid and ring.
+ A different direction is explored by Pant et al., who studied the distribution
+ of routing information to the nodes [18]; for this they propose a time-slotted
+ approach: in the first part of the slot every repeater tries to create a local
+ entanglement with all its neighbors, then in the second part the paths are
+ established as instructed by a centralized authority. One interesting aspect of
+ the paper is that multiple paths are selected for the same (source, destination)
+ pair to maximize the rate of end-to-end EPR-pairs. We have also adopted this
+ time-slotted model in [19], where we have investigated the issue of "scheduling"
+ of traffic flows, i.e., determining the order in which to assign paths to pending
+ requests, in case the network resources are not sufficient to serve them all. This
+ problem is called "distribution" in [13], where the authors formulate it as an
+ Integer Linear Program (ILP), for which they derive closed-form performance
+ bounds in the case of a homogeneous chain of quantum repeaters. The issue
+ is also addressed in [10], where the authors have proposed to split the overall
+ quantum routing problem in two to reduce the computational complexity: first,
+ they determine the rates achievable by the traffic flows under the given network
+ constraints using an approach based on multi-commodity flow optimization,
+ then they map these rates to paths. The paper adopts a network model using
+ probabilistic entanglement swapping, which we reuse in this work (described in
+ Sec. 2).
+ An important reference for our study is [20], where the authors study the
+ allocation strategy of traffic flows for which the paths have been pre-determined:
+ they do so by borrowing the fairness concept from data networks and re-using
+ traditional algorithms from the relevant literature. In our paper, we also borrow
+ from the same literature, though we apply the concepts to a different class of
+ applications, as will become clear in the next section. As a matter of fact, all
+ the scientific works cited above have focused on point-to-point traffic flows,
+ while in this paper we focus on a different type of traffic that is more suitable
+ to model distributed QC, with distinguishing features that do not allow the
+ reuse of state-of-the-art solutions. Rather, we claim that any existing
+ routing/allocation/scheduling solutions should work in parallel to our proposed
+ scheme to provide an effective resource allocation to each of the two traffic
+ classes.
+ In addition to mere routing aspects, system-wide studies have also been
+ published. We mention [21], which is a compendium of several previous studies
+ from the same authors that illustrates an overall architecture of the Quantum
+ Internet, also including application, protocol, and deployment aspects at a high
+ level. On the other hand, other works have focused on specific components,
+ which are complementary to the research activity presented, e.g., [22] on
+ congestion control in transport protocols and [23] on the link layer, with a focus
+ on hardware and physical-layer considerations.
+ Furthermore, some research groups have been working to define the basic
+ principles of distributed QC. Parekh et al. have defined an elegant framework
+ for the parallel execution of a broad class of quantum algorithms on multiple
+ nodes [5], both using remote entanglement and with Local Operations and
+ Classical Communication (LOCC) only, also studying in depth three classes
+ of algorithms: variational quantum eigensolver, low-depth quantum amplitude
+ estimation, and quantum k-means clustering. In [24] the authors address the
+ problem of the efficient compilation of circuits for distributed QC by considering
+ that some gate operations will be executed remotely, hence with very different
+ latency and reliability than on-chip operations. The research of Dahlberg et al.
+ moved in the same direction and went as far as defining a set of low-level
+ instructions (called NetQASM) for distributed QC systems seamlessly supporting
+ local and remote gates [25]. These works confirm that there is a growing interest
+ in distributed QC, which is a motivation for our work.
+ On another line of research, solutions have been proposed to trade capacity
+ for fidelity, by using purification (or distillation) techniques [26]. In brief,
+ they consist in entangling multiple pairs of qubits with low fidelity and then
+ collapsing them into a single one with high fidelity. We do not consider network-
+ level purification in this work to remain consistent with the positioning of our
+ contribution within the realm of 1G-repeater quantum networks. End-to-end
+ purification is also possible, that is, the operation is performed by quantum
+ computers after the qubits have been entangled all along the path(s). This is
+ studied, e.g., in [27], where the authors propose a quantum routing algorithm
+ that maximizes the rate of EPR-pairs, while deciding not only the paths but
+ also the purification patterns. These works complement our contributions since
+ they operate on constant-rate point-to-point flows only and they do not take
+ into account network provisioning issues.
+ Finally, in line with the vast majority of prior works, we only consider
+ bipartite entanglement, i.e., made of two qubits, each situated in a quantum
+ computer. While there are some promising theoretical studies on repeater-
+ assisted multi-partite entanglement, i.e., involving more than two qubits (e.g.,
+ [28]), the research in that area is still in its infancy. One noteworthy contribution
+ is [29], where the authors propose to adopt n-fusion of bi-partite entanglements
+ to create higher-level entanglements between n > 2 quantum computers. An
+ appealing property that they demonstrate, under some assumptions, is that the
+ entanglement rate between nodes remains constant with increasing distance,
+ in number of hops. Ways to exploit this phenomenal quality are still under
+ study. Multi-partite entanglement in a quantum network is generated starting
+ from elementary bi-partite entanglements, which is the subject of this work.
+ 4. Service Differentiation
+ In this section we extend the study in [6] by analyzing the QDRR algorithm
+ along two directions which have so far remained uninvestigated: service
+ differentiation by assigning apps different ρ values (Sec. 4.1) and apps with different
+ fidelity thresholds F min (Sec. 4.2). In all the simulations in this section the
+ peers are selected as follows: for each app i on node v we draw at random
+ between 2 and 4 candidate nodes by sampling in a uniform manner from the set
+ of all nodes that are reachable from v in 2–7 hops. For benchmarking purposes,
+ QDRR is compared to two baseline algorithms: random and best-fit. The Step 0
+ in Sec. 2.2 is the same for all the algorithms, that is, for each app we find k = 4
+ shortest paths to reach any peer. All the algorithms then loop through all the
+ possible paths for all candidates for each app until there are no more feasible
+ paths to be assigned, but they differ in how they do so: QDRR is described by
+ Steps 1–6 in Sec. 2.2 (and in far more detail in Sec. IV-C in [6]), while:
+ – Random: at each iteration one app with remaining paths is chosen at random
+ and assigned its shortest path among any of its peers, which is allocated
+ the maximum rate along the path. Random is representative of allocation
+ algorithms that provide fair access to the quantum network resources in a
+ per-app manner.
+ – Best-fit: at each iteration, select the app with the shortest path to reach
+ one of its peers and allocate the maximum rate along that path. Best-fit is
+ representative of allocation algorithms that strive to maximize the efficiency,
+ that is the ratio between net entanglement rate and the quantum network
+ capacity allocated.
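The two baselines share the same greedy loop and differ only in how the next app is picked; a hedged sketch (our own rendering, not the paper's code):

```python
import random

# Illustrative sketch of the greedy baselines (names ours): at each iteration
# pick an app -- at random, or the one with the shortest remaining path
# (best-fit) -- and allocate the maximum rate along its shortest path.

def greedy_allocate(paths, capacity, q=0.5, best_fit=True, rng=None):
    """paths: dict app -> list of candidate paths (edge lists).
    Returns the net rate per app."""
    rng = rng or random.Random(0)
    net_rate = {app: 0.0 for app in paths}
    pending = {app: list(ps) for app, ps in paths.items()}
    while any(pending.values()):
        candidates = [a for a, ps in pending.items() if ps]
        if best_fit:  # app whose best remaining path has the fewest hops
            app = min(candidates, key=lambda a: min(len(p) for p in pending[a]))
        else:         # random baseline
            app = rng.choice(candidates)
        path = min(pending[app], key=len)
        pending[app].remove(path)  # never revisited: the algorithm is greedy
        R = min(capacity[e] for e in path)  # maximum rate along the path
        if R > 0:
            net_rate[app] += R * q ** (len(path) - 1)
            for e in path:
                capacity[e] -= R
    return net_rate
```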
+ Figure 2: Number of iterations (except Step 0 in Sec. 2.2) with random, best-fit, QDRR, in a
+ dense vs. sparse topology, when increasing the number of apps with ρ ∈ {1, 2, 4}.
+ Unlike QDRR, both random and best-fit can be considered greedy algorithms,
+ since they never backtrack to a previously selected combination of
+ (app, peer, path), which is always allocated as much throughput as possible.
+ Therefore, their worst-case computational complexity (except Step 0) is
+ O(k|A| log |A| E[Wi]): k is the number of shortest paths selected in Step 0; |A|
+ is the number of apps; log |A| takes into account the random selection or the
+ extraction from a sorted data structure, respectively for the random and best-fit
+ resource allocation algorithms; and E[Wi] is the average number of peers per
+ app. The complexity of QDRR is discussed in [6] and it depends on the choice
+ of φ. To give an idea of the relative average complexity between QDRR and
+ random/best-fit, we report in Fig. 2 their number of iterations in the simulations
+ discussed in Sec. 4.1 below. As can be seen, in a sparse scenario the time
+ complexity of QDRR is only marginally higher than that of the greedy algorithms
+ random and best-fit, but it becomes clearly higher in a dense scenario. If this is
+ an issue, the value of φ can always be tuned to reduce the number of iterations,
+ trading off fairness for speed.
+ 4.1. Different traffic priorities
+ In a first batch of results we increase the load of the network from 10 to
+ 1000 apps. For every app, its priority weight ρ is drawn randomly in {1, 2, 4},
+ Figure 3: Ratio between the residual capacity and the total capacity with random, best-fit,
+ QDRR, in a dense vs. sparse topology, when increasing the number of apps with ρ ∈ {1, 2, 4}.
+ while F min is the same for all and equal to 0.7. As shown in Fig. 3, for all
+ the allocation algorithms and topologies the relative residual capacity decreases
+ with a sub-linear trend as the load increases, which is due to the exponential
+ relation between the net rate, in EPR-pairs/s, and the number of hops as per
+ Eq. (1). The utilization is higher in a sparse topology, while the difference
+ between the allocation algorithms is negligible. In the following we report only
+ the results in a dense topology, due to limited space; the complete results can
+ be retrieved from the public GitHub repo above.
+ Figure 4: Max-min fairness with random, best-fit, QDRR, in a dense topology, when increasing
+ the number of apps with ρ ∈ {1, 2, 4}.
+ We begin by showing the max-min fairness in Fig. 4. Since random and
+ best-fit do not differentiate based on the ρ values, for them we show an
+ aggregate average, while we keep separate curves for QDRR. With very low loads,
+ the max-min fairness is good, i.e., low, for all allocation algorithms, because
+ there is little contention on resources. However, as the load increases, the
+ max-min fairness increases steeply and the behavior is significantly affected by
+ the allocation algorithm: with random the curve reaches a peak, which then
+ slowly decreases towards high loads; best-fit performs worst, as expected, by
+ continuing to increase, even though only slightly after the initial spurt; for all
+ traffic categories, QDRR provides increasingly better fairness as the load
+ increases. The latter can be explained as follows. When there are few apps, it is
+ likely that there are not many shared nodes/paths, hence QDRR does not really
+ have a chance to distribute the resources proportionally to the apps' weights;
+ on the other hand, with more apps, it is increasingly easier for QDRR to enforce
+ priorities by regulating the resources in common.
+ Figure 5: Inter-class unfairness index with random, best-fit, QDRR, in a dense topology, when
+ increasing the number of apps with ρ ∈ {1, 2, 4}.
+ To better show the service differentiating behavior of QDRR, we show the
+ inter-class unfairness index, as defined in Eq. (3), in Fig. 5. It is clear that
+ QDRR is the only allocation algorithm providing the apps with a clear service
+ differentiation, which improves as the load increases for the same reason above.
+ Finally, in Fig. 6 we show the net rate/app, in EPR-pairs/s: even though
+ it slowly decreases for all the resource allocation algorithms, we can see that
+ random and best-fit achieve better rates. Therefore, QDRR is effective in
+ differentiating service across apps with different priority weights, but this incurs
+ a cost, in terms of net rate.
+ Figure 6: Net rate/app with random, best-fit, QDRR, in a dense topology, when increasing
+ the number of apps with ρ ∈ {1, 2, 4}.
+ Figure 7: Max-min fairness with random, best-fit, QDRR, in dense vs. sparse topologies, when
+ increasing the number of apps with F min ∈ {0.7, 0.8, 0.9}.
+ 4.2. Different fidelity thresholds
+ We now report the results obtained in a new batch, which follows the same
+ direction as above, but we set ρ = 1 for all apps and instead draw the minimum
+ fidelity F min at random from {0.7, 0.8, 0.9}. We show the max-min fairness
+ in Fig. 7, for both topologies. Like in Sec. 4.1, QDRR achieves significantly
+ better performance than random and best-fit, the latter performing worst.
+ Figure 8: Net rate/app with random, best-fit, QDRR, in dense vs. sparse topologies, when
+ increasing the number of apps with F min ∈ {0.7, 0.8, 0.9}.
+ However, as can be seen in Fig. 8, even with a mix of fidelity thresholds, the
+ net rate/app of QDRR is slightly less than that with both greedy allocation
+ strategies, which confirms the trade-off already identified in the previous batch
+ of experiments. It is worth noting that such a trade-off is well known also in
+ other contexts: for instance, in cellular systems, greedy scheduling algorithms
+ that prioritize user terminals with good channel conditions (often known as "max
+ C/I") are bound to provide a higher cell throughput at the cost of an inferior
+ fairness compared to milder strategies such as Proportional Fair [30].
+ In conclusion, like for heterogeneous weights, QDRR can provide service
+ differentiation to apps with different fidelity thresholds, but the net rate achievable
+ is slightly reduced compared to alternatives that do not differentiate.
+ 5. Fair Sharing
+ In this section we address a problem that is preliminary and complementary
+ to that defined in our previous work [6] and investigated in Sec. 4 with
+ differentiated service. So far we have assumed that all nodes are equal, and that
+ each host wishes to cooperate with a given set of workers, without elaborating
+ further on how such a set is selected; for performance evaluation purposes, such
+ a set was selected randomly from candidates depending only on the distance as
+ per the specific scenario simulated. In the following, instead, we address
+ specifically this issue: indeed, how does one decide which are the possible workers
+ of a host node?
+ Figure 9: Example of quantum network with three end users, labeled from u1 to u3, wishing to
+ host distributed quantum computing algorithms with data center nodes d1 or d2, represented
+ with multiple co-located circles to indicate that they are expected to be more powerful than end
+ users and, possibly, they might have a more complex internal structure that is not elaborated
+ further in this paper; the other nodes in G(V, E) participate in the end-to-end entanglement of
+ qubits as intermediate hops. Like in Sec. 2.1/Fig. 1, the network is characterized by capacity
+ Cij of the link between nodes i and j, initial generation fidelity F, and entanglement swapping
+ success probability q.
+ We adapt our system model to the new landscape by specializing the role of
+ nodes. As illustrated in the example in Fig. 9, we assume that nodes can be of
+ three types: (i) end users (ui): they act as the "home QCs" of customers wishing
+ to run quantum algorithms on them, also exploiting quantum computation
+ resources offered by other nodes through distributed quantum computing; a
+ customer operates the end user via a classical computer for, e.g., input preparation,
+ circuit compilation, and quantum network resource reservations; (ii) data centers
+ (dj): these are QCs that can provide customers with extra quantum computation
+ capacity to be added to their respective end users for the purpose of solving
+ bigger instances of their problems via distributed quantum computing, exploiting
+ end-to-end entanglement of qubits through an underlying quantum network;
+ (iii) intermediate nodes: quantum repeaters that do not consume or offer QC
+ capacity but contribute to the quantum network by performing entanglement
+ swapping between links as instructed by the resource allocation algorithm (e.g.,
+ QDRR). A node can play any combination of the three roles above. We assume
+ that end user ui may perform distributed quantum computing with any
+ combination of data centers {dj}, following commercial agreements that are out of
+ the scope of this work. All other quantum network assumptions in Sec. 2.1
+ remain the same. In Sec. 5.1 we formulate the problem in mathematical terms
+ and propose a solution, which is then evaluated via simulation in Sec. 5.2,
+ compared to two alternatives.
+ 5.1. Quantum Workers' Assignment Problem
+ In its most general formulation, the problem of how to best assign each host
+ a set of workers depends on several factors that may not be known or under the
+ control of the quantum network operator. They include, for instance: the
+ quantum algorithms to be run, the different characteristics of the end user and data
+ center QCs, the schedule of the task execution, not to mention administrative
+ factors (contracts, partnerships, billing issues) and technical constraints (do the
+ QCs need to have the same hardware or software?). Since both quantum
+ networking and distributed quantum computing are in their infancy, we consider
+ it unrealistic to address the problem under such general settings. Rather, we
+ focus on aspects that are captured by our network model (in Sec. 2.1) with the
+ goal of providing an initial understanding of the problem, to be used as a
+ stepping stone by future studies as the technologies involved become more mature.
+ Our formulation, in natural language, is the following:
+ Quantum Workers' Assignment Problem (QWAP): find the sets of workers,
+ selected from the data center nodes, to be assigned to each host, from the end
+ user nodes, so that the overall profit of the hosts is maximum while balancing
+ the load of data centers.
+ The problem can be formulated in a formal manner once we define the
+ notions of "profit" and "load balancing". Based on the prior works in the literature
+ (see Sec. 3), we consider the net entanglement rate in Eq. (1) as the profit, i.e.,
+ between two possible data centers d1 and d2 considered as candidate workers
+ for end user u, we prefer the one that potentially achieves the highest net
+ entanglement rate. On the other hand, we introduce load balancing as follows.
+ First, we define the system parameter W as the target number of workers per
+ end user. Then, we force each data center to be assigned as worker to at most
+ B end users, where B is determined as the minimum value that allows this
+ constraint to be provided with given W, Nu end users, and Nd data centers. This
+ way, we force an even load across data centers. Note that this formulation can
+ be trivially extended to the case where data centers have different capabilities
+ by introducing appropriate weights, which we do not consider in this work to
+ keep the notation succinct.
+ Assuming without loss of generality, again to simplify notation, that all end
+ users have one and only one request to run distributed quantum computing, the
+ QWAP can be expressed as an optimization problem with objective function:
+ max Σ_{u=1}^{Nu} Σ_{d=1}^{Nd} πud · xud    (4)
+ such that:
+ Σ_{u=1}^{Nu} xud ≤ B,  d = 1, . . . , Nd    (5)
+ Σ_{d=1}^{Nd} xud ≤ W,  u = 1, . . . , Nu    (6)
+ xud ∈ {0, 1}    (7)
+ B = ⌈Nu · W / Nd⌉    (8)
+ where the profit πud between end user u and data center d is defined by:
+ πud = 0 if ∀p ∈ Pu,d : F(p) ≤ Fu^min, and otherwise
+ πud = rud,p̄ where p̄ = arg max_p { rudp | F(p) ≥ Fu^min },    (9)
+ where F(p) is the fidelity of the end-to-end entanglement along the path p,
+ according to Eq. (2), Fu^min is the minimum fidelity requested by end user u,
+ Pu,d is the set of all paths between u and d in G(V, E), and the net rate rudp
+ between end user u and data center d along path p is as follows:
+ rudp = min_{(i,j)∈p} {Cij} · q^{|p|−1}.    (10)
+ The output assignment variables, binary as per Eq. (7), are the xud, with
+ the constraints as follows: Eq. (5) ensures that no data center is assigned more
+ than its fair share of B users, where B is computed via Eq. (8); Eq. (6) ensures
+ that no end user is assigned more than W workers. The profit πud, defined
+ through Eqs. (9–10), corresponds to the maximum net rate of EPR-pairs/s that
+ can be assigned to end user u along any path towards data center d that fulfills
+ its minimum fidelity requirement. Before proceeding, we state two key
+ observations about the QWAP.
+ Observation#1. In a general graph G(V, E), the number of paths between
+ two nodes can be exponential in the size of the graph. For instance, in a
+ complete graph this number is ⌊(V − 2)!e⌋. Therefore, the preparation of the
+ problem input in Eq. (9) might be very computation-intensive in practice. In
+ the evaluation below, we adopt a reasonable approximation: rather than finding
+ all the paths Pud between nodes u and d, we use a reduced set P^k̄_ud that
+ consists of the k̄ shortest paths (k̄ = 10 in the simulations in Sec. 5.2). The
+ rationale is that long paths have a small chance of being selected as p̄ in the
+ second branch of Eq. (9), because the net rate decreases exponentially with the
+ path length as per Eq. (10).
+ Observation#2. When W = 1, then Eqs. (4–10) above can be trivially
+ transformed into an assignment problem, whose optimum solution can be found
+ efficiently. With W > 1, however, the QWAP becomes a "multiple knapsack
+ problem", which is NP-hard, but for which several efficient heuristics are well
+ known in the operations research literature (e.g., [31]).
+ Based on the two observations above, we propose our load balancing
+ algorithm to solve the QWAP, which we implemented in our simulator and
+ evaluated in the next section. The idea of the load balancing algorithm is to find
+ the optimal allocation for the first worker of each end user using the Hungarian
+ algorithm [32], which finds the best (exact) solution in assignment problem
+ instances; then, it progresses by considering one more worker at a time, up to
+ W, each time invoking the Hungarian algorithm again on the data centers with
+ residual slots. The algorithm is greedy because it never backtracks on prior
+ decisions, and it always terminates after a fixed number of iterations. More
+ formally, the load balancing algorithm consists of the following three steps:
+ 1. Prepare the problem input, in particular the profits πud, by finding up to k̄
+ shortest paths between u and d in G(V, E) using Yen's algorithm [33].
+ 2. Determine B based on Nu, Nd, and W using Eq. (8).
+ 3. Arrange the output of Step 1 in a profit matrix where the rows are the end
+ users and there are B columns for each data center, i.e., the matrix size is
+ Nu × BNd. Then run W iterations of the Hungarian algorithm. After each
+ iteration, set πud ← 0 in all columns where a data center has been assigned
+ (so that the constraint in Eq. (5) is not violated) and in all cells that refer to
+ the same pair u, d that has been assigned (so that a user is not assigned the
+ same data center multiple times).
+ 5.2. Evaluation
+ Figure 10: Worst case time complexity of a) the load balancing algorithm to solve the QWAP
+ in Sec. 5.1 vs. the comparison algorithms b) random and c) shortest path.
+ In this section we evaluate the load balancing algorithm defined in Sec. 5.1
898
+ using the simulator described in Sec. 2.3. End users and data centers are selected
899
+ randomly from the set of nodes V , with the following composition of the nodes:
900
+ 10% end users, 10% data centers, 80% intermediate nodes.
901
+ As comparison
902
+ algorithms, we define:
903
+ – Random, which assigns each end user W data centers at random, thus it
904
+ maximizes fairness.
905
+ – Shortest path, which assigns each end user the W data centers that are closest
906
+ in G(E, V ), in number of hops, thus it maximizes the net rate.
907
+ After the assignment, the resources are allocated using QDRR.
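The two baselines can be sketched in a few lines (a hypothetical helper using NetworkX; the function names and data layout are ours, not the simulator's):

```python
import random
import networkx as nx

def random_assignment(users, dcs, W, seed=None):
    # Each end user gets W distinct data centers, uniformly at random.
    rng = random.Random(seed)
    return {u: rng.sample(dcs, W) for u in users}

def shortest_path_assignment(G, users, dcs, W):
    # Each end user gets the W data centers closest in number of hops.
    out = {}
    for u in users:
        dist = nx.single_source_shortest_path_length(G, u)
        out[u] = sorted(dcs, key=lambda d: dist.get(d, float("inf")))[:W]
    return out
```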
908
+ We report in Fig. 10 the worst case time complexity of the three algorithms,
909
+ where we assume that the shortest path is computed using Dijkstra’s algorithm
910
+ with the help of a Fibonacci heap to keep edges sorted [34]. As can be seen, the
911
913
+ Figure 11: Net rate/app (top) and max-min number of users per data center (bottom) with
+ random, shortest path, and load balancing, in a dense topology, with 10 apps, when increasing
+ W from 1 to 5.
956
+ load balancing algorithm is far more complex than random and shortest-path,
957
+ in both the preparation and the execution phases; random, in particular, does
958
+ not even depend on the graph size. However, in the following we will see that
959
+ the added complexity brings benefits that can be of potential interest to the
960
+ future quantum network operators.
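The shortest-path computations behind these bounds can be sketched with Dijkstra's algorithm; Python's standard library offers only a binary heap, giving O((V + E) log V) rather than the O(E + V log V) achievable with a Fibonacci heap [34]. The function name and adjacency-list format below are our own:

```python
import heapq

def dijkstra_hops(adj, src):
    """Binary-heap Dijkstra over adj = {node: [(neighbor, cost), ...]}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue  # stale queue entry
        for w, cost in adj.get(v, []):
            nd = d + cost
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(pq, (nd, w))
    return dist
```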
961
+ In Fig. 11 (top) we show the net rate/app with Nu = 10 and W increasing
962
+ from 1 to 5, in a dense topology. As can be seen, the random algorithm per-
963
+ forms poorly, because it does not consider at all the network topology. On the
964
+ other hand, shortest path and load balancing perform similarly, with the latter
965
+ exhibiting a higher net rate with small values of W. In Fig. 11 (bottom) we
966
+ plot a measure of the spread of resources, as the max-min number of users per
967
+ data center. Load balancing performs consistently and significantly better than
968
+ both random and shortest path, with the latter exhibiting the highest unfair-
969
+ ness. We note that our conclusions are limited to the settings of the scenarios
970
972
+ simulated; in particular, different topologies or link capacity distributions can
973
+ lead to situations where the gap between load balancing and either random or
974
+ shortest path is reduced significantly. However, a crucial advantage of our pro-
975
+ posed solution, compared to its alternatives under test, is that it can adapt to
976
+ different settings, thus it can perform well even when the scenario is not known
977
+ or changes dynamically.
978
+ Figure 12: Fidelity with random, shortest path, and load balancing, in dense vs. sparse
+ topologies, with 10 vs. 20 apps and W = 1.
1000
+ The fidelity is shown in Fig. 12, with Nu = {10, 20} and in dense vs. sparse
1001
+ topologies. The number of end users/apps, i.e., Nu, does not affect the perfor-
1002
+ mance in a noticeable manner. On the other hand, the fidelity is generally lower
1003
+ in sparse topologies, as expected. Random performs worse, while load balanc-
1004
+ ing and shortest path give similar results, with the former performing slightly
1005
+ worse only in the sparse case. This can be explained by following the same line
1006
+ of reasoning for the net rate/app above.
1007
+ To conclude the analysis, we modify the mix of nodes.
1008
+ In one batch of
1009
+ simulations we increase the ratio of end users from 10% (like in the results so
1010
+ far) to 50%; in another one we do the same for data centers. Results are shown
1011
+ only for load balancing in Fig. 13, in terms of the net rate, with Nu = 15 and
1012
+ W = 3. As the ratio of data centers increases (green curves), the net rate/app
1013
+ increases almost linearly, as well. On the other hand, increasing the number
1014
+ of users (blue curves), only provides a sub-linear performance improvement.
1015
+ This is because the former case corresponds to increasing the physical resources
1016
1018
+ Figure 13: Net rate/app with load balancing, in dense vs. sparse topologies, with 15 apps and
+ W = 3, when increasing the fraction of nodes as either end users or data centers.
1043
+ provided to the users, while the latter only to a more uniform distribution of the
1044
+ same resources. The conclusions are the same for dense and sparse topologies,
1045
+ though the net rate for the latter is significantly lower.
1046
+ In conclusion, assigning data centers to end users through the load balancing
1047
+ algorithm, which provides an approximate solution of the QWAP, achieves an
1048
+ even distribution of resources, better than both random and shortest path as-
1049
+ signment, without compromising on the net rate and fidelity of the end-to-end
1050
+ entanglement paths.
1051
+ 6. Conclusions
1052
+ In this paper we have studied two open issues in quantum networking for
1053
+ distributed quantum computing. First, we have assessed through simulation
1054
+ the effectiveness of the QDRR resource allocation algorithm [6] in handling sce-
1055
+ narios where applications have different priority weights or minimum fidelity
1056
+ requirements. Second, we have defined a novel problem involving the selection
1057
+ of workers for a set of nodes hosting computation, called the Quantum Work-
1058
+ ers’ Assignment Problem (QWAP), which we have modeled as an optimization
1059
+ problem and for which we have proposed a heuristic called “load balancing”.
1060
+ The latter has been evaluated in comparison to alternatives seeking to maxi-
1061
+ mize only fairness and the net rate of end-to-end entanglement, respectively, and
1062
+ the results have shown that load balancing achieves an excellent compromise in
1063
1065
+ terms of the two metrics. The source code and simulation scripts are publicly
1066
+ available to the community.
1067
+ Further open research areas are: the use of purification to increase fidelity
1068
+ at the expense of capacity; modeling distributed QC applications to understand
1069
+ their characteristic time scales and requirements; integration with link layer pro-
1070
+ tocols; incorporation in the simulation of more realistic models for the quantum
1071
+ channel and repeaters.
1072
+ Acknowledgment
1073
+ Work co-funded by EU, PON Ricerca e Innovazione 2014–2020 FESR/FSC
1074
+ Project ARS01_00734 QUANCOM, and European High-Performance Comput-
1075
+ ing Joint Undertaking (JU) under grant agreement No 101018180 HPCQS. The
1076
+ paper reflects only the authors’ view and the funding agencies are not respon-
1077
+ sible for any use that may be made of its content.
1078
+ References
1079
+ [1] J. Preskill, Quantum computing 40 years later, arXiv:2106.10522
+ [quant-ph].
1081
+ [2] J. Sevilla, C. J. Riedel, Forecasting timelines of quantum computing,
1082
+ arXiv:2009.05045 [quant-ph].
1083
+ [3] Quantum Technology and Application Consortium – QUTAC, A. Bayer-
1084
+ stadler, et al. Industry quantum computing applications, EPJ Quantum
1085
+ Technology 8 (1) (2021) 25. doi:10.1140/epjqt/s40507-021-00114-x.
1086
+ [4] L. Gyongyosi, S. Imre, Advances in the quantum internet, Communications
1087
+ of the ACM 65 (8) (2022) 52–63. doi:10.1145/3524455.
1088
+ [5] R. Parekh, A. Ricciardi, A. Darwish, S. DiAdamo, Quantum Algo-
1089
+ rithms and Simulation for Parallel and Distributed Quantum Computing,
1090
+ arXiv:2106.06841 [quant-ph].
1091
1093
+ [6] C. Cicconetti, M. Conti, A. Passarella, Resource Allocation in Quan-
1094
+ tum Networks for Distributed Quantum Computing, Proc. IEEE SMART-
1095
+ COMP 2022.
1096
+ [7] M. Shreedhar, G. Varghese, Efficient fair queueing using Deficit Round
1097
+ Robin, ACM SIGCOMM Computer Comm. Review 25 (4) (1995) 231–242.
1098
+ [8] S. Muralidharan, L. Li, J. Kim, N. Lütkenhaus, M. D. Lukin, L. Jiang,
1099
+ Optimal architectures for long distance quantum communication, Scientific
1100
+ Reports 6 (1) (2016) 20463. doi:10.1038/srep20463.
1101
+ [9] Y. Wang, A. N. Craddock, R. Sekelsky, M. Flament, M. Namazi, Field-
1102
+ deployable Quantum Memory for Quantum Networking, Phys. Rev. Ap-
1103
+ plied 18, 044058, 2022.
1104
+ [10] K. Chakraborty, D. Elkouss, B. Rijsman, S. Wehner, Entanglement Distri-
1105
+ bution in a Quantum Network: A Multicommodity Flow-Based Approach,
1106
+ IEEE Transactions on Quantum Engineering 1 (2020) 1–21.
1107
+ [11] N. Sangouard, C. Simon, H. de Riedmatten, N. Gisin, Quantum repeaters
1108
+ based on atomic ensembles and linear optics, Reviews of Modern Physics
1109
+ 83 (1) (2011) 33–80. doi:10.1103/RevModPhys.83.33.
1110
+ [12] H.-J. Briegel, W. Dür, J. I. Cirac, P. Zoller, Quantum repeaters for com-
1111
+ munication, arXiv:quant-ph/9803056, 1998.
1112
+ [13] W. Dai, T. Peng, M. Z. Win, Optimal Remote Entanglement Distribution,
1113
+ IEEE Journal on Selected Areas in Communications 38 (3) (2020) 540–556.
1114
+ doi:10.1109/JSAC.2020.2969005.
1115
+ [14] D. Bouwmeester, J.-W. Pan, K. Mattle, M. Eibl, H. Weinfurter,
+ A. Zeilinger, Experimental quantum teleportation, Nature 390 (6660)
+ (1997) 575–579. doi:10.1038/37539.
1122
+ [15] R. Van Meter, T. Satoh, T. D. Ladd, W. J. Munro, K. Nemoto, Path Selec-
1123
+ tion for Quantum Repeater Networks, Networking Science 3 (1-4) (2013)
1124
+ 82–95, arXiv: 1206.5655.
1125
1127
+ [16] M. Caleffi, Optimal Routing for Quantum Networks, IEEE Access 5 (2017)
1128
+ 22299–22312. doi:10.1109/ACCESS.2017.2763325.
1129
+ [17] K. Chakraborty, F. Rozpedek, A. Dahlberg, S. Wehner, Distributed Rout-
1130
+ ing in a Quantum Internet, arXiv:1907.11630 [quant-ph].
1131
+ [18] M. Pant, H. Krovi, D. Towsley, L. Tassiulas, L. Jiang, P. Basu, D. Englund,
1132
+ S. Guha, Routing entanglement in the quantum internet, npj Quantum
1133
+ Information 5 (1) (2019) 25. doi:10.1038/s41534-019-0139-x.
1134
+ [19] C. Cicconetti, M. Conti, A. Passarella, Request Scheduling in Quantum
1135
+ Networks, IEEE Transactions on Quantum Engineering 2 (2021) 2–17.
1136
+ [20] C. Li, T. Li, Y.-X. Liu, P. Cappellaro, Effective routing design for remote
1137
+ entanglement generation on quantum networks, npj Quantum Information
1138
+ 7 (1) (2021) 10. doi:10.1038/s41534-020-00344-4.
1139
+ [21] R. Van Meter, R. Satoh, N. Benchasattabuse, T. Matsuo, M. Hajdušek,
1140
+ T. Satoh, S. Nagayama, S. Suzuki, A Quantum Internet Architecture, Proc.
1141
+ IEEE QCE 2022, pp. 341–352.
1142
+ [22] Y. Zhao, C. Qiao, Quantum Transport Protocols for Distributed Quantum
1143
+ Computing, arXiv:2105.08109, 2021.
1144
+ [23] A. Dahlberg, M. Skrzypczyk, T. Coopmans, L. Wubben, F. Rozpędek,
1145
+ M. Pompili, A. Stolk, P. Pawełczak, R. Knegjens, J. de Oliveira Filho,
1146
+ R. Hanson, S. Wehner, A link layer protocol for quantum networks, Proc.
1147
+ ACM SIGCOMM 2019, pp. 159–173.
1148
+ [24] D. Cuomo, M. Caleffi, K. Krsulich, F. Tramonto, G. Agliardi, E. Prati,
1149
+ A. S. Cacciapuoti, Optimized compiler for Distributed Quantum Comput-
1150
+ ing, ACM Trans. on Quantum Computing, 2023 (to appear).
1151
+ [25] A. Dahlberg, B. v. d. Vecht, C. D. Donne, M. Skrzypczyk, I. t. Raa, W. Ko-
1152
+ zlowski, S. Wehner, NetQASM—a low-level instruction set architecture for
1153
1155
+ hybrid quantum–classical programs in a quantum internet, Quantum Sci-
1156
+ ence and Technology 7 (3) (2022).
1157
+ [26] R. Van Meter, T. Ladd, W. Munro, K. Nemoto, System Design for a Long-
1158
+ Line Quantum Repeater, IEEE/ACM Trans. on Networking 17 (3) (2009).
1159
+ [27] Y. Zhao, G. Zhao, C. Qiao, E2E Fidelity Aware Routing and Purification for
1160
+ Throughput Maximization in Quantum Networks, Proc. IEEE INFOCOM
1161
+ 2022, pp. 480–489.
1162
+ [28] M. Pompili, S. L. N. Hermans, S. Baier, H. K. C. Beukers, P. C. Humphreys,
1163
+ R. N. Schouten, R. F. L. Vermeulen, M. J. Tiggelman, L. d. S. Martins,
1164
+ B. Dirkse, S. Wehner, R. Hanson, Realization of a multi-node quantum
1165
+ network of remote solid-state qubits, Science 372 (6539) (2021) 259–264.
1166
+ [29] A. Patil, M. Pant, D. Englund, D. Towsley, S. Guha, Entanglement gen-
1167
+ eration in a quantum network at distance-independent rate, npj Quantum
1168
+ Information 8 (1) (2022).
1169
+ [30] A. Jalali, R. Padovani, R. Pankaj, Data throughput of CDMA-HDR a high
1170
+ efficiency-high data rate personal communication wireless system, Proc.
1171
+ IEEE VTC2000-Spring 2000, pp. 1854–1858 vol.3.
1172
+ [31] S. Martello, P. Toth, A Bound and Bound algorithm for the zero-one multi-
1173
+ ple knapsack problem, Discrete Applied Mathematics 3 (4) (1981) 275–288.
1174
+ [32] H. W. Kuhn, The Hungarian method for the assignment problem, Naval
1175
+ Research Logistics Quarterly 2 (1-2) (1955) 83–97.
1176
+ [33] J. Y. Yen, Finding the K Shortest Loopless Paths in a Network, Manage-
1177
+ ment Science 17 (11) (1971) 712–716.
1178
+ [34] M. Fredman, R. Tarjan, Fibonacci heaps and their uses in improved
+ network optimization algorithms, J. ACM 34 (3) (1987) 596–615.
1181
5tE2T4oBgHgl3EQfkQe0/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
6dAyT4oBgHgl3EQfpfjS/content/2301.00528v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:80d06829a9ca0f2ab2d1bde005b620de802b89358f544d7d782412be8a0e7bf8
3
+ size 458264
A9E1T4oBgHgl3EQf9QZb/content/tmp_files/2301.03554v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
A9E1T4oBgHgl3EQf9QZb/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
B9E1T4oBgHgl3EQfpgVg/content/tmp_files/2301.03332v1.pdf.txt ADDED
@@ -0,0 +1,2144 @@
1
+ arXiv:2301.03332v1 [math.AP] 9 Jan 2023
2
+ THE OPTIMAL CONSTANT IN THE L2 FOLLAND-STEIN
3
+ INEQUALITY ON THE H-TYPE GROUP
4
+ QIAOHUA YANG
5
+ Abstract. We determine the optimal constant in the L2 Folland-Stein in-
6
+ equality on the H-type group, which partially confirms the conjecture given
7
+ by Garofalo and Vassilev (Duke Math. J., 2001). The proof is inspired by the
8
+ work of Frank and Lieb (Ann. of Math., 2012) and Hang and Wang.
9
+ 1. Introduction
10
+ Let G be a stratified, simply connected nilpotent Lie group (in short a Carnot
11
+ group) of step r. Denote by g the Lie algebra of G. It is known that
+ g = ⊕_{i=1}^{r} V_i, satisfying (see e.g. [10])
+ [V_1, V_j] = V_{j+1}, 1 ≤ j ≤ r − 1; [V_1, V_r] = {0}.
15
+ As a simply connected nilpotent group, G is diffeomorphic to R^N, N = Σ_{i=1}^{r} dim V_i,
17
+ via the exponential map exp : g → G. There is a natural family of nonisotropic
18
+ dilations δλ : g → g for λ > 0 and we define it as follows:
19
+ δ_λ(X_1 + · · · + X_r) = λX_1 + · · · + λ^r X_r, X_j ∈ V_j, 1 ≤ j ≤ r.
20
+ The homogeneous dimension of G, associated with δ_λ, is Q = Σ_{j=1}^{r} j dim V_j. Via
22
+ the exponential map exp : g → G, we define the group of dilations on G as follows:
23
+ δλ(g) = exp ◦δλ ◦ exp−1(g), g ∈ G.
24
+ Set nj = dim Vj, 1 ≤ j ≤ r. Let {X1, · · · , Xn1} be a basis of V1 and denote
25
+ by ∇G = (X1, · · · , Xn1) the horizontal gradient of G. The sub-Laplacian on G is
26
+ ∆_G = Σ_{i=1}^{n_1} X_i². The Sobolev space W^{1,p}_0(G) is the closure of C^∞_0(G)
+ with respect to the norm
+ ∥u∥_{W^{1,p}_0(G)} = ( ∫_G |∇_G u|^p dg )^{1/p},
42
+ where dg is the Haar measure on G. We remark that the Haar measure on G,
43
+ induced by the exponential mapping from the Lebesgue measure on g = R^N,
+ coincides with the Lebesgue measure on R^N. The Folland-Stein inequality on G
+ reads that there exists some constant C > 0 such that for each u ∈ W^{1,p}_0(G)
+ (see [8, 9]),
+ ( ∫_G |u|^{pQ/(Q−p)} dg )^{(Q−p)/(pQ)} ≤ C ( ∫_G |∇_G u|^p dg )^{1/p}, 1 < p < Q.   (1.1)
63
+ 2000 Mathematics Subject Classification. Primary: 43A80; 46E35; 22E25.
64
+ Key words and phrases. Folland-Stein inequality; Heisenberg group; H-type group; best
65
+ constant.
66
+ The work was partially supported by the National Natural Science Foundation of
+ China (No. 11201346).
79
83
+ For the existence and regularity of minimizers of the Folland-Stein inequality (1.1),
84
+ we refer to [24].
85
+ The Heisenberg group is the simplest example of Carnot group of step 2. We
86
+ denote it by Hn = (Cn × R, ◦). The group law on Hn is given by
87
+ (z, t) ◦ (z′, t′) = (z + z′, t + t′ + 2Imz · z′),
88
+ where z · z′ = Σ_{j=1}^{n} z_j z̄′_j. The homogeneous norm on H^n is given by
+ |(z, t)| = ( |z|⁴ + t² )^{1/4}.
94
+ In a series of papers [18, 19, 20], Jerison and Lee, among other results, determined
95
+ the explicit computation of the extremal functions in (1.1) in the case p = 2 and
96
+ G = Hn. In fact, the extremal functions are, up to group translations and dilations,
97
+ c((1 + |z|²)² + t²)^{−(Q−2)/4}, c ∈ R.
99
+ Such inequalities play an important role in the study of CR Yamabe problems.
100
+ Later, in a celebrated paper [11], Frank and Lieb established sharp Hardy-Littlewood-
101
+ Sobolev inequalities on Hn. We state the result as follows:
102
+ Theorem 1.1 (Frank-Lieb). Let 0 < λ < Q and p = 2Q/(2Q − λ). Then for any
+ f, g ∈ L^p(H^n),
+ | ∫∫_{H^n×H^n} f(z, t) g(z′, t′) |(z, t)^{−1} ◦ (z′, t′)|^{−λ} dz dt dz′ dt′ |
+ ≤ ( π^{n+1}/(2^{n−1} n!) )^{λ/Q} n! Γ((Q − λ)/2) / Γ²((2Q − λ)/4) ∥f∥_p ∥g∥_p,
+ with equality if and only if, up to group translations and dilations,
+ f = c((1 + |z|²)² + t²)^{−(2Q−λ)/4}, g = c′((1 + |z|²)² + t²)^{−(2Q−λ)/4}
+ for some c, c′ ∈ C.
126
+ In particular, choosing λ = Q − 2 in Theorem 1.1 yields Jerison and Lee’s
+ inequality. Using the method in [11], Frank and Lieb [12] also gave a new,
+ rearrangement-free proof of sharp Hardy-Littlewood-Sobolev inequalities on R^n.
+ Recently, Hang and Wang [15] presented a shorter proof of the Frank-Lieb
+ inequality, in which they bypass the subtle existence proof and the Hersch-type
+ argument via subcritical approximation.
132
+ Some of the results of Theorem 1.1 have been generalized to the cases of quater-
133
+ nionic Heisenberg group and octonionic Heisenberg group (see [4, 5, 16, 17]). We
134
+ note that Heisenberg group, quaternionic Heisenberg group and octonionic Heisen-
135
+ berg group are known as the groups of Iwasawa type, i.e., the nilpotent component
136
+ in the Iwasawa decomposition of simple groups of rank one (see e.g. [6]).
137
+ The aim of this paper is to look for the optimal constant of (1.1) when p = 2 and
138
+ G is a group of Heisenberg type (in short a H-type group). Recall that a H-type
139
+ group G is a Carnot group of step two with the following properties (see Kaplan
140
+ [21]): the Lie algebra g of G is endowed with an inner product ⟨, ⟩ such that, if z
141
+ is the center of g, then [z⊥, z⊥] = z and moreover, for every fixed z ∈ z, the map
142
+ Jz : z⊥ → z⊥ defined by
143
+ ⟨J_z(v), ω⟩ = ⟨z, [v, ω]⟩, ∀ω ∈ z^⊥   (1.2)
145
+ is an orthogonal map whenever ⟨z, z⟩ = 1. It is known (see [6]) that a H-type group
146
+ G is the group of Iwasawa type if and only if its Lie algebra satisfies the following
147
+
148
150
+ J2-condition: for any v ∈ z⊥ and z, z′ ∈ z such that ⟨z, z′⟩ = 0, there exists z′′ ∈ z
151
+ such that
152
+ JzJz′v = Jz′′v.
153
+ Therefore, most of H-type groups are not groups of Iwasawa type.
154
+ Set m = dim z⊥ and n = dim z. Since G has step two, we can fix on G a system
155
+ of coordinates (x, t) such that the group law on G has the form (see [2])
156
+ (x, t) ◦ (x′, t′) = ( x_i + x′_i, i = 1, 2, · · · , m ;
+ t_j + t′_j + (1/2)⟨x, U^{(j)} x′⟩, j = 1, 2, · · · , n )   (1.3)
165
+ for suitable skew-symmetric matrices U^{(j)}’s. Next, we set
+ U(ξ) = ( (1 + |x|²/4)² + |t|² )^{−(Q−2)/4}, ξ = (x, t) ∈ G;   (1.4)
+ U_{λ,η}(ξ) = λ^{(Q−2)/2} U(δ_λ(η^{−1} ◦ ξ)), η ∈ G.   (1.5)
181
+ It has been shown that [m(Q − 2)]^{(Q−2)/4} U_{λ,η}(ξ) satisfies the Yamabe-type equation
+ (see [13, 14])
+ ∆_G [m(Q − 2)]^{(Q−2)/4} U_{λ,η} + { [m(Q − 2)]^{(Q−2)/4} U_{λ,η} }^{(Q+2)/(Q−2)} = 0,
+ or equivalently,
+ ∆_G U_{λ,η} + m(Q − 2) U_{λ,η}^{(Q+2)/(Q−2)} = 0.   (1.6)
199
+ In the paper [13], Garofalo and Vassilev gave the following conjecture:
200
+ Conjecture (Garofalo-Vassilev). In a H-type group G, the functions
+ [m(Q − 2)]^{(Q−2)/4} U_{λ,η}(ξ) are the only nontrivial entire solutions to
+ ∆_G u + u^{(Q+2)/(Q−2)} = 0, u ∈ W^{1,2}_0(G), u ≥ 0.
211
+ If the conjecture is true, then one can obtain the optimal constant of L2 Folland-
212
+ Stein inequality on H-type groups. In this paper we shall use the method given by
213
+ Frank and Lieb [11, 12] and Hang and Wang [15] to determine the optimal constant,
214
+ instead of proving the conjecture directly. To this end, we have
215
+ Theorem 1.2. It holds that
+ ∫_G |∇_G u|² dxdt ≥ S_{m,n} ( ∫_G |u|^{2Q/(Q−2)} dxdt )^{(Q−2)/Q}, u ∈ W^{1,2}_0(G),   (1.7)
+ where
+ S_{m,n} = 4^{−2n/Q} m(Q − 2) π^{(m+n)/Q} ( Γ((m + n)/2) / Γ(m + n) )^{1/Q}.
+ The inequality is sharp and an extremal function is
+ U(x, t) = ( (1 + |x|²/4)² + |t|² )^{−(Q−2)/4}.
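As a numerical illustration (our sketch, assuming Q = m + 2n for an H-type group with dim z⊥ = m and dim z = n), the constant S_{m,n} can be evaluated directly:

```python
import math

def sharp_constant(m, n):
    # S_{m,n} of Theorem 1.2; Q = m + 2n is the homogeneous dimension.
    Q = m + 2 * n
    return (4 ** (-2 * n / Q) * m * (Q - 2)
            * math.pi ** ((m + n) / Q)
            * (math.gamma((m + n) / 2) / math.gamma(m + n)) ** (1 / Q))

# First Heisenberg group H^1 (m = 2, n = 1, Q = 4):
s = sharp_constant(2, 1)
```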
251
+
252
254
+ By Theorem 1.2, it is easy to see that the functions cUλ,η(ξ)(c ∈ R) are also
255
+ extremal functions of inequality (1.7).
256
+ As an application of Theorem 1.2, we study the eigenvalues of
257
+ −∆_G v = µ U_{λ,η}^{4/(Q−2)} v, v ∈ W^{1,2}_0(G).   (1.8)
264
+ We note that the eigenvalues of (1.8) play an important role in the study of stability
265
+ for the Folland-Stein inequality (see [1, 3, 7] for the case of Euclidean space). In
266
+ Lemma 3.2 we show that the embedding map W^{1,2}_0(G) ֒→ L²(G, U(x, t)^{4/(Q−2)} dxdt) is
+ compact. So the spectrum of (1.8) is discrete. Furthermore, we have the following
272
+ theorem:
273
+ Theorem 1.3. Let µi, i = 1, 2, · · · be the eigenvalues of (1.8) given in increasing
274
+ order. Then
275
+ (1) µ1 = m(Q − 2) is simple with eigenfunction Uλ,η.
276
+ (2) µ2 = m(Q + 2) and {∂λUλ,η, ∇ηUλ,η} are eigenfunctions.
277
+ Furthermore, the eigenvalues do not depend on λ and η.
278
+ Remark 1.4. It seems that µ2 has multiplicity m + n + 1 with corresponding
279
+ eigenspace spanned by {∂_λ U_{λ,η}, ∇_η U_{λ,η}}. However, we are unable to prove it. Once
+ it has been proven, it would provide a generalization of the results of Bianchi and
283
+ Egnell ([1], Lemma A1) to the setting of H-type groups.
284
+ 2. preliminaries on H-type groups
285
+ In the rest of the paper, we let G be a H-type group with group law given by (1.3).
286
+ The nonisotropic dilations δ_λ on G are
+ δ_λ(x, t) = (λx, λ²t).
288
+ For (x, t) ∈ G, the homogeneous norm of (x, t) is
289
+ ρ(x, t) = ( |x|⁴/16 + |t|² )^{1/4}.
295
+ With this norm ρ, we can define the ball centered at origin with radius R
296
+ BR(0) = {(x, t) ∈ G : ρ(x, t) < R}
297
+ and the unit sphere Σ = ∂B1(0) = {(x, t) ∈ G : ρ(x, t) = 1}.
298
+ Given any (x, t) ∈ G with ρ(x, t) ̸= 0, we set x∗ = x/ρ(x, t) and t∗ = t/ρ(x, t)². The
+ polar coordinates on G associated with ρ are the following (see [10]):
+ ∫_G f(x, t) dxdt = ∫_0^∞ ∫_Σ f(ρx∗, ρ²t∗) ρ^{Q−1} dσ dρ, f ∈ L¹(G).
312
+ The following theorem was proved in [2], Theorem A.2.:
313
+ Theorem 2.1. G is a H-type group if and only if G is (isomorphic to) Rm+n
314
+ with the group law in (1.3) and the matrices U (1), U (2), · · · , U (n) have the following
315
+ properties:
316
+ (1) U^{(j)} is an m × m skew-symmetric and orthogonal matrix, for every j =
317
+ 1, 2, · · · , n;
318
+ (2) U (i)U (j) + U (j)U (i) = 0 for every i, j ∈ {1, 2, · · · , n} with i ̸= j.
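A quick computational sanity check (our sketch, for the simplest case m = 2, n = 1, with U^(1) the standard symplectic matrix, which satisfies property (1)): a group law of the form (1.3) built from such matrices is associative, with identity (0, 0) and inverse (−x, −t).

```python
import numpy as np

U1 = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric and orthogonal

def law(p, q, Us=(U1,)):
    # Group law (1.3): (x, t) o (x', t') has t_j + t'_j + <x, U^(j) x'> / 2.
    x, t = p
    xp, tp = q
    t_new = np.array([tj + tpj + 0.5 * x @ U @ xp
                      for tj, tpj, U in zip(t, tp, Us)])
    return (x + xp, t_new)
```

Skew-symmetry of U^(1) is exactly what makes (−x, −t) the inverse, since ⟨x, U^(1) x⟩ = 0.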
319
+
320
322
+ The vector field in the Lie algebra g that agrees at the origin with ∂/∂x_j
+ (j = 1, · · · , m) is given by
+ X_j = ∂/∂x_j + (1/2) Σ_{k=1}^{n} ( Σ_{i=1}^{m} U^{(k)}_{i,j} x_i ) ∂/∂t_k
+ and g is spanned by the left-invariant vector fields X_1, · · · , X_m, T_1 = ∂/∂t_1, · · · ,
+ T_n = ∂/∂t_n. Furthermore (see [2], Page 200, (A.4)),
+ [X_i, X_j] = Σ_{r=1}^{n} U^{(r)}_{i,j} T_r, i, j ∈ {1, 2, · · · , m}.   (2.1)
354
+ The exponential map exp : g → G is
355
+ exp : g → R^{m+n}, Σ_{i=1}^{m} x_i X_i + Σ_{j=1}^{n} t_j T_j ↦ (x, t).
364
+ We note that by exponential mapping, the group law (1.3) is nothing but the
365
+ Baker-Campbell-Hausdorff formula (see [2], the proof of Theorem A.2)
366
+ exp X ◦ exp Y = exp( X + Y + (1/2)[X, Y] ), X, Y ∈ g.
368
+ Using (2.1), we have that for t = (t1, · · · , tn) = t1T1 + · · · + tnTn and x =
369
+ (x1, · · · , xm) = x1X1 + · · · + xmXm, the map Jt, defined by (1.2), is (see also
370
+ [2], Page 201)
371
+ J_t x = Σ_{r=1}^{n} Σ_{i=1}^{m} t_r x_i J_{T_r}(X_i) = Σ_{r=1}^{n} Σ_{i=1}^{m} t_r x_i ( Σ_{j=1}^{m} U^{(r)}_{i,j} X_j )
+ = Σ_{j=1}^{m} ( Σ_{r=1}^{n} Σ_{i=1}^{m} t_r x_i U^{(r)}_{i,j} ) X_j.
409
+ Since Jt is an orthogonal map whenever |t| = 1, we obtain
410
+ |Jtx|2 = |t|2|x|2 =
411
+ m
412
+
413
+ j=1
414
+ � n
415
+
416
+ r=1
417
+ m
418
+
419
+ i=1
420
+ trxiU (r)
421
+ i,j
422
+ �2
423
+ .
424
+ (2.2)
+ The horizontal gradient on G is ∇_G = (X_1, …, X_m). The sub-Laplacian on G is given by (see [2], Remark A.6)
+ ∆_G = Σ_{j=1}^m X_j² = Σ_{j=1}^m ( ∂/∂x_j + (1/2) Σ_{k=1}^n ( Σ_{i=1}^m U^(k)_{i,j} x_i ) ∂/∂t_k )² = ∆_x + (1/4)|x|² ∆_t + Σ_{k=1}^n ⟨x, U^(k)∇_x⟩ ∂/∂t_k,
+ where
+ ∆_x = Σ_{j=1}^m (∂/∂x_j)²,  ∆_t = Σ_{k=1}^n (∂/∂t_k)².
+ We remark that ∆_G is homogeneous of degree two with respect to δ_λ.
+ By using (1.6), we have the following Hardy inequality (see [22], Corollary 1.4, for a Hardy inequality for fractional powers of the sub-Laplacian on G).
480
+
481
+ 6
482
+ QIAOHUA YANG
+ Lemma 2.2. It holds that, for u ∈ W^{1,2}_0(G),
+ ∫_G |∇_G u|² dxdt ≥ m(Q−2) ∫_G u² / ( (1 + |x|²/4)² + |t|² ) dxdt,
+ with equality if and only if u = cU(x, t), where c ∈ R and U(x, t) is given by (1.4).
+ Proof. We have, through integration by parts,
+ 0 ≤ ∫_G U² |∇_G( U(x, t)^{−1} u )|² dxdt
+  = ∫_G | ∇_G u − (u/U) ∇_G U |² dxdt
+  = ∫_G |∇_G u|² dxdt + ∫_G (|∇_G U|² / U²) u² dxdt − ∫_G (1/U) ⟨ ∇_G u², ∇_G U ⟩ dxdt
+  = ∫_G |∇_G u|² dxdt + ∫_G u² (1/U) ∆_G U dxdt
+  = ∫_G |∇_G u|² dxdt − m(Q−2) ∫_G u² / ( (1 + |x|²/4)² + |t|² ) dxdt.  (2.3)
+ To get the last equality, we use (1.6). The desired result follows.
540
+
+ Set η = (y_1, …, y_m, w_1, …, w_n) ∈ G. By (1.6), we have
+ ∆_G ∂U_{λ,η}/∂y_j + m(Q+2) U_{λ,η}^{4/(Q−2)} ∂U_{λ,η}/∂y_j = 0,  j = 1, …, m,
+ ∆_G ∂U_{λ,η}/∂w_r + m(Q+2) U_{λ,η}^{4/(Q−2)} ∂U_{λ,η}/∂w_r = 0,  r = 1, …, n,
+ ∆_G ∂U_{λ,η}/∂λ + m(Q+2) U_{λ,η}^{4/(Q−2)} ∂U_{λ,η}/∂λ = 0.  (2.4)
+ Furthermore, we have the following lemma:
+ Lemma 2.3. It holds that
+ Σ_{j=1}^m | ∂U_{λ,η}/∂y_j |_{λ=1,η=0} |² + Σ_{r=1}^n | ∂U_{λ,η}/∂w_r |_{λ=1,η=0} |² + (1/4) | ∂U_{λ,η}/∂λ |_{λ=1,η=0} |² = ((Q−2)²/16) U(ξ)².
+ Proof. It is easy to see that η^{−1} = −η. Therefore, by (1.3) and (1.5), we have
+ U_{λ,η}(x, t) = λ^{(Q−2)/2} [ ( 1 + (λ²/4) Σ_{i=1}^m (x_i − y_i)² )² + λ⁴ Σ_{r=1}^n ( t_r − w_r − ⟨y, U^(r)x⟩/2 )² ]^{−(Q−2)/4}.
+ We compute
+ ∂U_{λ,η}/∂y_j |_{λ=1,η=0} = −((Q−2)/4) U(ξ)^{(Q+2)/(Q−2)} [ 2( 1 + |x|²/4 )·(−x_j/2) − Σ_{r=1}^n t_r ( Σ_{i=1}^m U^(r)_{j,i} x_i ) ]
+  = ((Q−2)/4) U(ξ)^{(Q+2)/(Q−2)} [ ( 1 + |x|²/4 ) x_j + Σ_{r=1}^n Σ_{i=1}^m t_r x_i U^(r)_{j,i} ],  j = 1, …, m;
+ ∂U_{λ,η}/∂w_r |_{λ=1,η=0} = −((Q−2)/4) U(ξ)^{(Q+2)/(Q−2)} (−2t_r) = ((Q−2)/2) U(ξ)^{(Q+2)/(Q−2)} t_r,  r = 1, …, n;
+ ∂U_{λ,η}/∂λ |_{λ=1,η=0} = ((Q−2)/2) U(ξ) − ((Q−2)/4) U(ξ)^{(Q+2)/(Q−2)} [ 2( 1 + |x|²/4 )·(|x|²/2) + 4|t|² ]
+  = −((Q−2)/2) U(ξ)^{(Q+2)/(Q−2)} ( −1 + |x|⁴/16 + |t|² ).
+ Since each U^(j) (1 ≤ j ≤ n) is an m × m skew-symmetric matrix, we have, by using (2.2),
+ Σ_{j=1}^m | ∂U_{λ,η}/∂y_j |_{λ=1,η=0} |² = ((Q−2)²/16) U(ξ)^{2(Q+2)/(Q−2)} Σ_{j=1}^m ( ( 1 + |x|²/4 ) x_j − Σ_{r=1}^n Σ_{i=1}^m t_r x_i U^(r)_{i,j} )²
+  = ((Q−2)²/16) U(ξ)^{2(Q+2)/(Q−2)} [ ( 1 + |x|²/4 )²|x|² + |t|²|x|² − 2( 1 + |x|²/4 ) Σ_{r=1}^n t_r ( Σ_{i=1}^m Σ_{j=1}^m U^(r)_{i,j} x_i x_j ) ]
+  = ((Q−2)²/16) U(ξ)^{2(Q+2)/(Q−2)} [ ( 1 + |x|²/4 )²|x|² + |t|²|x|² ].
+ To get the last equality, we use the fact
+ Σ_{i=1}^m Σ_{j=1}^m U^(r)_{i,j} x_i x_j = 0,
+ since U^(r) (1 ≤ r ≤ n) is an m × m skew-symmetric matrix. Therefore, we have
+ Σ_{j=1}^m | ∂U_{λ,η}/∂y_j |_{λ=1,η=0} |² + Σ_{r=1}^n | ∂U_{λ,η}/∂w_r |_{λ=1,η=0} |² + (1/4) | ∂U_{λ,η}/∂λ |_{λ=1,η=0} |²
+  = ((Q−2)²/16) U(ξ)^{2(Q+2)/(Q−2)} [ ( 1 + |x|²/4 )²|x|² + |t|²|x|² ] + ((Q−2)²/4) U(ξ)^{2(Q+2)/(Q−2)} |t|² + ((Q−2)²/16) U(ξ)^{2(Q+2)/(Q−2)} ( −1 + |x|⁴/16 + |t|² )²
+  = ((Q−2)²/16) U(ξ)^{2(Q+2)/(Q−2)} [ ( 1 + |x|²/4 )²|x|² + |t|²|x|² + 4|t|² + ( −1 + |x|⁴/16 + |t|² )² ]
+  = ((Q−2)²/16) U(ξ)^{2(Q+2)/(Q−2)} [ ( 1 + |x|²/4 )² + |t|² ]²
+  = ((Q−2)²/16) U(ξ)².
+ To get the third equality, we use the fact
+ ( −1 + |x|⁴/16 + |t|² )² = ( ( 1 + |x|²/4 )² + |t|² − 2( 1 + |x|²/4 ) )²
+  = ( ( 1 + |x|²/4 )² + |t|² )² + 4( 1 + |x|²/4 )² − 4( 1 + |x|²/4 )( ( 1 + |x|²/4 )² + |t|² )
+  = ( ( 1 + |x|²/4 )² + |t|² )² − ( 1 + |x|²/4 )²|x|² − |t|²|x|² − 4|t|².
+ This completes the proof of Lemma 2.3.
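On the Heisenberg group (m = 2, n = 1, Q = 4, so (Q−2)²/16 = 1/4), Lemma 2.3 can be verified by finite differences applied to the explicit formula for U_{λ,η} above; the test point below is an arbitrary illustrative choice:

```python
def U_le(lam, y1, y2, w, x1, x2, t):
    # Explicit U_{lambda,eta} on the Heisenberg group (Q = 4), with
    # <y, U^(1) x> = y1*x2 - y2*x1 for U^(1) = [[0, 1], [-1, 0]].
    d2 = (x1 - y1) ** 2 + (x2 - y2) ** 2
    tt = t - w - (y1 * x2 - y2 * x1) / 2
    return lam * ((1 + lam * lam * d2 / 4) ** 2 + lam ** 4 * tt * tt) ** -0.5

x1, x2, t = 0.5, -0.3, 0.7   # arbitrary test point
h = 1e-4

def partial(i):
    # central difference in the i-th of (lam, y1, y2, w) at (1, 0, 0, 0)
    up = [1.0, 0.0, 0.0, 0.0]; up[i] += h
    dn = [1.0, 0.0, 0.0, 0.0]; dn[i] -= h
    return (U_le(*up, x1, x2, t) - U_le(*dn, x1, x2, t)) / (2 * h)

U0 = U_le(1.0, 0.0, 0.0, 0.0, x1, x2, t)
total = partial(1) ** 2 + partial(2) ** 2 + partial(3) ** 2 + 0.25 * partial(0) ** 2
assert abs(total - 0.25 * U0 * U0) < 1e-6   # Lemma 2.3 with (Q-2)^2/16 = 1/4
```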
941
+
+ For simplicity, we set
+ ω_j = (4/(Q−2)) U(ξ)^{−1} ∂U_{λ,η}/∂y_j |_{λ=1,η=0},  j = 1, …, m;
+ ω_{m+r} = (4/(Q−2)) U(ξ)^{−1} ∂U_{λ,η}/∂w_r |_{λ=1,η=0},  r = 1, …, n;
+ ω_{m+n+1} = (2/(Q−2)) U(ξ)^{−1} ∂U_{λ,η}/∂λ |_{λ=1,η=0}.  (2.5)
+ By Lemma 2.3 and (2.4), we have
+ Σ_{j=1}^{m+n+1} ω_j² = 1;  (2.6)
+ ∆_G( U(ξ) ω_j ) + m(Q+2) U(ξ)^{(Q+2)/(Q−2)} ω_j = 0,  1 ≤ j ≤ m+n+1.  (2.7)
969
+
+ 3. Proof of Theorems 1.2 and 1.3
+ In this section, we shall prove Theorems 1.2 and 1.3. The proof depends on a scheme of subcritical approximation due to Hang and Wang [15]. We first establish the following subcritical Sobolev inequality on G.
+ Lemma 3.1. Let 2 ≤ p < 2Q/(Q−2). There exists C > 0 such that for each u ∈ W^{1,2}_0(G),
+ ∫_G |∇_G u|² dxdt ≥ C ( ∫_G |u|^p U(x, t)^{2Q/(Q−2)−p} dxdt )^{2/p}.
+ Proof. By Hölder's inequality, we have
+ ∫_G |u|^p U(x, t)^{2Q/(Q−2)−p} dxdt = ∫_G ( |u| U(x, t)^{2/(Q−2)} )^{Q−(Q−2)p/2} |u|^{Q(p−2)/2} dxdt
+  ≤ ( ∫_G |u|² U(x, t)^{4/(Q−2)} dxdt )^{(2Q−(Q−2)p)/4} ( ∫_G |u|^{2Q/(Q−2)} dxdt )^{(Q−2)(p−2)/4}
+  = ( ∫_G u² / ( (1 + |x|²/4)² + |t|² ) dxdt )^{(2Q−(Q−2)p)/4} ( ∫_G |u|^{2Q/(Q−2)} dxdt )^{(Q−2)(p−2)/4}
+  ≤ C ( ∫_G |∇_G u|² dxdt )^{p/2},
+ where C is a positive constant independent of u. To get the last inequality above, we use the Folland–Stein inequality (1.1) and Lemma 2.2. This completes the proof of Lemma 3.1.
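The exponent bookkeeping in this Hölder step is easy to get wrong; a quick sanity check at the sample value Q = 4 (any Q > 2 works the same way, and the values of p are arbitrary subcritical choices):

```python
from fractions import Fraction

Q = Fraction(4)
s = 2 * Q / (Q - 2)                          # critical exponent 2Q/(Q-2)
for p in (Fraction(5, 2), Fraction(3), Fraction(7, 2), Fraction(15, 4)):
    a = Q - (Q - 2) * p / 2                  # power put on |u| U^{2/(Q-2)}
    b = Q * (p - 2) / 2                      # power left on |u|
    assert a + b == p                        # the two factors recombine to |u|^p
    assert 2 * a / (Q - 2) == s - p          # weight matches U^{2Q/(Q-2)-p}
    assert b * 2 / (2 - a) == s              # second Holder factor is the L^{2Q/(Q-2)} norm
    # Dirichlet-energy powers: a/2 from Lemma 2.2 plus Q(p-2)/4 from (1.1) give p/2
    assert a / 2 + Q * (p - 2) / 4 == p / 2
```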
1048
+
+ By Lemma 3.1, we have
+ W^{1,2}_0(G) ↪ L^p( G, U(x, t)^{2Q/(Q−2)−p} dxdt ).
+ Furthermore, the embedding map is compact.
+ Lemma 3.2. Let 2 ≤ p < 2Q/(Q−2). The embedding map W^{1,2}_0(G) ↪ L^p( G, U(x, t)^{2Q/(Q−2)−p} dxdt ) is compact.
+ Proof. The proof is similar to that given by Schneider (see [23], Section 2.2). Let φ : G → [0, 1] be a cut-off function that is equal to one in B_1(0) and zero outside of B_2(0). Consider the operator
+ I_R : W^{1,2}_0(G) → L^p( G, U(x, t)^{2Q/(Q−2)−p} dxdt )
+ defined by I_R(u) = u(x, t) φ( x/R, t/R² ). Since the embedding map W^{1,2}_0(B_2(0)) ↪ L^p(B_2(0)) is compact, so is I_R. Moreover, by Hölder's inequality,
+ ∫_G |u − I_R(u)|^p U(x, t)^{2Q/(Q−2)−p} dxdt ≤ ∫_{G∖B_R(0)} |u|^p U(x, t)^{2Q/(Q−2)−p} dxdt
+  ≤ ( ∫_{G∖B_R(0)} |u|^{2Q/(Q−2)} dxdt )^{((Q−2)/(2Q))p} ( ∫_{G∖B_R(0)} U(x, t)^{2Q/(Q−2)} dxdt )^{1−((Q−2)/(2Q))p}
+  ≤ C ( ∫_G |∇_G u|² dxdt )^{p/2} ( ∫_{G∖B_R(0)} U(x, t)^{2Q/(Q−2)} dxdt )^{1−((Q−2)/(2Q))p}.
+ To get the last inequality above, we use the Folland–Stein inequality (1.1). By polar coordinates, we have
+ ∫_{G∖B_R(0)} U(x, t)^{2Q/(Q−2)} dxdt = ∫_{G∖B_R(0)} ( (1 + |x|²/4)² + |t|² )^{−Q/2} dxdt
+  ≤ ∫_{G∖B_R(0)} ( (|x|²/4)² + |t|² )^{−Q/2} dxdt
+  = ∫_R^∞ ∫_Σ ρ^{−2Q} ρ^{Q−1} dσ dρ
+  = |Σ| / (Q R^Q) → 0,  R → ∞,
+ where |Σ| is the volume of Σ. Therefore, the embedding map
+ W^{1,2}_0(G) ↪ L^p( G, U(x, t)^{2Q/(Q−2)−p} dxdt )
+ is a limit of compact operators and thus it is compact. The proof of Lemma 3.2 is thereby completed.
1167
+ By Lemma 3.2, the minimization problem
+ By Lemma 3.2, the minimization problem
+ Λ_p = inf { ∫_G |∇_G u|² dxdt : ∫_G |u|^p U(x, t)^{2Q/(Q−2)−p} dxdt = 1 },  2 ≤ p < 2Q/(Q−2),  (3.1)
+ has a positive solution u. We shall show that such a u satisfies a zero-moment condition. The main result is the following lemma:
+ Lemma 3.3. Let 2 ≤ p < 2Q/(Q−2) and let u be a positive solution of (3.1). Then we have
+ ∫_G u^p U(x, t)^{2Q/(Q−2)−p} ω_i dxdt = 0,  i = 1, 2, …, m+n+1,  (3.2)
+ where ω_i (1 ≤ i ≤ m+n+1) is given by (2.5).
+ Proof. For simplicity, we set
+ F_p(u) = ∫_G |∇_G u|² dxdt / ( ∫_G |u|^p U(x, t)^{2Q/(Q−2)−p} dxdt )^{2/p}
+ and
+ u_{λ^{−1},η^{−1}}(ξ) = λ^{−(Q−2)/2} u( δ_{λ^{−1}}(η ∘ ξ) ),  λ > 0,  η = (y_1, …, y_m, w_1, …, w_n) ∈ G.
+ A simple calculation shows
+ ∫_G |∇_G u_{λ^{−1},η^{−1}}|² dxdt = ∫_G |∇_G u|² dxdt;
+ ∫_G u_{λ^{−1},η^{−1}}^p U(x, t)^{2Q/(Q−2)−p} dxdt = ∫_G u^p U_{λ,η}(x, t)^{2Q/(Q−2)−p} dxdt,
+ where U_{λ,η} is given by (1.5). Therefore,
+ F_p( u_{λ^{−1},η^{−1}}(ξ) ) = ∫_G |∇_G u|² dxdt / ( ∫_G |u|^p U_{λ,η}(x, t)^{2Q/(Q−2)−p} dxdt )^{2/p}.  (3.3)
+ Since u is a positive solution of (3.1), we have
+ ∂/∂y_j F_p( u_{λ^{−1},η^{−1}}(ξ) ) |_{λ=1,η=0} = 0,  j = 1, …, m;
+ ∂/∂w_r F_p( u_{λ^{−1},η^{−1}}(ξ) ) |_{λ=1,η=0} = 0,  r = 1, …, n;
+ ∂/∂λ F_p( u_{λ^{−1},η^{−1}}(ξ) ) |_{λ=1,η=0} = 0.  (3.4)
+ Combining (3.3) and (3.4) yields (3.2). This completes the proof of Lemma 3.3.
1251
+
+ Remark 3.4. We remark that Lemma 3.3 is also valid for u > 0 satisfying the Yamabe-type equation
+ ∆_G u + Λ_p u^{p−1} U(x, t)^{2Q/(Q−2)−p} = 0.
+ The proof is the same and we omit it (see [15], Corollary 1, for the case of the CR sphere).
+ Lemma 3.5. It holds that, for any u ∈ W^{1,2}_0(G),
+ Σ_{i=1}^{m+n+1} ∫_G |∇_G(uω_i)|² dxdt = ∫_G |∇_G u|² dxdt + 4m ∫_G u² / ( (1 + |x|²/4)² + |t|² ) dxdt.
1275
+
+ Proof. Let u = U(ξ)v. We compute, through integration by parts,
+ Σ_{i=1}^{m+n+1} ∫_G |∇_G(uω_i)|² dxdt = Σ_{i=1}^{m+n+1} ∫_G |∇_G(vUω_i)|² dxdt
+  = Σ_{i=1}^{m+n+1} ∫_G | Uω_i ∇_G v + v ∇_G(Uω_i) |² dxdt
+  = Σ_{i=1}^{m+n+1} [ ∫_G |∇_G v|² U²ω_i² dxdt + ∫_G |∇_G(Uω_i)|² v² dxdt + (1/2) ∫_G ⟨ ∇_G(Uω_i)², ∇_G v² ⟩ dxdt ]
+  = Σ_{i=1}^{m+n+1} [ ∫_G |∇_G v|² U²ω_i² dxdt − ∫_G v² Uω_i ∆_G(Uω_i) dxdt ]
+  = ∫_G |∇_G v|² U² dxdt + m(Q+2) ∫_G v² U^{2Q/(Q−2)} dxdt.  (3.5)
+ To get the last equality, we use (2.6) and (2.7). On the other hand, by (2.3), we have
+ ∫_G |∇_G v|² U² dxdt = ∫_G | ∇_G(u/U) |² U² dxdt = ∫_G |∇_G u|² dxdt − m(Q−2) ∫_G u² / ( (1 + |x|²/4)² + |t|² ) dxdt.  (3.6)
+ Substituting (3.6) into (3.5), we obtain
+ Σ_{i=1}^{m+n+1} ∫_G |∇_G(uω_i)|² dxdt = ∫_G |∇_G u|² dxdt + 4m ∫_G u² / ( (1 + |x|²/4)² + |t|² ) dxdt.
+ This completes the proof of Lemma 3.5.
1376
+ Now we can give the proof of Theorem 1.2. The idea is due to Frank and Lieb
1377
+ [11, 12] and Hang and Wang [15].
1378
+ Proof of Theorem 1.2. Let 2 ≤ p <
1379
+ 2Q
1380
+ Q−2 and up be a positive solution of
1381
+ (3.1). The 2nd variation of the functional Fp around up shows that
1382
+
1383
+ G
1384
+ |∇Gf|2dxdt
1385
+
1386
+ G
1387
+ up
1388
+ pU(x, t)
1389
+ 2Q
1390
+ Q−2 −pdxdt−
1391
+ (p − 1)
1392
+
1393
+ G
1394
+ |∇Gup|2dxdt
1395
+
1396
+ G
1397
+ up−2
1398
+ p
1399
+ Uλ,η(x, t)
1400
+ 2Q
1401
+ Q−2 −pf 2dxdt ≥ 0
1402
+ for any f with
1403
+
1404
+ G
1405
+ up
1406
+ pUλ,η(x, t)
1407
+ 2Q
1408
+ Q−2 −pfdxdt = 0.
1409
+
+ By Lemma 3.3, we may choose f = u_pω_i, i = 1, 2, …, m+n+1. Summing the corresponding inequalities for all such f yields, in view of (2.6) and Lemma 3.5,
+ 0 ≤ Σ_{i=1}^{m+n+1} ∫_G |∇_G(u_pω_i)|² dxdt − (p−1) ∫_G |∇_G u_p|² dxdt = 4m ∫_G u_p² / ( (1 + |x|²/4)² + |t|² ) dxdt − (p−2) ∫_G |∇_G u_p|² dxdt,
+ i.e.
+ (p−2) [ ∫_G |∇_G u_p|² dxdt − m(Q−2) ∫_G u_p² / ( (1 + |x|²/4)² + |t|² ) dxdt ]
+  ≤ m(Q−2) ( 2Q/(Q−2) − p ) ∫_G u_p² / ( (1 + |x|²/4)² + |t|² ) dxdt
+  ≤ m(Q−2) ( 2Q/(Q−2) − p ) ( ∫_G u_p^p U(x, t)^{2Q/(Q−2)−p} dxdt )^{2/p} ( ∫_G U(x, t)^{2Q/(Q−2)} dxdt )^{1−2/p}
+  = m(Q−2) ( 2Q/(Q−2) − p ) ( ∫_G U(x, t)^{2Q/(Q−2)} dxdt )^{1−2/p} → 0,  p ↗ 2Q/(Q−2).
+ To get the last equality, we use the fact ∫_G u_p^p U(x, t)^{2Q/(Q−2)−p} dxdt = 1. Therefore, by Lemma 2.2, we obtain
+ ∫_G |∇_G u_p|² dxdt − m(Q−2) ∫_G u_p² / ( (1 + |x|²/4)² + |t|² ) dxdt → 0,  p ↗ 2Q/(Q−2),
+ or equivalently,
+ ∫_G |∇_G( U^{−1}u_p )|² U² dxdt → 0,  p ↗ 2Q/(Q−2).
+ So we can choose a sequence {p_k : k = 1, 2, …} such that p_k ↗ 2Q/(Q−2) and u_{p_k} converges to a nonzero function c_0U (for the reader's convenience, we prove this in Lemma 3.6). Thus c_0U is an extremal function of
+ Λ = inf { ∫_G |∇_G u|² dxdt : ∫_G |u|^{2Q/(Q−2)} dxdt = 1 }.
+ The value S_{m,n} has been calculated in [13], Theorem 1.6. The proof of Theorem 1.2 is thereby completed.
+ Lemma 3.6. Let u_p (2 ≤ p < 2Q/(Q−2)) be a positive solution of (3.1). If
+ ∫_G |∇_G u_p|² dxdt − m(Q−2) ∫_G u_p² / ( (1 + |x|²/4)² + |t|² ) dxdt → 0,  p ↗ 2Q/(Q−2),
+ then there exist c_0 > 0 and a sequence {p_k : k = 1, 2, …} such that p_k ↗ 2Q/(Q−2) and
+ ∫_G |∇_G( u_{p_k} − c_0U )|² dxdt → 0,  k → ∞.
1547
+
+ Proof. By Lemma 2.2, μ_1 = m(Q−2) is simple, with eigenfunction U, for (1.8) with λ = 1 and η = 0. Decompose u_p as
+ u_p = λ_p U + v_p  (3.7)
+ with
+ λ_p = ∫_G U^{(Q+2)/(Q−2)} u_p dxdt / ∫_G U^{2Q/(Q−2)} dxdt > 0.
+ Then v_p ⊥ U, i.e.
+ ∫_G U^{4/(Q−2)} · U v_p dxdt = ∫_G U^{(Q+2)/(Q−2)} v_p dxdt = 0,  ∫_G ⟨ ∇_G U, ∇_G v_p ⟩ dxdt = 0.  (3.8)
+ Therefore, we have
+ ∫_G |∇_G v_p|² dxdt ≥ μ_2 ∫_G U^{4/(Q−2)} v_p² dxdt = μ_2 ∫_G v_p² / ( (1 + |x|²/4)² + |t|² ) dxdt,  (3.9)
+ where μ_2 is the second eigenvalue of (1.8) with λ = 1 and η = 0. We compute, by using (3.8) and (3.9),
+ ∫_G |∇_G u_p|² dxdt − m(Q−2) ∫_G u_p² / ( (1 + |x|²/4)² + |t|² ) dxdt
+  = ∫_G ( λ_p² |∇_G U|² + |∇_G v_p|² ) dxdt − μ_1 ∫_G ( λ_p² U² + v_p² ) / ( (1 + |x|²/4)² + |t|² ) dxdt
+  = ∫_G |∇_G v_p|² dxdt − μ_1 ∫_G v_p² / ( (1 + |x|²/4)² + |t|² ) dxdt
+  = (μ_1/μ_2) [ ∫_G |∇_G v_p|² dxdt − μ_2 ∫_G v_p² / ( (1 + |x|²/4)² + |t|² ) dxdt ] + ((μ_2 − μ_1)/μ_2) ∫_G |∇_G v_p|² dxdt
+  ≥ ((μ_2 − μ_1)/μ_2) ∫_G |∇_G v_p|² dxdt.
+ Therefore,
+ ∫_G |∇_G v_p|² dxdt ≤ ( μ_2/(μ_2 − μ_1) ) [ ∫_G |∇_G u_p|² dxdt − m(Q−2) ∫_G u_p² / ( (1 + |x|²/4)² + |t|² ) dxdt ] → 0,  p ↗ 2Q/(Q−2).  (3.10)
1676
+
+ On the other hand, by (3.7), Minkowski's inequality, Hölder's inequality and (1.1), we have
+ λ_p ( ∫_G U(x, t)^{2Q/(Q−2)} dxdt )^{1/p} = ( ∫_G (u_p − v_p)^p U(x, t)^{2Q/(Q−2)−p} dxdt )^{1/p}
+  ≤ ( ∫_G u_p^p U(x, t)^{2Q/(Q−2)−p} dxdt )^{1/p} + ( ∫_G |v_p|^p U(x, t)^{2Q/(Q−2)−p} dxdt )^{1/p}
+  = 1 + ( ∫_G |v_p|^p U(x, t)^{2Q/(Q−2)−p} dxdt )^{1/p}
+  ≤ 1 + ( ∫_G |v_p|^{2Q/(Q−2)} dxdt )^{(Q−2)/(2Q)} ( ∫_G U^{2Q/(Q−2)} dxdt )^{1/p − (Q−2)/(2Q)}
+  ≤ 1 + C ( ∫_G |∇_G v_p|² dxdt )^{1/2} ( ∫_G U^{2Q/(Q−2)} dxdt )^{1/p − (Q−2)/(2Q)}.  (3.11)
+ Substituting (3.10) into (3.11), we obtain
+ limsup_{p↗2Q/(Q−2)} λ_p ≤ ( ∫_G U(x, t)^{2Q/(Q−2)} dxdt )^{−(Q−2)/(2Q)}.
+ Therefore, there exist c_0 ≥ 0 and a sequence {p_k : k = 1, 2, …} such that p_k ↗ 2Q/(Q−2) and
+ λ_{p_k} → c_0,  k → ∞.  (3.12)
+ We claim that
+ ∫_G |∇_G( u_{p_k} − c_0U )|² dxdt → 0,  k → ∞.  (3.13)
+ In fact, by using (3.10) and (3.12), we obtain
+ ∫_G |∇_G( u_{p_k} − c_0U )|² dxdt = ∫_G |∇_G( v_{p_k} + (λ_{p_k} − c_0)U )|² dxdt = ∫_G |∇_G v_{p_k}|² dxdt + (λ_{p_k} − c_0)² ∫_G |∇_G U|² dxdt → 0,  k → ∞.
+ This proves the claim.
+ Finally, we show that c_0 > 0. In fact, if c_0 = 0, then by (3.13),
+ ∫_G |∇_G u_{p_k}|² dxdt → 0,  k → ∞.
+ On the other hand, by Hölder's inequality and (1.1), we obtain
+ 1 = ( ∫_G u_{p_k}^{p_k} U(x, t)^{2Q/(Q−2)−p_k} dxdt )^{1/p_k}
+  ≤ ( ∫_G |u_{p_k}|^{2Q/(Q−2)} dxdt )^{(Q−2)/(2Q)} ( ∫_G U^{2Q/(Q−2)} dxdt )^{1/p_k − (Q−2)/(2Q)}
+  ≤ C ( ∫_G |∇_G u_{p_k}|² dxdt )^{1/2} ( ∫_G U^{2Q/(Q−2)} dxdt )^{1/p_k − (Q−2)/(2Q)} → 0,  k → ∞,
+ which is a contradiction. So c_0 > 0. The proof of Lemma 3.6 is thereby completed.
1838
+
+ Finally, we give the proof of Theorem 1.3.
+ Proof of Theorem 1.3. A simple scaling argument shows that the eigenvalues do not depend on λ and η. So we may assume λ = 1 and η = 0. From Lemma 2.2 we know that μ_1 = m(Q−2) is simple with eigenfunction U. Next, we show μ_2 ≥ m(Q+2). Let V ≠ 0 be an eigenfunction of μ_2. Then
+ μ_2 = ∫_G |∇_G V|² dxdt / ∫_G U^{4/(Q−2)} V² dxdt.  (3.14)
+ Furthermore, since V ⊥ U, we have
+ ∫_G ⟨ ∇_G U, ∇_G V ⟩ dxdt = 0,  ∫_G U^{4/(Q−2)} · UV dxdt = ∫_G U^{(Q+2)/(Q−2)} V dxdt = 0.  (3.15)
+ Set
+ Φ(ε) = ∫_G |∇_G(U + εV)|² dxdt / ( ∫_G |U + εV|^{2Q/(Q−2)} dxdt )^{(Q−2)/Q},  ε ∈ R.
+ By Theorem 1.2, U is an extremal function of the Folland–Stein inequality (1.7). So we have Φ′(0) = 0 and Φ″(0) ≥ 0. We compute
+ Φ′(ε) = 2 ∫_G ⟨ ∇_G(U + εV), ∇_G V ⟩ dxdt / ( ∫_G |U + εV|^{2Q/(Q−2)} dxdt )^{(Q−2)/Q}
+  − 2 ∫_G |∇_G(U + εV)|² dxdt / ( ∫_G |U + εV|^{2Q/(Q−2)} dxdt )^{(2Q−2)/Q} · ∫_G |U + εV|^{4/(Q−2)} (U + εV) V dxdt
+  = Φ_1(ε) − Φ_2(ε),
+ where
+ Φ_1(ε) = 2 ∫_G ⟨ ∇_G(U + εV), ∇_G V ⟩ dxdt / ( ∫_G |U + εV|^{2Q/(Q−2)} dxdt )^{(Q−2)/Q};
+ Φ_2(ε) = 2 ∫_G |∇_G(U + εV)|² dxdt / ( ∫_G |U + εV|^{2Q/(Q−2)} dxdt )^{(2Q−2)/Q} · ∫_G |U + εV|^{4/(Q−2)} (U + εV) V dxdt.
1930
+
+ By using (3.15), we have
+ Φ_1′(0) = 2 ∫_G |∇_G V|² dxdt / ( ∫_G |U|^{2Q/(Q−2)} dxdt )^{(Q−2)/Q} − 4 ∫_G ⟨ ∇_G U, ∇_G V ⟩ dxdt / ( ∫_G |U|^{2Q/(Q−2)} dxdt )^{(2Q−2)/Q} · ∫_G U^{(Q+2)/(Q−2)} V dxdt
+  = 2 ∫_G |∇_G V|² dxdt / ( ∫_G |U|^{2Q/(Q−2)} dxdt )^{(Q−2)/Q};
+ Φ_2′(0) = 4 ∫_G ⟨ ∇_G U, ∇_G V ⟩ dxdt / ( ∫_G |U|^{2Q/(Q−2)} dxdt )^{(2Q−2)/Q} · ∫_G U^{(Q+2)/(Q−2)} V dxdt
+  − (8(Q−1)/(Q−2)) ∫_G |∇_G U|² dxdt / ( ∫_G |U|^{2Q/(Q−2)} dxdt )^{(3Q−2)/Q} · ( ∫_G U^{(Q+2)/(Q−2)} V dxdt )²
+  + (2(Q+2)/(Q−2)) ∫_G |∇_G U|² dxdt / ( ∫_G U^{2Q/(Q−2)} dxdt )^{(2Q−2)/Q} · ∫_G U^{4/(Q−2)} V² dxdt
+  = (2(Q+2)/(Q−2)) ∫_G |∇_G U|² dxdt / ( ∫_G U^{2Q/(Q−2)} dxdt )^{(2Q−2)/Q} · ∫_G U^{4/(Q−2)} V² dxdt.
+ Therefore,
+ 0 ≤ Φ″(0) = Φ_1′(0) − Φ_2′(0) = 2 ∫_G |∇_G V|² dxdt / ( ∫_G |U|^{2Q/(Q−2)} dxdt )^{(Q−2)/Q} − (2(Q+2)/(Q−2)) ∫_G |∇_G U|² dxdt / ( ∫_G U^{2Q/(Q−2)} dxdt )^{(2Q−2)/Q} · ∫_G U^{4/(Q−2)} V² dxdt,
+ i.e.
+ ∫_G |∇_G V|² dxdt / ∫_G |U|^{4/(Q−2)} V² dxdt ≥ ((Q+2)/(Q−2)) ∫_G |∇_G U|² dxdt / ∫_G |U|^{2Q/(Q−2)} dxdt.  (3.16)
+ Combining (3.14) and (3.16) yields
+ μ_2 ≥ ((Q+2)/(Q−2)) ∫_G |∇_G U|² dxdt / ∫_G |U|^{2Q/(Q−2)} dxdt = ((Q+2)/(Q−2)) μ_1 = m(Q+2).
+ On the other hand, by (2.4), { ∂_λ U_{λ,η} |_{λ=1,η=0}, ∇_η U_{λ,η} |_{λ=1,η=0} } are eigenfunctions with eigenvalue m(Q+2). So μ_2 = m(Q+2). This completes the proof of Theorem 1.3.
2086
+ References
2087
+ [1] G. Bianchi, H. Egnell, A note on the Sobolev inequality, J. Funct. Anal., 100 (1991), 18-24.
2088
+ [2] A. Bonfiglioli, F. Uguzzoni, Nonlinear Liouville theorems for some critical problems on H-type
2089
+ groups, J. Funct. Anal., 207(2004), 161-215.
2090
+ [3] H. Brezis, E. Lieb, Sobolev inequalities with remainder terms, J. Funct. Anal., 62(1985),
2091
+ 73-86.
2092
+ [4] M. Christ, H. Liu, A. Zhang, Sharp Hardy-Littlewood-Sobolev inequalities on quaternionic
2093
+ Heisenberg groups, Nonlinear Analysis, 130(2016), 361-395.
2094
+ [5] M. Christ, H. Liu, A. Zhang, Sharp Hardy-Littlewood-Sobolev inequalities on octonionic
2095
+ Heisenberg group, Calc. Var. 55, Article number: 11 (2016).
2096
+
2099
+ [6] M. Cowling, A. H. Dooley, A. Korányi, F. Ricci, H-type groups and Iwasawa decompositions,
2100
+ Adv. Math., 87 (1991), 1-41.
2101
+ [7] J. Dolbeault, M. J. Esteban, A. Figalli, R. L. Frank, M. Loss, Stability for the Sobolev
2102
+ inequality with explicit constants, arXiv:2209.08651v2 [math.AP].
2103
+ [8] G. B. Folland, Subelliptic estimates and function spaces on nilpotent Lie groups, Ark. Mat.
2104
+ 13 (1975), 161-207.
2105
+ [9] G. B. Folland, E. M. Stein, Estimates for the ¯∂b complex and analysis on the Heisenberg
2106
+ group, Comm. Pure Appl. Math. 27 (1974), 429-522.
2107
+ [10] G.B. Folland, E.M. Stein, Hardy spaces on homogeneous groups, Princeton University Press,
2108
+ Princeton, NJ, 1982.
2109
+ [11] R. L. Frank, E. Lieb, Sharp constants in several inequalities on the Heisenberg group. Ann.
2110
+ of Math. (2) 176 (2012), no. 1, 349-381.
2111
+ [12] R. L. Frank, E. Lieb, A new, rearrangement-free proof of the sharp Hardy-Littlewood-Sobolev
+ inequality. Spectral theory, function spaces and inequalities, 55-67, Oper.
+ Theory Adv. Appl., 219, Birkhäuser/Springer Basel AG, Basel, 2012.
2114
+ [13] N. Garofalo, D. Vassilev, Symmetry properties of positive entire solutions of Yamabe type
2115
+ equations on groups of Heisenberg type, Duke Math. J., 106 (2001), no. 3, 411-449.
2116
+ [14] N. Garofalo, D. Vassilev, Regularity near the characteristic set in the non-linear Dirich-
2117
+ let problem and conformal geometry of sub-Laplacians on Carnot groups, Math. Ann., 318
2118
+ (2000), no. 3, 453-516.
2119
+ [15] F. Hang, X. Wang, A simpler proof of Frank and Lieb’s sharp inequality on the Heisenberg
2120
+ Group, arXiv:2211.10301v2 [math.AP].
2121
+ [16] S. Ivanov, I. Minchev, D. Vassilev, Extremals for the Sobolev inequality on the seven-
2122
+ dimensional quaternionic Heisenberg group and the quaternionic contact Yamabe problem,
2123
+ J. Eur. Math. Soc., 12 (2010), no. 4, 1041-1067.
2124
+ [17] S. Ivanov, I. Minchev, D. Vassilev, The optimal constant in the L2 Folland-Stein inequality
2125
+ on the quaternionic Heisenberg group, Ann. Sc. Norm. Super. Pisa Cl. Sci., XI (2012), no. 5,
2126
+ 635-652.
2127
+ [18] D. Jerison, J. M. Lee, The Yamabe problem on CR manifolds. J. Diff. Geom. 25 (1987), no.
2128
+ 2, 167-197.
2129
+ [19] D. Jerison, J. M. Lee, Extremals for the Sobolev inequality on the Heisenberg group and the
2130
+ CR Yamabe problem. J. Amer. Math. Soc. 1 (1988), no. 1, 1-13.
2131
+ [20] D. Jerison, J. M. Lee, Intrinsic CR normal coordinates and the CR Yamabe problem, J. Diff.
2132
+ Geom. 29 (1989), 303-343.
2133
+ [21] A. Kaplan, Fundamental solutions for a class of hypoelliptic PDE generated by composition
2134
+ of quadratic forms, Trans. Amer. Math. Soc., 258 (1) (1980) 147-153.
2135
+ [22] L. Roncal, S. Thangavelu, An extension problem and trace Hardy inequality for the sublapla-
2136
+ cian on H-type groups, arXiv:1708.09258 [math.AP].
2137
+ [23] M. Schneider, Entire solutions of semilinear elliptic problems with indefinite nonlinearities,
2138
+ Shaker Verlag GmbH, Germany, 2001.
2139
+ [24] D. Vassilev, Regularity near the characteristic boundary for sub-laplacian operators, Pacific
2140
+ J. Math., 227 (2006), no. 2, 361-397.
2141
+ School of Mathematics and Statistics, Wuhan University, Wuhan, 430072, People’s
2142
+ Republic of China
2143
+ Email address: qhyang.math@whu.edu.cn
2144
+
B9E1T4oBgHgl3EQfpgVg/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
C9E0T4oBgHgl3EQfyQLe/content/tmp_files/2301.02658v1.pdf.txt ADDED
@@ -0,0 +1,313 @@
1
+ Measuring Power with a Saturated Photodiode
2
+ Shiekh Zia Uddin1,*
3
+ 1Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
4
+ *suddin@mit.edu
5
+ ABSTRACT
6
+ Accurate measurement of optical power is pivotal in many applications and scientific research. However, traditional power
7
+ meters are unable to measure power levels beyond a certain saturation point, limiting their usefulness in high-power applications.
8
+ In this technical note, I discuss how optical power can be measured using a saturated photodiode. I demonstrate that by
9
+ monitoring both the dc photocurrent and the ac noise, it is possible to accurately measure power levels beyond the photodiode's saturation point.
10
+ Keywords: Power meter, Photodiode, Saturation, Noise.
11
+ Introduction
12
+ Optical power measurement is a critical aspect of many applications. It is the conventional wisdom that a saturated photodiode
13
+ cannot be used to measure power. The saturation power of a photodiode can be pushed to higher levels by applying a reverse bias
14
+ voltage; however, there is a limit to the bias voltage, because reverse breakdown can be catastrophic to the
15
+ diode. This limitation can be problematic in high-power applications, where it is important to be able to accurately measure
16
+ power levels at high speed. In this technical note, I discuss a method for measuring optical power using a saturated photodiode.
17
+ I demonstrate that the photocurrent noise decreases with power beyond saturation, which can be used to accurately measure
18
+ power at levels beyond the saturation point. This observation might also be useful in photon-noise measurements.
19
+ Background
20
+ Suppose a photoevent generated at t = 0 produces an electric pulse h(t), of area e, in the external circuit. A photoevent generated
21
+ at time t1 then produces a displaced pulse, h(t −t1). Divide the time axis into incremental time intervals ∆t, so that the
22
+ probability that a photoevent occurs within an interval is p = ηΦ∆t. The electric current i at time t is written as
23
+ i(t) = ∑l Xl h(t − l∆t),  (1)
27
+ where Xl assumes the value 1 with probability p, and 0 with probability 1− p. The variables Xl are independent. The mean
28
+ value of Xl is E[Xl] = 0×(1−p) + 1×p = p. Its mean-square value is E[Xl²] = 0²×(1−p) + 1²×p = p. The mean of the
+ product XlXk is p² if l ≠ k, and p if l = k. The mean and mean-square values of i(t) are now determined via
31
+ ī = E[i(t)] = ∑l p h(t − l∆t),  (2)
+ E[i²(t)] = ∑l ∑k E[XlXk] h(t − l∆t) h(t − k∆t)  (3)
+ = ∑l≠k p² h(t − l∆t) h(t − k∆t) + ∑l p h²(t − l∆t).  (4)
46
+ Substituting p = ηΦ∆t, and taking the limit ∆t → 0 so that the summations become integrals, the previous equations yield,
+ respectively,
+ E[i(t)] = ηΦ ∫ h(t) dt,  (5)
+ E[i²(t)] = (ηΦ ∫ h(t) dt)² + ηΦ ∫ h²(t) dt,  (6)
+ where the limits of integration run from zero to infinity. It follows that
+ σi² = E[i²] − E[i]² = ηΦ ∫ h²(t) dt.  (7)
68
+ arXiv:2301.02658v1 [physics.ins-det] 7 Jan 2023
69
+
70
+ The definition of the bandwidth B as
+ B = (1/(2e²)) ∫₀^∞ h²(t) dt = ∫₀^∞ h²(t) dt / [2 (∫₀^∞ h(t) dt)²]  (8)
+ can be readily verified by noting that the Fourier transform of h(t) is its transfer function H(v). The area under h(t) is simply
+ H(0) = e. In accordance with Parseval’s theorem, the area under h²(t) is equal to the area under the symmetric function |H(v)|²,
+ so that
+ B = ∫₀^∞ |H(v)/H(0)|² dv.  (9)
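Equation (9) can be sanity-checked numerically. The single-pole response H(v) = 1/(1 + jv/vc) below is an assumed example (not from this note); for it the integral evaluates to πvc/2:

```python
import numpy as np

# Power-equivalent bandwidth of an assumed single-pole response
# H(v) = 1/(1 + j*v/vc), for which |H(v)/H(0)|^2 = 1/(1 + (v/vc)^2).
vc = 1.0e6                               # pole frequency in Hz (illustrative)
v = np.linspace(0.0, 1.0e9, 2_000_001)   # frequency grid; the 1 GHz cutoff truncates the tail
H2 = 1.0 / (1.0 + (v / vc) ** 2)         # |H(v)/H(0)|^2
B = np.sum(H2) * (v[1] - v[0])           # Eq. (9) by rectangle-rule integration
print(B)                                 # close to pi*vc/2 ~ 1.57e6 Hz
```

The truncation at 1 GHz and the finite grid step contribute well under 1% error at these values.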
94
+ The quantity B is therefore the power-equivalent spectral width of the function H(v) (i.e., the bandwidth of the device/circuit
95
+ combination). As an example, if H(v) = 1 for −Vc < v < Vc and 0 elsewhere, we get B = Vc. Using this definition of bandwidth,
96
+ we get back our familiar expression for the noise in photocurrent
97
+ σi² = 2e E[i] B.  (10)
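The statistics in Eqs. (5) and (7) can be checked with a short Monte Carlo of the Bernoulli sum in Eq. (1). This is a sketch; the rate, time step, and rectangular pulse shape are assumed illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-9                    # time step Delta-t in s (illustrative)
rate = 1e7                   # eta*Phi, photoevents per second (illustrative)
e = 1.602e-19                # electron charge in C
tau = 20e-9                  # pulse duration in s (illustrative)
n = 2_000_000                # number of time steps

# Rectangular pulse h(t) with area e: height e/tau over tau seconds.
h = np.full(int(tau / dt), e / tau)

# Bernoulli photoevents: X_l = 1 with probability p = rate*dt, as in Eq. (1).
X = (rng.random(n) < rate * dt).astype(float)
i = np.convolve(X, h)[len(h):n]   # i(t) = sum_l X_l h(t - l*dt), steady state only

# Compare against E[i] = eta*Phi*integral(h) and var(i) = eta*Phi*integral(h^2):
print(i.mean() / (rate * e))            # ~1
print(i.var() / (rate * e**2 / tau))    # ~1
```

With p = rate·dt = 0.01, the Bernoulli process is close enough to Poisson that both ratios come out within a few percent of 1.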
100
+ So far this is a standard derivation of the shot noise1. Note that this expression hinges on the assumption that the area under h(t)
101
+ is simply e: one photoevent causes one electron’s worth of charge to flow as current. However, in saturation this is definitely
102
+ not the case, and the photodiode response becomes a function of intensity. In a first-order approximation, one can make the
103
+ assumption that h(t) = f(Φ)g(t), where f(Φ) is a power-dependent function that is 1 at low power and decreases at high power,
104
+ and the area under g(t) is e. Then we get
105
+ E[i] = ηΦ f(Φ) ∫ g(t) dt,  (11)
+ σi² = ηΦ f²(Φ) ∫ g²(t) dt,  (12)
114
+ which gives us the key insight that the average current and the noise power scale differently with incident optical power.
115
+ If we take a ratio
116
+ E[i]² / σi² = η²Φ² f²(Φ) e² / (ηΦ f²(Φ) · 2e²B) ∝ Φ,  (13)
122
+ we find that the signal to noise ratio (SNR) in principle is proportional to incident power despite the nonlinearity in the response.
123
+ Even if this proportionality does not hold exactly, with proper calibration it should therefore be possible to measure the power
124
+ with a saturated photodiode by measuring both the average photocurrent and the photocurrent noise.
125
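The scaling in Eq. (13) can be illustrated with a toy model. The saturation function f(Φ) = 1/(1 + Φ/Φsat) used here is an assumed form chosen only for illustration, not the measured response of any diode:

```python
import numpy as np

eta = 1.0                      # quantum efficiency (assumed)
e = 1.602e-19                  # electron charge in C
B = 1.0e6                      # detection bandwidth in Hz (assumed)
phi = np.logspace(14, 18, 50)  # photon flux in photons/s
phi_sat = 1.0e16               # saturation flux of the toy model (assumed)
f = 1.0 / (1.0 + phi / phi_sat)

mean_i = eta * phi * f * e                  # Eq. (11), with integral g dt = e
var_i = eta * phi * f**2 * 2 * e**2 * B     # Eq. (12), with integral g^2 dt = 2*e^2*B
snr = mean_i**2 / var_i                     # Eq. (13)

# The mean current saturates, but the SNR remains strictly proportional to flux:
print(np.allclose(snr, eta * phi / (2 * B)))   # True
```

Because f(Φ) cancels out of the ratio, the SNR slope carries the flux information even deep in saturation.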
+ Results
126
+ Figure 1A shows a schematic diagram of a standard photodiode driving circuit. The photodiode can be modelled as a current
127
+ source in parallel with a diode2. A reverse bias voltage is applied to set the operating point, but there is a limit to how much
128
+ voltage can be applied defined by junction breakdown3. Such a circuit can be simulated using conventional electrical circuit
129
+ theory4 (MATLAB code below); the resulting operating current with realistic circuit parameters is shown in Fig. 1B. We can see
130
+ that the photodiode saturates at some power and the saturation knee increases with reverse bias voltage. The highest measurable
131
+ power is determined by the highest reverse bias voltage that can be applied across a junction. Below saturation the current is
132
+ linear with incident power, which is expected.
133
+ Experimental data from a reverse-biased photodiode are shown in Fig. 2, where reverse bias is seen to increase the saturation
134
+ power. Before saturation the voltage is proportional to power. After saturation no measurement of power is possible, which is
135
+ the current paradigm.
136
+ We can now attempt to measure power beyond saturation. In Fig. 3A we show the photovoltage and the photocurrent noise
137
+ as a function of incident power. Photocurrent noise is measured around 1.5 MHz with a spectrum analyzer (10 kHz resolution
138
+ bandwidth, 1 MHz span, with preamplifier on and no attenuation, electronic noise floor is −165 dBm/Hz). Below saturation
139
+ photovoltage and noise increase simultaneously. As the photovoltage saturates, the photocurrent noise suddenly decreases.
140
+ This qualitatively follows our theoretical voltage and noise shown in Fig. 1. In Fig. 3B we show the photovoltage and SNR
141
+ calculated from the experimental data. Beyond saturation SNR changes with incident power, which can be used to measure
142
+ power after proper calibration. Such behaviour also holds at other noise frequencies, as long as they are below the bandwidth
143
+ of the photodiode and away from the 1/f noise region.
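In practice, the calibration could be a simple lookup from measured SNR back to power, assuming the SNR stays monotonic in power as Eq. (13) suggests. A minimal sketch with made-up calibration numbers (not measured data):

```python
import numpy as np

# Hypothetical calibration table: known incident powers and the SNR
# measured at each point (values are invented for illustration).
cal_power = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0])  # mW
cal_snr = np.array([1.0, 2.1, 3.9, 6.2, 7.9, 10.1])        # arbitrary units

def power_from_snr(snr):
    """Invert the monotonic calibration curve by linear interpolation."""
    return float(np.interp(snr, cal_snr, cal_power))

print(power_from_snr(6.2))   # ~30 mW, a calibration point
```

A denser calibration grid, or a fitted model of Eq. (13), would reduce the interpolation error between points.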
145
+ 2/4
146
+
147
+ Figure 1. Schematic diagram of a photodiode driving circuit and simulated operating current I0.
148
+ Figure 2. Output voltage across the 25Ω load resistance from a Thorlabs FDGA05 InGaAs photodiode at different reverse
149
+ bias excited by a 1070 nm CW laser. It has a 0.95 A/W responsivity, 2.5 ns rise time and 0.5 mm active area diameter.
150
+ Figure 3. (A) Output voltage and noise from an Excelitas C30641GH6 InGaAs photodiode at 30 V reverse bias excited by a
151
+ 1550 nm femtosecond pulsed laser. It saturates around 25 mW. (B) Even though photovoltage has saturated, the SNR shows
152
+ response beyond the saturation power.
153
+ 3/4
154
+ [Figure graphics residue (axis tick values and labels from Figs. 1–3) omitted; see the figure captions above.]
+ Discussion
258
+ Even when a photodiode is saturated, the information about the photon flux intensity is not completely lost; it is, in a way,
259
+ encoded in the photocurrent noise, which can be practically used to measure power at high speed after proper calibration.
260
+ Codes
261
+ %% MATLAB code to solve for the photocurrent and noise in the driving circuit
+ clc; clear all; close all;
+ P = linspace(0,100,1000)*1e-3;   % Incident power in W
+ Responsivity = 1;                % A/W
+ Ip = P*Responsivity;             % Expected (unsaturated) photocurrent
+ R = 500;                         % Load resistance in Ohm
+ V = [linspace(-50,0,1e5), linspace(0,0.7,1e5)];  % Diode voltage grid
+ VR = 30;                         % Reverse bias voltage in V
+ Iop = zeros(size(P));            % Operating photocurrent
+ for indx = 1:length(P)
+     I1 = -Ip(indx) + 0.1e-9*exp(V/0.0259);  % Diode equation shifted by the photocurrent
+     I2 = -(V + VR)/R;                        % Load line
+     [~,pos] = min(abs(I1 - I2));             % Operating point: intersection of the two curves
+     Iop(indx) = -I2(pos);
+ end
+ figure(1); subplot(121), plot(P/1e-3, Iop);  % Operating current vs power
+ f = Iop./P;                                  % Nonlinear response function
+ S = 10*log10(2*1.6e-19*R*1*(P/1e-3).*(f.^2));  % Noise power vs optical power
+ subplot(122), plot(P/1e-3, S);
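For readers without MATLAB, the same load-line calculation can be sketched in Python/NumPy. This port is not part of the original note; it uses the same circuit parameters with a slightly coarser power grid:

```python
import numpy as np

P = np.linspace(0, 100, 300) * 1e-3         # incident power in W
responsivity = 1.0                          # A/W
Ip = P * responsivity                       # unsaturated photocurrent in A
R = 500.0                                   # load resistance in Ohm
VR = 30.0                                   # reverse bias in V
V = np.concatenate([np.linspace(-50, 0, 100_000),
                    np.linspace(0, 0.7, 100_000)])  # diode voltage grid in V

Iop = np.zeros_like(P)                      # operating photocurrent in A
for k, ip in enumerate(Ip):
    I1 = -ip + 0.1e-9 * np.exp(V / 0.0259)  # diode equation shifted by the photocurrent
    I2 = -(V + VR) / R                      # load line
    pos = np.argmin(np.abs(I1 - I2))        # operating point: intersection of the curves
    Iop[k] = -I2[pos]

print(Iop.max())  # clamps near (VR + V_forward)/R, around 61 mA here
```

Below the knee the operating current tracks the photocurrent linearly; above it, the load line clamps the current, reproducing Fig. 1B qualitatively.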
301
+ References
302
+ 1. Saleh, B. E. & Teich, M. C. Fundamentals of Photonics (John Wiley & Sons, 2019).
303
+ 2. Bhattacharya, P. Semiconductor Optoelectronic Devices (Prentice-Hall, Inc., 1997).
304
+ 3. Neamen, D. A. Semiconductor Physics and Devices: Basic Principles (McGraw-hill, 2003).
305
+ 4. Sedra, A. S., Smith, K. C., Carusone, T. C. & Gaudet, V. Microelectronic Circuits, vol. 4 (Oxford University Press New
306
+ York, 2004).
307
+ 5. Thorlabs FDGA05. https://www.thorlabs.com/thorproduct.cfm?partnumber=FDGA05 (2022). Accessed: 2022-12-10.
308
+ 6. Excelitas C30641GH. https://www.excelitas.com/product/c30641gh-ingaas-pin-1mm-18 (2022). Accessed: 2022-12-10.
309
+ Acknowledgements
310
+ The author acknowledges Nicholas Rivera, Jamison Sloan, Yannick Salamin, and ChatGPT for discussions. All equipment
311
+ used in the experiments is the property of MIT.
312
+ 4/4
313
+
C9E0T4oBgHgl3EQfyQLe/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,153 @@
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf,len=152
2
+ page_content='Measuring Power with a Saturated Photodiode Shiekh Zia Uddin1,* 1Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA suddin@mit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
3
+ page_content='edu ABSTRACT Accurate measurement of optical power is pivotal in many applications and scientific research.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
4
+ page_content=' However, traditional power meters are unable to measure power levels beyond a certain saturation point, limiting their usefulness in high-power applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
5
+ page_content=' In this technical note, I discuss how optical power can be measured using a saturated photodiode.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
6
+ page_content=' I demonstrate that by monitoring both the dc photocurrent and ac noise, it is possible to accurately measure power levels beyond its saturation point.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
7
+ page_content=' Keywords: Power meter, Photodiode, Saturation, Noise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
8
+ page_content=' Introduction Optical power measurement is a critical aspect of many applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
9
+ page_content=' It is the conventional wisdom that a saturated photodiode can not be used to measure power.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
10
+ page_content=' Saturation power of photodiodes can be pushed to higher levels by applying a reverse bias voltage, however there is a limit to the amount of bias voltage due to the reverse breakdown which can be catastrophic to the diode.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
11
+ page_content=' This limitation can be problematic in high-power applications, where it is important to be able to accurately measure power levels at high speed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
12
+ page_content=' In this technical note, I discuss a method for measuring optical power using a saturated photodiode.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
13
+ page_content=' I demonstrate that the photocurrent noise decreases with power beyond saturation which can be used to accurately measure power at levels beyond its saturation point.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
14
+ page_content=' This information might be useful in photon noise measurements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
15
+ page_content=' Background If a photoevent generated at t = 0 produces an electric pulse h(t), of area e, in the external circuit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
16
+ page_content=' A photoevent generated at time t1 then produces a displaced pulse, h(t −t1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
17
+ page_content=' Dividing the time axis into incremental time intervals ∆t so that the probability p that a photoevent occurs within an interval is P = ηΦ∆t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
18
+ page_content=' The electric current i at time t is written as i(t) = ∑ l Xlh(t −l∆t), (1) where Xl assumes the value 1 with probability p, and 0 with probability 1− p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
19
+ page_content=' The variables Xl are independent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
20
+ page_content=' The mean value of Xl is E[Xl] = 0×(1− p)+1× p = p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
21
+ page_content=' Its mean-square value is E[X2 l ] = 02 ×(1− p)+12 × p = p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
22
+ page_content=' The mean of the product XlXk is p2 if l ̸= k, and p if l = k.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
23
+ page_content=' The mean and mean-square values of i(t) are now determined via i = E[i(t)] = ∑ l ph(t −l∆t), (2) E[i2(t)] = ∑ l ∑ k E[XlXk]h(t −l∆t)h(t −k∆t) (3) = ∑∑ l̸=k p2h(t −l∆t)h(t −k∆t)+∑ l ph2(t −l∆t) (4) Substituting p = ηΦ∆t, and taking the limit ∆t → 0 so that the summations become integrals, previous equations yield, respectively, E[i(t)] = ηΦ � h(t)dt, (5) E[i2(t)] = � ηΦ � h(t)dt �2 +ηΦ � h2(t)dt (6) The limits of the integration is zero to infinity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
24
+ page_content=' It follows that σ2 i = E[i2]−E[i]2 = ηΦ � h2(t)dt (7) arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
25
+ page_content='02658v1 [physics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
26
+ page_content='ins-det] 7 Jan 2023 Definition of the bandwidth B as B = 1 2e2 � ∞ 0 h2(t)dt = � ∞ 0 h2(t)dt 2( � ∞ 0 h(t)dt)2 , (8) can be readily verified by noting that the Fourier transform of h(t) is its transfer function H(v).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
27
+ page_content=' The area under h(t) is simply H(0) = e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
28
+ page_content=' In accordance with Parseval’s theorem, the area under h2(t) is equal to the area under the symmetric function |H(v)|2, so that B = � ∞ 0 ���� H(v) H(0) ���� 2 dv (9) The quantity B is therefore the power-equivalent spectral width of the function H(v) (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
29
+ page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
30
+ page_content=', the bandwidth of the device/circuit combination).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
31
+ page_content=' As an example, if H(v) = 1 for −Vc < v < Vc and 0 elsewhere, we get B = Vc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
32
+ page_content=' Using this definition of bandwidth, we get back our familiar expression for the noise in photocurrent σ2 i = 2eE[i]B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
33
+ page_content=' (10) So far this is a standard derivation of the shot noise1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
34
+ page_content=' Note that this expression hinges on the assumption that the area under h(t) is simply e, basically one photoevent cause one electrons worth of charge to flow as current.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
35
+ page_content=' However in saturation its definitely not the case and the photodiode response becomes a function of intensity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
36
+ page_content=' In the first order approximation, one can make the assumption that h(t) = f(Φ)g(t), where f(Φ) is a power dependent function that is 1 at low power and decreases at high power and area under g(t) is e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
37
+ page_content=' Then we get E[i] = ηΦ f(Φ) � g(t)dt (11) σ2 i = ηΦ f 2(Φ) � g2(t)dt, (12) which gives us the key insight that the average value of current and noise power scales differently with incident optical power.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
38
+ page_content=' If we take a ratio E[i]2 σ2 i = η2Φ2 f 2(Φ)e2 ηΦ f 2(Φ)2e2B ∝ Φ (13) we find that the signal to noise ratio (SNR) in principle is proportional to incident power despite the nonlinearity in the response.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
39
+ page_content=' Even if this proportionality does not hold exactly, with proper calibration therefore it should be possible to measure the power with a saturated photodiode by measuring both the average photocurrent and the photocurrent noise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
40
+ page_content=' Results Figure 1A shows a schematic diagram of a standard photodiode driving circuit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
41
+ page_content=' The photodiode can be modelled as a current source in parallel with a diode2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
42
+ page_content=' A reverse bias voltage is applied to set the operating point, but there is a limit to how much voltage can be applied defined by junction breakdown3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
43
+ page_content=' Such a circuit can be simulated using conventional electrical circuit theory4 (MATLAB code below), the resulting operating current with realistic circuit parameters is shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
44
+ page_content=' 1B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
45
+ page_content=' We can see that the photodiode saturates at some power and the saturation knee increases with reverse bias voltage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
46
+ page_content=' The highest measurable power is determined by the highest reverse bias voltage that can be applied across a junction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
47
+ page_content=' Below saturation the current is linear with incident power, which is expected.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
48
+ page_content=' Experimental data of a reverse biased photodiode is shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
49
+ page_content=' 2 where reverse bias is seen to increase the saturation power.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
50
+ page_content=' Before saturation the voltage is proportional to power.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
51
+ page_content=' After saturation no measurement of power is possible, which is the current paradigm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
52
+ page_content=' We can now attempt to measure power beyond saturation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
53
+ page_content=' In Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
54
+ page_content=' 3A we show the photovoltage and the photocurrent noise as a function of incident power.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
55
+ page_content=' Photocurrent noise is measured around 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
56
+ page_content='5 MHz with a spectrum analyzer (10 kHz resolution bandwidth, 1 MHz span, with preamplifier on and no attenuation, electronic noise floor is −165 dBm/Hz).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
57
+ page_content=' Below saturation photovoltage and noise increases simultaneously.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
58
+ page_content=' As the photovoltage is saturated, the photocurrent noise suddenly decreases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
59
+ page_content=' This qualitatively follows our theoretical voltage and noise shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
60
+ page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
61
+ page_content=' In Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
62
+ page_content=' 3B we show the photovoltage and SNR calculated from the experimental data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
63
+ page_content=' Beyond saturation SNR changes with incident power, which can be used to measure power after proper calibration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
64
+ page_content=' Such behaviour also holds at other noise frequencies as long as they are lower than the badngap of the photodiode and away from 1/ f noise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
65
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
66
+ page_content=' 2/4 Figure 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
67
+ page_content=' Schematic diagram of a photodiode driving circuit and simulated operating current I0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
68
+ page_content=' Figure 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
69
+ page_content=' Output voltage across the 25Ω load resistance from a Thorlabs FDGA055 InGaAs photodiode at different reverse bias excited by a 1070 nm CW laser.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
70
+ page_content=' It has a 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
71
+ page_content='95 A/W responsivity, 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
72
+ page_content='5 ns rise time and 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
73
+ page_content='5 mm active area diameter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
74
+ page_content=' Figure 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
75
+ page_content=' (A) Output voltage and noise from a Excelitas C30641GH6 InGaAs photodiode at 30 V reverse bias excited by a 1550 nm femtosecond pulsed laser.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
76
+ page_content=' It saturates around 25 mW.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
77
+ page_content=' (B) Even though photovoltage has saturated, the SNR shows response beyond the saturation power.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
78
+ page_content=' 3/4 A B 100 C 140 Current Photon Vr (V) (mA) 80 Noise (dBm/Hz) 0 g Current ( 10 60 VR (V) Load 20 160 0 R 30 40 10 Photocurrent 20 Ip Operating 30 20 Bias 0 VR 180 0 20 40 60 80 100 0 20 40 60 80 100 Incident Power (mW) Incident Power (mW)1000 Voltage (mV) 100 Vr (V) 0 5 10 15 18 20 25 30 SL 10 1 10 100 Incident Power (mWPhotovoltage (mV) B 30 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
79
+ page_content='0 135 Photovoltage (mV) 25 10 Noise (dBm/Hz) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
80
+ page_content='8 20 (norm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
81
+ page_content=') 145 50 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
82
+ page_content='6 I 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
83
+ page_content='4 SNR 155 工 5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
84
+ page_content='2 165 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
85
+ page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
86
+ page_content='0 0 10 20 30 40 50 0 10 20 30 40 50 Power (mW) Power (mw)Discussion Even when a photodiode is saturated, the information about the photon flux intensity are not completely lost and in a way encoded in the photocurrent noise, which can be practically used to measure power at high speed after proper calibration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
87
+ page_content=' Codes
+ %% Matlab code to solve for photocurrent and noise in a circuit
+ clc; clear all; close all;
+ P=linspace(0,100,1000)*1e-3;                 % Incident power in W
+ Responsivity=1;
+ Ip=P*Responsivity;                           % Expected photocurrent
+ R=500;
+ V=[linspace(-50,0,1e5),linspace(0,.7,1e5)];
+ VR=30;                                       % Reverse bias voltage
+ Iop=zeros(size(P));                          % Operating photocurrent
+ for indx=1:length(P)
+     I1=-Ip(indx)+0.1e-9*exp(V/.0259);        % Diode I-V curve shifted by the photocurrent
+     I2=-(V+VR)/R;                            % Load line
+     [~,pos]=min(abs(I1-I2));                 % Intersection gives the operating point
+     Iop(indx)=-I2(pos);
+ end
+ figure(1); subplot(121), plot(P/1e-3,Iop);   % Operating current vs power
+ f=Iop./P;                                    % Nonlinear response function
+ S=10*log10(2*1.6e-19*R*1*(P/1e-3).*(f.^2)); % Noise power vs optical power
+ subplot(122), plot(P/1e-3,S);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
113
+ page_content=' Acknowledgements The author thanks Nicholas Rivera, Jamison Sloan, Yannick Salamin, and ChatGPT for helpful discussions. All equipment used in the experiments is the property of MIT.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/C9E0T4oBgHgl3EQfyQLe/content/2301.02658v1.pdf'}
D9E0T4oBgHgl3EQfywIT/content/tmp_files/2301.02662v1.pdf.txt ADDED
@@ -0,0 +1,2540 @@
1
+ Robust knapsack ordering for a partially-informed
2
+ newsvendor with budget constraint
3
+ Guus Boonstra
4
+ Retail Consulting Department, IG&H Consultants, guus.boonstra@igh.com
5
+ Wouter J.E.C. van Eekelen
6
+ Department of Econometrics and Operations Research, Tilburg University, w.j.e.c.vaneekelen@tilburguniversity.edu
7
+ Johan S.H. van Leeuwaarden
8
+ Department of Econometrics and Operations Research, Tilburg University, j.s.h.vanleeuwaarden@tilburguniversity.edu
9
+ This paper studies the multi-item newsvendor problem with a constrained budget and information about
10
+ demand limited to its range, mean and mean absolute deviation. We consider a minimax model that deter-
11
+ mines order quantities by minimizing the expected overage and underage costs for the worst-case demand
12
+ distributions. The resulting optimization problem turns out to be solvable by a method reminiscent of the
13
+ greedy algorithm that solves the continuous knapsack problem, purchasing items in order of marginal value.
14
+ This method has lower computational complexity compared to directly solving the model and leads to a
15
+ simple policy that (i) sorts items based on their marginal effect on the total cost and (ii) determines order
16
+ quantities according to this ranking until the budget is spent.
17
+ Key words : distributionally robust optimization, multi-item newsvendor model, knapsack problem,
18
+ minimax analysis, inventory management
19
+ History : This paper was first submitted on March 8, 2022.
20
+ 1. Introduction
22
+ The newsvendor model is one of the cornerstones of inventory management, introduced by
23
+ Arrow et al. (1951) for finding the order quantity that minimizes expected costs in view
24
+ of unknown demand and the trade-off between leftovers and lost sales. The newsvendor
25
+ model finds many applications in e.g. perishable food, fashion and high-tech industries,
26
+ particularly when the total time span of production and lead times exceeds the market
27
+ lifetime of a product; see Nahmias (1982) and Fisher and Raman (1996).
28
+ Manufacturers and retailers need to decide how to employ the available budget or re-
29
+ sources when determining the optimal order quantities of different products. A budget
30
+ constraint makes the problem multidimensional—as ordering more of one item leaves less
31
+ budget for other items—and gives rise to a challenging optimization problem. Hadley and
32
+ Whitin (1963) solve this problem with Lagrangian optimization. Abdel-Malek et al. (2004)
33
+ and Lau and Lau (1996) provide alternative solution methods, Erlebacher (2000) estab-
34
+ lishes closed-form solutions for special demand distributions and Nahmias and Schmidt
35
+ arXiv:2301.02662v1 [math.OC] 5 Jan 2023
37
+
38
+ Boonstra, van Eekelen, and van Leeuwaarden: Robust knapsack ordering for a partially-informed newsvendor
40
+ (1984) develop heuristic solutions. All these works are for the full information setting,
41
+ where the demand distributions for all items are fully specified. In this paper we perform
42
+ a distribution-free analysis of the multi-item newsvendor problem with budget constraint.
43
+ This analysis does not rely on full specification of the demand distributions, but only re-
44
+ quires for each item knowledge of the mean, mean absolute deviation (MAD) and range.
45
+ Given this partial demand information, we obtain a robust ordering policy by employing
46
+ distributionally robust optimization (DRO) methods.
47
+ The newsvendor model in this paper seeks to minimize the expected costs as function
48
+ of the order quantity. The cost function depends on the order quantity, but also on the
49
+ demand, which is a random variable with some distribution. Given the demand distribu-
50
+ tion, the single-item newsvendor model finds the optimal order quantity that minimizes
51
+ the expected costs. In traditional approaches, the demand distribution is fully specified,
52
+ so that the expected costs can be calculated, and the optimal order quantity can be deter-
53
+ mined. A robust version of this problem assumes partial information, and only knows that
54
+ the demand distribution belongs to some ambiguity set that contains all distributions that
55
+ comply with this partial information. We adopt a minimax strategy that can be viewed as
56
+ a game between the newsvendor and nature: the newsvendor first picks the order quantity
57
+ after which nature chooses a demand distribution that maximizes the expected costs. The
58
+ goal then becomes to solve this minimax problem.
59
+ The way we solve this minimax problem in this paper fits in a much richer class of DRO
60
+ approaches that first calculate worst-case model performance, over the set of distributions
61
+ satisfying some partial information, and then optimize against these worst-case circum-
62
+ stances. Such DRO techniques found applications in many domains including scheduling
63
+ (Kong et al., 2013; Mak et al., 2014), portfolio optimization (Popescu, 2007; Delage and
64
+ Ye, 2010), pricing (Elmachtoub et al., 2021; Chen et al., 2022; Kleer and van Leeuwaarden,
65
+ 2022), complex networks (van Leeuwaarden and Stegehuis, 2021), and inventory manage-
66
+ ment (Scarf, 1958; Gallego, 1992; Perakis and Roels, 2008; Ben-Tal et al., 2013). A classic
67
+ distributionally robust approach is due to Scarf (1958), who considered the single-item
68
+ newsvendor problem with mean-variance demand information. Scarf was able to derive
69
+ explicit expressions for the worst-case distribution, and solved the minimax problem to
70
+ obtain the optimal order quantity. Whether a minimax problem is solvable depends on
71
+ both the function to be optimized and the choice of ambiguity set. There are many ways
72
+
73
+ to characterize a set of distributions. In DRO, one can define ambiguity by using distance-
76
+ based metrics, such as total variation or Kullback-Leibler distance. Another popular class
77
+ of ambiguity uses summary statistics. The ambiguity set studied in this paper contains all
78
+ distributions with known mean and MAD. The maximization part of the minimax problem
79
+ can then be viewed as a semi-infinite linear optimization problem with three constraints,
80
+ and an infinite number of variables (all distributions in the ambiguity set). In fact, such
81
+ minimax problems are related to generalized moment bound problems, for which general
82
+ theory says there exists an extremal distribution solving the maximization part with at
83
+ most a number of support points equal to the number of moment constraints (Rogosinski,
84
+ 1958). See Rahimian and Mehrotra (2019) for overviews of many more DRO applications
85
+ and techniques.
86
+ For the multi-item newsvendor model in this paper, we solve the multi-dimensional mini-
87
+ max problem with a random vector that describes the demand for all items. Compared with
88
+ tractable one-dimensional problems such as the single-item newsvendor model, applying
89
+ DRO techniques to such problems with multiple random variables might present consider-
90
+ able challenges in terms of computational complexity. For example, given information on
91
+ the mean and covariance of the demands, the distributionally robust multi-item newsvendor
92
+ is significantly harder to solve than its single-item counterpart (Hanasusanto et al., 2015).
93
+ However, for the multi-item newsvendor model in conjunction with mean-MAD ambiguity,
94
+ solving the minimax problem becomes tractable, and in fact has an elegant algorithmic
95
+ solution. The key insight will prove to be that the worst-case demand distribution—the
96
+ solution to the maximization part of the minimax problem—is identical for any order
97
+ quantity. As a result, the minimax problem reduces to a known-distribution optimization
98
+ problem. This known distribution is in fact, for each item, a unique three-point distribu-
99
+ tion. In turn, the minimization problem with this known (discrete) distribution can be
100
+ solved using a reduction to a knapsack problem.
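The three-point worst-case distribution mentioned here can be made concrete. A minimal Python sketch, assuming the classical mean-MAD construction (mass placed at the minimum, mean and maximum of the support); the function name and all numbers are illustrative, not taken from the paper:

```python
def three_point_worst_case(a, mu, b, mad):
    """Three-point distribution on {a, mu, b} with mean mu and MAD mad.

    Feasibility requires a < mu < b and 0 < mad <= 2*(mu - a)*(b - mu)/(b - a).
    """
    p_a = mad / (2.0 * (mu - a))   # mass at the minimal demand
    p_b = mad / (2.0 * (b - mu))   # mass at the maximal demand
    return [(a, p_a), (mu, 1.0 - p_a - p_b), (b, p_b)]

# Hypothetical item: demand in [0, 100] with mean 40 and MAD 20.
dist = three_point_worst_case(0.0, 40.0, 100.0, 20.0)
mean = sum(x * p for x, p in dist)
mad = sum(abs(x - mean) * p for x, p in dist)
```

Recomputing the mean and MAD of `dist` recovers the imposed values, so the construction indeed matches the partial demand information.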
101
+ The main contributions of this paper are as follows:
102
+ (i) Solution of minimax problem. We solve the minimax problem for mean-MAD ambi-
103
+ guity and a budget constraint. We first show that the worst-case scenarios arise when
104
+ item demands follow specific three-point distributions that comply with the partial
105
+ demand information. We minimize the associated worst-case costs to obtain a robust
106
+
107
+ ordering policy as the solution to a knapsack problem. As opposed to existing meth-
110
+ ods for the newsvendor model under full demand information, the knapsack problem
111
+ leads to an effective closed-form ordering policy, also for scenarios with many items.
112
+ As such, the present paper further develops DRO theory that uses MAD information
113
+ to formulate tractable minimax problems.
114
+ (ii) Budget consistency. The robust ordering policy only depends on the minimal, mean
115
+ and maximal demand for each item. Hence, the worst-case distributions are indepen-
116
+ dent of all other model parameters, which makes the robust ordering policy ‘budget
117
+ consistent’. When the budget is increased, the orders for the original budget remain
118
+ unaltered, while only the additional budget is further divided over the items. Such
119
+ budget consistency is useful because the optimization model needs to be solved only
120
+ once. That is, for the initial budget value the decision maker can generate an ordered
121
+ list of items as the solution to the knapsack problem, using only standard spreadsheet
122
+ software, and this solution is valid for all budget levels. In contrast, most other exact
123
+ and robust methods for the multi-item newsvendor model do not have this feature,
124
+ which means that the decision maker has to recompute the optimal policy for each
125
+ budget level.
126
+ (iii) Performance of ordering policy. Through a range of numerical examples we demon-
127
+ strate the performance of the knapsack ordering. We draw comparisons with full infor-
128
+ mation settings and other robust approaches that require partial demand information
129
+ by assessing the so-called expected value of additional information (EVAI). Overall,
130
+ the performance of the robust policy only deviates a few percent from the optimal
131
+ performance with full information availability. We also quantify the value of MAD
132
+ information by comparing the performance with the situations when only the mean
133
+ and range of demand is known, and show that MAD indeed provides crucial infor-
134
+ mation for providing good performance. In addition, we construct an ordering policy
135
+ that attains the optimal value of a matching minimin problem which, in conjunction
136
+ with the optimal value of the minimax problem, yields tight performance guarantees.
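The greedy principle behind this policy, rank items once and then spend the budget down the ranking, mirrors the textbook continuous-knapsack algorithm. A generic sketch (the value-per-cost numbers below are hypothetical placeholders, not the paper's marginal-cost ranking):

```python
def greedy_fractional_knapsack(items, budget):
    """items: list of (value_per_unit_cost, cost); returns fraction bought per item."""
    ranking = sorted(range(len(items)), key=lambda i: items[i][0], reverse=True)
    fractions = [0.0] * len(items)
    remaining = budget
    for i in ranking:
        if remaining <= 0:
            break
        spend = min(items[i][1], remaining)  # buy as much as the budget allows
        fractions[i] = spend / items[i][1]
        remaining -= spend
    return fractions

# Three hypothetical items, each costing 10, ranked by value per unit cost.
fractions = greedy_fractional_knapsack([(3.0, 10.0), (1.0, 10.0), (2.0, 10.0)], 15.0)
# -> [1.0, 0.0, 0.5]: the best item is bought fully, the leftover goes to the next.
```

Budget consistency is visible in this scheme: raising the budget only extends purchases further down the same ranking, leaving earlier decisions unchanged.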
137
+ We next discuss some related literature on the newsvendor model. Gallego and Moon
138
+ (1993) consider the multi-item newsvendor model with budget constraint when the mean
139
+ and variance of demand is known. Gallego and Moon (1993) extend the ideas in Scarf
140
+ (1958) to obtain an optimization problem that can be solved with Lagrange multiplier
141
+
142
+ Boonstra, van Eekelen, and van Leeuwaarden: Robust knapsack ordering for a partially-informed newsvendor
143
+ 5
144
+ techniques, similar to the full information setting with a known distribution. In contrast,
145
+ our minimax analysis with mean-MAD-range information yields a knapsack ordering pol-
146
+ icy that generates a sorted list and prescribes to sort items successively according to that
147
+ list, with order sizes equal to the minimal, mean or maximum demand. Other related
148
+ works that consider the multi-item newsvendor model under partial information include
149
+ Vairaktarakis (2000), who assumes only the support of demand is known, and Ardestani-
150
+ Jaafari and Delage (2016) who assume knowledge of partial moments and rephrase the
151
+ robust optimization problem as a tractable linear program. Natarajan et al. (2018) assume
152
+ knowledge of mean, variance and semivariance, for which the newsvendor model is solvable
153
+ in the single-item setting using a semi-infinite linear program, but largely intractable in
154
+ the multi-item setting. Natarajan et al. (2018) therefore consider a relaxation that gives
155
+ a semidefinite program (SDP) to find a lower bound (which is not tight). Hanasusanto
156
+ et al. (2015) consider mean and covariance knowledge. They prove that the distributionally
157
+ robust problem is NP-hard but admits a semidefinite programming formulation with an ex-
158
+ ponential number of inequalities (that grows in the number of items). Xu et al. (2018) and
159
+ Natarajan and Teo (2017) present more tractable bounds for mean-covariance information.
160
+ In the present paper we assume only marginal information is available, since covariance
161
+ information and other dependency structures are difficult to estimate, and fixing covari-
162
+ ance information often leads to difficult optimization problems with non-intuitive solutions
163
+ (policies). The knapsack ordering policy that we obtain in this paper deals with the worst-
164
+ case demand distributions among all demand distributions with a given mean, MAD and
165
+ range, not conditioning on a specific dependency structure. This approach makes the knap-
166
+ sack ordering policy robust, but also suitable for scarce-data settings, as the mean, MAD
167
+ and range are relatively easy to estimate.
168
+ Section 2 introduces the single-item model and the multi-item model with budget, under
169
+ the traditional assumption of full information about the demand distributions. In Section 3
170
+ we present our main results for the distributionally robust setting with partial information.
171
+ Section 4 presents a detailed numerical study that demonstrates the robust policies. We
172
+ present conclusions and several directions for future work in Section 5. Supplementary
173
+ material appears in the Electronic Companion (EC), including several proofs, additional
174
+ numerical experiments, and model extensions.
175
+
176
+ 2. Classical newsvendor analysis
180
+ We introduce the newsvendor model and several well-known results in Section 2.1 for the
181
+ single-item setting, and in Section 2.2 for the multi-item setting with budget constraint.
182
+ 2.1. Classical single-item setting
184
+ Consider an item with purchase price c and selling price p. The decision maker places
185
+ an order of size q. The demand for items is assumed to be the random variable D with
186
+ distribution function FD(·). Unsold items will be salvaged at the end of the period for
187
+ salvage value s per item. The mark-up m > 0 represents the profit per sold item and
188
+ satisfies p = c(1 + m) and the discount factor d > 0 captures the loss through s = (1 − d)c.
189
+ The expected costs consist of two terms: opportunity costs of lost sales and overage costs
190
+ in case of overstocking. This gives the cost function
191
+ G(q,D) = (p − c)(D − q)   if q ⩽ D,
+          (c − s)(q − D)   if q > D.                                   (1)
202
+ The case q ⩽ D amounts to lost sales and q > D results in overstocking. The objective is to
203
+ order the quantity q of items that minimizes the expected costs. Let E denote expectation,
204
+ and define µ = E[D] and x+ = max(x,0). Write the expected costs as
205
+ C(q) := E[G(q,D)] = (c − s)q + (p − s)E(D − q)+ − (c − s)µ
+                   = c [ d(q − µ) + (m + d)E(D − q)+ ].                (2)
210
+ To keep notation simple (and without loss of generality) set c = 1. Then, the optimal order
211
+ quantity
212
+ q∗ = argmin_{q⩾0} C(q) ≡ argmin_{q⩾0} { dq + (m + d)E(D − q)+ },     (3)
+ is given by
+ q∗ = inf { q : F(q) ⩾ m/(m + d) }.                                   (4)
227
+ A proof of (4) is provided in most standard textbooks on inventory management; see e.g.
228
+ Hadley and Whitin (1963); Silver et al. (1998); Nahmias (2009).
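For a concrete instance of (4) (with c = 1 as above), a short Python sketch with exponentially distributed demand; the exponential assumption and all parameter values are chosen purely for illustration:

```python
import math

def critical_fractile_order(m, d, quantile):
    """q* from (4): the m/(m+d)-quantile of the demand distribution F."""
    return quantile(m / (m + d))

# Exponential demand with mean mu has quantile F^{-1}(u) = -mu * ln(1 - u).
mu, m, d = 100.0, 0.5, 0.25
q_star = critical_fractile_order(m, d, lambda u: -mu * math.log(1.0 - u))
# For this instance q* = mu * ln((m + d)/d) = 100 * ln(3) ≈ 109.86.
```

Plugging the result into C(q) = dq + (m + d)E(D − q)+, which for exponential demand equals dq + (m + d)µe^{−q/µ}, and perturbing q confirms numerically that q* minimizes the expected costs.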
229
+ 2.2. Multi-item setting
231
+ Consider n different items and order qi units for item i for a given period where i = 1,...,n.
232
+ For item i, the unit purchasing and selling price are ci and pi respectively. Possible leftovers
233
+ will be salvaged at the end of the period for unit salvage value si. We define the model
234
+
235
+ in terms of the mark-up mi > 0 and discount factor di > 0. The mark-up represents the
238
+ profit per sold unit and the discount factor the loss, i.e. pi = ci(1 + mi) and si = (1 − di)ci.
239
+ The random demand for item i in one period is represented by the nonnegative random
240
+ variable Di, distributed according to Fi(·).
241
+ As in the single-item setting, we minimize the expected costs. Define the multi-item cost
242
+ function as
243
+ G(q,D) :=
244
+ n
245
+
246
+ i=1
247
+ ci
248
+
249
+ di(qi − Di) + (mi + di)(Di − qi)+�
250
+ .
251
+ (5)
252
We also introduce the budget constraint Σ_{i=1}^n ciqi ⩽ B, with B the available budget. The multi-item newsvendor model, with decision vector q = (q1,...,qn), is then given by

min_q C(q) := E[G(q,D)] = Σ_{i=1}^n ci[di(qi − µi) + (mi + di)E(Di − qi)+]
s.t. Σ_{i=1}^n ciqi ⩽ B,  qi ⩾ 0,  i = 1,...,n.   (6)

Its solution, referred to as the optimal ordering policy, will be denoted by q∗. In the single-item setting the purchase costs had no influence on the objective function, but in the multi-item setting the optimal order quantity is affected by ci. It is well known that model (3) is a convex optimization problem. In (6) we take the summation over n convex functions, which preserves convexity. Moreover, the constraints form a convex set, so that (6) is a convex optimization problem (Boyd and Vandenberghe, 2004).
278
3. Proposed robust approach
Section 3.1 presents the robust ordering policy for the single-item setting. This result serves as a building block for the robust analysis of the multi-item setting in Section 3.2, which describes the optimal policy as the solution of a linear program (LP). In Section 3.3 we show that this LP can be viewed as a knapsack problem. All these results are based on a tight upper bound for the cost function. In Section 3.4 we derive a matching tight lower bound for the cost function.
286
3.1. Distribution-free ordering policy for single item
Let P denote a probability distribution, and write EP for E to emphasize that the expectation is taken with respect to the distribution P of D. The MAD for random demand D is defined as δ := EP|D − µ|, where µ is the expected value of D. Similar to the variance, the MAD is a measure of dispersion or variability. We mention several properties of MAD in EC.2. For the random variable D with mean µ, MAD δ, and (bounded) support [a,b], where 0 ⩽ a ⩽ b < ∞, the mean-MAD ambiguity set is defined as

P(µ,δ) := {P | EP[D] = µ, EP|D − µ| = δ, supp(D) ⊆ [a,b]}.

We thus assume that the ‘true’ distribution ˜P of the random demand D is contained in this ambiguity set, that is, ˜P ∈ P(µ,δ).
300
To obtain the robust order quantity, we solve

min_q max_{P∈P(µ,δ)} dq + (m + d)EP(D − q)+,

for which we first consider max_{P∈P(µ,δ)} EP(D − q)+. To characterize this tight bound, we apply a general upper bound for convex functions of a random variable by Ben-Tal and Hochman (1972). To make this paper self-contained, we provide a proof of the following result in EC.1.

Lemma 1. The extremal distribution that solves max_{P∈P(µ,δ)} EP(D − q)+ is a three-point distribution on the values a, µ and b that does not depend on q.
313
From the proof of Lemma 1, it follows that the worst-case probability distribution of D, the extremal distribution that solves max_{P∈P(µ,δ)} EP(D − q)+, is a three-point distribution defined as

P(D = a) = δ/(2(µ − a)),
P(D = µ) = 1 − δ/(2(µ − a)) − δ/(2(b − µ)),
P(D = b) = δ/(2(b − µ)).   (7)

Applying this worst-case distribution, the robust order quantity follows from solving qU = argmin_q CU(q) with

CU(q) := d(q − µ) + (δ(m + d)/(2(µ − a)))(a − q)+
         + (m + d)(1 − δ/(2(µ − a)) − δ/(2(b − µ)))(µ − q)+
         + (δ(m + d)/(2(b − µ)))(b − q)+.   (8)
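For illustration (with parameter values we choose ourselves), the worst-case three-point distribution (7) and the resulting bound (8) can be evaluated directly; the check at the end verifies the touching property CU(µ) = δ(m + d)/2 discussed below:

```python
# Sketch (our own illustration): worst-case three-point distribution (7) and
# the resulting upper bound C^U(q) from (8), for assumed parameters.
a, b, mu, delta = 0.0, 1.0, 0.5, 0.25   # support, mean, MAD (hypothetical)
m, d = 1.0, 0.8                          # mark-up and discount factor

p_a = delta / (2 * (mu - a))
p_b = delta / (2 * (b - mu))
p_mu = 1.0 - p_a - p_b
assert min(p_a, p_mu, p_b) >= 0          # requires delta <= 2(mu-a)(b-mu)/(b-a)

def C_upper(q):
    # Expected cost d(q - mu) + (m + d) E(D - q)^+ under the three-point law.
    exp_short = sum(p * max(x - q, 0.0)
                    for p, x in [(p_a, a), (p_mu, mu), (p_b, b)])
    return d * (q - mu) + (m + d) * exp_short

# The bound touches the 'true' cost at q = mu: C^U(mu) = delta (m + d) / 2.
print(abs(C_upper(mu) - delta * (m + d) / 2) < 1e-12)   # True
```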
357
To illustrate the mean-MAD bound and robust order quantity qU, consider an example in which D is distributed according to a beta distribution with both shape parameters set to 1. For a general beta distribution, a = 0 and b = 1. In Figure 1a, we have m = 1 and d = 0.8. This leads to qU = µ. In Figure 1b, the mark-up increases to m = 3. In this case the mean-MAD order quantity increases to qU = b.

[Figure 1. Mean-MAD and mean-variance bounds and corresponding ordering policies, showing expected costs versus order quantity for (a) m = 1 and (b) m = 3. The upper curve corresponds to the mean-variance upper bound that follows from P(1/2,1/12). The middle curve depicts the mean-MAD upper bound. The ‘true’ cost function assumes that D follows a beta distribution with both shape parameters equal to 1 (the lower curve).]

When computing this upper bound, observe that the mean-MAD bound touches the ‘true’ cost function in the points a, µ and b. This property actually holds in general. Clearly, for q = a or b, it holds that CU(q) = C(q). When q = µ, the cost function equals

C(µ) = d(µ − µ) + (m + d)E(D − µ)+ = δ(m + d)/2 = CU(µ),

since E(D − µ)+ = E|D − µ|/2.
415
By analyzing (8) one can obtain an explicit ordering rule for qU. The objective function of (8) is composed of piecewise linear functions. By exploiting this structure, we can construct an explicit ordering policy. For scalars α1,...,αm, ν1,...,νm ∈ R, f(x) = max_{i=1,...,m}{αix + νi} denotes a convex, piecewise linear function. The function CU(q) in (8) admits a representation of the form

CU(q) = d(q − µ) + (m + d)E(D − q) = m(µ − q) =: f0(q),

for q ∈ [0,a), and

CU(q) = d(q − µ) + (m + d)(1 − δ/(2(µ − a)) − δ/(2(b − µ)))(µ − q) + (δ(m + d)/(2(b − µ)))(b − q)
      = q(δ(m + d)/(2(µ − a)) − m) + ν1 =: f1(q),

for q ∈ [a,µ), where ν1 is some constant value. For q ∈ [a,µ), the mean-MAD objective function is defined by the linear function f1(q). For the interval q ∈ [µ,b], we obtain

CU(q) = d(q − µ) + (δ(m + d)/(2(b − µ)))(b − q) = q(d − δ(m + d)/(2(b − µ))) + ν2 =: f2(q)

for some constant ν2. The cost function is thus the pointwise maximum of the three linear functions f0(q), f1(q) and f2(q):

CU(q) = max{f0(q), f1(q), f2(q)}.

Since CU(q) = max_{j=0,1,2}{αjq + νj} is a convex function, it holds that α0 ⩽ α1 ⩽ α2. Since we assume that m > 0, we know that α0 < 0. Therefore, from the derivatives α1, α2 of CU(q), we can derive an explicit order quantity by examining for which linear piece the slope turns positive. This allows us to state Theorem 1.
453
Theorem 1 (Mean-MAD order quantity). The robust order quantity qU ∈ argmin_q CU(q) is given by
(a) If m < δd/(2(µ − a) − δ), then qU = a.
(b) If δd/(2(µ − a) − δ) < m < d(2(b − µ) − δ)/δ, then qU = µ.
(c) If d(2(b − µ) − δ)/δ < m, then qU = b.
(d) If m = δd/(2(µ − a) − δ) or m = d(2(b − µ) − δ)/δ, then qU ∈ [a,µ] and qU ∈ [µ,b], respectively.

According to Theorem 1, the robust order quantity qU for mean-MAD-range information consists of three predictable values (minimal, mean, maximum demand) that do not depend on the mark-up m and discount factor d, whereas the conditions that dictate how much to order do depend on them (in addition to the demand mean, MAD and range).
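As a sanity check (our own, not part of the paper), the closed-form rule of Theorem 1 can be compared against a direct grid minimization of CU(q), using the Figure 1 setting with δ = 0.25 (the MAD of the Beta(1,1), i.e. uniform, distribution):

```python
# Sanity check (ours): the closed-form rule of Theorem 1 against a grid
# minimization of C^U(q), using a = 0, b = 1, d = 0.8 and delta = 0.25.
a, b, mu, delta, d = 0.0, 1.0, 0.5, 0.25, 0.8

def q_robust(m):
    lo = delta * d / (2 * (mu - a) - delta)   # below this mark-up: q^U = a
    hi = d * (2 * (b - mu) - delta) / delta   # above this mark-up: q^U = b
    return a if m < lo else (b if m > hi else mu)

def C_upper(q, m):
    p_a = delta / (2 * (mu - a)); p_b = delta / (2 * (b - mu))
    p_mu = 1.0 - p_a - p_b
    es = p_a * max(a - q, 0) + p_mu * max(mu - q, 0) + p_b * max(b - q, 0)
    return d * (q - mu) + (m + d) * es

for m in (1.0, 3.0):                           # the two cases of Figure 1
    grid = [i / 1000 for i in range(1001)]
    q_num = min(grid, key=lambda q: C_upper(q, m))
    print(m, q_robust(m), q_num)               # rule and grid minimizer agree
```

For m = 1 both give qU = µ = 0.5, and for m = 3 both give qU = b = 1, consistent with Figures 1a and 1b.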
481
3.2. Multiple items and budget constraint
A distribution-free analysis of the multi-item model requires a multivariate ambiguity set. As in the single-item case, the partial information is the mean µi, MAD δi and support supp(Di) = [ai,bi] for each random variable Di, i = 1,...,n. The mean-MAD ambiguity set is defined as

P(µ,δ) := {P | EP(Di) = µi, EP|Di − µi| = δi, supp(Di) ⊆ [ai,bi], ∀i}.   (9)

We henceforth assume that the distribution of the vector of random variables D = (D1,...,Dn) belongs to this ambiguity set, i.e., P ∈ P(µ,δ). Since the objective function in (6) is separable, one can apply the single-item bound to each term E(Di − qi)+ in the summation individually. The following result, for the multi-item problem, is then a direct consequence of Lemma 1.
497
Lemma 2. The extremal distribution that solves max_{P∈P(µ,δ)} EP[G(q,D)] consists for each Di of a three-point distribution with values ξ1(i) = ai, ξ2(i) = µi, ξ3(i) = bi and probabilities

p1(i) = δi/(2(µi − ai)),  p2(i) = 1 − δi/(2(µi − ai)) − δi/(2(bi − µi)),  p3(i) = δi/(2(bi − µi)).   (10)
518
For the multi-item newsvendor model based on mean-MAD ambiguity, we use Lemma 2 to solve the maximization part of

min_{q: Σi ciqi ⩽ B, qi ⩾ 0} max_{P∈P(µ,δ)} EP[Σ_{i=1}^n cidi(qi − µi) + ci(mi + di)(Di − qi)+],   (11)

and obtain

min_q Σ_{i=1}^n ci[di(qi − µi) + (mi + di)(p1(i)(ai − qi)+ + p2(i)(µi − qi)+ + p3(i)(bi − qi)+)]
s.t. Σ_{i=1}^n ciqi ⩽ B,  qi ⩾ 0,  i = 1,...,n.   (12)
552
+ (12)
553
+ The objective function of (12) has a piecewise linear structure. Moreover, because of this
554
+ result and since the constraints are linear, (12) can be cast as a linear program (LP). In
555
+ particular, as explained below, the robust ordering policy qU can be found by solving
556
+ min
557
+ q
558
+ n
559
+
560
+ i=1
561
+ max
562
+ j=0,1,2{αi,jqi + νi,j}
563
+ s.t.
564
+ n
565
+
566
+ i=1
567
+ ciqi ⩽ B,
568
+ qi ⩾ 0,
569
+ i = 1,...,n,
570
+ (13)
571
+ where
572
+ αi,0 = −cimi,
573
+ νi,0 = cimiµi,
574
+ αi,1 = ci
575
+ �δi(mi + di)
576
+ 2(µi − ai) − mi
577
+
578
+ ,
579
+ νi,1 = ci(mi + di)
580
+
581
+ µi −
582
+ δiai
583
+ 2(µi − ai)
584
+
585
+ − cidiµi,
586
+ αi,2 = ci
587
+
588
+ di − δi(mi + di)
589
+ 2(bi − µi)
590
+
591
+ ,
592
+ νi,2 = ciδi(mi + di)bi
593
+ 2(bi − µi)
594
+ − cidiµi,
595
+ for i = 1,...,n.
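The coefficients above can be checked numerically. The following sketch (our own, with made-up single-item data) verifies that the piecewise-linear form used in (13) reproduces the objective term of (12):

```python
# Numerical sanity check (ours): for one item, the piecewise-linear objective
# max_j {alpha_j q + nu_j} of (13) coincides with the inner expression of (12).
a, b, mu, delta = 10.0, 50.0, 30.0, 10.0   # hypothetical item data
c, m, d = 2.0, 1.5, 0.5

p1 = delta / (2 * (mu - a)); p3 = delta / (2 * (b - mu)); p2 = 1.0 - p1 - p3

alpha = [-c * m,
         c * (delta * (m + d) / (2 * (mu - a)) - m),
         c * (d - delta * (m + d) / (2 * (b - mu)))]
nu = [c * m * mu,
      c * (m + d) * (mu - delta * a / (2 * (mu - a))) - c * d * mu,
      c * delta * (m + d) * b / (2 * (b - mu)) - c * d * mu]

def obj12(q):   # objective term of (12) for a single item
    es = p1 * max(a - q, 0) + p2 * max(mu - q, 0) + p3 * max(b - q, 0)
    return c * (d * (q - mu) + (m + d) * es)

def obj13(q):   # piecewise-linear form used in the LP (13)
    return max(al * q + n for al, n in zip(alpha, nu))

print(all(abs(obj12(q) - obj13(q)) < 1e-9 for q in range(0, 51)))  # True
```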
596
Let fi,j(x) = αi,jx + νi,j for i = 1,...,n and j = 0,1,2. From the single-item case, we know that the objective, for each item i, can be written as max_{j=0,1,2}{fi,j(qi)} with αi,0 ⩽ αi,1 ⩽ αi,2, and thus the objective functions of (12) and (13) are equal, which makes the two models equivalent. Since we know from linear programming theory that convex, piecewise linear objective functions can be written as linear constraints, problem (13) admits an LP representation (Boyd and Vandenberghe, 2004).
605
3.3. Knapsack algorithm
It turns out that problem (13) is intimately related to the continuous knapsack problem, thus making available efficient sorting-based algorithms to solve (13). We next describe an efficient algorithm that determines the robust ordering policy.
Define the linear function fi,j for each item i, and let αi,j represent its derivative with respect to qi, for items i = 1,...,n and linear pieces j = 0,1,2. That is,

dfi,j(qi)/dqi = αi,j.

For each item i, fi,0, fi,1 and fi,2 represent the marginal effect on the value of (13) when we increase qi to ai, µi and bi, respectively. The parameter αi,j represents the slope of these linear functions, and an order quantity is increased only when αi,j < 0, because otherwise it will not reduce the expected costs. We consecutively allocate budget to the item that causes the largest relative decrease in expected costs; that is, the item k with the smallest negative derivative αk,j relative to its cost ck. Define the set of all items as N = {1,...,n}. Since only order quantities that decrease the expected costs are considered, define the ordered set

G := {(i,j) | αi,j < 0, i ∈ N, j ∈ {0,1,2}},   (14)

where the ordering is determined according to the value of αi,j/ci. For m = |G|, this ordering is represented by the sequence (i1,j1),...,(im,jm), for which it holds that αi1,j1/ci1 ⩽ ··· ⩽ αim,jm/cim. Here G contains tuples (i,j) for which i represents an item in the newsvendor model and j a linear piece of the piecewise function. As these functions are convex, the linear pieces appear for each item i in increasing order in the set G. We can now state the knapsack algorithm for the distribution-free multi-item newsvendor model.
632
Algorithm 1 (Knapsack algorithm). For a budget level B ⩾ 0, the ordering policy qU is found by the following procedure:
(i) Initialize by setting q = (0,...,0), and construct G. Continue to (ii).
(ii) Select the first element (i,j) ∈ G. If the set G is empty, the optimal solution is qU = q. Otherwise, continue to (iii).
(iii) If j = 0, set qi = ai. If j = 1, set qi = µi. If j = 2, set qi = bi. Continue to (iv).
(iv) Determine whether the budget constraint Σ_{i=1}^n ciqi ⩽ B is violated. If so, set qi such that ciqi = B − Σ_{k∈N, k≠i} ckqk, and the optimal solution is qU = q. Otherwise, remove element (i,j) from G and return to step (ii).
646
This algorithm yields an optimal solution to (13), as asserted in the following theorem.

Theorem 2 (Knapsack ordering policy). The robust ordering policy qU that solves the multi-item newsvendor model (13) is determined by Algorithm 1.

Proof. To prove that this algorithm produces an optimal solution, we construct a continuous knapsack problem that solves (13). In the following, (ik,jk) corresponds to the kth entry of the ordered sequence of items in G. Define the following auxiliary model:

min_x Σ_{k=1}^m pkxk
s.t. Σ_{k=1}^m ckxk ⩽ B,  0 ⩽ xk ⩽ uk,  ∀k = 1,...,m,   (15)

where

uk = aik for jk = 0,  uk = µik − aik for jk = 1,  uk = bik − µik for jk = 2,

and pk = αik,jk and ck = cik. From the order of the sequence, it follows that p1/c1 ⩽ ... ⩽ pm/cm. Assume that (x∗1,...,x∗m) is an optimal solution to optimization problem (15). For i ∈ N, let qU_i = Σ_{k=1,...,m: ik=i} x∗k. Since αi,0 ⩽ αi,1 ⩽ αi,2, the pieces jk appear in G in increasing order for each item i. Thus, in an optimal solution, the upper bound uk is only attained if the upper bound of its predecessor for the same item is also attained. By construction, qU is feasible for (13). Moreover, the objective values of problems (13) and (15) only differ by a constant term, so both problems have the same optimal solution. For the continuous knapsack problem, a greedy allocation produces an optimal solution (see EC.3). Hence, qU = (qU_1,...,qU_n) is optimal for (13). □
697
Theorem 2 shows that there exists a ranking for the selection of items. Take an initial budget B = 0. If we increase the budget B by some small value, we first increase the order quantity qi to ai for the item i with the highest mark-up mi. This makes sense intuitively because the product with the highest mark-up is most profitable and, since qi < ai, we have no risk of overstocking. We successively select the items with the greatest marginal benefit αi,j/ci, and increase the order quantity consecutively to either ai, µi or bi. This procedure continues until we have spent the entire budget, or reached the uncapacitated optimum. Items that are ordered in the beginning of this procedure have the largest impact on the decrease in costs for the multi-item newsvendor model.
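The allocation steps above can be sketched compactly. The following implementation (our own, using the coefficient formulas of (13); the two-item data are hypothetical) processes the sorted slopes incrementally, which is equivalent to Algorithm 1 because the pieces of each item appear in increasing order:

```python
# Sketch implementation of Algorithm 1 (knapsack ordering) for mean-MAD data.
def knapsack_policy(a, b, mu, delta, c, m, d, B):
    n = len(mu)
    q = [0.0] * n
    entries = []                      # (slope per unit cost, item, increment)
    for i in range(n):
        alpha = [                     # slopes alpha_{i,j} from (13)
            -c[i] * m[i],
            c[i] * (delta[i] * (m[i] + d[i]) / (2 * (mu[i] - a[i])) - m[i]),
            c[i] * (d[i] - delta[i] * (m[i] + d[i]) / (2 * (b[i] - mu[i]))),
        ]
        inc = [a[i], mu[i] - a[i], b[i] - mu[i]]   # raise q_i to a_i, mu_i, b_i
        for j in range(3):
            if alpha[j] < 0:          # only cost-decreasing pieces enter G
                entries.append((alpha[j] / c[i], i, inc[j]))
    entries.sort()                    # most negative slope per unit cost first
    budget = B
    for _, i, inc in entries:
        step = min(inc, budget / c[i])
        q[i] += step
        budget -= step * c[i]
        if budget <= 1e-12:
            break
    return q

# Two identical items, except item 1 has a higher mark-up (hypothetical data).
q = knapsack_policy(a=[10, 10], b=[50, 50], mu=[30, 30], delta=[10, 10],
                    c=[1, 1], m=[3, 1], d=[1, 1], B=45)
print(q)                              # [30.0, 15.0]
```

With budget 45, the algorithm first raises the high-mark-up item to its mean (30) and then spends the remaining budget on item 2.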
708
As the main complexity of the knapsack algorithm in Theorem 2 stems from sorting the set G, the greedy approach is of computational complexity O(n log n). Moreover, the solution can be found in O(n) time by first identifying the critical element (is,js) that will violate the budget constraint, as proposed by Balas and Zemel (1980) for the continuous knapsack problem. One then compares each αi,j/ci with the ratio of the critical element to determine the optimal allocation of budget to the items. The optimal solution can also be found through the LP (13), which we solve with the simplex method. We remark that a single iteration of the simplex method takes O(n²) arithmetic operations (Illés and Terlaky, 2002), which exceeds the time requirement of the knapsack algorithm.
717
3.4. A matching lower bound
The robust analysis so far was based on finding a tight upper bound on the cost function when we know the mean, MAD and range of the demand distributions. When additional information is available, we can also construct a matching lower bound. We include the skewness information βi = P(Di ⩾ µi) in the mean-MAD ambiguity set to obtain the tight lower bound. For the random variables D = (D1,...,Dn), define the ambiguity set as

P(µ,δ,β) := {P | P ∈ P(µ,δ), P(Di ⩾ µi) = βi, i = 1,...,n},

with P(µ,δ,β) ⊆ P(µ,δ). The proof of the following result is identical to that of Lemma 2, but now uses the tight lower bound for a convex function of random variables discussed in Ben-Tal and Hochman (1972). To make this paper self-contained, a proof for the univariate case is provided in EC.1. This is sufficient since the univariate result can be applied to each term of the summation in G(q,D) separately, as with Lemma 2.
730
Lemma 3. The extremal distribution that solves min_{P∈P(µ,δ,β)} EP[G(q,D)] consists for each Di of a two-point distribution with values µi + δi/(2βi) and µi − δi/(2(1 − βi)), attained with probabilities βi and 1 − βi, respectively.
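A quick numerical check (our own, with hypothetical item data) confirms that the two-point law of Lemma 3 matches the prescribed mean, MAD and skewness β = P(D ⩾ µ):

```python
# Quick check (ours): the two-point distribution of Lemma 3 reproduces the
# prescribed mean, MAD, and skewness beta = P(D >= mu).
mu, delta, beta = 30.0, 10.0, 0.4        # hypothetical item data
x_hi = mu + delta / (2 * beta)           # taken with probability beta
x_lo = mu - delta / (2 * (1 - beta))     # taken with probability 1 - beta

mean = beta * x_hi + (1 - beta) * x_lo
mad = beta * abs(x_hi - mu) + (1 - beta) * abs(x_lo - mu)
prob_ge_mu = beta                        # only x_hi lies (weakly) above mu
print(round(mean, 9), round(mad, 9))     # 30.0 10.0
```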
741
Using this result, we obtain

min_q CL(q) := Σ_{i=1}^n ci[di(qi − µi) + (mi + di)(βi(µi + δi/(2βi) − qi)+ + (1 − βi)(µi − δi/(2(1 − βi)) − qi)+)]
s.t. Σ_{i=1}^n ciqi ⩽ B,  qi ⩾ 0,  for i = 1,...,n,   (16)

as a model to provide a lower bound for the multi-item newsvendor. As the objective function in problem (16) also consists of piecewise linear functions, there exists an LP representation and knapsack algorithm for (16) similar to the results for problem (12). We can now solve (13) and (16) to obtain tight performance intervals for the multi-item newsvendor model, using recent DRO results (see EC.4 and Postek et al., 2018). For all feasible ordering policies q and P ∈ P(µ,δ,β), it holds that

C(q) ∈ [CL(q), CU(q)].

In addition, for the optimal solutions to the newsvendor problem and its distributionally robust counterparts,

C(q∗) ∈ [CL(qL), CU(qU)].

One can find the tightest upper and lower bounds, based on mean-MAD ambiguity, for the multi-item newsvendor model by calculating the optimal solutions to models (12) and (16), respectively.
787
4. Numerical examples of robust ordering
We will now illustrate and visualize the robust ordering policies. To demonstrate the ‘budget-consistency’ property, Section 4.1 applies the knapsack algorithm for a setting where the budget is increased. In Section 4.2 we contrast the performance of the knapsack policy for partial demand information against that of the optimal solution for the full information setting. Our code is made available in the form of an online supplement.
794
4.1. Numerical illustration of the ‘budget-consistency’ property
We illustrate the knapsack algorithm and the process of allocating budget to different order quantities for items in the newsvendor model. Consider n = 5 identically distributed items with support a = 10, b = 50 and mean µ = 30. From Figure 2, we can infer that item 1 is the most profitable. Low budget levels are allocated to this item such that we obtain q1 = µ. Item number 3 is the last item to which the budget is allocated. Hence, it is the least profitable item. Table 1 displays the ordered set G. From this table, we can indeed infer that item 1 has the smallest value for αi,0/ci and therefore is increased first.
806
[Figure 2. Development of the order quantities when the budget increases according to the knapsack algorithm (order quantities of items 1-5 versus budget levels 0-300).]

Table 1: αi,j/ci and corresponding information of the ordered set G

G:               1     2     3     4     5     6     7     8     9     10    11    12    13    14    15
αi,j/ci:        -0.92 -0.75 -0.72 -0.49 -0.30 -0.15 -0.10 -0.08 -0.03 -0.01  0.14  0.42  0.45  0.70  0.70
Function piece:  0     1     0     1     0     0     1     2     1     0     1     2     2     2     2
Item:            1     1     2     2     4     5     4     1     5     3     3     5     2     4     3
880
Figure 2 nicely illustrates that when the budget is increased, the orders for the original budget remain unaltered, while only the additional budget is further divided over the items. To further illustrate the ‘budget-consistency’ property, consider the multi-item newsvendor model with n = 2, m2 = 2, the remaining cost parameters equal to 1, and demand identically distributed according to a symmetric triangular distribution supported on [10,50]. In Figure 3 we plot the expected costs and order quantities for various budget levels.
886
Figure 3a contains the allocation between both order quantities. For low budget values, one first increases the order quantity of item one, the most profitable item. Figure 3b shows the upper bound (12) and lower bound (16) that together lead to a tight performance interval for the expected costs.
893
For the sake of comparison, we also show results for the partial demand information setting considered in Gallego and Moon (1993), assuming that the mean and variance of demands are known; see EC.5 for more details. The results of Gallego and Moon (1993) depend (non-trivially) on all model parameters, including the budget B. This lack of budget-consistency forces the decision maker to solve an optimization problem, see (EC.13), for each budget level separately, and explains the smooth curve in Figure 3a. In contrast, our knapsack algorithm generates a sorted ordering list that does not depend on B, and prescribes to sort items successively according to that list, with order sizes equal to the minimal, mean or maximum demand.
902
[Figure 3. Mean-variance and mean-MAD bounds and ordering policies for the newsvendor model: (a) ordering policy (optimal order quantity, mean-MAD policy and mean-variance policy), (b) newsvendor costs versus budget. The mean-variance curves are obtained through solving (EC.13). The mean-MAD policy corresponds to the optimal solution of (12). The mean-MAD upper and lower bounds correspond to the extremal three- and two-point distributions, respectively. The ‘true’ cost function assumes that D follows a symmetric triangular distribution on [10,50].]
952
We emphasize that these results are not meant to numerically compare the mean-MAD and mean-variance policies, because the displayed differences merely express different ways of dealing with ambiguity. Indeed, it is hard to compare both policies, as the respective ambiguity sets can contain vastly different distributions. For instance, a finite variance excludes distributions with an infinite second moment, while a finite MAD does not. For our purposes, MAD and variance are equally adequate descriptors of dispersion, and both are easily calibrated on data using basic statistical estimators. The crucial difference in the DRO context of this paper is that MAD leads to a simple, budget-consistent ordering policy.
964
4.2. Expected value of additional information
We introduce as performance measure the expected value of additional information (EVAI), defined as

EVAI(qU_B) = (C(qU_B) − C(q∗_B)) / C(q∗_B),

where qU_B is the robust ordering policy and q∗_B is the optimal ordering policy when the joint demand distribution is known. We let B run from 0 to Σ_{i=1}^n q∗_i =: Bopt, and consider nine different demand distributions, listed in Table 2.
982
Table 2: Nine distributions used for multi-item performance analysis

Case 1: Uniform[10,50]     Case 4: Beta(1,3) on [0,50]    Case 7: Triangular(10,50,18)
Case 2: Uniform[10,100]    Case 5: Beta(2,2) on [0,50]    Case 8: Triangular(10,50,30)
Case 3: Uniform[10,200]    Case 6: Beta(3,1) on [0,50]    Case 9: Triangular(10,50,42)
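An EVAI-style computation can be illustrated in a stripped-down single-item setting without a budget constraint (our own example, not one of the paper's experiments), comparing the expected cost of the robust quantity from Theorem 1 with the full-information optimum for Uniform[10,50] demand:

```python
# Illustrative EVAI-style computation (ours, single item, no budget):
# robust quantity q^U from Theorem 1 versus the full-information optimum q*.
a, b = 10.0, 50.0
mu, delta = 30.0, 10.0        # mean and MAD of Uniform[10, 50]
m, d = 3.5, 1.0               # here m > d(2(b - mu) - delta)/delta = 3

def C(q):                     # d(q - mu) + (m + d) E(D - q)^+ for uniform D
    tail = (b - q) ** 2 / (2 * (b - a)) if q < b else 0.0
    return d * (q - mu) + (m + d) * tail

q_opt = a + (b - a) * m / (m + d)   # critical-fractile solution of (4)
q_rob = b                            # Theorem 1, case (c)
evai = (C(q_rob) - C(q_opt)) / C(q_opt)
print(round(evai, 6))                # 0.285714, i.e. exactly 2/7
```

So ignoring the distributional shape costs about 29% extra in expectation here; the multi-item experiments below report the same ratio across budget levels.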
1005
We consider n = 25 items. For each item i, let ci = di = 1 and assume identically distributed demand. For example, in Case 2 the demand Di for each item i follows the uniform distribution with parameters ai = 10 and bi = 100. Table 3 provides an overview of the mark-ups, representing low, average and high margins.
For the low margin regime, Figure 4 shows results for each of the nine cases, for both the robust ordering policy with mean-MAD-range information and for the policy that uses the additional information βi = P(Di ⩾ µi). For the former, the worst performance over all nine cases has a maximum deviation of approximately 23% compared to the optimal order quantity q∗_B. Overall, the performance of the robust policy only deviates a few percent from the optimal performance with full information availability. For the uniformly distributed cases (Cases 1-3), the performance decreases when the range increases. For beta distributed demand (Cases 4-6), right-tailed distributions perform worse than left-tailed distributions. This effect is also observed for the triangular distributions (Cases 7-9). The policy with additional information βi = P(Di ⩾ µi) performs somewhat better in most cases.
1020
Table 3: Mark-up values for all 25 items in the newsvendor model

Mark-up:         m1    m2    m3    m4    m5    m6    m7    m8    m9    m10   m11   m12   m13
Low margin:      0.10  0.14  0.18  0.21  0.25  0.29  0.33  0.36  0.40  0.44  0.48  0.51  0.55
Average margin:  1.00  1.13  1.25  1.38  1.50  1.63  1.75  1.88  2.00  2.13  2.25  2.38  2.50
High margin:     4.00  4.21  4.42  4.63  4.83  5.04  5.25  5.46  5.67  5.88  6.08  6.29  6.50

Mark-up:         m14   m15   m16   m17   m18   m19   m20   m21   m22   m23   m24   m25
Low margin:      0.59  0.63  0.66  0.70  0.74  0.78  0.81  0.85  0.89  0.93  0.96  1.00
Average margin:  2.63  2.75  2.88  3.00  3.13  3.25  3.38  3.50  3.63  3.75  3.88  4.00
High margin:     6.71  6.92  7.12  7.33  7.54  7.75  7.96  8.17  8.37  8.58  8.79  9.00
1083
Figure 5 shows similar results for high margins. The EVAI for the robust policy remains mostly below 10% for lower budget levels, but starts increasing rapidly when the budget approaches Bopt (i.e., when approaching the unconstrained model). When the budget is less restrictive, additional distributional information provides substantial value. In particular, since the policy uses skewness information βi, it performs better (in expectation) for higher budget levels than the robust ordering policy. We present some more performance plots for the average margin setting and additional numerical experiments with mean-variance information in EC.6.
We next quantify the value of MAD information by comparing the performance with the situation when only the mean and range of demand are known. For the low margin setting, Figure 6 shows the EVAI for the ordering policy with only mean-range information. Like the mean-MAD policy, this policy follows from a discrete distribution, in this case the extremal distribution on {a,b} with probabilities (b − µ)/(b − a) and (µ − a)/(b − a) that attains the Edmundson-Madansky bound (see Ben-Tal and Hochman, 1972). That is, instead of the worst-case three-point distribution, we take the expectation in (6) over this two-point distribution and find the robust mean-range ordering policy using the resulting LP. The plots clearly demonstrate that knowledge of dispersion in terms of MAD improves performance considerably.
1103
+
1104
+ 20
1105
+ Boonstra, van Eekelen, and van Leeuwaarden: Robust knapsack ordering for a partially-informed newsvendor
1106
[Figure 4: nine EVAI-versus-budget panels for Cases 1-9 (Uniform(10,50), Uniform(10,100), Uniform(10,200), Beta(1,3), Beta(2,2), Beta(3,1), Triangular(10,50,18), Triangular(10,50,30), Triangular(10,50,42)); legend: Mean-MAD and Mean-MAD-β.]
Figure 4. The results for the low margin setting. The x-axis corresponds to B and the y-axis to the EVAI.
[Figure 5: nine EVAI-versus-budget panels for Cases 1-9; legend: Mean-MAD and Mean-MAD-β.]
Figure 5. The results for the high margin setting. The x-axis corresponds to B and the y-axis to the EVAI.
[Figure 6: nine EVAI-versus-budget panels for Cases 1-9; legend: Mean-MAD, Mean-MAD-β and E-M.]
Figure 6. The results for the low margin setting. The x-axis corresponds to B and the y-axis to the EVAI. The E-M performance plot refers to the model with only mean-range information.
5. Conclusions
This paper establishes new ordering policies for the newsvendor with partial demand information (mean, MAD and range) and a budget constraint. The ordering policies follow from a minimax approach, in which we search for the order quantities that minimize the maximal (worst-case) cost function over all demand distributions that comply with the partial information.

The minimax analysis for the multi-item setting gives rise to a knapsack problem, and the solution of this knapsack problem is in fact the ordering policy. This policy prescribes sorting items by their marginal effect on the total costs, reminiscent of the greedy algorithm that solves the continuous knapsack problem. The ordering policy only orders the minimum, mean or maximum demand for each item. Hence, the decision maker can rank the items by their marginal effects and then order items according to this list until the budget is spent. The fact that the ranking list is easy to generate, and that the 'order of ordering' does not depend on the budget, makes the policy transparent and easy to implement. Existing approaches for full and partial (such as mean-variance) knowledge of the demand distribution lack this property of 'budget-consistency'.

The minimax approach provides robustness, with an ordering policy that protects against all distributions that comply with the partial information. This approach avoids the need to estimate the demand distribution, which can be a daunting process in practice and is prone to errors. However, the minimax approach comes at the risk of being overly conservative. Through extensive numerical experiments we compared the robust policies for partial demand settings with the policies for full demand settings, and observed that the proposed policies perform well.

At the heart of our analysis lies the idea of setting up the robust minimax analysis with MAD information. With MAD as dispersion measure we obtained a tractable optimization model, with a solution in terms of a robust ordering policy that satisfies the budget-consistency property. Using MAD to formulate solvable minimax problems can also be applied to other inventory models. We demonstrate this idea in EC.7 for three extended settings: the newsvendor with multiple constraints, the newsvendor with unreliable supply, and the risk-averse newsvendor. In all three cases, the minimax analysis leads to a tractable mathematical program, either a knapsack problem or a linear program.
References
Abdel-Malek, L., Montanari, R., and Morales, L. C. (2004). Exact, approximate, and generic iterative models for the multi-product newsboy problem with budget constraint. International Journal of Production Economics, 91(2):189-198.
Ardestani-Jaafari, A. and Delage, E. (2016). Robust optimization of sums of piecewise linear functions with application to inventory problems. Operations Research, 64(2):474-494.
Arrow, K. J., Harris, T., and Marschak, J. (1951). Optimal inventory policy. Econometrica: Journal of the Econometric Society, 19(3):250-272.
Balas, E. and Zemel, E. (1980). An algorithm for large zero-one knapsack problems. Operations Research, 28(5):1130-1154.
Ben-Tal, A., Den Hertog, D., De Waegenaere, A., Melenberg, B., and Rennen, G. (2013). Robust solutions of optimization problems affected by uncertain probabilities. Management Science, 59(2):341-357.
Ben-Tal, A. and Hochman, E. (1972). More bounds on the expectation of a convex function of a random variable. Journal of Applied Probability, 9(4):803-812.
Ben-Tal, A. and Hochman, E. (1985). Approximation of expected returns and optimal decisions under uncertainty using mean and mean absolute deviation. Zeitschrift für Operations Research, 29(7):285-300.
Boyd, S. and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press, Cambridge, UK.
Chen, H., Hu, M., and Perakis, G. (2022). Distribution-free pricing. Manufacturing & Service Operations Management. ePub ahead of print January 20, https://doi.org/10.1287/msom.2021.1055.
Chen, W., Sim, M., Sun, J., and Teo, C.-P. (2010). From CVaR to uncertainty set: Implications in joint chance-constrained optimization. Operations Research, 58(2):470-485.
Dada, M., Petruzzi, N. C., and Schwarz, L. B. (2007). A newsvendor's procurement problem when suppliers are unreliable. Manufacturing & Service Operations Management, 9(1):9-32.
Delage, E. and Ye, Y. (2010). Distributionally robust optimization under moment uncertainty with application to data-driven problems. Operations Research, 58(3):595-612.
Elmachtoub, A. N., Gupta, V., and Hamilton, M. L. (2021). The value of personalized pricing. Management Science, 67(10):6055-6070.
Erlebacher, S. J. (2000). Optimal and heuristic solutions for the multi-item newsvendor problem with a single capacity constraint. Production and Operations Management, 9(3):303-318.
Fisher, M. and Raman, A. (1996). Reducing the cost of demand uncertainty through accurate response to early sales. Operations Research, 44(1):87-99.
Gallego, G. (1992). A minmax distribution free procedure for the (Q,R) inventory model. Operations Research Letters, 11(1):55-60.
Gallego, G. and Moon, I. (1993). The distribution free newsboy problem: review and extensions. Journal of the Operational Research Society, 44(8):825-834.
Hadley, G. and Whitin, T. M. (1963). Analysis of Inventory Systems. Prentice-Hall, Englewood Cliffs, NJ.
Hanasusanto, G. A., Kuhn, D., Wallace, S. W., and Zymler, S. (2015). Distributionally robust multi-item newsvendor problems with multimodal demand distributions. Mathematical Programming, 152(1):1-32.
Illés, T. and Terlaky, T. (2002). Pivot versus interior point methods: Pros and cons. European Journal of Operational Research, 140(2):170-190.
Käki, A., Liesiö, J., Salo, A., and Talluri, S. (2015). Newsvendor decisions under supply uncertainty. International Journal of Production Research, 53(5):1544-1560.
Kellerer, H., Pferschy, U., and Pisinger, D. (2004). Knapsack Problems. Springer-Verlag, Berlin.
Kleer, P. and van Leeuwaarden, J. (2022). Optimal stopping theory for a distributionally robust seller.
Kong, Q., Lee, C.-Y., Teo, C.-P., and Zheng, Z. (2013). Scheduling arrivals to a stochastic service delivery system using copositive cones. Operations Research, 61(3):711-726.
Lau, H.-S. and Lau, A. H.-L. (1996). The newsstand problem: A capacitated multiple-product single-period inventory problem. European Journal of Operational Research, 94(1):29-42.
Mak, H.-Y., Rong, Y., and Zhang, J. (2014). Appointment scheduling with limited distributional information. Management Science, 61(2):316-334.
Merzifonluoglu, Y. and Feng, Y. (2014). Newsvendor problem with multiple unreliable suppliers. International Journal of Production Research, 52(1):221-242.
Nahmias, S. (1982). Perishable inventory theory: A review. Operations Research, 30(4):680-708.
Nahmias, S. (2009). Production and Operations Analysis. McGraw-Hill Education, New York, 6th edition.
Nahmias, S. and Schmidt, C. P. (1984). An efficient heuristic for the multi-item newsboy problem with a single constraint. Naval Research Logistics Quarterly, 31(3):463-474.
Natarajan, K., Sim, M., and Uichanco, J. (2018). Asymmetry and ambiguity in newsvendor models. Management Science, 64(7):3146-3167.
Natarajan, K. and Teo, C.-P. (2017). On reduced semidefinite programs for second order moment bounds with applications. Mathematical Programming, 161(1):487-518.
Nemirovski, A. and Shapiro, A. (2007). Convex approximations of chance constrained programs. SIAM Journal on Optimization, 17(4):969-996.
Perakis, G. and Roels, G. (2008). Regret in the newsvendor model with partial information. Operations Research, 56(1):188-203.
Perakis, G., Singhvi, D., and Spantidakis, Y. (2020). Leveraging the newsvendor for inventory distribution at a large fashion e-retailer with depth and capacity constraints. Preprint available at SSRN 3632459.
Popescu, I. (2007). Robust mean-covariance solutions for stochastic optimization. Operations Research, 55(1):98-112.
Postek, K., Ben-Tal, A., den Hertog, D., and Melenberg, B. (2018). Robust optimization with ambiguous stochastic constraints under mean and dispersion information. Operations Research, 66(3):814-833.
Rahimian, H. and Mehrotra, S. (2019). Distributionally robust optimization: A review. arXiv preprint arXiv:1908.05659.
Rockafellar, R. T. and Uryasev, S. (2000). Optimization of conditional value-at-risk. Journal of Risk, 2:21-42.
Rogosinski, W. W. (1958). Moments of non-negative mass. Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences, 245(1240):1-27.
Roos, E. and den Hertog, D. (2020). Reducing conservatism in robust optimization. INFORMS Journal on Computing, 32(4):1109-1127.
Scarf, H. E. (1958). A min-max solution of an inventory problem. In Arrow, K. J., Karlin, S., and Scarf, H. E., editors, Studies in the Mathematical Theory of Inventory and Production. Stanford University Press, Palo Alto, CA.
Shapiro, A., Dentcheva, D., and Ruszczyński, A. (2009). Lectures on Stochastic Programming: Modeling and Theory. SIAM, Philadelphia.
Shapiro, A. and Kleywegt, A. (2002). Minimax analysis of stochastic problems. Optimization Methods and Software, 17(3):523-542.
Silver, E. A., Pyke, D. F., and Peterson, R. (1998). Inventory Management and Production Planning and Scheduling. John Wiley & Sons, New York, 3rd edition.
Vairaktarakis, G. L. (2000). Robust multi-item newsboy models with a budget constraint. International Journal of Production Economics, 66(3):213-226.
van Eekelen, W., den Hertog, D., and van Leeuwaarden, J. S. H. (2022). MAD dispersion measure makes extremal queue analysis simple. INFORMS Journal on Computing. ePub ahead of print January 12, https://doi.org/10.1287/ijoc.2021.1130.
van Leeuwaarden, J. S. and Stegehuis, C. (2021). Robust subgraph counting with distribution-free random graph analysis. Physical Review E, 104(4):044313.
Xu, H., Liu, Y., and Sun, H. (2018). Distributionally robust optimization with matrix moment constraints: Lagrange duality and cutting plane methods. Mathematical Programming, 169(2):489-529.
Zhu, S. and Fukushima, M. (2009). Worst-case conditional value-at-risk with application to robust portfolio management. Operations Research, 57(5):1155-1168.
Zymler, S., Kuhn, D., and Rustem, B. (2013). Distributionally robust joint chance constraints with second-order moment information. Mathematical Programming, 137(1):167-198.
E-Companion to "Robust knapsack ordering for a partially-informed newsvendor with budget constraint"

EC.1. Proofs
Proof of Lemma 1. In their original work, Ben-Tal and Hochman (1972) prove this result for general convex functions by dividing the support into the two intervals [a, µ] and [µ, b] and then applying the Edmundson-Madansky bound to both subintervals. The following proof uses semi-infinite programming duality and is taken from van Eekelen et al. (2022). Consider a general convex function f(x) (this includes (x − q)^+ as a special case). For X ∼ P ∈ P(µ, δ), we solve

\max_{P(x) \geqslant 0} \int_a^b f(x)\,\mathrm{d}P(x) \quad \text{s.t.} \quad \int_a^b \mathrm{d}P(x) = 1, \quad \int_a^b x\,\mathrm{d}P(x) = \mu, \quad \int_a^b |x - \mu|\,\mathrm{d}P(x) = \delta. \tag{EC.1}

Consider the dual of (EC.1),

\min_{\lambda_0, \lambda_1, \lambda_2} \ \lambda_0 + \lambda_1 \mu + \lambda_2 \delta \quad \text{s.t.} \quad M(x) := \lambda_0 + \lambda_1 x + \lambda_2 |x - \mu| \geqslant f(x), \ \forall x \in [a, b]. \tag{EC.2}

The function M(x) has a 'kink' at x = µ. Since the dual problem (EC.2) has three variables, the optimal M(x) touches f(x) at three points: x = a, µ and b. For this choice of M(x),

\lambda_0 = f(a) - \lambda_1 a - \lambda_2 (\mu - a), \quad \lambda_1 = \frac{1}{2}\left(\frac{f(b) - f(\mu)}{b - \mu} + \frac{f(\mu) - f(a)}{\mu - a}\right), \quad \lambda_2 = \frac{1}{2}\left(\frac{f(b) - f(\mu)}{b - \mu} - \frac{f(\mu) - f(a)}{\mu - a}\right).

Because the majorant is piecewise linear and convex, we can majorize every convex function f(x) by letting M(x) touch at the boundary points a, b and at the kink point x = µ. By the complementary slackness property, these points constitute the support of the extremal distribution, and the optimal probabilities follow from solving the linear system given by the equations of (EC.1). This is a linear system of three unknown probabilities and three equations, with solution

p_a = \frac{\delta}{2(\mu - a)}, \qquad p_\mu = 1 - \frac{\delta}{2(\mu - a)} - \frac{\delta}{2(b - \mu)}, \qquad p_b = \frac{\delta}{2(b - \mu)}.

Finally, for these primal and dual solutions, we verify that the objective values of problems (EC.1) and (EC.2) agree, which confirms that strong duality holds.
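The three-point distribution derived in this proof is easy to sanity-check numerically. The snippet below is an illustrative check, not part of the paper; the Uniform(10, 50) test distribution and the value q = 25 are assumptions. It verifies that the three-point expectation dominates the exact expectation of a convex function under a distribution with matching mean and MAD.

```python
# Numerical sanity check (illustrative): for convex f, the three-point
# distribution on {a, mu, b} from Lemma 1 attains the worst case, so its
# expectation dominates that of any distribution in P(mu, delta).
# Test distribution: Uniform(10, 50), for which mu = 30 and delta = (b - a)/4.

a, b = 10.0, 50.0
mu, delta = (a + b) / 2, (b - a) / 4
q = 25.0
f = lambda x: max(x - q, 0.0)          # convex, newsvendor-type function

# Extremal three-point distribution (Lemma 1 / Ben-Tal and Hochman, 1972).
pa = delta / (2 * (mu - a))
pb = delta / (2 * (b - mu))
upper = pa * f(a) + (1 - pa - pb) * f(mu) + pb * f(b)

# E[(X - q)^+] under Uniform(a, b), in closed form.
exact = (b - q) ** 2 / (2 * (b - a))
```

Here `exact` is 7.8125 while the worst-case bound `upper` is 8.75, so the bound indeed dominates.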
Proof of Lemma 3. We prove this result for general convex f(x). For a random variable X with distribution P ∈ P(µ, δ, β), the tight lower bound follows from

\min_{P(x) \geqslant 0} \int_a^b f(x)\,\mathrm{d}P(x) \quad \text{s.t.} \quad \int_a^b \mathrm{d}P(x) = 1, \quad \int_a^b x\,\mathrm{d}P(x) = \mu, \quad \int_a^b |x - \mu|\,\mathrm{d}P(x) = \delta, \quad \int_a^b \mathbb{1}_{\{x \geqslant \mu\}}\,\mathrm{d}P(x) = \beta. \tag{EC.3}

Consider the dual of (EC.3),

\max_{\lambda_0, \lambda_1, \lambda_2, \lambda_3} \ \lambda_0 + \lambda_1 \mu + \lambda_2 \delta + \lambda_3 \beta \quad \text{s.t.} \quad M(x) := \lambda_0 + \lambda_1 x + \lambda_2 |x - \mu| + \lambda_3 \mathbb{1}_{\{x \geqslant \mu\}} \leqslant f(x), \ \forall x \in [a, b]. \tag{EC.4}

Here M(x) has both a 'kink' and a jump discontinuity at x = µ. Let the function M(x) touch the epigraph of f(x) at two points on opposite sides of µ. If we insert this knowledge, the constraints in the dual problem reduce to two equality constraints. From the Karush-Kuhn-Tucker conditions, we deduce the optimal tangent points

x_1 = \mu + \frac{\delta}{2\beta}, \qquad x_2 = \mu - \frac{\delta}{2(1 - \beta)},

which correspond to υ1 and υ2. Substituting this solution and solving for λ0, λ1, λ2 and λ3 gives

\lambda_0 = f(\upsilon_2) + \frac{(\lambda_1 - \lambda_2)\delta}{2(1 - \beta)} - \lambda_1 \mu, \qquad \lambda_3 = f(\upsilon_1) - f(\upsilon_2) + \frac{\lambda_2 \delta}{1 - \beta} - \frac{(\lambda_2 + \lambda_1)\delta}{2\beta(1 - \beta)},

and hence the optimal value is given by βf(υ1) + (1 − β)f(υ2). To ensure the solution is dual feasible, we assign suitable values to the two free decision variables. That is, we let λ1 + λ2 and λ1 − λ2 equal the slope of f(x) at x = υ1 and x = υ2, respectively. The optimal probabilities of (EC.3) are obtained by solving the linear system resulting from (EC.3).
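Analogously, the two-point distribution on {υ1, υ2} can be checked to deliver a lower bound for convex f. The sketch below is illustrative; Uniform(10, 50) (so µ = 30, δ = 10, β = 0.5) and q = 25 are assumed test values.

```python
# Companion check for Lemma 3 (illustrative): for convex f, the two-point
# distribution on {v1, v2} gives a tight *lower* bound on E[f(X)] over
# P(mu, delta, beta). Test distribution: Uniform(10, 50).

mu, delta, beta = 30.0, 10.0, 0.5
q = 25.0
f = lambda x: max(x - q, 0.0)

v1 = mu + delta / (2 * beta)            # = 40
v2 = mu - delta / (2 * (1 - beta))      # = 20
lower = beta * f(v1) + (1 - beta) * f(v2)

# E[(X - q)^+] under Uniform(10, 50), in closed form.
exact = (50.0 - q) ** 2 / (2 * 40.0)
```

Here `lower` is 7.5 against the exact value 7.8125, consistent with the lower-bound claim.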
EC.2. Known properties of MAD
We recall some well-known properties of the MAD; see e.g. Ben-Tal and Hochman (1985). Denote by σ² the variance of the random variable X, whose distribution is known to belong to the set P(µ, δ). Then

\frac{\delta^2}{4\beta(1 - \beta)} \leqslant \sigma^2 \leqslant \frac{\delta(b - a)}{2}.

In particular, since δ² ⩽ 4β(1 − β)σ² ⩽ σ², it holds that δ ⩽ σ. For a proof, we refer the reader to Ben-Tal and Hochman (1985). For the distributions used in the paper, explicit formulas for δ are available:
• Uniform distribution on [a, b]: \delta = \frac{1}{4}(b - a).
• Beta distribution with parameters k, λ on support [a, b]: \delta = \frac{2 k^k \lambda^\lambda \Gamma(k + \lambda)}{(k + \lambda)^{k + \lambda + 1} \Gamma(k) \Gamma(\lambda)} (b - a).
• Triangular distribution on [a, b] with mode c: \delta = \frac{2(b + c - 2a)^3}{81(a - b)(a - c)} for a + b < 2c, and \delta = \frac{2(a + c - 2b)^3}{81(a - b)(b - c)} for a + b > 2c.
• Normal distribution N(µ, σ²): \delta = \sqrt{\frac{2}{\pi}}\,\sigma.
• Gamma distribution with parameters λ and k (for which µ = k/λ): \delta = \frac{2 k^k}{\Gamma(k) \exp(k)} \frac{1}{\lambda}.

The MAD is known to satisfy the bound

0 \leqslant \delta \leqslant \frac{2(b - \mu)(\mu - a)}{b - a}. \tag{EC.5}

Let β = P(X ⩾ µ). For example, for a continuous symmetric distribution of X we know that β = 0.5. This quantity is known to satisfy the bounds

\frac{\delta}{2(b - \mu)} \leqslant \beta \leqslant 1 - \frac{\delta}{2(\mu - a)}. \tag{EC.6}
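Two of the closed-form expressions above are easy to verify by simulation. The sketch below is illustrative (sample sizes, seed and tolerances are arbitrary choices): it estimates the MAD of a uniform and a normal distribution and compares against δ = (b − a)/4 and δ = √(2/π) σ.

```python
# Quick numerical check of two closed-form MAD expressions (illustrative).
import math
import random

random.seed(0)
n = 200_000

# Uniform on [a, b]: delta = (b - a) / 4.
a, b = 10.0, 50.0
mu = (a + b) / 2
samples = [random.uniform(a, b) for _ in range(n)]
mad_uniform = sum(abs(x - mu) for x in samples) / n
assert abs(mad_uniform - (b - a) / 4) < 0.1

# Normal N(mu, sigma^2): delta = sqrt(2 / pi) * sigma.
mu_n, sigma = 0.0, 2.0
samples = [random.gauss(mu_n, sigma) for _ in range(n)]
mad_hat = sum(abs(x - mu_n) for x in samples) / n
assert abs(mad_hat - math.sqrt(2 / math.pi) * sigma) < 0.05
```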
EC.3. The knapsack problem
The knapsack problem (Kellerer et al., 2004) is an integer programming problem and can be formulated as

\max_{x} \ \sum_{i=1}^{n} p_i x_i \quad \text{s.t.} \quad \sum_{i=1}^{n} c_i x_i \leqslant B, \quad x_i \in \{0, 1\}, \ i = 1, \ldots, n, \tag{EC.7}

for decision variable x, budget B, prices p > 0 and costs c. Assume B < \sum_{i=1}^{n} c_i. The continuous version is obtained by considering the linear relaxation, i.e., we replace the integrality constraints by 0 ⩽ x_i ⩽ 1, i = 1, …, n. The so-called greedy choice algorithm produces an optimal solution for the continuous knapsack problem.

We first renumber the items x_i such that p_1/c_1 ⩾ … ⩾ p_n/c_n. Hence, the first item causes the largest increase in value relative to its costs. We now iterate over x_1, …, x_n and in each iteration set x_i to its maximum capacity. When the budget constraint would be violated, set

x_i = \frac{1}{c_i}\Big(B - \sum_{j=1}^{i-1} c_j x_j\Big).

This greedy choice algorithm produces the optimal solution to the continuous relaxation of (EC.7). Below we state its proof, which is an adaptation of the proof in Kellerer et al. (2004).

Assume without loss of generality that p_1/c_1 > ⋯ > p_n/c_n. If we had p_i/c_i = p_{i+1}/c_{i+1} for some i, then we would be indifferent between those items, and the proof below is easily adapted to cover this case. The greedy choice algorithm produces a solution such that, for some index j, we have 1 = x_1 = ⋯ = x_{j−1} > x_j ⩾ x_{j+1} = ⋯ = x_n = 0. Suppose we had a different feasible optimal solution y ≠ x. Since p_i > 0 and \sum_{i=1}^{n} c_i > B, it must hold that \sum_{i=1}^{n} c_i y_i = B, as otherwise we could spend additional capital to increase the optimal value. Because p_1/c_1 ⩾ … ⩾ p_n/c_n, there exists a smallest index k such that y_k < 1; let l be the smallest index such that k < l and y_l > 0. Such an index must exist, as otherwise we would have y = x. Now we increase the value of y_k and decrease the value of y_l. By choosing ϵ = min{c_k(1 − y_k), c_l y_l} > 0, increasing y_k by ϵ/c_k and decreasing y_l by ϵ/c_l, we maintain feasibility and preserve \sum_{i=1}^{n} c_i y_i = B. The solution value changes by p_k ϵ/c_k − p_l ϵ/c_l = ϵ(p_k/c_k − p_l/c_l) > 0. This contradicts the assumption that y is an optimal solution. Therefore, x is optimal, which concludes the proof.
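The greedy choice algorithm described above can be sketched in a few lines. This is an illustrative implementation; the profit, cost and budget values are assumptions.

```python
# Minimal sketch of the greedy algorithm for the continuous (LP-relaxed)
# knapsack problem: sort by profit/cost ratio, take items whole until the
# budget would be violated, then take the affordable fraction.

def greedy_continuous_knapsack(p, c, B):
    order = sorted(range(len(p)), key=lambda i: p[i] / c[i], reverse=True)
    x = [0.0] * len(p)
    remaining = B
    for i in order:
        if c[i] <= remaining:        # take the whole item
            x[i] = 1.0
            remaining -= c[i]
        else:                        # take the affordable fraction and stop
            x[i] = remaining / c[i]
            break
    return x

p, c, B = [60.0, 100.0, 120.0], [10.0, 20.0, 30.0], 50.0
x = greedy_continuous_knapsack(p, c, B)
value = sum(pi * xi for pi, xi in zip(p, x))
```

Here the ratios p_i/c_i are 6, 5 and 4, so the first two items are taken whole and two-thirds of the third, for a total value of 240.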
EC.4. DRO results
In Ben-Tal and Hochman (1972), the following result was proved (for a much larger class of functions f(y, X) than in our case):

Proposition EC.1. If f(y, ·) is convex,

\sup_{P \in \mathcal{P}(\mu, \delta)} \mathbb{E}_P[f(y, X)] = g_U(y) = \sum_{\kappa \in \{1,2,3\}^n} \Big(\prod_{i=1}^{n} p^{(i)}_{\kappa_i}\Big) f(y, \xi^{(1)}_{\kappa_1}, \ldots, \xi^{(n)}_{\kappa_n}), \tag{EC.8}

with p^{(i)}_{\kappa_i}, \xi^{(i)}_{\kappa_i} defined as in Lemma 2. If f(y, ·) is concave,

\sup_{P \in \mathcal{P}(\mu, \delta, \beta)} \mathbb{E}_P[f(y, X)] = g_L(y) = \sum_{\kappa \in \{1,2\}^n} \Big(\prod_{i=1}^{n} \hat{p}^{(i)}_{\kappa_i}\Big) f(y, \upsilon^{(1)}_{\kappa_1}, \ldots, \upsilon^{(n)}_{\kappa_n}), \tag{EC.9}

with \upsilon^{(i)}_1 = \mu_i + \frac{\delta_i}{2\beta_i}, \upsilon^{(i)}_2 = \mu_i - \frac{\delta_i}{2(1 - \beta_i)} and \hat{p}^{(i)}_1 = \beta_i, \hat{p}^{(i)}_2 = 1 - \beta_i.

Hence, g_U(·) in (EC.8) inherits the convexity in y from f(·, X) and its functional form depends only on the form of f(·, X) (and similarly for g_L(·)). The upper and lower bound give a closed interval for

\mathrm{Val}_P(y) = \mathbb{E}_P[f(y, X)] \quad \forall P \in \mathcal{P}(\mu, \delta, \beta). \tag{EC.10}

Corollary EC.1. If f(y, ·) is convex for all y, then Val_P(y) ∈ [g_L(y), g_U(y)] for all P ∈ P(µ, δ, β). If f(y, ·) is concave for all y, then Val_P(y) ∈ [g_U(y), g_L(y)] for all P ∈ P(µ, δ, β).

From Proposition EC.1 we see that the extremal distribution is independent of y. Hence, we can substitute the 3^n terms. This leads to a convex function in y, and hence the minimization problem over y is tractable.
EC.5. Robust analysis with mean-variance knowledge
EC.5.1. Scarf's result for a single item
Scarf (1958) introduced a distribution-free analysis for the single-item newsvendor model by assuming that the decision maker only knows the mean and variance of the demand. Define the ambiguity set containing all distributions with the same mean and variance as

\mathcal{P}(\mu, \sigma) := \{P \mid \mathbb{E}_P(D) = \mu, \ \mathbb{E}_P(D^2) = \sigma^2 + \mu^2\}.

Scarf (1958) determined an upper bound on the cost function C(q) by finding the worst-case distribution in the ambiguity set. To find the order quantity that protects against the ambiguity in P(µ, σ), the following minimax optimization problem is solved:

\min_{q} \max_{P \in \mathcal{P}(\mu, \sigma)} \ dq + (m + d)\,\mathbb{E}_P(D - q)^+.

Since

\max_{P \in \mathcal{P}(\mu, \sigma)} \mathbb{E}_P(D - q)^+ \leqslant \frac{\sqrt{\sigma^2 + (\mu - q)^2} + (\mu - q)}{2},

this minimax optimization problem becomes \min_q C_S(q) with

C_S(q) := d(q - \mu) + (m + d)\,\frac{\sqrt{\sigma^2 + (\mu - q)^2} + (\mu - q)}{2}, \tag{EC.11}

and solution

q_S := \arg\min_{q} C_S(q) = \mu + \frac{\sigma}{2}\left(\sqrt{\frac{m}{d}} - \sqrt{\frac{d}{m}}\right). \tag{EC.12}

The quantity q_S is known as Scarf's order quantity, which prescribes ordering more than the expected demand when m > d, and less than the expected demand when m < d.
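Scarf's order quantity (EC.12) and cost bound (EC.11) are straightforward to implement. The sketch below uses assumed illustrative parameter values and checks that q_S exceeds the mean when m > d and that it locally minimizes the worst-case cost.

```python
# Scarf's order quantity (EC.12) and worst-case cost bound (EC.11),
# sketched for illustrative parameter values.
import math

def scarf_quantity(mu, sigma, m, d):
    """q_S = mu + (sigma / 2) * (sqrt(m / d) - sqrt(d / m))."""
    return mu + (sigma / 2) * (math.sqrt(m / d) - math.sqrt(d / m))

def scarf_cost(q, mu, sigma, m, d):
    """Worst-case cost bound C_S(q) from (EC.11)."""
    return d * (q - mu) + (m + d) * (math.sqrt(sigma**2 + (mu - q)**2) + (mu - q)) / 2

mu, sigma, m, d = 30.0, 8.0, 3.0, 1.0
q_s = scarf_quantity(mu, sigma, m, d)   # above the mean, since m > d
# q_s minimizes the worst-case cost: nearby quantities cost at least as much.
assert scarf_cost(q_s, mu, sigma, m, d) <= scarf_cost(q_s + 1, mu, sigma, m, d)
assert scarf_cost(q_s, mu, sigma, m, d) <= scarf_cost(q_s - 1, mu, sigma, m, d)
```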
EC.5.2. Gallego and Moon
When the model is based on mean-variance information, Gallego and Moon (1993) formulate the problem as

\min_{q} \ C_S(q) := \sum_{i=1}^{n} c_i \left[ d_i (q_i - \mu_i) + (m_i + d_i)\,\frac{\sqrt{\sigma_i^2 + (q_i - \mu_i)^2} - (q_i - \mu_i)}{2} \right] \quad \text{s.t.} \quad \sum_{i=1}^{n} c_i q_i \leqslant B, \quad q \geqslant 0. \tag{EC.13}

The optimal solution to problem (EC.13) is referred to as q_S. Applying Scarf's bound for each item individually results in (EC.13). Similar to the full information setting with a known distribution, this optimization problem can be solved with Lagrange multiplier techniques.
EC.6. Additional numerical experiments
This section presents additional numerical results. Section EC.6.1 presents the performance plots for the average margin setting. We compare the mean-MAD and mean-variance ordering policies in Section EC.6.2.

EC.6.1. More mean-MAD results
Figure EC.2 depicts the results for the average profitability scenario. A quick glance reveals that these plots convey a different impression than the low profitability scenario. We conclude that the mean-MAD EVAI remains below some bound for budget levels ranging from zero to two-thirds of the maximum budget. For all cases, this bound on the EVAI is around 10%. As the budget passes two-thirds of the maximum budget, the performance starts to decrease. However, the mean-MAD-β EVAI decreases when approaching the maximal budget.
[Figure EC.1: nine panels showing the probability density functions of Cases 1-9 (Uniform(10,50), Uniform(10,100), Uniform(10,200), Beta(1,3), Beta(2,2), Beta(3,1), and Triangular with mode c = 18, 30, 42).]
Figure EC.1. Nine probability density functions used for multi-item performance analysis.
[Figure EC.2: nine EVAI-versus-budget panels for Cases 1-9; legend: Mean-MAD and Mean-MAD-β.]
Figure EC.2. The results for the average margin setting. The x-axis corresponds to B and the y-axis to the EVAI.
EC.6.2. Mean-variance comparison
We start the performance analysis with the low margin scenario. The x-axis refers to the budget level B, and the y-axis refers to the EVAI. In each plot, the blue line corresponds to the EVAI for the mean-MAD model and the orange line to the mean-variance EVAI. Figure EC.3 contains the performance plots for each of the nine cases we are considering.
+ [Figure EC.3 panel residue omitted: axis ticks and panel titles for Cases 1-9 (Uniform (10,50), Uniform (10,100), Uniform (10,200), Beta (1,3), Beta (2,2), Beta (3,1), Triangular (10,50,18), Triangular (10,50,30), Triangular (10,50,42)); legend: Mean-MAD, Mean-variance.]
+ Figure EC.3
+ The results for the low margin scenario. The x-axis corresponds to the budget level and the y-axis to the EVAI.
+ In Figure EC.3 we compare the mean-MAD policy with the mean-variance ordering policy in terms of EVAI for the scenario with low margins and a total of nine ground-truth demand distributions. While both policies generally give low EVAIs, the EVAI of the mean-variance policy is typically lower. We stress that this does not mean that the mean-variance policy is better. Indeed, a fair numerical comparison is impossible, as the respective ambiguity sets can contain vastly different distributions. While a finite variance excludes distributions with an infinite second moment, MAD does not. In general, the worst-case scenarios or extremal distributions are ‘more extreme’ for MAD than for variance. This also offers a possible explanation for the slightly higher EVAI.
+ EC.7. Extensions
+ We now present a distribution-free analysis for three extensions of the multi-item newsvendor model. Section EC.7.1 deals with multiple constraints, Section EC.7.2 considers uncertain supply, and Section EC.7.3 discusses the risk-averse newsvendor where the conditional value at risk (CVaR) is chosen as the objective function.
+ EC.7.1. Multiple constraints
+ Lau and Lau (1996) consider the newsvendor problem with multiple constraints, and propose a numerical solution procedure that computes the Lagrange multipliers as roots of a system of nonlinear equations. Perakis et al. (2020) also consider multiple capacity constraints in a retail environment, and distinguish between warehouse capacity and inventory availability constraints. By exploiting Lagrangian duality, the problem is decomposed into two subproblems, which are solved iteratively by binary search.
+ We now argue that the distribution-free analysis developed in the present paper also carries over to the setting with multiple constraints, and takes the form
+ \[
+ \begin{aligned}
+ \min_{q}\ \ & \sum_{i=1}^{n} c_i\Big[d_i(q_i-\mu_i) + (m_i+d_i)\big(p^{(i)}_1(a_i-q_i)^+ + p^{(i)}_2(\mu_i-q_i)^+ + p^{(i)}_3(b_i-q_i)^+\big)\Big] \\
+ \text{s.t.}\ \ & \sum_{i=1}^{n} c_{i,j}\,q_i \leqslant B_j, \quad j=1,\dots,m, \\
+ & q_i \geqslant 0, \quad i=1,\dots,n.
+ \end{aligned}
+ \tag{EC.14}
+ \]
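For intuition, the inner worst-case expectation in (EC.14) can be evaluated directly once the extremal three-point probabilities are known. A minimal sketch of this evaluation for a single item; the parameter values (µ = 30, δ = 20/3, support [10,50], m = 1, d = 0.8) are the ones used in the paper's single-item figure examples and are illustrative here:

```python
def worst_case_cost(q, mu, delta, a, b, m, d, c=1.0):
    """Worst-case expected cost of one mean-MAD item at order quantity q.

    The extremal distribution places mass p1, p2, p3 on the support
    points a, mu, b (the three-point distribution used in the paper).
    """
    p1 = delta / (2 * (mu - a))
    p3 = delta / (2 * (b - mu))
    p2 = 1.0 - p1 - p3
    shortfall = (p1 * max(a - q, 0)
                 + p2 * max(mu - q, 0)
                 + p3 * max(b - q, 0))
    return c * (d * (q - mu) + (m + d) * shortfall)

# Ordering q = b = 50 leaves no worst-case shortfall, so the cost is
# c * d * (q - mu) = 0.8 * 20 = 16.
print(worst_case_cost(50, 30, 20 / 3, 10, 50, 1.0, 0.8))  # 16.0
```

The resulting function of q is piecewise linear and convex, which is exactly what makes the LP reformulation below possible.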
+ By introducing dummy variables $\tau^{(i)}_k$, we reformulate problem (EC.14) as
+ \[
+ \begin{aligned}
+ \min_{q,\tau}\ \ & \sum_{i=1}^{n} c_i\Big[d_i(q_i-\mu_i) + (m_i+d_i)\big(p^{(i)}_1\tau^{(i)}_1 + p^{(i)}_2\tau^{(i)}_2 + p^{(i)}_3\tau^{(i)}_3\big)\Big] \\
+ \text{s.t.}\ \ & \sum_{i=1}^{n} c_{i,j}\,q_i \leqslant B_j, \quad j=1,\dots,m, \\
+ & \tau^{(i)}_k \geqslant \xi^{(i)}_k - q_i, \quad k=1,2,3;\ i=1,\dots,n, \\
+ & \tau^{(i)}_k \geqslant 0, \quad k=1,2,3;\ i=1,\dots,n, \\
+ & q_i \geqslant 0, \quad i=1,\dots,n,
+ \end{aligned}
+ \tag{EC.15}
+ \]
+ which remains a tractable LP, solvable for large-scale problems with interior-point methods. Moreover, by solving the dual problem of (EC.15), shadow prices of the m budget constraints can be computed that quantify the marginal expected net benefit of allocating an additional unit of budget to Bj, j = 1,...,m.
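The dummy variables in (EC.15) are standard epigraph variables: at optimality, τ_k^(i) = (ξ_k^(i) − q_i)^+, so the objectives of (EC.14) and (EC.15) coincide. A quick numerical check of this equivalence (the problem data below are made up for illustration):

```python
def cost_ec14(q, xi, p, mu, m, d, c):
    """Objective of (EC.14) for one item: direct evaluation of the (.)^+ terms."""
    shortfall = sum(pk * max(x - q, 0) for pk, x in zip(p, xi))
    return c * (d * (q - mu) + (m + d) * shortfall)

def cost_ec15(q, xi, p, mu, m, d, c):
    """Objective of (EC.15) with tau set to its optimal value (xi_k - q)^+."""
    tau = [max(x - q, 0) for x in xi]
    return c * (d * (q - mu) + (m + d) * sum(pk * t for pk, t in zip(p, tau)))

xi = [10, 30, 50]       # support points a, mu, b
p = [1 / 6, 2 / 3, 1 / 6]  # extremal three-point probabilities
for q in [0, 15, 30, 45, 60]:
    assert abs(cost_ec14(q, xi, p, 30, 1.0, 0.8, 1.0)
               - cost_ec15(q, xi, p, 30, 1.0, 0.8, 1.0)) < 1e-12
print("EC.14 and EC.15 objectives agree on the test grid")
```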
+ EC.7.2. Supply and demand uncertainty
+ The newsvendor might take different decisions when the delivery of an order for q units is not necessarily complete (uncertain supply). Käki et al. (2015) consider uncertain supply and uncertain demand, when supply and demand are independent or follow a particular copula-based dependency structure. In the mean-variance setting and under the independence assumption, Gallego and Moon (1993) solve the distribution-free newsvendor problem with random yield, but assume the yield is a binomial random variable that depends on the order size q. That is, when an order for q units is made, each individual unit is received with some fixed probability, or is not delivered at all.
+ As opposed to Gallego and Moon (1993), we do introduce an ambiguity set for the random supply. Consider the setting with multiplicative yield Zi, where the random supply is given by Zi · qi. Assume Zi has mean µ̃i, MAD δ̃i and support [ãi, b̃i], where 0 ⩽ ãi ⩽ b̃i ⩽ 1. The distribution of Zi then resides in P(µ̃i, δ̃i). The extremal three-point distribution for Zi has probabilities
+ 1 =
2256
+ ˜δi
2257
+ 2(˜µi − ˜ai),
2258
+ ˜p(i)
2259
+ 2 = 1 −
2260
+ ˜δi
2261
+ 2(˜µi − ˜ai) −
2262
+ ˜δi
2263
+ 2(˜bi − ˜µi)
2264
+ ,
2265
+ ˜p(i)
2266
+ 3 =
2267
+ ˜δi
2268
+ 2(˜bi − ˜µi)
2269
+ ,
2270
+ and is supported on ζ(i)
2271
+ 1 = ˜ai, ζ(i)
2272
+ 2 = ˜µi, ζ(i)
2273
+ 3 = ˜bi, respectively. The multi-item newsvendor
2274
+ with supply ambiguity is equivalent to
2275
+ \[
+ \begin{aligned}
+ \min_{q}\ \ & \sum_{i=1}^{n} \max_{P\in\mathcal{P}_i} \mathbb{E}_P\Big[c_i\big(d_i(Z_i q_i - D_i) + (m_i+d_i)(D_i - Z_i q_i)^+\big)\Big] \\
+ \text{s.t.}\ \ & \sum_{i=1}^{n} c_i q_i \leqslant B_j, \quad j=1,\dots,m, \\
+ & q_i \geqslant 0, \quad i=1,\dots,n,
+ \end{aligned}
+ \tag{EC.16}
+ \]
+ with $\mathcal{P}_i := \mathcal{P}_{(\mu_i,\delta_i)} \times \mathcal{P}_{(\tilde\mu_i,\tilde\delta_i)}$. Since the newsvendor problem is jointly convex in the pairwise independent random variables Di and Zi, the distributions that maximize the objective function of (EC.16) are the extremal three-point distributions. Applying these worst-case distributions to (EC.16) results in
+ \[
+ \begin{aligned}
+ \min_{q,\tau}\ \ & \sum_{i=1}^{n} c_i\Big[d_i(\tilde\mu_i q_i - \mu_i) + (m_i+d_i)\sum_{\kappa\in\{1,2,3\}^2} p^{(i)}_{\kappa_1}\,\tilde p^{(i)}_{\kappa_2}\,\tau^{(i)}_{\kappa}\Big] \\
+ \text{s.t.}\ \ & \sum_{i=1}^{n} c_i q_i \leqslant B, \\
+ & \tau^{(i)}_{\kappa} \geqslant \xi^{(i)}_{\kappa_1} - \zeta^{(i)}_{\kappa_2} q_i, \quad \kappa\in\{1,2,3\}^2;\ i=1,\dots,n, \\
+ & \tau^{(i)}_{\kappa} \geqslant 0, \quad \kappa\in\{1,2,3\}^2;\ i=1,\dots,n, \\
+ & q_i \geqslant 0, \quad i=1,\dots,n.
+ \end{aligned}
+ \tag{EC.17}
+ \]
+ To demonstrate the distribution-free newsvendor with uncertain supply, consider the one-dimensional case with random demand D with a uniform distribution on [20,80] and multiplicative yield Z uniformly distributed on [0.65,0.95]. Figure EC.4 depicts the tight lower and upper bounds that follow from optimizing over the ambiguity sets that contain the distributions of D and Z. As the extremal distributions are discrete, the objective function of (EC.17) admits a piecewise linear representation.
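The inner sum in (EC.17) is a finite expectation over the 3 × 3 joint scenarios of the demand and yield three-point distributions. A sketch that evaluates this worst-case bound for the setting of Figure EC.4 (D with mean 50, MAD 15 on [20,80]; Z with mean 0.8, MAD 0.075 on [0.65,0.95]; m = 1, d = 0.8):

```python
from itertools import product

def three_point(mu, delta, a, b):
    """Extremal mean-MAD three-point distribution on {a, mu, b}."""
    p1 = delta / (2 * (mu - a))
    p3 = delta / (2 * (b - mu))
    return [(a, p1), (mu, 1 - p1 - p3), (b, p3)]

def worst_case_cost(q, demand, yield_, m=1.0, d=0.8, c=1.0):
    """Worst-case expected cost with multiplicative yield, as in (EC.17)."""
    total = 0.0
    for (xi, p), (zeta, pt) in product(demand, yield_):
        total += p * pt * (d * (zeta * q - xi) + (m + d) * max(xi - zeta * q, 0))
    return c * total

D = three_point(50, 15, 20, 80)
Z = three_point(0.8, 0.075, 0.65, 0.95)
# At q = 0 the yield is irrelevant and the bound reduces to
# -d*E[D] + (m+d)*E[D] = m*E[D] = 50.
print(worst_case_cost(0, D, Z))
```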
+ [Figure EC.4 plot residue omitted: order quantity on the x-axis (0-120), expected costs on the y-axis (15-50); curves: Uniform distributions, Mean-MAD upper bound, Mean-MAD lower bound.]
+ Figure EC.4
+ Tight bounds for the multi-item newsvendor with uncertain supply yield, where m = 1 and d = 0.8. The upper piecewise linear function is obtained by evaluating E[D − Z · q], with D following the extremal distribution that lies in P(50,15,20,80) and Z the worst-case three-point distribution in P(0.8,0.075,0.65,0.95). The lower bound follows from the best-case two-point distributions. The middle curve depicts the ‘true’ costs, where D has a uniform distribution on [20,80], and Z is uniformly distributed on [0.65,0.95].
+ Because problem (EC.16) can be written in terms of a piecewise linear function, the optimal solution follows from a knapsack algorithm similar to Theorem 2. Further, one can gain additional insights by explicitly deriving the optimal order quantities for the robust single-item model, as in Theorem 1. The problem is similar for additive yield, which also results in a three-point distribution for the worst case. Other directions for future research include solving (EC.16) with multiple unreliable and non-identical suppliers (Dada et al., 2007) and the newsvendor problem with fixed ordering costs and supplier capacity restrictions (Merzifonluoglu and Feng, 2014).
+ EC.7.3. Risk aversion
+ We next consider a risk-averse decision maker, as in Chen et al. (2010), who makes decisions based on CVaR. The decision maker no longer optimizes the expected costs, but instead minimizes the average value of the costs exceeding the γth quantile of the newsvendor's cost distribution. For the cost function G(q,D), CVaR can be calculated by solving a convex minimization problem (Rockafellar and Uryasev, 2000):
+ \[
+ \min_{\theta\in\mathbb{R}} \left\{\theta + \frac{1}{1-\gamma}\,\mathbb{E}(G(q,D)-\theta)^+\right\}.
+ \]
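For a discrete cost distribution, such as the worst-case three-point distributions used throughout, the Rockafellar-Uryasev minimization above is piecewise linear and convex in θ, with breakpoints at the cost atoms, so scanning the atoms suffices. A minimal sketch of this computation (the data below are illustrative):

```python
def cvar(costs, probs, gamma):
    """CVaR_gamma of a discrete cost distribution via the
    Rockafellar-Uryasev representation:
        min_theta  theta + E[(X - theta)^+] / (1 - gamma).
    """
    def objective(theta):
        tail = sum(p * max(x - theta, 0) for x, p in zip(costs, probs))
        return theta + tail / (1 - gamma)
    # The optimum is attained at one of the atoms.
    return min(objective(theta) for theta in costs)

# Two equally likely costs 0 and 10: the worst 25% of outcomes
# all equal 10, so CVaR at gamma = 0.75 is 10.
print(cvar([0, 10], [0.5, 0.5], 0.75))  # 10.0
```

Note that at γ = 0 this recovers the plain expectation, consistent with CVaR interpolating between the mean and the worst case.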
+ Calculating CVaR requires full knowledge of the demand distribution. However, in practice, committing to a particular distribution might be problematic for the decision maker if there is not enough data available. Hence, we consider the partial information setting as in Zhu and Fukushima (2009) and Delage and Ye (2010), and seek to solve
+ \[
+ \min_{q:\,\sum_i c_i q_i\leqslant B,\ q_i\geqslant 0}\ \max_{P\in\mathcal{P}_{(\mu,\delta)}}\ \min_{\theta\in\mathbb{R}}\left\{\theta + \frac{1}{1-\gamma}\,\mathbb{E}_P(G(q,D)-\theta)^+\right\}.
+ \tag{EC.18}
+ \]
+ Let us first consider the single-item model. Because the objective function of (EC.18) is finite, P(µ,δ) is weakly compact as supp(D) is compact, and the objective function of (EC.18) is linear in P and convex in θ, we are allowed to interchange the maximization and minimization operators by virtue of the minimax theorem (Shapiro and Kleywegt, 2002). Since (G(q,D) − θ)^+ is a convex function of the uncertain demand, the three-point distribution (10) also maximizes E_P(G(q,D) − θ)^+. When β = P(D ⩾ µ) is known, the two-point distribution in Lemma 3 attains the matching lower bound. For the multivariate problem, notice that (G(q,D) − θ)^+ is again a convex function of the uncertain demand, where D ∼ P ∈ P(µ,δ). By Proposition EC.1 and the reasoning above, the risk-averse newsvendor admits the following LP representation:
+ \[
+ \begin{aligned}
+ \min_{q,\tau,\eta,\theta}\ \ & \theta + \frac{1}{1-\gamma}\sum_{\kappa\in\{1,2,3\}^n}\Big(\prod_{i=1}^{n} p^{(i)}_{\kappa_i}\Big)\eta_{\kappa} \\
+ \text{s.t.}\ \ & \sum_{i=1}^{n} c_i q_i \leqslant B, \\
+ & \eta_{\kappa} \geqslant \sum_{i=1}^{n} c_i\Big[d_i(q_i-\xi^{(i)}_{\kappa_i}) + (m_i+d_i)\tau^{(i)}_{\kappa}\Big] - \theta, \quad \kappa\in\{1,2,3\}^n, \\
+ & \eta_{\kappa} \geqslant 0, \quad \kappa\in\{1,2,3\}^n, \\
+ & \tau^{(i)}_{\kappa} \geqslant \xi^{(i)}_{\kappa_i} - q_i, \quad \kappa\in\{1,2,3\}^n;\ i=1,\dots,n, \\
+ & \tau^{(i)}_{\kappa} \geqslant 0, \quad \kappa\in\{1,2,3\}^n;\ i=1,\dots,n, \\
+ & q_i \geqslant 0, \quad i=1,\dots,n.
+ \end{aligned}
+ \tag{EC.19}
+ \]
+ We show in Figure EC.5 the bounds for the single-item model with demand having support [10,50], µ = 30, δ = 20/3 and β = 1/2. Solving (EC.19) for γ = 0.75, 0.99 and different order sizes yields the upper bounds. We solve an analogous problem, but with the expectation taken over the extremal two-point distribution stated in Lemma 3, to obtain the tight lower bounds. As a point of reference, we also plot the exact values of the CVaR and expected costs when D follows a symmetric triangular distribution on [10,50].
+ [Figure EC.5 plot residue omitted. Panel (a): Expected costs and CVaR, showing CVaR99% against C(q) for q = 10, 20, 30, 40, 50. Panel (b): Mean-MAD bounds for CVaR75% against the order quantity. Curves in both panels: Triangular, Mean-MAD upper bound, Mean-MAD lower bound.]
+ Figure EC.5
+ An illustration of the tight mean-MAD bounds for the risk-averse newsvendor with CVaR as objective criterion, where m = 1, d = 0.8 and γ = 0.75, 0.99. The middle curve corresponds to the CVaR when D follows a symmetric triangular distribution on [10,50]. The upper and lower bounds follow from optimizing over the ambiguity sets that contain this distribution.
+ Solving (EC.19) can be challenging since the objective function (G(q,D) − θ)^+ is no longer separable, thus resulting in an exponential number of variables and constraints. To alleviate this computational difficulty, one might resort to sampling-based procedures such as sample average approximation (Shapiro et al., 2009).
+ We also mention ambiguous chance constraints that can be conservatively approximated by CVaR (Nemirovski and Shapiro, 2007). In the risk-averse newsvendor setting, the decision maker introduces an ambiguous chance constraint that restricts the probability of the costs exceeding a certain threshold t to be less than 1 − γ, considering all distributions in the ambiguity set. For the multi-item setting, this means ensuring
+ \[
+ \mathbb{P}(G(q,D) > t) \leqslant 1-\gamma, \qquad \forall P\in\mathcal{P}_{(\mu,\delta)},
+ \]
+ which is implied by
+ \[
+ \max_{P\in\mathcal{P}_{(\mu,\delta)}} \mathrm{CVaR}_{\gamma}[G(q,D)] \leqslant t.
+ \]
+ In addition, the newsvendor might require a minimal probability that all customer orders will be completely covered by the inventory on hand, i.e., the type-1 service level (Silver et al., 1998). When several of these probabilistic constraints are interrelated, the decision maker should conservatively approximate joint chance constraints. For this one can again use CVaR; see Chen et al. (2010), Zymler et al. (2013), and Roos and den Hertog (2020). Adding ambiguous chance constraints to the models developed in this paper is a worthwhile topic for further research.
+
D9E0T4oBgHgl3EQfywIT/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
DNFQT4oBgHgl3EQf_zdP/content/tmp_files/2301.13459v1.pdf.txt ADDED
@@ -0,0 +1,1163 @@
+ Learning Generalized Hybrid Proximity Representation for Image Recognition
+ 1st Zhiyuan Li, Department of Computer Science, University of Cincinnati, Cincinnati, OH, United States, li3z3@mail.uc.edu
+ 2nd Anca Ralescu, Department of Computer Science, University of Cincinnati, Cincinnati, OH, United States, ralescal@ucmail.uc.edu
+ Abstract—Recently, deep metric learning techniques have received attention, as the learned distance representations are useful for capturing the similarity relationship among samples and further improving the performance of a variety of supervised or unsupervised learning tasks. We propose a novel supervised metric learning method that can learn the distance metrics in both geometric and probabilistic space for image recognition. In contrast to previous metric learning methods, which usually focus on learning the distance metrics in Euclidean space, our proposed method is able to learn better distance representations in a hybrid approach. To achieve this, we propose a Generalized Hybrid Metric Loss (GHM-Loss) to learn general hybrid proximity features from the image data by controlling the trade-off between geometric proximity and probabilistic proximity. To evaluate the effectiveness of our method, we first provide theoretical derivations and proofs of the proposed loss function; then we perform extensive experiments on two public datasets to show the advantage of our method compared to other state-of-the-art metric learning methods.
+ Index Terms—Deep metric learning, proximity, probability distribution, representation learning, image classification
+ I. INTRODUCTION
+ Metric learning takes input data to learn the similar and dissimilar features between samples. The learned distance metric provides a meaningful and robust representation to discriminate the proximity or distance between samples and can be further utilized for both supervised and unsupervised learning tasks [1]. Recently, deep learning-based metric learning algorithms, i.e., deep metric learning, have been widely applied in the computer vision area by developing either a novel network architecture or an intuitive and efficient loss function [2]–[4]. Some typical works, such as the Siamese network [5], the Triplet network [2], and SupCon [6], aim to formulate an instance discrimination task to learn a useful feature representation by optimizing the proximity function in the Euclidean space, i.e., geometric distance or cosine proximity between the feature embeddings. In this paper, we seek to address the inadequacies of geometric proximity in recent state-of-the-art metric learning methods by reconsidering an alternative approach in which learned distance metrics are not biased to only geometric proximity.
+ Copyright (c) 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
+ Metric learning methods have shown excellent classification performance in image recognition applications, due to their extraordinary ability to discriminate similar information between samples [1], [4], [7], [8]. Such metric learning tasks can be either supervised or self-supervised. In supervised metric learning, the model learns to pull together the samples from the same classes and push away the samples from different classes [9]. Self-supervised metric learning, also named contrastive learning, requires a data augmentation step to create some pseudo-ground-truth from the data itself, where augmentations of the same sample form “positive” pairs and augmentations of different samples form “negative” pairs [10]. Similar to supervised metric learning, the self-supervised model learns similar representations for the positive pairs, which should differ from the representations of the negative pairs. Various types of metric/contrastive learning works have been developed for image pattern recognition applications, including image classification [6], [11]–[13], image clustering [14], [15], image segmentation [16], [17], image reconstruction [18], [19], and object detection [20]. All these works used geometric proximity (e.g., cosine similarity or Euclidean distance) as the proximity function when training an objective loss to learn the geometric representation of the samples. However, the probability distribution of the samples should not be ignored.
+ To overcome these limitations and boost the prediction performance of metric learning, we propose a novel supervised metric learning method to learn a hybrid proximity that combines the proximity in both geometric and probabilistic space. To achieve this, we define a supervised Generalized Hybrid Metric Loss (GHM-Loss) to better learn the distance representations in both geometric and probabilistic space. We notice that even if the geometric distance is small, the probabilistic distance can be large when the sample variance is large (Figure 1). This observation suggests that a model may not sufficiently learn the distance features based only on the geometric distance between data points. Thus, enabling the model to partially learn the probabilistic distance controls the trade-off between the two types of distance representation (geometric and probabilistic).
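The point made in Figure 1 can be reproduced with two univariate Gaussians: the distance between means can be identical while the KL divergence differs greatly once variances differ. A sketch using the closed-form KL divergence between univariate Gaussians (the numbers are illustrative, not taken from the paper):

```python
import math

def kl_gauss(mu1, sigma1, mu2, sigma2):
    """KL( N(mu1, sigma1^2) || N(mu2, sigma2^2) ), closed form."""
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2 * sigma2 ** 2)
            - 0.5)

# Same gap between means (0.5), but very different variances:
close_means_same_var = kl_gauss(0.0, 1.0, 0.5, 1.0)  # small KL
close_means_diff_var = kl_gauss(0.0, 3.0, 0.5, 1.0)  # much larger KL
print(close_means_same_var, close_means_diff_var)
```

This is exactly why a loss that only penalizes geometric distance between embeddings can miss distributional differences.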
+ arXiv:2301.13459v1 [cs.CV] 31 Jan 2023
+ [Figure 1 diagram residue omitted: the panel shows distributions X1, X2, X3 with D(µX1, µX2) > D(µX2, µX3) while KL(PX1||PX2) < KL(PX2||PX3).]
+ Fig. 1. Geometric distance vs. probabilistic distance between probability distributions X1, X2 and X3. Without considering the variances, the geometric distance between the mean values µX1, µX2, and µX3 cannot represent their probabilistic distance. D: geometric distance; KL: Kullback–Leibler divergence.
+ Our proposed GHM-Loss is formulated by a hybrid divergence over the geometric and probabilistic space, which generalizes the distance loss form of many existing metric learning methods, including Triplet [2], N-pairs [21], Max Margin [22], NTXent [13], SupCon [6], etc. We first theoretically show the advantage of using the probabilistic distance in metric learning compared to the geometric-based distance, and we employ two public datasets to show the effectiveness of our method compared to other state-of-the-art metric/contrastive learning methods. We also investigate the superiority of the proposed GHM-Loss over other metric/contrastive learning loss functions. To sum up, our main findings and contributions in this work are as follows:
+ 1) We propose a novel supervised metric learning method for enhancing the performance of image recognition by defining a Generalized Hybrid Metric Loss (GHM-Loss). The proposed GHM-Loss is able to learn a better distance representation that controls the trade-off between the geometric-based and the probabilistic distance from feature embeddings.
+ 2) We define two proximity functions with certain properties in geometric and probabilistic space, respectively, and provide proof for each property. Meanwhile, we theoretically show the advantage of the GHM-Loss by including the probabilistic proximity for learning the distance between distributions.
+ 3) Our approach is supported both by a theoretical discussion and by extensive experiments performed on two common image classification tasks to demonstrate the effectiveness of our method compared to other state-of-the-art metric learning methods.
+ II. RELATED WORK
+ In this section, we first discuss some state-of-the-art methods of deep metric learning and some of its applications in the computer vision domain. We further review related works on contrastive learning.
+ A. Metric Learning
+ 1) Traditional Metric Learning: Early machine learning techniques require a hand-crafted processing step, i.e., feature engineering, such as feature selection and feature extraction, before training a machine learning model for supervised (e.g., classification) or unsupervised (e.g., clustering) tasks [7], [23], [24]. These methods, including linear projections, i.e., principal component analysis (PCA) [25], and decompositions, i.e., non-negative matrix factorization (NMF) [26], extract useful feature information but are not embedded directly within the classification structure, resulting in limited performance on certain complex structures, such as high-dimensional data and non-linearity. Unlike traditional machine learning approaches, metric learning performs the learning process on the data to learn a distance feature representation by decreasing the distance between similar samples and increasing the distance between dissimilar ones in an embedding space. The learned distance features have a high ability to discriminate the classes of the sample data. Usually, metric learning approaches apply linear transformation techniques to the input data and map it to a new feature space with higher class separation [27]. However, these methods lack generalization capability and nonlinear knowledge of the attributes [28].
+ 2) Deep Metric Learning: Unlike traditional metric learning methods, deep metric learning relies on training deep neural networks with activation functions that capture nonlinear properties [4], and it has dominated metric representation learning in the image recognition community [2], [3], [5], [6], [29]–[31]. For example, the Siamese network [5] used two identical convolutional neural networks (CNNs) to encode a pair of input samples and minimize the contrastive loss to learn representative distance features. Similar to the Siamese network, Hoffer et al. [2] proposed a Triplet network, involving an anchor, a positive (similar), and a negative (dissimilar) sample, which learns the inequality that the positive sample stays closer to the anchor than the negative sample. Afterward, Wang et al. [3] defined a new Angular loss to constrain the angle at the negative sample of the triplet. Later, Sohn [21] proposed an N-pair loss to address the slow convergence problem of the Triplet loss. More recently, Khosla et al. [6] developed a supervised contrastive learning framework with a more general form of metric learning loss, i.e., SupCon, and showed its effectiveness in classification performance compared to the Triplet loss and the N-pair loss. These existing works have shown great promise for metric and feature representation learning in a variety of image classification tasks. Nevertheless, to the best of our knowledge, most previous studies are focused on learning the geometric-based metrics of the embedding space, while the probabilistic-based metrics are usually ignored. In this work, our method is able to learn meaningful metrics of both geometric and probabilistic space.
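The Triplet loss discussed above enforces that the positive stays closer to the anchor than the negative by a margin; a minimal pure-Python sketch of its hinge form (for illustration only; the paper's GHM-Loss generalizes beyond this purely geometric formulation):

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge form of the Triplet loss: max(0, d(a, p) - d(a, n) + margin)."""
    return max(0.0, euclidean(anchor, positive)
               - euclidean(anchor, negative) + margin)

# Negative far from the anchor: the margin is satisfied, loss is 0.
print(triplet_loss((0, 0), (0, 1), (3, 4)))    # 0.0
# Negative nearly as close as the positive: positive loss.
print(triplet_loss((0, 0), (0, 1), (0, 1.2)))
```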
+ B. Contrastive Representation Learning
+ The main purpose of deep metric learning and contrastive learning is to train a deep learning model to learn distance feature representations in an embedding space. The main difference between the two is that contrastive learning is closely related to the self-supervised learning domain, which contains a data augmentation step to generate an arbitrary number of positive and negative sample pairs from each sample [32]. Given the stunning achievement of self-supervised representation learning, many contrastive learning methods have been developed for various computer vision tasks [33]–[36]. For example, Ye et al. [37] proposed an embedding contrastive learning method with the Siamese network to learn the invariant features of the embedding space. Chen et al. [13] developed a famous contrastive learning framework, SimCLR, in which the model is pretrained to discriminate the positive pairs of data augmentations from the same source image, demonstrating superior performance in ImageNet classification. Similarly, He et al. [12] proposed MoCo v1, to maximize the proximity between the positive pairs based on a momentum network encoder. Additional studies similar to SimCLR and MoCo, including BYOL [38], SimSiam [39], and Barlow Twins [40], show the exceptional performance of the learned feature representations for further supervised or unsupervised tasks.
+
229
+ 𝑥1
230
+ 𝑥2
231
+ 𝑥𝑁−1
232
+ 𝑥𝑁
233
+ 𝐟1
234
+ 𝐟2
235
+ 𝐟𝑁−1
236
+ 𝐟𝑁
237
+
238
+ 𝐟1
239
+ 𝐟2
240
+ 𝐟𝑁−1
241
+ 𝐟𝑁
242
+ Pull
243
+ Push
244
+ Learning Hybrid Metrics
245
+ Softmax
246
+ Encoder F(∙; 𝜽)
247
+ Classification
248
+ Normal
249
+ Normal
250
+
251
+ Fibrosis
252
+ Glaucoma
253
+ MLP
254
+ Feature Extraction
255
+ MLP
256
+ Input
257
+ 𝐿2
258
Fig. 2. The overview of our proposed framework (example of fundus disease diagnosis). We use a pretrained convolutional neural network (CNN) and a multi-layer perceptron (MLP) to encode each image into an embedded feature. Afterward, we propose a metric learning branch that is supervised with the proposed GHM-Loss, which is trained together with the cross-entropy loss of a classification branch in a multi-task scheme.
III. METHODOLOGY
A. Overview
Our proposed supervised metric learning framework is illustrated in Figure 2. We first denote a training image dataset D = {x_i, y_i}_{i=1}^N, where y_i is the label of image x_i; the set of indices of all positive samples for a randomly selected image x_i in a batch, U(i) = {j ∈ Θ | y_j = y_i, j ≠ i}; and the set of indices of all negative samples for x_i in a batch, V(i) := {j ∈ Θ | y_j ≠ y_i, j ≠ i}, where Θ denotes the index set of the batch. The problem is to learn a network F(·; θ) that maps each input x_i to an L2-normalized d-dimensional feature embedding f_i, i.e., f_i = F(x_i; θ) ∈ R^d. To achieve this, we use a pretrained CNN, i.e., ResNet18, followed by an MLP to produce N high-level feature vectors, and perform two supervised learning branches. The first branch is a metric learning task, which aims to learn robust metrics by pulling together all samples with indices in U(i) and pushing away all samples with indices in V(i). Meanwhile, the embedded feature f_i is connected to another MLP layer with a Softmax to generate the predicted probability of the class label y_i and is supervised with the cross-entropy loss to perform a classification task. Below, we elaborate on the procedure of each branch, including the definition of the GHM-Loss and its advantages, and other network details.
B. Generalized Hybrid Metric Loss
1) General Loss Form: To perform the metric learning branch, we propose a general-form metric loss function, by optimizing which the network can learn the proximity information between the embedded features {f_1, f_2, …, f_N}. Let S(·, ·) denote the proximity function for two input vectors f_i and f_j; that is, for f_i, f_j ∈ R^d, S(f_i, f_j) : R^d × R^d → R. Thus, the probability that x_u, u ∈ U(i), is recognized as y_i given x_i is defined by

p(y_i | x_i, x_u) = exp[S(f_i, f_u)] / Σ_{j∈Θ, j≠i} exp[S(f_i, f_j)]    (1)

Likewise, the probability that x_v, v ∈ V(i), is recognized as y_i given x_i is defined by

p(y_i | x_i, x_v) = exp[S(f_i, f_v)] / Σ_{j∈Θ, j≠i} exp[S(f_i, f_j)]    (2)

Next, assuming that the probabilities of different images being recognized as image x_i are independent, and letting q(y_i | x_i, x_v) = 1 − p(y_i | x_i, x_v), the objective likelihood function of interest is defined by

ℓ_i = Π_{u∈U(i)} Π_{v∈V(i)} p(y_i | x_i, x_u) q(y_i | x_i, x_v)    (3)

Correspondingly, the negative log-likelihood over all the data points indexed by Θ yields

L* = − Σ_{i∈Θ} ∥V(i)∥ Σ_{u∈U(i)} log p(y_i | x_i, x_u) − Σ_{i∈Θ} ∥U(i)∥ Σ_{v∈V(i)} log q(y_i | x_i, x_v)    (4)

where ∥U(i)∥ and ∥V(i)∥ denote the sizes of the sets U(i) and V(i), respectively.
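As a concrete illustration, the following NumPy sketch evaluates Eqs. (1)–(4) for a small batch. The function names (`general_metric_loss`, `cosine_proximity`) and the cosine choice of S(·, ·) are our own illustrative assumptions, not the paper's released code:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def cosine_proximity(feats):
    # One admissible proximity S(f_i, f_j): cosine similarity matrix.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return f @ f.T

def general_metric_loss(feats, labels, proximity):
    # Eqs. (1)-(4): for each anchor i, softmax over S(f_i, f_j), j != i,
    # gives p(y_i | x_i, x_j); positives contribute -log p, negatives
    # -log(1 - p), each weighted by the size of the opposite set.
    n = len(feats)
    S = proximity(feats)
    loss = 0.0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        p = softmax(S[i, others])                       # Eq. (1)/(2)
        pos = [k for k, j in enumerate(others) if labels[j] == labels[i]]
        neg = [k for k, j in enumerate(others) if labels[j] != labels[i]]
        loss -= len(neg) * sum(np.log(p[k] + 1e-12) for k in pos)
        loss -= len(pos) * sum(np.log(1 - p[k] + 1e-12) for k in neg)
    return loss
```

Minimizing this loss raises p for same-class pairs (pulling positives together) and lowers it for cross-class pairs (pushing negatives apart).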
2) Geometric Proximity: We first consider the proximity in the geometric space. Given a pair of vectors f_i and f_j, the proximity function S_g(f_i, f_j) satisfies the following properties:
1. S_g(f_i, f_j) ∈ [0, 1];
2. S_g(f_i, f_j) = S_g(f_j, f_i);
3. ∀c ∈ [f_i, f_j], S_g(f_i, f_j) ≤ min{S_g(f_i, c), S_g(c, f_j)}.
The proximity measures that satisfy the above properties include the Cosine similarity.

Proof. Using the Cosine similarity as the proximity metric, S_g(f_i, f_j) = f_i · f_j / (∥f_i∥∥f_j∥) satisfies each property above.
1. Obviously, S_g(f_i, f_j) ∈ [0, 1].
2. Obviously, this property holds by the symmetry of the inner product.
3. Let c ∈ [f_i, f_j]; then |f_i − c| ≤ |f_i − f_j| and |c − f_j| ≤ |f_i − f_j|, which implies S_g(f_i, c) ≥ S_g(f_i, f_j) and S_g(c, f_j) ≥ S_g(f_i, f_j). Thus, S_g(f_i, f_j) ≤ min{S_g(f_i, c), S_g(c, f_j)} for all c ∈ [f_i, f_j].
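Property 3 can be checked numerically for points on the segment [f_i, f_j]; the short sketch below is our own illustration (the specific vectors are arbitrary assumptions):

```python
import numpy as np

def cos_sim(a, b):
    # Cosine similarity S_g(a, b) = a.b / (|a||b|).
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Numeric check of property 3: for every c on the segment [f_i, f_j],
# S_g(f_i, f_j) <= min(S_g(f_i, c), S_g(c, f_j)).
fi = np.array([1.0, 0.2])
fj = np.array([0.3, 1.0])
for t in np.linspace(0.0, 1.0, 11):
    c = (1 - t) * fi + t * fj
    assert cos_sim(fi, fj) <= min(cos_sim(fi, c), cos_sim(c, fj)) + 1e-9
```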
3) Probabilistic Proximity: Instead of using only the geometric proximity, which ignores the sampling probability distribution, we consider a probabilistic proximity to summarize the distribution of the embedded features {f_1, f_2, …, f_N}. Given a pair of vectors f_i and f_j with sizes |f_i| and |f_j|, the probabilistic proximity function satisfies the following properties:
1. S_p(f_i, f_j) ∈ [0, 1];
2. S_p(f_i, f_j) = S_p(f_j, f_i);
3. S_p(f_i, f_j) = 0 if and only if f_i = f_j;
4. S_p(f_i, f_j) ≤ S_p(f_i, f_c) + S_p(f_c, f_j) under a certain condition, in which |f_c| = |f_i| = |f_j|.
We use a Gaussian mixture model (GMM) to represent the empirical distribution of f_i, which is defined by

p(f_i) = Σ_{k∈K} w_k N(f_i; µ_k, σ_k²)    (5)

where w_k is a latent variable following a categorical distribution, denoting the k-th component, and N is the Gaussian probability density function with parameters µ_k and σ_k, defined as

N(f_i; µ_k, σ_k²) = (1 / √(2πσ_k²)) exp(−(f_i − µ_k)² / (2σ_k²))    (6)
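Eqs. (5)–(6) can be sketched directly in NumPy; this is a minimal illustration of the mixture density (the weights and component parameters below are arbitrary assumptions):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma2):
    # Eq. (6): univariate Gaussian density N(x; mu, sigma^2).
    return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

def gmm_pdf(x, weights, mus, sigma2s):
    # Eq. (5): mixture density p(f) = sum_k w_k N(f; mu_k, sigma_k^2),
    # used to summarize the empirical distribution of an embedding.
    return sum(w * gaussian_pdf(x, m, s)
               for w, m, s in zip(weights, mus, sigma2s))
```

Because the weights form a categorical distribution (they sum to one), the mixture integrates to one like any valid density.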
Using this model, the probabilistic distance between p(f_i) and p(f_j) is chosen as a symmetric divergence, i.e., the Jensen–Shannon (JS) divergence. For simplicity, we use p_i and p_j to denote the probability distributions of f_i and f_j, respectively. Therefore, S_p(f_i, f_j) is denoted by

S_p(f_i, f_j) = ½ [d_KL(p_i ∥ p̄_ij) + d_KL(p_j ∥ p̄_ij)]    (7)

where p̄_ij = (p_i + p_j)/2 and d_KL(·∥·) denotes the Kullback–Leibler (KL) divergence.
Proof. We prove that S_p(f_i, f_j) satisfies the properties of the defined probabilistic proximity function.
1. The range of the JS divergence is within 0 and 1 (with base-2 logarithms); thus S_p(f_i, f_j) ∈ [0, 1] is true.
2. Obviously, based on Eq. (7), S_p(f_i, f_j) = ½[d_KL(p_i∥p̄_ij) + d_KL(p_j∥p̄_ij)] = S_p(f_j, f_i).
3. S_p(f_i, f_j) ≥ 0, as a sum of nonnegative terms. To have S_p(f_i, f_j) = 0, each term of S_p(f_i, f_j) must be 0, i.e., d_KL(p_i∥p̄_ij) = d_KL(p_j∥p̄_ij) = 0. Since d_KL(p_i∥p_j) = 0 if and only if p_i = p_j, it follows that S_p(f_i, f_j) = 0 if and only if f_i = f_j.
Now we prove property 4. Using the Shannon entropy H(p) = −Σ_x p(x) log p(x), the explicit form of S_p(f_i, f_j) can be written as

S_p(f_i, f_j) = H(p̄_ij) − ½ [H(p_i) + H(p_j)]

Assume that H(p̄_ic) + H(p̄_cj) ≥ H(p̄_ij) + H(p_c); then S_p(f_i, f_c) + S_p(f_c, f_j) − S_p(f_i, f_j) can be rewritten as

H(p̄_ic) − H(p_c) + H(p̄_cj) − H(p̄_ij) ≥ 0

Thus, S_p(f_i, f_c) + S_p(f_c, f_j) ≥ S_p(f_i, f_j) holds if and only if H(p̄_ic) + H(p̄_cj) ≥ H(p̄_ij) + H(p_c).
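Eq. (7) is straightforward to sketch for discrete distributions; the snippet below is our own minimal illustration, using base-2 logarithms so that the JS value falls in [0, 1]:

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    # d_KL(p || q) with base-2 logs; eps guards against log(0).
    p = p + eps
    q = q + eps
    return float(np.sum(p * np.log2(p / q)))

def js_proximity(p, q):
    # Eq. (7): S_p = 0.5 * [d_KL(p || m) + d_KL(q || m)], m = (p + q) / 2.
    m = 0.5 * (p + q)
    return 0.5 * (kl_div(p, m) + kl_div(q, m))
```

Symmetry (property 2) and the identity-of-indiscernibles condition (property 3) follow directly from this form.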
C. Learning Hybrid Proximity
1) Generalized Hybrid Metric Loss: The learning objective of the metric learning branch is the convex combination of the geometric proximity loss and the probabilistic proximity loss. As such, the objective is denoted by

L*_GHM = λ L*_g + (1 − λ) L*_p    (8)

where λ ∈ [0, 1] is the weighting factor that balances the geometric proximity loss, L*_g, and the probabilistic proximity loss, L*_p. In this way, the network is able to capture both geometric and probabilistic information during the training process.
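The convex combination in Eq. (8) reduces to a one-line helper; the function name below is our own illustrative choice:

```python
def ghm_loss(l_geometric, l_probabilistic, lam=0.5):
    # Eq. (8): L*_GHM = lambda * L*_g + (1 - lambda) * L*_p, lambda in [0, 1].
    return lam * l_geometric + (1.0 - lam) * l_probabilistic
```

With lam = 1 the objective falls back to a purely geometric metric loss, and with lam = 0 to a purely probabilistic one.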
2) Comparing With Geometric Proximity: To show the advantage of including the probabilistic proximity loss in the metric learning branch from a probabilistic view, we compare the geometric proximity and the probabilistic proximity between two probability distributions.
Consider a KL divergence between f_i and f_j. For simplicity, we use p(x) and q(x) to represent p(f_i) and p(f_j), respectively, and assume p(x) = N(µ_p, σ_p²) and q(x) = N(µ_q, σ_q²). Thus, the expanded form of d_KL(p∥q) for two Gaussians is

d_KL(p∥q) = ∫ p(x) log p(x) dx − ∫ p(x) log q(x) dx    (9)

Here, we derive the result following [41]. The first term, ∫ p(x) log p(x) dx, can be expanded as

−∫ p(x) log √(2πσ_p²) dx − ∫ p(x) (x − µ_p)²/(2σ_p²) dx = −log √(2πσ_p²) − (1/(2σ_p²)) ∫ p(x)(x − µ_p)² dx    (10)

Next, we expand the quadratic form:

−log √(2πσ_p²) − (1/(2σ_p²)) [E_p(x²) − E_p(x)²] = −log √(2πσ_p²) − ½    (11)

Following the same derivation, ∫ p(x) log q(x) dx can be expanded as

∫ p(x) log q(x) dx = −log √(2πσ_q²) − (σ_p² + (µ_p − µ_q)²)/(2σ_q²)    (12)

Assuming σ_p² = σ_q² = c, where c is a constant, and combining Eqs. (10)–(12), the KL divergence between p(x) and q(x) is given by

d_KL(p∥q) = (µ_p − µ_q)²/(2c)    (13)

which is proportional to the squared L2 distance between the two means and scaled by the variance c, showing that the probabilistic proximity also considers the variation of the sampling distribution, while the geometric proximity does not. This derivation also supports the phenomenon in Figure 1.
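The closed form in Eq. (13) can be verified numerically against a direct integral of the KL definition; the sketch below is our own check (grid bounds and test values are arbitrary assumptions):

```python
import numpy as np

def gaussian_kl_numeric(mu_p, mu_q, c, lo=-40.0, hi=40.0, n=400001):
    # Riemann-sum estimate of d_KL(p || q) for p = N(mu_p, c), q = N(mu_q, c).
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    p = np.exp(-(x - mu_p) ** 2 / (2 * c)) / np.sqrt(2 * np.pi * c)
    # log(p/q) simplifies exactly for equal-variance Gaussians:
    log_ratio = (-(x - mu_p) ** 2 + (x - mu_q) ** 2) / (2 * c)
    return float(np.sum(p * log_ratio) * dx)

mu_p, mu_q, c = 1.0, 3.0, 2.0
closed_form = (mu_p - mu_q) ** 2 / (2 * c)   # Eq. (13)
```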
D. Network Implementation Details
As illustrated in Figure 2, the proposed framework consists of a feature extraction backbone and two supervised learning branches: one for metric learning and one for classification. We used a pretrained ResNet18 [42], following the same setting as the previous work [36]. We applied max pooling to the attention map after the last layer of the residual block in ResNet18. Then, we flattened the output into a vector and sequentially connected it with an MLP layer, batch normalization, and ReLU to reduce the feature dimension to 128. Next, each f_i 1) was connected with an L2 normalization layer, i.e., ∥f_i∥ = 1, to calculate the hybrid proximity in the metric learning branch, and 2) was connected to another MLP layer and a Softmax for classification.
The classification branch takes the input batch {x_i}_{i=1}^b to generate a prediction output. We optimized the cross-entropy loss, L_CE, together with the metric learning loss, L*_GHM, in a multi-task learning scheme. Thus, we defined our total objective as the weighted combination of the metric learning branch and the classification branch:

L_total = β L*_GHM + L_CE    (14)

where β indicates the weighting factor that controls the importance of the GHM-Loss. In our experiments, we set β = 1 and λ = 0.5; we also analyze the effects of both β and λ using a grid search. Each input image of a batch was randomly scaled within a factor range of [0.3, 1.0] and cropped into patches of size 224 × 224. We set the batch size b = 8 and trained our framework using the Adam optimizer, with the learning rate and weight decay both set to 0.0001. We trained our network for 2000 epochs. The whole framework was implemented using Python 3.8, Scikit-Learn 0.24.1, PyTorch 1.9.1, and CUDA 11.1 with an NVIDIA GeForce GTX 1660 SUPER GPU.
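Combining Eq. (14) with Eq. (8), the full training objective can be sketched as follows; the helper name is our own, and the defaults mirror the paper's settings (β = 1, λ = 0.5):

```python
def total_loss(l_geometric, l_probabilistic, l_cross_entropy,
               beta=1.0, lam=0.5):
    # Eq. (14): L_total = beta * L*_GHM + L_CE,
    # with L*_GHM the convex combination from Eq. (8).
    l_ghm = lam * l_geometric + (1.0 - lam) * l_probabilistic
    return beta * l_ghm + l_cross_entropy
```

Setting beta = 0 recovers a plain cross-entropy classifier, which is the ablation baseline examined later in Table IV.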
IV. DATA AND EXPERIMENTS
A. Datasets
To show the effectiveness of our method, following Li et al. [36], we perform two binary (normal vs. abnormal) classification tasks, diagnosing pathological myopia (PM) and age-related macular degeneration (AMD) on two public ophthalmic disease datasets, iChallenge-PM and iChallenge-AMD.
1) iChallenge-PM: iChallenge-PM [43] contains 1200 annotated retinal fundus images, of which 50% are PM subjects. More details of the iChallenge-PM dataset can be found in [43]. We perform 10-fold cross-validation to evaluate our method.
2) iChallenge-AMD: The iChallenge-AMD dataset [44] contains a total of 1200 color fundus images, of which 77% are non-AMD subjects and 23% are AMD subjects. It provides the disc boundaries and fovea locations, as well as the boundaries of various kinds of lesions. More details of the iChallenge-AMD dataset can be found in [44]. Note that we only used the training split (400 fundus images), since only the training split is released with annotations. We perform 10-fold cross-validation to evaluate our method.
B. Model Comparison Setting
1) Evaluation Metrics: We used AUC, accuracy, precision, recall, and F1-score to assess the classification performance. AUC stands for the Area Under the Receiver Operating Characteristic (ROC) curve. Accuracy, precision, recall, and F1-score are defined as:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 · Precision · Recall / (Precision + Recall)

where TP, TN, FP, and FN indicate the numbers of true positives, true negatives, false positives, and false negatives, respectively.
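These four definitions translate directly into a small helper; the function name and the test counts below are our own illustrative choices:

```python
def classification_metrics(tp, tn, fp, fn):
    # Computes accuracy, precision, recall, and F1 from the confusion
    # counts, exactly as defined above.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```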
To provide a statistical analysis of our method, we conducted a non-parametric Wilcoxon test [45] with an α level of 0.05. A p-value less than 0.05 is considered statistically significant for all inferences. All statistical tests in the experiments were performed using R-4.0.3 (RStudio, Boston, MA, USA).
2) Competing State-of-the-Art Methods: For a fair comparison, we trained all peer methods with the same pretrained ResNet18, hyperparameters, network architectures, and optimizer under 10-fold cross-validation. Since our framework consists of metric learning and classification branches, we fixed the classification branch and only modified the metric learning part when comparing with other metric learning methods. Our proposed method was compared with other deep metric learning methods: Siamese [5], Triplet [2], SupCon [6], N-pair [21], and InfoNCE [46]. We ran these metric learning methods with their released code on the iChallenge-PM and iChallenge-AMD datasets. We also provided a supervised ‘Baseline’ method by modifying the output size of the last fully connected layer of ResNet18 to 2 and training it with the cross-entropy loss.
C. Comparison on the iChallenge-PM Dataset
We compared our method with other state-of-the-art methods on the iChallenge-PM dataset. The results are shown in Table I. Every method achieves over 95% on all evaluation metrics, which indicates that the patterns of pathological myopia in color fundus images are obvious. N-pair [21] achieved a limited result because this method requires large amounts of annotated training data, which may not be suitable for color fundus images. Notably, our method significantly outperformed the other peer metric learning methods, with 99.08% (p<0.0001) AUC and 99.01% (p<0.0001) accuracy for PM diagnosis. These results further demonstrate the effectiveness of our method compared to other state-of-the-art metric learning methods.

TABLE I
MODEL COMPARISONS WITH OTHER DEEP METRIC LEARNING METHODS ON THE ICHALLENGE-PM DATASET (UNIT: %).

Method         AUC    Accuracy  Precision  Recall  F1
Baseline       96.01  95.45     94.51      97.25   95.34
Siamese [5]    97.45  97.30     96.15      96.60   96.58
Triplet [2]    97.95  98.64     97.49      96.14   97.21
SupCon [6]     98.06  98.22     98.36      97.29   97.64
N-pair [21]    95.36  95.83     96.41      97.25   96.12
InfoNCE [46]   98.11  97.91     96.83      97.59   97.36
Ours           99.08  99.01     98.08      99.12   98.40
D. Comparison on the iChallenge-AMD Dataset
We compared our method with other state-of-the-art methods on the iChallenge-AMD dataset. As shown in Table II, our method achieved the best prediction performance among the competing metric learning methods. Compared to the second-best method, InfoNCE [46], our method significantly improved the performance, i.e., 78.69% vs. 76.75% (p<0.0001) AUC and 88.04% vs. 86.51% (p<0.0001) accuracy. Notably, our method also outperformed the supervised ‘Baseline’ method on all evaluation metrics. These results demonstrate the effectiveness of the proposed method.

TABLE II
MODEL COMPARISONS WITH OTHER DEEP METRIC LEARNING METHODS ON THE ICHALLENGE-AMD DATASET (UNIT: %).

Method         AUC    Accuracy  Precision  Recall  F1
Baseline       76.51  84.16     82.54      76.18   78.86
Siamese [5]    67.58  82.45     72.54      68.26   70.14
Triplet [2]    69.52  84.29     76.87      72.48   73.21
SupCon [6]     73.24  85.64     78.42      74.15   76.05
N-pair [21]    69.58  83.41     75.14      70.54   71.86
InfoNCE [46]   76.75  86.51     85.36      72.35   77.95
Ours           78.69  88.04     82.95      75.28   78.24
E. Comparison with Transfer Learning Models
To show the robustness of the learned features of our method, we compared it with ImageNet-pretrained models, including VGG-19 [47], InceptionNet v1 [48], and EfficientNet B0 [49], on the iChallenge-AMD dataset. We modified the output channel of the last fully connected layer in each pretrained model to 2 and trained them with the cross-entropy loss. For a fair comparison, all models were trained with the same number of epochs, learning rate, and weight decay under 10-fold cross-validation. The results are shown in Table III. EfficientNet B0 achieves the best prediction performance among the transfer learning models. Compared to EfficientNet B0, our method achieves higher prediction performance, by around 1.5% (p<0.0001) on AUC and 7% (p<0.0001) on accuracy. Note that we trained our method with only 400 color fundus images and still performed better than the ImageNet models, which were pretrained with more than 1 million natural images. This observation further shows the practical value of our method.

TABLE III
MODEL COMPARISONS WITH IMAGENET TRANSFER LEARNING MODELS ON THE ICHALLENGE-AMD DATASET (UNIT: %).

Method               AUC    Accuracy  Precision  Recall  F1
VGG-19 [47]          74.14  81.52     76.54      72.36   73.89
Inception v1 [48]    76.32  77.35     78.39      75.54   76.28
EfficientNet B0 [49] 77.25  81.52     80.32      79.25   79.52
Ours                 78.69  88.04     82.95      75.28   78.24
F. Analytical Study

TABLE IV
THE IMPORTANCE OF THE GHM-LOSS IN THE METRIC LEARNING BRANCH ON THE ICHALLENGE-AMD DATASET (UNIT: %).

Setting   AUC    Accuracy  Precision  Recall  F1
β = 0.0   75.41  83.21     80.54      72.88   76.15
β = 0.5   76.85  85.42     80.95      74.54   77.28
β = 1.0   78.69  88.04     82.95      75.28   78.24
β = 2.0   72.45  79.41     77.66      70.23   73.59
1) Importance of the GHM-Loss: The proposed method consists of a metric learning branch and a classification branch in a multi-task scheme, in which the GHM-Loss is trained together with the cross-entropy loss. In this section, we analyze the importance of the GHM-Loss on the iChallenge-AMD dataset. We first fix λ = 0.5 in the GHM-Loss and train our framework with different values of β in Eq. (14), where β controls the importance of the metric learning branch. β = 0.0 denotes that the framework is trained with the cross-entropy loss only; as β increases, the GHM-Loss receives more weight in the network training.
The results are shown in Table IV. When β = 0.0, the network only learns the classification branch and achieves 75.41% AUC and 83.21% accuracy. As β increases, the prediction performance improves, peaking at β = 1 (e.g., 78.69% AUC, 88.04% accuracy). However, as β continues to increase, the prediction performance drops noticeably, from 78.69% to 72.45% AUC. This comparison shows that the metric learning branch and the classification branch contribute roughly equally to our framework for disease diagnosis.
2) Effects of Weighting Factors in the GHM-Loss: We analyzed the effects of the weighting factors β and λ in the GHM-Loss on the iChallenge-AMD dataset, where β indicates the importance of the metric learning branch and λ controls the balance between the geometric proximity and the probabilistic proximity in the GHM-Loss. Note that λ = 0.0 denotes that only the probabilistic proximity is considered between f_i and f_j. As shown in Figure 3, for each fixed β, the classification performance increases to its best when λ reaches 0.5 and drops noticeably as λ continues to increase. These results demonstrate that 1) both the metric learning and classification branches are useful in our method, and 2) both the geometric and probabilistic proximity should be captured between f_i and f_j during training.
Fig. 3. Classification performance comparison on the iChallenge-AMD dataset with different weighting factors β and λ of the GHM-Loss. We use AUC to choose the optimal β and λ via a grid search.
3) Visualization of the Feature Distribution: We visualized the feature embedding distributions, i.e., f_1 (red line) and f_2 (blue line), after ResNet18 for a positive pair of color fundus images from the iChallenge-AMD dataset. The feature distributions are shown in Figure 4. Before optimization, the distributions of the feature embeddings from a positive pair are separated, with little overlap. After optimizing the network, however, the probabilistic distance between f_1 and f_2 is reduced and the two distributions stay close to each other. Since we use a GMM to approximate the empirical distribution of each feature embedding, the probability parameters µ and σ of all the images with the same label should be close to each other, resulting in similar probability densities. This visualization also demonstrates that the proposed GHM-Loss can efficiently capture the probabilistic patterns during the training process.
[Fig. 4 panels: Before Optimization | After Optimization]
Fig. 4. The feature distributions of a positive pair, f_1 (red line) and f_2 (blue line), of color fundus images during the training process on the iChallenge-AMD dataset. We applied a Gaussian mixture model (GMM) to approximate the empirical distributions of these features. The probabilistic proximity between f_1 and f_2 is reduced after optimization.

V. DISCUSSION
Metric learning is an important technique in the visual representation area: by learning a distance metric, it can further support supervised and unsupervised learning tasks such as image classification [6], [12], [13], image clustering [14], [50], and object detection [20], [51]. With the advances of deep learning techniques, deep metric learning has been widely studied in the metric learning research community. Although promising results were obtained in previous works [2], [5], [6], [21], [46], these methods usually ignore the probability distribution of the feature embeddings during the training process, which may lead to inaccurate predictions. In this work, we present a novel supervised metric learning method that learns both geometric and probabilistic proximity for image recognition. We formulate a Generalized Hybrid Metric Loss (GHM-Loss) to better learn the distance representation, in which both a geometric-based distance and a probabilistic-based distance are learned. Our method is validated on two public ophthalmic disease datasets (iChallenge-PM and iChallenge-AMD), on which it significantly outperforms other state-of-the-art metric learning methods. With a convex combination of the geometric proximity and the probabilistic proximity, our method consistently achieves better prediction performance than either proximity alone.
Although our method outperforms other state-of-the-art metric learning methods, it comes with limitations. Our method is a supervised learning approach, which relies on a large amount of annotated training data that is costly to obtain. In the future, we will investigate unsupervised or self-supervised metric learning approaches to reduce the annotation effort in image recognition. The exploration of probabilistic unsupervised/self-supervised metric learning will be our future work.

VI. CONCLUSION
In this paper, we present a novel supervised metric learning method for image recognition. Our main idea is to learn a hybrid proximity that consists of both a geometric-based metric and a probabilistic-based metric. The geometric proximity of the proposed GHM-Loss helps the model learn similarity information in the Euclidean space, and the probabilistic proximity of the proposed GHM-Loss learns similarity under the empirical probability distribution. In extensive experiments, our method consistently achieves excellent prediction performance compared with other state-of-the-art metric learning methods, showing the effectiveness of the learned distance features for image recognition.

REFERENCES
[1] B. Kulis et al., “Metric learning: A survey,” Foundations and Trends® in Machine Learning, vol. 5, no. 4, pp. 287–364, 2013.
[2] E. Hoffer and N. Ailon, “Deep metric learning using triplet network,” in International Workshop on Similarity-Based Pattern Recognition. Springer, 2015, pp. 84–92.
[3] J. Wang, F. Zhou, S. Wen, X. Liu, and Y. Lin, “Deep metric learning with angular loss,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2593–2601.
[4] M. Kaya and H. Ş. Bilge, “Deep metric learning: A survey,” Symmetry, vol. 11, no. 9, p. 1066, 2019.
[5] G. Koch, R. Zemel, R. Salakhutdinov et al., “Siamese neural networks for one-shot image recognition,” in ICML Deep Learning Workshop, vol. 2. Lille, 2015, p. 0.
[6] P. Khosla et al., “Supervised contrastive learning,” Advances in Neural Information Processing Systems, vol. 33, pp. 18661–18673, 2020.
[7] L. Yang and R. Jin, “Distance metric learning: A comprehensive survey,” Michigan State University, vol. 2, no. 2, p. 4, 2006.
[8] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon, “Information-theoretic metric learning,” in Proceedings of the 24th International Conference on Machine Learning, 2007, pp. 209–216.
[9] P. H. Le-Khac, G. Healy, and A. F. Smeaton, “Contrastive representation learning: A framework and review,” IEEE Access, vol. 8, pp. 193907–193934, 2020.
[10] A. Jaiswal, A. R. Babu, M. Z. Zadeh, D. Banerjee, and F. Makedon, “A survey on contrastive self-supervised learning,” Technologies, vol. 9, no. 1, p. 2, 2020.
[11] P. Wang, K. Han, X.-S. Wei, L. Zhang, and L. Wang, “Contrastive learning based hybrid networks for long-tailed image classification,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 943–952.
[12] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, “Momentum contrast for unsupervised visual representation learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 9729–9738.
[13] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple framework for contrastive learning of visual representations,” in International Conference on Machine Learning. PMLR, 2020, pp. 1597–1607.
[14] K. Do, T. Tran, and S. Venkatesh, “Clustering by maximizing mutual information across views,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 9928–9938.
[15] H. Zhong et al., “Graph contrastive clustering,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 9224–9233.
[16] K. Chaitanya, E. Erdil, N. Karani, and E. Konukoglu, “Contrastive learning of global and local features for medical image segmentation with limited annotations,” Advances in Neural Information Processing Systems, vol. 33, pp. 12546–12558, 2020.
[17] H. Hu, J. Cui, and L. Wang, “Region-aware contrastive learning for semantic segmentation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 16291–16301.
[18] X. Chen et al., “Unpaired deep image deraining using dual contrastive learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 2017–2026.
[19] M. Zheng et al., “Weakly supervised contrastive learning,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 10042–10051.
[20] D. Kim, D. Jeong, H. Kim, K. Chong, S. Kim, and H. Cho, “Spatial contrastive learning for anomaly detection and localization,” IEEE Access, vol. 10, pp. 17366–17376, 2022.
[21] K. Sohn, “Improved deep metric learning with multi-class n-pair loss objective,” Advances in Neural Information Processing Systems, vol. 29, 2016.
[22] C.-Y. Wu, R. Manmatha, A. J. Smola, and P. Krahenbuhl, “Sampling matters in deep embedding learning,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2840–2848.
[23] E. Xing, M. Jordan, S. J. Russell, and A. Ng, “Distance metric learning with application to clustering with side-information,” Advances in Neural Information Processing Systems, vol. 15, 2002.
[24] K. Q. Weinberger and L. K. Saul, “Distance metric learning for large margin nearest neighbor classification,” Journal of Machine Learning Research, vol. 10, no. 2, 2009.
[25] S. Wold, K. Esbensen, and P. Geladi, “Principal component analysis,” Chemometrics and Intelligent Laboratory Systems, vol. 2, no. 1-3, pp. 37–52, 1987.
[26] P. Paatero and U. Tapper, “Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values,” Environmetrics, vol. 5, no. 2, pp. 111–126, 1994.
[27] L. Yang, “An overview of distance metric learning,” in Proceedings of the Computer Vision and Pattern Recognition Conference, 2007.
[28] J. Hu, J. Lu, and Y.-P. Tan, “Discriminative deep metric learning for face verification in the wild,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1875–1882.
1095
+ [29] H. Dong, K. Song, Q. Wang, Y. Yan, and P. Jiang, “Deep metric learning-
1096
+ based for multi-target few-shot pavement distress classification,” IEEE
1097
+ Transactions on Industrial Informatics, vol. 18, no. 3, pp. 1801–1810,
1098
+ 2021.
1099
+ [30] J. V. Sundgaard et al., “Deep metric learning for otitis media classifica-
1100
+ tion,” Medical Image Analysis, vol. 71, p. 102034, 2021.
1101
+ [31] M. Zhou and V. M. Patel, “Enhancing adversarial robustness for deep
1102
+ metric learning,” in Proceedings of the IEEE/CVF Conference on
1103
+ Computer Vision and Pattern Recognition, 2022, pp. 15 325–15 334.
1104
+ [32] W. Dai, X. Li, W. H. K. Chiu, M. D. Kuo, and K.-T. Cheng, “Adaptive
1105
+ contrast for image regression in computer-aided disease assessment,”
1106
+ IEEE Transactions on Medical Imaging, vol. 41, no. 5, pp. 1255–1268,
1107
+ 2021.
1108
+ [33] C.-Y. Chuang, J. Robinson, Y.-C. Lin, A. Torralba, and S. Jegelka, “De-
1109
+ biased contrastive learning,” Advances in neural information processing
1110
+ systems, vol. 33, pp. 8765–8775, 2020.
1111
+ [34] T. Park, A. A. Efros, R. Zhang, and J.-Y. Zhu, “Contrastive learning
1112
+ for unpaired image-to-image translation,” in European conference on
1113
+ computer vision.
1114
+ Springer, 2020, pp. 319–345.
1115
+ [35] M. Kang and J. Park, “Contragan: Contrastive learning for conditional
1116
+ image generation,” Advances in Neural Information Processing Systems,
1117
+ vol. 33, pp. 21 357–21 369, 2020.
1118
+ [36] X. Li et al., “Rotation-oriented collaborative self-supervised learning
1119
+ for retinal disease diagnosis,” IEEE Transactions on Medical Imaging,
1120
+ vol. 40, no. 9, pp. 2284–2294, 2021.
1121
+ [37] M. Ye, X. Zhang, P. C. Yuen, and S.-F. Chang, “Unsupervised em-
1122
+ bedding learning via invariant and spreading instance feature,” in Pro-
1123
+ ceedings of the IEEE/CVF Conference on Computer Vision and Pattern
1124
+ Recognition, 2019, pp. 6210–6219.
1125
+ [38] J.-B. Grill et al., “Bootstrap your own latent-a new approach to self-
1126
+ supervised learning,” Advances in neural information processing sys-
1127
+ tems, vol. 33, pp. 21 271–21 284, 2020.
1128
+ [39] X. Chen and K. He, “Exploring simple siamese representation learning,”
1129
+ in Proceedings of the IEEE/CVF Conference on Computer Vision and
1130
+ Pattern Recognition, 2021, pp. 15 750–15 758.
1131
+ [40] J. Zbontar, L. Jing, I. Misra, Y. LeCun, and S. Deny, “Barlow twins:
1132
+ Self-supervised learning via redundancy reduction,” in International
1133
+ Conference on Machine Learning.
1134
+ PMLR, 2021, pp. 12 310–12 320.
1135
+ [41] C. P. Robert, “Intrinsic losses,” Theory and decision, vol. 40, no. 2, pp.
1136
+ 191–214, 1996.
1137
+ [42] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image
1138
+ recognition,” in Proceedings of the IEEE conference on computer vision
1139
+ and pattern recognition, 2016, pp. 770–778.
1140
+ [43] H. Fu et al., “Palm: Pathologic myopia challenge,” IEEE Dataport, 2019.
1141
+ [44] H. Fang et al., “Adam challenge: Detecting age-related macular degen-
1142
+ eration from fundus images,” IEEE Transactions on Medical Imaging,
1143
+ 2022.
1144
+ [45] R. F. Woolson, “Wilcoxon signed-rank test,” Wiley encyclopedia of
1145
+ clinical trials, pp. 1–3, 2007.
1146
+ [46] A. v. d. Oord, Y. Li, and O. Vinyals, “Representation learning with
1147
+ contrastive predictive coding,” arXiv preprint arXiv:1807.03748, 2018.
1148
+ [47] K. Simonyan and A. Zisserman, “Very deep convolutional networks for
1149
+ large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
1150
+ [48] C. Szegedy et al., “Going deeper with convolutions,” in Proceedings of
1151
+ the IEEE conference on computer vision and pattern recognition, 2015,
1152
+ pp. 1–9.
1153
+ [49] M. Tan and Q. Le, “Efficientnet: Rethinking model scaling for con-
1154
+ volutional neural networks,” in International conference on machine
1155
+ learning.
1156
+ PMLR, 2019, pp. 6105–6114.
1157
+ [50] Y. Li, P. Hu, Z. Liu, D. Peng, J. T. Zhou, and X. Peng, “Contrastive
1158
+ clustering,” in Proceedings of the AAAI Conference on Artificial Intelli-
1159
+ gence, vol. 35, no. 10, 2021, pp. 8547–8555.
1160
+ [51] E. Xie et al., “Detco: Unsupervised contrastive learning for object
1161
+ detection,” in Proceedings of the IEEE/CVF International Conference
1162
+ on Computer Vision, 2021, pp. 8392–8401.
1163
+
DNFQT4oBgHgl3EQf_zdP/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
E9E1T4oBgHgl3EQfEgPe/content/tmp_files/2301.02892v1.pdf.txt ADDED
@@ -0,0 +1,1366 @@
+ Disorder-induced finite center-of-mass momentum Cooper pairing and its consequences to the critical temperature and superconducting gap of overdoped cuprates
+ Victor Velasco1 and Marcello B. Silva Neto1
+ 1Instituto de Física, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, Brazil
+ One of the most studied classes of unconventional high-temperature superconductors is the hole-doped cuprates, where special attention is given to those doped with extra interstitial oxygens. In this context, the formation of spatially inhomogeneous agglomerates of dopant oxygen atoms in the form of nanosized puddles is not only relevant, but also the subject of intense recent experimental and theoretical surveys. Following these efforts, in this work we show the consequences of the presence of networks of oxygen puddles for the superconducting state of overdoped cuprates. Starting from the inhomogeneous disordered background brought by the network of puddles, we show that an effective interaction between electrons can be mediated by the local vibrational degrees of freedom of each puddle, but the pairs arising from this interaction have a finite center-of-mass momentum p, thus breaking up the Cooper channel. Furthermore, we derive an analytical expression for the amplitude of the superconducting gap ∆k in terms of disorder and finite center-of-mass momentum, and show that amplitude fluctuations are induced in the superconducting state by the presence of the puddles, where both the gap and the critical temperature are affected and reduced by disorder and finite-momentum pairs. Finally, we discuss our findings in the context of networks of superconducting oxygen nano-puddles in cuprates.
+ I. INTRODUCTION
+ It is a well known fact within the Bardeen-Cooper-Schrieffer (BCS) theory of superconductivity that the two quasi-particles forming the bound states that constitute the superconductor, named Cooper pairs, have momentum k and −k, near the Fermi surface, with opposite spins ↑ and ↓, forming a singlet with zero center-of-mass momentum [1], in what is usually called the Cooper channel. However, the existence of a finite-momentum superconducting ground state has recently been raised theoretically [2–7] and supported by several experiments in correlated quantum materials [8–11]. Moreover, the possibility of emergent finite-momentum pair states, in the form of pair density waves, in a variety of well-established superconducting compounds, for example transition-metal dichalcogenides and cuprates [12], points to the importance of understanding the intrinsic characteristics of these states and their interplay with other common features present in these systems, such as disorder [13] and the presence of magnetic fields [14].
+ Although condensed matter models usually start from the notion of a perfect crystal, a plethora of notable effects are only accessible when this notion is no longer true. One famous example is the problem of high-Tc superconductivity in cuprates, in which a region of d-wave pairing occurs in the form of a dome-shaped area as a function of doping in its phase diagram. Here, doping, either intentional or accidental, usually takes place, for example, via chemical substitution in La2−xSrxCuO4 [15], or via inclusion of interstitial dopant oxygen atoms (Oi) in Bi2Sr2CaCu2O8+δ [16], La2CuO4+y [17] or YBa2Cu3O6.5+y [18], which can be treated as point-like scattering centers as well as extended defects that introduce disorder and deviate the neighboring atoms from their crystallographic positions. This brings to light a fundamental question regarding the dome-shaped area of high-temperature superconductivity in cuprates: what mechanism is responsible for the reduction in Tc upon overdoping, as well as for the subsequent disappearance of superconductivity at a critical doping? Usually, this is ascribed to intrinsic effects, in which pairing correlations diminish with doping, due to screening of local Coulomb interactions [19], but some authors have also addressed the role of disorder in suppressing superconductivity [20–22]. Disorder, however, is usually incorporated as random on-site energies in Hubbard-like models that can lead to Anderson localization phenomena [23–25]; thus it is important to extend these effects to include also the possibility of severe structural disorder within finite regions of the crystal.
+ One of the most significant results from the study of disorder effects in superconductivity is the well known Anderson's theorem, which states that both the transition temperature, Tc, and the isotropic gap, ∆0, of s-wave superconductors are insensitive to the presence of weak disorder at the mean-field level of BCS-like models [26–28]. One of the requirements of the theorem is that the density of states remains unchanged when compared to the pure metal case. As such, if the influence of disorder is strong enough to deplete the density of states, the theorem no longer holds, and disorder dramatically affects superconductivity [29]. In this case of strong disorder and high concentration of impurity centers, the superconducting correlation length is comparable to the disorder correlation length, and the mean-field equations can lead to self-organized granularity where fluctuations of the local order parameter are present [30]. This is likely to be the case for overdoped cuprate superconductors with high concentration of interstitial oxygens, which can lead to the formation of nanosized oxygen puddles, regions with agglomeration of Oi that support superconductivity [17, 18, 31–33].
+ arXiv:2301.02892v1 [cond-mat.supr-con] 7 Jan 2023
+ The case of unconventional high temperature d-wave superconductivity in hole-doped cuprates has been of experimental and theoretical relevance since its discovery [34]. Apart from several different physical characteristics, one of the main differences between these materials and the conventional BCS superconductors is that the superconducting gap amplitude is not homogeneous when the system undergoes the superconducting transition. This is evidenced by scanning tunneling microscopy (STM) spectra in Bi2Sr2CaCu2O8+δ at different doping levels, where the inhomogeneous gap in the superconducting regime is revealed to be represented by a variety of gap sizes and amplitudes occurring in all samples as the concentration of dopants is varied [16]. Most remarkably, there is a clear correlation between the position of Oi agglomerates and the amplitudes of the gaps, since regions with larger groups of dopants are observed to correspond to regions of larger gap amplitudes [16]. In parallel, Oi dopants have been observed to self-organize into nanosized regions, or puddles, as mentioned above, via µXRS in HgBa2CuO4+δ [35], as well as in other cuprate compounds [36]. Remarkably, it has been observed that spatial variations in the self-organization of the nanosized Oi-rich puddles have a direct effect on superconductivity, through variations in the critical temperature [37]. Therefore, a deeper understanding, from a theoretical perspective, of the role of the oxygen puddles in the physics of hole-doped cuprates is of paramount importance.
+ In this work, we aim to investigate how the structural disorder caused by the agglomeration of Oi in puddles is responsible for the appearance of finite (nonzero) center-of-mass (CM) momentum Cooper pairs in overdoped cuprates. This will be done by making use of a previously reported model proposed to describe how superconductivity arises in cuprates, on the underdoped side of the phase diagram, in terms of the phase synchronization of networks of nanosized superconducting puddles, rich in interstitial dopant oxygens [38]. Following, we extend the puddle model to derive analytical expressions showing how the superconducting gap, and thus the critical temperature, are affected by the presence of Cooper pairs with finite CM momentum and structural disorder. Finally, we show numerically that both Tc and ∆0 decrease with increasing disorder, thus pointing to a simple physical mechanism to explain the closing of the superconducting dome-shaped area of the phase diagram, as being due to the reduction of the available phase space for Cooper pairing caused by the development of nonzero, finite CM momentum Cooper pairs.
+ This paper is divided as follows: in Sec. II we explain the puddle model, which is the basis for the calculations presented in this work, and derive the effective interaction between electrons and the network of puddles, giving rise to a finite CM momentum pair state. Following, in Sec. III we describe the effects of the structural disorder that the agglomeration of Oi within each puddle causes in the system. In Sec. IV we derive the self-consistent equation for the amplitude of the superconducting gap in terms of disorder and finite CM momentum Cooper pairs. Then Sec. V is devoted to the numerical calculations. Finally, we discuss the implications of our results within the framework of networks of nano-sized puddles and summarize our findings in Sec. VI.
+ II. INHOMOGENEOUS OXYGEN PUDDLES
+ The oxygen-rich nanopuddles have different elastic properties than their surroundings, and can therefore be considered as elastic insertions in an otherwise homogeneous medium, each with its own vibrational mode, forming a network of superconducting nanoscale puddles, as shown in Fig. 1, which is the starting point for the model that captured how superconductivity may arise in cuprates due to the phase synchronization of each nanopuddle [38]. In terms of the Kuramoto model for synchronization of phase oscillators [39, 40], each nanosized puddle is assigned a phase, which in the underdoped regime evolves independently of the others, giving rise to localized patches of superconductivity, as revealed by STM and other techniques. With increased concentration of Oi through doping, the superfluid density is responsible for the enhancement of the interactions between the puddles and, in terms of the Kuramoto model, for locking their phases in a synchronous way. Following a BCS-like procedure, the order parameter for synchronization is connected to the amplitude of the bulk superconducting gap, which is nonzero only after the locking of the global phase in the synchronized phase. The synchronization and the large frequency of the global network of puddles are also responsible for the large values of Tc in the optimally doped cuprates within the model.
+ Figure 1. Pictorial view of the disordered background introduced by the network of puddles (blue) in the system. The network consists of puddles of different sizes, defined by the radius of each insertion. Electrons (black and red) scatter in each puddle and, in the superconducting state, percolate within the network.
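The synchronization picture described above can be illustrated with a minimal mean-field Kuramoto simulation. This is only an illustrative sketch: the oscillator count, frequency distribution, and coupling values are arbitrary stand-ins for the puddle network of Ref. [38], not parameters of that model.

```python
import numpy as np

rng = np.random.default_rng(2)

M = 50                                   # number of puddle phase oscillators
omega = rng.normal(0.0, 1.0, size=M)     # natural frequencies of the puddles
theta0 = rng.uniform(0.0, 2 * np.pi, M)  # random initial phases

def order_parameter(theta):
    # Kuramoto order parameter r = |<exp(i theta)>|; r -> 1 when phases lock
    return np.abs(np.mean(np.exp(1j * theta)))

def evolve(theta, coupling, dt=0.05, steps=4000):
    # mean-field Kuramoto dynamics:
    # dtheta_j/dt = omega_j + K r sin(psi - theta_j), with r e^{i psi} = <e^{i theta}>
    th = theta.copy()
    for _ in range(steps):
        z = np.mean(np.exp(1j * th))
        th = th + dt * (omega + coupling * np.abs(z) * np.sin(np.angle(z) - th))
    return th

r_weak = order_parameter(evolve(theta0, coupling=0.2))   # weakly coupled puddles
r_strong = order_parameter(evolve(theta0, coupling=5.0)) # strongly coupled network
```

For weak inter-puddle coupling the phases drift independently and the order parameter stays small, while for strong coupling the phases lock and the order parameter approaches unity, mimicking the crossover from isolated superconducting patches to a globally phase-coherent network.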
+ Inspired by these experimental and theoretical findings, we introduce a model Hamiltonian that captures the interaction between electrons and the localized vibrations that arise from the agglomeration of interstitial oxygens in one puddle. This interaction must be local, since each electron will only interact with the quantized vibration whenever it is in the region defined by the puddle (see Fig. 1). The minimal model that captures this physical situation can be divided as H = H_el + H_p + H_el−p, with
+ H_{el} = \sum_{k,\sigma} \xi_k c^\dagger_{k,\sigma} c_{k,\sigma} + \sum_{k,k',\sigma} T_{k,k'} c^\dagger_{k',\sigma} c_{k,\sigma},
+ where the first term represents a band of electrons with dispersion ξk measured relative to the chemical potential, with creation c†_{k,σ} and annihilation c_{k,σ} fermionic operators. The second term represents the scattering of electrons off each inhomogeneity described by the puddles, with strength controlled by the spin-preserving momentum-transfer disorder matrix T_{k,k′}. The oxygen puddles are described by local phonon modes
+ H_p = \sum_{q} \hbar\omega_q a^\dagger_q a_q,
+ with frequencies ωq and creation (a†_q) and annihilation (a_q) bosonic operators, responsible for the description of the localized vibration of each puddle. Finally, the interaction term can be described as
+ H_{el-p} = \sum_{r,R,\sigma} g(r - R)\, c^\dagger_{r,\sigma} c_{r,\sigma} \left( a^\dagger_R + a_R \right),
+ where r and R are the electron and puddle locations, respectively. The puddle is a finite-size region in space, thus R defines the center of this region, which can be modeled as a sphere. The interaction strength g(r − R) is only relevant whenever the electron is in the region around the puddle, which can be modeled using a Gogny-type short-range interaction that depends on the radius of the oxygen agglomeration region [41]. After performing the transformation to momentum space, the interaction term is written as
+ H_{el-p} = \sum_{k,k',\sigma,q} M(q, k - k')\, c^\dagger_{k,\sigma} c_{k',\sigma} \left( a^\dagger_{-q} + a_q \right),   (1)
+ where
+ M(q, k - k') = \sum_{R} g(k - k') \exp\left[ i\left(q - [k - k']\right) \cdot R \right]
+ is associated with the fact that the puddles are not present at all sites; rather, they are inhomogeneously distributed around the system, thus the summation has to be restricted to these regions. This is relevant for the case of Bi2Sr2CaCu2O8+δ, since the locations of dopant oxygens are observed to be consistent with the positions inferred from local strain analysis of the incommensurate structure, as imaged by scanning transmission electron microscopy (STEM) [42], which means that the crucial oxygen dopants are periodically distributed in correlation with local strain. However, not all strained regions are occupied with dopant oxygen atoms, that is, the distribution of Oi is inhomogeneous, which justifies our approximation and is consistent with STM measurements [43]. In the limits of a clean or a totally doped system, this term can be treated exactly. The factor g(k − k′) is the Fourier transform of the interacting potential between the electrons and the puddles and controls the momentum transfer between the incoming and scattered electron.
+ One can see from Eq. (1) that the presence of a finite density of puddles spread around the system gives rise to an off-diagonal term associated with the momentum transfer k − k′ that comes from the interacting potential. In the limit where the summation over M(q, k − k′) can be performed exactly, one recovers the usual definition of an electron-phonon interaction, where the momentum transfer is the momentum of the local phononic mode q, as in the Fröhlich [44] and Holstein [45] models, for example. In order to explore the effects of this kind of interaction in the form of pairing, we introduce a unitary transformation H′ = e−SHeS, with an ansatz for the transformation matrix
+ S = \sum_{k,k',\sigma,q} M(q, k - k')\, c^\dagger_{k,\sigma} c_{k',\sigma} \left( x\, a^\dagger_{-q} + y\, a_q \right),
+ where x and y are factors determined a posteriori. After the transformation (see Appendix A for details), we end up with an effective interaction written as
+ H_{eff} = \sum_{k,k'} \sum_{p,p'} V(k, k')\, f(p, p')\, c^\dagger_{k,\uparrow} c^\dagger_{p-k,\downarrow} c_{p'-k',\downarrow} c_{k',\uparrow},   (2)
+ with V(k, k′) = D(k, k′)|g(k − k′)|² being the potential arising from the interaction between electrons and puddles, D(k, k′) the phononic propagator associated with the local phonon modes produced by the vibrating puddles, and f(p, p′) = \sum_R e^{-i(p-p')\cdot R} the phase factor controlling momentum transfer between the interacting electrons. In the regime where the phononic propagator is negative, given that ξk ≈ ξk′, we have an effective attractive interaction between the electrons mediated by the nanopuddles. Remarkably, this interaction leads to the formation of finite center-of-mass momentum Cooper pairs, represented by p and p′. Therefore, from the perspective of inhomogeneously distributed puddles bringing disorder to an otherwise clean medium, a bound state between two electrons can be formed with a finite center-of-mass momentum that is associated with the strength of the interaction between the electrons forming the pair and the agglomeration of interstitial dopant oxygens in one nanopuddle.
+ It is important to notice that the states arising from the effective Hamiltonian in Eq. (2) are different from other proposed pair states with finite CM momentum, as for example the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state, where finite center-of-mass momentum Cooper pairs can be stabilized under a finite magnetic field via the Zeeman coupling [46, 47], and the recently proposed current-driven FFLO state [48]. Moreover, it has been shown that, even without the presence of a magnetic field or other external potentials, a finite CM momentum Cooper pair can be stable in a superconducting ground state, as pointed out in Ref. [7], but the authors do not explore the effects that can give rise to this kind of state. Here we start from the fact that nanosized puddles are formed via doping, and what is responsible for the CM momentum of the pairs is the disorder induced by the puddles in the system.
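How an inhomogeneous distribution of puddle centres relaxes momentum conservation in the pair channel can be checked numerically from the phase factor f(p, p′) = Σ_R exp[−i(p − p′)·R]. The sketch below uses an assumed one-dimensional toy lattice with randomly drawn puddle positions, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                   # sites of a 1D chain, lattice spacing a = 1
sites = np.arange(N)

def phase_factor(centres, dq):
    # f(p, p') = sum_R exp(-i (p - p') . R), summed over the puddle centres R
    return np.sum(np.exp(-1j * dq * centres))

dq = 2 * np.pi * 3 / N   # a nonzero momentum transfer p - p' allowed on the chain

# "Clean" limit: a centre on every site -> f vanishes for p != p' (mod G),
# so only zero centre-of-mass-momentum (Cooper-channel) pairs survive.
f_clean = phase_factor(sites, dq)

# Inhomogeneous puddles: a random subset of sites -> momentum conservation is
# relaxed and finite centre-of-mass momentum transfer acquires nonzero weight.
puddles = rng.choice(sites, size=12, replace=False)
f_dirty = phase_factor(puddles, dq)
```

In the fully occupied limit the geometric sum cancels exactly for any nonzero momentum transfer, while for a dilute, random set of centres the partial sum of phases does not cancel, which is the mechanism by which the pairs in Eq. (2) acquire a finite CM momentum.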
+ Even though we are not considering any specific form for the interaction potential g(k − k′), it is important to comment that the only requirement is that it must be a finite-size potential in real space, which means that it is not a point-like disorder center that scatters the electrons in the interaction term of Eq. (1); rather, it is a region in space defined by the agglomeration of oxygen interstitials. In this case, we can point to potentials like the Woods-Saxon potential [49], which is used to describe the forces applied on protons and neutrons in the atomic nucleus, or the Gogny-type interactions [50–52], which are another kind of nucleon-nucleon potential that has also found applications in astrophysics [53], as possible candidates to describe the electron-puddle interaction. However, a precise and detailed description of such a potential would require more knowledge about the formation of the nanosized puddles and their effects on the crystal structure of the host material, which would affect the electronic degrees of freedom [54], but this is outside the scope of the present study.
+ Figure 2. Top: Structure factor for disordered media from Hosemann's paracrystalline theory [58], given by Eq. (6) in the text. The pristine case corresponds to the ℓ → ∞ limit, where the structure factor is given by delta peaks at reciprocal lattice vectors and momentum is conserved (here ℓ is a measure of disorder and for this reason should be inversely related to the residual resistivity shift due to structural disorder, ℓ ∝ 1/δρ0). Bottom left: Bragg diffraction pattern for a structurally disordered medium, showing Bragg peaks in the central region and Bragg rings in the outer region. Bottom right: plot of the structure factor as a function of momentum transfer, ∆Q, showing well defined Bragg peaks, for small momentum transfer, at the reciprocal lattice vectors, G, while the Bragg peaks become ever broader at larger momentum transfer, eventually merging into rings.
+ III. STRUCTURAL DISORDER
374
+ Before we proceed to the characterization of the supercon-
375
+ ducting state that arises from the effective Hamiltonian de-
376
+ rived in the last section, it is important to briefly discuss which
377
+ kind of disorder is giving the Cooper pairs a finite CM mo-
378
+ mentum. In order to do that, we introduce concepts arising
379
+ from the study of structural disorder, which is the kind of per-
380
+ turbation that the agglomeration of Oi causes in the crystalline
381
+ structure of different cuprate systems, as for example by tilt-
382
+ ing the CuO6 octahedra in La2CuO4+δ [55] and by altering
383
+ the distance between the apical oxygen and the planar copper
384
+ atom in Bi2Sr2CaCu2O8+δ [56].
385
+ Translational invariance is one of the most fundamental
386
+ properties of pristine crystals.
387
+ The concept of a Brillouin
388
+ zone, that repeats itself by translations of reciprocal lattice
389
+ vectors, allows us to organize electrons in energy bands,
390
+ ϵn(k), labeled by a band index, n, and function of a quasi-
391
+ momentum (wave-vector) quantum number, k, in terms of
392
+ which periodic Bloch wave-functions, un,k(r), are defined.
393
+ A perfect crystal is characterized by very intense and sharp peaks in the Fraunhofer diffraction pattern of Bragg scattering experiments. The existence of such sharp peaks follows directly from Heisenberg's uncertainty principle, and their location is determined by the crystalline-lattice structure factor. Simply put, an extended Bloch wave with a well defined momentum state, k, that interacts with ions located at arbitrary positions, ri, of the crystal (infinite uncertainty ∆r → ∞), scatters into another extended Bloch wave with momentum state, k′, with zero uncertainty, ∆k → 0. The entire process carries a phase
+ φ(k′ − k) = (1/√N) Σ_{ri} f_{ri} e^{i(k′−k)·ri},  (3)
+ where N is the number of lattice sites in the crystal and f_{ri} is an atomic form factor that gives the probability that an atom is located at a certain crystallographic position. The scattered intensity is proportional to |φ(k′ − k)|² and is thus determined by the lattice structure factor,
+ S(k′ − k) = (1/N) Σ_{ri,rj} f_{ri} f_{rj} e^{i(k′−k)·(ri−rj)}.  (4)
+ For a pristine crystal all atoms are at their ideal locations, f_{ri} = f_{rj} = 1, and thus S(k′ − k) = Σ_g δ_{k′−k,g}, where g is a reciprocal lattice vector. The Fraunhofer diffraction pattern in this case thus corresponds to δ-like peaks, as shown in Fig. 2, and the kinematic constraint of quasi-momentum conservation,
+ k′ = k + g,  (5)
+ forms the basis for Bloch's theorem. In the opposite limit of a random atom gas, however, an extended Bloch wave with a well defined momentum state, k, that interacts with ions located at a particular, well defined position, ri, of the crystal (zero uncertainty ∆r → 0), scatters into another extended Bloch wave with momentum state, k′, with infinite uncertainty, ∆k → ∞. In this case, f_{ri} f_{rj} = δ_{ri,rj}, and S(q) = 1. There are no kinematic constraints whatsoever relating k and k′ to g, and the Fraunhofer diffraction pattern in this case corresponds to an isotropic disc of even intensity, as shown in Fig. 2.
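These two limits can be illustrated with a short numerical sketch (not from the paper; the 1D geometry, chain length and random seed are illustrative assumptions): the pristine chain produces sharp Bragg peaks of height of order N, while the random atom gas gives S ≈ 1 with no kinematic constraint.

```python
import numpy as np

# Illustrative sketch (not from the paper): structure factor
#   S(q) = |sum_j f_j exp(i q x_j)|^2 / N
# for a 1D chain of N sites, contrasting the pristine and random limits.
N = 200
a0 = 1.0                                 # lattice parameter (assumed)
rng = np.random.default_rng(0)

x_pristine = a0 * np.arange(N)           # ideal lattice positions
x_random = a0 * N * rng.random(N)        # uncorrelated "random atom gas"

q = np.linspace(0.5, 4 * np.pi / a0, 800)

def S(q, x):
    amp = np.exp(1j * np.outer(q, x)).sum(axis=1)
    return np.abs(amp) ** 2 / len(x)

S_p, S_r = S(q, x_pristine), S(q, x_random)

# Pristine chain: sharp Bragg peak of height ~N at the reciprocal vector
# g = 2*pi/a0; random gas: featureless S ~ 1 for all q.
g1 = np.argmin(np.abs(q - 2 * np.pi / a0))
print(S_p[g1], S_r.mean())
```

The same contrast carries over to the 2D patterns of Fig. 2.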
+ Interpolating between the pristine and random limits described above by increasing disorder is pivotal to the description of inherently inhomogeneous systems, such as the one of random oxygen puddles described in the present work. If disorder is of the first type, namely weak disorder, all atoms deviate only slightly from their ideal positions in the crystal, independently of the deviations of their neighbors [57]. This is the case of pointlike defects, thermal vibrations or micro-mechanical strains, and this kind of disorder preserves long range crystalline order. In this case the widths of the peaks in the Fraunhofer diffraction pattern are not affected,
+ [Figure 2: Bragg diffraction patterns. Structure factor S_q(k′ − k) versus the reciprocal vector ∆Q = k′ − k − q (in units of g); the breadth ℓ ∝ 1/σ measures the amount of distortion, illustrating the uncertainty in the reciprocal lattice and the breakdown of momentum conservation.]
+ and only their intensity is slightly reduced, since for uncorrelated Gaussian disorder, f_{ri} f_{rj} = D² < 1, where D² is the Debye-Waller factor. The structure factor is given by S(k′ − k) = D² Σ_g δ_{k′−k,g}. If disorder is of the second type, namely strong disorder, however, the atoms deviate significantly from their ideal positions in the crystal, and deviations among neighboring atoms are correlated. This is the case of extended defects, amorphous regions, molten materials, etc., and this type of disorder causes the loss of long range crystalline order. In these paracrystalline structures, not only does the intensity of the diffraction peaks decrease but, most importantly, their widths suffer a nonlinear increase of their integral breadth, δg, for successive orders of Bragg reflections. The complete paracrystalline theory was proposed by Hosemann [58]. Hosemann included fluctuations of variance σ that introduce correlations between pairs of atoms, ⟨f_{ri} f_{rj}⟩, that decrease with separation, ultimately causing the peaks in the structure factor of the material to broaden the larger the reciprocal lattice vector. The result is a structure factor composed of a sum of Lorentzians [59]
+ composed by a sum of Lorentzians [59]
494
+ Sq(k′ − k) =
495
+
496
+ g
497
+ Smax(g)
498
+ 1 + ℓ2
499
+ hkl(q − k′ + k − g)2 ,
500
+ (6)
501
+ with amplitudes Smax(g) = 4/σ²g² and breadths for Bragg reflections, |δg| ≡ 1/ℓ_{hkl} = σ²π²(h² + k² + l²)/a0, given in terms of the original lattice parameter a0 and the momentum transfer, q. Hosemann's paracrystalline theory then allows us to interpolate continuously between the pristine and random cases through the fluctuation parameter σ:
+ • for σ → 0 we have ℓ_{hkl} → ∞, ∀h, k, l, and we obtain S_q(k′ − k) = Σ_g δ_{q,k′−k+g}, enforcing the kinematic constraint of momentum conservation, q = k′ − k + g, typical of pristine crystals [59];
+ • for σ → ∞ we have ℓ_{hkl} → 0, ∀h, k, l, and we end up with S_q(k′ − k) = Smax(0) → 1, isotropic, for arbitrary q, k, k′ and determined solely by the g = 0 contribution, typical of infinite, aperiodic systems [59];
+ • for 0 ≤ σ ≤ ∞ we have ∞ ≥ ℓ_{hkl} ≥ 0, and the structure factor, S_q(k′ − k), will be composed of sharp Bragg peaks at small g (large ℓ_{hkl}) and isotropic discs at larger g (small ℓ_{hkl}), as shown in Fig. 2, relaxing the kinematic constraint of momentum conservation, q ̸≈ k′ − k + g, typical of a paracrystal, liquids, strongly disordered or amorphous systems [59].
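The interpolation through σ can be sketched numerically. The snippet below is a minimal 1D analogue of Eq. (6), under stated assumptions (h² + k² + l² → n², the g-sum truncated at a few orders, the g = 0 term omitted, and illustrative σ and a0 values); it shows higher-order Bragg peaks losing intensity and broadening as σ grows.

```python
import numpy as np

# 1D analogue of Hosemann's Eq. (6) (illustrative assumptions noted above):
#   S(dq) = sum_g Smax(g) / (1 + l_g^2 (dq - g)^2),  Smax(g) = 4/(sigma g)^2,
#   1/l_g = sigma^2 pi^2 n^2 / a0   (breadth grows with the Bragg order n).
a0 = 1.0
G = 2 * np.pi / a0

def S_para(dq, sigma, n_g=8):
    out = np.zeros_like(dq, dtype=float)
    for n in range(1, n_g + 1):
        g = n * G
        ell = a0 / (sigma**2 * np.pi**2 * n**2)   # Lorentzian width parameter
        smax = 4.0 / (sigma * g) ** 2
        out += smax / (1.0 + ell**2 * (dq - g) ** 2)
    return out

dq = np.linspace(0.1, 5 * G, 2000)
weak, strong = S_para(dq, sigma=0.02), S_para(dq, sigma=0.5)
# weak disorder: intense, narrow peaks; strong disorder: low, broad, nearly flat.
```

Peak heights fall off as 1/(σg)² with increasing order, reproducing the qualitative trend of Fig. 2.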
+ IV. DISORDER AND GAP FLUCTUATIONS
+ We now address how the superconducting state of the effective interaction derived in Sec. II is affected by the structural disorder effects introduced in the previous section. We start from the effective Hamiltonian in Eq. (2) and, within a mean-field decoupling of the quartic term, write the equation for the superconducting gap as
+ ∆_k = − Σ_{k′,p′} V_{k,k′} f_{0,p′} ⟨c_{p′−k′↓} c_{k′↑}⟩,  (7)
+ where we set p = 0, since we want to describe amplitude fluctuations of the superconducting gap in the Cooper channel. For the superconducting state formed by singlet pairs with finite CM momentum, the system can be represented by the spin-independent imaginary time Green's function G(k, k′, τ) = −⟨T_τ c_{k,σ}(τ) c†_{k′,σ}(0)⟩ and the anomalous pair propagators F(k, k′, τ) = ⟨T_τ c_{k,σ}(τ) c_{k′,σ′}(0)⟩ and F*(k, k′, τ) = ⟨T_τ c†_{k,σ}(τ) c†_{k′,σ′}(0)⟩
+ for σ ̸= σ′. Within Nambu's formalism, we can write the decoupled effective Hamiltonian from Eq. (2) and the electronic components from H_el in matrix form and derive, in first order perturbation theory, the electronic Green's function for an inhomogeneous system with disorder as
+ G(k, k′, iωn) = G0(k, k′, iωn) + Σ_{p,p′} G0(k, p, iωn) T_{p,p′} σ3 G(p′, k′, iωn),
+ where G0(k, k′, iωn) is the matrix form of the translationally invariant electronic Green's function in frequency space, iωn are the fermionic Matsubara frequencies and σ3 is a Pauli matrix. The diagonal elements of this matrix are defined by the bare Green's function in the superconducting state, G0(k, iωn), and its off-diagonal terms are represented by the anomalous propagators F0(k, iωn), which are written as
+ G0(k, iωn) = −(iωn + ξ_k) / (ωn² + ξ_k² + |∆_k|²),
+ F0(k, iωn) = ∆_k / (ωn² + ξ_k² + |∆_k|²).
+ In order to proceed, we make a couple of approximations. First, we consider the case of overdoped cuprates, which puts the system at a high concentration of disorder, thus T_{p,p′} = T f(p, p′), where disorder influences the momentum transfer controlled by the phase factor f(p, p′) with strength T. Second, we assume that for a translationally invariant system the normal and anomalous Green's functions can be rewritten as G0(k, k′, iωn) = G0(k, iωn) δ_{k,k′} and F0(k, k′, iωn) = F0(k, iωn) δ_{−k,k′}. Following these approximations, the first order perturbation theory expansion of the interacting Green's function simplifies to
+ G(k, k′, iωn) = G0(k, iωn) δ_{k,k′} + T f_{k,k′} G0(k, iωn) σ3 G0(k′, iωn).  (8)
+ From the gap equation in Eq. (7) and from the definition of the anomalous propagator, we write
+ ∆_k = − Σ_{k′,p′} V_{k,k′} f_{0,p′} ⟨c_{p′−k′↓} c_{k′↑}⟩ = − Σ_{k′,p′} V_{k,k′} f_{0,p′} [ (1/β) Σ_{ωn} F(p′ − k′, k′, iωn) ],  (9)
+ with β = 1/T being the inverse temperature (in units of kB = 1). By using the matrix form in Eq. (8), we get the form of the interacting anomalous propagator, where it is worth noting that the normal and anomalous propagators mix in the impurity scattering. Although the anomalous Green's function is invariant under time reversal, the normal one is not, and since disorder produces the transformation F0(k, iωn) ↔ G0(k, iωn), we clearly see that this is a mechanism that breaks time reversal invariance. As a consequence, this mechanism breaks the Cooper pair that leaks into the normal metal surrounding the puddles.
+ In order to understand the effects of disorder and finite CM momentum in the gap equation, we substitute the form of the anomalous propagator given by the matrix in Eq. (8) into Eq. (9) to write the gap equation as ∆_k = ∆_k^BCS + δ∆_k, where
+ ∆_k^BCS = − Σ_{k′} V_{k,k′} (∆_{k′} / 2E_{k′}) tanh(βE_{k′}/2),  (10)
+ is the BCS limit for the gap equation, arising from the first term in Eq. (8), with the bare anomalous propagators and E_k = √(ξ_k² + ∆_k²). Then
+ δ∆_k = T Σ_{k′,p′} V_{k,k′} f_{0,p′} f_{p′,0} (1/β) Σ_{ωn} {F0 G0 + G0 F0}  (11)
+ is the correction to the superconducting gap due to the effects of disorder in the system. The factor [f_{0,p′} f_{p′,0}] can be treated within an average over disorder in order to calculate the interference factor as [f_{0,p′} f_{p′,0}] = |f_{0,p′}|² → S(p′), where S(p′) is the static structure factor. Thus, the correction to the gap equation can be written in terms of the structure factor, and we see that fluctuations associated with small CM momentum p′ → 0 are absent, since the structure factor S(p′) → 0 and the gap equation is dominated by the BCS contribution. On the other hand, fluctuations associated with a finite center-of-mass momentum dominate over the BCS contribution when p′ ≫ 0 and S(p′) → 1. In a general manner, the structure factor can be written as a sum of Lorentzians with peaks at wave vectors of the reciprocal lattice, as discussed in Sec. III and shown in Fig. 2.
+ Finally, we proceed by taking the Matsubara summations over the set of mixed Green's functions as in Eq. (11) to arrive at the correction in terms of the disorder strength T and the finite CM momentum of the Cooper pairs p′ as
+ δ∆_k = T Σ_{k′,p′} V_{k,k′} S(p′) (1/2) [ (∆_{k′−p′}/E_{k′−p′})(ξ_{k′}/E_{k′}) + (∆_{k′}/E_{k′})(ξ_{k′−p′}/E_{k′−p′}) ] × [ E_{k′−p′} tanh(βE_{k′}/2) − E_{k′} tanh(βE_{k′−p′}/2) ] / (E²_{k′−p′} − E²_{k′}).  (12)
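The frequency sums of this kind rely on the standard fermionic Matsubara identity (1/β) Σ_n 1/(ωn² + E²) = tanh(βE/2)/(2E), which can be verified numerically with a quick sketch (illustrative values of β and E, not from the paper):

```python
import numpy as np

# Numerical check of the fermionic Matsubara identity used in sums of this type:
#   (1/beta) * sum_n 1/(w_n^2 + E^2) = tanh(beta E / 2) / (2 E),
# with fermionic frequencies w_n = (2n + 1) pi / beta.
beta, E = 2.0, 1.3                      # illustrative values
n = np.arange(-200000, 200000)          # large symmetric frequency cutoff
wn = (2 * n + 1) * np.pi / beta
lhs = (1.0 / beta) * np.sum(1.0 / (wn**2 + E**2))
rhs = np.tanh(beta * E / 2) / (2 * E)
print(lhs, rhs)                         # the two sides should agree closely
```

The truncation error of the sum decays as 1/N with the frequency cutoff, so the agreement here is already at the 10⁻⁶ level.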
+ It is important to note the dependence of the correction on the structure factor S(p′) controlling the momentum transfer. In the limit of a small amount of disorder, the so-called first-type disorder [57], as discussed in Sec. III, pointlike defects do not affect the BCS gap, in accordance with Anderson's theorem, as we shall see in the next section. On the other hand, in the limit of a high concentration of puddles, the system is in the limit of second-type disorder, associated with strain-induced lattice deformations, and both the amplitude of the superconducting gap and the critical temperature are affected.
+ In order to proceed to the numerical analysis, we approximate the structure factor based on the limits of disorder discussed above. For the first-type disorder, we choose S(p′) = δ_{0,p′}, since no momentum transfer will be associated with pairs with finite CM momentum in the dilute limit. On the other hand, for the second-type disorder, we write S(p′) = 1, assuming a system with a high concentration of puddles. These two limits for disorder of the 1st and 2nd types can be understood as a hard cutoff for the CM momentum distribution within the structure factor and are adopted to simplify Eq. (12) for the following numerical analysis.
+ V. NUMERICAL ANALYSIS
+ In order to fully understand the effects of disorder and the CM momentum of the Cooper pairs on the superconducting gap amplitude, we perform a numerical integration of Eq. (12). We use the decomposition V_{k,k′} = −V0 η(k) η(k′) and ∆_k = ∆0 η(k), where η(k) = cos kx − cos ky is a d-wave form factor, which gives the amplitude fluctuations of the order parameter with the same symmetry. When stated for comparison, we shall also use V_{k,k′} = −V0 and ∆_k = ∆0 when considering an s-wave symmetry for the interaction and the gap. For the calculations on the square lattice, we consider a two-dimensional electronic dispersion with nearest- and next-nearest-neighbor hopping elements (t, t′) as
+ ϵ_k = −2t (cos kx + cos ky) + 4t′ cos kx cos ky − µ,  (13)
+ where µ is the chemical potential that controls the electronic density. This type of electronic dispersion is general for 2D transport in strongly correlated systems and is suitable for the description of the conduction band associated with the CuO2 planes of high-Tc cuprates.
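As a concrete illustration of Eq. (13) (a sketch, not the paper's code; here energies are measured in units of 4t, so t = 0.25, with t′ = 0 and µ/4t = −0.45 as used below), the Fermi momentum along the (0, π) direction follows from the root of ϵ_k = 0:

```python
import numpy as np

# Dispersion of Eq. (13), in units of 4t (so t = 0.25), with t' = 0.
def eps(kx, ky, t=0.25, tp=0.0, mu=-0.45):
    return (-2 * t * (np.cos(kx) + np.cos(ky))
            + 4 * tp * np.cos(kx) * np.cos(ky) - mu)

# Fermi momentum along (0, pi): solve eps(0, k) = 0, which for these
# parameters reduces to cos(kF) = -0.1.
k = np.linspace(0.0, np.pi, 100001)
kF = k[np.argmin(np.abs(eps(0.0, k)))]
print(kF)
```

This kF is the scale used below when the CM momenta p′ are measured in units of |kF|.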
+ In the following calculations, all parameters are defined in units of 4t and we set µ/4t = −0.45, away from the half-filled case µ/4t = 0.0 (see Fig. 3), since mean-field theory yields incorrect results for a two-dimensional lattice near half-filling [60] and we avoid particle-hole symmetry [61]. For this reason, we can take t′ = 0. We also set V0/4t = 1.0, in the limit where the mean-field theory is still valid. For the summations over p′, we define p′ = k − k′, where k, k′ are the momenta of the two paired electrons, which we set as |k| = |k′| = kF, two momenta on the Fermi surface. The CM momenta are then defined by fixing k in the direction of the point (0, π) and by varying k′ across the Fermi surface, as shown in Fig. 3.
+
+ Figure 3. Fermi surface structure used in the calculations. Left: 3D plot of Eq. (13) in the first Brillouin zone (yellow) and the chemical-potential cut defining the Fermi level (blue). Right: the Fermi level defined by the cut at µ/4t = −0.45. The vectors k, fixed in the direction (0, π), and k′, varying across the Fermi surface, are also shown.
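The clean BCS limit, Eq. (10), with the d-wave decomposition above can be solved by fixed-point iteration on a finite k-grid. The following is a minimal sketch (not the paper's implementation; the grid size, starting guess and iteration count are assumptions, and the disorder correction of Eq. (12) is omitted):

```python
import numpy as np

# Clean d-wave BCS gap, Eq. (10), at T = 0, with
# V_{k,k'} = -V0 eta(k) eta(k') and Delta_k = Delta0 eta(k).
t, mu, V0 = 0.25, -0.45, 1.0             # units of 4t, as in the text
L = 64                                   # k-grid size (assumed)
k = 2 * np.pi * np.arange(L) / L - np.pi
KX, KY = np.meshgrid(k, k)
xi = -2 * t * (np.cos(KX) + np.cos(KY)) - mu
eta = np.cos(KX) - np.cos(KY)            # d-wave form factor

delta0 = 0.1                             # starting guess (assumed)
for _ in range(200):                     # fixed-point iteration of Eq. (10)
    E = np.sqrt(xi**2 + (delta0 * eta) ** 2)
    # Delta0 = (V0/N) sum_k eta(k)^2 Delta0 / (2 E_k)   (tanh -> 1 at T = 0)
    delta0 = (V0 / L**2) * np.sum(eta**2 * delta0 / (2 * np.maximum(E, 1e-12)))
print(delta0)
```

The iteration converges to a nonzero ∆0, providing the clean baseline against which the disorder-induced reduction discussed below is measured.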
+ We start by analyzing the zero temperature limit T = 0 of Eq. (12), where the hyperbolic tangents can be simplified. In Fig. 4 we show how the gap amplitude ∆0 is affected by the disorder T in the limit of disorder of the 1st type, S(p′) = δ_{0,p′}, or weak concentration of puddles, and at strong concentration, S(p′) = 1, in the limit of disorder of the 2nd type. The gap amplitude is insensitive to disorder in the dilute limit for s-wave pairing, thus ∆0 = ∆0^BCS and the BCS limit is recovered, in accordance with Anderson's theorem. However, in the opposite limit, disorder strongly affects the amplitude of the gap for d-wave pairing, introducing fluctuations and decreasing its absolute value by about 50% in the strong disorder limit, when compared to the clean case.
+ It is worth noting that the reduction is nonlinear as the strength of disorder approaches the value of the fixed pairing potential, T → V0, where perturbation theory still holds. This can be traced back to the fact that the gap equation is a self-consistent equation for the absolute value of ∆0, even after the approximations considered. Thus we see that, even in the zero temperature limit, disorder tends to destroy superconductivity in a system with a high concentration of oxygen interstitials, as in the overdoped cuprates.
+ We also investigate the effects of specific finite CM momenta on the amplitude of the gap when T = 0. We choose a set of momenta {p} and substitute into Eq. (12) the corresponding structure factor, namely S(ps) = δ_{p′,ps}, where ps are the momenta in the set. All ps are multiples of kF in each direction considered, namely (0, π) and (π, π).
+
+ Figure 4. T = 0 limit for the amplitude fluctuations of the superconducting gap as a function of disorder strength, compared to the clean system. Dilute limit (red), for disorder of the 1st type and s-wave symmetry, and high concentration of puddles for disorder of the 2nd type and d-wave symmetry (blue). The black dashed line is a guide to the eye. Gap values are given in terms of ∆0 in the absence of disorder, T = 0.
+
+ In Fig. 5 we display the
+ evolution of the amplitude of the superconducting order parameter ∆0 as a function of the CM momentum of the Cooper pairs p, for fixed disorder strength T = 0.1. The superconducting order parameter is modulated, with a period determined by the distance between adjacent Fermi surfaces in each direction, being 3.75|kF| for (0, π) and 5.75|kF| for (π, π). Remarkably, this is in direct contact with the diffraction pattern displayed in Fig. 2. However, since we are considering a hard cutoff for the structure factor in terms of delta functions, the amplitude of the gap modulation is not altered by the distance from the origin. We expect that, by including a more realistic model for the structure factor, the amplitudes of the modulations will decay with p, with the effect stronger in the (π, π) direction, since larger reciprocal lattice vectors g imply a broader structure factor, thus diminishing the amplitude of the superconducting gap. Altogether, the interplay between disorder and finite center-of-mass momentum Cooper pairs is able to strongly affect the superconducting order parameter.
+ Now we turn to the finite temperature case T ̸= 0 for the d-wave symmetric order parameter, to understand how disorder and the CM momenta of the Cooper pairs affect the critical temperature Tc. In Fig. 6 we show the evolution of the superconducting gap with temperature, for different values of the disorder strength T. It is clear that with increasing disorder not only does ∆0(0) decrease, as found in the zero temperature limit, but we also find a decrease with disorder of the critical temperature Tc, defined as the temperature at which ∆0(T, T) → 0, as shown in the inset. This means that pair breaking is induced by the scattering of the finite CM momentum Cooper pairs off the nanosized oxygen puddles of the system and, by increasing disorder, Tc is significantly reduced.
+ This pair breaking effect is due to the fact that the phase space required for pair formation is reduced when p increases in absolute value. In the small scattering momentum transfer
+ Figure 5. Left: extended Brillouin zones in the upper positive part of momentum space. The arrows indicate the distance between the centers of each Fermi surface in terms of the Fermi vector |kF| in each direction considered. Right: the amplitude of the superconducting order parameter as a function of different CM momentum vectors |p|, in the directions (0, π) and (π, π). ∆0(p) is given in units of the gap at p = 0.
+ sector, p < |kF|, the gap is almost unaffected by the presence of disorder when compared to its value at p = 0, since the shape of the Fermi surface intersection of the two paired electrons suffers little change. However, when p approaches the maximum absolute value of 2|kF| within the first Brillouin zone, the phase space for pair formation is greatly reduced and disorder induces pair breaking, captured by the reduction of the superconducting order parameter. The modulation occurs for p > 2|kF|, since electrons from different Brillouin zones participate in the scattering and pairing process. Therefore, these results point to the combined effect of finite center-of-mass momentum pairs being scattered by structural disorder induced by the network of oxygen puddles as a mechanism for the reduction of the superconducting gap and the critical temperature in the overdoped regime.
+ VI. CONCLUSION AND DISCUSSION
+ In this work we presented an extension of the proposed model for the formation of networks of puddles and its effects on superconductivity in oxygen-doped cuprates [38]. We show that the presence of puddles, on the overdoped side of the phase diagram, introduces strong disorder in the system that induces the formation of finite center-of-mass momentum Cooper pairs. We derive an analytical expression for the amplitude fluctuations in the superconducting gap induced by the puddles, within a mean-field BCS-like approach, in terms of the disorder strength T and the finite CM momenta p. We numerically solve this expression to show that even in the zero temperature limit the gap is strongly affected by disorder-induced CM Cooper pairs. In the limit of strong disorder the gap tends to close and, in the finite temperature case, Tc tracks the reduction of the superconducting gap, also being strongly affected by disorder. It is important to emphasize that we do not account for the effect of longer-range Coulomb repulsion, restricting the application of our results to screened systems [68].
+ Figure 6. Temperature dependence of the superconducting order parameter for different values of the disorder strength (colored bar). Gap values are given in terms of the clean case T = 0 and temperature in terms of Tc0, also of the clean case. Inset: the critical temperature, normalized to the clean value, as a function of disorder strength. The black dashed line is a guide to the eye.
+ The experimental observations of structural scale invariance of dopants detected by scanning micro-x-ray diffraction [36], the promotion of the critical temperature [37], the agglomeration of interstitial oxygens in regions of strong local strain in the crystal structure of cuprate superconductors [42, 43] and the theoretical reports regarding the presence of networks of nanoscale superconducting islands in high-temperature superconductors [62–65] are in close connection with the results reported here. Even though we are showing that the superconducting state is depleted in the presence of strong disorder in the overdoped regime, it is clear from the above mentioned studies that the importance of these networks and their interplay with electronic degrees of freedom extends across the whole phase diagram of hole-doped cuprates.
+ In Ref. [38], the present authors show how the complex networks formed by the oxygen puddles can transition to a synchronized phase, controlled by the superfluid density, in such a way that the concentration of dopant atoms controls the emergence of local superconductivity in the underdoped regime, and how the system evolves to a bulk superconductor as the concentration of dopants, and thus of puddles, increases as the system approaches the optimally doped regime. It is important to emphasize that, within this framework, the state studied in this work is described by the bulk superconductor state in the synchronized phase of the network formed by the oxygen puddles (see Fig. 1), in the sense that we require the network of puddles to be fully synchronized in order for the band of electrons to interact with the global mode of vibration of the synchronized network. Our approach is based on a mean-field approximation for the complex network; therefore we point to the importance of describing different topologies for the organization of the puddles and how this can affect not only the transition to the superconducting state [66], but also its possible interplay with the superconducting fluctuations of preformed Cooper pairs observed in the pseudogap phase above Tc [67], in terms of local superconductivity.
+ Appendix A: Unitary transformation
+ In this Appendix we show the derivation of the effective Hamiltonian containing the pairing interaction between two electrons forming a Cooper pair with finite center-of-mass momentum. The starting point is the full Hamiltonian written in momentum space, H = H_el + H_p + H_el−p, which is the sum of the contributions of the electrons, the puddles and the electron-puddle interaction, respectively. Introducing a unitary transformation of the form H′ = e^{−S} H e^{S}, where S is the transformation matrix introduced in Sec. II, we can expand the exponentials up to second order in powers of S to write the transformed Hamiltonian as
+ H′ = H + [H, S] + (1/2)[[H, S], S],  (A1)
+ and by treating H_el−p as a perturbation, we can divide the full Hamiltonian as H = H0 + H_el−p, where H0 contains the kinetic terms of electrons and puddles, to write
+ H′ = H0 + H_el−p + [H0, S] + [H_el−p, S] + (1/2)[[H0, S], S].
+ Since the goal is to eliminate the interaction, the defining equation for the transformation matrix comes from the elimination of the first-order term, [H0, S] + H_el−p = 0, from which we can extract the factors x and y for S. In this way, the transformed Hamiltonian can be written in terms of an effective interaction that comes from recombining the terms in the commutators,
+ H′ = H0 + (1/2)[H_el−p, S],  (A2)
+ thus the problem is reduced to an effective system described by H = H0 + Heff, where Heff = (1/2)[H_el−p, S].
+ By performing the calculation of the commutator [H0, S], the choice of x and y that eliminates the first-order term is given by
+ x_{k,k′,q} = 1/(ξ_{k′} − ξ_k − ω_q),
+ y_{k,k′,q} = 1/(ξ_{k′} − ξ_k + ω_q),
+ and the transformation matrix S is fully defined. We then proceed to the calculation of the effective Hamiltonian that comes from the commutator of the now defined matrix S and the electron-puddle interaction, which gives a combination M(q, Q)M(−q, Q′), where Q = k − k′ and Q′ = k′′ − k′′′ are two auxiliary variables that accommodate the variety of indices arising from the commutator. Recalling the definition of the factor M given in the main text, we see that
+ M(q, Q)M(−q, Q′) = Σ_{R,R′} g(Q) g(Q′) e^{i(R−R′)·q} e^{−i(Q·R + Q′·R′)},
+ which can be simplified by taking R = R′, since each R describes the position of a nanosized puddle and we are assuming the dilute limit of oxygen puddles, as discussed in the main text, in accordance with STEM and STM measurements [42, 43]. In this way, the effective Hamiltonian is written as
+ Heff = Σ_{k′,k′′′,q,Q,Q′} V(q, Q, Q′) M(q, Q) M(−q, Q′) c†_{k′′′+Q′} c†_{k′+Q} c_{k′} c_{k′′′},  (A3)
+ with V(q, Q, Q′) = ω_q / [(ξ_{k′′′} − ξ_{k′′′+Q′})² − ω_q²]. Proceeding with the calculation, we note that within BCS theory the effective Hamiltonian describes the interaction between electrons with opposite momenta, k′ = −k′′′, i.e., with zero CM momentum. However, in our case, the auxiliary variables Q and Q′ introduce a momentum transfer connected with a finite CM momentum for the pairs, for each fermionic operator in the effective Hamiltonian that comes from the commutator [H_el−p, S]. In this sense, we perform a change of variables introducing the finite CM momentum k′ + k′′′ = p, in such a way that we can eliminate the dependence on the auxiliary variables.
+ −k + p′ = k′ + Q, where p and p′ are the CM momenta of
1120
+ the Cooper pairs. In the limit where the interaction g(k, k′) is
1121
+ independent of the CM momenta, we can decouple the effec-
1122
+ tive interaction and end up with the effective Hamiltonian
1123
+ Heff =
1124
+
1125
+ k,k′
1126
+
1127
+ p,p′
1128
+ V (k, k′)f(p, p′)c†
1129
+ k,↑c†
1130
+ p−k,↓cp′−k′,↓ck′,↑,
1131
+ with
+ V(k, k′) = ω0 |g(k − k′)|² / [(ξ_{k′} − ξ_k)² − ω0²],
+ f(p, p′) = Σ_R e^{−i(p′−p)·R},
+ where we assume ω_q = ω0, a dispersionless phonon mode for each puddle.
+ [1] J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Theory of Superconductivity. Phys. Rev. 108, 1175 (1957).
+ [2] D. F. Agterberg, J. S. Davis, S. D. Edkins, E. Fradkin, D. J. Van Harlingen, S. A. Kivelson, P. A. Lee, L. Radzihovsky, J. M. Tranquada, and Y. Wang, The Physics of Pair-Density Waves: Cuprate Superconductors and Beyond. Annu. Rev. Condens. Matter Phys. 11, 231 (2020).
+ [3] Y. Wang, D. F. Agterberg, and A. Chubukov, Coexistence of Charge-Density-Wave and Pair-Density-Wave Orders in Underdoped Cuprates. Phys. Rev. Lett. 114, 197001 (2015).
+ [4] D. Chakraborty, M. Grandadam, M. H. Hamidian, J. C. S. Davis, Y. Sidis, and C. Pépin, Fractionalized pair density wave in the pseudogap phase of cuprate superconductors. Phys. Rev. B 100, 224511 (2019).
+ [5] J. Wårdh and M. Granath, Effective model for a supercurrent in a pair-density wave. Phys. Rev. B 96, 224503 (2017).
+ [6] P. Choubey, S. H. Joo, K. Fujita, Z. Du, S. D. Edkins, M. H. Hamidian, H. Eisaki, S. Uchida, A. P. Mackenzie, J. Lee, J. C. S. Davis, and P. J. Hirschfeld, Atomic-scale electronic structure of the cuprate pair density wave state coexisting with superconductivity. Proc. Natl. Acad. Sci. USA 117, 14805 (2020).
+ [7] F. Loder, A. P. Kampf, and T. Kopp, Superconducting state with a finite-momentum pairing mechanism in zero external magnetic field. Phys. Rev. B 81, 020511(R) (2010).
+ [8] M. H. Hamidian, S. D. Edkins, S. H. Joo, A. Kostin, H. Eisaki, S. Uchida, M. J. Lawler, E.-A. Kim, A. P. Mackenzie, K. Fujita, J. Lee, and J. C. S. Davis, Detection of a Cooper-pair density wave in Bi2Sr2CaCu2O8+x. Nature 532, 343 (2016).
+ [9] X. Liu, Y. X. Chong, R. Sharma, and J. C. S. Davis, Discovery of a Cooper-pair density wave state in a transition-metal dichalcogenide. Science 372, 1447 (2021).
+ [10] H. Chen et al., Roton pair density wave in a strong-coupling kagome superconductor. Nature 599, 222 (2021).
+ [11] A. Q. Chen, M. J. Park, S. T. Gill, Y. Xiao, D. Reig-i-Plessis, G. J. MacDougall, M. J. Gilbert, and N. Mason, Finite momentum Cooper pairing in three-dimensional topological insulator Josephson junctions. Nat. Commun. 9, 3478 (2018).
+ [12] S. D. Edkins, A. Kostin, K. Fujita, A. P. Mackenzie, H. Eisaki, S. Uchida, S. Sachdev, M. J. Lawler, E.-A. Kim, J. C. Seamus Davis, and M. H. Hamidian, Magnetic field-induced pair density wave state in the cuprate vortex halo. Science 364, 976 (2019).
+ [13] I. A. Semenikhin, Influence of disordering on the critical temperature of superconductors with a short coherence length.
1190
+ Physics of the Solid State 45, 1622 (2003)
1191
+ [14] Debmalya Chakraborty and Annica M. Black-Schaffer, Inter-
1192
+ play of finite-energy and finite-momentum superconducting
1193
+ pairing. Phys. Rev. B 106, 024511 (2022)
1194
+ [15] J.-J. Wen et al, Observation of two types of charge-density-
1195
+ wave orders in superconducting La2−xSrxCuO4. Nature Com-
1196
+ munications 10, 3269 (2019)
1197
+ [16] K. McElroy, H. Eisaki, S. Uchida, and S. C. Davis, Atomic-
1198
+ Scale Sources and Mechanism of Nanoscale Electronic Disor-
1199
+ der in Bi2Sr2CaCu2O8+δ. Science 309, 1048 (2005).
1200
+ [17] Nicola Poccia, Matthieu Chorro, Alessandro Ricci, Wei Xu,
1201
+ Augusto Marcelli, Gaetano Campi, Antonio Bianconi, Percola-
1202
+ tive superconductivity in La2CuO4.06 by lattice granularity
1203
+ patterns with scanning micro x-ray absorption near edge struc-
1204
+ ture. Appl. Phys. Lett. 104, 221903 (2014)
1205
+ [18] Alessandro Ricci et al, Networks of superconducting nano-
1206
+ puddles in 1/8 doped YBa2Cu3O6.5+y controlled by thermal
1207
+ manipulation. New J. Phys. 16, 053030 (2014)
1208
+ [19] E. W. Huang, D. J. Scalapino, T. A. Maier, B. Moritz, and T. P.
1209
+ Devereaux, Decrease of d-wave pairing strength in spite of the
1210
+ persistence of magnetic excitations in the overdoped Hubbard
1211
+ model. Phys. Rev. B 96, 020503(R) (2017)
1212
+ [20] A. V. Balatsky, I. Vekhter, and Jian-Xin Zhu, Impurity-induced
1213
+ states in conventional and unconventional superconductors.
1214
+ Rev. Mod. Phys. 78, 373 (2006).
1215
+ [21] F. Rullier-Albenque, H. Alloul, F. Balakirev, and C. Proust, Dis-
1216
+ order, metal-insulator crossover and phase diagram in high-Tc
1217
+ cuprates, EPL 81, 37008 (2008)
1218
+ [22] N. R. Lee-Hone, H. U. Ozdemir, V. Mishra, D. M. Broun, and
1219
+ P. J. Hirschfeld, Low energy phenomenology of the overdoped
1220
+ cuprates: Viability of the Landau-BCS paradigm. Phys. Rev.
1221
+ Research 2, 013228 (2020)
1222
+ [23] Peter Henseler, Johann Kroha, and Boris Shapiro, Self-
1223
+ consistent study of Anderson localization in the Anderson-
1224
+ Hubbard model in two and three dimensions. Phys. Rev. B 78,
1225
+ 235116 (2008)
1226
+ [24] T. H. Y. Nguyen, D. A. Le and A. T. Hoang, Anderson localiza-
1227
+ tion in the Anderson–Hubbard model with site-dependent inter-
1228
+ actions. New J. Phys. 24, 053054 (2022)
1229
+ [25] Nathan Giovanni, Marcello Civelli, and Maria C. O. Aguiar,
1230
+ Anderson localization effects on the doped Hubbard model.
1231
+ Phys. Rev. B 103, 245134 (2021)
1232
+ [26] P. W. Anderson, Theory of Dirty Superconductors. J. Phys.
1233
+ Chem. Solids 11, 26 (1959).
1234
+ [27] A. A. Abrikosov and L. P. Gor’kov, On the theory of super-
1235
+ conducting alloys. 1. The electrodynamics of alloys at absolute
1236
+ zero. Zh. Eksp. Teor. Fiz. 35, 1558 (1958).
1237
+ [28] A. A. Abrikosov and L. P. Gor’kov, Superconducting alloys at
1238
+ finite temperatures, Zh. Eksp. Teor. Fiz. 36, 319 (1959).
1239
+ [29] T. Cren, D. Roditchev, W. Sacks, J. Klein, J.-B. Moussy, C.
1240
+ Deville-Cavellin, and M. Lagues, Influence of Disorder on the
1241
+ Local Density of States in High- Tc Superconducting Thin
1242
+ Films. Phys. Rev. Lett. 84, 147 (2000)
1243
+ [30] John F. Dodaro and Steven A. Kivelson, Generalization of An-
1244
+ derson’s Theorem for Disordered Superconductors. Phys. Rev.
1245
+ B 98, 174503 (2018)
1246
+ [31] Gaetano Campi, Alessandro Ricci, Nicola Poccia, Luisa Barba,
1247
+ Gianmichele Arrighetti, Manfred Burghammer, Alessandra
1248
+ Stella Caporale, and Antonio Bianconi, Scanning micro-x-ray
1249
+ diffraction unveils the distribution of oxygen chain nanoscale
1250
+ puddles in YBa2Cu3O6.33. Phys. Rev. B 87, 014517 (2013)
1251
+ [32] Alessandro Ricci, Nicola Poccia, Gaetano Campi, Francesco
1252
+ Coneri, Alessandra Stella Caporale, Davide Innocenti, Man-
1253
+ fred Burghammer, Martin v. Zimmermann and Antonio Bian-
1254
+ coni, Multiscale distribution of oxygen puddles in 1/8 doped
1255
+ YBa2Cu3O6.67. Scientific Reports 3, 2383 (2013)
1256
+ [33] Nicola Poccia et al, Spatially correlated incommensurate
1257
+ lattice modulations in an atomically thin high-temperature
1258
+ Bi2.1Sr1.9CaCu2O8+y superconductor. Phys. Rev. Materials
1259
+ 4, 114007 (2020)
1260
+ [34] J. G. Bednorz and K. A. Muller, Possible high Tc superconduc-
1261
+ tivity in the Ba − La − Cu − O system. Zeitschrift fur Physik
1262
+ B Condensed Matter 64, 189 (1986)
1263
+ [35] G. Campi et al, Inhomogeneity of charge-density-wave order
1264
+ and quenched disorder in a high-Tc superconductor. Nature
1265
+ 525, 359 (2015)
1266
+
1267
+ 11
1268
+ [36] Michela Fratini, Nicola Poccia, Alessandro Ricci, Gaetano
1269
+ Campi, Manfred Burghammer, Gabriel Aeppli and Antonio
1270
+ Bianconi, Scale-free structural organization of oxygen intersti-
1271
+ tials in La2CuO4+y. Nature 466, 841 (2010)
1272
+ [37] Alessandro Ricci et al, Networks of superconducting nano-
1273
+ puddles in 1/8 doped YBa2Cu3O6.5+y controlled by thermal
1274
+ manipulation. New J. Phys. 16, 053030 (2014)
1275
+ [38] V. Velasco and M. B. Silva Neto, Unconventional superconduc-
1276
+ tivity as a quantum Kuramoto synchronization problem in ran-
1277
+ dom elasto-nuclear oscillator networks. J. Phys. Commun. 5,
1278
+ 015003 (2020)
1279
+ [39] Y. Kuramoto, Self-entrainment of a population of coupled non-
1280
+ linear oscillators (International Symposium on Mathematical
1281
+ Problems in Theoretical Physics, Lecture Notes in Physics, vol
1282
+ 39) ed H Araki (Berlin: Springer) 420 (1975)
1283
+ [40] Y. Kuramoto and I. Nishikawa, Statistical macrodynamics of
1284
+ large dynamical systems. Case of a phase transition in oscillator
1285
+ communities. J. Stat. Phys. 49, 569 (1987)
1286
+ [41] D. Gogny, Simple separable expansions for calculating matrix
1287
+ elements of two-body local interactions with harmonic oscilla-
1288
+ tor functions. Nuclear Physica A 237(3), 399 (1975)
1289
+ [42] D. Song et al, Visualization of Dopant Oxygen Atoms in a
1290
+ Bi2Sr2CaCu2O8+δ Superconductor. Adv. Funct. Mater 29,
1291
+ 1903843 (2019)
1292
+ [43] I. Zeljkovic et al, Nanoscale Interplay of Strain and Doping in a
1293
+ High-Temperature Superconductor. Nano Letters 14(12), 6749
1294
+ (2014)
1295
+ [44] H. Frohlich, Theory of electrical breakdown in ionic crystals.
1296
+ Proc. R. Soc. Lond. A 160(901), 230 (1937)
1297
+ [45] T. Holstein, Studies of polaron motion: Part I. The molecular-
1298
+ crystal model. Annals of Physics 8(3), 325 (1959)
1299
+ [46] P. Fulde and A. Ferrell, Superconductivity in a Strong Spin-
1300
+ Exchange Field. Phys. Rev. 135, A550 (1964).
1301
+ [47] A. I. Larkin and Yu. N. Ovchinnikov, Nonuniform State of Su-
1302
+ perconductors. Sov. Phys. JETP 20, 762 (1965)
1303
+ [48] Hyeonjin Doh, Matthew Song, and Hae-Young Kee, Novel
1304
+ Route to a Finite Center-of-Mass Momentum Pairing State
1305
+ for Superconductors: A Current-Driven Fulde-Ferrell-Larkin-
1306
+ Ovchinnikov State. Phys. Rev. Lett. 97, 257001 (2006)
1307
+ [49] Roger D. Woods and David S. Saxon, Diffuse Surface Opti-
1308
+ cal Model for Nucleon-Nuclei Scattering. Phys. Rev. 95, 577
1309
+ (1954)
1310
+ [50] D. Gogny, in Proceeding of the International Conference on Nu-
1311
+ clear Physics, Munich, edited by J. De Boer and H. J. Mang,
1312
+ (North-Holland, Amsterdam, 1973), Vol. 1, p. 48.
1313
+ [51] D. Gogny, in Nuclear Self-Consistent Fields, Trieste, edited by
1314
+ G. Ripka and M. Porneuf (North-Holland, Amsterdam, 1975),
1315
+ p. 333.
1316
+ [52] J. Decharge and D. Gogny, Hartree-Fock-Bogolyubov calcula-
1317
+ tions with the D1 effective interaction on spherical nuclei. Phys.
1318
+ Rev. C 21, 1568 (1980).
1319
+ [53] C. Gonzalez-Boquera, M. Centelles, X. Vinas and L. M. Rob-
1320
+ ledo, New Gogny interaction suitable for astrophysical applica-
1321
+ tions. Physics Letters B 779, 195 (2018)
1322
+ [54] Y. He, T. S. Nunner, P. J. Hirschfeld, and H.-P. Cheng, Local
1323
+ Electronic Structure of Bi2Sr2CaCu2O8 near Oxygen Dopants:
1324
+ A Window on the High-Tc Pairing Mechanism. Phys. Rev. Lett.
1325
+ 96, 197002 (2006)
1326
+ [55] X. Zhang, H. Zhao and J. Zhu, Visualization and control of oxy-
1327
+ gen dopant ordering in a cuprate superconductor. Materials To-
1328
+ day Physics 23, 100629 (2022)
1329
+ [56] J. A. Slezak et al, Imaging the impact on cuprate supercon-
1330
+ ductivity of varying the interatomic distances within individ-
1331
+ ual crystal unit cells. Proc. Natl. Acad. Sci. USA 105(9), 3203
1332
+ (2008)
1333
+ [57] R. P. A. Dullens and A. V. Petukhov, Second-type disorder in
1334
+ colloidal crystals. EPL 77, 58003 (2007)
1335
+ [58] R. Hosemann, Z. Phys. 128, 1 (1950); ibid. 465 (1950).
1336
+ [59] R. Hosemann and A. M. Hindeleh, J. Macromol. Sci. − Phy.
1337
+ B34(4), 327-356 (1995).
1338
+ [60] R. Micnas, J. Ranninger, and S. Robaszkiewicz, Superconduc-
1339
+ tivity in narrow-band systems with local nonretarded attractive
1340
+ interactions. Rev. Mod. Phys. 62, 113 (1990)
1341
+ [61] P. J. H. Denteneer, R. T. Scalettar and N. Trivedi, Particle-Hole
1342
+ Symmetry and the Effect of Disorder on the Mott-Hubbard In-
1343
+ sulator. Phys. Rev. Lett. 87, 146401 (2001)
1344
+ [62] A. Perali, A. Bianconi, A. Lanzara and N.L. Saini, The gap
1345
+ amplification at a shape resonance in a superlattice of quantum
1346
+ stripes: A mechanism for high-Tc. Solid State Communications
1347
+ 100(3), 181 (1996)
1348
+ [63] E. V. L. de Mello1, Description and connection between the
1349
+ oxygen order evolution and the superconducting transition in
1350
+ La2CuO4+y. EPL 98, 57008 (2012)
1351
+ [64] Ginestra Bianconi, Superconductor-insulator transition on an-
1352
+ nealed complex networks. Phys. Rev. E 85, 061113 (2012)
1353
+ [65] D. Pelc et al, Emergence of superconductivity in the cuprates
1354
+ via a universal percolation process. Nat. Commun. 9, 4327
1355
+ (2018)
1356
+ [66] Ginestra Bianconi, Enhancement of Tc in the superconduc-
1357
+ tor–insulator phase transition on scale-free networks. J. Stat.
1358
+ Mech., P07021 (2012)
1359
+ [67] A. Dubroka et al. Evidence of a precursor superconducting
1360
+ phase at temperatures as high as 180 K in RBa2Cu3O7−δ
1361
+ (R = Y, Gd, Eu) superconducting crystals from infrared spec-
1362
+ troscopy. Phys. Rev. Lett. 106, 047006 (2011)
1363
+ [68] I. S. Burmistrov, I. V. Gornyi, and A. D. Mirlin, Enhancement
1364
+ of the Critical Temperature of Superconductors by Anderson
1365
+ Localization. Phys. Rev. Lett. 108, 017002 (2012)
1366
+
E9E1T4oBgHgl3EQfEgPe/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
ENE0T4oBgHgl3EQfywJf/content/tmp_files/2301.02663v1.pdf.txt ADDED
@@ -0,0 +1,645 @@
+ arXiv:2301.02663v1 [math.GR] 6 Jan 2023
+ ON THE CHARACTERIZATION OF ALTERNATING GROUPS BY CODEGREES
+ MALLORY DOLORFINO, LUKE MARTIN, ZACHARY SLONIM, YUXUAN SUN, AND YONG YANG
+ Abstract. Let G be a finite group and Irr(G) the set of all irreducible complex characters of G. Define the
+ codegree of χ ∈ Irr(G) as cod(χ) := |G : ker(χ)|/χ(1) and denote by cod(G) := {cod(χ) | χ ∈ Irr(G)} the
+ codegree set of G. Let An be an alternating group of degree n ≥ 5. In this paper, we show that An is
+ determined up to isomorphism by cod(An).
+ 1. Introduction
+ Let G be a finite group and Irr(G) the set of all irreducible complex characters of G. For any χ ∈ Irr(G),
+ define the codegree of χ as cod(χ) := |G : ker(χ)|/χ(1). Then define the codegree set of G as
+ cod(G) := {cod(χ) | χ ∈ Irr(G)}. The concept of codegrees was originally considered in [8], where the
+ codegree was defined as cod(χ) := |G|/χ(1); it was later modified to its current definition by [22] so that
+ cod(χ) is the same for G and G/N when N ≤ ker(χ). Several properties of codegrees have been studied,
+ such as the relationship between codegrees and element orders, codegrees of p-groups, and groups with
+ few codegrees.
+ The codegree set of a group is closely related to its character degree set, which is defined as
+ cd(G) := {χ(1) | χ ∈ Irr(G)}. The relationship between the character degree set and a group’s structure is
+ an active area of research – many properties of a group’s structure are largely determined by its character
+ degree set. In 1990, Bertram Huppert made the following conjecture about the relationship between a
+ simple group H and a finite group G having equal character degree sets.
+ Huppert’s Conjecture: Let H be a finite nonabelian simple group and G a finite group such that
+ cd(H) = cd(G). Then G ≅ H × A, where A is an abelian group.
+ Huppert’s conjecture has been verified in many cases, such as the alternating groups, the sporadic groups,
+ and the simple groups of Lie type of low rank, but it has yet to be verified for simple groups of Lie type of
+ high rank. Recently, a similar conjecture related to codegrees has been posed.
+ Codegree Version of Huppert’s Conjecture: Let H be a finite nonabelian simple group and G a
+ finite group such that cod(H) = cod(G). Then G ≅ H.
+ This conjecture appears in the Kourovka Notebook of Unsolved Problems in Group Theory as Question
+ 20.79 [18]. It has been verified for PSL(2, q), PSL(3, 4), Alt7, J1, 2B2(2^(2f+1)) where f ≥ 1, M11, M12,
+ M22, M23, and PSL(3, 3) by [1, 3, 13]. The conjecture has also been verified for PSL(3, q) and PSU(3, q)
+ in [19] and for 2G2(q) in [14]. Recently, the authors verified the conjecture for all sporadic simple groups
+ in [11].
+ In this paper, we provide a general proof verifying this conjecture for all alternating groups of degree
+ greater than or equal to 5. The methods used may be generalized to simple groups of Lie type, giving
+ promising results for characterizing all simple groups by their codegree sets.
+ Theorem 1.1. Let An be an alternating group of degree n ≥ 5 and G a finite group. If cod(G) = cod(An),
+ then G ≅ An.
+ Throughout the paper, we follow the notation used in Isaacs’ book [16] and the ATLAS of Finite Groups [9].
+ 2000 Mathematics Subject Classification. 20C15, 20D06.
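The definitions above are easy to check on a small example. The following sketch (ours, not part of the paper; the character degrees of A5 are standard) computes cod(A5) directly, using the fact that every nontrivial irreducible character of a simple group is faithful, so cod(χ) = |G|/χ(1):

```python
# Codegree set of A5 from its well-known character table data.
# A5 is simple, so ker(chi) = 1 for every nontrivial chi, and
# cod(chi) = |A5| / chi(1); the trivial character has codegree 1.

order_A5 = 60
degrees_A5 = [1, 3, 3, 4, 5]   # irreducible character degrees of A5

# Column orthogonality: the degrees' squares sum to |G|.
assert sum(d * d for d in degrees_A5) == order_A5

cod_A5 = {1} | {order_A5 // d for d in degrees_A5 if d > 1}
assert cod_A5 == {1, 12, 15, 20}
```

Note that the minimal nontrivial codegree, 12 = 60/5, comes from the largest character degree; this observation is what drives the proof of Lemma 3.1 below.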
+ 2. Preliminary Results
+ We first introduce some lemmas which will be used later.
+ Lemma 2.1. [21, Lemma 4.2] Let S be a finite nonabelian simple group. Then there exists 1_S ≠ χ ∈ Irr(S)
+ that extends to Aut(S).
+ Lemma 2.2. [17, Theorem 4.3.34] Let N be a minimal normal subgroup of G such that N = S1 × · · · × St,
+ where Si ≅ S is a nonabelian simple group for each i = 1, . . . , t. If χ ∈ Irr(S) extends to Aut(S), then
+ χ × · · · × χ ∈ Irr(N) extends to G.
+ Lemma 2.3. [13, Remark 2.6] Let G be a finite group and S a finite nonabelian simple group with
+ cod(G) = cod(S). Then G is a perfect group.
+ Lemma 2.4. [15] Let G be a finite group and S a finite nonabelian simple group such that cod(S) ⊆ cod(G).
+ Then |S| divides |G|.
+ Lemma 2.5. Let G be a finite group with N ⊴ G. Then cod(G/N) ⊆ cod(G).
+ Proof. From [16, Lemma 2.22], we can write Irr(G/N) = {χ̂ : χ̂(gN) = χ(g), χ ∈ Irr(G), N ⊆ ker(χ)}.
+ Take any χ̂ ∈ Irr(G/N). By definition, χ̂(1) = χ(1), so the denominators of cod(χ̂) and cod(χ) are equal.
+ In addition, ker(χ̂) ≅ ker(χ)/N, so |ker(χ)| = |N| · |ker(χ̂)|. Thus
+ |G/N : ker(χ̂)| = (|G|/|N|)/(|ker(χ)|/|N|) = |G|/|ker(χ)|,
+ so cod(χ̂) = cod(χ) and therefore cod(G/N) ⊆ cod(G). □
+ Lemma 2.6. Let G be a finite group with normal subgroups N and M such that N ≤ M. Then
+ cod(G/M) ⊆ cod(G/N).
+ Proof. By the Third Isomorphism Theorem, G/M ≅ (G/N)/(M/N) is a quotient of G/N, and by
+ Lemma 2.5, cod(G/M) ⊆ cod(G/N). □
+ Lemma 2.7. Let S be a finite nonabelian simple group and G a nontrivial finite group with
+ cod(G) ⊆ cod(S). Then |S| < |G| · |Irr(G)|.
+ Proof. For each irreducible character χ ∈ Irr(S) we have χ(1)² < |S|. Because S is simple, if χ is
+ nontrivial, then ker(χ) = 1, so cod(χ) = |S|/χ(1) > √|S|. Then, since cod(G) ⊆ cod(S), each nontrivial
+ irreducible character ψ ∈ Irr(G) satisfies cod(ψ) > √|S|. Thus |G : ker(ψ)|/ψ(1) > √|S|, which implies
+ ψ(1) < |G|/(|ker(ψ)|√|S|) ≤ |G|/√|S|.
+ Then Σ_{ψ∈Irr(G)} ψ(1)² < |Irr(G)| · |G|²/|S|, and since Σ_{ψ∈Irr(G)} ψ(1)² = |G|, we obtain
+ |G| < |Irr(G)| · |G|²/|S|. Thus |S| < |G| · |Irr(G)|. □
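Lemma 2.5 can be illustrated with hardcoded data for a small (non-simple) group; here G = S3 and N = A3, with degrees and kernel indices read off the familiar character table of S3 (this toy check is ours, not part of the paper):

```python
# Lemma 2.5 on G = S3, N = A3: cod(G/N) is contained in cod(G).
# Each character is stored as (degree, |G : ker(chi)|), data hardcoded
# from the standard character table of S3.

S3_chars = [(1, 1),   # trivial character: ker = S3
            (1, 2),   # sign character:    ker = A3, index 2
            (2, 6)]   # 2-dimensional character: faithful

cod_S3 = {idx // deg for deg, idx in S3_chars}
cod_C2 = {1, 2}       # S3 / A3 is cyclic of order 2

assert cod_S3 == {1, 2, 3}
assert cod_C2 <= cod_S3   # cod(G/N) ⊆ cod(G), as Lemma 2.5 asserts
```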
+ 3. Main Results
+ We start with some lemmas which limit the simple groups whose codegree set can be contained in the
+ codegree set of an alternating group.
+ Lemma 3.1. Let H be an alternating group of degree m ≠ n, where m, n ≥ 5. Then cod(H) ⊈ cod(An).
+ Proof. Suppose cod(Am) ⊆ cod(An). Then, by Lemma 2.4, |Am| divides |An|, so m < n. Let ax denote
+ the minimal non-trivial codegree of Ax. We show that an−1 < an, so that cod(Am) ⊈ cod(An) follows
+ immediately.
+ The irreducible representations of the symmetric group Sn are in one-to-one correspondence with the
+ partitions of n. Let λ be a partition of n and Vλ the corresponding irreducible representation of Sn. A
+ partition of n can be visualized by a Young diagram, and we let hλ(i, j) be the hook length of the (i, j)th
+ square of the Young diagram corresponding to λ, i.e. the number of cells (a, b) of λ such that a = i and
+ b ≥ j, or b = j and a ≥ i. By the hook length formula,
+ n!/dim(Vλ) = ∏ hλ(i, j) := Hλ.
+ Let Uλ be an irreducible constituent of the restriction ResSn→An Vλ of Vλ to An. If λ is not self-conjugate
+ (λ ≠ λ′), then ResSn→An Vλ remains irreducible, so Uλ = ResSn→An Vλ; in this case n!/dim(Uλ) = Hλ. If λ
+ is self-conjugate, then the restriction of Vλ to An splits into two irreducible representations of the same
+ dimension, so dim(Uλ) = (1/2) dim(Vλ); in this case n!/dim(Uλ) = 2Hλ.
+ Now an = min{(n!/2)/dim(Uλ) | Uλ ∈ Irr(An)} = (1/2) min({Hλ | λ ≠ λ′} ∪ {2Hλ | λ = λ′}). We want to
+ show that an−1 < an. First, assume that an = (1/2)·2Hλ for some λ = λ′. Then we can remove a square
+ from λ to obtain a non-self-conjugate partition µ of n − 1. Since Hµ < Hλ < 2Hλ and an−1 ≤ (1/2)Hµ, we
+ know an−1 < an.
+ Now assume that an = (1/2)Hλ for some λ ≠ λ′. Then, if n ≥ 3, we can remove a square from λ to obtain
+ a non-self-conjugate partition µ of n − 1. Since Hµ < Hλ and an−1 ≤ (1/2)Hµ, an−1 < an. Thus, if m < n,
+ then am < an, contradicting the assumption that cod(Am) ⊆ cod(An). □
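The quantities Hλ and an in this proof are directly computable. The sketch below (ours; the paper's own computations were done in Julia [6]) implements the hook length formula and the formula for an, and checks the monotonicity an−1 < an on small cases:

```python
from math import factorial

def partitions(n, maxpart=None):
    """Yield all partitions of n as non-increasing tuples."""
    if maxpart is None:
        maxpart = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def conjugate(lam):
    """Conjugate (transposed) partition lambda'."""
    return tuple(sum(1 for part in lam if part > j) for j in range(lam[0]))

def hook_product(lam):
    """H_lambda: product of all hook lengths of the Young diagram of lambda."""
    conj = conjugate(lam)
    prod = 1
    for i, row in enumerate(lam):
        for j in range(row):
            prod *= (row - j) + (conj[j] - i) - 1   # arm + leg + 1
    return prod

def a_n(n):
    """Minimal codegree a_n = (1/2) min({H | l != l'} u {2H | l = l'}).
    (The trivial partition (n) has the maximal H = n!, so it never attains
    the minimum and need not be excluded.)"""
    return min(hook_product(lam) * (1 if lam == conjugate(lam) else 0.5)
               for lam in partitions(n))

assert factorial(5) // hook_product((3, 2)) == 5   # dim V_(3,2) = 5
assert a_n(5) == 12                                # matches cod(A5): 60/5 = 12
assert all(a_n(n) < a_n(n + 1) for n in range(5, 9))
```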
+ Lemma 3.2. Let H be a sporadic simple group or the Tits group. Then, if n ≥ 5, cod(H) ⊈ cod(An).
+ Proof. In search of a contradiction, let H be a sporadic simple group or the Tits group such that
+ cod(H) ⊆ cod(An). From Lemmas 2.7 and 2.4, we deduce a tight restriction on the order of H: namely,
+ |H| = |An|/k, where 1 ≤ k < |Irr(H)| is an integer. Now, for each sporadic (or Tits) group H, we can
+ computationally check (using Julia [6]) which alternating groups An satisfy both |H| divides |An| and
+ |An|/|H| < |Irr(H)|. We find only one possible exception: An = A10 and H = J2, where
+ |A10|/|J2| = 3 < 21 = |Irr(J2)|. In this case, we check that cod(J2) ⊈ cod(A10) using the ATLAS [9]. □
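The search described in this proof is a few lines of code. Here is a sketch of the check (ours, in Python rather than the paper's Julia; the order and class number of J2 are ATLAS data):

```python
from math import factorial

def find_candidates(order_H, num_irr_H, n_max=40):
    """n with |H| dividing |A_n| and |A_n|/|H| < |Irr(H)| (Lemmas 2.4, 2.7)."""
    out = []
    for n in range(5, n_max + 1):
        order_An = factorial(n) // 2
        if order_An % order_H == 0 and order_An // order_H < num_irr_H:
            out.append(n)
    return out

# J2: |J2| = 604800 and |Irr(J2)| = 21 (ATLAS data).
assert find_candidates(604800, 21) == [10]   # only A10 survives: |A10|/|J2| = 3
```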
+ Lemma 3.3. Let H be a classical simple group of Lie type. Then cod(H) ⊈ cod(An) for all n ≥ 5.
+ Proof. There are six families of classical simple groups of Lie type: PSL(m + 1, q), Ω(2m + 1, q),
+ PSp(2m, q), O+(2m, q), PSU(m + 1, q), and O−(2m, q). We prove the lemma in each case. Let k(G)
+ denote the number of conjugacy classes of G; we reproduce [12, Table 2] for reference.
+ Table 1. Class Numbers for Classical Groups
+ G             | k(G) ≤        | Comments
+ SL(n, q)      | 2.5 q^(n−1)   |
+ SU(n, q)      | 8.26 q^(n−1)  |
+ Sp(2n, q)     | 10.8 q^n      | q odd
+ Sp(2n, q)     | 15.2 q^n      | q even
+ SO(2n + 1, q) | 7.1 q^n       | q odd
+ Ω(2n + 1, q)  | 7.3 q^n       | q odd
+ SO±(2n, q)    | 7.5 q^n       | q odd
+ Ω±(2n, q)     | 6.8 q^n       | q odd
+ O±(2n, q)     | 9.5 q^n       | q odd
+ SO±(2n, q)    | 14 q^n        | q even
+ O±(2n, q)     | 15 q^n        | q even
+ (1) Let H = PSL(m + 1, q), where q = p^k and m ≥ 1. From the order formula found in [7], q^(m(m+1)/2)
+ divides |PSL(m + 1, q)|. From Legendre’s formula, we know that for any prime p, |n!|_p ≤ p^(n/(p−1)).
+ If q = p^k, then |n!|_q ≤ q^(n/(k(p−1))) and thus |An|_q ≤ q^(n/(k(p−1))). By Lemma 2.4,
+ |PSL(m + 1, q)| divides |An|, so q^(m(m+1)/2) divides |An|. Thus m(m + 1)/2 ≤ n/(k(p − 1)), giving
+ n ≥ m(m + 1)k(p − 1)/2. Therefore, |An| ≥ |A_(m(m+1)k(p−1)/2)|.
+ Now, we note that k(PSL(m + 1, q)) ≤ k(SL(m + 1, q)), since PSL(m + 1, q) is a quotient of
+ SL(m + 1, q). Then, from Table 1, |Irr(PSL(m + 1, q))| = k(PSL(m + 1, q)) ≤ k(SL(m + 1, q)) ≤ 2.5 q^m.
+ Applying Lemma 2.7 gives |An| < |PSL(m + 1, q)| · |Irr(PSL(m + 1, q))|. Hence
+ |A_(m(m+1)k(p−1)/2)| < |PSL(m + 1, q)| · 2.5 q^m. We now show that, considering the left and right sides
+ as functions of m with constants p and k, the value of |A_(m(m+1)k(p−1)/2)| asymptotically grows faster
+ than that of |PSL(m + 1, q)| · 2.5 q^m. The left-hand side behaves asymptotically as (m²)!, and, using the
+ order formula for PSL(m + 1, q), the right-hand side behaves asymptotically as q^f(m), where f(m) is a
+ polynomial of degree 2. Thus the left function grows faster than the right function, since x! ≫ c^x for any
+ constant c when x is large. Similarly, we can prove this result considering the two sides as functions of p
+ and k.
+ Then, we search for the maximum possible value of m which satisfies the inequality given the smallest
+ possible values of p and k, which are 2 and 1, respectively. We find that m ≤ 6 and, using a similar
+ process for p and k, that p ≤ 17 and k ≤ 63. Now we have limited our search to a finite number of
+ groups, which we can check in the same way as for the sporadic groups. From this, we find a small list of
+ exceptions, listed in Table 2:
+ Table 2. Exceptions satisfying |PSL(m + 1, q)| divides |An| and |An| < |PSL(m + 1, q)| · 2.5 q^m
+ m | q | n
+ 1 | 4 | 5
+ 1 | 4 | 6
+ 1 | 8 | 7
+ 1 | 9 | 6
+ 1 | 9 | 7
+ 1 | 5 | 5
+ 1 | 5 | 6
+ 1 | 7 | 7
+ 2 | 4 | 8
+ 2 | 4 | 9
+ 3 | 2 | 8
+ 3 | 2 | 9
+ Now, all of these exceptions can be found in the ATLAS, and it is routine to check that none of these
+ groups satisfy cod(PSL(m + 1, q)) ⊆ cod(An) unless PSL(m + 1, q) ≅ An. Thus, if PSL(m + 1, q) ≇ An,
+ then cod(PSL(m + 1, q)) ⊈ cod(An).
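Both ingredients of case (1) — Legendre's formula and the order formula for PSL — are easy to implement. The sketch below (ours) also confirms why rows such as (m, q, n) = (3, 2, 8) and (1, 4, 5) of Table 2 must be inspected separately: there the two orders coincide exactly, reflecting the isomorphisms PSL(4, 2) ≅ A8 and PSL(2, 4) ≅ A5.

```python
from math import factorial, gcd

def legendre_val(n, p):
    """Exponent of the prime p in n! (Legendre's formula)."""
    v, pk = 0, p
    while pk <= n:
        v += n // pk
        pk *= p
    return v

def psl_order(m, q):
    """|PSL(m+1, q)| = q^(m(m+1)/2) * prod_{i=2}^{m+1} (q^i - 1) / gcd(m+1, q-1)."""
    order = q ** (m * (m + 1) // 2)
    for i in range(2, m + 2):
        order *= q ** i - 1
    return order // gcd(m + 1, q - 1)

# Legendre's formula gives |n!|_p = p^v with v <= n/(p-1):
assert all(legendre_val(n, p) <= n / (p - 1) for p in (2, 3, 5) for n in range(1, 100))

# Orders coinciding with alternating groups (Table 2 rows (3,2,8) and (1,4,5)):
assert psl_order(3, 2) == factorial(8) // 2 == 20160   # PSL(4,2) ≅ A8
assert psl_order(1, 4) == factorial(5) // 2 == 60      # PSL(2,4) ≅ A5
```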
+ (2) Let H = Ω(2m + 1, q), where q = p^k is odd and m ≥ 2. Note that when q = 2^k is even,
+ Ω(2m + 1, q) ≅ PSp(2m, q), which we deal with in the next case. From [7], q^(m²) divides
+ |Ω(2m + 1, q)|. Thus, using Table 1 similarly to above, |A_(m²k(p−1))| < |Ω(2m + 1, q)| · 7.3 q^m. As
+ above, we computationally check that we get a contradiction if m > 2, p > 3, or k > 1, so m = 2, p = 3,
+ and k = 1 is the only possibility. We get the list of exceptions in Table 3 after checking divisibility.
+ Table 3. Exceptions satisfying |Ω(2m + 1, q)| divides |An| and |An| < |Ω(2m + 1, q)| · 7.3 q^m
+ m | q | n
+ 2 | 3 | 9
+ Again, we check the ATLAS and find that cod(Ω(5, 3)) ⊈ cod(A9).
+ (3) Let H = PSp(2m, q), where q = p^k and m ≥ 3. From [7], q^(m²) divides |PSp(2m, q)|. Since
+ PSp(2m, q) is a quotient of Sp(2m, q), we have k(PSp(2m, q)) ≤ k(Sp(2m, q)). From Table 1,
+ |A_(m²k(p−1))| < |PSp(2m, q)| · 15.2 q^m. We computationally check that we get a contradiction if m > 4,
+ p > 2, or k > 2, so m = 3 or 4, p = 2, and k = 1 or 2 are the only possibilities. We get no exceptions
+ after checking divisibility.
+ (4) Let H = O+(2m, q), where q = p^k and m ≥ 4. From [7], q^(m(m−1)) divides |O+(2m, q)|. Using
+ Table 1, |A_(m(m−1)k(p−1))| < |O+(2m, q)| · 15 q^m. As above, we computationally check that we get a
+ contradiction if m > 4, p > 2, or k > 1, so m = 4, p = 2, and k = 1 is the only possibility, and we get no
+ possible exceptions after checking divisibility.
+ (5) Let H = PSU(m + 1, q), where q = p^k and m ≥ 2. From [7], q^(m(m+1)/2) divides |PSU(m + 1, q)|.
+ Since PSU(m + 1, q) is a quotient of SU(m + 1, q), we have k(PSU(m + 1, q)) ≤ k(SU(m + 1, q)). From
+ Table 1, |A_(m(m+1)k(p−1)/2)| < |PSU(m + 1, q)| · 8.26 q^m. Again, we computationally check that we get
+ a contradiction if m > 6, p > 7, or k > 42, so m ≤ 6, p ≤ 7, and k ≤ 42 are the only possibilities. We get
+ Table 4 after checking divisibility:
+ Table 4. Exceptions satisfying |PSU(m + 1, q)| divides |An| and |An| < |PSU(m + 1, q)| · 8.26 q^m
+ m | q | n
+ 2 | 3 | 9
+ 3 | 2 | 9
+ We check the ATLAS to find that cod(PSU(3, 3)) ⊈ cod(A9), and we note that PSU(4, 2) ≅ Ω(5, 3),
+ which we have already ruled out.
+ (6) Let H = O−(2m, q), where q = p^k and m ≥ 4. From [7], q^(m(m−1)) divides |O−(2m, q)|. Thus, using
+ Table 1 similarly to above, |A_(m(m−1)k(p−1))| < |O−(2m, q)| · 15 q^m. Again, we computationally check
+ that we get a contradiction if m > 5, p > 3, or k > 3, so m ≤ 5, p ≤ 3, and k ≤ 3 are the only
+ possibilities, and we get no possible exceptions after checking divisibility. □
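The single exception of Table 3 can be reproduced with the standard order formula for Ω(2m + 1, q) (sketch ours; note Ω(5, 3) ≅ PSp(4, 3) has order 25920):

```python
from math import factorial, gcd

def omega_odd_order(m, q):
    """|Omega(2m+1, q)| = q^(m^2) * prod_{i=1}^{m} (q^(2i) - 1) / gcd(2, q-1)."""
    order = q ** (m * m)
    for i in range(1, m + 1):
        order *= q ** (2 * i) - 1
    return order // gcd(2, q - 1)

order_H = omega_odd_order(2, 3)      # Omega(5, 3), i.e. PSp(4, 3)
order_A9 = factorial(9) // 2

assert order_H == 25920
assert order_A9 % order_H == 0              # |Omega(5,3)| divides |A9| ...
assert order_A9 // order_H < 7.3 * 3 ** 2   # ... and the ratio 7 beats the bound 65.7,
                                            # so (m, q, n) = (2, 3, 9) survives the sieve
```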
+ Lemma 3.4. Let H be an exceptional simple group of Lie type. Then, if n ≥ 5, cod(H) ⊈ cod(An).
+ Proof. There are ten families of exceptional simple groups of Lie type (other than the Tits group): E6(q),
+ E7(q), E8(q), F4(q), G2(q), 2E6(q), 3D4(q), 2B2(q), 2F4(q), and 2G2(q). We prove the lemma in each
+ case. First, we reproduce [12, Table 1] for reference.
+ Table 5. Class Numbers for Exceptional Groups
+ G      | k(G) ≤                                                      | Comments
+ 2B2(q) | q + 3                                                       | q = 2^(2m+1)
+ 2G2(q) | q + 8                                                       | q = 3^(2m+1)
+ G2(q)  | q^2 + 2q + 9                                                |
+ 2F4(q) | q^2 + 4q + 17                                               | q = 2^(2m+1)
+ 3D4(q) | q^4 + q^3 + q^2 + q + 6                                     |
+ F4(q)  | q^4 + 2q^3 + 7q^2 + 15q + 31                                |
+ E6(q)  | q^6 + q^5 + 2q^4 + 2q^3 + 15q^2 + 21q + 60                  |
+ 2E6(q) | q^6 + q^5 + 2q^4 + 4q^3 + 18q^2 + 26q + 62                  |
+ E7(q)  | q^7 + q^6 + 2q^5 + 7q^4 + 17q^3 + 35q^2 + 71q + 103         |
+ E8(q)  | q^8 + q^7 + 2q^6 + 3q^5 + 10q^4 + 16q^3 + 40q^2 + 67q + 112 |
+ (1) Let H ≅ E6(q), where q = p^k. From the order formula found in [7], q^36 divides |E6(q)|. From [5],
+ we know that for any prime p, |n!|_p ≤ p^(n/(p−1)). If q = p^k, then |n!|_q ≤ q^(n/(k(p−1))) and thus
+ |An|_q ≤ q^(n/(k(p−1))), where |An|_p is the p-part of |An|. By Lemma 2.4, |E6(q)| divides |An|, so q^36
+ divides |An|. Thus 36 ≤ n/(k(p − 1)) and n ≥ 36k(p − 1). Therefore, |An| ≥ |A_(36k(p−1))|.
+ Now, we note from Table 5 that |Irr(E6(q))| = k(E6(q)) ≤ q^6 + q^5 + 2q^4 + 2q^3 + 15q^2 + 21q + 60.
+ Applying Lemma 2.7 gives |An| < |E6(q)| · |Irr(E6(q))|. Hence |A_(36k(p−1))| < |E6(q)| · (q^6 + q^5 +
+ 2q^4 + 2q^3 + 15q^2 + 21q + 60). As with the classical groups of Lie type, we can computationally find an
+ upper bound on p and k, since the left side grows faster in terms of p and k than the right side. In this
+ case, we find that no values of p and k satisfy the inequality, since substituting p = 2 and k = 1 already
+ gives |A36| > |E6(2)| · (2^6 + 2^5 + 2·2^4 + 2·2^3 + 15·2^2 + 21·2 + 60). Thus, there are no possible
+ values of q and n such that cod(E6(q)) ⊆ cod(An).
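The failure of the inequality already at p = 2, k = 1 can be verified with exact integer arithmetic (sketch ours, using the standard order formula for E6(q)):

```python
from math import factorial, gcd

def e6_order(q):
    """|E6(q)| = q^36 * prod (q^d - 1), d in {2, 5, 6, 8, 9, 12}, over gcd(3, q-1)."""
    order = q ** 36
    for d in (2, 5, 6, 8, 9, 12):
        order *= q ** d - 1
    return order // gcd(3, q - 1)

q = 2
class_bound = q**6 + q**5 + 2*q**4 + 2*q**3 + 15*q**2 + 21*q + 60   # Table 5 bound
assert class_bound == 306

# Already at the smallest parameters p = 2, k = 1, Lemma 2.7 fails:
# |A36| exceeds |E6(2)| * k(E6(2)), so no E6(q) can work.
assert factorial(36) // 2 > e6_order(2) * class_bound
```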
+ (2) Let H ∼= E7(q) where q = pk. From [7], q63 divides |E7(q)|. From Table 5, |A63k(p−1)| < |E7(q)| ·
380
+ (q7 +q7 +2q5 +7q4 +17q3 +35q2+71q +103). We computationally check that we get a contradiction
381
+ for p = 2, k = 1, so there are no possible exceptions.
382
+ 5
383
+
384
+ (3) Let H ∼= E8(q) where q = p^k. From [7], q^120 divides |E8(q)|. Thus, using Table 5 as above, we have
+ |A120k(p−1)| < |E8(q)| · (q^8 + q^7 + 2q^6 + 3q^5 + 10q^4 + 16q^3 + 40q^2 + 67q + 112). Now, we computationally
+ check that we get a contradiction for p = 2, k = 1, so there are no possible exceptions.
+ (4) Let H ∼= F4(q) where q = p^k. From [7], q^24 divides |F4(q)|. From Table 5, |A24k(p−1)| < |F4(q)| · (q^4 +
+ 2q^3 + 7q^2 + 15q + 31). Again, we computationally check that we get a contradiction for p = 2, k = 1,
+ so there are no possible exceptions.
+ (5) Let H ∼= G2(q) where q = p^k. From [7], q^6 divides |G2(q)|. Thus, using Table 5 as above, |A6k(p−1)| <
+ |G2(q)| · (q^2 + 2q + 9). Now, we find that p = 2, k = 1 satisfies the inequality, but any other values
+ of p and k do not. However, we note that G2(2) is not simple, so we instead consider its derived
+ subgroup G2(2)′ (which still satisfies the above inequality). We check for exceptions where |G2(2)′|
+ divides |An| and |An| < |G2(2)′| · (2^2 + 2·2 + 9), but there are none.
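The divisibility check for G2(2)′ is small enough to brute-force; the sketch below assumes the known value |G2(2)′| = |PSU(3, 3)| = 6048, which is not stated explicitly above.

```python
from math import factorial

order_G2_2_prime = 6048                  # |G2(2)'| = |PSU(3,3)|
bound = order_G2_2_prime * (2**2 + 2*2 + 9)

# Exceptions would be degrees n with |G2(2)'| dividing |A_n|
# and |A_n| < bound; |A_n| exceeds the bound for all n >= 9.
exceptions = [n for n in range(5, 12)
              if factorial(n) // 2 % order_G2_2_prime == 0
              and factorial(n) // 2 < bound]
print(exceptions)
```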
+ (6) Let H ∼= 2E6(q) where q = p^k. From [7], q^36 divides |2E6(q)|. Using Table 5, |A36k(p−1)| <
+ |2E6(q)| · (q^6 + q^5 + 2q^4 + 4q^3 + 18q^2 + 26q + 62). Again, we computationally check that we get a
+ contradiction for p = 2, k = 1, so there are no possible exceptions.
+ (7) Let H ∼= 3D4(q) where q = p^k. From [7], q^12 divides |3D4(q)|. Thus, using Table 5 similarly to
+ above, |A12k(p−1)| < |3D4(q)| · (q^4 + q^3 + q^2 + q + 6). Now, we find that p = 2, k = 1 satisfies the
+ inequality, but any other values of p and k do not. As for the sporadic groups, we check for possible
+ exceptions where |3D4(2)| divides |An| and |An| < |3D4(2)| · (2^4 + 2^3 + 2^2 + 2 + 6), but there are
+ none.
+ (8) Let H ∼= 2B2(q) where q = 2^(2m+1) and m ≥ 1. From [7], q^2 divides |2B2(q)|. From Table 5, we
+ have that |A2(2m+1)| < |2B2(q)| · (q + 3). In this case, we computationally check that we get a
+ contradiction if m > 4, so m must be less than 5. However, checking the divisibility condition, we
+ get no exceptions.
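The m < 5 threshold for 2B2(q) can likewise be reproduced; this sketch assumes the classical order formula |2B2(q)| = q^2 (q^2 + 1)(q − 1), which the text above does not spell out.

```python
from math import factorial

def order_Sz(q):
    # |2B2(q)| = q^2 (q^2 + 1)(q - 1) for q = 2^(2m+1)
    return q**2 * (q**2 + 1) * (q - 1)

def contradiction(m):
    # True when |A_{2(2m+1)}| already exceeds |2B2(q)|(q + 3),
    # i.e. the required inequality cannot hold
    q = 2**(2*m + 1)
    return factorial(2*(2*m + 1)) // 2 > order_Sz(q) * (q + 3)

print({m: contradiction(m) for m in range(1, 7)})
```

The inequality first fails at m = 5, matching the bound m < 5 above.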
+ (9) Let H ∼= 2F4(q) where q = 2^(2m+1) and m ≥ 1. From [7], q^12 divides |2F4(q)|. Thus, using Table
+ 5 as above, |A12(2m+1)| < |2F4(q)| · (q^2 + 4q + 17). Now, we computationally check that we get a
+ contradiction for m = 1, so there are no exceptions.
+ (10) Let H ∼= 2G2(q) where q = 3^(2m+1) and m ≥ 1. From [7], q^3 divides |2G2(q)|. From Table 5,
+ |A3(2m+1)·2| < |2G2(q)| · (q + 8). Again, we computationally check that we get a contradiction for
+ m = 1, so there are no exceptions.
+
+ Theorem 3.5. Let G be a finite group such that cod(G) = cod(An) where n ≥ 5. Let N be a maximal
+ normal subgroup of G. Then G/N ∼= An.
+ Proof. By Lemma 2.3, G is perfect. Thus G/N is a nonabelian simple group. By Lemma 2.6, we have
+ cod(G/N) ⊆ cod(G) = cod(An). By Lemmas 3.1, 3.2, 3.3, and 3.4, G/N cannot be an alternating group
+ of degree m ̸= n, a sporadic simple group or the Tits group, a classical simple group of Lie type, or an
+ exceptional simple group of Lie type. Thus, G/N ∼= An.
+
+ Now we present the proof of Theorem 1.1.
+ Proof. Let G be a minimal counterexample and N be a maximal normal subgroup of G. By Lemma 2.3, G
+ is perfect, and by Theorem 3.5, G/N ∼= An. In particular, N ̸= 1 as G ̸∼= An.
+ Step 1: N is a minimal normal subgroup of G.
+ Suppose L is a non-trivial normal subgroup of G with L < N. Then by Lemma 2.6, we have cod(G/N) ⊆
+ cod(G/L) ⊆ cod(G). However, cod(G/N) = cod(An) = cod(G), so equality must be obtained in each
+ inclusion. Thus, cod(G/L) = cod(An), which implies that G/L ∼= An since G is a minimal counterexample.
+ This is a contradiction since we also have G/N ∼= An, but L < N.
+ Step 2: N is the only non-trivial, proper normal subgroup of G.
+ Otherwise, assume M is another proper nontrivial normal subgroup of G. If N is contained in M, then
+ M = N or M = G since G/N is simple, a contradiction. Hence N ∩ M = 1 and G = N × M. Since M is also
+ a maximal normal subgroup of G, we have N ∼= M ∼= An. Choose ψ1 ∈ Irr(N) and ψ2 ∈ Irr(M) such that
+ cod(ψ1) = cod(ψ2) = max(cod(An)). Set χ = ψ1 · ψ2 ∈ Irr(G). Then cod(χ) = (max(cod(An)))^2 /∈ cod(G),
+ a contradiction.
+ Step 3: For each non-trivial χ ∈ Irr(G|N) := Irr(G) − Irr(G/N), χ is faithful.
+ We construct Irr(G/N) in the same way as in Lemma 2.5. Then it follows from the definition of Irr(G|N)
+ that if χ ∈ Irr(G|N), then N ̸≤ ker(χ). Thus, since N is the unique nontrivial, proper, normal subgroup of G,
+ ker(χ) = G or ker(χ) = 1. Therefore, ker(χ) = 1 for all nontrivial χ ∈ Irr(G|N).
+ Step 4: N is an elementary abelian group.
+ Suppose that N is not abelian. Since N is a minimal normal subgroup, by [10, Theorem 4.3A (iii)],
+ N = S^n where S is a nonabelian simple group and n ∈ Z^+. By Lemmas 2.1 and 2.2, there is a non-trivial
+ character χ ∈ Irr(N) which extends to some ψ ∈ Irr(G). Now, ker(ψ) = 1 by Step 3, so cod(ψ) =
+ |G|/ψ(1) = |G/N| · |N|/χ(1). However, by assumption, we have that cod(G) = cod(An) = cod(G/N). Thus,
+ cod(ψ) ∈ cod(G) = cod(G/N), so cod(ψ) = |G/N|/φ(1) for some φ ∈ Irr(G/N). Hence, |G/N| is divisible by
+ cod(ψ), which contradicts the fact that cod(ψ) = |G/N| · |N|/χ(1), as χ(1) ̸= |N|. Thus N must be abelian.
+ Now, to show that N is elementary abelian, let a prime p divide |N|. Then N has a p-Sylow subgroup
+ K, and K is the unique p-Sylow subgroup of N since N is abelian, so K is characteristic in N. Thus,
+ K is a normal subgroup of G, so K = N as N is minimal. Thus |N| = p^n. Now, take the subgroup
+ N^p = {n^p | n ∈ N} of N, which is proper by Cauchy’s theorem. Since N^p is characteristic in N, it must
+ be normal in G, so N^p is trivial by the minimality of N. Thus every element of N has order p, and N is
+ elementary abelian.
+ Step 5: CG(N) = N.
+ First note that since N is normal, CG(N) ⊴ G. Additionally, since N is abelian by Step 4, N ≤ CG(N).
+ By the maximality of N, we must have CG(N) = N or CG(N) = G. If CG(N) = N, we are done.
+ If not, then CG(N) = G, so N must be in the center of G. Then, since N is the unique minimal normal
+ subgroup of G by Step 2, we must have that |N| is prime. If not, there always exists a proper non-trivial
+ subgroup K of N, and K is normal since it is contained in Z(G), contradicting the minimality of N. Moreover,
+ since G is perfect, we have that Z(G) = N, and N is isomorphic to a subgroup of the Schur multiplier of
+ G/N [16, Corollary 11.20].
+ Now, we note that it is well-known that for n > 7, the Schur multiplier of An is Z2, so G ∼= 2.An.
+ From [20], 2.An always has an irreducible character of degree 2^⌊(n−2)/2⌋. Let χ be such an irreducible
+ character of 2.An with χ(1) = 2^⌊(n−2)/2⌋. Recall that by Step 2, there is only one non-trivial proper normal
+ subgroup of G ∼= 2.An. In particular, N ∼= Z2 is the only non-trivial proper normal subgroup of G. Thus
+ |ker(χ)| = 1 or 2. Then we have cod(χ) = |2.An : ker(χ)|/χ(1). If |ker(χ)| = 1, then cod(χ) = n!/2^⌊(n−2)/2⌋,
+ and if |ker(χ)| = 2, then cod(χ) = (n!/2)/2^⌊(n−2)/2⌋ = n!/2^⌊n/2⌋. In either case, for any prime p ̸= 2,
+ |cod(χ)|_p = |n!|_p = |An|_p. Since cod(G) = cod(An), we know that cod(χ) ∈ cod(An). Therefore, there is
+ a character degree of An which is a power of 2.
+ However, from [20], we know that for n > 7, An only has a character degree equal to a power of 2 when
+ n = 2^d + 1 for some positive integer d. In this case, 2^d = n − 1 ∈ cd(An), so we need |An|/(n − 1) =
+ |2.An|/2^⌊(n−2)/2⌋ or |2.An|/2^⌊n/2⌋. Hence, 1/(n − 1) = 2/2^⌊(n−2)/2⌋ = 1/2^(⌊(n−2)/2⌋−1) or
+ 1/(n − 1) = 1/2^(⌊n/2⌋−1), so n − 1 = 2^(⌊(n−2)/2⌋−1) or 2^(⌊n/2⌋−1). However, the only integer solution
+ to either of these equations occurs when n = 9 and 9 − 1 = 8 = 2^3 = 2^(⌊9/2⌋−1). In this case, we check
+ the ATLAS [9] to find that the codegree sets of A9 and 2.A9 do not have the same order.
+ This is a contradiction, so CG(N) = N.
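That n = 9 is the only degree of the form n = 2^d + 1 solving either equation can be confirmed by a quick search (a sanity check, not a replacement for the growth argument):

```python
# Solve n - 1 = 2^(floor((n-2)/2) - 1) or n - 1 = 2^(floor(n/2) - 1)
# over n = 2^d + 1; for d >= 4 the right-hand sides dwarf n - 1,
# so a small range of d suffices.
solutions = []
for d in range(2, 13):
    n = 2**d + 1
    if n - 1 in (2**((n - 2)//2 - 1), 2**(n//2 - 1)):
        solutions.append(n)
print(solutions)
```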
+ Step 6: Let λ be a non-trivial character in Irr(N) and ϑ ∈ Irr(IG(λ)|λ), the set of irreducible constituents
+ of λ^IG(λ), where IG(λ) is the inertia group of λ in G. Then |IG(λ)|/ϑ(1) ∈ cod(G). Also, ϑ(1) divides
+ |IG(λ)/N|, and |N| divides |G/N|. Lastly, IG(λ) < G, i.e. λ is not G-invariant.
+ Let λ be a non-trivial character in Irr(N) and ϑ ∈ Irr(IG(λ)|λ). Let χ be an irreducible constituent of
+ ϑ^G. By [16, Corollary 5.4], we know χ ∈ Irr(G), and by [16, Definition 5.1], we have χ(1) = (|G|/|IG(λ)|) · ϑ(1).
+ Moreover, we know that ker(χ) = 1 by Step 3, and thus cod(χ) = |G|/χ(1) = |IG(λ)|/ϑ(1), so
+ |IG(λ)|/ϑ(1) ∈ cod(G). Now, since N is abelian, λ(1) = 1, so we have ϑ(1) = ϑ(1)/λ(1), which divides
+ |IG(λ)|/|N|, so |N| divides |IG(λ)|/ϑ(1). Moreover, we know that cod(G) = cod(G/N), and all elements in
+ cod(G/N) divide |G/N|, so |N| divides |G/N|.
+ Next, we want to show IG(λ) is a proper subgroup of G. To reach a contradiction, assume IG(λ) = G.
+ Then ker(λ) ⊴ G. From Step 2, we know ker(λ) = 1, and from Step 4, we know N is a cyclic group of prime
+ order. Thus by the Normalizer-Centralizer theorem, we have G/N = NG(N)/CG(N) ≤ Aut(N), so G/N is
+ abelian, a contradiction.
+ Step 7: Final contradiction.
+ From Step 4, N is an elementary abelian group of order p^m for some prime p and integer m ≥ 1. By
+ the Normalizer-Centralizer theorem, An ∼= G/N = NG(N)/CG(N) ≤ Aut(N) and m > 1. Note that in
+ general, Aut(N) = GL(m, p). By Step 6, |N| divides |G/N|, so we know that |N| = p^m divides |An| and
+ G/N ∼= An ≲ GL(m, p). We prove by contradiction that this cannot occur.
+ First, we claim that if p^m divides |An| and An ≲ GL(m, p), then p must equal 2. To show this, we note
+ that for p > 2, by [5], we have that if p^m divides |An|, then m < n/2. However, Theorem 1.1 of [24] shows that
+ if n > 6, the minimal faithful degree of a modular representation of An over a field of characteristic p is at
+ least n − 2. Since embedding An as a subgroup of GL(m, p) is equivalent to giving a faithful representation
+ of degree m over a field of characteristic p, we have that m ≥ n − 2. This is a contradiction since n/2 > n − 2
+ implies n < 4. Therefore, p = 2.
+ Now, let p = 2. As above, from [5], we obtain |n!|_2 ≤ 2^(n−1). Thus, if 2^m divides |An|, then
+ 2^m ≤ |An|_2 ≤ 2^(n−2). Now, Theorem 1.1 of [23] shows that if n > 8, then the minimal faithful degree of
+ a modular representation of An over a field of characteristic 2 is at least n − 2. Therefore, we must have
+ m ≥ n − 2, so 2^m = |An|_2 = 2^(n−2) is the only option.
+ Let λ ∈ Irr(N), ϑ ∈ Irr(IG(λ)|λ), and T := IG(λ). Then 1 < |G : T| < |N| = 2^(n−2), for |G : T| is
+ the number of all conjugates of λ. By Step 6, we know that |T|/ϑ(1) ∈ cod(G) and, moreover, that |N| divides
+ |T|/ϑ(1). Since |N|_2 = |An|_2 and cod(G) = cod(An), we know that (|T|/ϑ(1))_2 = |N|_2. Thus
+ (|T/N|/ϑ(1))_2 = 1, so the 2-parts of |T/N| and ϑ(1) are equal. Thus for every ϑ ∈ Irr(T | λ), we have
+ |ϑ(1)|_2 = |T/N|_2. However, |T/N| = Σ_{ϑ∈Irr(T|λ)} ϑ(1)^2. Hence, if |ϑ(1)|_2 = 2^k ≥ 2 for every
+ ϑ ∈ Irr(T | λ), we would have |T/N|_2 ≥ 2^(2k), contradicting the fact that |ϑ(1)|_2 = |T/N|_2 = 2^k.
+ Therefore, |T/N|_2 = 1. Thus, since |G/N|_2 ≥ |N|_2 = 2^(n−2), we have |G : T|_2 = |G/N : T/N|_2 ≥ 2^(n−2),
+ so |G : T| ≥ 2^(n−2) = |N|, which is a contradiction.
+ We have one final exception to consider: n = 8, p = 2, and m = 4, 5, or 6. In this case, A8 ∼= GL(4, 2) and
+ 2^6 divides |A8|. Now, cod(A8) = {1, 2^6·3^2·5, 2^5·3^2·5, 2^4·3^2·7, 2^6·3·5, 2^4·3^2·5, 2^6·3^2, 2^6·7, 2^3·3^2·5, 3^2·5·7, 2^5·3^2}
+ from [13]. We will look at each possibility for m in turn.
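Since every codegree |G : ker χ|/χ(1) divides |G|, the listed set can be sanity-checked against |A8| = 20160 (a small illustration, not part of the proof):

```python
cod_A8 = [1, 2**6*3**2*5, 2**5*3**2*5, 2**4*3**2*7, 2**6*3*5,
          2**4*3**2*5, 2**6*3**2, 2**6*7, 2**3*3**2*5,
          3**2*5*7, 2**5*3**2]
order_A8 = 20160                      # |A_8| = 8!/2

# Every codegree should divide the group order, and the listed
# values should be pairwise distinct.
print(all(order_A8 % c == 0 for c in cod_A8), len(set(cod_A8)))
```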
+ First, let m = 4. Then we have G/N ∼= A8 ∼= GL(4, 2) and N = (Z2)^4, so G is an extension of GL(4, 2) by N.
+ Suppose first that this extension is split and G is a semidirect product. This semidirect product is defined
+ by a homomorphism φ : GL(4, 2) → Aut((Z2)^4) ∼= GL(4, 2). However, since GL(4, 2) is simple, ker(φ) = 1 or
+ GL(4, 2). In the latter case, we have the trivial direct product, so there are at least two copies of GL(4, 2) as
+ normal subgroups of G, which contradicts Step 2. In the former case, φ is some automorphism of GL(4, 2).
+ Here, we can check using GAP that any such φ creates a semidirect product (Z2)^4 ⋊φ GL(4, 2) which does
+ not have the same codegree set as A8. Now, suppose that the extension is non-split. Then, [4] gives that
+ there is a unique non-split extension 2^4.GL(4, 2). However, we find using GAP that it does not have the same
+ codegree set as A8.
+ Second, let m = 5. As above, |G : T| < |N| = 2^5 and |T|/ϑ(1) ∈ cod(G) such that 2^5 divides |T|/ϑ(1).
+ Further, (|T/N|/ϑ(1))_2 ≤ 2, so |T/N|_2 ≤ 4 and |G/N : T/N|_2 ≥ 16. Thus, we have 16 divides |G/N : T/N|
+ and |G/N : T/N| < 32. But we check the index of all subgroups of G/N ∼= A8 using GAP and find that none
+ of them satisfy these two properties.
+ Finally, let m = 6. Now, |N|_2 = |A8|_2. For this case the same argument as above for general An holds,
+ and we reach a contradiction. Thus we find that every |N| = p^m produces a contradiction, so N = 1 and
+ G ∼= An.
+
+ 4. Acknowledgements
+ This research was conducted under NSF-REU grants DMS-1757233 and DMS-2150205 and NSA grants
+ H98230-21-1-0333 and H98230-22-1-0022 by Dolorfino, Martin, Slonim, and Sun during the Summer of 2022
+ under the supervision of Yang. The authors gratefully acknowledge the financial support of the NSF and
+ NSA, and also thank Texas State University for providing a great working environment and support. Yang
+ was also partially supported by grants from the Simons Foundation (#499532, #918096, to YY). The authors
+ would also like to thank Prof. Richard Stanley for his help.
+
+ References
+ [1] N. Ahanjideh, Nondivisibility among irreducible character co-degrees. Bull. Aust. Math. Soc., 105 (2022), 68-74.
+ [2] K. Aziziheris, F. Shafiei, F. Shirjian, Simple groups with few irreducible character degrees. J. Algebra Appl., 20 (2021), 2150139.
+ [3] A. Bahri, Z. Akhlaghi, B. Khosravi, An analogue of Huppert’s conjecture for character codegrees. Bull. Aust. Math. Soc., 104 (2021), no. 2, 278-286.
+ [4] A. B. M. Basheer and J. Moori, Fischer Matrices of Dempwolff Group 2^5.GL(5, 2). Int. J. Group Theory, 1 (2012), 43-63.
+ [5] C. Bessenrodt, H. P. Tong-Viet, J. Zhang, Huppert’s conjecture for alternating groups. J. Algebra, 470 (2017), 353-378.
+ [6] J. Bezanson, S. Karpinski, V. B. Shah, A. Edelman, Julia: A fast dynamic language for technical computing. ArXiv Preprint, ArXiv:1209.5145.
+ [7] R. W. Carter, Simple Groups of Lie Type. Wiley, 1989.
+ [8] D. Chillag and M. Herzog, On character degrees quotients. Arch. Math., 55 (1990), 25-29.
+ [9] J. H. Conway et al., Atlas of Finite Groups. Oxford Clarendon Press, 1985.
+ [10] J. D. Dixon and B. Mortimer, Permutation Groups. Springer, 1996.
+ [11] M. Dolorfino, L. Martin, Z. Slonim, Y. Sun, Y. Yang, On the characterization of sporadic simple groups by codegrees. Submitted.
+ [12] J. Fulman and R. Guralnick, Bounds on the number and sizes of conjugacy classes in finite Chevalley groups with applications to derangements. Trans. Amer. Math. Soc., 364 (2012), 3023-3070.
+ [13] M. Gintz, M. Kortje, M. Laurence, Y. Liu, Z. Wang, Y. Yang, On the characterization of some nonabelian simple groups with few codegrees. Comm. Algebra, 50 (2022), 3932-3939.
+ [14] H. Guan, X. Zhang, Y. Yang, Recognizing Ree groups 2G2(q) using the codegree set. Bull. Aust. Math. Soc., https://www.doi.org/10.1017/S0004972722001022.
+ [15] N. N. Hung, Group pseudo-algebras of finite simple groups. In progress.
+ [16] I. M. Isaacs, Character Theory of Finite Groups. New York Academic Press, 1976.
+ [17] G. James and A. Kerber, The Representation Theory of the Symmetric Group. Addison-Wesley Publishing Company, 1981.
+ [18] E. I. Khukhro and V. D. Mazurov, Unsolved Problems in Group Theory. The Kourovka Notebook. No. 20. Russian Academy of Sciences, 2022.
+ [19] Y. Liu and Y. Yang, Huppert’s analogue conjecture for PSL(3, q) and PSU(3, q). Results Math., 78 (2023), No. 7.
+ [20] G. Malle and A. E. Zalesskii, Prime power degree representations of quasi-simple groups. Arch. Math., 77 (2001), 461-468.
+ [21] A. Moretó, Complex group algebra of finite groups: Brauer’s problem 1. Adv. Math., 208 (2007), 236-248.
+ [22] G. Qian, Y. Wang, H. Wei, Co-degrees of irreducible characters in finite groups. J. Algebra, 312 (2007), 946-955.
+ [23] A. Wagner, The faithful linear representations of least degree of Sn and An over a field of characteristic 2. Math. Z., 151 (1976), 127-138.
+ [24] A. Wagner, The faithful linear representations of least degree of Sn and An over a field of odd characteristic. Math. Z., 154 (1977), 104-113.
+ Mallory Dolorfino, Kalamazoo College, Kalamazoo, Michigan, USA, mallory.dolorfino19@kzoo.edu
+ Luke Martin, Gonzaga University, Spokane, Washington, USA, lwmartin2019@gmail.com
+ Zachary Slonim, University of California, Berkeley, Berkeley, California, USA, zachslonim@berkeley.edu
+ Yuxuan Sun, Haverford College, Haverford, Pennsylvania, USA, ysun1@haverford.edu
+ Yong Yang, Texas State University, San Marcos, Texas, USA, yang@txstate.edu
+
ENE0T4oBgHgl3EQfywJf/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
EtAyT4oBgHgl3EQfSfdv/content/tmp_files/2301.00087v1.pdf.txt ADDED
@@ -0,0 +1,2084 @@
+ Mechanical feedback linearization of single-input
+ mechanical control systems
+ Marcin Nowicki1 and Witold Respondek2,3
+ 1Poznan University of Technology, Institute of Automatic Control
+ and Robotics, Piotrowo 3a, 61-138 Poznań, Poland
+ 2Lodz University of Technology, Institute of Automatic Control, B.
+ Stefanowskiego 18, 90-537 Lodz, Poland
+ 3INSA de Rouen Normandie, Laboratoire de Mathématiques,
+ 76801 Saint-Etienne-du-Rouvray, France
+ January 3, 2023
+ Abstract
+ We present a new type of feedback linearization that is tailored for mechanical control systems.
+ We call it a mechanical feedback linearization. Its basic feature is preservation of the mechanical
+ structure of the system. For mechanical systems with a scalar control, we formulate necessary and
+ sufficient conditions that are verifiable using differentiations and algebraic operations only. We
+ illustrate our results with several examples.
+ 1 Introduction
+ An N-dimensional control-affine system with a scalar control
+ ˙z = F(z) + G(z)u,    (Σ)
+ where z ∈ Z, an open subset of R^N, and u ∈ R, is said to be (locally) feedback
+ linearizable (F-linearizable) if there exist a (local) diffeomorphism Φ : Z → R^N
+ and an invertible feedback of the form u = α(z) + β(z)˜u such that the control
+ system (Σ), in the new coordinates ˜z = Φ(z) and with the new control ˜u, is a
+ controllable linear system of the form ˙˜z = A˜z + b˜u. A geometric solution to the
+ problem of feedback linearization (inspired by [1], and developed independently
+ in [2] and [3]) provides powerful techniques for designing closed-loop control
+ systems that have been used in numerous engineering applications. From a
+ theoretical point of view, that result identifies a class of nonlinear systems that
+ can be considered as linear ones in well-chosen coordinates and with respect
+ to a well-modified control.
+ arXiv:2301.00087v1 [math.OC] 31 Dec 2022
+
+ In this paper, we state and study the following fundamental question: if a
+ nonlinear control system (Σ) is mechanical and feedback linearizable, are those
+ two structures compatible? That is, can we feedback linearize the system
+ preserving its mechanical structure? For mechanical control systems, it is natural
+ to consider mechanical feedback equivalence (in particular, to a linear form)
+ under mechanical transformations (coordinate changes and feedback) that
+ preserve the mechanical structure of the system. In our recent paper [4], we showed
+ that even in the simplest underactuated case of 2 degrees of freedom, the
+ structures (linear and mechanical) may not conform trivially. In the present paper,
+ we treat the single-input case in its full generality.
+ There are several motivations for preserving the mechanical structure when
+ feedback linearizing the system. First, our formulation of the problem of
+ mechanical linearization preserves configurations and velocities. We reckon that it
+ is essential that new configurations (of the linearized system) are functions of the
+ original configurations only, as well as that new velocities are true physical
+ velocities (in contrast to pseudo-velocities). Therefore, we do not lose the physical
+ interpretation of the system. This could be useful, e.g. for mechanical systems with
+ constraints on configurations, which are transformed into linear constraints on
+ configurations. Second, the configuration trajectories are preserved too, which
+ could be useful in e.g. the motion planning problem (the most natural way to
+ state the problem for mechanical systems is to follow configuration trajectories).
+ Third, it is worth mentioning that mechanical feedback linearizability
+ guarantees the linearizing outputs to be functions of configurations only. This may
+ be of constructional importance because one needs only configuration sensors,
+ not those of velocities. The next argument is the fact that the resultant linear
+ mechanical system allows us to employ dedicated techniques for mechanical
+ systems. An example of such a technique is the natural frequency method of tuning
+ a linear feedback. Finally, when applying mechanical feedback linearization, the
+ physical interpretation of the external action (force, torque, etc.) is preserved,
+ but it is lost under general feedback linearization.
+ This work is a mechanical counterpart of the classical results on feedback
+ linearization of control systems [1], [2], [3], see also monographs [6], [7]. Our
+ intention is to formulate conditions for mechanical linearization (shortly,
+ MF-linearization) in a possibly similar manner (e.g. using involutivity of certain
+ distributions).
+ For a geometric approach to mechanical control systems see [5], [8], [9], [10].
+ For mathematical preliminaries concerning the Lie derivative, the Lie bracket,
+ distributions, etc., see [6], [7]. For linearization of mechanical control systems
+ along controlled trajectories see [11]. For mechanical state-space linearization of
+ mechanical control systems see [12] and [13]. Compare also [14], for a pioneering
+ work on (partial) feedback linearization of mechanical systems.
+ Although the state-space of a mechanical control system is the tangent bundle
+ TQ of the configuration space Q, we formulate our conditions using objects on
+ Q only. The key here is a geometric approach to mechanical systems [5] and
+ considering the Euler-Lagrange equations as the geodesic equation under an
+ influence of external forces.
+
+ The outline of the paper is as follows. In Section 2, we state the problem. In
+ Section 3, we develop further the problem of mechanical feedback linearization
+ and formulate the main result, separately, for mechanical systems with n ≥ 3
+ in Theorem 1, and with n = 2 in Theorem 2. In Section 4, we provide an
+ application of our results to MF-linearization of several mechanical systems.
+ Section 6 contains technical results used in proofs that could be of independent
+ interest.
+ 1.1 Notation
+ Throughout, the Einstein summation convention is assumed, i.e. any expression
+ containing a repeated index (upper and lower) implies summation over that
+ index up to n, e.g. ω_i X^i = Σ_{i=1}^{n} ω_i X^i.
+ A^T - transpose of a matrix (of a vector) A,
+ I_n - n × n identity matrix,
+ Q - configuration manifold,
+ X(Q) - the set of smooth vector fields on a manifold Q,
+ T_xQ - tangent space at x ∈ Q,
+ TQ = ∪_{x∈Q} T_xQ - tangent bundle of Q,
+ x = (x^1, . . . , x^n) - a local coordinate system on Q,
+ φ - a diffeomorphism of Q, and Φ a diffeomorphism of TQ,
+ Dφ = ∂φ/∂x - the Jacobian matrix of a diffeomorphism φ,
+ ∂˜x^i/∂x^j := ∂φ^i/∂x^j - the (i, j)-element of the Jacobian matrix Dφ,
+ ∂x^j/∂˜x^i - the (j, i)-element of the inverse of the Jacobian matrix Dφ,
+ L_X α - Lie derivative of a function α, defined as L_X α = (∂α/∂x^i) X^i,
+ [X, Y] = (∂Y/∂x) X − (∂X/∂x) Y = ad_X Y - Lie bracket of vector fields,
+ ∂/∂x^i - the i-th unity vector field, and dx^i the i-th unity covector field, in a
+ coordinate system x = (x^1, . . . , x^n),
+ E_i = span{ad_e^j g, 0 ≤ j ≤ i} - distribution on Q spanned by ad_e^j g,
+ ∇ - covariant derivative, and ∇^2 - second covariant derivative,
+ Γ^i_jk - Christoffel symbols of the second kind of ∇,
+ 2 Problem statement
+ Consider an n-dimensional configuration space Q (an open subset of R^n or, in
+ general, an n-dimensional manifold) equipped with a symmetric affine connection
+ ∇. The operator of the affine connection ∇ allows us to define intrinsically the
+ acceleration as the covariant derivative ∇_˙x(t) ˙x(t), see e.g. [5, 8, 17]. The covariant
+ derivative ∇ : X(Q) × X(Q) → X(Q) of an arbitrary vector field Y = Y^i ∂/∂x^i
+ with respect to X = X^i ∂/∂x^i in coordinates reads
+ ∇_X Y = (∂Y^i/∂x^j X^j + Γ^i_jk X^j Y^k) ∂/∂x^i.    (1)
+
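Formula (1) is straightforward to evaluate symbolically. The sketch below uses sympy with the Christoffel symbols of the flat connection in polar coordinates on R^2, an illustrative choice that does not come from the paper:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
n = 2

# Christoffel symbols Gamma[i][j][k] = Gamma^i_{jk}: the flat
# connection written in polar coordinates (illustrative example)
Gamma = [[[sp.Integer(0)]*n for _ in range(n)] for _ in range(n)]
Gamma[0][1][1] = -r          # Gamma^r_{theta theta} = -r
Gamma[1][0][1] = 1/r         # Gamma^theta_{r theta} = 1/r
Gamma[1][1][0] = 1/r         # Gamma^theta_{theta r} = 1/r

def covariant_derivative(X, Y):
    # Formula (1): (nabla_X Y)^i = (dY^i/dx^j) X^j + Gamma^i_{jk} X^j Y^k
    return [sp.simplify(
        sum(sp.diff(Y[i], x[j]) * X[j] for j in range(n))
        + sum(Gamma[i][j][k] * X[j] * Y[k]
              for j in range(n) for k in range(n)))
        for i in range(n)]

# nabla_{d/dtheta} d/dtheta has components (-r, 0)
print(covariant_derivative([0, 1], [0, 1]))
```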
169
A mechanical control system (MS) is a 4-tuple (Q, ∇, g, e), where g and e are, respectively, controlled and uncontrolled vector fields on Q. A curve x(t) : I → Q, I ⊂ R, is a trajectory of (MS) if it satisfies the following equation

∇_{ẋ(t)} ẋ(t) = e(x(t)) + g(x(t)) u,   (2)

which can be viewed as an equation that balances the accelerations of the system: the left-hand side represents geometric accelerations (i.e. accelerations caused by the geometry of the system) and the right-hand side represents accelerations caused by external actions on the system (controlled or not). Notice that (2) is a second-order differential equation on Q (indeed, using (1) we conclude that ∇_ẋ ẋ depends on ẍ, see [5] for details) and can be rewritten as a system of first-order differential equations on TQ, which we also call a mechanical control system (MS):

˙x^i = y^i
˙y^i = −Γ^i_{jk}(x) y^j y^k + e^i(x) + g^i(x) u,   (MS)

for 1 ≤ i ≤ n, where (x, y) = (x^1, ..., x^n, y^1, ..., y^n) are local coordinates on the tangent bundle TQ of the configuration manifold Q, and Γ^i_{jk}(x) are the Christoffel symbols of the affine connection ∇ that correspond to the Coriolis and centrifugal forces. The vector fields e(x) = (e^1(x), ..., e^n(x))^T and g(x) = (g^1(x), ..., g^n(x))^T correspond to, respectively, uncontrolled and controlled actions on the system. Throughout, all objects are assumed to be smooth and the word smooth means C^∞-smooth.
Our obvious inspirations are Lagrangian mechanical control systems without dissipative forces. For the correspondence between (MS) and the Lagrangian equations of dynamics see [5], [8], [9] and our recent papers [13], [16]. However, we will consider throughout a more general class of mechanical control systems, allowing for any symmetric (not necessarily metric) connection and any (not necessarily potential) vector field e(x).
Consider the group of mechanical feedback transformations G_MF generated by the following transformations:

(i) changes of coordinates in TQ given by Φ : TQ → TQ̃,

(x, y) ↦ (x̃, ỹ) = Φ(x, y) = (φ(x), ∂φ/∂x(x) y),   (3)

called a mechanical diffeomorphism, where φ : Q → Q̃ is a diffeomorphism and ∂φ/∂x its Jacobian matrix,

(ii) mechanical feedback transformations, denoted (α, β, γ), of the form

u = γ_{jk}(x) y^j y^k + α(x) + β(x) ũ,   (4)

where γ_{jk}, α, β are smooth functions on Q satisfying γ_{jk} = γ_{kj} and β(·) ≠ 0. The matrix γ = (γ_{jk}) represents a (0,2)-tensor field.
Even if the diffeomorphism φ is possibly local on Q, the action of ∂φ/∂x(x) is always global on the fibers T_x Q.

Definition 1. The system (MS) is MF-linearizable if there exist mechanical feedback transformations (Φ, α, β, γ) ∈ G_MF bringing (MS) into a linear controllable mechanical system of the form

˙x̃^i = ỹ^i
˙ỹ^i = E^i_j x̃^j + b^i ũ,   (LMS)

where (x̃, ỹ) are coordinates on TR^n = R^n × R^n, the matrix E = (E^i_j) is an n × n real-valued matrix, the vector field b = b^i ∂/∂x̃^i is constant, and the pair (E, b) is controllable (see [15]).
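Controllability of the pair (E, b) can be checked with the Kalman rank condition. Below is a minimal numerical sketch; the data (the n = 2 chain-of-integrators pair corresponding to ẽ = (0, x̃^1)^T, g̃ = (1, 0)^T used later in the proofs) are illustrative, not taken from [15].

```python
import numpy as np

# Kalman rank test for the pair (E, b) of (LMS)
E = np.array([[0.0, 0.0],
              [1.0, 0.0]])   # linear drift e = x^1 d/dx^2
b = np.array([1.0, 0.0])     # constant controlled field g = d/dx^1
n = E.shape[0]
# controllability matrix [b, Eb, ..., E^{n-1} b]
C = np.column_stack([np.linalg.matrix_power(E, i) @ b for i in range(n)])
print(np.linalg.matrix_rank(C))   # 2, so (E, b) is controllable
```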
Represent (MS) as ż = F(z) + G(z)u, where z = (x, y) ∈ TQ,

F = y^i ∂/∂x^i + (−Γ^i_{jk}(x) y^j y^k + e^i(x)) ∂/∂y^i,  G = g^i(x) ∂/∂y^i.

The problem that we formulate and solve in the paper is whether (MS) is MF-linearizable. That is, do there exist Φ = (x̃, ỹ) = (φ, ∂φ/∂x y) and (α, β, γ) such that

∂Φ/∂z(z) (F + G(y^T γ y + α))(z) = (ỹ, Ex̃)^T,  ∂Φ/∂z(z) (Gβ)(z) = (0, b)^T ?
Note that MF-linearizability is stronger than the classical feedback linearizability since, for the latter, Φ : TQ → R^{2n} can be any diffeomorphism (it need not be of the mechanical form (3)), y^T γ(x) y + α(x) can be replaced by any function α(x, y) on TQ, and β(x) by any invertible function β(x, y) on TQ.
If we neglect the mechanical structure of ż = F(z) + G(z)u and consider it as a general control system, we can ask if the system is F-linearizable. The well-known answer [2,3] asserts that, locally, this is the case if and only if the distributions D_i = span{ad^j_F G, 0 ≤ j ≤ i} are involutive and of constant rank for i = 0, ..., 2n − 1 and D_{2n−1} = T(TQ). The natural question arises whether, for an F-linearizable (MS), the general feedback transformations (Φ(z), α(z), β(z)) are mechanical (i.e. of the form (3) and (4)) or whether they can be replaced by mechanical ones.
Example 1: Consider the mechanical system

˙x^1 = y^1
˙x^2 = y^2
˙y^1 = −x^2 (y^1)^2 + x^2
˙y^2 = u,   (5)

on R^4. This system is locally F-linearizable. Indeed, the local diffeomorphism z̃ = Φ(z), where z = (x^1, x^2, y^1, y^2), z̃ = (x̃^1, x̃^2, ỹ^1, ỹ^2), given by

x̃^1 = x^1
x̃^2 = x^2 − x^2 (y^1)^2
ỹ^1 = y^1
ỹ^2 = ((y^1)^2 − 1)(2(x^2)^2 y^1 − y^2),

together with the feedback u = 2(x^2)^3 + 6(x^2 − (x^2)^2)(y^1)^2 + ũ/((y^1)^2 − 1), renders the original system linear and controllable:

˙x̃^1 = ỹ^1
˙x̃^2 = ỹ^2
˙ỹ^1 = x̃^2
˙ỹ^2 = ũ.
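These claims can be verified symbolically. The snippet below is our own check (sympy, not part of the original text): it differentiates the proposed coordinates along the dynamics (5) and shows that the coefficient of u in ˙ỹ^2 is the velocity-dependent factor 1 − (y^1)^2, which is why the linearizing feedback involves velocities.

```python
import sympy as sp

x1, x2, y1, y2, u = sp.symbols('x1 x2 y1 y2 u')

# dynamics of system (5)
f = {x1: y1, x2: y2, y1: -x2*y1**2 + x2, y2: u}

def ddt(expr):
    # time derivative along trajectories of (5)
    return sum(sp.diff(expr, v)*f[v] for v in f)

# the (non-mechanical) linearizing coordinates
tx2 = x2 - x2*y1**2
ty2 = (y1**2 - 1)*(2*x2**2*y1 - y2)

assert sp.simplify(ddt(tx2) - ty2) == 0      # d(tx2)/dt = ty2
assert sp.simplify(ddt(y1) - tx2) == 0       # d(ty1)/dt = tx2
# d(ty2)/dt is affine in u; its u-coefficient depends on the velocity y1
print(sp.simplify(sp.diff(ddt(ty2), u)))     # equals 1 - y1**2
```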
Therefore, the system is F-linearizable. Note, however, that neither the change of coordinates nor the feedback is mechanical (x̃^2 depends on velocities, and the function β depends on velocities as well), so the mechanical structure is not preserved. Our question is whether this system can be linearized by other transformations that preserve the mechanical structure, i.e. can it be MF-linearized?
The group of mechanical transformations G_MF = {(Φ, α, β, γ)} preserves trajectories, that is, it maps the trajectories of (MS) into those of its MF-equivalent system (M̃S). Indeed, if z(t, z_0, u(t)) is a trajectory of (MS) (passing through z_0 = (x_0, y_0) and corresponding to a control u(t)), then z̃(t, z̃_0, ũ(t)) = Φ(z(t, z_0, u(t))) is a trajectory of (M̃S) passing through z̃_0 = Φ(z_0) = (φ(x_0), ∂φ/∂x(x_0) y_0) and corresponding to ũ(t), where u(t) = y(t)^T γ(x(t)) y(t) + α(x(t)) + β(x(t)) ũ(t). Moreover, via φ : Q → Q̃, it establishes a correspondence between configuration trajectories in Q and Q̃, i.e. x̃(t, z̃_0, ũ(t)) = φ(x(t, z_0, u(t))), making the following diagram commutative (notice, however, that π(z(t, z_0, u)) = x(t, z_0, u) depends on z_0 = (x_0, y_0), i.e. on an initial configuration x_0 and initial velocity y_0):

z(t, z_0, u)  --(Φ,α,β,γ)-->  z̃(t, z̃_0, ũ)
     |π                            |π
x(t, z_0, u)  --(φ,α,β,γ)-->  x̃(t, z̃_0, ũ)

where π : TQ → Q, π(z) = π(x, y) = x, is the canonical projection which assigns to the pair (x, y) the point x at which the velocity y is attached.
3  Mechanical feedback linearization

Our main result uses two basic ingredients: the covariant derivative of the connection ∇, see (1), and the involutivity of suitable distributions. We will also need the second covariant derivative of a vector field Z in the directions (X, Y), which is a mapping

∇^2 : X(Q) × X(Q) × X(Q) → X(Q),
∇^2_{X,Y} Z = ∇_X ∇_Y Z − ∇_{∇_X Y} Z.   (6)

For properties of the second covariant derivative see Lemma 1 in the Appendix.

In order to formulate the result, we associate with (MS) the following sequence of nested distributions E^0 ⊂ E^1 ⊂ E^2 ⊂ ... ⊂ E^i ⊂ ... ⊂ TQ, where

E^0 = span{g},  E^i = span{ad^j_e g, 0 ≤ j ≤ i}.
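The distributions E^i are built from iterated Lie brackets, which can be computed symbolically. As a hedged sketch (our own sympy code, using the vector fields of Example (5) above), one can compute ad_e g and the rank of E^1 directly:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = (x1, x2)

def lie_bracket(f, g):
    # [f, g]^i = (dg^i/dx^j) f^j - (df^i/dx^j) g^j
    return [sum(sp.diff(g[i], X[j])*f[j] - sp.diff(f[i], X[j])*g[j]
                for j in range(2)) for i in range(2)]

# vector fields of Example (5): e = x^2 d/dx^1, g = d/dx^2
e = [x2, sp.Integer(0)]
g = [sp.Integer(0), sp.Integer(1)]

adeg = lie_bracket(e, g)
print(adeg)                            # [-1, 0], i.e. ad_e g = -d/dx^1
print(sp.Matrix([g, adeg]).rank())     # rank E^1 = 2
```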
Remark 1. To analyze the behavior of the distributions E^i under mechanical feedback transformations (α, β, γ) notice, first, that the E^i are invariant under γ since γ does not act on them. If the distributions E^i are involutive, then they are invariant under feedback transformations of the form (α, β, 0), i.e. for γ = 0 they remain unchanged if we replace e and g by, respectively, e + gα and βg, cf. [6], [7].
Now we formulate our main result for MF-linearization. First, we state a theorem for (MS) with n ≥ 3 degrees of freedom. The remaining case of n = 2 degrees of freedom is treated in Theorem 2. For an explanation of that distinction, see the comment before Theorem 2, and see Remark 3 for a comparison of both results.

By a local MF-linearization around x_0 ∈ Q we mean that it holds on ⋃_{x∈O} T_x Q, where O is a neighborhood of x_0; recall that all transformations are global on the tangent spaces T_x Q.
Theorem 1. Assume n ≥ 3. A mechanical control system (MS) is, locally around x_0, MF-linearizable to a controllable (LMS) if and only if

(MF1) rank E^{n−1} = n,
(MF2) E^i is involutive and of constant rank, for 0 ≤ i ≤ n − 2,
(MF3) ∇_{ad^i_e g} g ∈ E^0 for 0 ≤ i ≤ n − 1,
(MF4) ∇^2_{ad^k_e g, ad^j_e g} e ∈ E^1 for 0 ≤ k, j ≤ n − 1.
+ and u = α(x)+β(x)˜u. The remaining two, (MF3)-(MF4), can be interpreted as
394
+ compatibility conditions that guarantee vanishing the Christoffel symbols Γi
395
+ jk in
396
+ the linearizing coordinates ˜x = φ(x), except for those that can be compensated
397
+ by feedback u = γjk(x)yjyk + ˜u.
398
Proof. In the proof we will use two Lemmata, 1 and 2, given in the Appendix, that are of independent interest.
Necessity. For (LMS), we have Γ^i_{jk} = 0, e = Ex and g = b. It follows that ad^i_e g = (−1)^i E^i b and therefore, using the definitions of ∇, given by (1), and of ∇^2, given by (6), we calculate

∇_{ad^i_e g} ad^j_e g = 0,  ∇^2_{ad^k_e g, ad^j_e g} e = 0,   (7)

which implies that (MF1)-(MF4) hold for (LMS) (in particular, (MF1) holds because (LMS) is assumed controllable). To prove necessity of (MF1)-(MF4), we will show that they are MF-invariant. All conditions (MF1)-(MF4) are expressed in a geometric way, therefore they are invariant under diffeomorphisms. The conditions (MF1) and (MF2) are mechanical feedback invariant, see Remark 1. It remains to show that (MF3) and (MF4) are invariant under the mechanical feedback u = γ_{jk}(x) y^j y^k + α(x) + β(x)ũ. For the closed-loop system, denoted by "∼", the Christoffel symbols Γ̃^i_{jk} of ∇̃, as well as ẽ and g̃, are, respectively, given by

Γ̃^i_{jk} = Γ^i_{jk} − g^i γ_{jk},  ẽ = e + gα,  g̃ = gβ.   (8)

For any X, Y ∈ X(Q), we have ∇̃_X Y = ∇_X Y − γ(X, Y) g = ∇_X Y mod E^0, where γ(X, Y) = γ_{jk} X^j Y^k ∈ C^∞(Q), therefore

∇̃_{ad^i_ẽ g̃} g̃ = ∇_{ad^i_ẽ g̃} g̃ − γ(ad^i_ẽ g̃, g̃) g = ∇_{ad^i_ẽ g̃} g̃  mod E^0.

By ∇_X g̃ = ∇_X(gβ) = β ∇_X g + (L_X β) g, it follows that instead of calculating ∇_{ad^i_ẽ g̃} g̃ it is enough to calculate ∇_{ad^i_ẽ g̃} g, since the second term (L_X β) g ∈ E^0. For i = 0, we have ∇_g̃ g = ∇_{gβ} g = β ∇_g g ∈ E^0. It is easy to show that for any 1 ≤ j ≤ n − 1 we have

ad^j_ẽ g̃ = β ad^j_e g + d_{j−1},   (9)

where d_{j−1} ∈ E^{j−1}. Assume ∇_{ad^l_ẽ g̃} g ∈ E^0, for 0 ≤ l ≤ i − 1. Then, by formula (9), ∇_{ad^i_ẽ g̃} g = β ∇_{ad^i_e g} g + ∇_{d_{i−1}} g ∈ E^0, because the first term is in E^0 by (MF3) and the second by the induction assumption. We have thus proved necessity of (MF3).
To show necessity of (MF4), using Lemma 1, calculate

∇̃^2_{X,Y} Z = ∇̃_X ∇̃_Y Z − ∇̃_{∇̃_X Y} Z
            = ∇̃_X(∇_Y Z − γ(Y, Z) g) − ∇̃_{(∇_X Y − γ(X,Y) g)} Z
            = ∇^2_{X,Y} Z − γ(Y, Z) ∇_X g + γ(X, Y) ∇_g Z  mod E^0.   (10)

By the above formula, we get

∇̃^2_{ad^k_ẽ g̃, ad^j_ẽ g̃} ẽ = ∇^2_{ad^k_ẽ g̃, ad^j_ẽ g̃} ẽ − γ(ad^j_ẽ g̃, ẽ) ∇_{ad^k_ẽ g̃} g + γ(ad^k_ẽ g̃, ad^j_ẽ g̃) ∇_g ẽ  mod E^0.

The second term on the right-hand side is in E^0 (by (MF3) and its invariance), while the third term is a function multiplying

∇_g ẽ = ∇_g(e + gα) = ∇_g e + α ∇_g g + (L_g α) g ∈ E^1,

since for (LMS) we have ∇_g e = −ad_e g = Eb ∈ E^1. The first term ∇^2_{ad^k_ẽ g̃, ad^j_ẽ g̃} ẽ is, by (9) and Lemma 1(i), a linear combination with smooth coefficients of the ∇^2_{ad^i_e g, ad^l_e g} ẽ, with 0 ≤ i ≤ k and 0 ≤ l ≤ j. Thus we calculate ∇^2_{ad^i_e g, ad^l_e g} ẽ = ∇^2_{ad^i_e g, ad^l_e g} e + ∇^2_{ad^i_e g, ad^l_e g}(gα). The first term vanishes since (7) holds for (LMS). We calculate the second term using Lemma 1(iii), and we have

∇^2_{ad^i_e g, ad^l_e g}(gα) = α ∇^2_{ad^i_e g, ad^l_e g} g + (L_{ad^i_e g} α) ∇_{ad^l_e g} g + (L_{ad^l_e g} α) ∇_{ad^i_e g} g + (∇^2_{ad^i_e g, ad^l_e g} α) g ∈ E^0,

because the first three terms vanish, due to (7), and the last one is in E^0. Summarizing the above calculations, we conclude that ∇̃^2_{ad^k_ẽ g̃, ad^j_ẽ g̃} ẽ ∈ E^1 = Ẽ^1, which proves necessity of (MF4).
Sufficiency. We will transform the system (MS), satisfying (MF1)-(MF4), into (LMS) in two steps. In the first step, we will normalize the vector fields e and g and show that condition (MF4) implies the vanishing of some of the Christoffel symbols Γ^i_{jk}, which exhibit a triangular form in the normalizing coordinates. In the second step, we compensate the remaining Christoffel symbols.

By conditions (MF1)-(MF2), there exists a function h satisfying L_{ad^j_e g} h = 0, for 0 ≤ j ≤ n − 2, and L_{ad^{n−1}_e g} h ≠ 0, and thus (x̃, ỹ) = (φ(x), ∂φ/∂x(x) y) is a local mechanical diffeomorphism, where φ(x) = (L^{n−1}_e h, ..., L_e h, h)^T, that can be completed by a feedback transformation (α, β, 0) that maps, respectively, βg into g̃ = (1, 0, ..., 0)^T, e + gα into ẽ = (0, x̃^1, ..., x̃^{n−1})^T, and Γ^i_{jk} into Γ̃^i_{jk}, see the classical results on feedback linearization [2], [6], [7]. Then (Φ, α, β, γ) ∈ G_MF, where (x̃, ỹ) = Φ(x, y) = (φ(x), ∂φ/∂x(x) y) with φ, α, β just defined and γ_{jk} = Γ̃^1_{jk}(x̃), brings (MS) into (we drop the "tildes" for readability)

˙x^1 = y^1    ˙y^1 = u
˙x^i = y^i    ˙y^i = −Γ^i_{jk} y^j y^k + x^{i−1},  2 ≤ i ≤ n,   (11)

to which Lemma 2 applies.
We will show that the Christoffel symbols Γ^i_{jk} of (11) satisfy

Γ^i_{kj} = 0  for 1 ≤ k ≤ n − 1, 1 ≤ j ≤ i ≤ n,
Γ^i_{nj} = 0  for 1 ≤ j < i ≤ n,  and  Γ^i_{ni} = λ(x^n)  for 2 ≤ i ≤ n.   (12)

For system (11), we have ad^{k−1}_e g = (−1)^{k−1} ∂/∂x^k and, in particular, g = ∂/∂x^1. Calculate

∇_{ad^{k−1}_e g} g = (−1)^{k−1} ∇_{∂/∂x^k} g^i ∂/∂x^i = (−1)^{k−1} ∇_{∂/∂x^k} ∂/∂x^1 = (−1)^{k−1} Γ^i_{k1} ∂/∂x^i.

It follows that Γ^i_{k1} = Γ^i_{1k} = 0, for 2 ≤ i ≤ n by (MF3), and for i = 1 by the above form.
Rewrite (MF4) as ∇^2_{ad^{k−1}_e g, ad^{j−1}_e g} e = 0 mod E^1, for 1 ≤ j, k ≤ n, and apply it successively for j = 1, ..., n and for all 1 ≤ k ≤ n. For j = 1, first calculate

∇_g e = ∇_{∂/∂x^1} e = ∂/∂x^2 + Γ^i_{1s} e^s ∂/∂x^i = ∂/∂x^2

and then

∇_{ad^{k−1}_e g}(∇_g e) = (−1)^{k−1} ∇_{∂/∂x^k} ∂/∂x^2 = (−1)^{k−1} Γ^i_{k2} ∂/∂x^i.

On the other hand, ∇_{ad^{k−1}_e g} g = (−1)^{k−1} ∇_{∂/∂x^k} ∂/∂x^1 = (−1)^{k−1} Γ^1_{k1} ∂/∂x^1 = 0 and hence ∇_{∇_{ad^{k−1}_e g} g} e = 0. Thus, by (6),

∇^2_{ad^{k−1}_e g, g} e = ∇_{ad^{k−1}_e g}(∇_g e) − ∇_{∇_{ad^{k−1}_e g} g} e = (−1)^{k−1} Γ^i_{k2} ∂/∂x^i = 0  mod E^1,

implying that Γ^i_{k2} = Γ^i_{2k} = 0 for any 3 ≤ i ≤ n.
For j = 2, calculate

∇_{ad_e g} e = −∇_{∂/∂x^2} e = −(∂/∂x^3 + Γ^i_{2s} e^s ∂/∂x^i) = −∂/∂x^3 − d,

where d = d^1(x) ∂/∂x^1 + d^2(x) ∂/∂x^2 ∈ E^1, and then

∇_{ad^{k−1}_e g}(∇_{ad_e g} e) = (−1)^k ∇_{∂/∂x^k}(∂/∂x^3 + d)
  = (−1)^k (Γ^i_{k3} + Γ^i_{k1} d^1 + Γ^i_{k2} d^2) ∂/∂x^i
  = (−1)^k Γ^i_{k3} ∂/∂x^i  mod E^1.

On the other hand,

∇_{ad^{k−1}_e g} ad_e g = (−1)^k ∇_{∂/∂x^k} ∂/∂x^2 = (−1)^k Γ^i_{k2} ∂/∂x^i = (−1)^k (Γ^1_{k2} ∂/∂x^1 + Γ^2_{k2} ∂/∂x^2)

and ∇_{∇_{ad^{k−1}_e g} ad_e g} e = (−1)^k Γ^2_{k2} ∂/∂x^3 mod E^1. It follows that, modulo E^1,

∇^2_{ad^{k−1}_e g, ad_e g} e = (−1)^k ( Σ_{i=4}^{n} Γ^i_{k3} ∂/∂x^i + (Γ^3_{k3} − Γ^2_{k2}) ∂/∂x^3 ),

and, using (MF4), we conclude that Γ^i_{k3} = Γ^i_{3k} = 0 for any 4 ≤ i ≤ n and Γ^3_{k3} = Γ^2_{k2}.
Following the same line (with a more tedious calculation), one can prove the general induction step. Namely, assuming, for a fixed j,

Γ^j_{kj} = Γ^{j−1}_{k,j−1},  Γ^i_{ks} = Γ^i_{sk} = 0  for s + 1 ≤ i ≤ n, 1 ≤ s ≤ j,   (13)

one shows, by calculating ∇^2_{ad^{k−1}_e g, ad^{j−1}_e g} e with the help of (24) of Lemma 2, that

Γ^{j+1}_{k,j+1} = Γ^j_{kj},  Γ^i_{k,j+1} = 0  for j + 2 ≤ i ≤ n,

and thus, by the induction assumption and the symmetry of the Christoffel symbols,

Γ^i_{ks} = Γ^i_{sk} = 0  for s + 1 ≤ i ≤ n, 1 ≤ s ≤ j + 1.   (14)

It follows that for each 1 ≤ k ≤ n the matrices consisting of the Christoffel symbols (Γ^i_{kj}), for 2 ≤ i, j ≤ n, are upper triangular. By the induction argument, (13) holds for all 2 ≤ j ≤ n and implies, for any 1 ≤ k ≤ n − 1,

Γ^2_{k2} = ... = Γ^{n−1}_{k,n−1} = Γ^n_{kn} = 0,

since Γ^n_{kn} = Γ^n_{nk} = 0 (as n > k). On the other hand, for k = n, (13) implies

Γ^2_{n2} = ... = Γ^{n−1}_{n,n−1} = Γ^n_{nn} = λ(x)

for a function λ(x).
Therefore for each 1 ≤ k ≤ n − 1 the matrices (Γ^i_{kj}), for 2 ≤ i, j ≤ n, are strictly upper triangular, and the last one, for k = n, is upper triangular with all diagonal elements equal to each other, which we denote by λ(x). The matrices read

             ( 0   Γ^2_{k3}   Γ^2_{k4}  ...  Γ^2_{k,n−1}   Γ^2_{kn}     )
             ( 0      0       Γ^3_{k4}  ...  Γ^3_{k,n−1}   Γ^3_{kn}     )
(Γ^i_{kj}) = (                   ...                                     )
             ( 0      0          0      ...      0         Γ^{n−1}_{kn} )
             ( 0      0          0      ...      0             0        )

for 1 ≤ k ≤ n − 1, and

             ( λ   Γ^2_{n3}   Γ^2_{n4}  ...  Γ^2_{n,n−1}   Γ^2_{nn}     )
             ( 0      λ       Γ^3_{n4}  ...  Γ^3_{n,n−1}   Γ^3_{nn}     )
(Γ^i_{nj}) = (                   ...                                     )
             ( 0      0          0      ...      λ         Γ^{n−1}_{nn} )
             ( 0      0          0      ...      0             λ        )

and are thus of the desired triangular structure (12); it remains to prove that λ = λ(x^n). Note that in the above matrices we skip the first row Γ^1_{kj} and the first column Γ^i_{k1}. This is due to the fact that Γ^1_{kj} = 0 (which can always be achieved by a suitable feedback transformation) and Γ^i_{k1} = 0 by (MF3).
Notice that we have E^{n−2} = span{∂/∂x^1, ..., ∂/∂x^{n−1}}, and thus applying (24) of Lemma 2, for j = n and any 1 ≤ k ≤ n, we conclude (setting Γ^n_{k,n+1} = 0)

(−1)^{n+k−2} ∇^2_{ad^{k−1}_e g, ad^{n−1}_e g} e = ∇^2_{∂/∂x^k, ∂/∂x^n} e
  = (∂Γ^n_{ns}/∂x^k e^s + Γ^n_{n,k+1} + Γ^n_{k,n+1} − Γ^{n−1}_{kn} + (Γ^d_{ns} Γ^n_{kd} − Γ^d_{kn} Γ^n_{ds}) e^s) ∂/∂x^n  mod E^{n−2}
  = (∂λ/∂x^k e^n + Γ^n_{n,k+1} − Γ^{n−1}_{kn}) ∂/∂x^n  mod E^{n−2},   (15)

since, due to the triangular structure (14), Γ^n_{ns} = 0 except for s = n, giving Γ^n_{nn} = λ, and, moreover, the equality Γ^d_{ns} Γ^n_{kd} − Γ^d_{kn} Γ^n_{ds} = 0 holds. Indeed, in the latter, Γ^n_{kd} = 0 except for d = k = n, giving Γ^n_{ns} Γ^n_{nn} − Γ^n_{nn} Γ^n_{ns} = 0, and Γ^n_{ds} = 0 except for d = s = n, giving Γ^n_{nn} Γ^n_{kn} − Γ^n_{kn} Γ^n_{nn} = 0.
For (15) we will apply (MF4) in three cases. First, if 1 ≤ k ≤ n − 2, then, modulo E^{n−2}, we have

(∂λ/∂x^k e^n + Γ^n_{n,k+1} − Γ^{n−1}_{kn}) ∂/∂x^n = (∂λ/∂x^k x^{n−1}) ∂/∂x^n = 0,

since all Γ^n_{n,k+1} = 0 and all Γ^{n−1}_{kn} = 0 by (14) and k ≤ n − 2. Second, for k = n − 1, we have, modulo E^{n−2},

(∂λ/∂x^{n−1} e^n + Γ^n_{nn} − Γ^{n−1}_{n−1,n}) ∂/∂x^n = (∂λ/∂x^{n−1} e^n + λ − λ) ∂/∂x^n = (∂λ/∂x^{n−1} x^{n−1}) ∂/∂x^n = 0.

Therefore ∂λ/∂x^k = 0, for 1 ≤ k ≤ n − 1, implying that λ is a function of the last variable x^n only, i.e. λ = λ(x^n), which gives the system in the desired form (12). Third, for k = n, we have, modulo E^{n−2},

(∂λ/∂x^n e^n + Γ^n_{n,n+1} − Γ^{n−1}_{nn}) ∂/∂x^n = (∂λ/∂x^n x^{n−1} − Γ^{n−1}_{nn}) ∂/∂x^n = 0,

implying that Γ^{n−1}_{nn} = L_e λ, since (∂λ(x^n)/∂x^n) x^{n−1} = L_e λ.
+ ∂xn xn−1 = Leλ.
1056
+ Now, transform system (11), satisfying (12), via the local mechanical diffeo-
1057
+ morphism Φ : TQ → T ¯Q
1058
+ ¯x = φ(x)
1059
+ ¯y = Dφ(x)y,
1060
+ where
1061
+ φ(x) =
1062
+
1063
+ Ln−1
1064
+ e
1065
+ h, . . . , Leh, h
1066
+ �T ,
1067
+ (16)
1068
+ with h(xn) =
1069
+ � xn
1070
+ 0
1071
+ Λ(s2)ds2, where Λ(s2) = exp
1072
+ �� s2
1073
+ 0 λ(s1)ds1
1074
+
1075
+ .
1076
+ Denote by ¯Γi
1077
+ jk, ¯e, ¯g the objects of the system expressed in coordinates ¯x =
1078
+ φ(x). Applying feedback ¯u = −¯Γ1
1079
+ jk¯yj ¯yk + Ln
1080
+ e h + uLgLn−1
1081
+ e
1082
+ h, the transformed
1083
+ system becomes
1084
+ ˙¯x1 = ¯y1
1085
+ ˙¯xi = ¯yi
1086
+ ˙¯y1 = ¯u
1087
+ ˙¯yi = −¯Γi
1088
+ jk¯yj ¯yk + ¯xi−1,
1089
+ 2 ≤ i ≤ n,
1090
+ (17)
1091
whose vector fields are ē = x̄^{i−1} ∂/∂x̄^i, where x̄^0 = 0, and ḡ = ∂/∂x̄^1. The transformed system (17) is still of the form (11) and, at the moment, we ignore how the Γ^i_{jk} have been changed into the Γ̄^i_{jk}. Below we will prove that all Γ̄^i_{jk} vanish. To this end, we first calculate explicitly the time-evolution of the pair (x̄^n, ȳ^n):

˙x̄^n = d/dt h(x^n) = Λ(x^n) ẋ^n = Λ(x^n) y^n = ȳ^n
˙ȳ^n = d/dt (Λ(x^n) y^n) = Λ(x^n) λ(x^n) ẋ^n y^n + Λ(x^n) ẏ^n
     = Λ(x^n) λ(x^n) y^n y^n + Λ(x^n)(−Γ^n_{nn}(x^n) y^n y^n + x^{n−1})
     = Λ(x^n) x^{n−1} = x̄^{n−1},

since x̄^{n−1} = L_e h = Λ(x^n) x^{n−1} and Γ^n_{nn} = λ. It follows that Γ̄^n_{jk} = 0, for all 1 ≤ k, j ≤ n.
For the transformed system (17), we rewrite (24), adding "bars", as

∇^2_{ad^{k−1}_ē ḡ, ad^{j−1}_ē ḡ} ē = (−1)^{j+k} (∂Γ̄^i_{js}/∂x̄^k ē^s + Γ̄^i_{j,k+1} + Γ̄^i_{k,j+1} + (Γ̄^d_{js} Γ̄^i_{kd} − Γ̄^d_{kj} Γ̄^i_{ds}) ē^s − Γ̄^{i−1}_{kj}) ∂/∂x̄^i   (18)

and, by (MF4), we have

∇^2_{ad^{k−1}_ē ḡ, ad^{j−1}_ē ḡ} ē = (−1)^{j+k} ā^n_{kj}(x̄) ∂/∂x̄^n = 0  mod E^{n−2},

where ā^n_{kj}(x̄) = ∂Γ̄^n_{js}/∂x̄^k ē^s + Γ̄^n_{j,k+1} + Γ̄^n_{k,j+1} + (Γ̄^d_{js} Γ̄^n_{kd} − Γ̄^d_{kj} Γ̄^n_{ds}) ē^s − Γ̄^{n−1}_{kj}, which implies (since Γ̄^n_{kj} = 0, for 1 ≤ j, k ≤ n) that ā^n_{kj}(x̄) = −Γ̄^{n−1}_{kj} = 0. Now assume Γ̄^i_{kj} = 0 for a certain 1 ≤ i ≤ n − 1 and any 1 ≤ j, k ≤ n. Then (18) and (MF4) imply Γ̄^{i−1}_{kj} = 0. Therefore we have proved that all Christoffel symbols of (17) vanish and thus the system is a linear controllable (LMS), since the vector field ē = x̄^{i−1} ∂/∂x̄^i is linear and ḡ = ∂/∂x̄^1 is constant.
The above theorem does not work for systems with 2 degrees of freedom, i.e. for n = 2, as that case is too restrictive for involutivity, see Remark 3 below. Therefore we state the following theorem for the MF-linearization of (MS) with 2 degrees of freedom.
Theorem 2. A mechanical system (MS) with 2 degrees of freedom is, locally around x_0, MF-linearizable to a controllable linear (LMS) if and only if it satisfies, in a neighborhood of x_0,

(MF1)' g and ad_e g are independent at x_0,
(MF3)' ∇_g g ∈ E^0 and ∇_{ad_e g} g ∈ E^0,
(MF5)' ∇^2_{g, ad_e g} ad_e g − ∇^2_{ad_e g, g} ad_e g ∈ E^0.

Remark 3. If n = 2, then E^0 is of rank 1, thus involutive, and (MF2) is trivially satisfied, and so is (MF4) because E^1 = TQ (cf. Theorem 1). Therefore (MF2)' and (MF4)' are absent and are replaced by (MF5)', which guarantees that we can compensate the Christoffel symbols (as do (MF3)-(MF4) for n ≥ 3).
Proof. Necessity. Note that (MF1)' is equivalent to (MF1) and (MF3)' is (MF3) of Theorem 1. Although Theorem 1 applies to n ≥ 3, the necessity part of its proof remains valid for any n ≥ 2, so it shows necessity of (MF1)'-(MF3)'. Therefore we need to show necessity of (MF5)'. For a controllable (LMS) we have Γ^i_{jk} = 0, g = b and ad_e g = −Eb are independent, and

∇_{ad^i_e g} ad^j_e g = 0,  ∇^2_{ad^j_e g, ad^k_e g} ad^i_e g = 0,  [ad^j_e g, ad^k_e g] = 0,   (19)
for 0 ≤ i, j, k ≤ 1. We will use formula (10) to show that (MF5)' is invariant under mechanical feedback. Denote ∇̃, ẽ, g̃, γ as in (8). Then we calculate

∇̃^2_{g̃, ad_ẽ g̃} ad_ẽ g̃ = ∇^2_{g̃, ad_ẽ g̃} ad_ẽ g̃ − γ(ad_ẽ g̃, ad_ẽ g̃) ∇_g̃ g + γ(g̃, ad_ẽ g̃) ∇_g ad_ẽ g̃  mod E^0,
∇̃^2_{ad_ẽ g̃, g̃} ad_ẽ g̃ = ∇^2_{ad_ẽ g̃, g̃} ad_ẽ g̃ − γ(g̃, ad_ẽ g̃) ∇_{ad_ẽ g̃} g + γ(ad_ẽ g̃, g̃) ∇_g ad_ẽ g̃  mod E^0.

The second terms on the right-hand sides of both equations are in E^0, due to the feedback invariance of (MF3)', while the third terms are equal since γ(X, Y) = γ(Y, X) is symmetric. Therefore we conclude

∇̃^2_{g̃, ad_ẽ g̃} ad_ẽ g̃ − ∇̃^2_{ad_ẽ g̃, g̃} ad_ẽ g̃ = ∇^2_{g̃, ad_ẽ g̃} ad_ẽ g̃ − ∇^2_{ad_ẽ g̃, g̃} ad_ẽ g̃  mod E^0.

Denoting ad_ẽ g̃ = β ad_e g + d^0 g (see (9)) and using Lemma 1(i), we have

∇^2_{g̃, ad_ẽ g̃} ad_ẽ g̃ = ∇^2_{βg, β ad_e g + d^0 g} ad_ẽ g̃ = β^2 ∇^2_{g, ad_e g} ad_ẽ g̃ + β d^0 ∇^2_{g, g} ad_ẽ g̃,
∇^2_{ad_ẽ g̃, g̃} ad_ẽ g̃ = ∇^2_{β ad_e g + d^0 g, βg} ad_ẽ g̃ = β^2 ∇^2_{ad_e g, g} ad_ẽ g̃ + β d^0 ∇^2_{g, g} ad_ẽ g̃,

where the last terms on the right are equal, implying

∇^2_{g̃, ad_ẽ g̃} ad_ẽ g̃ − ∇^2_{ad_ẽ g̃, g̃} ad_ẽ g̃ = β^2 (∇^2_{g, ad_e g} ad_ẽ g̃ − ∇^2_{ad_e g, g} ad_ẽ g̃),

and it remains to prove that ∇^2_{g, ad_e g} ad_ẽ g̃ − ∇^2_{ad_e g, g} ad_ẽ g̃ ∈ E^0, which we show using Lemma 1(iii), where X, Y stand for either g or ad_e g. Denote ∇_X β = L_X β and ∇^2_{X,Y} β = L_X L_Y β − L_{∇_X Y} β (see Lemma 1) and calculate

∇^2_{X,Y} ad_ẽ g̃ = ∇^2_{X,Y}(β ad_e g + d^0 g)
  = β ∇^2_{X,Y} ad_e g + (L_X β) ∇_Y ad_e g + (L_Y β) ∇_X ad_e g + (∇^2_{X,Y} β) ad_e g
    + d^0 ∇^2_{X,Y} g + (L_X d^0) ∇_Y g + (L_Y d^0) ∇_X g + (∇^2_{X,Y} d^0) g
  = (∇^2_{X,Y} β) ad_e g  mod E^0,

since all the terms ∇^2_{X,Y} Z and ∇_X Y, for X, Y, Z ∈ {g, ad_e g}, vanish by (19). Therefore we have

∇^2_{g, ad_e g} ad_ẽ g̃ − ∇^2_{ad_e g, g} ad_ẽ g̃ = (∇^2_{g, ad_e g} β − ∇^2_{ad_e g, g} β) ad_e g  mod E^0.

Finally, we calculate

∇^2_{g, ad_e g} β − ∇^2_{ad_e g, g} β = L_g L_{ad_e g} β − L_{∇_g ad_e g} β − (L_{ad_e g} L_g β − L_{∇_{ad_e g} g} β) = L_{[g, ad_e g]} β = 0,

which shows necessity of (MF5)'.
Sufficiency. By (MF1)', rank E^1 = 2, and E^0 = span{g} is of constant rank 1 and thus always involutive; hence the system is, locally around x_0 (since g(x_0) ≠ 0), MF-equivalent to (cf. (11))

˙x^1 = y^1    ˙y^1 = u
˙x^2 = y^2    ˙y^2 = −Γ^2_{jk} y^j y^k + x^1.

We have g = ∂/∂x^1, ad_e g = −∂/∂x^2, and now we calculate

∇_g g = Γ^2_{11} ∂/∂x^2,  ∇_{ad_e g} g = −Γ^2_{12} ∂/∂x^2,

which by (MF3)' are in E^0 = span{∂/∂x^1}, implying Γ^2_{11} = Γ^2_{12} = Γ^2_{21} = 0. It follows that ∇_g g = ∇_{ad_e g} g = ∇_g ad_e g = 0 and ∇_{ad_e g} ad_e g = Γ^2_{22} ∂/∂x^2, and thus

∇^2_{g, ad_e g} ad_e g − ∇^2_{ad_e g, g} ad_e g = ∇_g ∇_{ad_e g} ad_e g − ∇_{∇_g ad_e g} ad_e g − ∇_{ad_e g} ∇_g ad_e g + ∇_{∇_{ad_e g} g} ad_e g
  = ∇_g ∇_{ad_e g} ad_e g = ∇_{∂/∂x^1}(Γ^2_{22} ∂/∂x^2) = (∂Γ^2_{22}/∂x^1) ∂/∂x^2,

implying, by (MF5)', ∂Γ^2_{22}/∂x^1 = 0, i.e. Γ^2_{22} = λ(x^2).
Now, we transform the system via the local mechanical diffeomorphism Φ : TQ → TQ̄ (compare to (16))

x̄ = φ(x),  ȳ = Dφ(x) y,  where  φ(x) = (L_e h, h)^T,

with h(x^2) = ∫_0^{x^2} Λ(s_2) ds_2 and Λ(s_2) = exp(∫_0^{s_2} λ(s_1) ds_1).

We calculate the evolution of the pair (x̄(t), ȳ(t)) of transformed coordinates, using d/dt h(x^2(t)) = Λ(x^2(t)) ẋ^2(t) and d/dt Λ(x^2(t)) = λ(x^2(t)) Λ(x^2(t)) ẋ^2(t); first we get

˙x̄^2 = d/dt h(x^2) = Λ(x^2) y^2 = ȳ^2
˙ȳ^2 = Λ(x^2) λ(x^2) y^2 y^2 + Λ(x^2) ẏ^2 = Λ(x^2) λ(x^2) y^2 y^2 + Λ(x^2)(−λ(x^2) y^2 y^2 + x^1) = Λ(x^2) x^1 = x̄^1

and then

˙x̄^1 = Λ(x^2) y^1 + λ(x^2) Λ(x^2) x^1 y^2 = ȳ^1
˙ȳ^1 = −Γ̄^1_{jk} ȳ^j ȳ^k + L^2_e h + u L_g L_e h,

where we denote by Γ̄^1_{jk} the Christoffel symbols in the ˙ȳ^1-equation of the transformed system. Applying the feedback ū = −Γ̄^1_{jk} ȳ^j ȳ^k + L^2_e h + u L_g L_e h, we get a controllable linear mechanical system in the canonical form ˙x̄^1 = ȳ^1, ˙ȳ^1 = ū, ˙x̄^2 = ȳ^2, ˙ȳ^2 = x̄^1.
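The absorption of the quadratic term by h = ∫ Λ can be checked symbolically in a toy case. The sketch below is our own verification, assuming the constant case λ(x^2) = 1, so Λ = e^{x^2} and h = e^{x^2} − 1 (these specific values are illustrative, not from the text).

```python
import sympy as sp

x1, x2, y1, y2, u = sp.symbols('x1 x2 y1 y2 u')

# n = 2 normal form with Gamma^2_22 = lambda(x^2) = 1 (toy case)
f = {x1: y1, x2: y2, y1: u, y2: -y2**2 + x1}

def ddt(expr):
    # time derivative along trajectories of the normal form
    return sum(sp.diff(expr, v)*f[v] for v in f)

h = sp.exp(x2) - 1            # h' = Lambda = exp(int_0^{x2} 1 ds) = e^{x2}
by2 = ddt(h)                  # bar-y^2 = Lambda(x^2) y^2
# the y^2 y^2 term cancels: d(bar-y^2)/dt = Lambda(x^2) x^1 = L_e h = bar-x^1
print(sp.simplify(ddt(by2) - sp.exp(x2)*x1))   # 0
```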
+
1429
+ 4
1430
+ Examples
1431
+ Example 1 (cont.): For system (5), we have g =
1432
+
1433
+ ∂x2 and adeg = − ∂
1434
+ ∂x1 are in-
1435
+ dependent. We check MF-linearizability using Theorem 2. A simple calculation
1436
+ shows that ∇gg = ∇adegg = 0 ∈ E0, but ∇2
1437
+ g,adeg adeg−∇2
1438
+ adeg,g adeg =
1439
+
1440
+ ∂x1 /∈ E0,
1441
+ therefore the system is not MF-linearizable.
1442
+ Thus (5) is an example of a system that is F-linearizable but not MF-
1443
+ linearizable. For such systems the choice is: either to F-linearize for the price
1444
+ of loosing the mechanical structure or to keep the mechanical structure but to
1445
+ get rid of the linearization.
1446
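This computation can be reproduced mechanically. The sketch below is our own verification in sympy: it implements (1) and (6) for the connection of (5), whose only nonzero symbol is Γ^1_{11} = x^2, and evaluates the obstruction of (MF5)'.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = (x1, x2)
# connection of system (5): only Gamma^1_{11} = x^2 is nonzero
Gamma = [[[0, 0], [0, 0]], [[0, 0], [0, 0]]]
Gamma[0][0][0] = x2

def nabla(A, B):
    # covariant derivative (1) in coordinates
    return [sp.simplify(sum(sp.diff(B[i], X[j])*A[j] for j in range(2))
            + sum(Gamma[i][j][k]*A[j]*B[k] for j in range(2) for k in range(2)))
            for i in range(2)]

def nabla2(A, B, C):
    # second covariant derivative (6): nabla_A nabla_B C - nabla_{nabla_A B} C
    return [sp.simplify(p - q) for p, q in zip(nabla(A, nabla(B, C)),
                                               nabla(nabla(A, B), C))]

g = [sp.Integer(0), sp.Integer(1)]       # g = d/dx^2
adeg = [sp.Integer(-1), sp.Integer(0)]   # ad_e g = -d/dx^1

diff_ = [a - b for a, b in zip(nabla2(g, adeg, adeg), nabla2(adeg, g, adeg))]
print(diff_)   # [1, 0]: equals d/dx^1, which is not in E^0 = span{d/dx^2}
```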
+ Example 2: Consider the equations of dynamics of the Inertia Wheel Pendulum
+ [18] with constant parameters m0, md, J2:
+ ẋ^1 = y^1,  ẏ^1 = e^1 + g^1 u,
+ ẋ^2 = y^2,  ẏ^2 = e^2 + g^2 u,
+ e^1 = (m0/md) sin x^1,  e^2 = −(m0/md) sin x^1,  g^1 = −1/md,  g^2 = (md + J2)/(J2 md).
+ We will verify whether the conditions of Theorem 2 are satisfied. First, we
+ calculate ad_e g = ((m0/md^2) cos x^1) ∂/∂x^1 − ((m0/md^2) cos x^1) ∂/∂x^2. It can
+ be checked that g and ad_e g are independent for x^1 ≠ ±π/2, which corresponds
+ to the horizontal position of the pendulum; therefore (MF1)' is satisfied
+ everywhere except for x^1 = ±π/2.
+ Next, we verify condition (MF2)' by calculating ∇_g g = ∇_{ad_e g} g = 0 ∈ E^0.
+ Finally, a direct calculation shows
+ ∇^2_{g, ad_e g} ad_e g = ∇^2_{ad_e g, g} ad_e g
+ = ((m0^2/md^5) cos^2 x^1) ∂/∂x^1 − ((m0^2/md^5) cos^2 x^1) ∂/∂x^2,
+ thus ∇^2_{g, ad_e g} ad_e g − ∇^2_{ad_e g, g} ad_e g = 0 ∈ E^0 satisfies (MF5)'.
+ The system is MF-linearizable. A linearizing function is
+ h(x) = ((md + J2)/J2) x^1 + x^2 (all others giving MF-linearization are of the
+ form σh(x), where σ ∈ R \ {0}). Due to the proof of Theorem 2, the linearizing
+ diffeomorphism is (x̃, ỹ) = Φ(x, y) = (φ(x), Dφ(x)y) with φ(x) = (h, L_e h)^T.
+ The system in new coordinates reads
+ ẋ̃^1 = ((md + J2)/J2) y^1 + y^2 = ỹ^1
+ ỹ̇^1 = ((md + J2)/J2)((m0/md) sin x^1 − (1/md) u) − (m0/md) sin x^1 + ((md + J2)/(md J2)) u
+     = (m0/J2) sin x^1 = L_e h = x̃^2                                     (20)
+ ẋ̃^2 = (m0/J2) cos x^1 y^1 = ỹ^2
+ ỹ̇^2 = −(m0/J2) sin x^1 y^1 y^1 + (m0^2/(2 md J2)) sin(2x^1) − (m0/(md J2)) cos x^1 u = ũ.
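The cancellation of the input u in the ỹ̇^1 line of (20) can be spot-checked numerically. A minimal sketch; the parameter values below are hypothetical (the paper only assumes m0, md, J2 constant):

```python
import numpy as np

# Numeric spot-check of Example 2: in tilde-coordinates the input u must
# cancel in the first chain, leaving d(ỹ1)/dt = (m0/J2) sin x1.
# Parameter values are hypothetical; the paper only assumes them constant.
m0, md, J2 = 1.0, 2.0, 0.5

def f(s, u):
    """Inertia Wheel Pendulum vector field, state s = (x1, x2, y1, y2)."""
    x1, x2, y1, y2 = s
    e1, e2 = m0 / md * np.sin(x1), -m0 / md * np.sin(x1)
    g1, g2 = -1.0 / md, (md + J2) / (J2 * md)
    return np.array([y1, y2, e1 + g1 * u, e2 + g2 * u])

rng = np.random.default_rng(0)
for _ in range(5):
    s, u = rng.normal(size=4), rng.normal()
    ds = f(s, u)
    # d/dt of ỹ1 = ((md+J2)/J2) y1 + y2, evaluated along the dynamics:
    ydot1 = (md + J2) / J2 * ds[2] + ds[3]
    assert abs(ydot1 - m0 / J2 * np.sin(s[0])) < 1e-12   # u-independent
```

The input coefficient ((md + J2)/J2)g^1 + g^2 vanishes identically, which is exactly why ỹ̇^1 reduces to (m0/J2) sin x^1 regardless of u.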
+ Example 3: We will study MF-linearizability of the TORA3 system (see
+ Figure 1), which is based on the TORA system (Translational Oscillator with
+ Rotational Actuator) studied in the literature, e.g. [19] (however, we add
+ gravitational effects). It consists of a two-dimensional spring-mass system,
+ with masses m1, m2 and spring constants k1, k2, respectively. A pendulum of
+ length l3, mass m3, and moment of inertia J3 is attached to the second body.
+ The displacements of the bodies are denoted by x^1 and x^2, respectively, and
+ the angle of the pendulum by x^3. The gravitational constant is a and u is a
+ torque applied to the pendulum. The kinetic energy is
+ T = (1/2) m1 (ẋ^1)^2 + (1/2)(m2 + m3)(ẋ^2)^2 + (1/2)(J3 + m3 l3^2)(ẋ^3)^2 + m3 l3 cos x^3 ẋ^2 ẋ^3,
+ and the mass matrix depends on the configurations. The potential energy is
+ V = (1/2) k1 (x^1)^2 + (1/2) k2 (x^2 − x^1)^2 − m3 l3 a cos x^3. The equations
+ of the dynamics read
+ m1 ẍ^1 + k1 x^1 − k2 (x^2 − x^1) = 0,
+ (m2 + m3) ẍ^2 + m3 l3 cos x^3 ẍ^3 − m3 l3 sin x^3 (ẋ^3)^2 + k2 (x^2 − x^1) = 0,
+ m3 l3 cos x^3 ẍ^2 + (m3 l3^2 + J3) ẍ^3 + m3 l3 a sin x^3 = u,
+ which can be rewritten on TQ as
+ ẋ^1 = y^1,  ẏ^1 = η^1,
+ ẋ^2 = y^2,  ẏ^2 = −¯Γ^2_33 y^3 y^3 + η^2 + τ^2 u,                        (21)
+ ẋ^3 = y^3,  ẏ^3 = −¯Γ^3_33 y^3 y^3 + η^3 + τ^3 u,
+ where ¯Γ^2_33 = −ν0 sin x^3 / (ν1 + ν2 sin^2 x^3), ¯Γ^3_33 = ν2 sin x^3 cos x^3 / (ν1 + ν2 sin^2 x^3),
+ η^1 = −(k1/m1) x^1 + (k2/m1)(x^2 − x^1),
+ η^2 = ((1/2) ν2 a sin 2x^3 − ν3 (x^2 − x^1)) / (ν1 + ν2 sin^2 x^3),
+ η^3 = (ν4 (x^2 − x^1) cos x^3 − ν5 sin x^3) / (ν1 + ν2 sin^2 x^3),
+ τ^2 = −m3 l3 cos x^3 / (ν1 + ν2 sin^2 x^3),  τ^3 = (m2 + m3) / (ν1 + ν2 sin^2 x^3),
+ with constant parameters:
+ ν0 = m3 l3 (m3 l3^2 + J3),  ν1 = m2 m3 l3^2 + J3 (m2 + m3),  ν2 = m3^2 l3^2,
+ ν3 = k2 (m3 l3^2 + J3),  ν4 = m3 l3 k2,  ν5 = m3 l3 a (m2 + m3).
+ Figure 1: The TORA3 system
+ To simplify calculations we apply to the system a preliminary mechanical
+ feedback^1 u = (1/τ^3)(¯Γ^3_33 y^3 y^3 − η^3 + ū), which yields
+ ẋ^1 = y^1,  ẏ^1 = −µ1 x^1 + µ2 x^2,
+ ẋ^2 = y^2,  ẏ^2 = µ3 sin x^3 y^3 y^3 + µ4(x^1 − x^2) − µ3 cos x^3 ū,     (22)
+ ẋ^3 = y^3,  ẏ^3 = ū,
+ with µ1 = (k1 + k2)/m1, µ2 = k2/m1, µ3 = m3 l3/(m2 + m3), µ4 = k2/(m2 + m3).
+ Since conditions (MF1)-(MF4) of Theorem 1 are MF-invariant, we will check
+ them for system (22). To summarize:
+ Γ^2_33 = −µ3 sin x^3,  and  Γ^i_jk = 0 otherwise,
+ e = (−µ1 x^1 + µ2 x^2) ∂/∂x^1 + µ4 (x^1 − x^2) ∂/∂x^2,
+ g = −µ3 cos x^3 ∂/∂x^2 + ∂/∂x^3 = g^2 ∂/∂x^2 + ∂/∂x^3.
+ We have (notice that calculations are performed on Q only)
+ ad_e g = (µ2 µ3 cos x^3) ∂/∂x^1 − (µ3 µ4 cos x^3) ∂/∂x^2,
+ ad^2_e g = µ3 cos x^3 ( (µ1 µ2 + µ2 µ4) ∂/∂x^1 − (µ2 µ4 + µ4^2) ∂/∂x^2 ),
+ therefore rank E^2 = 3 for x^3 ≠ ±π/2, and (MF1) is satisfied. Now
+ [g, ad_e g] = −(µ2 µ3 sin x^3) ∂/∂x^1 + (µ3 µ4 sin x^3) ∂/∂x^2 ∈ E^1
+ and (MF2) is satisfied. Then, for any vector field v = v^i(x) ∂/∂x^i,
+ ∇_v g = ( ∂g^2/∂x^3 + Γ^2_33 ) v^3 ∂/∂x^2 = 0,
+ thus (MF3) is satisfied if we replace v by, in particular, g, ad_e g, ad^2_e g.
+ Finally, for (MF4), we calculate
+ ∇^2_{g,g} e = (µ2 µ3 sin x^3) ∂/∂x^1 − (µ3 µ4 sin x^3) ∂/∂x^2 ∈ E^1,
+ ∇^2_{ad^k_e g, ad^j_e g} e = 0 otherwise,
+ thus the system is MF-linearizable. Now, choose h = (µ4/µ2) x^1 + x^2 + µ3 sin x^3
+ (whose differential dh annihilates g and ad_e g); thus we take the linearizing
+ diffeomorphism (x̃, ỹ) = (φ(x), (∂φ/∂x)(x) y), with φ(x) = (h, L_e h, L^2_e h)^T.
+ The linearized system is in the form of (LMS) and reads
+ ẋ̃^1 = (µ4/µ2) y^1 + y^2 + µ3 cos x^3 y^3 = ỹ^1
+ ỹ̇^1 = (µ4/µ2) ẏ^1 + ẏ^2 + µ3 (cos x^3 ẏ^3 − sin x^3 y^3 y^3) = (µ4(µ2 − µ1)/µ2) x^1 = x̃^2
+ ẋ̃^2 = (µ4(µ2 − µ1)/µ2) y^1 = ỹ^2
+ ỹ̇^2 = (µ4(µ2 − µ1)/µ2) ẏ^1 = (µ4(µ2 − µ1)/µ2)(µ2 x^2 − µ1 x^1) = x̃^3
+ ẋ̃^3 = (µ1 µ4(µ1 − µ2)/µ2) y^1 + µ4(µ2 − µ1) y^2 = ỹ^3
+ ỹ̇^3 = (µ2 − µ1) µ3 µ4 sin x^3 y^3 y^3 − ((µ1 − µ2)(µ1^2 + µ2 µ4) µ4/µ2) x^1
+      + (µ1 − µ2)(µ1 + µ4) µ4 x^2 + (µ1 − µ2) µ3 µ4 cos x^3 ū = ũ.
+ ^1 This preliminary feedback is not necessary; it is possible to check the
+ conditions and to linearize the system without it, since our method and
+ conditions are feedback invariant.
+ 5  Conclusions
+ In this paper, we consider MF-linearization of mechanical control systems (MS)
+ with scalar control. We formulate the problem as a particular case of feedback
+ linearization that preserves the mechanical structure of (MS), so that the
+ transformed system is both linear and mechanical. As we showed in [4] and
+ confirmed in this paper, even in the simplest case the class of MF-linearizable
+ systems is substantially smaller than that of general F-linearizable ones.
+ Therefore a natural question arises, namely to compare the conditions presented
+ in this paper with those for F-linearization. The answer lies in the interplay
+ between the distributions E^i = span{ad^j_e g, 0 ≤ j ≤ i} and the "usual"
+ distributions for F-linearization D^i = span{ad^j_F G, 0 ≤ j ≤ i}. We will
+ address this problem in the future.
+ 6  Appendix
+ The following lemma can be proved by a direct calculation.
+ Lemma 1. The second covariant derivative ∇^2_{X,Y} Z satisfies the following
+ properties:
+ (i) linearity over C^∞(Q) in X and Y:
+ ∇^2_{(α1 X1 + α2 X2), Y} Z = α1 ∇^2_{X1,Y} Z + α2 ∇^2_{X2,Y} Z,
+ ∇^2_{X, (α1 Y1 + α2 Y2)} Z = α1 ∇^2_{X,Y1} Z + α2 ∇^2_{X,Y2} Z;
+ (ii) linearity over R in Z:
+ ∇^2_{X,Y}(a1 Z1 + a2 Z2) = a1 ∇^2_{X,Y} Z1 + a2 ∇^2_{X,Y} Z2;
+ (iii) the product rule:
+ ∇^2_{X,Y}(βZ) = β ∇^2_{X,Y} Z + (L_X β) ∇_Y Z + (L_Y β) ∇_X Z + (∇^2_{X,Y} β) Z,
+ where ∇^2_{X,Y} β = L_X L_Y β − L_{∇_X Y} β ∈ C^∞(Q), X_i, Y_i, Z_i ∈ X(Q),
+ α_i, β ∈ C^∞(Q), and a_i ∈ R.
+ The following lemma is crucial for the proof of Theorem 1.
+ Lemma 2. For the system
+ ẋ^1 = y^1,  ẏ^1 = u,
+ ẋ^i = y^i,  ẏ^i = −Γ^i_jk y^j y^k + x^{i−1},  2 ≤ i ≤ n,                (23)
+ we have, for any 1 ≤ k, j ≤ n,
+ ∇^2_{ad^{k−1}_e g, ad^{j−1}_e g} e = (−1)^{j+k} ( ∂Γ^i_js/∂x^k e^s + Γ^i_{j,k+1}
+ + Γ^i_{k,j+1} − Γ^{i−1}_kj + (Γ^d_js Γ^i_kd − Γ^d_kj Γ^i_ds) e^s ) ∂/∂x^i.   (24)
+ Proof. For system (23) we calculate
+ ∇^2_{ad^{k−1}_e g, ad^{j−1}_e g} e = (−1)^{j+k} ∇^2_{∂/∂x^k, ∂/∂x^j} e
+ = (−1)^{j+k} ( ∇_{∂/∂x^k} ∇_{∂/∂x^j} e − ∇_{∇_{∂/∂x^k} ∂/∂x^j} e ),
+ where ∇_{∂/∂x^j} e = ( ∂e^d/∂x^j + Γ^d_js e^s ) ∂/∂x^d, and
+ ∇_{∂/∂x^k} ( ∇_{∂/∂x^j} e )
+ = ∇_{∂/∂x^k} ( (∂e^d/∂x^j) ∂/∂x^d ) + ∇_{∂/∂x^k} ( (Γ^d_js e^s) ∂/∂x^d )
+ = (∂e^d/∂x^j) ∇_{∂/∂x^k} ∂/∂x^d + L_{∂/∂x^k}(∂e^d/∂x^j) ∂/∂x^d
+   + (Γ^d_js e^s) ∇_{∂/∂x^k} ∂/∂x^d + ( L_{∂/∂x^k}(Γ^d_js) e^s + L_{∂/∂x^k}(e^s) Γ^d_js ) ∂/∂x^d
+ = (∂e^d/∂x^j) Γ^i_kd ∂/∂x^i + Γ^d_js e^s Γ^i_kd ∂/∂x^i + ( ∂Γ^i_js/∂x^k e^s + (∂e^s/∂x^k) Γ^i_js ) ∂/∂x^i
+ = ( ∂Γ^i_js/∂x^k e^s + Γ^i_{j,k+1} + Γ^i_{k,j+1} + Γ^d_js Γ^i_kd e^s ) ∂/∂x^i,
+ since ∂e^d/∂x^j = 1 if d = j + 1 and zero otherwise, and thus
+ (∂e^d/∂x^j) Γ^i_kd = Γ^i_{k,j+1} (analogously for the other derivatives).
+ Now, using ∇_{∂/∂x^k} ∂/∂x^j = Γ^d_kj ∂/∂x^d, we calculate
+ ∇_{∇_{∂/∂x^k} ∂/∂x^j} e = ∇_{Γ^d_kj ∂/∂x^d} e = Γ^d_kj ( ∂e^i/∂x^d + Γ^i_ds e^s ) ∂/∂x^i
+ = ( Γ^{i−1}_kj + Γ^d_kj Γ^i_ds e^s ) ∂/∂x^i,
+ so we have
+ ∇^2_{∂/∂x^k, ∂/∂x^j} e = ∇_{∂/∂x^k} ( ∇_{∂/∂x^j} e ) − ∇_{∇_{∂/∂x^k} ∂/∂x^j} e
+ = ( ∂Γ^i_js/∂x^k e^s + Γ^i_{j,k+1} + Γ^i_{k,j+1} − Γ^{i−1}_kj + (Γ^d_js Γ^i_kd − Γ^d_kj Γ^i_ds) e^s ) ∂/∂x^i,
+ which yields (24).
+ References
+ [1] R. W. Brockett, "Feedback invariants for nonlinear systems", in Proc. IFAC Congress, Helsinki, 1978.
+ [2] B. Jakubczyk and W. Respondek, "On linearization of control systems", Bull. Acad. Polonaise Sci., Ser. Sci. Math., vol. 28, pp. 517-522, 1980.
+ [3] L. R. Hunt and R. Su, "Linear equivalents of nonlinear time varying systems", Proc. of the MTNS, pp. 119-123, Santa Monica, 1981.
+ [4] M. Nowicki and W. Respondek, "A classification of feedback linearizable mechanical systems with 2 degrees of freedom", in Advanced, Contemporary Control, vol. 1196, pp. 638-650, Springer, 2020.
+ [5] F. Bullo and A. D. Lewis, Geometric Control of Mechanical Systems, Springer-Verlag, 2004.
+ [6] H. Nijmeijer and A. J. van der Schaft, Nonlinear Dynamical Control Systems, Springer-Verlag, New York, 1990.
+ [7] A. Isidori, Nonlinear Control Systems (3rd ed.), Springer-Verlag, Berlin, Heidelberg, 1995.
+ [8] A. M. Bloch, Nonholonomic Mechanics and Control, Springer, 2003.
+ [9] S. Ricardo and W. Respondek, "When is a control system mechanical?", Journal of Geometric Mechanics, vol. 2, no. 3, pp. 265-302, 2010.
+ [10] W. Respondek and S. Ricardo, "Equivariants of mechanical control systems", SIAM J. Control Optim., vol. 51, no. 4, pp. 3027-3055, 2013.
+ [11] F. Bullo and A. D. Lewis, "Reduction, linearization, and stability of relative equilibria for mechanical systems on Riemannian manifolds", Acta Applicandae Mathematicae, vol. 99, no. 1, pp. 53-95, 2007.
+ [12] W. Respondek and S. Ricardo, "On linearization of mechanical control systems", IFAC Proceedings Volumes, vol. 45, no. 19, pp. 102-107, 2012.
+ [13] M. Nowicki and W. Respondek, "Mechanical state-space linearization of mechanical control systems and symmetric product of vector fields", IFAC-PapersOnLine, vol. 54, no. 19, pp. 204-209, 2021.
+ [14] N. S. Bedrossian and M. W. Spong, "Feedback linearization of robot manipulators and Riemannian curvature", Journal of Robotic Systems, vol. 12, no. 8, pp. 541-552, 1995.
+ [15] P. C. Hughes and R. E. Skelton, "Controllability and observability of linear matrix-second-order systems", Journal of Applied Mechanics, vol. 47, no. 2, pp. 415-420, 1980.
+ [16] M. Nowicki and W. Respondek, "A mechanical feedback classification of linear mechanical control systems", Applied Sciences, vol. 11, no. 22, p. 10669, 2021.
+ [17] J. M. Lee, Riemannian Manifolds: An Introduction to Curvature, Graduate Texts in Mathematics, Springer, New York, 1997.
+ [18] M. W. Spong, P. Corke and R. Lozano, "Nonlinear control of the reaction wheel pendulum", Automatica, vol. 37, no. 11, pp. 1845-1851, 2001.
+ [19] C. Wan, D. Bernstein and V. Coppola, "Global Stabilization of the Oscillating Eccentric Rotor", Nonlinear Dynamics, vol. 10, pp. 49-62, 1995.
EtAyT4oBgHgl3EQfSfdv/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
FtE4T4oBgHgl3EQfHAzF/content/tmp_files/2301.04900v1.pdf.txt ADDED
@@ -0,0 +1,753 @@
+ Universality of neural dynamics on complex networks
+ Vaiva Vasiliauskaite†∗ and Nino Antulov-Fantulin†
+ Computational Social Science, ETH Zürich, 8092 Zürich, Switzerland
+ (Dated: January 13, 2023)
+ This paper discusses the capacity of graph neural networks to learn the
+ functional form of ordinary differential equations that govern dynamics on
+ complex networks. We propose necessary elements for such a problem, namely
+ inductive biases, a neural network architecture and a learning task.
+ Statistical learning theory suggests that the generalisation power of neural
+ networks relies on independence and identical distribution (i.i.d.) of
+ training and testing data. Although this assumption, together with an
+ appropriate neural architecture and a learning mechanism, is sufficient for
+ accurate out-of-sample predictions of dynamics such as, e.g., mass-action
+ kinetics, by studying out-of-distribution generalisation in the case of
+ diffusion dynamics we find that the neural network model: (i) has a
+ generalisation capacity that depends on the first moment of the initial value
+ data distribution; (ii) learns the non-dissipative nature of the dynamics
+ implicitly; and (iii) has an accuracy resolution limit of order O(1/√n) for a
+ system of size n.
+ Introduction
+ Dynamics in a complex networked system is modelled as a set of n ordinary
+ differential equations (ODEs) that describe the rate of change of a quantity
+ x_i(t) for each node i and are coupled via an adjacency matrix A ∈ R^{n×n}. A
+ general form of these equations is
+ ẋ_i = L(x_i(t)) + ⊕_j A_ij Q(x_i(t), x_j(t)) = F(x_i(t), x(t), A),       (1)
+ where L describes self-interactions, Q is a function that models pairwise
+ interactions between neighbours and ⊕ is an aggregation function. With
+ appropriate choices of the functions L, Q, ⊕ this definition is a general
+ form for models of epidemic processes, biochemical dynamics, birth–death
+ processes, gene regulatory dynamics [1], as well as dynamics that show chaotic
+ behaviour [2].
+ The initial value problem for a set of ODEs such as Eq. 1, together with an
+ initial condition x(t0), has a solution that satisfies
+ x(t) = x(t0) + ∫_{t0}^{t} F(x(t′), A) dt′                                (2)
+ and describes a set of trajectories of the dynamics, if the system was
+ initialised at x(t0).
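As a concrete instance of Eqs. 1–2, the sketch below integrates heat-type dynamics (L = 0, Q(x_i, x_j) = B(x_j − x_i)) on a random graph with SciPy; the values of n and B and the graph ensemble are our illustrative choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Heat (diffusion) dynamics, a special case of Eq. (1) with L = 0 and
# Q(x_i, x_j) = B (x_j - x_i); n, B and the graph ensemble are our choices.
rng = np.random.default_rng(1)
n, B = 20, 0.05
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                    # symmetric, no self-loops

def F(t, x):
    # sum_j A_ij (x_j - x_i) = (A x)_i - deg_i * x_i
    return B * (A @ x - A.sum(1) * x)

x0 = rng.beta(5.0, 5.0, size=n)                   # x(0) ~ B(5, 5)
sol = solve_ivp(F, (0.0, 1.5), x0, max_step=0.01)
# For a symmetric A the total "mass" sum_i x_i is conserved:
assert abs(sol.y[:, -1].sum() - x0.sum()) < 1e-6
```

Runge–Kutta steps preserve linear invariants exactly, so the conservation check holds up to floating-point error.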
+ Appropriately set up, a neural network Ψ(x; ω) has the capacity to
+ approximate any continuous function F with compact support [3]. In practice,
+ learning the weights is usually done via some variant of the backpropagation
+ algorithm [4].
+ Notably, neural networks can also be used to approximate dynamical systems
+ [5] and to find solutions of initial and boundary value problems of
+ differential equations [6]. A dynamical system is one in which F describes
+ the time dependence of x in an ambient space. Notably, if F is known, the
+ description quality of the course of dynamics is independent of a coordinate
+ in the space. For example, Newton's laws of motion describe the trajectory of
+ a bouncing ball regardless of its longitudinal and latitudinal position.
+ Recovering universal dynamical principles from empirical data has been shown
+ to belong to the NP-hard class [7].
+ ∗ vvasiliau@ethz.ch
+ † Authors contributed equally to this work.
+ Despite the hardness of the problem, in recent years different classes of
+ neural networks were used to learn different parts of dynamics from empirical
+ data, including graph neural networks [8] and their differential [9]
+ counterparts [10], reservoir computers [11, 12], as well as regression
+ techniques [13, 14], or to learn control dynamics [15].
+ Here we discuss architectural design choices and inductive biases that are
+ crucial for a neural network model that approximates dynamics evolving on
+ complex networks. We then study the model's generalisation capacity using
+ simple models of deterministic dynamics [1]. Lastly, we discuss our work in
+ the context of learning principles that govern dynamics in complex systems
+ from the perspective of generalisation to unseen initial conditions.
+ Inductive biases for dynamics on complex networks
+ There are several important inductive biases and assumptions worth noting
+ about complex network dynamics and its neural approximations.
+ 1. Network structure: There exists a known static network represented as an
+ adjacency matrix A. Therefore it is reasonable to take a GNN [16] as the
+ candidate for Ψ. A single-layer graph convolution network can be defined as
+ Ψ_gnn(x) = (σ[ΦxW + b]) W_agg,                                           (3)
+ where x ∈ R^{n×d} is an input, Φ ∈ R^{n×n} is a graph operator (e.g.
+ Φ = D̃^{−1/2} Ã D̃^{−1/2} [17]), W ∈ R^{d×h}, b ∈ R^{n×1}, W_agg ∈ R^{h×d} are
+ trainable parameters and σ is a nonlinear function. Different versions of
+ GNNs, with different expressive power with respect to the Weisfeiler-Lehman
+ isomorphism test, are described in [18].
+ 2. Self-interaction: The model includes a self-interaction part that
+ approximates L(·).
+ arXiv:2301.04900v1 [cond-mat.stat-mech] 12 Jan 2023
+ 3. Neighbour interaction: The model includes a neighbour-interaction part
+ that approximates Q(·, ·). Note that a single-layer GNN, such as a
+ convolutional graph neural network, has no mixed quadratic terms x_i x_j and
+ therefore does not straightforwardly satisfy such a condition. Although in
+ theory it should still be possible to approximate nonlinear quadratic terms
+ with a single-layer neural network of arbitrary width, in practice this can
+ be challenging and may require either a very large number of hidden neurons
+ or an exotic learning mechanism that goes beyond standard gradient descent.
+ Alternatively, one can improve the expressivity of the model by increasing
+ its depth, i.e. using multi-layer GNNs or message-passing neural networks
+ [19] to represent Ψ(x; ω). Here ω includes graph operator terms Φ^k,
+ k ∈ {1, 2, ..., K}, where K is the depth of the neural network.
+ 4. Spatiotemporal locality: The dynamical process that follows Eq. 1 must be
+ local, that is, the function Q(·, ·) encodes interactions between neighbours.
+ However, including terms Φ^k in a multi-layer graph neural network allows
+ k-hop interactions via length-k walks in the network at a timescale smaller
+ than the infinitesimal dt, thereby subdividing dt into k intervals and
+ breaking the assumption of temporal locality.
+ 5. Aggregation of neighbour interactions: The aggregation can itself be
+ non-linear.
+ 6. Initial value condition: Initial values are preserved during training:
+ Ψ(x0) → x0. If the neural network straightforwardly approximates the RHS of
+ Eq. 2, then the encoding and decoding layers must be pseudo-inverses of each
+ other, see App. A.
+ 7. Conservation/dissipation laws: If the system is closed, it does not
+ exchange energy or mass with the environment; therefore a conservation law
+ holds, namely
+ Σ_i dx_i(t)/dt = C  ∀t.                                                  (4)
+ A constraint on a neural network to satisfy conservation laws can be imposed
+ via a regularisation term in the loss function,
+ R(D) = (1/|D|) Σ_{x∈D} |F(x)·1 − Ψ(x)·1|,
+ which penalises model weights that produce predictions that do not respect
+ the conservation law Eq. 4. Here D is the dataset over which the loss is
+ calculated. The strength of the regulariser term can be modulated by
+ multiplying R(D) by a non-negative real number λ.
170
+ Given the inductive biases for dynamics
171
+ on networks, we propose a neural network model of the
172
+ following form:
173
+ ˙x = ψψψℓ(x) + ψψψ
174
+
175
+ (x)
176
+ (5)
177
+ ψψψ
178
+
179
+ (x) = vec−1�
180
+ ψψψq3�
181
+ vec
182
+
183
+ ΦΦΦ ⊙
184
+
185
+ ψψψq1(x)⊤1 ×k ψψψq2(x)⊤2� ���
186
+ where ψψψ(x) is a single hidden layer neural network
187
+ are given by (3).
188
+ The mappings of local interaction
189
+ are summarised in App. B. The design choices of Eq.
190
+ 5 comply with the inductive biases stated earlier.
191
+ To
192
+ this end, we performed vetorisation of input to the
193
+ function ψψψ
194
+
195
+ [ψψψq3 (·)].
196
+ This function can approximate
197
+ any invariant poolings of a set [20] or a multiset [18].
198
+ Notably, we also assumed that Q(·, ·) is factorisable.
199
+ Since it can be approximated by Chebyshev polynomials,
200
+ and, according to the strictly real fundamental theorem
201
+ of algebra [21], it is possible to factorise polynomial
202
+ function to two factors. Alternatively, one can use deep
203
+ sets [20] as arguments to approximate Q(·, ·).
204
+ In order to guarantee the local existence and unique-
205
+ ness of the solution to the initial value problem, by Pi-
206
+ card–Lindel¨of theorem the neural network ΨΨΨ needs to be
207
+ Lipschitz continuous. To enforce Lipschitz continuity of
208
+ ΨΨΨ, we will be using 1-Lipschitz activation functions such
209
+ as ReLU, sigmoid, softmax, or tanh.
210
+ Learning task
211
+ We formulate two distinct statistical
212
+ learning settings that relate to an increasing strength of
213
+ generality in the approximation of a dynamical system.
214
+ 1.
215
+ Regression task to approximate FFF by ΨΨΨ: An
216
+ appropriate “proto data set” here is
217
+ D = {(x(t)α, y(t)α)},
218
+ s.t. x(t)α ∈ Rn, y(t)α ∈ Rn, x(0)α ∼ fx(0)(x), t = [0, T] ∈ R.
219
+ our labels are defined as y(t)α = FFF(x((t))α), α denotes
220
+ α-th initial condition x(0)α sampled from a predefined
221
+ distribution fx(0)(x); all others points x(t)α are obtained
222
+ following Eq. 2. Here the functional mapping that is be-
223
+ ing learnt is ˆFFF : Rn → Rn and is obtained by minimising
224
+ the loss L between the true labels y and the labels f(x)
225
+ obtained by the current model:
226
+ ˆFFF = arg
227
+ min
228
+ f:Rn→Rn
229
+ E
230
+ P(x,y) L(f(x), y).
231
+ Here E is an expectation operator, P(x, y) is the data
232
+ sampling distribution.
233
+ At the moment, samples from the “proto data set” are
234
+ not independent: those trajectories that were obtained
235
+ from the same initial condition are non-i.i.d. Such sam-
236
+ pling is compulsory for the Uniform Law of Large num-
237
+ bers, that together with capacity control ensures general-
238
+ isation from train to test set [22, 23]. To ensure statistical
239
+ independence of samples, we create finite train and test
240
+ sets of size m1, m2 by using a specific distribution P over
241
+ a “proto data set��
242
+ Dtrain ∪ Dtest ∼ P(x, y).
243
+ Specifically,
244
+ we randomly delegate (x(t)α, y(t)α) to
245
+ either Dtrain or Dtest thereby ensuring an i.i.d. condition
246
+ by dropping information on the initial conditions and
247
+ time.
248
+
249
+ 3
250
+ Dynamics
251
+ L
252
+ Q
253
+ Ltrain
254
+ reg
255
+ Ltest
256
+ reg
257
+ ≈reg
258
+ Ltrain
259
+ traj
260
+ Ltest
261
+ traj
262
+ ≈traj
263
+ Heata
264
+
265
+ B(xj − xi) 2.03 ± 1.03
266
+ 2.14 ± 1.08
267
+
268
+ 1.39 ± 0.59 1.47 ± 0.63
269
+
270
+ MAKb
271
+ F − Bxb
272
+ i
273
+ Rxj
274
+ 0.41 ± 1.08
275
+ 0.44 ± 1.14
276
+
277
+ 1.48 ± 0.05 1.55 ± 0.04
278
+ ×
279
+ PDc
280
+ −Bxb
281
+ i
282
+ Rxa
283
+ j
284
+ 4.68 ± 12.82 4.72 ± 12.89
285
+
286
+ 3.03 ± 0.03 3.04 ± 0.03
287
+
288
+ MMd
289
+ −Bxi
290
+ R
291
+ xh
292
+ j
293
+ 1+xh
294
+ j
295
+ 7.68 ± 5.36
296
+ 7.83 ± 5.47
297
+
298
+ 5.93 ± 0.12 5.94 ± 0.14
299
+
300
+ SISe
301
+ −Bxi
302
+ (1 − xi)xj
303
+ 1.16 ± 3.62
304
+ 1.31 ± 4.07
305
+
306
+ 1.54 ± 0.01 1.64 ± 0.02
307
+ ×
308
+ a B = 0.05.
309
+ b B = 0.1, R = 1, f = 0.5.
310
+ c B = 2, R = 0.3, a = 1.5, b = 3.
311
+ d B = 4, R = 0.5, h = 3.
312
+ e B = 5, R = 0.5.
313
+ TABLE I. Generalisation of a neural network model Eq. 5 trained on dynamics from [1] in the regression task setting, and
314
+ the trajectory learning setting. Reported loss values are multiplied by a factor 10−2. In columns denoted “≈” we indicate for
315
+ which dynamics the train loss is approximately similar (“✓”) or different (“×”) from the test loss.
316
+ 2. Trajectory learning setting that approximates x(t): here the train set
+ contains m1 initial conditions x(0)_α as inputs, while each label corresponds
+ to the trajectory y_α = {x(t)_α}, where t = 0, ∆t, 2∆t, ..., k∆t = T,
+ realised from the initial condition x(0)_α:
+ D_train = {(x(0)_α, y_α)},
+ s.t. x(0)_α ∈ R^n, y_α ∈ R^{kn}, x(0)_α ∼ f_{x(0)}(x), α ∈ [1, m1],
+ y_α = {x(0)_α, x(∆t)_α, ..., x(k∆t)_α},
+ and the test set D_test is constructed analogously from m2 initial conditions
+ sampled from the same distribution x(0)_α ∼ f_{x(0)}(x). The mapping learnt
+ here is of the form F̂ : R^n → R^{kn} and is realised by computing the
+ initial value problem Eq. 2 using a neural network Ψ in replacement of F.
+ Experiments and Results
+ We consider models with h′ = 6, h = 8, h′′ = 5, h_d = 3, trained in 1000
+ epochs using the Adam optimiser with learning rate 10^{−2} and weight decay
+ 10^{−3}. All activations are ReLU. Unless otherwise stated, the initial
+ values in both the train set and the test set are sampled from
+ B[a = 5, b = 5]. For numerical integration, an explicit Runge–Kutta method of
+ order 5(4) is used [24].
+ The training loss function is the average L1 norm. For the regression task,
+ the loss is
+ L^train_reg = (1/N_reg) Σ_{x,y∈D_train} ( ||f(x) − y||_1 + λR(x) ),
+ where N_reg = |D_train|(x_max − x_min). For the trajectory learning task, the
+ loss is defined as
+ L^train_traj = (1/N_traj) Σ_{x(0),y∈D_train} Σ_{k=0}^{T/∆t} ( ||x(k∆t) − x̂(k∆t)||_1 + λR(x(k∆t)) ).   (6)
+ Here the normalisation constant is N_traj = |D_train| n T (x_max − x_min)/∆t.
+ λ = 0 and the regularisation terms are nil for the first part of the
+ analysis. The training sets include samples from 10^3 trajectories, the
+ testing sets from 10^2 trajectories, and the batch size is 10. The parameters
+ for numerical integration are ∆t = 0.01, T = 1.5. In all cases, a graph was
+ sampled from the Erdős–Rényi ensemble with p = 0.5, and ⊕ = Σ_j. Tab. I
+ shows that the trained neural network model Eq. 5 can well-approximate the
+ true dynamics and generalise well to unseen initial values, provided
+ f_{x(0)}(x) is used for generating both the training set and the testing set.
+ Generalisation
+ Crucially, the universality of the neural approximation exemplified in
+ Tab. I is only at the lowest level, attainable by putting strong constraints
+ on the test set (in accordance with statistical learning theory): the two
+ sets must be statistically equivalent. If the distribution of initial values
+ is irrelevant for the steady state solution, the neural model also
+ inadvertently universally approximates the dynamical system.
+ However, it seems reasonable to ask if a neural network can do better. In
+ Tab. II we propose three tiers of universality of the approximation F ≈ Ψ in
+ terms of statistical properties of training and testing samples. In this
+ context, statistical learning theory concerns only the lowest level of
+ generality.
+ Level    Relation between f and g
+ Bottom   f_X0 ≡ g_X0 or f_{X0,X∞} = f_X0 f_X∞
+ Mid      f_X0 ≢ g_X0, sup f_X0 = sup g_X0
+ Top      f_X0 ≢ g_X0, sup f_X0 ≠ sup g_X0
+ TABLE II. Generalisation levels for a neural approximation F ≈ Ψ,
+ encompassed in the model's ability to extrapolate predictions to data that
+ was not used during training. Probability density functions related to
+ training data are denoted by "f"; those related to testing data are denoted
+ by "g".
+ More sophisticated, mid- and top-level generalisations would enable faithful
+ prediction in cases where the constraints on the statistical properties of
+ the data are relaxed, for example, where f_X0 is not the same as g_X0.
+ Diffusion
+ A dynamical system whose fate and course of action depend on the
+ distribution of initial values enables us to study the limits of
+ generalisation of a
427
+ example due to its simplicity and known analytical solu-
428
+ tion of the form
429
+ x(t) =
430
+
431
+ i
432
+ ai(0)e−Bλitvi,
433
+ ai(0) = x(t)⊤vi,
434
+ (7)
435
+ where λi, vi are ith eigenvalue and eigenvector of the
436
+ graph Laplacian and the steady state solution is given
437
+ by
438
+ lim
439
+ t→∞ xi(t) = 1
440
+ n
441
+
442
+ j
443
+ xj(0)
444
+ ∀i.
445
Perturbation of the initial value x(0) by \delta \sim f_\delta, such that x_i^\delta(0) = x_i(0) + \delta, gives a difference in the steady-state solutions of \langle x_i^\delta(0) \rangle_i - \langle x_i(0) \rangle_i = \gamma.
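As a concrete illustration of the spectral solution (7), its steady state and the effect of a constant perturbation, here is a minimal numerical sketch; the example graph, the diffusion constant B and all variable names are hypothetical stand-ins, not taken from the paper's code:

```python
import numpy as np

# Diffusion on a 4-node path graph, solved via the Laplacian spectrum.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
B = 1.0                                 # diffusion constant
lam, V = np.linalg.eigh(L)              # eigenvalues lam_i, eigenvectors v_i (columns)

def x_of_t(x0, t):
    # x(t) = sum_i a_i(0) exp(-B lam_i t) v_i, with a_i(0) = x(0)^T v_i
    a0 = V.T @ x0
    return V @ (a0 * np.exp(-B * lam * t))

rng = np.random.default_rng(0)
x0 = rng.beta(5, 5, size=4)

# Steady state: every node converges to the mean of the initial values.
assert np.allclose(x_of_t(x0, 1e3), x0.mean())

# A constant perturbation delta of x(0) shifts the steady state by gamma = delta.
delta = 0.1
assert np.allclose(x_of_t(x0 + delta, 1e3) - x_of_t(x0, 1e3), delta)
```

By linearity of the dynamics, the shift gamma equals the shift of the initial-value mean, which is exactly what the perturbation argument above states.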
Fig. 1 shows how the loss accumulates over the integration time t for the neural network model Ψ, for trajectories in the train and in the test sets. In addition, we consider a perturbation (NN,pert) where the initial value is sampled from a different distribution, namely gX0(x) = B(6, 5), while the neural network was trained using fX0(x) = B(5, 5). The figure shows that the neural network prediction is reasonable under the i.i.d. sampling condition for initial conditions in the train and test sets.
FIG. 1. Node-average loss between the analytical solution and: 1) the numerical solution (Numerical), 2) the neural network solution for a subset of initial conditions in the training set (NN,train) as well as a subset of the testing set (NN,test). The original x(0) ∼ B(5, 5), whereas the perturbed (NN,pert) initial values x′(0) ∼ B(6, 5). The loss is computed for the trajectory learning task using N_traj = 100 trajectories in each case, using the equation L_traj(t) = \frac{1}{N_{traj}} \sum_{x(0),y \in D} ||x(t) - \hat{x}(t)||_1. The errors show one standard deviation.
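The trajectory loss in the caption is straightforward to compute; the following sketch (array shapes and names are hypothetical) evaluates L_traj(t) for a batch of trajectories:

```python
import numpy as np

def traj_loss(x_true, x_pred):
    """L_traj(t) = (1/N_traj) * sum over trajectories of ||x(t) - x_hat(t)||_1.

    x_true, x_pred: shape (N_traj, T, n) -- trajectories, time steps, nodes.
    Returns the loss as a function of t, shape (T,).
    """
    # L1 norm over nodes (axis 2), then average over trajectories (axis 0),
    # following the caption's formula as written.
    return np.abs(x_true - x_pred).sum(axis=2).mean(axis=0)

rng = np.random.default_rng(1)
xt = rng.random((100, 50, 10))        # 100 trajectories, 50 steps, 10 nodes
xp = xt + 0.01                        # predictions off by 0.01 per node
loss = traj_loss(xt, xp)
assert loss.shape == (50,)
assert np.allclose(loss, 0.1)         # 10 nodes * 0.01 each
```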
Fig. 2 follows the same analysis and shows that, by varying the parameters of the beta distribution B(a, b), the loss in the steady state (averaged over the last 10 steps of the simulation) is proportional to the difference in the expectation values of the beta distributions used in training and in testing to generate the initial values. All in all, these results show that the neural network approximation of the differential form is exclusive to the statistical properties of the training set.
So far, the conservation law (4) and the effect of the regulariser were not considered. We study them in Fig. 3 for a small case with a graph composed of N = 2 nodes. This figure presents two key findings: Fig. 3a) clearly shows that Ψ is biased towards the training set, whereas Fig. 3b) makes clear that Ψ has the property of implicit dissipative (conservation) regularisation. Even in the case of no explicit regularisation of the dissipative term, the neural network optimises towards a less dissipative regime. This is of particular importance, since some systems in Tab. I are non-dissipative and some are dissipative.
FIG. 2. Generalisation of Ψ to unseen initial conditions. The neural network was trained using initial values sampled from x0 ∼ B(5, 5) until it achieved the loss L_train. Its prediction capacity was then tested on dynamics with initial conditions x0 ∼ B(a, b = a) (red circles) as well as x0 ∼ B(a, b = 5) (blue triangles). The dashed orange line is the function |0.5 − a|/(a + 5). The loss is computed for the trajectory learning task using N_traj = 100 trajectories in each case, using (6), omitting the term x_max − x_min in the normalisation and considering the last 10 timesteps. The errors show one standard deviation across trajectories.
Next, we turn our attention to analysing the out-of-sample loss for a system of n coupled differential equations (coupled via an Erdős–Rényi graph) with diffusion dynamics. Notably, the steady state solution is governed by the average value ⟨x0⟩, and since we have n nodes in our system, the fluctuations of this value scale ∝ 1/√n. This implies that it is easier to accurately predict dynamics with a larger number of differential equations. In Fig. 4 we show that, indeed, the test loss is inversely proportional to the system's size.
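The 1/√n scaling of the fluctuations of the initial-value mean can be checked directly; a minimal sketch with hypothetical sample sizes:

```python
import numpy as np

# Fluctuations of the node-average <x0> of n i.i.d. Beta(1, 1) initial values
# shrink as 1/sqrt(n), so the steady state is easier to pin down for large n.
rng = np.random.default_rng(2)

def mean_std(n, reps=20000):
    # standard deviation of the mean of n i.i.d. Beta(1, 1) samples
    return rng.beta(1, 1, size=(reps, n)).mean(axis=1).std()

s10, s90 = mean_std(10), mean_std(90)
# increasing n by a factor of 9 should shrink fluctuations by sqrt(9) = 3
assert 2.7 < s10 / s90 < 3.3
```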
Discussion. In this paper we proposed a variant of a Neural ODE model which implements a set of inductive biases suitable for complex dynamics on graphs and elicits dynamical models of complex networked systems directly from the time series these systems produce. While we showed the presence of out-of-sample generalisation for a wide range of dynamical models, perhaps more importantly such an exercise reflects on generalisation capacity only at the most trivial level. Multiple out-of-distribution
FIG. 3. Learning diffusion on a fully connected n = 2 network using the regression training paradigm and a conservation law regulariser. The training sample consists of datapoints obtained from trajectories generated using x0 ∼ [0.2, 0.7] + N(0, 0.1); the testing sample: x0 ∼ [0.3, 0.8] + N(0, 0.1). a) shows an example of a training process, namely by contrasting the true (continuous lines) and learnt (dotted) trajectories of an initial value problem as predicted after the indicated training epochs, using λ = 1. b) shows the loss and the value of the regulariser over the training period in the case where the regulariser plays a part in training (λ = 1, same training as in a)) and when it does not (λ = 0). The results in b) are obtained from 10 independent runs.
tests suggest that the neural network approximation is valid only for a specific probability distribution of initial values, namely the one also used to generate the training samples. Furthermore, even if we keep the statistics intact, we observe that it is harder to achieve accurate predictions in small systems than in large-scale ones, due to the presence of fluctuations that scale as O(1/√n) for a system of size n.
Appendix A: Encoding and decoding layers

Preceding the differential model layer Ψ, one can encode the input via Ψe : x ∈ R^{n×d} → x ∈ R^{n×d_e} [10], in which case the state space is of n × d_e dimensions instead of n × d. To revert back to the original n × d space, a decoding function Ψd is used at the end. The embedding respects the initial values iff Ψe = (Ψd)^{−1}. If the encoding and decoding are obtained via linear layers without bias terms, they are represented by matrices W_e ∈ R^{d×d_e} and W_d ∈ R^{d_e×d}; after a forward pass, the initial values are thus modified if W_e W_d ≠ I, and equality holds only if the two matrices are inverses of each other. Since these matrices are not square, one can use a Moore–Penrose inverse, which is a generalisation of the traditional inverse. We want W_d to be a right inverse of W_e,
FIG. 4. Test loss (computed for the last 10 time steps of the simulation) for a regression learning task at varied network sizes. The training and testing datasets are sampled from B(1, 1). Averages are evaluated using 1000 test samples; for training, 100 trajectories were used. The figure indicates that the larger the network, the smaller the average loss and the variance.
defined as W_d = W_e^* (W_e W_e^*)^{−1}. Here W_e^* denotes the Hermitian transpose of W_e; in our case it is equivalent to the transpose, since W_e is defined over the real numbers.
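The right-inverse construction can be verified numerically; a minimal sketch with hypothetical dimensions d and d_e (random weights stand in for a trained encoder):

```python
import numpy as np

# An encoder W_e in R^{d x d_e} with d_e > d (full row rank almost surely)
# admits a right inverse W_d = W_e^T (W_e W_e^T)^{-1}, so that W_e W_d = I
# and the decoded state recovers the initial values exactly.
rng = np.random.default_rng(3)
d, de = 3, 8
We = rng.standard_normal((d, de))          # encoder weights
Wd = We.T @ np.linalg.inv(We @ We.T)       # Moore-Penrose right inverse (real case)

assert np.allclose(We @ Wd, np.eye(d))     # W_e W_d = I_d
x0 = rng.standard_normal(d)
assert np.allclose((x0 @ We) @ Wd, x0)     # encode -> decode preserves x0
```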
Appendix B: Neural network mappings

The mappings of the functions that constitute the neural network model defined in Eq. 3 are as follows (here we consider an input x ∈ R^{n×1×d}, a three-dimensional tensor, with tensor dimensions counted starting from 1):

1. ψℓ : R^{n×1×d} → R^{n×1×d}, a k = 3 mode product with W ∈ R^{d×h′}, i.e. R^{n×1×d} ×_3 R^{d×h′} ∈ R^{n×1×h′}, and C ∈ R^{h′×d}.

2. ψ_{q1}, ψ_{q2} : R^{n×1×d} → R^{n×1×h}, a k = 3 mode product with W ∈ R^{d×h}: R^{n×1×d} ×_3 R^{d×h} ∈ R^{n×1×h}, and C = I.

3. x^{⊤1} : R^{n×1×h} → R^{h×n×1}.

4. x^{⊤2} : R^{n×1×h} → R^{h×1×n}.

5. (ψ_{q1}(x)^{⊤1} ×_k ψ_{q2}(x)^{⊤2}) : R^{h×n×1} ×_3 R^{h×1×n} ∈ R^{h×n×n}.

6. Φ ⊙ (ψ_{q1}(x)^{⊤1} ×_k ψ_{q2}(x)^{⊤2}) : R^{n×n} ⊙ (R^{h×n×1} ×_3 R^{h×1×n}) ∈ R^{h×n×n}. Here the operator ⊙ denotes a standard "broadcasted" element-wise multiplication.

7. vec(·) : R^{h×n×n} → R^{n²h×1}.

8. ψ_{q3} : R^{n²h×1} → R^{n²h×1}, with W ∈ R^{1×h″} and C ∈ R^{h″×1}.

9. vec^{−1}(·) : R^{n²h×1} → R^{n×nh}.

10. ψ_Σ = ψ(Σ(·)), where we use Σ(·) as an invariant pooling layer R^{n×nh} → R^{n×1} and then apply a decoding layer ψ that maps R^{n×1} → R^{n×d}, with W ∈ R^{1×hd} and C ∈ R^{hd×d}.
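At the level of tensor shapes, the pairwise part of these mappings can be sketched as follows; this is a hypothetical, minimal illustration of the broadcasted outer product and graph masking (steps 2 to 6), not the paper's implementation:

```python
import numpy as np

# q1(x) reshaped to (h, n, 1) and q2(x) to (h, 1, n) combine by a broadcasted
# product into (h, n, n); the graph operator Phi then masks non-adjacent pairs.
rng = np.random.default_rng(4)
n, d, h = 5, 1, 4
x = rng.standard_normal((n, 1, d))

Wq1 = rng.standard_normal((d, h))
Wq2 = rng.standard_normal((d, h))
q1 = np.tanh(x @ Wq1)                  # (n, 1, h)
q2 = np.tanh(x @ Wq2)                  # (n, 1, h)

t1 = np.transpose(q1, (2, 0, 1))       # (h, n, 1)
t2 = np.transpose(q2, (2, 1, 0))       # (h, 1, n)
pair = t1 * t2                         # broadcast -> (h, n, n)

Phi = (rng.random((n, n)) < 0.5).astype(float)   # stand-in graph operator
masked = Phi * pair                    # elementwise, broadcast over h
assert masked.shape == (h, n, n)
```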
[1] B. Barzel and A.-L. Barabási, Universality in network dynamics, Nature Physics 9, 673 (2013).
[2] J. C. Sprott, Chaotic dynamics on large networks, Chaos: An Interdisciplinary Journal of Nonlinear Science 18, 023135 (2008).
[3] K. Hornik, M. Stinchcombe, and H. White, Multilayer feedforward networks are universal approximators, Neural Networks 2, 359 (1989).
[4] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning representations by back-propagating errors, Nature 323, 533 (1986).
[5] K.-i. Funahashi and Y. Nakamura, Approximation of dynamical systems by continuous time recurrent neural networks, Neural Networks 6, 801 (1993).
[6] I. E. Lagaris, A. Likas, and D. I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Transactions on Neural Networks 9, 987 (1998).
[7] T. S. Cubitt, J. Eisert, and M. M. Wolf, Extracting dynamical equations from experimental data is NP hard, Phys. Rev. Lett. 108, 120503 (2012).
[8] C. Murphy, E. Laurence, and A. Allard, Deep learning of contagion dynamics on complex networks, Nature Communications 12, 10.1038/s41467-021-24732-2 (2021).
[9] R. T. Chen, B. Amos, and M. Nickel, Learning neural event functions for ordinary differential equations, arXiv preprint arXiv:2011.03902 (2020).
[10] C. Zang and F. Wang, Neural Dynamics on Complex Networks, in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (Association for Computing Machinery, 2020) pp. 892–902.
[11] K. Srinivasan, N. Coble, J. Hamlin, T. Antonsen, E. Ott, and M. Girvan, Parallel Machine Learning for Forecasting the Dynamics of Complex Networks, Physical Review Letters 128, 10.1103/PhysRevLett.128.164101 (2022).
[12] J. Pathak, B. Hunt, M. Girvan, Z. Lu, and E. Ott, Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach, Physical Review Letters 120, 024102 (2018).
[13] T.-T. Gao and G. Yan, Autonomous inference of complex network dynamics from incomplete and noisy data, 10.1038/s43588-022-00217-0.
[14] S. Maddu, B. L. Cheeseman, C. L. Müller, and I. F. Sbalzarini, Learning physically consistent differential equation models from data using group sparsity, Physical Review E 103, 042310 (2021).
[15] L. Böttcher, N. Antulov-Fantulin, and T. Asikis, AI Pontryagin or how artificial neural networks learn to control dynamical systems, Nature Communications 13, 1 (2022).
[16] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini, The graph neural network model, IEEE Transactions on Neural Networks 20, 61 (2009).
[17] T. N. Kipf and M. Welling, Semi-Supervised Classification with Graph Convolutional Networks.
[18] K. Xu, W. Hu, J. Leskovec, and S. Jegelka, How powerful are graph neural networks?, arXiv preprint arXiv:1810.00826 (2018).
[19] K. Xu, S. Jegelka, W. Hu, and J. Leskovec, How Powerful are Graph Neural Networks?, 7th International Conference on Learning Representations, ICLR 2019, 10.48550/arxiv.1810.00826 (2018).
[20] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola, Deep sets, Advances in Neural Information Processing Systems 30 (2017).
[21] S. Basu, Strictly real fundamental theorem of algebra using polynomial interlacing, Bulletin of the Australian Mathematical Society 104, 249 (2021).
[22] T. Hastie, R. Tibshirani, and J. H. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Vol. 2 (Springer, 2009).
[23] C. Cortes, V. Vapnik, and L. Saitta, Machine Learning, Tech. Rep. (1995).
[24] L. F. Shampine, Some practical Runge–Kutta formulas, Mathematics of Computation 46, 135 (1986).
FtE4T4oBgHgl3EQfHAzF/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,467 @@
Universality of neural dynamics on complex networks
Vaiva Vasiliauskaite∗† and Nino Antulov-Fantulin†
Computational Social Science, ETH Zürich, 8092 Zürich, Switzerland
(Dated: January 13, 2023)
∗ vvasiliau@ethz.ch
† Authors contributed equally to this work.

This paper discusses the capacity of graph neural networks to learn the functional form of ordinary differential equations that govern dynamics on complex networks. We propose the necessary elements for such a problem, namely inductive biases, a neural network architecture and a learning task. Statistical learning theory suggests that the generalisation power of neural networks relies on independence and identical distribution (i.i.d.) of training and testing data. Although this assumption, together with an appropriate neural architecture and learning mechanism, is sufficient for accurate out-of-sample predictions of dynamics such as, e.g., mass-action kinetics, by studying out-of-distribution generalisation in the case of diffusion dynamics we find that the neural network model: (i) has a generalisation capacity that depends on the first moment of the initial-value data distribution; (ii) learns the non-dissipative nature of dynamics implicitly; and (iii) has an accuracy resolution limit of order O(1/√n) for a system of size n.

Introduction. Dynamics in a complex networked system is modelled as a set of n ordinary differential equations (ODEs) that describe the rate of change of a quantity x_i(t) for each node i and are coupled via an adjacency matrix A ∈ R^{n×n}. A general form of these equations is

\dot{x}_i = L(x_i(t)) + \sum_j A_{ij} Q(x_i(t), x_j(t)) = F(x_i(t), x(t), A) \qquad (1)

where L describes self-interactions, Q is a function that models pairwise interactions between neighbours and Σ is an aggregation function. With appropriate choices of the functions L, Q, Σ, this definition is a general form for models of epidemic processes, biochemical dynamics, birth–death processes, gene regulatory dynamics [1], as well as dynamics that show chaotic behaviour [2].

The initial value problem of a set of ODEs such as Eq. 1, together with an initial condition x(t_0), has a solution that satisfies

x(t) = x(t_0) + \int_{t_0}^{t} F(x(t'), A)\, dt' \qquad (2)

and describes a set of trajectories of the dynamics if the system was initialised at x(t_0).

Appropriately set up, a neural network Ψ(x; ω) has the capacity to approximate any continuous function F with compact support [3]. In practice, learning the weights is usually done via some variant of the backpropagation algorithm [4]. Notably, neural networks can also be used to approximate dynamical systems [5] and find solutions of initial and boundary value problems of differential equations [6]. A dynamical system is one in which F describes the time dependence of x in an ambient space. Notably, if F is known, the description quality of the course of dynamics is independent of a coordinate in the space. For example, Newton's laws of motion describe the trajectory of a bouncing ball regardless of its longitudinal and latitudinal position. Recovering universal dynamical principles from empirical data has been shown to belong to the NP-hard class [7].

Despite the hardness of the problem, in recent years different classes of neural networks have been used to learn different parts of dynamics from empirical data, including graph neural networks [8] and their differential [9] counterparts [10], reservoir computers [11, 12], as well as regression techniques [13, 14], or to learn control dynamics [15].

Here we discuss architectural design choices and inductive biases that are crucial for a neural network model that approximates dynamics evolving on complex networks. We then study the model's generalisation capacity using simple models of deterministic dynamics [1]. Lastly, we discuss our work in the context of learning principles that govern dynamics in complex systems, from the perspective of generalisation to unseen initial conditions.
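The general form (1) and its integration (2) can be illustrated with a minimal numerical sketch; the particular choices L(x_i) = -x_i and Q(x_i, x_j) = x_j - x_i below are hypothetical examples, and forward Euler stands in for a proper ODE solver:

```python
import numpy as np

# A 3-node star graph as the coupling adjacency matrix A.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)

def F(x):
    # F(x)_i = L(x_i) + sum_j A_ij Q(x_i, x_j), with self-decay L and
    # diffusive pairwise coupling Q (illustrative choices only).
    L_term = -x
    Q_term = (A * (x[None, :] - x[:, None])).sum(axis=1)   # sum_j A_ij (x_j - x_i)
    return L_term + Q_term

# Eq. (2) as an explicit Euler discretisation of the initial value problem.
x = np.array([1.0, 0.5, 0.0])
dt = 0.01
for _ in range(2000):
    x = x + dt * F(x)
assert np.allclose(x, 0.0, atol=1e-3)  # with decay, all nodes relax to zero
```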
Inductive biases for dynamics on complex networks. There are several important inductive biases and assumptions worth noting about complex network dynamics and its neural approximations.

1. Network structure: There exists a known static network represented as an adjacency matrix A. Therefore it is reasonable to take a GNN [16] as the candidate for Ψ. A single-layer graph convolution network can be defined as

\Psi_{gnn}(x) = \left( \sigma[\Phi x W + b] \right) W_{agg} \qquad (3)

where x ∈ R^{n×d} is an input, Φ ∈ R^{n×n} is a graph operator (e.g. Φ = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} [17]), W ∈ R^{d×h}, b ∈ R^{n×1}, W_agg ∈ R^{h×d} are trainable parameters and σ is a nonlinear function. Different versions of GNNs, with different expressive power with respect to the Weisfeiler-Lehman isomorphism test, are described in [18].

2. Self-interaction: The model includes a self-interaction part that approximates L(·).

arXiv:2301.04900v1 [cond-mat.stat-mech] 12 Jan 2023

3. Neighbour interaction: The model includes a neighbour-interaction part that approximates Q(·, ·). Note that a single-layer GNN, such as a convolutional graph neural network, has no mixed quadratic terms x_i x_j and therefore does not simply satisfy such a condition. Although theoretically it should still be possible to approximate nonlinear quadratic terms with a single-layer neural network of arbitrary width, in practice it can be challenging and may require either a very large number of hidden neurons or an exotic learning mechanism that goes beyond standard gradient descent. Alternatively, one can improve the expressivity of the model by increasing its depth, i.e. using multi-layer GNNs or message-passing neural networks [19] to represent Ψ(x; ω). Here ω includes graph operator terms Φ^k, k ∈ {1, 2, ..., K}, where K is the depth of the neural network.

4. Spatiotemporal locality: The dynamical process that follows Eq. 1 must be local, that is, the function Q(·, ·) encodes interactions between neighbours. However, including terms Φ^k in a multi-layer graph neural network allows for k-hop interactions via length-k walks in the network at a timescale smaller than the infinitesimal dt, thereby subdividing dt into k intervals and breaking the assumption of temporal locality.

5. Aggregation of neighbour interactions: The aggregation can itself be non-linear.

6. Initial value condition: Initial values are preserved during training: Ψ(x_0) → x_0. If the neural network straightforwardly approximates the RHS of Eq. 2, then the encoding and decoding layers must be pseudo-inverses of each other; see App. A.

7. Conservation/dissipation laws: If the system is closed, it does not exchange energy or mass with the environment; therefore a conservation law holds, namely

\sum_i \frac{dx_i(t)}{dt} = C \quad \forall t. \qquad (4)

A constraint on a neural network to satisfy conservation laws can be imposed via a regularisation term in the loss function,

R(D) = \frac{1}{|D|} \sum_{x \in D} \left| F(x)\mathbf{1} - \Psi(x)\mathbf{1} \right|,

which penalises model weights that produce predictions that do not respect the conservation law Eq. 4. Here D is the dataset over which the loss is calculated. The strength of the regulariser term can be modulated by multiplying R(D) by a non-negative real number λ.
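The regulariser R(D) can be sketched in a few lines; the ground-truth F (diffusion, which conserves the total) and the deliberately leaky stand-in Ψ below are hypothetical illustrations, not the paper's trained model:

```python
import numpy as np

def F(x):
    # Conservative ground truth: diffusion on a complete 3-node graph,
    # for which sum_i dx_i/dt = 0 at all times.
    A = np.ones((3, 3)) - np.eye(3)
    Lap = np.diag(A.sum(1)) - A
    return -Lap @ x

def Psi(x):
    # An imperfect model that leaks a fixed amount of "mass" per node.
    return F(x) - 0.05

def R(D):
    # R(D) = (1/|D|) * sum_{x in D} | F(x).1 - Psi(x).1 |
    return np.mean([abs(F(x).sum() - Psi(x).sum()) for x in D])

D = [np.random.default_rng(k).random(3) for k in range(10)]
assert np.isclose(R(D), 0.15)   # each sample leaks 3 * 0.05 in total rate
```

Scaling this value by λ and adding it to the data loss recovers the modulated penalty described above.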
75
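The regularisation term above can be sketched numerically. A minimal illustration, assuming the batch is stored as two arrays holding F(x) and Ψ(x) per sample (the function name and array layout are ours, not the paper's):

```python
import numpy as np

def conservation_penalty(f_true, psi_pred):
    """R(D) = (1/|D|) * sum over x in D of |F(x)·1 - Psi(x)·1|.

    f_true, psi_pred: arrays of shape (|D|, n) with the true RHS F(x) and
    the model output Psi(x) for every sample x.  Multiplying a row by the
    all-ones vector sums the per-node rates, i.e. the quantity that the
    conservation law Eq. 4 keeps constant for a closed system.
    """
    total_true = f_true.sum(axis=1)    # F(x)·1 for each sample
    total_pred = psi_pred.sum(axis=1)  # Psi(x)·1 for each sample
    return float(np.abs(total_true - total_pred).mean())
```

The penalty would then enter the training loss as `loss + lam * conservation_penalty(...)`, with the non-negative λ modulating its strength.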
Architecture. Given the inductive biases for dynamics on networks, we propose a neural network model of the following form:

\dot{x} = \psi_\ell(x) + \psi_\otimes(x), \qquad (5)
\psi_\otimes(x) = \mathrm{vec}^{-1}\big[\psi_{q_3}\big(\mathrm{vec}\big(\Phi \odot (\psi_{q_1}(x)^{\top_1} \times_k \psi_{q_2}(x)^{\top_2})\big)\big)\big],

where the mappings ψ are single-hidden-layer neural networks given by Eq. 3. The mappings of local interaction are summarised in App. B. The design choices of Eq. 5 comply with the inductive biases stated earlier. To this end, we vectorise the input to the function ψ_{q_3}(·); this function can approximate any invariant pooling of a set [20] or a multiset [18]. Notably, we also assume that Q(·, ·) is factorisable: it can be approximated by Chebyshev polynomials and, according to the strictly real fundamental theorem of algebra [21], a polynomial function can be factorised into two factors. Alternatively, one can use deep sets [20] as arguments to approximate Q(·, ·).

In order to guarantee the local existence and uniqueness of the solution to the initial value problem, by the Picard–Lindelöf theorem the neural network Ψ needs to be Lipschitz continuous. To enforce Lipschitz continuity of Ψ, we use 1-Lipschitz activation functions such as ReLU, sigmoid, softmax, or tanh.
Learning task. We formulate two distinct statistical learning settings that relate to an increasing strength of generality in the approximation of a dynamical system.

1. Regression task to approximate F by Ψ: an appropriate "proto data set" here is

D = \{(x(t)_\alpha, y(t)_\alpha)\}, \quad \text{s.t. } x(t)_\alpha \in \mathbb{R}^n,\ y(t)_\alpha \in \mathbb{R}^n,\ x(0)_\alpha \sim f_{x(0)}(x),\ t \in [0, T].

The labels are defined as y(t)_\alpha = F(x(t)_\alpha), where α denotes the α-th initial condition x(0)_\alpha sampled from a predefined distribution f_{x(0)}(x); all other points x(t)_\alpha are obtained following Eq. 2. The functional mapping being learnt is \hat{F}: \mathbb{R}^n \to \mathbb{R}^n, obtained by minimising the loss L between the true labels y and the labels f(x) produced by the current model:

\hat{F} = \arg\min_{f: \mathbb{R}^n \to \mathbb{R}^n} \mathbb{E}_{P(x,y)}\, L(f(x), y).

Here E is the expectation operator and P(x, y) is the data sampling distribution. As it stands, samples from the "proto data set" are not independent: trajectories obtained from the same initial condition are non-i.i.d. I.i.d. sampling is, however, compulsory for the uniform law of large numbers, which together with capacity control ensures generalisation from the train to the test set [22, 23]. To ensure statistical independence of samples, we create finite train and test sets of sizes m_1, m_2 by using a specific distribution P over the "proto data set", D_train ∪ D_test ∼ P(x, y). Specifically, we randomly delegate (x(t)_\alpha, y(t)_\alpha) to either D_train or D_test, thereby ensuring an i.i.d. condition by dropping information on the initial conditions and time.
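The random delegation of samples can be sketched as follows. This is a minimal version assuming the trajectories and labels are stored as dense arrays indexed by (α, t); the function name and shapes are our choice:

```python
import numpy as np

def split_proto_dataset(x_traj, y_traj, test_fraction=0.1, seed=0):
    """Split trajectory samples i.i.d. into D_train and D_test.

    x_traj, y_traj: arrays of shape (num_alpha, num_steps, n) holding
    x(t)_alpha and y(t)_alpha = F(x(t)_alpha).  Flattening over (alpha, t)
    and shuffling drops the information about initial condition and time,
    which removes the dependence between samples of one trajectory.
    """
    n = x_traj.shape[-1]
    X, Y = x_traj.reshape(-1, n), y_traj.reshape(-1, n)
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(X))
    m2 = int(len(X) * test_fraction)           # size m2 of the test set
    test_idx, train_idx = perm[:m2], perm[m2:]
    return (X[train_idx], Y[train_idx]), (X[test_idx], Y[test_idx])
```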
Dynamics | L | Q | L^train_reg | L^test_reg | ≈reg | L^train_traj | L^test_traj | ≈traj
Heat^a | – | B(x_j − x_i) | 2.03 ± 1.03 | 2.14 ± 1.08 | ✓ | 1.39 ± 0.59 | 1.47 ± 0.63 | ✓
MAK^b | F − B x_i^b | R x_j | 0.41 ± 1.08 | 0.44 ± 1.14 | ✓ | 1.48 ± 0.05 | 1.55 ± 0.04 | ×
PD^c | −B x_i^b | R x_j^a | 4.68 ± 12.82 | 4.72 ± 12.89 | ✓ | 3.03 ± 0.03 | 3.04 ± 0.03 | ✓
MM^d | −B x_i | R x_j^h / (1 + x_j^h) | 7.68 ± 5.36 | 7.83 ± 5.47 | ✓ | 5.93 ± 0.12 | 5.94 ± 0.14 | ✓
SIS^e | −B x_i | R (1 − x_i) x_j | 1.16 ± 3.62 | 1.31 ± 4.07 | ✓ | 1.54 ± 0.01 | 1.64 ± 0.02 | ×

^a B = 0.05. ^b B = 0.1, R = 1, F = 0.5. ^c B = 2, R = 0.3, a = 1.5, b = 3. ^d B = 4, R = 0.5, h = 3. ^e B = 5, R = 0.5.

TABLE I. Generalisation of the neural network model Eq. 5 trained on dynamics from [1] in the regression task setting and in the trajectory learning setting. Reported loss values are multiplied by a factor 10⁻². In the columns denoted "≈" we indicate for which dynamics the train loss is approximately similar ("✓") to, or different ("×") from, the test loss.
2. Trajectory learning setting that approximates x(t): here the train set contains m_1 initial conditions x(0)_\alpha as inputs, while each label corresponds to the trajectory y_\alpha = \{x(t)_\alpha\}, t = 0, Δt, 2Δt, ..., kΔt = T, realised from the initial condition x(0)_\alpha:

D_train = \{(x(0)_\alpha, y_\alpha)\}, \quad \text{s.t. } x(0)_\alpha \in \mathbb{R}^n,\ y_\alpha \in \mathbb{R}^{kn},\ x(0)_\alpha \sim f_{x(0)}(x),\ \alpha \in [1, m_1],\ y_\alpha = \{x(0)_\alpha, x(\Delta t)_\alpha, \ldots, x(k\Delta t)_\alpha\},

and the test set D_test is constructed analogously from m_2 initial conditions sampled from the same distribution x(0)_\alpha \sim f_{x(0)}(x). The mapping learnt here is of the form \hat{F}: \mathbb{R}^n \to \mathbb{R}^{kn} and is realised by solving the initial value problem Eq. 2 with the neural network Ψ in place of F.
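In code, this mapping amounts to integrating the learned vector field from each initial condition. A sketch assuming a SciPy-style ODE solver and a callable `psi` standing in for the trained network (both assumptions ours):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rollout(psi, x0, dt=0.01, T=1.5):
    """Realise x(0) -> {x(0), x(dt), ..., x(T)} by solving dx/dt = psi(x)
    with an explicit Runge-Kutta 5(4) scheme, as in the experiments."""
    t_eval = np.arange(0.0, T + dt / 2, dt)
    sol = solve_ivp(lambda t, x: psi(x), (0.0, T), np.atleast_1d(x0),
                    method="RK45", t_eval=t_eval)
    return sol.t, sol.y.T  # trajectory with shape (k + 1, n)
```

With Δt = 0.01 and T = 1.5 this yields k + 1 = 151 states per trajectory, which become one label y_α.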
Experiments and Results. We consider models with h′ = 6, h = 8, h′′ = 5, h_d = 3, trained for 1000 epochs using the Adam optimiser with a learning rate of 10⁻² and weight decay of 10⁻³. All activations are ReLU. Unless otherwise stated, the initial values in both the train set and the test set are sampled from B(a = 5, b = 5). For numerical integration, an explicit Runge–Kutta method of order 5(4) is used [24]. The training loss function is the average L1 norm. For the regression task, the loss is

L^{train}_{reg} = \frac{1}{N_{reg}} \sum_{x,y \in D_{train}} \big( \|f(x) - y\|_1 + \lambda R(x) \big),

where N_reg = |D_train|(x_max − x_min). For the trajectory learning task, the loss is defined as

L^{train}_{traj} = \frac{1}{N_{traj}} \sum_{x(0),y \in D_{train}} \sum_{k=0}^{T/\Delta t} \big( \|x(k\Delta t) - \hat{x}(k\Delta t)\|_1 + \lambda R(x(k\Delta t)) \big). \qquad (6)

Here the normalisation constant is N_traj = |D_train| n T (x_max − x_min)/Δt.
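As a concrete sketch (with λ = 0 and array names of our choosing), the trajectory loss of Eq. 6 can be computed as:

```python
import numpy as np

def trajectory_loss_l1(pred, true, dt=0.01, T=1.5, x_min=0.0, x_max=1.0):
    """Average L1 trajectory loss, Eq. 6 with lambda = 0.

    pred, true: arrays of shape (num_traj, num_steps, n) holding predicted
    and reference trajectories sampled at t = 0, dt, ..., T.  The
    normalisation follows the text:
    N_traj = |D_train| * n * T/dt * (x_max - x_min).
    """
    num_traj, _, n = true.shape
    N = num_traj * n * (T / dt) * (x_max - x_min)
    return float(np.abs(pred - true).sum() / N)
```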
We set λ = 0, so the regularisation terms are nil for the first part of the analysis. The training sets include samples from 10³ trajectories, the testing sets from 10² trajectories, and the batch size is 10. The parameters for numerical integration are Δt = 0.01, T = 1.5. In all cases, a graph was sampled from the Erdős–Rényi ensemble with p = 0.5. Tab. I shows that the trained neural network model Eq. 5 approximates the true dynamics well and generalises well to unseen initial values, provided f_{x(0)}(x) is used for generating both the training set and the test set.
Generalisation. Crucially, the universality of the neural approximation exemplified in Tab. I holds only at the lowest attainable level, obtained by putting strong constraints on the test set (in accordance with statistical learning theory): the two sets must be statistically equivalent. If the distribution of initial values is irrelevant for the steady-state solution, the neural model also inadvertently approximates the dynamical system universally. However, it seems reasonable to ask whether a neural network can do better. In Tab. II we propose three tiers of universality of the approximation F ≈ Ψ in terms of the statistical properties of training and testing samples. In this context, statistical learning theory concerns only the lowest level of generality.
Level | Relation between f and g
Bottom | f_{X_0} ≡ g_{X_0} or f_{X_0, X_∞} = f_{X_0} f_{X_∞}
Mid | f_{X_0} ≢ g_{X_0}, sup f_{X_0} = sup g_{X_0}
Top | f_{X_0} ≢ g_{X_0}, sup f_{X_0} ≠ sup g_{X_0}

TABLE II. Generalisation levels for a neural approximation F ≈ Ψ, encompassed in the model's ability to extrapolate predictions to data that was not used during training. Probability density functions related to training data are denoted by "f"; those related to testing data are denoted by "g".
More sophisticated, mid- and top-level generalisations would enable faithful prediction in cases where the constraints on the statistical properties of the data are relaxed, for example, where f_{X_0} is not the same as g_{X_0}.
Diffusion. A dynamical system whose fate and course of action depend on the distribution of initial values enables us to study the limits of generalisation of a neural network. The diffusion equation on a graph is a good example due to its simplicity and known analytical solution of the form

x(t) = \sum_i a_i(0)\, e^{-B \lambda_i t}\, v_i, \qquad a_i(0) = x(0)^\top v_i, \qquad (7)

where λ_i, v_i are the i-th eigenvalue and eigenvector of the graph Laplacian, and the steady-state solution is given by \lim_{t \to \infty} x_i(t) = \frac{1}{n} \sum_j x_j(0) \ \forall i. Perturbing the initial value x(0) by δ ∼ f_δ, such that x^δ_i(0) = x_i(0) + δ, gives a difference in the steady-state solutions of ⟨x^δ_i(0)⟩_i − ⟨x_i(0)⟩_i = γ.
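Eq. 7 can be checked directly with an eigendecomposition of the graph Laplacian. A small self-contained sketch (function name and sign convention dx/dt = −B L x are our assumptions):

```python
import numpy as np

def diffusion_solution(L, x0, B, t):
    """Spectral solution of graph diffusion dx/dt = -B L x (Eq. 7):
    x(t) = sum_i a_i(0) exp(-B lambda_i t) v_i,  a_i(0) = x(0)^T v_i,
    where (lambda_i, v_i) are eigenpairs of the graph Laplacian L."""
    lam, V = np.linalg.eigh(L)   # L is symmetric for an undirected graph
    a0 = V.T @ x0                # coefficients a_i(0) = x(0)^T v_i
    return V @ (a0 * np.exp(-B * lam * t))
```

As t → ∞ only the λ = 0 mode survives, which reproduces the stated steady state x_i(∞) = (1/n) Σ_j x_j(0).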
Fig. 1 shows how the loss accumulates over the integration time t for the neural network model Ψ, for trajectories in the train and test sets. In addition, we consider a perturbation (NN,pert) where the initial value is sampled from a different distribution, namely g_{X_0}(x) = B(6, 5), while the neural network was trained using f_{X_0}(x) = B(5, 5). The figure shows that the neural network prediction is reasonable under the i.i.d. sampling condition for initial conditions in the train and test sets.
[Figure 1: curves of L_traj(t) versus integration time t for NN,train, NN,test, NN,pert, and the numerical solution.]

FIG. 1. Node-average loss between the analytical solution and: 1) the numerical solution (Numerical); 2) the neural network solution for a subset of initial conditions in the training set (NN,train) as well as a subset of the testing set (NN,test). The original x(0) ∼ B(5, 5), whereas the perturbed (NN,pert) initial values x′(0) ∼ B(6, 5). The loss is computed for the trajectory learning task using N_traj = 100 trajectories in each case via L_traj(t) = \frac{1}{N_{traj}} \sum_{x(0),y \in D} \|x(t) - \hat{x}(t)\|_1. The errors show one standard deviation.
Fig. 2 follows the same analysis and shows that, by varying the parameters of the beta distribution B(a, b), the loss in the steady state (averaged over the last 10 steps of the simulation) is proportional to the difference between the expectation values of the beta distributions used in training and in testing to generate the initial values. All in all, these results show that the neural network approximation of the differential form is specific to the statistical properties of the training set.
So far, the conservation law (4) and the effect of the regulariser have not been considered. We study them in Fig. 3 for a small case with a graph composed of N = 2 nodes. This figure presents two key findings: Fig. 3a) clearly shows that Ψ is biased towards the training set, whereas Fig. 3b) makes clear that Ψ has the property of implicit dissipative (conservation) regularisation. Even with no explicit regularisation of the dissipative term, the neural network optimises towards a less dissipative regime. This is of particular importance, since some systems in Tab. I are non-dissipative and some are dissipative.
FIG. 2. Generalisation of Ψ to unseen initial conditions. The neural network was trained using initial values sampled from x0 ∼ B(5, 5) until it achieved the loss marked 'train'. Its prediction capacity was then tested on dynamics with initial conditions x0 ∼ B(a, b = a) (red circles) as well as x0 ∼ B(a, b = 5) (blue triangles). The dashed orange line is the function |0.5 − a/(a + 5)|. The loss is computed for the trajectory learning task using Ntraj = 100 trajectories in each case using (6), omitting the term xmax − xmin in the normalisation and considering the last 10 timesteps. The errors show one standard deviation across trajectories.
Next, we turn our attention to analysing the out-of-sample loss for a system of n coupled differential equations (coupled according to an Erdős–Rényi model) with diffusion dynamics. Notably, the steady-state solution is governed by the average value ⟨x0⟩, and since we have n nodes in our system, the fluctuations of this value scale as ∝ 1/√n. This implies that it is easier to accurately predict dynamics with a larger number of differential equations. In Fig. 4, we show that, indeed, the test loss is inversely proportional to the system's size.
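The 1/√n scaling of the fluctuations of ⟨x0⟩ is easy to verify numerically; this Monte Carlo sketch (sample sizes and seed are arbitrary choices) estimates the standard deviation of the node average for uniform, i.e. B(1, 1), initial values.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_fluctuation(n, reps=20_000):
    """Standard deviation of the node average <x0> across `reps`
    independent systems with n initial values x0 ~ B(1, 1)."""
    samples = rng.beta(1.0, 1.0, size=(reps, n))
    return samples.mean(axis=1).std()
```

Quadrupling n should roughly halve the fluctuation, consistent with the larger systems being easier to predict.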
Discussion. In this paper, we proposed a variant of a Neural ODE model which implements a set of inductive biases suitable for complex dynamics on graphs and elicits dynamical models of complex networked systems directly from the time series these systems produce. While we showed the presence of out-of-sample generalisation for a wide range of dynamical models, perhaps more importantly, such an exercise reflects on generalisation capacity only at the most trivial level.
FIG. 3. Learning diffusion on a fully connected n = 2 network using the regression training paradigm and a conservation law regulariser. The training sample consists of datapoints obtained from trajectories generated using x0 ∼ [0.2, 0.7] + N(0, 0.1); the testing sample: x0 ∼ [0.3, 0.8] + N(0, 0.1). a) shows an example of a training process, contrasting the true (continuous lines) and learnt (dotted) trajectories of an initial value problem as predicted after the indicated training epochs (0, 1000, 2000), using λ = 1. b) shows the loss and the value of the regulariser over the training period in the case where the regulariser plays a part in training (λ = 1, same training as in a)) and when it does not (λ = 0). The results in b) are obtained from 10 independent runs.

Multiple out-of-distribution tests suggest that the neural network approximation is valid only for the specific probability distribution of initial values that was also used to generate the training samples. Furthermore, even if we keep the statistics intact, we observe that it is harder to achieve accurate predictions in small systems than in large-scale ones, due to the presence of fluctuations that scale as O(1/√n) for a system of size n.
Appendix A: Encoding and decoding layers

Preceding the differential model layer Ψ, one can encode the input via Ψe : x ∈ R^(n×d) → x ∈ R^(n×de) [10], in which case the state space has n × de dimensions instead of n × d. To revert to the original n × d space, a decoding function Ψd is applied at the end. The embedding respects the initial values iff Ψe = (Ψd)^(−1). If the encoding and decoding are obtained via linear layers without bias terms, they are represented by matrices We ∈ R^(d×de) and Wd ∈ R^(de×d), and after a forward pass the initial values are modified unless WeWd = I, i.e. unless the two matrices are inverses of each other. Since these matrices are not square, one can use a Moore–Penrose inverse, which is a generalisation of the traditional inverse.
FIG. 4. Test loss (computed for the last 10 time steps of the simulation) for a regression learning task at varied network sizes. The training and testing datasets are sampled from B(1, 1). Averages are evaluated using 1000 test samples; for training, 100 trajectories were used. The figure indicates that the larger the network, the smaller the average loss and its variance.

We want Wd to be a right inverse of We, defined as Wd = We*(We We*)^(−1). Here We* denotes the Hermitian transpose of We; in our case it is equivalent to the ordinary transpose, since We is defined over the real numbers.
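This right-inverse construction is straightforward to check numerically; a minimal sketch, assuming a generic (full row rank) real encoder matrix with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
d, de = 3, 8                              # embed d features into de > d dimensions

W_e = rng.standard_normal((d, de))        # encoder weights (full row rank, generically)
# Moore-Penrose right inverse: W_e W_e^T is d x d and invertible.
W_d = W_e.T @ np.linalg.inv(W_e @ W_e.T)  # shape (de, d)

# Encoding followed by decoding leaves the states untouched: x W_e W_d = x.
x = rng.standard_normal((5, d))           # 5 nodes with d features each
assert np.allclose(x @ W_e @ W_d, x)
```

Note that W_d W_e ≠ I in general: only the composition encode-then-decode is the identity, which is exactly what preserving the initial values requires.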
Appendix B: Neural network mappings

The mappings of the functions that constitute the neural network model defined in Eq. 3 are as follows (here we consider input x ∈ R^(n×1×d), a three-dimensional tensor, and tensor dimensions are counted starting from 1):

1. ψℓ : R^(n×1×d) → R^(n×1×d), k = 3 mode product with W ∈ R^(d×h′), i.e. R^(n×1×d) ×3 R^(d×h′) ∈ R^(n×1×h′), and C ∈ R^(h′×d).

2. ψq1, ψq2 : R^(n×1×d) → R^(n×1×h), k = 3 mode product with W ∈ R^(d×h): R^(n×1×d) ×3 R^(d×h) ∈ R^(n×1×h), and C = I.

3. x^⊤1 : R^(n×1×h) → R^(h×n×1).

4. x^⊤2 : R^(n×1×h) → R^(h×1×n).

5. (ψq1(x)^⊤1 ×k ψq2(x)^⊤2) : R^(h×n×1) ×3 R^(h×1×n) ∈ R^(h×n×n).

6. Φ ⊙ (ψq1(x)^⊤1 ×k ψq2(x)^⊤2) : R^(n×n) ⊙ (R^(h×n×1) ×3 R^(h×1×n)) ∈ R^(h×n×n). Here the operator ⊙ denotes a standard "broadcasted" element-wise multiplication.

7. vec(·) : R^(h×n×n) → R^(n²h×1).

8. ψq3 : R^(n²h×1) → R^(n²h×1), with W ∈ R^(1×h″) and C ∈ R^(h″×1).

9. vec^(−1)(·) : R^(n²h×1) → R^(n×nh).

10. ψΣ = ψ(Σ(·)), where we use Σ(·) as an invariant pooling layer R^(n×nh) → R^(n×1) and then apply a decoding layer ψ that maps R^(n×1) → R^(n×d), with W ∈ R^(1×hd) and C ∈ R^(hd×d).
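The shape bookkeeping in steps 2–7 can be sanity-checked with random tensors; this is a shapes-only numpy sketch (the dimensions n, d, h are arbitrary, and the activations and C matrices of the real model are omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, h = 4, 3, 5

x = rng.standard_normal((n, 1, d))        # node states as an (n, 1, d) tensor
W1 = rng.standard_normal((d, h))          # weights of psi_q1 (C = I)
W2 = rng.standard_normal((d, h))          # weights of psi_q2 (C = I)
Phi = rng.integers(0, 2, size=(n, n))     # adjacency-like (n, n) mask

q1 = x @ W1                               # mode-3 product: (n, 1, h)
q2 = x @ W2                               # (n, 1, h)

t1 = np.transpose(q1, (2, 0, 1))          # step 3, x^T1: (h, n, 1)
t2 = np.transpose(q2, (2, 1, 0))          # step 4, x^T2: (h, 1, n)

pair = t1 @ t2                            # step 5, batched outer product: (h, n, n)
masked = Phi * pair                       # step 6, broadcasted Hadamard with (n, n)

assert masked.shape == (h, n, n)
flat = masked.reshape(-1, 1)              # step 7, vec: (n^2 h, 1)
assert flat.shape == (n * n * h, 1)
```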
[1] B. Barzel and A.-L. Barabási, Universality in network dynamics, Nature Physics 9, 673 (2013).
[2] J. C. Sprott, Chaotic dynamics on large networks, Chaos: An Interdisciplinary Journal of Nonlinear Science 18, 023135 (2008).
[3] K. Hornik, M. Stinchcombe, and H. White, Multilayer feedforward networks are universal approximators, Neural Networks 2, 359 (1989).
[4] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Learning representations by back-propagating errors, Nature 323, 533 (1986).
[5] K.-i. Funahashi and Y. Nakamura, Approximation of dynamical systems by continuous time recurrent neural networks, Neural Networks 6, 801 (1993).
[6] I. E. Lagaris, A. Likas, and D. I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Transactions on Neural Networks 9, 987 (1998).
[7] T. S. Cubitt, J. Eisert, and M. M. Wolf, Extracting dynamical equations from experimental data is NP-hard, Phys. Rev. Lett. 108, 120503 (2012).
[8] C. Murphy, E. Laurence, and A. Allard, Deep learning of contagion dynamics on complex networks, Nature Communications 12, 10.1038/s41467-021-24732-2 (2021).
[9] R. T. Chen, B. Amos, and M. Nickel, Learning neural event functions for ordinary differential equations, arXiv preprint arXiv:2011.03902 (2020).
[10] C. Zang and F. Wang, Neural dynamics on complex networks, in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (Association for Computing Machinery, 2020), pp. 892–902.
[11] K. Srinivasan, N. Coble, J. Hamlin, T. Antonsen, E. Ott, and M. Girvan, Parallel machine learning for forecasting the dynamics of complex networks, Physical Review Letters 128, 164101 (2022).
[12] J. Pathak, B. Hunt, M. Girvan, Z. Lu, and E. Ott, Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach, Physical Review Letters 120, 024102 (2018).
[13] T.-T. Gao and G. Yan, Autonomous inference of complex network dynamics from incomplete and noisy data, 10.1038/s43588-022-00217-0.
[14] S. Maddu, B. L. Cheeseman, C. L. Müller, and I. F. Sbalzarini, Learning physically consistent differential equation models from data using group sparsity, Physical Review E 103, 042310 (2021).
[15] L. Böttcher, N. Antulov-Fantulin, and T. Asikis, AI Pontryagin or how artificial neural networks learn to control dynamical systems, Nature Communications 13, 1 (2022).
[16] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini, The graph neural network model, IEEE Transactions on Neural Networks 20, 61 (2009).
[17] T.
421
+ page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
422
+ page_content=' Kipf and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
423
+ page_content=' Welling, Semi-Supervised Classifica- tion with Graph Convolutional Networks, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
424
+ page_content=' [18] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
425
+ page_content=' Xu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
426
+ page_content=' Hu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
427
+ page_content=' Leskovec, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
428
+ page_content=' Jegelka, How powerful are graph neural networks?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
429
+ page_content=', arXiv preprint arXiv:1810.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
430
+ page_content='00826 (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
431
+ page_content=' [19] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
432
+ page_content=' Xu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
433
+ page_content=' Jegelka, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
434
+ page_content=' Hu, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
435
+ page_content=' Leskovec, How Pow- erful are Graph Neural Networks?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
436
+ page_content=', 7th International Conference on Learning Representations, ICLR 2019 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
437
+ page_content='48550/arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
438
+ page_content='1810.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
439
+ page_content='00826 (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
440
+ page_content=' [20] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
441
+ page_content=' Zaheer, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
442
+ page_content=' Kottur, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
443
+ page_content=' Ravanbakhsh, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
444
+ page_content=' Poczos, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
445
+ page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
446
+ page_content=' Salakhutdinov, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
447
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
448
+ page_content=' Smola, Deep sets, Advances in neural information processing systems 30 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
449
+ page_content=' [21] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
450
+ page_content=' Basu, Strictly real fundamental theorem of algebra using polynomial interlacing, Bulletin of the Australian Mathematical Society 104, 249 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
451
+ page_content=' [22] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
452
+ page_content=' Hastie, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
453
+ page_content=' Tibshirani, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
454
+ page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
455
+ page_content=' Friedman, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
456
+ page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
457
+ page_content=' Fried- man, The elements of statistical learning: data mining, inference, and prediction, Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
458
+ page_content=' 2 (Springer, 2009).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
459
+ page_content=' [23] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
460
+ page_content=' Cortes, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
461
+ page_content=' Vapnik, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
462
+ page_content=' Saitta, Machine Leaming, Tech.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
463
+ page_content=' Rep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
464
+ page_content=' (1995).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
465
+ page_content=' [24] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
466
+ page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
467
+ page_content=' Shampine, Some practical runge-kutta formulas, Mathematics of Computation 46, 135 (1986).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/FtE4T4oBgHgl3EQfHAzF/content/2301.04900v1.pdf'}
KNAzT4oBgHgl3EQfVfxu/content/tmp_files/2301.01285v1.pdf.txt ADDED
@@ -0,0 +1,606 @@
1
+ Transition between metastable equilibria:
2
+ applications to binary-choice games
3
+ A. Antonov(1)∗, A. Leonidov(1,2), and A. Semenov(1,3)
4
+ (1) P.N. Lebedev Physical Institute, Moscow, Russia
5
+ (2) Moscow Institute of Physics and Technology, Dolgoprudny, Russia
6
+ (3) Higher School of Economics, Moscow, Russia
7
+ Abstract
8
+ Transitions between metastable equilibria in the low-temperature phase
+ of the dynamical Ising game with activity spillover are studied in the infinite
+ time limit. It is shown that the exponential enhancement due to activity
+ spillover previously found for finite-time transitions in [1] is absent in the
+ infinite time limit. An analytical description of the infinite-time trajectory is
+ developed and compared with results of exact numerical analysis.
15
+ ∗antonov@lpi.ru
16
+ 1
17
+ arXiv:2301.01285v1 [cond-mat.stat-mech] 3 Jan 2023
18
+
19
+ 1. Introduction
20
+ Studies of noisy binary choice games are of special interest because of the
21
+ existence of close parallels to statistical physics of spin systems, in particular to
22
+ static and dynamic properties of phase transitions in them [2, 3, 4]. These par-
23
+ allels are particularly intriguing because of the fundamentally different origins
24
+ of equilibria in game theory and statistical physics: in game theory equilibration
25
+ is a result of balancing individual interests while in statistical physics equilibra-
+ tion is a search for the global minimum of free energy. For the noisy binary choice
27
+ problem on complete graphs it is long known, see [2] and references therein,
28
+ that for a special choice of noise game-theoretic equilibria are characterised by
29
+ the same mean-field Curie-Weiss equation as that describing phase transitions
30
+ in magnets, see e.g. [4]. The properties of static and dynamic equilibria in
31
+ noisy binary choice games were studied in [5, 6, 7] for arbitrary noise, and com-
32
+ plete and random graph topologies. It was established in particular that static
33
+ game-theoretic equilibria in noisy binary choice games on graphs correspond to
34
+ the so-called quantal response/expectation equilibria [8].
35
+ The dynamics of games can, however, be fundamentally different from con-
36
+ ventional spin dynamics due to a variety of possible mechanisms. One of these
37
+ is a possibility of activity spillover (self-excitation) that was intensively studied
38
+ for so-called Hawkes processes [9] with applications to finance [10, 11], earth-
39
+ quakes [12] and other subjects, see the recent review in [13]. A master equation
40
+ formalism for such processes was developed in [14, 15]. The effects of an ac-
41
+ tivity spillover different from the Hawkes self-excitation mechanism for a noisy
42
+ binary choice game (Ising game) on complete graphs was studied in [1]. The
43
+ main focus of [1] was in studying transitions between metastable equilibria in
44
+ the low-temperature phase taking place at finite time. It was observed that
45
+ activity spillover leads to an exponential acceleration of such transitions. The
46
+ present paper complements the analysis of [1] by studying transitions between
47
+ metastable equilibria in the limit of infinite time. The importance of studying
48
+ this limit is, first, in establishing a link with the rich literature on the Kramers
+ rate [16] and, second, in that in this limit the exponential enhancement is absent
+ and an analysis of the pre-exponential contribution is necessary. In analysing this
51
+ problem we develop an analytical description of the infinite-time trajectory and
52
+ suggest an analytical formula for the transition rate that is compared with the
53
+ results of exact numerical simulations.
54
+ 2. Model
55
+ We consider a dynamical noisy binary choice game of N agents on a complete
56
+ graph topology. Each agent i has two possible strategies si = ±1 so the system
57
+ is fully described by the vector st = (s1, . . . , sN)t at a given time t. The temporal
58
+ evolution of the strategies configuration st → st+δt within a small time interval
59
+ δt is assumed to be driven by a strategy flip si → −si of some agent i with the
60
+ flip probability
61
+ Prob[si → −si|(t; t + δt)] = λi(t)δt γi(si → −si|s−i,t)
62
+ (1)
63
+ where λi(t) is an activity rate of the agent i, i.e. λi(t)δt is a time-dependent
64
+ probability for an agent i to be active and have a possibility to change a strategy
65
+ 2
66
+
67
+ within a time interval (t, t + δt) while γ(si → −si|s−i,t) is a probability, for an
68
+ active agent i, of a strategy flip dependent on the current configuration s−i,t of
69
+ strategies in the neighbourhood of this node. In what follows we shall assume
70
+ a noisy best response (Ising-Glauber) flip rate1. For a complete graph topology
71
+ at large N, it is the same for all agents
72
+ γ(m(t)) = (1/2)[1 − si tanh(βJm(t))],   γ±(m(t)) = (1/2)[1 ± tanh(βJm(t))]
+ (2)
78
+ where β = 1/T is an inverse temperature, J is an Ising coupling constant,
79
+ γ± = γ(∓s → ±s) and m(t) = (1/N) Σ_{i=1}^{N} si. For the complete graph topology
84
+ activity rates {λi} are also the same for all agents, λi(t) = λ(t) for any i.
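The flip rule of Eqs. (1)-(2) can be sketched in a few lines of code. The following is a minimal illustration; the function name, argument names and the sample values are ours, not from the paper:

```python
import math

def flip_probability(s_i, m, beta_J):
    """Noisy best-response (Ising-Glauber) flip rate of Eq. (2): the
    probability that an active agent with strategy s_i flips, given the
    mean strategy m of the population."""
    return 0.5 * (1.0 - s_i * math.tanh(beta_J * m))

# An agent aligned with the majority rarely flips; a misaligned one often does.
p_aligned = flip_probability(+1, 0.8, 1.5)
p_against = flip_probability(-1, 0.8, 1.5)
assert p_aligned < 0.5 < p_against
```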
85
+ The time-dependence of the activity rate λ(t) is due to the spillover effect
86
+ driven by the past events of strategy flips assumed to be described by the Hawkes
87
+ process [9] with an exponential kernel:
88
+ λ(t) = λ0 + (µ/N) Σ_{τk<t} e^{−b(t−τk)},
+ (3)
94
+ where {τk} are times at which strategy flip of one of agents took place. The
95
+ spillover effect described by (3) can be termed realised activity spillover.
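A direct transcription of the exponential-kernel intensity (3) reads as follows; this is a sketch, and the helper name and arguments are illustrative:

```python
import math

def hawkes_intensity(t, flip_times, lam0, mu, b, N):
    """Activity rate of Eq. (3): baseline lam0 plus an exponentially
    decaying contribution (mu/N) * exp(-b (t - tau_k)) from each past
    strategy flip at time tau_k < t."""
    return lam0 + (mu / N) * sum(
        math.exp(-b * (t - tau)) for tau in flip_times if tau < t)

lam = hawkes_intensity(t=1.0, flip_times=[0.2, 0.5, 2.0],
                       lam0=1.0, mu=0.5, b=2.0, N=10)
assert lam > 1.0  # flips before t raise the rate above the baseline
```

For an exponential kernel the sum can also be updated recursively, multiplying the excess intensity by exp(-b Δt) between events instead of re-summing over all past flips.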
96
+ The time-dependent state of the system is described by the probability dis-
97
+ tribution P(m, λ; t) and the character of its evolution depends on parameters
98
+ λ0, µ, b, βJ. The particular case of µ = 0 corresponds to a standard Poisson
99
+ dynamics of the system with constant intensity λ(t) = λ0 so that the probability
100
+ distribution describing it is reduced to P(m(t); t). To investigate the effects of
101
+ realised activity spillover, in what follows we compare the properties of Hawkes
102
+ and Poisson dynamic games.
103
+ In the limit N → ∞, the probability density function P(m, λ; t) obeys the
104
+ Fokker-Planck equation derived in [1]
105
+ ∂tP = ∂i(fiP) + (1/N) ∂i∂j(gijP),
+ fi = ( λ[m − tanh(βJm)] ,  −λ[1 − m tanh(βJm)] + b[λ − λ0] ),
+ gij = ( λ[1 − m tanh(βJm)] , −λ[m − tanh(βJm)] ; −λ[m − tanh(βJm)] , λ[1 − m tanh(βJm)] ),
+ (4)
124
+ where summation over repeated indices is assumed. Here and in what follows
125
+ the indices i and j represent coordinates m and λ, and the following rescaling
126
+ was performed:
127
+ λ → 2λ/µ,   λ0 → 2λ0/µ,   b → 2b/µ,   t → µt/2.
+ (5)
133
+ The Fokker-Planck equation (4) describes Brownian motion in an external
134
+ vector field fi in the plane (λ, m) subject to noise effects described by the
135
+ matrix gij and corresponds to a mean field game-type description of the dynamic
136
+ Ising game under consideration2. We also note that the considered system is
137
+ 1This choice corresponds to the Gumbel noise in the individual agents utilities.
138
+ 2The standard description of a mean field game includes, in addition to a Fokker-Planck
139
+ equation, additional equations describing optimal control, see e.g. [17].
140
+ 3
141
+
142
+ symmetric with respect to m-axis, and the external field is non-gradient, i.e.
143
+ ∂λfm ̸= ∂mfλ.
144
+ In such a parametrisation, the process of realised activity spillover is con-
145
+ trolled by a single memory kernel parameter b. In the special case of Poisson
146
+ game with µ = 0 the rescaling in (5) is of course not relevant and the corre-
147
+ sponding one-dimensional dynamics along the m axis can be studied by simply
148
+ taking λ(t) ≡ λ0.
149
+ Depending on parameters of the system, the considered system can have
150
+ different equilibrium configurations (λeq, meq) given by the zeros of vector field
151
+ fi = 0 corresponding to stable fixed points. As we can see from Eq. (4), for
152
+ both Hawkes and Poisson games, for any time-dependent activity rate in the
153
+ Hawkes game λ(t), the m-equilibria are described [1] by the same Curie-Weiss
154
+ equation as in [2, 3, 5]
155
+ meq = tanh(βJmeq).
156
+ (6)
157
+ For high temperatures βJ < 1 the system has one equilibrium meq = 0, and
158
+ for low temperatures βJ > 1 it has two symmetrical (meta)stable equilibria at
159
+ meq = ±m0(β) as well as the unstable one at m = 0 serving as a separatrix
160
+ separating the two stable ones.
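The nonzero roots ±m0(β) of Eq. (6) have no closed form, but for βJ > 1 a fixed-point iteration converges quickly, since |d tanh(βJm)/dm| < 1 near the root. A minimal sketch (the function name and tolerances are ours):

```python
import math

def m0(beta_J, tol=1e-12, max_iter=10_000):
    """Positive root m0(beta) of the Curie-Weiss equation m = tanh(beta_J * m),
    found by fixed-point iteration started from m = 1 (requires beta_J > 1)."""
    m = 1.0
    for _ in range(max_iter):
        m_new = math.tanh(beta_J * m)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

m_eq = m0(1.5)
assert abs(m_eq - math.tanh(1.5 * m_eq)) < 1e-9  # self-consistency check
```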
161
+ The λ-equilibria are more complicated and depend on both the temperature
162
+ βJ and self-excitation memory kernel parameter b.
163
+ In the high-temperature phase βJ < 1 and b > 1, we have the equilibrium
164
+ configuration of the form
165
+ m = 0, λ = bλ0/(b − 1)
167
+ while for b < 1 we have a blow-up solution with λ → ∞ for m = 0.
168
+ In the low-temperature phase βJ > 1, the following three modes are possible:
169
+ • Mode 1 “calm agents”: if b > 1, then we have two (meta)stable equilibrium
170
+ configurations at
171
+ m = ±m0(β), λ = bλ0/(b − 1 + m0^2(β)) = ˜λ(m0)
+ as well as the unstable saddle one at
+ m = 0, λ = bλ0/(b − 1)
178
+ • Mode 2 “excited agents”: if 1 − m0^2(β) < b < 1, then we still have
+ equilibrium configurations at
181
+ m = ±m0(β), λ = ˜λ(m0),
182
+ but the saddle configuration is now absent:
183
+ m = 0, λ → ∞
184
+ • Mode 3 “psycho agents”: if b < 1 − m0^2(β), then
186
+ λ → ∞
187
+ for all extrema of the m-axis.
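The three-way classification above reduces to two comparisons once m0(β) is known. A sketch (the mode labels follow the text; the function name is ours):

```python
import math

def classify_mode(b, beta_J):
    """Classify the low-temperature (beta_J > 1) regime by the Hawkes
    memory-kernel parameter b, as in the three modes described above."""
    m = 1.0
    for _ in range(1000):            # m0(beta) from m = tanh(beta_J * m)
        m = math.tanh(beta_J * m)
    if b > 1:
        return "Mode 1 (calm agents)"
    if b > 1 - m * m:
        return "Mode 2 (excited agents)"
    return "Mode 3 (psycho agents)"

assert classify_mode(1.5, beta_J=1.5).startswith("Mode 1")
assert classify_mode(0.5, beta_J=1.5).startswith("Mode 2")
assert classify_mode(0.1, beta_J=1.5).startswith("Mode 3")
```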
188
+ 4
189
+
190
+ Figure 1. Phase diagram of all possible modes in the (b, βJ) plane for λ0 = 1.
212
+ The red area (single equilibrium) here denotes the presence of only one equilibrium
+ along the m-axis. Blue, yellow and green areas correspond to Modes 1, 2 and
214
+ 3, respectively (see the description in the main text).
215
+ 5
216
+
217
+ The phase diagram showing the above modes is given in Fig. 1.
218
+ At the timescale of τλ ∼ 1/b, in Modes 1 and 2 the system relaxes to the
+ appropriate temperature-dependent equilibrium while Mode 3 does not
+ correspond to any equilibrium. The dependence of such a relaxation on temperature
221
+ βJ and Hawkes parameter b in the Modes 1 and 2 was studied in [1].
222
+ At low temperatures βJ > 1, the equilibrium configurations for Modes 1 and
223
+ 2 are in fact metastable due to noise-induced transitions of the type m0(β) ↔
224
+ −m0(β) taking place at large timescale τ ≫ τλ.
225
+ The saddle we introduced
226
+ for Mode 1 then has the following physical meaning: it is the point where the
227
+ transition trajectory from one equilibrium to another at the infinite time limit
228
+ crosses the separatrix m = 0 [18].
229
+ To consider these transitions, here and in what follows we fix βJ = 1.5 to
230
+ establish the mode with two metastable equilibria (Mode 1 or 2, see Fig. 1).
231
+ For our convenience, in what follows we shall consider the transition −m0(β) →
232
+ m0(β).
233
+ 3. Transition between metastable equilibria
234
+ 3.1. Long-time behaviour of probability density function
235
+ The subject of our study is a comparison of the transition probability be-
236
+ tween the states (m(ta), λ(ta)) and (m(tb), λ(tb)) within the time interval [ta, tb]
237
+ for Hawkes and Poisson Ising games. In what follows we shall use a condensed
238
+ notation xa,b = (m(ta,b), λ(ta,b)) and fix [ta, tb] = [0, τ] so that the transition
239
+ probability between two metastable states is
240
+ P(xb, t|xa, 0) ≡ P(xb, t)|_{x(0)=xa}
+ (7)
244
+ where m(0) = −m0(β), λ(0) = ˜λ(m0) and m(τ) = m0(β), λ(τ) = ˜λ(m0). The
245
+ transition probability (7) obeys [1] the Fokker-Planck equation (4).
246
+ In the previous paper we have compared the probabilities of transition be-
247
+ tween metastable equilibria in Hawkes and Poisson Ising games within a finite
248
+ time interval [0, τ] and demonstrated an exponential acceleration of this tran-
249
+ sition in the Hawkes case. The main goal of the present paper is to calculate
250
+ this transition probability in the limit τ → ∞. To discuss this limit let us use,
251
+ following [1], the analogy with classical mechanics. A formal justification for it
252
+ can be found, e.g., in [19].
253
+ As the diffusion coefficient in (4) is proportional to 1/N, in the limit of N →
254
+ ∞, for solving the Fokker-Planck equation we can use the WKB approximation.
255
+ Introducing an analogue of action S(x, t) through P(x, t) ∝ e−NS(x,t), we get
256
+ the following Hamilton-Jacobi equation for S:
257
+ ∂tS(x, t) = fi(x(t)) ∂S(x, t)/∂xi − gij(x(t)) [∂S(x, t)/∂xi][∂S(x, t)/∂xj].
+ (8)
265
+ One can also introduce an analogue of the Hamiltonian
266
+ H(p, x; t) = −fi(x(t))pi(t) + gij(x(t))pi(t)pj(t),   pi = ∂S/∂xi
+ (9)
270
+ 6
271
+
272
+ The time evolution of the system is then given by the corresponding Hamilton
273
+ equations
274
+ ˙xi(t) + fi(x(t)) = 2gij(x(t))pj(t),
+ ˙pi(t) − pj∂ifj(x(t)) = −pj∂igjk(x(t))pk(t).
+ (10)
281
+ The system of Hamilton equations (10) has the first integral H(p, x) = E. As
282
+ will be shown later, the value of E implicitly sets conditions on the transition
283
+ time τ from one metastable equilibrium to another in the classical problem.
284
+ The leading contribution to the transition probability has the form
285
+ P(xi, xf; τ) ∝ e−NS
286
+ (11)
287
+ where the exponential factor S can be calculated by implementing the Maupertuis
288
+ principle [20]
289
+ S = S0 − Eτ = Σ_i ∫_0^∞ pi(t) ˙xi(t) dt − Eτ = Σ_i ∫_trajectory pi dxi − Eτ.
+ (12)
301
+ The transition trajectory itself is determined by equations (10), the first
302
+ integral H(p, x) = E and, obviously, should minimise the trajectory-dependent
+ term S0. The transition time is set by E via the relation τ = ∂S0/∂E [20].
304
+ In [1] we considered transition probability from one metastable equilibrium
305
+ to another in finite time (E ̸= 0) and found out that the probability exponen-
306
+ tially increases due to activity spillover. In the present study we augment the
307
+ results of [1] by introducing transition rates in the infinite time limit
308
+ corresponding to E = 0.
309
+ The system of differential equations (10) for E = 0 is solvable in quadratures.
310
+ The corresponding solution for the transition trajectory can naturally be broken
311
+ into two pieces.
312
+ The first piece corresponds to the transition from the initial equilibrium to
+ the separatrix, −m0(β) → 0. The corresponding formulae read
314
+ ˙m(t) = λ(t)[m − tanh(βJm)],
+ (13)
+ ˙λ(t) = λ(t)[1 − m tanh(βJm)] − b(λ(t) − λ0) − λ(t)[m − tanh(βJm)]^2 / [1 − m tanh(βJm)],
+ (14)
+ pm = [m − tanh(βJm)] / [1 − m tanh(βJm)],
+ (15)
+ pλ = 0.
+ (16)
335
+ The second piece corresponds to the transition from the separatrix to the other
+ equilibrium, 0 → m0(β). The corresponding formulae read
337
+ ˙m(t) = −λ(t)[m − tanh(βJm)],
+ (17)
+ ˙λ(t) = λ(t)[1 − m tanh(βJm)] − b(λ(t) − λ0),
+ (18)
+ pm = 0,
+ (19)
+ pλ = 0.
+ (20)
353
+ 7
354
+
355
+ We note that despite the symmetry with respect to m-axis, the transition tra-
356
+ jectory is asymmetric as the external field is non-gradient.
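The second piece, Eqs. (17)-(18), is an ordinary relaxation flow and can be integrated directly. A minimal Euler sketch, not the paper's numerics; the step size, starting offset and default parameters are our illustrative choices:

```python
import math

def second_piece(beta_J=1.5, b=1.5, lam0=1.0, dt=1e-3, t_max=50.0):
    """Euler integration of Eqs. (17)-(18): the piece of the E = 0
    trajectory from just past the separatrix to the equilibrium m0(beta).
    Starts slightly off m = 0, where the drift vanishes (Mode 1, b > 1)."""
    m, lam = 1e-3, b * lam0 / (b - 1)    # saddle-point values
    t = 0.0
    while t < t_max:
        dm = -lam * (m - math.tanh(beta_J * m))
        dlam = lam * (1 - m * math.tanh(beta_J * m)) - b * (lam - lam0)
        m, lam, t = m + dm * dt, lam + dlam * dt, t + dt
    return m, lam

m_end, lam_end = second_piece()
assert abs(m_end - math.tanh(1.5 * m_end)) < 1e-3  # relaxed to m0(beta)
```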
357
+ In accordance with the classification of modes introduced in Section 2, for
358
+ different values of the parameter b the Hawkes transition trajectory either
+ passes through the saddle point, where it has a discontinuity (Mode 1), or di-
+ verges at the separatrix m = 0 (Mode 2). The trajectories for various values of
361
+ parameter b are shown in Fig. 2.
362
+ Figure 2. Transition trajectories −m0(β) → m0(β) in the infinite time limit
+ E = 0, given by Eqs. 13-16 (left half) and Eqs. 17-20 (right half), for
+ b = 0.5, 1.2, 1.5, 2.0 at βJ = 1.5, λ0 = 1. The trajectories for Mode 1
386
+ (b = 1.2, 1.5, 2.0) are defined and have a discontinuity at the saddle, and the
387
+ trajectory for Mode 2 (b = 0.5) diverges at m = 0.
388
+ From Eqs. (12) and (13)-(20) it follows that in the infinite time limit, for which
+ E = 0, the exponential factor S is the same for the Poisson and Hawkes Ising
+ games for all b:
+ S = ∫_{−m0(β)}^{0} [m − tanh(βJm)] / [1 − m tanh(βJm)] dm
+ (21)
398
+ Therefore, for understanding a possible difference between the Hawkes and
399
+ Poisson Ising games in the infinite time limit, an analysis of the pre-exponential
400
+ factor of the transition rate is required.
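The barrier factor (21) is a one-dimensional integral that is straightforward to evaluate numerically. A midpoint-rule sketch; the function name and grid size are illustrative:

```python
import math

def action_S(beta_J=1.5, n=10_000):
    """Midpoint-rule quadrature of Eq. (21):
    S = int_{-m0}^{0} [m - tanh(beta_J m)] / [1 - m tanh(beta_J m)] dm."""
    m0 = 1.0
    for _ in range(1000):                 # m0(beta) from m = tanh(beta_J * m)
        m0 = math.tanh(beta_J * m0)
    a, h = -m0, m0 / n
    total = 0.0
    for k in range(n):
        m = a + (k + 0.5) * h
        total += (m - math.tanh(beta_J * m)) / (1 - m * math.tanh(beta_J * m))
    return total * h

S = action_S()
assert S > 0  # the integrand is positive on (-m0, 0): a genuine barrier
```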
401
+ 8
402
+
403
+ 3.2. Pre-exponential factor of the transition rate
404
+ The calculation of the pre-exponential factor for the one-dimensional Pois-
405
+ son game closely follows the original calculation by Kramers [16] and can be
406
+ done analytically, see e.g. [21, 22]. A more general result for a larger number
407
+ of dimensions, including the case of non-potential fields, was obtained in [23].
408
+ However, this result is not applicable in our two-dimensional Hawkes game,
409
+ since the transition trajectory in the non-gradient field has a discontinuity, see
410
+ a related discussion in [24].
411
+ When the trajectory is defined (Mode 1), we can use analogies with one-
+ dimensional motion. In the Kramers problem for a potential with a smooth
+ barrier, the pre-exponential factor of the escape rate depends on the second
+ derivatives of the potential both at the stationary attractor and at the saddle.
415
+ However, if the
416
+ potential barrier is edge-shaped, the result depends only on the second derivative
417
+ of the potential at the stationary attractor [25]. This leads us to the assumption that
418
+ in the Hawkes game the acceleration with respect to the Poisson one is caused
+ only by a corresponding change in the activity of agents in the equilibrium state,
+ with the rest of the motion having an insignificant effect on the transition time.
+ This means that average transition times in the Hawkes and Poisson games with
+ intensity ˜λ(m0) are equal. Therefore the ratio of transition times in the original Hawkes
423
+ and Poisson games can be written in the following form:
424
+ ttr,P / ttr,H ≈ b / (b − 1 + m0^2(β)).
+ (22)
431
+ To check the above-formulated assumption we have performed computer
432
+ simulations of Hawkes and Poisson games as well as those of Langevin equations
433
+ that correspond to Eq. 4. We have also checked that the transition time ratio
434
+ does not depend on the number of agents for N ≥ 20, i.e. when the number of
435
+ agents is sufficiently large. A comparison of the results of these simulations
436
+ with Eq. 22 is shown in Fig. 3.
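The prediction of Eq. (22) is cheap to tabulate against b. A sketch; the function name is ours:

```python
import math

def predicted_ratio(b, beta_J=1.5):
    """Transition-time ratio t_P / t_H of Eq. (22): b / (b - 1 + m0^2(beta))."""
    m0 = 1.0
    for _ in range(1000):                 # m0(beta) from m = tanh(beta_J * m)
        m0 = math.tanh(beta_J * m0)
    return b / (b - 1 + m0 * m0)

# The Hawkes enhancement (ratio > 1) weakens as the kernel decays faster.
assert predicted_ratio(1.2) > predicted_ratio(2.0) > 1.0
```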
437
+ From Fig. 3 we see that the activity in the Hawkes game as compared to the
438
+ Poisson one is indeed enhanced. A more detailed conclusion is that in the regime
439
+ corresponding to Mode 1 the formula in Eq. (22) works well for
+ both continuum and discrete cases, but in the regime corresponding to Mode
+ 2 it is, due to the presence of a divergence in the continuum generalisation of the
+ game, not in agreement with the exact discrete formulation. Despite this, the
443
+ shape of the transition trajectory still provides us a qualitatively correct insight
444
+ into the behaviour of agents, see Fig. 4.
445
+ Let us note that the decision process does significantly intensify around the
446
+ separatrix, i.e. when agents are uncertain of which of the two (quasi)stable equilibria
447
+ to choose. Once the decision is made, the agents calm down.
448
+ 4. Conclusions
449
+ We have studied the self-excited Ising game on a complete graph. In spite
450
+ of its simplicity, it has rich dynamics exhibiting various types of behaviour.
451
+ Competition of “calming down” and “activation” in the Hawkes self-excitation
452
+ mechanism at different levels of noise results in three possible modes (phases).
453
+ 9
454
+
455
+ Figure 3. Ratio of transition times in the Hawkes and Poisson cases. Triangles
479
+ show simulation results for games (discrete model), and squares show results
480
+ for Langevin equations (continuum model). The line refers to the theoretical
481
+ prediction given by Eq. (22). Dashed line b = 1 separates Mode 1 (blue area)
482
+ from Mode 2 (yellow area).
483
[Figure 4 here: λ versus m along a transition; legend: Analytics, Agent model.]

Figure 4. Example of transition in Hawkes game (red line) for b = 0.5, λ0 = 1, N = 20. As in the corresponding transition trajectory (blue line), the intensity of decision making process increases near m = 0.
We expect that this competition might play an important role in other situations, e.g. for non-exponential Hawkes kernels [26] or for more complicated graph topologies.

Another focus of this work was to investigate the probability of transition between metastable equilibria in the infinite time limit. This is a very challenging task in the multi-dimensional case, where the external field is non-gradient and has a discontinuity. Also, since in the relevant one-dimensional case (i.e. when the potential field only has the discontinuity) it is known that the dynamics for such fields is rather different from that for smooth potential fields [27], it would be natural to assume a similar situation in the multi-dimensional case. However, based on an intuitive understanding of the considered model, we have presented an approach that allows us to reduce the problem to calculating the transition time in the corresponding one-dimensional model. The analytically calculated transition trajectory also gave us a qualitative insight into the behaviour of agents in the corresponding discrete system.

As for further developments of the suggested approach, an interesting idea would be to work out its generalisation to two- and multi-dimensional systems. Compared to other existing approaches for treating the case of a non-gradient external field (see e.g. [28, 29]), this newly introduced method could present a workable alternative due to its simplicity.
References

[1] A. Antonov, A. Leonidov, and A. Semenov. Self-excited Ising game. Physica A: Statistical Mechanics and its Applications, 561:125305, 2021.
[2] Lawrence Blume and Steven Durlauf. Equilibrium concepts for social interaction models. International Game Theory Review, 05(03):193–209, 2003.
[3] Jean-Philippe Bouchaud. Crises and collective socio-economic phenomena: Simple models and challenges. Journal of Statistical Physics, 151:567–606, 2013.
[4] Silvio Salinas. Introduction to Statistical Physics. Springer Science & Business Media, 2001.
[5] Andrey Leonidov, Alexey Savvateev, and Andrew G. Semenov. Quantal response equilibria in binary choice games on graphs. arXiv preprint arXiv:1912.09584, 2019.
[6] A. Leonidov, A. Savvateev, and A. Semenov. QRE in the Ising game. CEUR Workshop Proceedings, 2020.
[7] Andrey Leonidov, Alexey Savvateev, and Andrew G. Semenov. Ising game on graphs. arXiv preprint arXiv:2108.00824, 2021.
[8] Jacob K. Goeree, Charles A. Holt, and Thomas R. Palfrey. Quantal Response Equilibria. Springer, 2016.
[9] Alan G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83–90, 04 1971.
[10] V. Filimonov and D. Sornette. Apparent criticality and calibration issues in the Hawkes self-excited point process model: application to high-frequency financial data. Quantitative Finance, 15(8):1293–1314, 2015.
[11] Stephen J. Hardiman, Nicolas Bercot, and Jean-Philippe Bouchaud. Critical reflexivity in financial markets: a Hawkes process analysis. The European Physical Journal B, 86:442, 2013.
[12] Yosihiko Ogata. Statistical models for earthquake occurrences and residual analysis for point processes. Journal of the American Statistical Association, 83(401):9–27, 1988.
[13] Patrick J. Laub, Thomas Taimre, and Philip K. Pollett. Hawkes processes, 2015.
[14] Kiyoshi Kanazawa and Didier Sornette. Field master equation theory of the self-excited Hawkes process. Phys. Rev. Res., 2:033442, Sep 2020.
[15] Kiyoshi Kanazawa and Didier Sornette. Nonuniversal power law distribution of intensities of the self-excited Hawkes process: A field-theoretical approach. Phys. Rev. Lett., 125:138301, Sep 2020.
[16] H. A. Kramers. Brownian motion in a field of force and the diffusion model of chemical reactions. Physica, 7(4):284–304, 1940.
[17] Jean-Michel Lasry and Pierre-Louis Lions. Mean field games. Japanese Journal of Mathematics, 2(1):229–260, 2007.
[18] Haidong Feng, Kun Zhang, and Jin Wang. Non-equilibrium transition state rate theory. Chem. Sci., 5:3761–3769, 2014.
[19] V. P. Maslov and M. V. Fedoriuk. Semiclassical Approximation in Quantum Mechanics. Reidel, Dordrecht, 1981.
[20] L. D. Landau and E. M. Lifshitz. Mechanics. Vol. 1. Butterworth-Heinemann, 1976.
[21] B. Caroli, C. Caroli, and B. Roulet. Diffusion in a bistable potential: The functional integral approach. Journal of Statistical Physics, 26:83–111, 1981.
[22] Sidney Coleman. Aspects of Symmetry: Selected Erice Lectures. Cambridge University Press, 1985.
[23] Freddy Bouchet and Julien Reygner. Generalisation of the Eyring–Kramers transition rate formula to irreversible diffusion processes. Annales Henri Poincaré, 17:3499–3532, 2016.
[24] Daisy Dahiya and Maria Cameron. Ordered line integral methods for computing the quasi-potential. Journal of Scientific Computing, 75, 2018.
[25] B. J. Matkowsky, Z. Schuss, and E. Ben-Jacob. A singular perturbation approach to Kramers' diffusion problem. SIAM Journal on Applied Mathematics, 42(4):835–849, 1982.
[26] Jean-Philippe Bouchaud, Julius Bonart, Jonathan Donier, and Martin Gould. Trades, Quotes and Prices: Financial Markets Under the Microscope. Sect. 9.3.4. Cambridge University Press, 2018.
[27] H. Dekker. Kramers' activation rate for a sharp edged potential barrier: The double oscillator. Physica A: Statistical Mechanics and its Applications, 136(1):124–146, 1986.
[28] Nicholas Paskal and Maria Cameron. An efficient jet marcher for computing the quasipotential for 2D SDEs, 2021.
[29] Peter Ashwin, Jennifer Creaser, and Krasimira Tsaneva-Atanasova. Quasipotentials for coupled escape problems and the gate-height bifurcation, 2022.
KNAzT4oBgHgl3EQfVfxu/content/tmp_files/load_file.txt ADDED
Transition between metastable equilibria: applications to binary-choice games

A. Antonov(1)∗, A. Leonidov(1,2), and A. Semenov(1,3)

(1) P.N. Lebedev Physical Institute, Moscow, Russia
(2) Moscow Institute of Physics and Technology, Dolgoprudny, Russia
(3) Higher School of Economics, Moscow, Russia

Abstract

Transitions between metastable equilibria in the low-temperature phase of a dynamical Ising game with activity spillover are studied in the infinite time limit. It is shown that the exponential enhancement due to activity spillover previously found for finite-time transitions in [1] is absent in the infinite time limit. An analytical description of the infinite-time trajectory is developed and compared with the results of an exact numerical analysis.

∗antonov@lpi.ru

arXiv:2301.01285v1 [cond-mat.stat-mech] 3 Jan 2023
1. Introduction

Studies of noisy binary choice games are of special interest because of the existence of close parallels to statistical physics of spin systems, in particular to static and dynamic properties of phase transitions in them [2, 3, 4]. These parallels are particularly intriguing because of the fundamentally different origins of equilibria in game theory and statistical physics: in game theory equilibration is a result of balancing individual interests, while in statistical physics equilibration is a search for a global minimum of free energy. For the noisy binary choice problem on complete graphs it has long been known, see [2] and references therein, that for a special choice of noise the game-theoretic equilibria are characterised by the same mean-field Curie-Weiss equation as that describing phase transitions in magnetics, see e.g. [4]. The properties of static and dynamic equilibria in noisy binary choice games were studied in [5, 6, 7] for arbitrary noise, and complete and random graph topologies. It was established in particular that static game-theoretic equilibria in noisy binary choice games on graphs correspond to the so-called quantal response/expectation equilibria [8].

The dynamics of games can, however, be fundamentally different from conventional spin dynamics due to a variety of possible mechanisms. One of these is the possibility of activity spillover (self-excitation) that was intensively studied for so-called Hawkes processes [9] with applications to finance [10, 11], earthquakes [12] and other subjects, see the recent review in [13]. A master equation formalism for such processes was developed in [14, 15]. The effects of an activity spillover different from the Hawkes self-excitation mechanism for a noisy binary choice game (Ising game) on complete graphs were studied in [1]. The main focus of [1] was on studying transitions between metastable equilibria in the low-temperature phase taking place at finite time. It was observed that activity spillover leads to an exponential acceleration of such transitions.

The present paper complements the analysis of [1] by studying transitions between metastable equilibria in the limit of infinite time. The importance of studying this limit is, first, in establishing a link with a rich literature on the Kramers rate [16] and, second, in that in this limit the exponential enhancement is absent and an analysis of the pre-exponential contribution is necessary. In analysing this problem we develop an analytical description of the infinite-time-limit trajectory and suggest an analytical formula for the transition rate that is compared with the results of exact numerical simulations.
2. Model

We consider a dynamical noisy binary choice game of N agents on a complete graph topology. Each agent i has two possible strategies si = ±1, so the system is fully described by the vector st = (s1, . . . , sN)t at given time t. The temporal evolution of the strategies configuration st → st+δt within a small time interval δt is assumed to be driven by a strategy flip si → −si of some agent i with the flip probability

  Prob[si → −si | (t, t + δt)] = λi(t) δt γi(si → −si | s−i,t),   (1)

where λi(t) is an activity rate of the agent i, i.e. λi(t)δt is a time-dependent probability for an agent i to be active and have a possibility to change a strategy within a time interval (t, t + δt), while γ(si → −si | s−i,t) is the probability, for an active agent i, of a strategy flip dependent on the current configuration s−i,t of strategies in the neighbourhood of this node. In what follows we shall assume a noisy best response (Ising-Glauber) flip rate¹. For a complete graph topology at large N, it is the same for all agents:

  γ(m(t)) = (1/2) [1 − si tanh(βJm(t))]  →  γ±(m(t)) = (1/2) [1 ± tanh(βJm(t))],   (2)

where β = 1/T is an inverse temperature, J is an Ising coupling constant, γ± = γ(∓s → ±s) and m(t) = (1/N) Σi si. For the complete graph topology the activity rates {λi} are also the same for all agents, λi(t) = λ(t) for any i. The time dependence of the activity rate λ(t) is due to the spillover effect driven by the past events of strategy flips, assumed to be described by the Hawkes process [9] with an exponential kernel:

  λ(t) = λ0 + (µ/N) Σ_{τk < t} e^{−b(t−τk)},   (3)

where {τk} are times at which a strategy flip of one of the agents took place. The spillover effect described by (3) can be termed realised activity spillover.
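For the exponential kernel, the sum in Eq. (3) can be updated recursively, so event times can be drawn by Ogata-style thinning without ever re-summing the history. The following sketch is our own illustration (the function name and parameter values are assumptions), with `jump = µ/N` denoting the intensity kick per event:

```python
import math
import random

def simulate_hawkes(lam0, jump, b, t_max, seed=0):
    """Sample event times of a Hawkes process with intensity
    lam(t) = lam0 + sum_{tau_k < t} jump * exp(-b * (t - tau_k)),
    i.e. Eq. (3) with jump = mu / N.

    Ogata thinning: between events the intensity only decays, so its
    current value is a valid upper bound until the next accepted point.
    """
    rng = random.Random(seed)
    t, excess, events = 0.0, 0.0, []
    while True:
        lam_bar = lam0 + excess            # upper bound on lam from here on
        w = rng.expovariate(lam_bar)       # candidate waiting time
        t += w
        if t > t_max:
            return events
        excess *= math.exp(-b * w)         # kernel decay over the gap
        if rng.random() * lam_bar <= lam0 + excess:
            events.append(t)               # accepted event
            excess += jump                 # self-excitation: intensity jumps
```

With `jump = 0` this reduces to a plain Poisson process of rate λ0; for jump/b < 1 the branching is subcritical and the mean event rate is λ0/(1 − jump/b), which is why the self-excited game is systematically more active than its Poisson counterpart.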
The time-dependent state of the system is described by the probability distribution P(m, λ; t), and the character of its evolution depends on the parameters λ0, µ, b, βJ. The particular case µ = 0 corresponds to a standard Poisson dynamics of the system with constant intensity λ(t) = λ0, so that the probability distribution describing it reduces to P(m(t); t). To investigate the effects of realised activity spillover, in what follows we compare the properties of Hawkes and Poisson dynamic games.

In the limit N → ∞, the probability density function P(m, λ; t) obeys the Fokker-Planck equation derived in [1]:

  ∂tP = ∂i(fiP) + (1/N) ∂i∂j(gijP),
  fm = λ [m − tanh(βJm)],  fλ = −λ [1 − m tanh(βJm)] + b [λ − λ0],
  gmm = gλλ = λ [1 − m tanh(βJm)],  gmλ = gλm = −λ [m − tanh(βJm)],   (4)

where summation over repeated indices is assumed. Here and in what follows the indices i and j represent the coordinates m and λ, and the following rescaling was performed:

  λ → 2λ/µ,  λ0 → 2λ0/µ,  b → 2b/µ,  t → µt/2.   (5)

The Fokker-Planck equation (4) describes Brownian motion in an external vector field fi in the plane (λ, m) subject to noise effects described by the matrix gij and corresponds to a mean-field-game-type description of the dynamic Ising game under consideration². We also note that the considered system is symmetric with respect to the m-axis, and the external field is non-gradient, i.e. ∂λfm ≠ ∂mfλ.

In such a parametrisation, the process of realised activity spillover is controlled by a single memory kernel parameter b. In the special case of the Poisson game with µ = 0 the rescaling in (5) is of course not relevant, and the corresponding one-dimensional dynamics along the m-axis can be studied by simply taking λ(t) ≡ λ0.

Depending on its parameters, the considered system can have different equilibrium configurations (λeq, meq) given by the zeros of the vector field, fi = 0, corresponding to stable fixed points. As we can see from Eq. (4), for both Hawkes and Poisson games, and for any time-dependent activity rate λ(t) in the Hawkes game, the m-equilibria are described [1] by the same Curie-Weiss equation as in [2, 3, 5]:

  meq = tanh(βJmeq).   (6)

For high temperatures βJ < 1 the system has one equilibrium meq = 0, and for low temperatures βJ > 1 it has two symmetrical (meta)stable equilibria at meq = ±m0(β) as well as the unstable one at m = 0 serving as a separatrix separating the two stable ones. The λ-equilibria are more complicated and depend on both the temperature βJ and the self-excitation memory kernel parameter b. In the high-temperature phase βJ < 1 and for b > 1, we have the equilibrium configuration

  m = 0,  λ = bλ0/(b − 1),

while for b < 1 we have a blow-up solution with λ → ∞ for m = 0. In the low-temperature phase βJ > 1, the three following modes are possible:

Mode 1 “calm agents”: if b > 1, then we have two (meta)stable equilibrium configurations at

  m = ±m0(β),  λ = bλ0/(b − 1 + m0²(β)) ≡ ˜λ(m0),

as well as the unstable saddle one at m = 0, λ = bλ0/(b − 1).

Mode 2 “excited agents”: if 1 − m0²(β) < b < 1, then we still have equilibrium configurations at m = ±m0(β), λ = ˜λ(m0), but the saddle configuration is now absent: m = 0, λ → ∞.

Mode 3 “psycho agents”: if b < 1 − m0²(β), then λ → ∞ for all extrema on the m-axis.

¹ This choice corresponds to the Gumbel noise in the individual agents' utilities.
² The standard description of a mean field game includes, in addition to a Fokker-Planck equation, additional equations describing optimal control, see e.g. [17].
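The equilibria above are easy to obtain numerically; the following small sketch (our own, not from the paper) solves Eq. (6) by fixed-point iteration and evaluates the Mode 1 intensity ˜λ(m0) = bλ0/(b − 1 + m0²):

```python
import math

def curie_weiss_m0(beta_j, tol=1e-12, max_iter=100000):
    """Positive root of m = tanh(beta_j * m), Eq. (6), via fixed-point
    iteration.  For beta_j < 1 the iterates decay towards 0, the only
    solution; for beta_j > 1 they converge to the ordered root m0 > 0.
    """
    m = 1.0                                  # start from full ordering
    for _ in range(max_iter):
        m_next = math.tanh(beta_j * m)
        if abs(m_next - m) < tol:
            return m_next
        m = m_next
    return m

def mode1_lambda(b, lam0, m0):
    """Equilibrium intensity lambda_tilde(m0) = b lam0 / (b - 1 + m0^2)."""
    return b * lam0 / (b - 1.0 + m0 * m0)

m0 = curie_weiss_m0(1.5)                     # metastable points at +/- m0
```

For βJ = 1.5 (the value used below) this gives m0 ≈ 0.86; in Mode 1 the corresponding intensity ˜λ(m0) lies below the saddle intensity bλ0/(b − 1), consistent with agents "calming down" at the ordered equilibria.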
[Figure 1 here: phase diagram in the (b, βJ) plane; b on the horizontal axis, βJ on the vertical axis.]

Figure 1. Phase diagram of all possible modes in the (b, βJ) plane for λ0 = 1. The red area (single equilibrium) here denotes the presence of only one equilibrium along the m-axis. Blue, yellow and green areas correspond to Modes 1, 2 and 3, respectively (see the description in the main text).
+ page_content=' 5 The phase diagram showing the above modes is given in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
93
+ page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
94
+ page_content=' At the timescale of τλ ∼ 1/b, in Modes 1 and 2 the system relaxes to the ap- propriate temperature-dependent equilibrium while the Mode 3 does not corre- spond to any equilibrium.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
95
+ page_content=' The dependence of such a relaxation on temperature βJ and Hawkes parameter b in the Modes 1 and 2 was studied in [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
96
+ page_content=' At low temperatures βJ > 1, the equilibrium configurations for Modes 1 and 2 are in fact metastable due to noise-induced transitions of the type m0(β) ↔ −m0(β) taking place at large timescale τ ≫ τλ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
97
+ page_content=' The saddle we introduced for Mode 1 then has the following physical meaning: it is the point where the transition trajectory from one equilibrium to another at the infinite time limit crosses the separatrix m = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
98
+ page_content=' [18] To consider these transitions, here and in what follows we fix βJ = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
99
+ page_content='5 to establish the mode with two metastable equilibria (Mode 1 or 2, see Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
100
+ page_content=' 1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
101
+ page_content=' For our convenience, in what follows we shall consider the transition −m0(β) → m0(β).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
102
3. Transition between metastable equilibria

3.1. Long-time behaviour of probability density function

The subject of our study is a comparison of the transition probability between the states (m(ta), λ(ta)) and (m(tb), λ(tb)) within the time interval [ta, tb] for Hawkes and Poisson Ising games. In what follows we shall use the condensed notation x_{a,b} = (m(t_{a,b}), λ(t_{a,b})) and fix [ta, tb] = [0, τ], so that the transition probability between the two metastable states is

P(x_b, t \,|\, x_a, 0) \equiv P(x_b, t)\big|_{x(0)=x_a}, \quad (7)

where m(0) = −m0(β), λ(0) = λ̃(m0) and m(τ) = m0(β), λ(τ) = λ̃(m0). The transition probability (7) obeys [1] the Fokker-Planck equation (4).
In the previous paper we compared the probabilities of transition between metastable equilibria in Hawkes and Poisson Ising games within a finite time interval [0, τ] and demonstrated an exponential acceleration of this transition in the Hawkes case. The main goal of the present paper is to calculate this transition probability in the limit τ → ∞.
To discuss this limit let us use, following [1], the analogy with classical mechanics. A formal justification for it can be found, e.g., in [19]. As the diffusion coefficient in (4) is proportional to 1/N, in the limit N → ∞ we can solve the Fokker-Planck equation in the WKB approximation.
Introducing an analogue of the action S(x, t) through P(x, t) ∝ e^{−NS(x,t)}, we get the following Hamilton-Jacobi equation for S:

\partial_t S(x, t) = f_i(x(t)) \frac{\partial S(x, t)}{\partial x_i} - g_{ij}(x(t)) \frac{\partial S(x, t)}{\partial x_i} \frac{\partial S(x, t)}{\partial x_j}. \quad (8)

One can also introduce an analogue of the Hamiltonian

H(p, x; t) = -f_i(x(t))\, p_i(t) + g_{ij}(x(t))\, p_i(t)\, p_j(t), \qquad p_i = \frac{\partial S}{\partial x_i}. \quad (9)

The time evolution of the system is then given by the corresponding Hamilton equations

\dot{x}_i(t) + f_i(x(t)) = 2 g_{ij}(x(t))\, p_j(t),
\dot{p}_i(t) - p_j \partial_i f_j(x(t)) = -p_j \partial_i g_{jk}(x(t))\, p_k(t). \quad (10)

The system of Hamilton equations (10) has the first integral H(p, x) = E. As will be shown later, the value of E implicitly sets conditions on the transition time τ from one metastable equilibrium to another in the classical problem.
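For reference, the right-hand sides of Eqs. (10) can be evaluated numerically for any given drift f and diffusion matrix g. The sketch below is our own illustration (the function names, finite-difference gradients and the deliberately simple toy choice f(x) = x, g = 1/2 are not from the paper); it checks that the first integral H(p, x) of Eq. (9) is indeed conserved along a trajectory.

```python
def grad(fun, x, i, h=1e-6):
    """Central finite difference of the scalar function fun with respect to x[i]."""
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (fun(xp) - fun(xm)) / (2.0 * h)

def hamilton_rhs(x, p, f, g):
    """Right-hand sides of Eqs. (10):
    dx_i/dt = -f_i + 2 g_ij p_j,   dp_i/dt = p_j d_i f_j - p_j (d_i g_jk) p_k."""
    n = len(x)
    fx, gx = f(x), g(x)
    dx = [-fx[i] + 2.0 * sum(gx[i][j] * p[j] for j in range(n)) for i in range(n)]
    dp = [sum(p[j] * grad(lambda y, j=j: f(y)[j], x, i) for j in range(n))
          - sum(p[j] * p[k] * grad(lambda y, j=j, k=k: g(y)[j][k], x, i)
                for j in range(n) for k in range(n))
          for i in range(n)]
    return dx, dp

def energy(x, p, f, g):
    """The Hamiltonian H(p, x) of Eq. (9)."""
    n = len(x)
    fx, gx = f(x), g(x)
    return (-sum(fx[i] * p[i] for i in range(n))
            + sum(gx[i][j] * p[i] * p[j] for i in range(n) for j in range(n)))

# Toy 1-D check: f(x) = x, g = 1/2 gives H = -x p + p^2 / 2, which must stay constant.
f = lambda x: [x[0]]
g = lambda x: [[0.5]]
x, p = [1.0], [0.3]
E0 = energy(x, p, f, g)
for _ in range(2000):                      # explicit Euler steps, t in [0, 1]
    dx, dp = hamilton_rhs(x, p, f, g)
    x = [xi + 5e-4 * d for xi, d in zip(x, dx)]
    p = [pi + 5e-4 * d for pi, d in zip(p, dp)]
print(E0, energy(x, p, f, g))
```

The conservation of H along the numerically propagated characteristic is the practical signature of the first integral discussed above.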
The leading contribution to the transition probability has the form

P(x_i, x_f; \tau) \propto e^{-NS}, \quad (11)

where the exponential factor S can be calculated by implementing the Maupertuis principle [20]:

S = S_0 - E\tau = \sum_i \int_0^{\infty} p_i(t)\, \dot{x}_i(t)\, dt - E\tau = \sum_i \int_{\mathrm{trajectory}} p_i\, dx_i - E\tau. \quad (12)

The transition trajectory itself is determined by equations (10) and the first integral H(p, x) = E and, obviously, should minimise the trajectory-dependent term S_0. The transition time is set by E via the relation τ = ∂S_0/∂E [20]. In [1] we considered the transition probability from one metastable equilibrium to another in finite time (E ≠ 0) and found that the probability exponentially increases due to activity spillover. In the present study we augment the results of [1] by introducing transition rates in the infinite time limit corresponding to E = 0.
The system of differential equations (10) for E = 0 is solvable in quadratures. The corresponding solution for the transition trajectory can naturally be broken into two pieces. The first piece corresponds to the transition from the initial equilibrium to the separatrix, −m0(β) → 0. The corresponding formulae read

\dot{m}(t) = \lambda(t)\,[m - \tanh(\beta J m)], \quad (13)
\dot{\lambda}(t) = \lambda(t)\,[1 - m \tanh(\beta J m)] - b\,(\lambda(t) - \lambda_0) - \frac{\lambda(t)\,[m - \tanh(\beta J m)]^2}{1 - m \tanh(\beta J m)}, \quad (14)
p_m = \frac{m - \tanh(\beta J m)}{1 - m \tanh(\beta J m)}, \quad (15)
p_\lambda = 0. \quad (16)

The second piece corresponds to the transition from the separatrix to the other equilibrium, 0 → m0(β). The corresponding formulae read

\dot{m}(t) = -\lambda(t)\,[m - \tanh(\beta J m)], \quad (17)
\dot{\lambda}(t) = \lambda(t)\,[1 - m \tanh(\beta J m)] - b\,(\lambda(t) - \lambda_0), \quad (18)
p_m = 0, \quad (19)
p_\lambda = 0. \quad (20)

We note that despite the symmetry with respect to the m-axis, the transition trajectory is asymmetric, as the external field is non-gradient.
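For Mode 1 the first piece of the trajectory can be traced numerically by integrating Eqs. (13)-(14) from a small perturbation of the metastable point (−m0, λ̃(m0)) up to the separatrix. Below is a minimal sketch for one parameter set of Fig. 2 (b = 1.5, βJ = 1.5, λ0 = 1); the stationary intensity λ̃(m0) = bλ0/(b − 1 + m0²(β)) is obtained by setting λ̇ = 0 in Eq. (18) at m = m0, and all function names are ours.

```python
import math

BETA_J, LAM0, B = 1.5, 1.0, 1.5          # parameters of Fig. 2, Mode 1 branch

def m0(beta_j, tol=1e-13):
    """Positive root of m = tanh(beta_j * m) by fixed-point iteration."""
    m = 0.9
    for _ in range(500):
        m_next = math.tanh(beta_j * m)
        if abs(m_next - m) < tol:
            break
        m = m_next
    return m

def uphill_rhs(m, lam):
    """Eqs. (13)-(14): the zero-energy piece from -m0 toward the separatrix."""
    u = m - math.tanh(BETA_J * m)
    v = 1.0 - m * math.tanh(BETA_J * m)
    return lam * u, lam * v - B * (lam - LAM0) - lam * u * u / v

def rk4(m, lam, dt):
    """One classical Runge-Kutta step for (m, lam)."""
    k1 = uphill_rhs(m, lam)
    k2 = uphill_rhs(m + 0.5 * dt * k1[0], lam + 0.5 * dt * k1[1])
    k3 = uphill_rhs(m + 0.5 * dt * k2[0], lam + 0.5 * dt * k2[1])
    k4 = uphill_rhs(m + dt * k3[0], lam + dt * k3[1])
    return (m + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            lam + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

M0 = m0(BETA_J)
lam_eq = B * LAM0 / (B - 1.0 + M0 * M0)  # stationary intensity at m = -m0
m, lam = -M0 + 1e-3, lam_eq              # small kick off the metastable point
for _ in range(200_000):                 # dt = 1e-3, at most 200 time units
    if m >= -1e-2:                       # reached the neighbourhood of m = 0
        break
    m, lam = rk4(m, lam, 1e-3)
print(m, lam)
```

Along this branch λ grows as m approaches the separatrix, reproducing the left halves of the trajectories in Fig. 2.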
In accordance with the classification of modes introduced in Section 2, for different values of the parameter b the Hawkes transition trajectory either passes through the saddle point, where it has a discontinuity (Mode 1), or diverges at the separatrix m = 0 (Mode 2). The trajectories for various values of the parameter b are shown in Fig. 2.
Figure 2. Transition trajectories −m0(β) → m0(β) in the infinite time limit E = 0, given by Eqs. (13)-(16) (left half) and Eqs. (17)-(20) (right half), for b = 0.5, 1.2, 1.5, 2.0 at βJ = 1.5, λ0 = 1. The trajectories for Mode 1 (b = 1.2, 1.5, 2.0) are defined and have a discontinuity at the saddle, and the trajectory for Mode 2 (b = 0.5) diverges at m = 0.
From Eqs. (12) and (13)-(20) it follows that in the infinite time limit, for which E = 0, the exponential factor S is equal for the Poisson and Hawkes Ising games for all b:

S = \int_{-m_0(\beta)}^{0} \frac{m - \tanh(\beta J m)}{1 - m \tanh(\beta J m)}\, dm. \quad (21)

Therefore, for understanding a possible difference between the Hawkes and Poisson Ising games in the infinite time limit, an analysis of the pre-exponential factor of the transition rate is required.
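Since the integrand in Eq. (21) vanishes at both endpoints, the integral is straightforward to evaluate numerically. A minimal sketch for βJ = 1.5, the value fixed above (function names are ours):

```python
import math

def m0(beta_j):
    """Positive root of m = tanh(beta_j * m) by fixed-point iteration."""
    m = 0.9
    for _ in range(200):
        m = math.tanh(beta_j * m)
    return m

def action_S(beta_j, n=20_000):
    """Trapezoidal quadrature of Eq. (21) over [-m0(beta), 0]."""
    a = -m0(beta_j)
    h = -a / n
    def f(m):
        t = math.tanh(beta_j * m)
        return (m - t) / (1.0 - m * t)
    total = 0.5 * (f(a) + f(0.0))
    for k in range(1, n):
        total += f(a + k * h)
    return total * h

S = action_S(1.5)
print(S)
```

The result is positive, so the transition probability e^{−NS} is exponentially suppressed in N, as expected for a noise-induced escape.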
3.2. Pre-exponential factor of the transition rate

The calculation of the pre-exponential factor for the one-dimensional Poisson game closely follows the original calculation by Kramers [16] and can be done analytically, see e.g. [21, 22].
A more general result for a larger number of dimensions, including the case of non-potential fields, was obtained in [23]. However, this result is not applicable to our two-dimensional Hawkes game, since the transition trajectory in the non-gradient field has a discontinuity; see a related discussion in [24].
When the trajectory is defined (Mode 1), we can use analogies with one-dimensional motion. In the Kramers’ problem for a potential with a smooth barrier, the pre-exponential factor of the escape rate depends on the second derivatives of the potential both at the stationary attractor and at the saddle.
However, if the potential barrier is edge-shaped, the result depends only on the second derivative of the potential at the stationary attractor [25]. This leads us to the assumption that in the Hawkes game the acceleration with respect to the Poisson one is caused only by the corresponding change in the activity of agents in the equilibrium state, with the rest of the motion having no significant effect on the transition time. That means the average transition times in the Hawkes and Poisson games with intensity λ̃(m0) are equal.
Therefore the ratio of transition times in the original Hawkes and Poisson games can be written in the following form:

\frac{t_{tr,P}}{t_{tr,H}} \simeq \frac{b}{b - 1 + m_0^2(\beta)}. \quad (22)
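As a quick numerical illustration of Eq. (22) (the helper names are ours), one can evaluate the predicted ratio for the values of b used in the figures, with m0(β) obtained from the self-consistency condition m = tanh(βJm):

```python
import math

def m0(beta_j):
    """Spontaneous magnetisation: positive root of m = tanh(beta_j * m)."""
    m = 0.9
    for _ in range(200):
        m = math.tanh(beta_j * m)
    return m

def time_ratio(b, beta_j=1.5):
    """Eq. (22): t_{tr,P} / t_{tr,H} ~ b / (b - 1 + m0^2(beta))."""
    return b / (b - 1.0 + m0(beta_j) ** 2)

for b in (1.2, 1.5, 2.0):
    print(b, round(time_ratio(b), 3))
```

Since m0²(β) < 1, the denominator is smaller than b, so the predicted ratio exceeds unity for all b > 1 − m0²: the Hawkes transition is always the faster one, and the enhancement weakens as b grows.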
To check the above-formulated assumption we have performed computer simulations of the Hawkes and Poisson games, as well as of the Langevin equations that correspond to Eq. (4). We have also checked that the transition time ratio does not depend on the number of agents for N ≥ 20, i.e. when the number of agents is sufficiently large.
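The Langevin equations used in [1] are not reproduced in this section, so the following is only a schematic stand-in (all names and the parameter choice N = 10 are ours): it simulates the one-dimensional effective dynamics at a fixed intensity λ, with drift λ(tanh(βJm) − m) read off from Eq. (17) at p = 0 and a diffusion coefficient λ(1 − m tanh(βJm))/N inferred from Eqs. (9) and (15). It only illustrates that the mean escape time scales down as the intensity grows, which is the mechanism behind Eq. (22).

```python
import math
import random

BETA_J, N = 1.5, 10
random.seed(7)

def m0(beta_j):
    """Positive root of m = tanh(beta_j * m)."""
    m = 0.9
    for _ in range(200):
        m = math.tanh(beta_j * m)
    return m

M0 = m0(BETA_J)

def mean_passage_time(lam, n_runs=100, dt=2e-3, max_steps=500_000):
    """Euler-Maruyama estimate of the mean first-passage time from -m0 to the
    separatrix m = 0 for the 1-D effective dynamics at fixed intensity lam."""
    total = 0.0
    for _ in range(n_runs):
        m = -M0
        steps = 0
        while m < 0.0 and steps < max_steps:
            t = math.tanh(BETA_J * m)
            drift = lam * (t - m)                    # relaxation toward -m0
            diff = max(lam * (1.0 - m * t) / N, 0.0) # assumed diffusion, >= 0
            m += drift * dt + math.sqrt(2.0 * diff * dt) * random.gauss(0.0, 1.0)
            m = max(m, -1.0)                         # keep m inside [-1, 1]
            steps += 1
        total += steps * dt
    return total / n_runs

t_slow = mean_passage_time(1.0)
t_fast = mean_passage_time(2.0)
print(t_slow, t_fast)
```

Because both the drift and the assumed diffusion scale linearly with λ, the first-passage time scales as 1/λ, so the sample mean at the doubled intensity comes out roughly twice smaller.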
A comparison of the results of these simulations with Eq. (22) is shown in Fig. 3. From Fig. 3 we see that the activity in the Hawkes game as compared to the Poisson one is indeed enhanced.
A more detailed conclusion is that in the regime corresponding to Mode 1 the formula in Eq. (22) works well for both the continuum and discrete cases, but in the regime corresponding to Mode 2 it does not agree with the exact discrete formulation, due to the divergence present in the continuum generalisation of the game.
Despite this, the shape of the transition trajectory still provides us with a qualitatively correct insight into the behaviour of agents, see Fig. 4. Let us note that the decision process does significantly intensify around the separatrix, i.e. when agents are uncertain of which of the two (quasi)stable equilibria to choose. Once the decision is made, the agents calm down.
4. Conclusions

We have studied the self-excited Ising game on a complete graph. In spite of its simplicity, it has rich dynamics exhibiting various types of behaviour. Competition of “calming down” and “activation” in the Hawkes self-excitation mechanism at different levels of noise results in three possible modes (phases).
Figure 3. Ratio of transition times in the Hawkes and Poisson cases as a function of b. Triangles show simulation results for games (discrete model), and squares show results for Langevin equations (continuum model). The line refers to the theoretical prediction given by Eq. (22). The dashed line b = 1 separates Mode 1 (blue area) from Mode 2 (yellow area).
Figure 4. Example of a transition in the Hawkes game (red line) for b = 0.5, λ0 = 1, N = 20. As in the corresponding transition trajectory (blue line), the intensity of the decision-making process increases near m = 0.
We expect that this competition might play an important role in other situations, e.g. for non-exponential Hawkes kernels [26] or for more complicated graph topologies.
Another focus of this work was to investigate the probability of transition between metastable equilibria in the infinite time limit. This is a very challenging task for a multi-dimensional case when the external field is non-gradient and has a discontinuity. Also, since in the relevant one-dimensional case (i.e. when the potential field only has the discontinuity) it is known that the dynamics for such fields is rather different than for smooth potential fields [27], it would be natural to assume a similar situation in the multi-dimensional case. However, based on the intuitive understanding of the considered model, we have presented an approach that allows us to reduce the problem to calculating the transition time in the corresponding one-dimensional model.
The analytically calculated transition trajectory also gave us a qualitative insight into the behaviour of agents in the corresponding discrete system. As for further developments of the suggested approach, an interesting idea would be working out its generalisation for two- and multi-dimensional systems.
Compared to other existing approaches for treating the case of a non-gradient external field (see e.g. [28, 29]), this newly introduced method could present a workable alternative due to its simplicity.
References

[1] A. Antonov, A. Leonidov, and A. Semenov. Self-excited Ising game. Physica A: Statistical Mechanics and its Applications, 561:125305, 2021.
[2] Lawrence Blume and Steven Durlauf. Equilibrium concepts for social interaction models. International Game Theory Review, 05(03):193–209, 2003.
[3] Jean-Philippe Bouchaud. Crises and collective socio-economic phenomena: Simple models and challenges. Journal of Statistical Physics, 151:567–606, 2013.
[4] Silvio Salinas. Introduction to Statistical Physics. Springer Science & Business Media, 2001.
[5] Andrey Leonidov, Alexey Savvateev, and Andrew G. Semenov. Quantal response equilibria in binary choice games on graphs. arXiv preprint arXiv:1912.09584, 2019.
[6] A. Leonidov, A. Savvateev, and A. Semenov. QRE in the Ising game. CEUR Workshop Proceedings, 2020.
[7] Andrey Leonidov, Alexey Savvateev, and Andrew G. Semenov. Ising game on graphs. arXiv preprint arXiv:2108.00824, 2021.
[8] Jacob K. Goeree, Charles A. Holt, and Thomas R. Palfrey. Quantal Response Equilibria. Springer, 2016.
[9] Alan G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83–90, 1971.
[10] V. Filimonov and D. Sornette. Apparent criticality and calibration issues in the Hawkes self-excited point process model: application to high-frequency financial data. Quantitative Finance, 15(8):1293–1314, 2015.
267
+ page_content=' [11] Stephen J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
268
+ page_content=' Hardiman, Nicolas Bercot, and Jean-Philippe Bouchaud.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
269
+ page_content=' Criti- cal reflexivity in financial markets: a hawkes process analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
270
+ page_content=' The European Physical Journal B, 86:442, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
271
+ page_content=' [12] Yosihiko Ogata.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
272
+ page_content=' Statistical models for earthquake occurrences and residual analysis for point processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
273
+ page_content=' Journal of the American Statistical Associa- tion, 83(401):9–27, 1988.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
274
+ page_content=' [13] Patrick J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
275
+ page_content=' Laub, Thomas Taimre, and Philip K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
276
+ page_content=' Pollett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
277
+ page_content=' Hawkes processes, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
278
+ page_content=' [14] Kiyoshi Kanazawa and Didier Sornette.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
279
+ page_content=' Field master equation theory of the self-excited hawkes process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
280
+ page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
281
+ page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
282
+ page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
283
+ page_content=', 2:033442, Sep 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
284
+ page_content=' [15] Kiyoshi Kanazawa and Didier Sornette.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
285
+ page_content=' Nonuniversal power law distri- bution of intensities of the self-excited hawkes process: A field-theoretical approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
286
+ page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
287
+ page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
288
+ page_content=' Lett.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
289
+ page_content=', 125:138301, Sep 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
290
+ page_content=' [16] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
291
+ page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
292
+ page_content=' Kramers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
293
+ page_content=' Brownian motion in a field of force and the diffusion model of chemical reactions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
294
+ page_content=' Physica, 7(4):284 – 304, 1940.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
295
+ page_content=' 12 [17] Jean-Michel Lasry and Pierre-Louis Lions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
296
+ page_content=' Mean field games.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
297
+ page_content=' Japanese journal of mathematics, 2(1):229–260, 2007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
298
+ page_content=' [18] Haidong Feng, Kun Zhang, and Jin Wang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
299
+ page_content=' Non-equilibrium transition state rate theory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
300
+ page_content=' Chem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
301
+ page_content=' Sci.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
302
+ page_content=', 5:3761–3769, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
303
+ page_content=' [19] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
304
+ page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
305
+ page_content=' Maslov and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
306
+ page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
307
+ page_content=' Fedoriuk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
308
+ page_content=' Semiclassical Approximation in Quantum Mechanics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
309
+ page_content=' Reidel, Dordrecht, 1981.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
310
+ page_content=' [20] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
311
+ page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
312
+ page_content=' Landau and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
313
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
314
+ page_content=' Lifshitz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
315
+ page_content=' Mechanics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
316
+ page_content=' Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
317
+ page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
318
+ page_content=' Butterworth- Heinemann, 1976.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
319
+ page_content=' [21] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
320
+ page_content=' Caroli, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
321
+ page_content=' Caroli, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
322
+ page_content=' Roulet.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
323
+ page_content=' Diffusion in a bistable potential: The functional integral approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
324
+ page_content=' Journal of Statistical Physics, 26:83– 111, 1981.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
325
+ page_content=' [22] Sidney Coleman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
326
+ page_content=' Aspects of Symmetry: Selected Erice Lectures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
327
+ page_content=' Cambridge University Press, 1985.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
328
+ page_content=' [23] Freddy Bouchet and Julien Reygner.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
329
+ page_content=' Generalisation of the eyring–kramers transition rate formula to irreversible diffusion processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
330
+ page_content=' Annales Henri Poincar´e, 17:3499–3532, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
331
+ page_content=' [24] Daisy Dahiya and Maria Cameron.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
332
+ page_content=' Ordered line integral methods for com- puting the quasi-potential.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
333
+ page_content=' Journal of Scientific Computing, 75, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
334
+ page_content=' [25] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
335
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
336
+ page_content=' Matkowsky, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
337
+ page_content=' Schuss, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
338
+ page_content=' Ben-Jacob.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
339
+ page_content=' A singular perturbation approach to kramers’ diffusion problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
340
+ page_content=' SIAM Journal on Applied Math- ematics, 42(4):835–849, 1982.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
341
+ page_content=' [26] Jean-Philippe Bouchaud, Julius Bonart, Jonathan Donier, and Martin Gould.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
342
+ page_content=' Trades, Quotes and Prices: Financial Markets Under the Micro- scope.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
343
+ page_content=' Sect.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
344
+ page_content=' 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
345
+ page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
346
+ page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
347
+ page_content=' Cambridge University Press, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
348
+ page_content=' [27] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
349
+ page_content=' Dekker.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
350
+ page_content=' Kramers’ activation rate for a sharp edged potential barrier: The double oscillator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
351
+ page_content=' Physica A: Statistical Mechanics and its Applica- tions, 136(1):124–146, 1986.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
352
+ page_content=' [28] Nicholas Paskal and Maria Cameron.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
353
+ page_content=' An efficient jet marcher for computing the quasipotential for 2d sdes, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
354
+ page_content=' [29] Peter Ashwin, Jennifer Creaser, and Krasimira Tsaneva-Atanasova.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
355
+ page_content=' Quasipotentials for coupled escape problems and the gate-height bifurca- tion, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
356
+ page_content=' 13' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/KNAzT4oBgHgl3EQfVfxu/content/2301.01285v1.pdf'}
L9AyT4oBgHgl3EQf6voI/content/tmp_files/2301.00825v1.pdf.txt ADDED
@@ -0,0 +1,1024 @@
+ MNRAS 000, 1–9 (2021)    Preprint 4 January 2023    Compiled using MNRAS LaTeX style file v3.0
+
+ Dissecting the active galactic nucleus in Circinus IV. MUSE NFM observations unveil a tuning-fork ionised outflow morphology
+
+ D. Kakkad1★, M. Stalevski2,3, M. Kishimoto4, S. Knežević2, D. Asmus5,6, F. P. A. Vogt7
+ 1 Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
+ 2 Astronomical Observatory, Volgina 7, 11060 Belgrade, Serbia
+ 3 Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281-S9, Gent, 9000, Belgium
+ 4 Department of Astrophysics & Atmospheric Sciences, Faculty of Science, Kyoto Sangyo University, Kamigamo-motoyama, Kita-ku, Kyoto 603-8555, Japan
+ 5 Department of Physics & Astronomy, University of Southampton, Southampton, SO17 1BJ, UK
+ 6 Gymnasium Schwarzenbek, 21493 Schwarzenbek, Germany
+ 7 Federal Office of Meteorology and Climatology - MeteoSwiss, Chemin de l'Aérologie 1, 1530 Payerne, Switzerland
+
+ Accepted XXX. Received YYY; in original form ZZZ
+
+ ABSTRACT
+ We present the ionised gas outflow morphology in the Circinus galaxy using the Narrow Field Mode (NFM) of the MUSE instrument on board the Very Large Telescope (VLT). The NFM observations provide a spatial resolution of ∼0.1′′, corresponding to a physical scale of ∼2 pc, one of the highest spatial resolutions achievable with ground-based AO-assisted observations at optical wavelengths. The MUSE observations reveal a collimated, clumpy outflow profile originating near the AGN location and extending up to 1.5′′ (∼30 pc) in the NW direction. The collimated structure then fragments into two filaments, giving the entire outflowing gas a “tuning-fork” morphology. These structures remain undetected in the lower spatial resolution MUSE Wide Field Mode data. We attribute the origin of this tuning-fork structure to the interaction of the outflow with a dense clump in the interstellar medium (ISM) as the outflow propagates outward. The collimated structure itself could originate from jet–ISM interactions on small scales. These observations also provide evidence for the origin of the ionised gas filaments previously observed in the Circinus galaxy out to kiloparsec scales. We find instantaneous and time-averaged mass outflow rates of 10−2 M⊙ yr−1 and 10−4 M⊙ yr−1, respectively. Based on the star formation rate in the Circinus galaxy reported in the literature, the observed ionised outflows are not expected to regulate star formation within the ∼100 pc scales probed by the NFM data.
+
+ Key words: galaxies: active – galaxies: individual – galaxies: ISM – galaxies: kinematics and dynamics – galaxies: nuclei – galaxies: Seyfert
+ 1 INTRODUCTION
+
+ The so-called Unified Model (UM) of active galactic nuclei (AGN) consists of a central black hole surrounded by an equatorial torus-like structure, which is responsible for the angle-dependent obscuration of the accretion disk and, in some cases, may include collimated jets along the polar directions (e.g., Antonucci 1993; Urry & Padovani 1995; Netzer 2015). The equatorial torus has long been believed to dominate the infrared emission from the AGN (see Ramos Almeida & Ricci 2017, and the references therein). There have been intensive efforts in the literature, from both observational and modelling perspectives, to study the nature of this torus, such as its geometry (e.g., Hönig 2019; García-Burillo et al. 2021), its typical dust covering factors (e.g., Elitzur 2012; Stalevski et al. 2016; Toba et al. 2021), and whether the material is clumpy or smooth (e.g., Dullemond & van Bemmel 2005; Marin et al. 2015; García-González et al. 2017).
+
+ High-resolution mid-infrared observations over the past decade have challenged this simplified torus model, in which dusty clumps are confined to the equatorial region (Nenkova et al. 2002). Several studies in the literature now show strong infrared emission along the polar direction, on scales of a few parsecs (e.g., Hönig et al. 2012, 2013; López-Gonzaga et al. 2016; Hönig & Kishimoto 2017; Leftley et al. 2018) to hundreds of parsecs (e.g., Braatz et al. 1993; Bock et al. 2000; Asmus et al. 2016; Asmus 2019). As a result, the dust emission around the AGN is believed to consist of two components: an equatorial thin disk and a polar extended feature that could originate from winds launched by the central engine (see Hönig 2019, and the references therein).
+
+ The polar wind is believed to have a multi-phase composition, ranging from dust to ionised and molecular gas components. In fact, recent high spatial resolution observations of nearby AGN have targeted the molecular gas distribution around the torus using ALMA, revealing high-velocity outflows in the molecular gas phase (e.g., Gallimore et al. 2016; Combes et al. 2019; García-Burillo et al. 2019; Lopez-Rodriguez et al. 2020). In order to get a holistic view of the multi-phase dusty gas flows around the torus, it is imperative to obtain the morphology and kinematics of the ionised gas on the same scales as the molecular gas and infrared emission. Furthermore, obtaining the outflow morphology on such small spatial scales can also give clues to the outflow launching mechanism and connect it to the kiloparsec-scale structures or outflows reported in the literature, wherever available. Thanks to the Narrow Field Mode (NFM) capabilities of the Multi Unit Spectroscopic Explorer (MUSE; Bacon et al. 2010) at the Very Large Telescope (VLT), such high spatial resolution observations can now be performed with ground-based integral field spectroscopic instruments operating at optical wavelengths. The optical wavelengths provide access to bright emission lines such as [O iii]𝜆5007 that trace ionised gas in the Narrow Line Region (NLR). Furthermore, emission lines such as H𝛼, H𝛽, [N ii]𝜆𝜆6549, 6585 and [S ii]𝜆𝜆6716, 6731 help derive dust extinction maps and diagnostic diagrams that trace the source of ionisation across the field-of-view.
+
+ The Circinus galaxy is the closest Seyfert 2 galaxy (∼4.2 Mpc away, z = 0.001; Freeman et al. 1977) and hosts an infrared-bright AGN (e.g., Jarrett et al. 2019). The polar axis of the AGN in the Circinus galaxy is seen almost edge-on, making it an ideal target to study the relative gas and dust structure in a typical obscured AGN. Recent mid-infrared (MIR) observations using the upgraded VISIR instrument at the VLT suggest the presence of dust in the form of a hollow cone at the edges of the ionised outflow (e.g., Stalevski et al. 2017). A polar elongation of the infrared emission has also been reported on parsec scales (e.g., Tristram et al. 2014; Stalevski et al. 2019) using the Mid-Infrared Interferometric Instrument (MIDI) and the Multi AperTure mid-infrared SpectroScopic Experiment (MATISSE; e.g., Isbell et al. 2022) at the VLT. The presence of dust in the form of a hollow cone in the polar region is also confirmed by multi-band optical polarimetry with VLT/FORS2 (Stalevski et al., submitted). The galaxy also hosts powerful outflows in the ionised gas phase, visible in the form of a one-sided ionisation cone extending up to ∼1 kiloparsec (e.g., Marconi et al. 1994; Veilleux & Bland-Hawthorn 1997; Mingozzi et al. 2019; Fonseca-Faria et al. 2021; Kakkad et al. 2022). The Circinus galaxy hosts an obscured AGN with nuclear star formation that dominates the dust emission on scales of hundreds of parsecs (e.g., Matt et al. 2000; Arévalo et al. 2014).
+
+ In this fourth paper in the series, we map the morphology and kinematics of ionised gas in the Circinus galaxy at ∼2 pc resolution using MUSE-NFM observations. We present a model of the ionisation cone and the resulting outflowing gas structure. The observed ionised outflow morphology obtained from the NFM observations is compared with the larger scale outflows observed with the Wide Field Mode (WFM) of MUSE, to understand outflow propagation across the host galaxy. We locate the regions with high dust extinction and compare them with archival mid-infrared images. Lastly, the MUSE data are compared with other archival radio observations to infer the presence of jet–ISM interaction in the host galaxy.
+
+ Throughout this paper, we adopt the following ΛCDM cosmological parameters: 𝐻0 = 70 km s−1 Mpc−1, ΩM = 0.3 and ΩΛ = 0.7. All the maps use the following convention: North is up and East is to the left.
+
+ ★ E-mail: dkakkad@stsci.edu
+
+ © 2021 The Authors
+ arXiv:2301.00825v1 [astro-ph.GA] 2 Jan 2023
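The angular-to-physical scale used throughout the paper (0.1′′ ≃ 2 pc at the ∼4.2 Mpc distance of Circinus) follows from the small-angle approximation. A minimal sketch, assuming the distance quoted in the text and neglecting cosmological corrections, which are negligible at z = 0.001:

```python
# Angular size to projected physical scale at the distance of Circinus.
# Assumes D = 4.2 Mpc (from the text); the small-angle approximation
# is more than adequate at z = 0.001.
ARCSEC_PER_RAD = 206265.0
D_PC = 4.2e6  # distance in parsec

def arcsec_to_pc(theta_arcsec):
    """Projected physical scale (pc) subtended by an angle (arcsec)."""
    return D_PC * theta_arcsec / ARCSEC_PER_RAD

print(arcsec_to_pc(0.1))  # ~2 pc: the NFM spatial resolution
print(arcsec_to_pc(1.5))  # ~30 pc: extent of the collimated outflow
```

The same conversion gives ∼150 pc for the full 7.5′′ field-of-view, consistent with the ∼100 pc scales quoted for distances from the AGN.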
+ 2 MUSE-NFM OBSERVATIONS & DATA REDUCTION
+
+ The observations were performed using the Laser Tomographic Adaptive Optics (LTAO) assisted Narrow Field Mode of the MUSE instrument, on board Unit Telescope 4 of the VLT1. The observations were carried out on the nights of 29 and 30 April 2019 with a DIMM seeing in the range 0.89–1.01′′. We observed the galaxy with an optimised sequence, O-S-O-O-S-O (O = Object, S = Sky), where the Sky was obtained at an offset position ∼1.5 arcmin away, outside the galaxy. We applied small dithers between the individual science exposures and rotated the field by 90 degrees on each subsequent exposure to eliminate the impact of bad pixels and to average out the patterns of slicers and channels. The nucleus of the Circinus galaxy has an H-band magnitude of 13.4 and therefore served as the Adaptive Optics (AO) reference for the Wavefront Sensor (WFS). The total on-source exposure time was ∼4000 s.
+
+ The raw data were reduced using the standard MUSE pipeline (e.g., Weilbacher et al. 2014, 2020). The pipeline performs bias correction, flat fielding, wavelength and astrometry calibration, sky subtraction and flux calibration. The final data cube has a field-of-view of ∼7.5×7.5 arcsec2 centred on the nucleus (AGN), with a spatial sampling of 0.025′′. As the observations were performed in the Nominal mode, this provided a uniform wavelength coverage between 4800–9300 Å, with a gap between 5780–6050 Å due to the presence of a notch filter that suppresses the sodium laser light. The spectral PSF (also known as the Line Spread Function, LSF) is in the range 2.5–2.9 Å, with the best resolution obtained at the redder end of the spectra. This corresponds to a velocity resolution of ∼150 km s−1 at the location of the [O iii]𝜆5007 line, one of the emission lines analysed in this paper. The LTAO-assisted observations resulted in a spatial PSF of ∼0.1′′, determined using one of the point sources in the field-of-view. This spatial resolution is among the highest that can be achieved with ground-based IFS observations. At the redshift of Circinus, it corresponds to a physical scale of ∼2 pc, which means that the observations can potentially resolve the region near the AGN torus. With a field-of-view of 7.5′′×7.5′′, the NFM observations trace spatial scales up to ∼100 pc from the AGN location.
+
+ 1 ESO programme ID: 0103.B-0396(A)
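The quoted ∼150 km s−1 follows directly from the LSF width. A sketch converting a spectral FWHM to a velocity FWHM, using the 2.5 Å end of the quoted LSF range (the exact LSF value at 5007 Å is not stated in the text):

```python
C_KMS = 299792.458  # speed of light in km/s

def lsf_velocity_fwhm(fwhm_angstrom, wavelength_angstrom):
    """Velocity resolution (km/s) corresponding to a spectral LSF FWHM
    at a given observed wavelength."""
    return C_KMS * fwhm_angstrom / wavelength_angstrom

# ~150 km/s at [O III] 5007 A for a 2.5 A LSF, matching the text
print(lsf_velocity_fwhm(2.5, 5007.0))
```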
+ 3 ANALYSIS
+
+ In order to derive the flux and velocity maps from the optical emission lines, we first subtract the stellar continuum emission across the MUSE field-of-view. We perform the stellar continuum fitting using the LZIFU tool (e.g., Ho et al. 2016; Kreckel et al. 2018), which adopts the penalised pixel fitting routine (pPXF; Cappellari & Emsellem 2004; Cappellari 2017) to fit the stellar continuum using input stellar spectral templates (from González Delgado et al. 2005) or modelled simple stellar populations (SSPs) that are convolved with parametrised velocity distributions. In doing so, regions of the spectra with strong skylines and emission lines intrinsic to the host galaxy were masked. The key emission lines that were masked were H𝛽, [O iii]𝜆𝜆4959, 5007, [N ii]𝜆𝜆6549, 6585, H𝛼 and [S ii]𝜆𝜆6716, 6731 ([S ii] doublet hereafter). We also mask the notch filter region of the spectrum that is contaminated by the sodium doublet emission from the lasers. Being a Seyfert 2 galaxy, the Circinus galaxy does not display broad emission lines from the Broad Line Region (BLR).
180
+ The resulting stellar continuum-subtracted data cubes were then
181
+ used to analyse the morphology, kinematics and the ionisation mech-
182
+ anism of the gas using strong emission lines in the optical spectra. For
183
+ instance, we used the [O iii]𝜆5007 line ([O iii] hereafter) to trace the
184
+ ionised gas in the Narrow Line Region (NLR), the Balmer lines H𝛼
185
+ and H𝛽 lines are used to trace potential star formation and dust extinc-
186
+ tion in the host galaxy (using Balmer decrement), the [N ii]𝜆𝜆6549,
187
+ 6585 lines are used in investigating the ionisation source in each
188
+ pixel (AGN or star formation) and the [S ii] doublet are used to trace
189
+ the ionised gas electron density. We model these emission lines with
190
+ multiple Gaussian functions using the scipy.curve-fit package in
191
+ MNRAS 000, 1–9 (2021)
192
Python (see Virtanen et al. 2020). Initially, a single Gaussian is fitted to the emission line profile, and additional Gaussian functions are added if the 𝜒² is minimised, until the line fluxes are stable within ∼10%. Circinus does not display BLR emission in its H𝛽 or H𝛼 profiles, and the maximum number of Gaussians required to model an emission line was two. We do not assign any physical significance to the individual Gaussian components, as we follow a non-parametric approach to derive the properties associated with the systemic and outflow components. The non-parametric approach was chosen over a parametric model as it does not depend on the choice of the models used for the emission lines (e.g., Gaussian, Lorentzian or a power-law). In addition, we tied the line centroid of the [O iii] line with that of H𝛽, and the [N ii] and [S ii] lines with that of H𝛼, based on the expected locations of the respective atomic species. The emission line ratios [O iii]𝜆5007:[O iii]𝜆4959 and [N ii]𝜆6585:[N ii]𝜆6549 were set approximately equal to 3:1 based on expectations from theory (e.g., Osterbrock & Ferland 2006; Dimitrijević et al. 2007). Lastly, the FWHM of the individual Gaussian components of the [O iii] line was coupled with that of H𝛽, and the H𝛼 FWHM with that of [N ii].

NFM observations of Circinus 3

Figure 1. The top left panel shows an RGB colour image of the Circinus galaxy, derived from the NFM data. The field-of-view of the RGB image is 7.5′′×7.5′′. The red hue indicates the H𝛼 emission, while the ionised gas cone is apparent in green. The white square at the edge of the ionisation cone shows the pixel from which the example spectrum shown in this figure was extracted. The middle and right panels in the top row show an example of a stellar continuum fit, and the middle and right panels in the bottom row show the emission line fitting results for H𝛽, [O iii]𝜆𝜆4959,5007, [N ii]𝜆𝜆6549,6585, H𝛼 and [S ii]𝜆𝜆6716,6731. In all the spectra, the grey curve shows the extracted data from the pixel location shown in the median image, the magenta curve shows the stellar continuum model, the green and blue curves show the individual Gaussian components used to model the emission lines, and the red curve shows the overall model. The bottom left panel illustrates the definition of outflow and systemic flux used in this paper. Further details are given in Sect. 3.
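As an illustration of the incremental fitting scheme described above, the sketch below fits one and then two Gaussians to a synthetic line profile with scipy.curve_fit, keeping the second component only if it improves 𝜒². This is a minimal sketch on synthetic data, not the paper's pipeline; the ∼10% flux-stability criterion is approximated here by a simple 𝜒² threshold of our own choosing.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *params):
    """Sum of Gaussians; params = (amp, centre, sigma) per component."""
    model = np.zeros_like(x)
    for i in range(0, len(params), 3):
        amp, mu, sig = params[i:i + 3]
        model += amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)
    return model

# Synthetic [O III]-like profile: a narrow core plus a blue-shifted wing.
rng = np.random.default_rng(0)
x = np.linspace(4980.0, 5030.0, 400)
truth = gaussians(x, 10.0, 5007.0, 1.5, 3.0, 5004.0, 4.0)
y = truth + rng.normal(0.0, 0.1, x.size)

# Fit one Gaussian, then two; keep the second component only if it
# lowers chi^2 appreciably (hedged stand-in for the ~10% criterion).
p1, _ = curve_fit(gaussians, x, y, p0=[8.0, 5007.0, 2.0])
chi2_1 = np.sum((y - gaussians(x, *p1)) ** 2)
p2, _ = curve_fit(gaussians, x, y, p0=[8.0, 5007.0, 1.5, 2.0, 5004.0, 4.0])
chi2_2 = np.sum((y - gaussians(x, *p2)) ** 2)
use_two = chi2_2 < 0.9 * chi2_1
```

Note that curve_fit infers the number of free parameters from the length of `p0` when the model takes `*params`.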
From the line fitting procedure, we are interested in the following parameters: the 10th percentile velocity 𝑣10 (the blue-shifted [O iii] velocity that contains 10% of the overall [O iii] flux); the width of the emission line 𝑤80 (the width containing 80% of the flux of the emission line, see Harrison et al. 2014; Kakkad et al. 2016; Wylezalek et al. 2020); and the flux of the outflowing and systemic components of each of the emission lines. To determine the fluxes, we first define the zero-velocity location in the emission line spectra, which is the location of the peak of the emission line. The flux within ±300 km s−1 on either side of this zero-velocity location is considered to be the systemic component of the emission line. The choice of 300 km s−1 is based on the line width (FWHM) cut of ∼600 km s−1 which is often employed in the literature to distinguish between outflowing and non-outflowing gas (e.g., Kakkad et al. 2020). Using similar arguments, we use the flux outside of the ±300 km s−1 channels to define the flux associated with outflows. Using lower velocity cuts such as 250 or 200 km s−1 yields similar results, but due to the possibility of contamination from the non-outflowing gas at the lower velocities, we make a conservative cut of 300 km s−1. Figure 1 shows an example of the analysis methods and the fitting results presented in this section.
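The non-parametric quantities defined above can be computed directly from the cumulative flux distribution of a line profile. A minimal sketch (the helper is ours), applied to a synthetic profile with a narrow systemic core and a broad blue-shifted wing:

```python
import numpy as np

def nonparametric_stats(v, flux, v_cut=300.0):
    """v10, w80 and the systemic/outflow flux split for a line profile.

    v    : velocity channels in km/s (zero at the line peak)
    flux : flux density per channel
    v10 is the velocity below which 10% of the flux lies; w80 is the
    width between the 10th and 90th flux percentiles, as defined above.
    """
    cdf = np.cumsum(flux) / np.sum(flux)
    v10 = np.interp(0.10, cdf, v)
    v90 = np.interp(0.90, cdf, v)
    w80 = v90 - v10
    systemic = np.sum(flux[np.abs(v) <= v_cut])
    outflow = np.sum(flux[np.abs(v) > v_cut])
    return v10, w80, systemic, outflow

# Example: narrow core at v = 0 plus a broad wing centred at -400 km/s.
v = np.linspace(-1500.0, 1500.0, 3001)
core = np.exp(-0.5 * (v / 100.0) ** 2)
wing = 0.3 * np.exp(-0.5 * ((v + 400.0) / 300.0) ** 2)
v10, w80, f_sys, f_out = nonparametric_stats(v, core + wing)
```

For this profile the wing drives 𝑣10 well blueward of the ±300 km s−1 systemic window, while the core keeps the systemic flux dominant.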
4 RESULTS & DISCUSSION

In this section, we show the results of the analysis methods described in Sect. 3. The main aim of this section is to derive the ionised gas outflow morphology and kinematics close to the AGN torus and compare these with archival multi-wavelength data, including the lower spatial resolution MUSE-WFM data.
4.1 The parsec-scale ionised outflow in the Circinus galaxy

Figure 2 shows the flux maps of the systemic (left panel) and outflowing component (middle panel) of the [O iii] emission line from the NFM data. The systemic [O iii] flux map traces the ionisation cone originating from the AGN location and extending towards the NW direction from the nucleus. The presence of the ionisation cone in the Circinus galaxy has previously been reported in the literature, including MUSE Wide Field Mode (WFM) observations (e.g., Marconi et al. 1994; Fischer et al. 2013; Mingozzi et al. 2019; Fonseca-Faria et al. 2021; Kakkad et al. 2022).

D. Kakkad et al.

Figure 2. The left panel shows the systemic [O iii] flux (|𝑣| < 300 km s−1) that traces the ionisation cone. The middle panel shows the outflowing component of the [O iii] flux (|𝑣| > 300 km s−1). The outflow morphology suggests that the gas propagates along a collimated structure before it fragments into two filaments, giving it a "tuning-fork" resemblance. The green contours in the middle panel show the [O iii] outflow contours within the collimated structure, highlighting that the outflowing structure itself shows the presence of clumps. Thanks to the 0.1′′ spatial resolution achieved with the NFM observations, such structures are barely visible in the archival WFM data shown in the right panel. In all the panels, the blue "X" marks the AGN location and the black bar on the bottom left shows the 20 pc scale.

Figure 3. The top left panel shows the stellar velocity map obtained from the stellar continuum fitting. The stellar velocity map shows a smooth rotation-like profile of the host galaxy. The [O iii] centroid velocity profile (top centre panel) approximately mimics the stellar velocity field, suggesting that the bulk of the ionised gas cone co-rotates with the host galaxy. The top right panel shows the residual map after subtracting the stellar velocity map from the [O iii] velocity map. We observe residuals at the locations of the "tuning-fork" structure (red contour), suggesting that it is part of the non-rotating component. The positive residuals in the filament directed towards the West and the negative residuals in the filament towards the North show that the outflow itself is co-rotating with the ionised gas and the host galaxy. The bottom panels show the non-parametric velocities, 𝑣10 and 𝑤80, described in Sect. 3. Both these velocities confirm that the high velocity regions are along the collimated structure that fragments into two filaments ∼1.5′′ from the AGN location. Furthermore, the presence of this structure in the 𝑣10 map shows that the dominant component of the outflow is blue-shifted. The black "X" in all maps indicates the AGN location.

The systemic flux dominates the bulk of the ionised gas flux in the host galaxy, exceeding the [O iii] outflow flux by nearly two orders of magnitude. The [O iii] outflow map (middle panel, Fig. 2), on the other hand, shows a collimated structure that originates close to the AGN location and extends towards the NW of the nucleus (the same direction and approximately the same PA as the ionisation cone). The collimated structure itself is not uniform and shows multiple clumps. Such clumps have also been previously reported in extended radial ionised gas filaments of the Circinus galaxy (e.g., Veilleux & Bland-Hawthorn 1997). We note that the location of the first clump is not coincident with the AGN location, but ≈0.4′′ NW of the nucleus. In Section 5, we further discuss the origin of these clumps and whether they could be produced by shocks within the outflowing wind.

Beyond ∼1.5′′ from the AGN location (∼30 pc) in the NW direction, the collimated structure fragments into two filaments, one towards the West and another towards the North, which gives the overall outflow morphology a "tuning-fork" resemblance. The impact of the high resolution NFM observations is clear, as such pc-scale filaments and fragmenting structures are not visible in the archival low resolution (∼0.5′′) MUSE WFM data, as shown in the right panel of Fig. 2.

Figure 3 shows velocity maps of the stellar component and the ionised gas of the Circinus galaxy, derived from the NFM observations. The top left panel in Fig. 3 shows the stellar velocity distribution
Figure 4. The map shows the mass outflow rate distribution for the ionised gas derived from the [O iii]𝜆5007 line in the MUSE-NFM observations of the Circinus galaxy. The mass outflow rates are higher in regions with higher outflow velocity.
in the host galaxy obtained from the stellar continuum modelling. The velocity map shows a smooth gradient indicating a rotation-like profile, with the axis of rotation aligned approximately along the axis of the ionisation cone. The [O iii] centroid map, shown in the top centre panel of Fig. 3, mimics the stellar velocity map, i.e., the ionised gas co-rotates with the host galaxy. The [O iii] centroid velocity profile is also consistent with previous MUSE-WFM results from the literature (e.g., Fonseca-Faria et al. 2021). On subtracting the stellar velocity component from the [O iii] centroid velocity, we see clear residuals at the locations of the "tuning-fork" structure, as shown in the top right panel of Fig. 3. This confirms that the outflow flux shown in the middle panel of Fig. 2 is indeed part of the non-systemic component of the host galaxy. Furthermore, the positive and negative residuals in the West and North arms, respectively, in the residual map in the top right panel of Fig. 3 indicate that the fork structure itself is co-rotating with the host galaxy and the ionisation cone. The bottom panels in Fig. 3 show the [O iii] 𝑣10 and 𝑤80 maps (left and right panels respectively). Both the 𝑣10 and 𝑤80 maps confirm the results seen in the outflow maps, i.e., the high velocity regions are collimated up to ∼1.5′′ from the AGN location before they fragment into two filaments. The presence of the tuning-fork structure in the 𝑣10 map suggests that most of the observed outflow flux is dominated by blue-shifted emission. We note that the stellar velocity map in the top left panel of Fig. 3 also shows a "V-shaped" structure at approximately the same location where the high velocity regions fragment into the two arms, suggesting that the material within the cone is both outflowing and co-rotating with the host galaxy.

We also derived the ionised gas mass outflow rate using the [O iii] line, adopting methods from the literature (e.g., Rupke et al. 2005; Genzel et al. 2011; Veilleux et al. 2020; Kakkad et al. 2022). We report two kinds of outflow rate values: the instantaneous outflow rate (Ṁ_inst), the sum of the mass outflow rates calculated for each pixel, and the time-averaged mass outflow rate (Ṁ_Tavg), calculated by taking averaged quantities over the whole outflowing region. These quantities can be computed using the following equations:

Ṁ_inst = Σ_pix (M_out · v_out / ΔR)    (1)

Ṁ_Tavg = M_out · ⟨v_out⟩ / R    (2)

In Equation 1, the mass of the outflowing ionised gas, M_out, and the velocity of the ionised gas, v_out, are computed for each pixel, and ΔR is the size of the pixel. In Equation 2, on the other hand, M_out is the total outflowing gas mass computed from the outflowing [O iii] flux and ⟨v_out⟩ is the average velocity over the outflowing region (∼300 km s−1). The parameter R in Eq. 2 is the distance of the outflow from the AGN location. As we are using spatially-resolved observations, we do not need to assume an outflow geometry or outflow density for the time-averaged quantity. The outflow density in both cases is obtained from the flux ratio of the outflowing components of [S ii]𝜆𝜆6716, 6731.
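Equations 1 and 2 can be sketched as follows; the unit conventions (masses in M⊙, velocities in km s−1, distances in pc, rates in M⊙ yr−1), the helper names and the illustrative numbers are ours, not the measured values of this paper:

```python
import numpy as np

KM_PER_PC = 3.086e13   # kilometres in one parsec
SEC_PER_YR = 3.156e7   # seconds in one year

def mdot_instantaneous(m_out_pix, v_out_pix, dR_pc):
    """Eq. (1): sum over pixels of M_out * v_out / dR.

    m_out_pix : outflowing gas mass per pixel [Msun]
    v_out_pix : outflow velocity per pixel [km/s]
    dR_pc     : pixel size [pc]
    Returns the outflow rate in Msun/yr.
    """
    v_pc_per_yr = v_out_pix * SEC_PER_YR / KM_PER_PC
    return np.sum(m_out_pix * v_pc_per_yr / dR_pc)

def mdot_time_averaged(m_out_total, v_mean, R_pc):
    """Eq. (2): M_out * <v_out> / R, with the same unit conventions."""
    return m_out_total * (v_mean * SEC_PER_YR / KM_PER_PC) / R_pc

# Illustrative numbers only: 100 pixels of 1 Msun each, 300 km/s,
# 0.5 pc pixels, and an outflow extent of 50 pc.
m_pix = np.full(100, 1.0)
mdot_i = mdot_instantaneous(m_pix, 300.0, 0.5)
mdot_t = mdot_time_averaged(100.0, 300.0, 50.0)
```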
The mass outflow rate map, representing the instantaneous mass outflow rates (Eq. 1), is shown in Fig. 4, where the mass outflow rate was calculated for each pixel. The advantage of this method is that variations in the outflow parameters, such as outflow density and velocity, can be incorporated without the need for any assumptions on outflow models. We find the median outflow density across the field-of-view, calculated using the flux ratio of [S ii]𝜆𝜆6716, 6731, to be ∼200 cm−3 (e.g., Sanders et al. 2016; Kaasinen et al. 2017; Kakkad et al. 2018). The total summed instantaneous outflow rate is 0.01 M⊙ yr−1 (an average of 3×10−7 M⊙ yr−1 per pixel where outflow is detected), which is two orders of magnitude less than the total instantaneous outflow rate value reported with MUSE-WFM observations (Kakkad et al. 2022). The time-averaged outflow rate computed using Eq. 2 is 10−4 M⊙ yr−1. The obscured star formation rate (SFR) in the Circinus galaxy is reported to be between 3–8 M⊙ yr−1. The orders-of-magnitude difference between the SFR and the ionised outflow rate within a radius of ∼100 pc of the AGN location, therefore, suggests that the observed ionised outflow is not expected to shut down star formation in the host galaxy. However, this may not be true for kiloparsec-scale molecular outflows, where the outflow rate in the molecular gas phase has been reported to be ∼0.35–12.3 M⊙ yr−1 (see Zschaechner et al. 2016). The high molecular outflow rate on kiloparsec scales can, therefore, regulate star formation. A multi-phase approach to high resolution gas kinematics, tracing warm and cold molecular gas components, is required to robustly confirm whether these AGN outflows affect star formation within ∼100 pc of the AGN.
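The [S ii]-based electron density quoted above can be sketched from the doublet flux ratio using the calibration of Sanders et al. (2016), one of the works cited in the text; the coefficients below are our transcription of that calibration and should be checked against the original paper before quantitative use.

```python
def electron_density_sii(ratio):
    """Electron density (cm^-3) from the [S II] 6716/6731 flux ratio.

    Functional form n_e = (c*R - a*b) / (a - R), with (a, b, c) as
    transcribed from the Sanders et al. (2016) [S II] calibration.
    """
    a, b, c = 0.4315, 2107.0, 627.1
    return (c * ratio - a * b) / (a - ratio)

# A doublet ratio of ~1.2 corresponds to roughly the ~200 cm^-3
# median density quoted in this section.
ne = electron_density_sii(1.20)
```

Lower ratios map to higher densities, as expected for this collisionally de-excited doublet.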
Lastly, using spatially resolved Baldwin, Phillips & Terlevich (BPT) diagrams (e.g., Baldwin et al. 1981; Veilleux & Osterbrock 1987), we infer that the dominant source of ionisation across the NFM field-of-view is the AGN, and that ionisation by star formation is negligible or absent (Figure 5). The ionisation structure is consistent with previous WFM results in the literature (e.g., Mingozzi et al. 2019; Kakkad et al. 2022). Ionisation by the AGN is observed for both the systemic and the outflowing components. Therefore, the current observations also do not support a scenario where these outflows trigger star formation activity in the vicinity of the AGN.
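Per-pixel classification on the [N ii] BPT diagram uses the two demarcation curves named in the Figure 5 caption. A minimal sketch (the function is ours; the curve equations are the standard ones of Kauffmann et al. 2003 and Kewley et al. 2001):

```python
def bpt_class(log_nii_ha, log_oiii_hb):
    """Classify a pixel on the [N II] BPT diagram.

    Below the Kauffmann et al. (2003) curve -> star formation;
    between it and the Kewley et al. (2001) curve -> composite;
    above the Kewley curve (or beyond its asymptote) -> AGN.
    """
    x, y = log_nii_ha, log_oiii_hb
    if x < 0.05 and y < 0.61 / (x - 0.05) + 1.30:
        return "star formation"
    if x < 0.47 and y < 0.61 / (x - 0.47) + 1.19:
        return "composite"
    return "AGN"

# Seyfert-like line ratios land in the AGN region of the diagram.
cls = bpt_class(0.3, 1.0)
```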
4.2 The dust-outflow connection in Circinus

Figure 5. The left panel shows the ionisation structure (AGN in red and composite in orange) in the field-of-view probed by the MUSE-NFM data. The right panel shows the location of each pixel in the classical [N ii] BPT diagram. The solid and dashed black curves are obtained from Kauffmann et al. (2003) and Kewley et al. (2001) and divide the plots into regions ionised by AGN, star formation and composite processes. The systemic flux of the emission lines was used while plotting this diagram; however, the results are similar if the outflowing components are used. The figure highlights that the gas is ionised primarily by the AGN.

Figure 6. Dust extinction (𝐴V) map of the Circinus galaxy using the NFM observations. The background image shows the extinction map obtained from the systemic components of H𝛼 and H𝛽, the cyan contours show the extinction from the outflowing components, and the magenta contours show the location of the high velocity ionised gas outflow. The dust extinction is dominant along the polar direction, consistent with previous mid-infrared observations in the literature.

Previous mid-infrared observations of the Circinus galaxy established that a major fraction of the dust emission comes from the polar region, tentatively associated with dusty winds driven by radiation pressure (e.g., Stalevski et al. 2017; Venanzi et al. 2020). Even though far away from the central engine, dust and gas are expected to be coupled and co-spatial, and until recently models of the infrared emission ignored this polar dust component. The spatially-resolved optical spectra from the MUSE-NFM mode can be used to derive extinction maps from the Balmer decrement (H𝛼/H𝛽) to confirm the presence of dust along the polar direction. Therefore, we derived the host galaxy extinction, 𝐴V, across the NFM field-of-view using the
Balmer decrement. We assumed a Calzetti et al. (2000) dust attenuation law with 𝑅V = 4.05 and a fixed temperature of 10,000 K, which is the typical electron temperature in the NLR. We note that the Circinus galaxy suffers from Galactic extinction of 𝐴V ∼2 (see For et al. 2012); however, the [O iii] outflow morphology and the associated velocities and mass outflow rates will not change on correcting for the Galactic extinction. The extinction map is shown in Fig. 6. The background map in Fig. 6 shows the extinction map obtained from the systemic components of the flux ratio H𝛼/H𝛽. The cyan contours show the extinction (𝐴V > 1) obtained from the outflowing components of H𝛼 and H𝛽, and the magenta contours show the location of the [O iii] ionised gas outflow from the middle panel of Figure 2.
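The extinction derivation above can be sketched as follows. The Calzetti et al. (2000) curve coefficients and the Case B intrinsic decrement of 2.86 (appropriate for the 10,000 K electron temperature adopted in the text) are standard values, and the helper names are ours:

```python
import math

R_V = 4.05                  # Calzetti et al. (2000), as adopted in the text
INTRINSIC_DECREMENT = 2.86  # Case B Halpha/Hbeta at T_e ~ 10,000 K

def calzetti_k(lam_um):
    """Calzetti et al. (2000) attenuation curve k(lambda), lambda in microns."""
    if lam_um >= 0.63:
        return 2.659 * (-1.857 + 1.040 / lam_um) + R_V
    return 2.659 * (-2.156 + 1.509 / lam_um
                    - 0.198 / lam_um**2 + 0.011 / lam_um**3) + R_V

def a_v_from_balmer(halpha_over_hbeta):
    """A_V from the observed Balmer decrement via E(B-V) and R_V."""
    k_hb, k_ha = calzetti_k(0.4861), calzetti_k(0.6563)
    ebv = 2.5 / (k_hb - k_ha) * math.log10(halpha_over_hbeta
                                           / INTRINSIC_DECREMENT)
    return R_V * ebv

# An observed decrement of 4 corresponds to A_V of roughly 1.2 under
# these assumptions; a decrement of 2.86 gives A_V = 0 by construction.
av = a_v_from_balmer(4.0)
```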
While the map in Fig. 6 shows potential dust distribution both along the disk (consistent with the results reported in Mingozzi et al. (2019) and Fonseca-Faria et al. (2021)) and along the polar direction, the overall distribution is dominant along the polar direction. This result is consistent with previous mid-infrared observations, in which the emission was also dominant along the polar direction (e.g., Jaffe et al. 2004; Tristram et al. 2007, 2014; Asmus et al. 2016). This suggests that the dust along the polar direction might be part of lower velocity gas (compared to the high velocity collimated outflow observed here) surrounding the ionised gas outflow. The extinction from the systemic components peaks at the location of the AGN and gradually falls off to 𝐴V = 0 at a distance of ≈3′′, i.e., ≈60 pc.

The extinction obtained from the outflowing H𝛼 and H𝛽 components shows non-uniform clumps sparsely distributed along the polar direction. We attribute these clumps to be part of the outflowing gas and dust. It is worth noting that one of these clumps lies almost at the tip of the collimated component of the outflow, approximately where the outflow filament fragments into two arms. The observation, therefore, might support a picture where the ionised gas outflow follows the path of least resistance and fragments into the two filaments, avoiding the region radially outward where the dust clump is present.
5 DISCUSSION

The results presented in Section 4 highlight the complex structures within an outflow that are revealed by high resolution observations in the vicinity of the AGN. The presence of an outflow in the form of an ionisation cone in the Circinus galaxy has been known for nearly three decades, and it is thanks to the current state-of-the-art instrumentation that we are now able to resolve parsec-scale emission using optical spectra. The origin of the initial clumpy collimated structure observed in the NFM data could be a small-scale radio jet. The Circinus galaxy is known to host a radio jet, although its PA is aligned closer to the edge of the ionisation cone (e.g., Elmouttie et al. 1998). However, these are lower spatial resolution radio imaging observations and the regions close to the AGN torus are not resolved. Due to precession and interaction with the surrounding medium, jets in AGN are known to bend and change directions at larger scales (e.g., as in the case of NGC 1068; Gallimore et al. 2004). Therefore, future work will target the regions closer to the AGN to search for the potential impact of small-scale jets that may be aligned along the
Figure 7. The background image shows the [O iii]𝜆5007 flux map from the MUSE-WFM data, which shows an overall asymmetric conical morphology with two distinct filaments on hundreds-of-parsec scales. The red contours show the [O iii] outflow morphology obtained from the NFM observations in the middle panel of Fig. 2. Comparing the NFM contours with the WFM [O iii] flux map, it appears that the two filaments observed in the WFM data have their origins in the parsec-scale outflow traced with the NFM.

Figure 8. A cartoon model of the ionised outflow observed with the MUSE-NFM data in the Circinus galaxy. The outflow might be launched as a collimated structure by the radio jet, which then fragments into two filaments, probably due to the presence of a dense clump at the tip of the collimated structure. The dust is distributed along the ionisation cone, apparent from the extinction maps and also consistent with archival mid-infrared observations.
axis of the ionisation cone. The relative orientations of the radio jet and the ionisation cone in the Circinus galaxy are depicted in Fig. 8.

The outflow itself is not composed of uniformly distributed ionised gas, but shows the presence of clumps, confirming previous observations from lower resolution data of other targets that outflowing media are non-uniform in nature (e.g., Kakkad et al. 2018, 2022). The presence of clumps or knots along outflow/ionised gas filaments in the Circinus galaxy has also been previously reported in Veilleux & Bland-Hawthorn (1997), and these structures have been attributed to bow-shocked features that resemble Herbig-Haro objects and interact strongly with the surrounding ISM. The observed morphology in the NFM data could be a smaller-scale version of the observations at larger scales.

The collimated component of the outflow most likely fragments into two components because of the presence of an obstruction in the path of the outflow. In the NFM data, there is an indication of the presence of a dust clump via the extinction maps obtained from the outflowing components of the H𝛼 and H𝛽 emission. Infrared observations targeting primarily the dust emission could confirm this scenario. Filament structures are also observed in images that trace gas across larger scales (e.g., Marconi et al. 1994; Veilleux & Bland-Hawthorn 1997) and it is probable that the NFM data presented in this paper reveal the origin of the kiloparsec-scale filaments. Fig. 7 shows the ionised gas morphology traced by the archival MUSE-WFM observations (background) and the emission from the outflowing [O iii] component from the MUSE-NFM data (red contours). The spatial distribution of the filaments observed in the WFM strongly suggests that their origin is to be found in the fragmented arms of the tuning fork observed in the NFM observations.

Fig. 8 shows an overall cartoon model of the ionised outflow in the Circinus galaxy, in which the ionised outflow fragments into two filaments, probably due to the presence of a dense clump at the tip of the collimated structure. The dust itself is distributed around the ionisation cone and envelops the lower velocity ionised gas within the conical structure. We speculate that the outflow itself might be launched by the radio jet, which cannot be robustly confirmed based on the available archival radio observations, as mentioned earlier.

As the NFM observations have only recently targeted nearby galaxies, it is unclear if such outflow structures and fragmentations within outflows are common. If this is indeed observed for the majority of galaxies, conventional outflow models that use an ionisation cone morphology may need to be revised to account for the results from these high spatial resolution data.
6 SUMMARY & CONCLUSIONS

We presented MUSE NFM observations of the Circinus galaxy at a spatial resolution of 0.1′′ (∼2 pc) that resolve the regions close to the AGN torus. We derived the properties of the ionised gas outflow using the [O iii]𝜆5007 emission line and the dust distribution using the Balmer decrement. We follow a non-parametric approach to analyse the emission lines, so the derived properties are independent of the fitting functions used. The main results of this work are summarised below:

• The flux distribution of the systemic component of the [O iii] emission, defined by the velocity components within ±300 km s−1, shows a conical morphology, which has also been observed on larger spatial scales, up to hundreds of parsecs, in the literature. Archival radio observations show that the radio jet is aligned approximately with the edge of the ionisation cone. The flux distribution of the outflowing component of the [O iii] emission, on the other hand, shows a collimated structure up to ∼30 pc before fragmenting into two arms, which overall mimics a "tuning-fork" shape. The outflowing structure itself is not smooth and shows clumps at several locations.

• A comparison between the stellar kinematic map and the [O iii] centroid map suggests that the ionised gas (both the systemic and the outflowing component) co-rotates with the host galaxy. Both the non-parametric velocity distributions, 𝑣10 and 𝑤80, show similar structures to the outflow flux, i.e., the tuning-fork shapes. Most of this outflow is blue-shifted, consistent with the outflow models of the Circinus galaxy reported in the literature.

• We find a total instantaneous mass outflow rate of ∼0.01 M⊙ yr−1 (3×10−7 M⊙ yr−1 on average per pixel of size 0.5 pc) and a time-averaged mass outflow rate of 10−4 M⊙ yr−1. This is much lower than the star formation rate within the galaxy and, therefore, the ionised gas outflow is not expected to regulate star formation within a radius of ∼100 pc from the AGN location.
+
853
+ [Figure: schematic of the proposed outflow model, labelling the dust
+ clumps, radio jet, ionisation cone and AGN; axes Δ𝑥 and Δ𝑦 in arcsec.]
874
+ D. Kakkad et al.
875
+ • The extinction maps derived from the systemic components of the
+ H𝛽 and H𝛼 lines show that the dust distribution is concentrated
+ along the ionisation cone, i.e., the polar direction, consistent with
+ archival mid-infrared observations. The extinction map obtained from
+ the outflowing components of H𝛽 and H𝛼 shows a sparse distribution,
+ with a clump approximately at the tip of the collimated part of the
+ outflow. This dust clump might explain the fragmentation in the
+ outflowing ionised gas.
883
+ • We combine previously reported results gathered from the
+ literature and the outflows observed in the NFM data presented in
+ this paper to construct a model of the ionised gas outflow in the
+ Circinus galaxy. We suggest that the observed outflow in the Circinus
+ galaxy is composed of high-velocity collimated gas that is enveloped
+ by lower-velocity dusty ionised gas. The presence of a dust clump at
+ the tip of the collimated part of the outflow might be responsible
+ for the fragmentation in the outflowing gas. The collimated outflow
+ might be launched by a radio jet. Although the jet, observed in
+ low-spatial-resolution radio observations, does not show a 1:1
+ alignment with the position angle of the outflowing cone, jets are
+ known to bend and change direction at larger scales.
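The non-parametric line measures used above (the percentile velocity 𝑣10 and the width 𝑤80) and the instantaneous outflow-rate estimate can be illustrated with a short sketch. This is illustrative only, not the pipeline used in this work: the Gaussian test profile and the thin-shell rate formula Ṁ = M_out · v_out / R are assumptions.

```python
import numpy as np

def nonparametric_velocities(vel, flux):
    """Non-parametric line measures from a profile on a velocity grid
    (km/s): the 10th/50th percentile velocities and the width w80
    enclosing 80% of the line flux."""
    cdf = np.cumsum(flux) / np.sum(flux)
    v10, v50, v90 = np.interp([0.10, 0.50, 0.90], cdf, vel)
    return v10, v50, v90 - v10          # (v10, median velocity, w80)

def outflow_rate(m_out_msun, v_out_kms, r_pc):
    """Instantaneous rate Mdot = M_out * v_out / R in Msun/yr
    (thin-shell, constant-velocity assumption)."""
    kms_in_pc_per_yr = 1.0 / 977792.0   # 1 km/s ~= 1.02e-6 pc/yr
    return m_out_msun * v_out_kms * kms_in_pc_per_yr / r_pc

# Toy blue-shifted Gaussian profile (centre -100 km/s, sigma 150 km/s)
vel = np.linspace(-1500.0, 1500.0, 3001)
prof = np.exp(-0.5 * ((vel + 100.0) / 150.0) ** 2)
v10, v50, w80 = nonparametric_velocities(vel, prof)
```

For a Gaussian, w80 ≈ 2.563σ, which the interpolated percentiles recover to within the grid resolution.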
895
+ While MUSE-NFM only provides kinematic information for the ionised
+ gas phase, outflows are known to exist also in the molecular gas
+ phase in the Circinus galaxy. A multi-wavelength approach to trace
+ gas in the other phases, such as the warm and cold molecular gas, at
+ the same spatial resolution of ∼2 pc, will be key to obtaining a
+ holistic view of the outflow-AGN connection in the Circinus galaxy.
+ Furthermore, high spatial resolution radio observations will verify
+ whether the observed collimated outflow is a result of jet-ISM
+ interaction on small scales.
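The Balmer-decrement extinction estimate behind the dust maps discussed above can be sketched as follows. The attenuation-curve values k(Hα) ≈ 3.33 and k(Hβ) ≈ 4.60 (Calzetti-like) and the Case B intrinsic ratio of 2.86 are standard assumptions, not values quoted from this paper.

```python
import math

K_HALPHA, K_HBETA = 3.33, 4.60   # assumed Calzetti-like curve values
R_INTRINSIC = 2.86               # Case B intrinsic Halpha/Hbeta ratio

def ebv_from_balmer(f_halpha, f_hbeta):
    """Colour excess E(B-V) from the observed Balmer decrement,
    clipped at zero for ratios below the intrinsic value."""
    r_obs = f_halpha / f_hbeta
    ebv = 2.5 / (K_HBETA - K_HALPHA) * math.log10(r_obs / R_INTRINSIC)
    return max(0.0, ebv)
```

Applied per pixel to the systemic or outflowing H𝛼 and H𝛽 flux maps, this yields the corresponding extinction map.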
904
+ ACKNOWLEDGEMENTS
905
+ We would like to thank the anonymous referee for insightful com-
906
+ ments and suggestions to improve this manuscript. The authors also
907
+ thank T. Fischer for useful discussions. Based on observations from
908
+ the ESO programme ID 0103.B-0396. M.S. and S.K. acknowledge
909
+ support by the Science Fund of the Republic of Serbia, PROMIS
910
+ 6060916, BOWIE and by the Ministry of Education, Science and
911
+ Technological Development of the Republic of Serbia through con-
912
+ tract No. 451-03-9/2022-14/200002. D.A. acknowledges funding
913
+ through the European Union’s Horizon 2020 and Innovation pro-
914
+ gramme under the Marie Sklodowska-Curie grant agreement no.
915
+ 793499 (DUSTDEVILS).
916
+ DATA AVAILABILITY
917
+ The data presented in this paper are available in the ESO archive
+ under the programme ID 0103.B-0396.
919
+ This paper has been typeset from a TEX/LATEX file prepared by the author.
1023
+
L9AyT4oBgHgl3EQf6voI/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
R9E4T4oBgHgl3EQf_g7i/content/tmp_files/2301.05372v1.pdf.txt ADDED
@@ -0,0 +1,964 @@
1
+ Text to Point Cloud Localization with Relation-Enhanced Transformer
2
+ Guangzhi Wang1, Hehe Fan2, Mohan Kankanhalli2
3
+ 1Institute of Data Science, National University of Singapore
4
+ 2School of Computing, National University of Singapore
5
+ guangzhi.wang@u.nus.edu, hehe.fan@nus.edu.sg, mohan@comp.nus.edu.sg
6
+ Abstract
7
+ Automatically localizing a position based on a few natural
8
+ language instructions is essential for future robots to commu-
9
+ nicate and collaborate with humans. To approach this goal,
10
+ we focus on the text-to-point-cloud cross-modal localization
11
+ problem. Given a textual query, it aims to identify the de-
12
+ scribed location from city-scale point clouds. The task in-
13
+ volves two challenges. 1) In city-scale point clouds, similar
14
+ ambient instances may exist in several locations. Searching
15
+ each location in a huge point cloud with only instances as
16
+ guidance may lead to less discriminative signals and incor-
17
+ rect results. 2) In textual descriptions, the hints are provided
18
+ separately. In this case, the relations among those hints are
+ not explicitly described, making it difficult to learn these
+ relations. To overcome these two challenges, we propose
21
+ a unified Relation-Enhanced Transformer (RET) to improve
22
+ representation discriminability for both point cloud and nat-
23
+ ural language queries. The core of the proposed RET is a
24
+ novel Relation-enhanced Self-Attention (RSA) mechanism,
25
+ which explicitly encodes instance (hint)-wise relations for the
26
+ two modalities. Moreover, we propose a fine-grained cross-
27
+ modal matching method to further refine the location predic-
28
+ tions in a subsequent instance-hint matching stage. Experi-
29
+ mental results on the KITTI360Pose dataset demonstrate that
30
+ our approach surpasses the previous state-of-the-art method
31
+ by large margins.
32
+ Introduction
33
+ Understanding natural language instructions in the 3D real
34
+ world is a fundamental skill for future artificial intelligence
35
+ assistants to collaborate with humans. In this paper, we fo-
36
+ cus on the outdoor environment and study the task of natural
37
+ language-based localization from city-scale point clouds. As
38
+ shown in Figure 1, given a linguistic description of a posi-
39
+ tion, which contains several hints, the goal of the task is to
40
+ find out the target location from a large-scale point cloud.
41
+ This task can effectively help mobile robots, such as self-
42
+ driving cars and autonomous drones, cooperate with humans
43
+ to coordinate actions and plan their trajectories. By under-
44
+ standing the destination from natural language instructions,
45
+ it reduces the human effort required for manual operation.
46
+ However, this task is intrinsically challenging. Precise lo-
47
+ calization requires both correct language interpretation and
48
+ Copyright © 2023, Association for the Advancement of Artificial
49
+ Intelligence (www.aaai.org). All rights reserved.
50
+ Heading to a place:
51
+ [hint1] east of a dark-green terrain.
52
+ [hint2] south of a gray road.
53
+ [hint3] west of a dark-green traffic sign.
54
+ [hint4] south of a green terrain.
55
+ Textual Query
56
+ Localization
57
+ Figure 1: Illustration of the text to point cloud localization
58
+ task. Given a textual query, which usually contains several
59
+ independent hints, the goal is to localize the point of interest
60
+ in a huge city-scale point cloud.
61
+ effective large-scale point cloud understanding. Considering
62
+ the difficulties, an existing method (Kolmet et al. 2022) first
63
+ divides a city-wide point cloud into several cells, and then
64
+ solves this task in a Coarse-to-Fine manner.
65
+ The goal of the ‘coarse’ stage is to find out the target
66
+ cell that contains the queried location according to the given
67
+ natural language descriptions. In this stage, the instances
68
+ included in point cloud cells and those mentioned in lan-
69
+ guage descriptions are mainly used for text-to-point-cloud
70
+ retrieval based on their types, without considering their rela-
71
+ tions. In the ‘fine’ stage, each object in the textual query is
72
+ matched with an in-cell point cloud instance, whereby a tar-
73
+ get location will be predicted from each hint. This pioneer-
74
+ ing method sets up a significant starting point for tackling
75
+ the challenging task. However, it fails to consider the intrin-
76
+ sic relations in both stages, resulting in sub-optimal perfor-
77
+ mance.
78
+ For the coarse stage, because similar ambient instances
79
+ may exist in several cells, performing retrieval based on only
80
+ the cell-contained and query-related instance types without
81
+ arXiv:2301.05372v1 [cs.CV] 13 Jan 2023
+ considering their relations may lead to low discriminability
+ for both cell and query representations, which inevitably
85
+ leads to ambiguity. Based on those low-discriminability rep-
86
+ resentations, it is difficult to find out the correct cell. In the
87
+ fine stage, we observe that insufficient cross-modal collabo-
88
+ ration leads to difficulties in location refinement. Given the
89
+ retrieved cell, precise location prediction requires joint un-
90
+ derstanding of both point clouds and textual queries. How-
91
+ ever, in the previous method (Kolmet et al. 2022), the cross-
92
+ modal collaboration is only performed from textual queries
93
+ to point clouds in a single step, which results in optimization
94
+ difficulty for multi-task learning.
95
+ In this work, we aim to solve the aforementioned short-
96
+ comings in both stages. For the coarse stage, we pro-
97
+ pose to encode pairwise instance relations to improve rep-
98
+ resentation discriminability for both modalities, which is
99
+ achieved through a novel Relation-Enhanced Transformer
100
+ (RET) architecture. In particular, the in-cell point cloud in-
101
+ stance relations are modeled as their geometric displacements,
+ while in the linguistic domain they are computed as the fusion
+ of hint representations. These relations from two modali-
104
+ ties are respectively incorporated into their representation in
105
+ a unified manner, which is achieved through the proposed
106
+ Relation-enhanced Self-Attention (RSA) mechanism. For
107
+ the fine stage, we perform Cascaded Matching and Refine-
108
+ ment (CMR) to enhance cross-modal collaboration. In par-
109
+ ticular, different from (Kolmet et al. 2022) which achieves
110
+ this objective in a single step, we perform description-
111
+ instance matching and position refinement in two sequential
112
+ steps. Such formulation allows us to minimize the optimiza-
113
+ tion difficulty of multi-objective learning and noisy interme-
114
+ diate results, thereby improving cross-modal collaboration.
115
+ We validated the effectiveness of our method on the
116
+ KITTI360Pose benchmark (Kolmet et al. 2022). Extensive
117
+ experiments demonstrate that the proposed method can sur-
118
+ pass the previous approach by a large margin, leading to new
119
+ state-of-the-art results. Our contributions are three-fold:
120
+ • We propose a novel Relation-Enhanced Transformer
121
+ (RET) to improve representation discriminability for
122
+ both point clouds and textual queries. The core com-
123
+ ponent of RET is the Relation-enhanced Self-Attention
124
+ (RSA) mechanism, which encodes instance (hint) rela-
125
+ tions for the two modalities in a unified manner.
126
+ • We propose to perform cross-modal instance matching
127
+ and position refinement in two sequential steps. This for-
128
+ mulation allows us to minimize the optimization diffi-
129
+ culty of multi-task learning and the influence of noisy
130
+ intermediate results, thereby improving cross-modal col-
131
+ laboration for fine-grained location prediction.
132
+ • We perform extensive experiments on the KITTI360Pose
133
+ dataset (Kolmet et al. 2022). The results show that our
134
+ approach can surpass the previous method by a large margin,
135
+ resulting in new state-of-the-art performance. Additional
136
+ ablation studies further demonstrate the effectiveness of
137
+ each component in the proposed method.
138
+ Related Work
139
+ Transformer and Attention Mechanism. The Transformer and the
+ self-attention mechanism (Vaswani et al. 2017; Fan, Yang,
+ and Kankanhalli 2021) have become increasingly popular in
142
+ recent years. Although first proposed for natural language
143
+ processing, with architectural adaptation, Transformer has
144
+ been widely applied to many vision tasks including visual
145
+ recognition (Dosovitskiy et al. 2020; Liu et al. 2021), object
146
+ detection (Carion et al. 2020; Zhu et al. 2020) and seman-
147
+ tic segmentation (Cheng, Schwing, and Kirillov 2021). Be-
148
+ sides, the transformer-based architectures are also utilized to
149
+ model cross-modal (e.g., vision and language) relations (Tan
150
+ and Bansal 2019; Lu et al. 2019; Li et al. 2019; Zhang et al.
151
+ 2021; Li et al. 2022). In these architectures, the attention
152
+ mechanism is widely employed to implicitly learn relations
153
+ among the input tokens. Nevertheless, without explicit rela-
154
+ tion encoding, the vanilla Transformer can only encode rela-
155
+ tions implicitly with the help of positional encoding (Doso-
156
+ vitskiy et al. 2020). To facilitate better relation modeling,
157
+ some works modulate the attention computation process
158
+ by explicitly incorporating element relations. For example,
159
+ (Wu et al. 2021) modified the attention mechanism via uni-
160
+ fied relative position bias to improve visual recognition. For
161
+ object detection, spatial relations between bounding boxes
162
+ are introduced to modulate the attention weights (Liu et al.
163
+ 2022; Gao et al. 2021). For dynamic point cloud analy-
164
+ sis, displacement between points (Fan, Yang, and Kankan-
165
+ halli 2022) is utilized for point-specific attention computa-
166
+ tion. In this work, we propose to model relations for both
167
+ point clouds and language queries by explicitly incorporat-
168
+ ing intra-modality relations in a unified manner.
169
+ Visual Localization. The task that is most related to ours is
170
+ vision-based localization (Arandjelovic et al. 2016; Brach-
171
+ mann et al. 2017; Hausler et al. 2021), which is to estimate a
172
+ pose based on an image or image sequence. Existing meth-
173
+ ods mostly solve this task in two stages (Sarlin et al. 2019;
174
+ Sattler, Leibe, and Kobbelt 2016; Zhou et al. 2020). The first
175
+ stage finds a subset of all images using image retrieval-based
176
+ techniques (Arandjelovic et al. 2016; Hausler et al. 2021;
177
+ Torii et al. 2015), while the second stage establishes pixel-
178
+ wise correspondence between the query image and the re-
179
+ trieved one to predict the precise pose. In this work, we also
180
+ study the task of localization in a coarse-to-fine manner, but
181
+ differ from visual localization in that: 1) we try to infer the
182
+ location from city-wide point clouds instead of images. 2)
183
+ we try to estimate the pose from textual query rather than
184
+ images. Compared to visual localization, our task requires
185
+ multi-modal understanding and is more challenging to solve.
186
+ 3D Language Grounding. As we humans live in a 3D
187
+ world and communicate through natural language, recent
188
+ work has begun to investigate the tasks on the cross-modal
189
+ understanding of 3D vision and natural language. Among
190
+ these tasks, the one that is most related to ours is 3D lan-
191
+ guage grounding, which aims at localizing an object in
192
+ point clouds from a given natural language query. For ex-
193
+ ample, ScanRefer (Chen, Chang, and Nießner 2020) stud-
194
+ ies 3D language grounding from real-life in-door scenes.
195
+ ReferIt3D (Achlioptas et al. 2020) studies a related task un-
196
+ der a simpler setting, which assumes the object instances
197
+ are segmented in advance. InstanceRefer (Yuan et al. 2021)
198
+ improves previous methods by adopting a 3D panoptic seg-
199
+ mentation backbone, utilizing multi-level visual context. Re-
200
+
201
+ [Figure 2 diagram: the textual query is split into hints and the point
+ cloud into cells; hint and instance encoders feed stacked Relation-
+ Enhanced Self-Attention blocks, and the fine stage performs (a) hint-
+ instance matching and (b) offset prediction via cross-modal fusion.]
247
+ Figure 2: Framework of the proposed method. The city-scale point cloud is first divided into individual cells. Then, in the
248
+ coarse stage, the cells and the textual query are respectively encoded with the proposed Relation-Enhanced Transformer (RET),
249
+ which are later used for query-cell matching. In the fine stage, each hint is matched with an in-cell instance. Then, cross-modal
250
+ fusion dynamically aggregates hints and instance representations for offset prediction. The target location is predicted based on
251
+ matching results and offset predictions.
252
+ cently, graph structure (Feng et al. 2021) is also utilized to
253
+ improve the representation learning qualities.
254
+ Methodology
255
+ Preliminaries
256
+ Given a textual query, our goal is to identify the position it
257
+ describes from a city-scale point cloud. To handle the large-
258
+ scale point cloud, we divide each scene into a set of cubic
259
+ cells of fixed size by a preset stride. Each cell C contains a
260
+ set of p point cloud instances, which are encoded by PointNet++
+ (Qi et al. 2017) into vector representations {p_i}_{i=1}^p.
+ Following (Kolmet et al. 2022), the textual query T is represented
+ as a set of hints {h_j}_{j=1}^h, each encoding the direction
266
+ relation between the target location and an instance.
267
+ Inspired by the existing work (Kolmet et al. 2022), given
268
+ the cell splits, we solve this task in a coarse-to-fine manner
269
+ with two stages. The coarse stage is formulated as textual
270
+ query-based cell retrieval. The goal of this stage is to train
271
+ a model that encodes C and T into a joint embedding space
272
+ whereby matched query-cell pairs are close while those un-
273
+ matched are pulled apart (Kiros, Salakhutdinov, and Zemel
274
+ 2014). In the fine stage, given a retrieved cell, we aim to
275
+ refine the position prediction by utilizing fine-grained cross-
276
+ modal information. In particular, we first match each hint
277
+ in the query with an in-cell instance by formulating it as an
278
+ optimal transport problem (Liu et al. 2020). After that, with
279
+ the matching results, we predict the target location through
280
+ a cross-modal fusion of point cloud instance and hint repre-
281
+ sentations. Based on the fused representation, we predict the
282
+ target location for each matched instance. Finally, we obtain
283
+ the target location prediction based on a weighted combi-
284
+ nation of the matching and location prediction results. The
285
+ framework of our method is shown in Figure 2. In the rest of
+ this section, we will explain the proposed method for the
+ coarse stage and the fine stage. After that, our training and
288
+ inference procedure will be detailed.
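The joint-embedding objective above (matched query-cell pairs pulled together, mismatched pairs pushed apart, following Kiros, Salakhutdinov, and Zemel 2014) is commonly implemented as a bidirectional max-margin ranking loss over a batch. The sketch below is illustrative, not the paper's exact training recipe; the margin value and the use of in-batch negatives are assumptions.

```python
import numpy as np

def ranking_loss(q_emb, c_emb, margin=0.35):
    """Bidirectional max-margin ranking loss for query-cell retrieval.
    q_emb, c_emb: (B, d) embeddings; row i of each forms a matched pair."""
    sim = q_emb @ c_emb.T                     # (B, B) pairwise similarities
    pos = np.diag(sim)[:, None]               # similarity of matched pairs
    cost_q = np.maximum(0.0, margin + sim - pos)    # rank cells per query
    cost_c = np.maximum(0.0, margin + sim - pos.T)  # rank queries per cell
    np.fill_diagonal(cost_q, 0.0)
    np.fill_diagonal(cost_c, 0.0)
    return (cost_q.sum() + cost_c.sum()) / q_emb.shape[0]
```

The loss vanishes once every matched pair beats all in-batch negatives by the margin in both retrieval directions.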
289
+ Coarse Stage: Relation-Enhanced Transformer
290
+ After the cell split, the goal of the coarse stage is to suc-
291
+ cessfully retrieve the cell C given a textual query T . To ap-
292
+ proach this objective, we need to encode C and T into a joint
293
+ embedding space. An intuitive solution is to encode both
294
+ C and T based on the instances they contain, as is done
295
+ in (Kolmet et al. 2022). However, with such representations,
296
+ the low discriminability for cells and textual queries results
297
+ in poor retrieval performance. We argue that this can be at-
298
+ tributed to the following two reasons. On the one hand, the
299
+ outdoor scenes are often of low diversity, whereby a group
300
+ of mentioned instances can appear at multiple different lo-
301
+ cations. Thus, simply describing a cell with its contained in-
302
+ stances can result in less discriminative representations. On
303
+ the other hand, the textual queries often contain limited clues
304
+ compared to the point clouds, making this cross-modality re-
305
+ trieval especially challenging. To this end, we propose to ex-
306
+ plicitly encode instance-relations to provide more discrimi-
307
+ native representations for both modalities.
308
+ The Transformer (Vaswani et al. 2017) has been widely
309
+ utilized for relation-based representation learning in vari-
310
+ ous tasks (Hu et al. 2018; Liu et al. 2021; Fan, Yang, and
311
+ Kankanhalli 2022). The key component of the Transformer
312
+ is the Self-Attention (SA) operation:
313
+ Attn(Q, K, V) = Softmax(QK^T / √d) V,    (1)
317
+
318
321
+ Figure 3: Illustration of the proposed Relation-enhanced
322
+ Self-Attention (RSA) mechanism. Pairwise relations are ex-
323
+ plicitly encoded into the value computation process.
324
+ where d is the representation dimension and Q, K, V ∈ R^{N×d}
+ are the query, key and value matrices obtained by transforming
+ in-cell instances (or hints for textual queries) with corresponding
+ linear transformations:
329
+ Q = W^Q X,  K = W^K X,  V = W^V X,    (2)
331
+ where W^* ∈ R^{d×d} are learnable matrices and X = P ∈ R^{p×d}
+ or H ∈ R^{h×d} represents the stacked instances¹.
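The SA operation of Eqs. (1)-(2) can be written in a few lines of NumPy. Representing tokens as rows of X, so that the learned projections act as right-multiplications, is an implementation choice for this sketch rather than notation from the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Eqs. (1)-(2): tokens are the rows of X (N, d); single head."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))   # (N, N) attention weights
    return A @ V
```

With zero query/key projections the weights become uniform and every output row reduces to the mean value vector, a quick sanity check of the normalisation.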
333
+ Despite its generality, the vanilla SA lacks explicit relations
+ in both modalities and is thus less informative to represent
335
+ the cell and query. To this end, we propose a novel Relation-
336
+ Enhanced Transformer (RET) to model explicit instance re-
337
+ lations in both point clouds and textual descriptions. Our
338
+ RET is a stack of multiple Transformer encoder layers, ex-
339
+ cept that, in place of SA, we propose a Relation-enhanced
340
+ Self-Attention (RSA) to explicitly incorporate relation in-
341
+ formation into value computation. The computation process
342
+ is shown as follows and illustrated in Figure 3.
343
+ RSA(Q, K, V, R) = Softmax(QK^T / √d)(V + Pool(R, 1)),    (3)
347
+ where R ∈ R^{N×N×d} captures pairwise relations, with R_ij ∈ R^d
+ representing the relation between the i-th and j-th instance (hint).
+ Pool(R, 1) indicates pooling the tensor R
350
+ along dimension 1. In this way, our model can explicitly
351
+ encode instance relations through this computation process,
352
+ leading to more informative representations.
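A minimal sketch of the RSA computation in Eq. (3). Reading Pool(R, 1) as a mean over dimension 1 is an assumption, and the projection matrices passed in are placeholders.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def rsa(X, R, Wq, Wk, Wv):
    """Eq. (3): the (N, N, d) relation tensor R is mean-pooled over
    dimension 1 and added to the value matrix before aggregation."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    return A @ (V + R.mean(axis=1))
```

Because each attention row sums to one, a constant relation tensor simply shifts every output by that constant, which isolates the relation term from the attention weights.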
353
+ The definition of relation varies flexibly with task objec-
354
+ tive and input modality. For point cloud data, we take the
355
+ geometric displacement of two instances as their relations,
356
+ as direction is often mentioned in textual queries and thus
357
+ informative for retrieval²:
358
+ R^V_{ij} = W^V (c_i − c_j),    (4)
361
+ where c_i ∈ R^3 represents the center coordinate of the i-th
+ instance and W^V ∈ R^{d×3} transforms the displacement into
363
+ ¹Note that the attention operation is often performed in different
364
+ subspaces with multiple heads, which is omitted for simplicity.
365
+ ²We have also tried other features, such as the number of points
+ and bounding boxes of instances, but did not observe performance
+ improvement.
368
+ embedding space. For the linguistic description, we compute
369
+ the hint relation as the concatenation of their embeddings:
370
+ R^L_{ij} = W^L [h_i; h_j],    (5)
373
+ where W^L ∈ R^{d×2d} transforms the linguistic feature into
374
+ representation space. With the computation of RSA, the
+ instance-wise relations of different modalities can be uniformly
+ incorporated into the query or cell representations.
377
+ Finally, the cell (description) representations Cm (Tm) are
378
+ obtained via a pooling operation over all instances (hints)
379
+ output from the RET for cross-modal retrieval.
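The two relation definitions (Eqs. 4-5) can be materialised as dense tensors; the identity projection matrices used in the test below are illustrative placeholders, not learned weights.

```python
import numpy as np

def point_relations(centers, Wv):
    """Eq. (4): relation of instances i, j as the embedded displacement
    of their centre coordinates. centers: (N, 3), Wv: (d, 3)."""
    disp = centers[:, None, :] - centers[None, :, :]   # (N, N, 3)
    return disp @ Wv.T                                 # (N, N, d)

def hint_relations(H, Wl):
    """Eq. (5): relation of hints i, j as the embedded concatenation
    [h_i; h_j]. H: (N, d), Wl: (d, 2d)."""
    N = H.shape[0]
    pair = np.concatenate([np.repeat(H[:, None, :], N, axis=1),
                           np.repeat(H[None, :, :], N, axis=0)], axis=-1)
    return pair @ Wl.T                                 # (N, N, d)
```

Note that the displacement relation is antisymmetric (R_ij = -R_ji), so it preserves direction information, which matches the directional hints in the queries.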
380
+ Fine Stage: Cascaded Matching and Refinement
+ Following the coarse stage, we aim to refine the location prediction within the retrieved cell in the fine stage. Inspired by (Kolmet et al. 2022), we perform instance matching and location refinement to utilize the fine-grained visual and linguistic information, which involves the following two objectives: (1) For each hint, we find the in-cell instance it refers to via a matching process. (2) For each matched pair (i, j), a regressor predicts an offset t̂_i ∈ R^2 for the matched hint h_j, which represents the offset from the instance center c_i to the target location.3
+ The previous method (Kolmet et al. 2022) achieves the two objectives within a single step. However, given the joint objectives of hint-instance matching and offset prediction, this multi-task learning process introduces optimization difficulty. Furthermore, in the early training steps, the matcher is only partially trained and produces noisy matching results. The regressor learns and makes predictions based on these noisy results, leading to an unstable learning process and sub-optimal performance.
+ To this end, we propose a Cascaded Matching and Refinement (CMR) strategy for the fine stage, where hint-instance matching and offset regression are performed sequentially. Specifically, following (Kolmet et al. 2022), we first train the SuperGlue (Sarlin et al. 2020) matcher for hint-instance matching, which is formulated as an optimal-transport problem. Given the trained matcher, we obtain a set of hint-instance matching results {(p_i, h_j, w_i)}_{j=1}^{h}, where w_i represents the confidence of the match. Then, to reduce the noise for regression, we predict the target location according to matched instances only.
+ Precise location prediction requires a proper understanding of both the point cloud (what and where the referred instance is, e.g., dark-green terrain) and the language description (what the relation between the matched instance and the target location is, e.g., east of). For this, we propose to facilitate cross-modal collaboration via the Cross-Attention (CA) mechanism, which is commonly used for cross-modal information fusion:
+ CA(H, P) = Attn(W^Q H, W^K P, W^V P),    (6)
+ where H, P represent hints and instances, respectively, and W^* are learnable transformation matrices. Shortcut connection and layer normalization (Ba, Kiros, and Hinton 2016)
+ 3For position prediction, we ignore the height information and consider 2D coordinates only.
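A minimal single-head version of the cross-attention in Eq. (6) can be written in NumPy as follows; multi-head splitting, the shortcut connection and layer normalization are omitted, and all weights and inputs are random placeholders for illustration.

```python
import numpy as np

def cross_attention(H, P, W_Q, W_K, W_V):
    """Single-head cross-attention (Eq. 6): hints H attend to instances P."""
    Q, K, V = H @ W_Q.T, P @ W_K.T, P @ W_V.T
    scores = Q @ K.T / np.sqrt(Q.shape[-1])            # scaled dot products
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over instances
    return weights @ V                                 # (num_hints, d)

rng = np.random.default_rng(1)
d, n_hints, n_inst = 8, 4, 6
H = rng.standard_normal((n_hints, d))   # hint representations
P = rng.standard_normal((n_inst, d))    # instance representations
W = [rng.standard_normal((d, d)) for _ in range(3)]
H_tilde = cross_attention(H, P, *W)     # hints updated with visual information
assert H_tilde.shape == (n_hints, d)
```

Each updated hint is a convex combination of (projected) instance features, which is how visual information is dynamically fused into the linguistic representation.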
+ Table 1: Performance comparison on the KITTI360Pose dataset. Each entry reports Localization Recall (ε < 5/10/15 m) ↑.
+
+ Method | Validation k=1 | Validation k=5 | Validation k=10 | Test k=1 | Test k=5 | Test k=10
+ Text2Pos (Kolmet et al. 2022) | 0.14/0.25/0.31 | 0.36/0.55/0.61 | 0.48/0.68/0.74 | 0.13/0.21/0.25 | 0.33/0.48/0.52 | 0.43/0.61/0.65
+ RET (Ours) | 0.19/0.30/0.37 | 0.44/0.62/0.67 | 0.52/0.72/0.78 | 0.16/0.25/0.29 | 0.35/0.51/0.56 | 0.46/0.65/0.71
+ follow the cross-attention operation. With these operations, the hint representation h_i is accordingly updated to h̃_i by dynamically fusing visual information. As such, the information from the two modalities is jointly utilized with the help of cross-modal collaboration.
+ Then, we predict the offset (the direction vector from the instance center to the target location) from the updated hint:
+ t̂_i = MLP(h̃_i).    (7)
+ To utilize the matching results, the final prediction is obtained via a weighted combination of each hint's prediction:
+ ĝ = Σ_i (w_i / Σ_m w_m) (c_i + t̂_i),    (8)
+ where w_i ∈ [0, 1] is the confidence score of the match (p_i, h_j, w_i) and is set to 0 for non-matched instances. To filter out noisy matches, we consider only matches with a confidence score greater than 0.2.
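The confidence-weighted aggregation of Eq. (8), including the 0.2 confidence filter, can be sketched as follows; the coordinates, offsets and confidences below are made-up example values.

```python
import numpy as np

def predict_location(centers, offsets, conf, thresh=0.2):
    """Weighted combination of per-hint predictions (Eq. 8).

    Matches with confidence below `thresh` receive weight 0, i.e. they are
    filtered out before the weighted average."""
    w = np.where(conf > thresh, conf, 0.0)
    preds = centers + offsets                  # c_i + t_hat_i, shape (n, 2)
    return (w[:, None] * preds).sum(axis=0) / w.sum()

centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
offsets = np.array([[1.0, 1.0], [-1.0, 1.0], [1.0, -1.0]])
conf = np.array([0.6, 0.3, 0.1])   # third match filtered out (< 0.2)
g_hat = predict_location(centers, offsets, conf)
# surviving predictions are [1, 1] (w=0.6) and [3, 1] (w=0.3)
```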
+ Training and Inference
+ Training. For the coarse stage, we train the proposed RET for cross-modal retrieval with the pairwise ranking loss (Kiros, Salakhutdinov, and Zemel 2014):
+ L_coarse = Σ_{m=1}^{N_b} Σ_{n≠m} [α − ⟨C_m, T_m⟩ + ⟨C_m, T_n⟩]_+ + Σ_{m=1}^{N_b} Σ_{n≠m} [α − ⟨T_m, C_m⟩ + ⟨T_m, C_n⟩]_+,    (9)
+ where N_b is the batch size, α is a hyper-parameter that controls the separation strength, and ⟨·, ·⟩ represents the inner product between vectors. This loss function encourages the representations of a matched description-cell pair to be closer, by a margin α, than those of unmatched pairs. For the fine stage, we employ the loss in (Sarlin et al. 2020) to train the matcher, while an L2 loss is applied to train the offset regressor.
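A NumPy sketch of the bidirectional ranking loss in Eq. (9) is given below; this is an illustrative reimplementation, not the authors' code, and the default margin matches the α = 0.35 reported later in the implementation details.

```python
import numpy as np

def pairwise_ranking_loss(C, T, alpha=0.35):
    """Bidirectional max-margin ranking loss of Eq. (9).

    C, T: (N_b, d) cell and description embeddings; row m of C is matched
    with row m of T."""
    S = C @ T.T                        # S[m, n] = <C_m, T_n>
    pos = np.diag(S)                   # matched-pair similarities
    loss_c = np.maximum(0.0, alpha - pos[:, None] + S)    # negatives T_n for C_m
    loss_t = np.maximum(0.0, alpha - pos[:, None] + S.T)  # negatives C_n for T_m
    np.fill_diagonal(loss_c, 0.0)      # exclude the matched pair n = m
    np.fill_diagonal(loss_t, 0.0)
    return loss_c.sum() + loss_t.sum()

# Perfectly separated embeddings (orthonormal, matched pairs identical)
# already satisfy the margin, so the loss vanishes.
print(pairwise_ranking_loss(np.eye(3), np.eye(3)))
```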
+ Inference. We first encode all cells and queries into a joint embedding space with the proposed Relation-Enhanced Transformer. Then, for each query representation, we retrieve the top-k cells with the highest similarity. For each retrieved cell, we use the SuperGlue matcher trained in the fine stage to match each hint with an in-cell instance, which is followed by offset prediction based on the fused representations. Finally, the position prediction is given by Eq. 8.
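The coarse retrieval step at inference time amounts to ranking cell embeddings by inner-product similarity to the query embedding; a minimal sketch with placeholder 2-D embeddings:

```python
import numpy as np

def retrieve_topk(query, cell_embs, k=3):
    """Rank cells by inner-product similarity to the query and keep the top-k."""
    sims = cell_embs @ query
    order = np.argsort(-sims)          # indices sorted by descending similarity
    return order[:k], sims[order[:k]]

cells = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0], [-1.0, 0.0]])
query = np.array([1.0, 0.2])
idx, scores = retrieve_topk(query, cells, k=2)
# cells 0 and 1 are most similar to the query (scores 1.0 and 0.92)
```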
+ Experiments
+ Dataset and Implementation Details
+ Dataset Details. We evaluate our method on the recently proposed KITTI360Pose dataset (Kolmet et al. 2022), which is built upon the KITTI360 dataset (Liao, Xie, and Geiger 2021) with sampled locations and generated hints. It contains point clouds of a total of 9 scenes, covering 14,934 positions with a total area of 15.51 km². We follow (Kolmet et al. 2022) in using five scenes for training, one for validation, and the remaining three for testing. We sample cells of size 30 m with a stride of 10 m. For more details on the dataset preprocessing, please refer to our supplementary material.
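To make the cell sampling concrete, the following sketch tiles a square region with overlapping 30 m cells at a 10 m stride; the tiling of a simple [0, extent]² region is an assumption for illustration, not the paper's exact preprocessing code.

```python
import numpy as np

def sample_cells(extent, size=30.0, stride=10.0):
    """Origins of overlapping square cells covering a [0, extent]^2 region
    (cell size 30 m, stride 10 m, as described above)."""
    starts = np.arange(0.0, extent - size + 1e-9, stride)
    return [(x, y) for x in starts for y in starts]

cells = sample_cells(60.0)
# per axis the origins are 0, 10, 20, 30, so a 60 m x 60 m region yields 16 cells
```

With a stride smaller than the cell size, every location is covered by several cells, which is what makes retrieval of a positive cell feasible.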
+ Implementation Details. For the coarse stage, we train the model with the AdamW optimizer (Loshchilov and Hutter 2018) with a learning rate of 2e-4. The models are trained for a total of 18 epochs, and the learning rate is decayed by 10 at the 9-th epoch. The margin α is set to 0.35. For the fine stage, we first train the matcher with a learning rate of 5e-4 for a total of 16 epochs. Afterwards, we fix the matcher and train the regressor based on the matching results for 10 epochs with a learning rate of 1e-4. The regressor is formulated as a 3-layer Multi-Layer Perceptron. Both steps adopt an Adam (Kingma and Ba 2014) optimizer. The RET has 2 encoder layers for both the point cloud part and the linguistic part, each utilizing the Relation-enhanced Self-Attention (RSA) mechanism with 4 heads and a hidden dimension of 2048. In both stages, we encode each instance in the cell with the PointNet++ (Qi et al. 2017) provided by Text2Pos (Kolmet et al. 2022) for a fair comparison. The hint representations are obtained by concatenating learned word embeddings. More details are provided in our appendix.4
+ Comparison with the State-of-the-art
+ We compare our method with Text2Pos (Kolmet et al. 2022) on the KITTI360Pose dataset. Following (Kolmet et al. 2022), we report the top-k (k = 1/5/10) recall rate under different error ranges ε < 5/10/15 m for a comprehensive comparison. The results are shown in Table 1. Text2Pos gives a recall of 0.14 when k = 1 and ε < 5 m. In contrast, our method significantly improves the recall rate to 0.19, which amounts to a 35.7% relative improvement over the baseline. Furthermore, when we relax the localization error constraint or increase k, consistent improvements over the baseline can also be observed. For example, with ε < 5 m, our method achieves a top-5 recall rate of 0.44, which is 0.08 higher than the previous state-of-the-art. Similar improvements can also be seen on the test set, showing that our method is superior to the baseline method.
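The localization recall reported in Table 1 counts a query as a success if any of its top-k position predictions falls within ε metres of the ground truth; a sketch of that metric under this assumed protocol, with made-up coordinates:

```python
import numpy as np

def localization_recall(pred_positions, gt_positions, eps):
    """Fraction of queries whose closest top-k prediction lies within eps
    metres of the ground-truth location.

    pred_positions: (num_queries, k, 2) candidate predictions per query.
    gt_positions:   (num_queries, 2) ground-truth locations."""
    d = np.linalg.norm(pred_positions - gt_positions[:, None, :], axis=-1)
    return float((d.min(axis=1) < eps).mean())

preds = np.array([[[1.0, 1.0], [9.0, 9.0]],
                  [[20.0, 0.0], [0.0, 20.0]]])
gts = np.array([[0.0, 0.0], [0.0, 0.0]])
# query 1: best candidate is sqrt(2) m away (< 5 m); query 2: 20 m away
```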
+ Ablation Studies
+ In this section, we perform ablation studies for both stages to investigate the effectiveness of each proposed component
+ 4Code available at: https://github.com/daoyuan98/text2pos-ret
+
+ Table 2: Ablation study of the Relation-Enhanced Transformer (RET) on the KITTI360Pose validation set. "w/o X relation" indicates replacing the proposed RSA with vanilla Self-Attention in the corresponding modality.
+
+ Method | k=1 ↑ | k=3 ↑ | k=5 ↑
+ w/o both relations | 0.11 | 0.24 | 0.32
+ w/o linguistic relation | 0.14 | 0.28 | 0.37
+ w/o visual relation | 0.16 | 0.30 | 0.40
+ Full (Ours) | 0.18 | 0.34 | 0.44
+ Table 3: The effects of the number of layers of RET and the number of heads of RSA.
+
+ #Layers | #Heads | k=1 ↑ | k=3 ↑ | k=5 ↑
+ 1 | 4 | 0.16 | 0.31 | 0.40
+ 1 | 8 | 0.16 | 0.30 | 0.40
+ 2 | 2 | 0.17 | 0.32 | 0.42
+ 2 | 4 | 0.18 | 0.34 | 0.44
+ 2 | 8 | 0.16 | 0.31 | 0.40
+ 3 | 4 | 0.16 | 0.32 | 0.39
+ 3 | 8 | 0.15 | 0.29 | 0.37
+ in our method. The ablation studies for the coarse stage and the fine stage are provided separately for clear investigation.
+ Coarse Stage. We study the importance of explicit relation incorporation in the coarse stage. Since the coarse stage is formulated as a retrieval task, we use the top-1/3/5 recall rate as the evaluation metric, whereby the cell that contains the ground-truth location is defined as positive.
+ Relation Incorporation. We first study the necessity of explicit relation modeling for both point cloud and textual queries. The results are shown in Table 2. It can be observed that relation modeling contributes significantly to successful retrieval. In particular, without any relation incorporation, the top-5 recall rate is 0.32. With the explicit fusion of the linguistic relation, we observe an increase of 0.05 in recall rate under the same condition. Besides, with the incorporation of visual (point cloud instance) relations only, the top-5 recall rate can be improved by 0.08, indicating that explicit relations in the point clouds play a more important role. Finally, with both relations, we achieve an improvement of 0.12 in top-5 recall rate over the variant without any relation, showing that the visual and linguistic relations are necessary and complementary for improving cell retrieval performance.
+ RET Hyper-parameters. We also study the importance of the hyper-parameters involved in RET, namely the number of layers of RET and the number of heads of RSA. The results are shown in Table 3. It can be observed that, thanks to the strong relation modeling capacity of the proposed RET, we obtain the best performance with 2 layers and 4 heads in the RSA. Decreasing and increasing the number of layers both lead to worse performance, which may be attributed to underfitting and overfitting, respectively.
+ Fine Stage. The objective of the fine stage is to correctly match linguistic hints with point cloud instances and to regress the target location. Thus, we study the performance of the matcher and the regressor, respectively.
+ Table 4: Comparison of training strategy and matcher performance on the KITTI360Pose dataset.
+
+ Strategy | Train Precision ↑ | Train Recall ↑ | Validation Precision ↑ | Validation Recall ↑
+ joint | 98.12 | 98.16 | 86.67 | 87.59
+ cascade (ours) | 98.89 | 99.04 | 92.18 | 93.01
+ Table 5: Ablation study on the regression error of the fine stage on the KITTI360Pose dataset.
+
+ Method | Train Error ↓ | Validation Error ↓
+ w/o cascade training | 10.24 (+1.72) | 10.01 (+0.86)
+ w/o cross-attention | 9.57 (+1.05) | 9.56 (+0.41)
+ w/o confidence weighting | 9.02 (+0.50) | 9.23 (+0.08)
+ Ours | 8.52 | 9.15
+ Matcher. Following (Sarlin et al. 2020), we take precision and recall as the evaluation metrics of the matcher. With an identical matcher architecture, we investigate the impact of the training strategy on matcher performance. The results are shown in Table 4. It can be seen that, compared with joint training (Kolmet et al. 2022), our cascaded training achieves not only higher precision and recall on the training set, but also stronger generalization on the validation set. The results demonstrate that the cascaded training strategy is able to mitigate the multi-task optimization difficulty.
+ Regressor. The regressor predicts the target location based on the matching results. We study the effects of cascaded training, cross-attention-based cross-modal fusion, and confidence weighting for the final location prediction. We use the regression error as the evaluation metric and compare the different versions on both the KITTI360Pose training and validation sets. The results are shown in Table 5. Without the cascaded training strategy, the regressor achieves errors of 10.24 and 10.01 on the training and validation sets, respectively, which are 1.72 and 0.86 higher than those with cascaded training. This result suggests that our cascaded training strategy also alleviates the optimization difficulty of the regressor, which was caused by the noisy intermediate results. Furthermore, without the cross-attention mechanism, the regression error also increases by a considerable margin, showing that cross-modal collaboration is important for precise location prediction. Finally, with confidence-based weighting, we can further reduce the regression error on both the training and validation sets, suggesting that this information from the trained matcher can be further utilized to improve performance.
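For reference, the matcher's precision and recall (the metrics in Table 4) reduce to set operations over predicted and ground-truth hint-instance pairs; the pairs below are hypothetical example data.

```python
def match_precision_recall(pred_matches, gt_matches):
    """Precision/recall of hint-instance matching over (hint, instance) pairs.

    An illustrative sketch of the evaluation, not the authors' code."""
    pred, gt = set(pred_matches), set(gt_matches)
    tp = len(pred & gt)                      # correctly predicted matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    return precision, recall

# Two of three predicted matches are correct; one ground-truth match is missed.
p, r = match_precision_recall([(0, 2), (1, 3), (2, 0)], [(0, 2), (1, 3), (3, 1)])
```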
+ Visualizations
+ Embedding Space Visualization. We visualize the learned embedding space via t-SNE (Van der Maaten and Hinton 2008) in Figure 5. It can be observed that the baseline method Text2Pos (Kolmet et al. 2022) results in a less discriminative space, where positive cells are relatively far away from the query and are sometimes scattered across the embedding space. In contrast, our method draws positive cell and query representations closer in the embedding space, resulting in a more informative embedding space for retrieval.
+ [Figure 4 panels: ground-truth cells and top-1/top-2/top-3 retrieved cells for examples (a)-(f), each retrieved cell annotated with its center distance to the ground truth; class legend: Building, Pole, Traffic Light, Traffic Sign, Parking, Sidewalk, Vegetation, Terrain, Road, Wall, Garage.]
+ Figure 4: Qualitative retrieval results on the KITTI360Pose validation set. The red dot in the ground-truth cell indicates the target location. In each retrieved cell, the number in the lower right indicates the center distance between this cell and the ground truth. A green box indicates a positive cell that contains the target location, while red boxes indicate negative cells.
+ [Figure 5 panels: Text2Pos vs. Ours, with markers for the textual query, negative cells and positive cells.]
+ Figure 5: t-SNE visualization of the embedding space for the coarse stage. A cell is considered positive if it contains the location described by the query. Compared with the baseline method (Kolmet et al. 2022), our method produces better representations in which positive cells are closer to the target.
+ Qualitative Cell Retrieval Results. We show some example text-to-point-cloud retrieval results in Figure 4. For a given query, we visualize the top-3 retrieved cells. A retrieved cell is defined as positive if it contains the target location. It can be observed that our method retrieves the ground-truth cell, or cells close to it, in most cases. Sometimes, negative cells are also retrieved, e.g., the top-1 result in (a) and the top-3 result in (e). These retrieved negative cells exhibit high semantic similarity to the ground-truth cell, even though they are far away from it. We also show a failure case (f), where the retrieved cells are all negative. Even though they are far away from the target location, all these negative cells contain instances similar to those in the ground truth. These observations suggest that outdoor scenes are indeed of low diversity, indicating that successful retrieval requires highly discriminative representations to disambiguate the cells.
+ Conclusion
+ In this work, we proposed a novel method for precise text-based localization in large-scale point clouds. Our method follows a coarse-to-fine principle and pipelines this process into two stages. For the coarse stage, which is formulated as a textual-query-based cell retrieval task, we aim to improve the discriminability of both point cloud and query representations. This is achieved through explicit modeling of instance relations and implemented via a newly proposed Relation-Enhanced Transformer (RET). The core of RET is a novel Relation-enhanced Self-Attention (RSA) mechanism, whereby the instance relations of the two modalities are explicitly incorporated into the value computation process in a unified manner. For the fine stage, our method performs description-instance matching and position refinement in a cascaded way, with cross-modal information collaboration enhanced through the cross-attention mechanism. Extensive experiments on the KITTI360Pose dataset validated the effectiveness of the proposed method, which achieves new state-of-the-art performance. Additional ablation studies further corroborate the effectiveness of each component of the proposed method.
+ Acknowledgement
+ This research is supported by the National Research Foundation, Singapore under its Strategic Capability Research Centres Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.
+
+ References
+ Achlioptas, P.; Abdelreheem, A.; Xia, F.; Elhoseiny, M.; and Guibas, L. 2020. ReferIt3D: Neural listeners for fine-grained 3d object identification in real-world scenes. In ECCV. Springer.
+ Arandjelovic, R.; Gronat, P.; Torii, A.; Pajdla, T.; and Sivic, J. 2016. NetVLAD: CNN architecture for weakly supervised place recognition. In CVPR.
+ Ba, J. L.; Kiros, J. R.; and Hinton, G. E. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.
+ Brachmann, E.; Krull, A.; Nowozin, S.; Shotton, J.; Michel, F.; Gumhold, S.; and Rother, C. 2017. DSAC - differentiable RANSAC for camera localization. In CVPR.
+ Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-end object detection with transformers. In ECCV. Springer.
+ Chen, D. Z.; Chang, A. X.; and Nießner, M. 2020. ScanRefer: 3d object localization in rgb-d scans using natural language. In ECCV.
+ Cheng, B.; Schwing, A.; and Kirillov, A. 2021. Per-pixel classification is not all you need for semantic segmentation. NeurIPS.
+ Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In ICLR.
+ Fan, H.; Yang, Y.; and Kankanhalli, M. 2022. Point spatio-temporal transformer networks for point cloud video modeling. TPAMI.
+ Fan, H.; Yang, Y.; and Kankanhalli, M. S. 2021. Point 4D Transformer Networks for Spatio-Temporal Modeling in Point Cloud Videos. In CVPR.
+ Feng, M.; Li, Z.; Li, Q.; Zhang, L.; Zhang, X.; Zhu, G.; Zhang, H.; Wang, Y.; and Mian, A. 2021. Free-form description guided 3d visual graph network for object grounding in point cloud. In ICCV.
+ Gao, P.; Zheng, M.; Wang, X.; Dai, J.; and Li, H. 2021. Fast Convergence of DETR With Spatially Modulated Co-Attention. In ICCV.
+ Hausler, S.; Garg, S.; Xu, M.; Milford, M.; and Fischer, T. 2021. Patch-NetVLAD: Multi-scale fusion of locally-global descriptors for place recognition. In CVPR.
+ Hu, H.; Gu, J.; Zhang, Z.; Dai, J.; and Wei, Y. 2018. Relation Networks for Object Detection. In CVPR.
+ Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
+ Kiros, R.; Salakhutdinov, R.; and Zemel, R. S. 2014. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539.
+ Kolmet, M.; Zhou, Q.; Osep, A.; and Leal-Taixe, L. 2022. Text2Pos: Text-to-Point-Cloud Cross-Modal Localization. In CVPR.
+ Li, G.; Zhu, L.; Liu, P.; and Yang, Y. 2019. Entangled Transformer for Image Captioning. In ICCV.
+ Li, J.; Li, D.; Xiong, C.; and Hoi, S. 2022. BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation. In ICML.
+ Liao, Y.; Xie, J.; and Geiger, A. 2021. KITTI-360: A Novel Dataset and Benchmarks for Urban Scene Understanding in 2D and 3D. arXiv preprint arXiv:2109.13410.
+ Liu, S.; Li, F.; Zhang, H.; Yang, X.; Qi, X.; Su, H.; Zhu, J.; and Zhang, L. 2022. DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR. In ICLR.
+ Liu, Y.; Zhu, L.; Yamada, M.; and Yang, Y. 2020. Semantic Correspondence as an Optimal Transport Problem. In CVPR.
+ Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; and Guo, B. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV.
+ Loshchilov, I.; and Hutter, F. 2018. Decoupled Weight Decay Regularization. In ICLR.
+ Lu, J.; Batra, D.; Parikh, D.; and Lee, S. 2019. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. NeurIPS.
+ Qi, C. R.; Yi, L.; Su, H.; and Guibas, L. J. 2017. PointNet++: Deep hierarchical feature learning on point sets in a metric space. NeurIPS.
+ Sarlin, P.-E.; Cadena, C.; Siegwart, R.; and Dymczyk, M. 2019. From coarse to fine: Robust hierarchical localization at large scale. In CVPR.
+ Sarlin, P.-E.; DeTone, D.; Malisiewicz, T.; and Rabinovich, A. 2020. SuperGlue: Learning feature matching with graph neural networks. In CVPR.
+ Sattler, T.; Leibe, B.; and Kobbelt, L. 2016. Efficient & effective prioritized matching for large-scale image-based localization. TPAMI.
+ Tan, H.; and Bansal, M. 2019. LXMERT: Learning Cross-Modality Encoder Representations from Transformers. In EMNLP-IJCNLP.
+ Torii, A.; Arandjelovic, R.; Sivic, J.; Okutomi, M.; and Pajdla, T. 2015. 24/7 place recognition by view synthesis. In CVPR.
+ Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. JMLR.
+ Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. NeurIPS.
+ Wu, K.; Peng, H.; Chen, M.; Fu, J.; and Chao, H. 2021. Rethinking and improving relative position encoding for vision transformer. In ICCV.
+ Yuan, Z.; Yan, X.; Liao, Y.; Zhang, R.; Wang, S.; Li, Z.; and Cui, S. 2021. InstanceRefer: Cooperative holistic understanding for visual grounding on point clouds through instance multi-level contextual referring. In ICCV.
+ Zhang, H.; Sun, A.; Jing, W.; Nan, G.; Zhen, L.; Zhou, J. T.; and Goh, R. S. M. 2021. Video Corpus Moment Retrieval with Contrastive Learning. In SIGIR.
+ Zhou, Q.; Sattler, T.; Pollefeys, M.; and Leal-Taixe, L. 2020. To learn or not to learn: Visual localization from essential matrices. In ICRA.
+ Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; and Dai, J. 2020. Deformable DETR: Deformable Transformers for End-to-End Object Detection. In ICLR.
R9E4T4oBgHgl3EQf_g7i/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
U9E3T4oBgHgl3EQf0QsH/content/tmp_files/2301.04735v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
U9E3T4oBgHgl3EQf0QsH/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
W9FKT4oBgHgl3EQfoC5E/content/tmp_files/2301.11864v1.pdf.txt ADDED
@@ -0,0 +1,2545 @@
1
+ arXiv:2301.11864v1 [physics.flu-dyn] 27 Jan 2023
2
+ Under consideration for publication in J. Fluid Mech.
3
+ 1
4
+ Gravity can lead to multiple peaks in the early stages
5
+ of coffee ring formation
6
+ M. R. M O O R E1
7
+ AND A. W.
8
+ W R A Y2
9
+ 1Department of Mathematics, School of Natural Sciences, University of Hull, Cottingham Road, Hull, HU6 7RX,
10
+ UK
11
+ 2Department of Mathematics and Statistics, University of Strathclyde, Livingstone Tower, 26 Richmond Street,
12
+ Glasgow G1 1XH, UK
13
+ (Received ?; revised ?; accepted ?. - To be entered by editorial office)
We consider the role of gravity in solute transport when a thin droplet evaporates. Under the physically-relevant assumptions that the contact line is pinned and the solutal Péclet number, Pe, is large, we identify two fundamental regimes that depend on the size of the Bond number, Bo. When Bo = O(1), the asymptotic structure of solute transport follows directly from the surface tension-dominated regime, whereby advection drives solute towards the contact line, only to be countered by local diffusive effects, leading to the formation of the famous "coffee ring". For larger Bond numbers, we identify the distinguished limit in which Bo^{-1/2} Pe^{2/3} = O(1), where the diffusive boundary layer is comparable to the surface tension boundary layer. In each regime, we perform a systematic asymptotic analysis of the solute transport and compare our predictions to numerical simulations of the full model. Our analysis identifies the effect of gravity on the nascent coffee ring, providing quantitative predictions of the size, location and shape of the solute mass profile. Furthermore, we reveal that, for certain values of Bo, Pe and the evaporation time, a secondary peak may exist inside the classical coffee ring. We find that the onset of this secondary peak is linked to the change in behaviour of the critical point in the droplet centre. Both the onset and the peak characteristics are shown to be independent of Pe, but solutal diffusion may act to remove the secondary peak when the classical coffee ring becomes so large as to subsume it.

Key words:
1. Introduction

The evaporation of sessile droplets has received significant attention in recent years, being the subject of several major reviews (Cazabat & Guena 2010; Lohse et al. 2015; Brutin & Starov 2018; Wilson & D'Ambrosio 2023) due to its ubiquity in theoretical, experimental and industrial settings. A particular phenomenon of interest is the so-called "coffee ring effect", in which a solute in such an evaporating droplet ends up preferentially accumulated at the contact line (Deegan et al. 1997, 2000). This effect is very robust, occurring even when the solute is initially uniformly dispersed throughout the droplet, and even when the evaporative flux is not preferentially localised at the contact line (Boulogne et al. 2016).

Motivated by typical physical parameters, models of such systems typically assume that the Péclet number is sufficiently large that diffusive effects can be neglected, so that the dynamics of the solute inside the droplet are governed purely by convection (Deegan et al. 1997; Wray et al. 2021). This unphysical assumption leads to a variety of undesirable side-effects, including singular accumulations of residue and solute not being conserved (Deegan et al. 2000).

A variety of attempts have been made to resolve this problem phenomenologically, including via the incorporation of jamming effects (Popov 2005; Kaplan & Mahadevan 2015). However, jamming effects only become significant close to the particle packing fraction, and the assumptions underpinning the model fail long before this point. In particular, the assumption that diffusive effects can be ignored breaks down in a diffusive boundary layer close to the contact line (Moore et al. 2021), as might be anticipated from the singular accumulation in the naïve, convection-only model. This boundary layer and its growth and dynamics have been analysed and understood via matched asymptotics and careful numerics in situations where droplets are small, and thus exist at quasi-static equilibrium due to surface tension (Moore et al. 2022), but little is known for larger droplets where the effects of gravity are important.

Investigations of larger droplets have a long history, dating back to numerical integration of the appropriate Laplace equations by Padday (1971) and Boucher & Evans (1975), with a variety of studies via asymptotics of their shape (Rienstra 1990; O'Brien 1991; Yariv 2022) and stability (Pozrikidis 2012) in the intervening time. The effect of gravity on droplets, and especially their internal flows, has experienced a recent resurgence of interest due to the experiments of Edwards et al. (2018), which showed that the dynamics of binary droplets can be sensitively dependent on droplet inclination (and hence gravity). This has since received extensive investigation both experimentally and numerically (Li et al. 2019; Pradhan & Panigrahi 2017).

Notably, however, despite the original experiments of Deegan et al. (1997) involving large droplets, there have been relatively few investigations of particle transport inside them, with those available being principally experimental (Sandu & Fleaca 2011; Hampton et al. 2012; Devlin et al. 2016). This is perhaps because of the robustness of the coffee-stain effect: asymptotic and numerical investigations (Barash et al. 2009; Kolegov & Lobanov 2014) confirm the experimental results that the ring-stain is preserved unless additional physics is incorporated, such as continuous particle deposition (Devlin et al. 2016). However, this neglects the bulk of the story, including the dynamics of the residue over the course of the lifetime of the droplets: a critical omission in situations such as continuous particle deposition. We show in the present work that the dynamics are actually quite subtle and complex, and certainly merit detailed investigation.

The structure of this paper is therefore as follows. In §2, we describe the equations governing the fluid flow and solute transport for the problem of a thin droplet evaporating under a diffusive flux, in particular highlighting the effect of gravity in the model. We nondimensionalise the model and introduce the three key dimensionless numbers in the model: the capillary, Bond and Péclet numbers. In §3, we solve completely for the liquid flow in the limit in which the solute is dilute, so that the flow and solute transport problems decouple. We discuss pertinent features of the resulting fluid velocity and droplet shape, and in particular how these features vary with the Bond number.

The bulk of the analysis in this paper concerns the influence of gravity on solute transport within the droplet, which we analyse in the physically-relevant large-Péclet-number limit in §4. We find that there are two distinct regimes depending on the relative sizes of the Bond and Péclet numbers. In the first, where the Bond number is moderate, we extend the asymptotic analysis of Moore et al. (2021) to include the effect of gravity. However, when the Bond number is also large, a more complex asymptotic analysis is necessary, which is presented in detail in Appendix A. In each asymptotic regime, we derive predictions for the distribution of the solute mass within the droplet and compare the results to numerical simulations of the full advection-diffusion problem. In particular, while we find the expected 'nascent coffee ring' profile in the solute mass, for certain input parameters, we also find evidence of a novel phenomenon whereby a second peak may also develop in the mass profile inside the classical coffee ring.

We analyse both of these peaks in detail in §5. In particular, for the classical coffee ring, we discuss the effect of gravity in each of the two asymptotic regimes discussed in §4 and Appendix A, while for the secondary peak, we investigate the key role gravity plays in its existence and how the secondary peak may also be subsumed in the classical coffee ring for certain values of the Bond and Péclet numbers. Finally, in §6, we summarize our findings and discuss implications for various applications, as well as avenues for future study.
2. Problem configuration

We consider the configuration depicted in figure 1, in which an axisymmetric droplet of initial volume V_0^* evaporates from a solid substrate. Here and hereafter, an asterisk denotes a dimensional variable. We let (r^*, θ, z^*) be cylindrical polar coordinates centred along the line of symmetry of the droplet with the substrate lying in the plane z^* = 0: by axisymmetry, we shall assume that all the variables are independent of θ. The droplet contact line is thus circular and we assume that it is pinned throughout the drying process, which is observed in practice for a wide range of liquids for the majority of the drying time (Deegan et al. 1997; Hu & Larson 2002; Kajiya et al. 2008; Howard et al. 2023). We let r^* = R^* be the radius of the contact line. Throughout this analysis, we shall assume that the droplet is thin, which reduces to the assumption that

    0 < \delta = \frac{V_0^*}{R^{*3}} \ll 1.    (2.1)

As we discuss presently, the thin-droplet assumption allows us to greatly simplify the flow and solute transport models; the assumption has been extensively validated and has been shown to be reasonable even for droplets that should realistically fall outside of this regime (Larsson & Kumar 2022).

The droplet consists of a liquid of constant density and viscosity, denoted by ρ^* and µ^*, respectively. The droplet free surface is denoted by z^* = h^*(r^*, t^*) and the air-water surface tension coefficient, σ^*, is assumed to be constant.
Figure 1: A side-on view of a solute-laden droplet evaporating under an evaporative flux E^*(r^*) from a solid substrate that lies in the plane z^* = 0. The droplet is axisymmetric and the contact line is assumed to be pinned on the substrate at r^* = R^*; the wetted footprint has diameter 2R^*. The droplet free surface is denoted by z^* = h^*(r^*, t^*). The solute is assumed to be inert and sufficiently dilute that the flow of liquid in the droplet is decoupled from the solute transport.
The liquid evaporates into the surrounding air and we assume that the evaporative process is quasi-steady, which is a reasonable assumption for a wide range of liquid-substrate configurations (Hu & Larson 2002). While there are a number of different viable evaporation models depending on the physical and chemical characteristics of the problem (Murisic & Kondic 2011), for the purposes of this analysis, we assume that the dominant process of vapour transport from the droplet surface is diffusion, so that the evaporative flux E^*(r^*) is given by

    E^*(r^*) = \frac{2D^*(c_s^* - c_\infty^*)}{\pi\sqrt{R^{*2} - r^{*2}}},    (2.2)

where D^* is the diffusion coefficient and c_s^*, c_\infty^* are the surface and ambient vapour concentrations, respectively (Deegan et al. 2000; Murisic & Kondic 2011).

The droplet contains an inert solute of initially uniform concentration φ_0^*. The solute is assumed to be sufficiently dilute that the flow and transport problems completely decouple. We shall discuss the validity of the dilute assumption further in §6.
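Although (2.2) is singular at the contact line, the singularity is integrable, so the total evaporation rate over the drop is finite. The sketch below (an illustrative check, with assumed parameter values that are not taken from the paper) verifies this numerically: the substitution r^* = R^* sin ψ removes the square-root singularity, and the quadrature recovers the closed-form total 4D^*(c_s^* − c_\infty^*)R^* quoted later in (2.10).

```python
import math

# Illustrative (assumed) values: water vapour in air at room temperature.
D = 2.4e-5        # vapour diffusivity [m^2/s] (assumed)
dc = 8.5e-3       # c_s - c_inf, vapour concentration difference [kg/m^3] (assumed)
R = 1.0e-3        # contact-line radius [m] (assumed)

def total_flux(n=200_000):
    """Integrate F = int_0^R 2*pi*r*E(r) dr with r = R*sin(psi).

    E(r) = 2*D*dc / (pi*sqrt(R^2 - r^2)); after the substitution the
    integrand is smooth: 2*pi * (2*D*dc/pi) * R * sin(psi) d(psi).
    """
    dpsi = (math.pi / 2.0) / n
    s = 0.0
    for i in range(n):
        psi = (i + 0.5) * dpsi       # midpoint rule
        s += math.sin(psi) * dpsi
    return 2.0 * math.pi * (2.0 * D * dc / math.pi) * R * s

F_numeric = total_flux()
F_exact = 4.0 * D * dc * R           # closed form appearing in (2.10)
print(F_numeric, F_exact)
```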
2.1. Flow model

The droplet is assumed to be sufficiently thin and the evaporation-induced flow sufficiently slow that the flow is governed by the lubrication equations

    \frac{\partial h^*}{\partial t^*} + \frac{1}{r^*}\frac{\partial}{\partial r^*}(r^* h^* u^*) = -\frac{E^*}{\rho^*},    (2.3)

    u^* = -\frac{h^{*2}}{3\mu^*}\frac{\partial p^*}{\partial r^*},    (2.4)

    p^* = p_{atm}^* - \rho^* g^* (z^* - h^*) - \sigma^*\frac{1}{r^*}\frac{\partial}{\partial r^*}\left(r^*\frac{\partial h^*}{\partial r^*}\right),    (2.5)

for 0 < r^* < R^*, t^* > 0, where u^*(r^*, t^*) is the depth-averaged radial fluid velocity, p^*(r^*, z^*, t^*) is the liquid pressure and p_{atm}^* denotes atmospheric pressure (Hocking 1983; Deegan et al. 2000; Oliver et al. 2015).
Equations (2.3)–(2.5) must be solved subject to the symmetry conditions

    r^* h^* u^* = \frac{\partial h^*}{\partial r^*} = 0   at   r^* = 0,    (2.6a,b)

and the fact that the free surface touches down at, and we require no flux of liquid through, the pinned contact line, that is,

    h^* = r^* h^* u^* = 0   at   r^* = R^*.    (2.7a,b)

We close the problem by specifying the initial droplet profile, that is,

    h^*(r^*, 0) = h_0^*(r^*)   for   0 < r^* < R^*.    (2.8)

It is worth noting at this stage that, while this initial condition is needed to fully specify the mathematical problem, in our analysis we do not explicitly use the initial condition (2.8). In what follows, it is assumed that the rate of evaporation is sufficiently slow that the droplet quickly relaxes under capillary action to the quasi-steady profile found in §3 (see, for example, Lacey (1982); De Gennes (1985); Oliver et al. (2015)). Thus, we shall for simplicity assume that h_0(r) is of the same functional form as the free surface we find in §3. While this assumption is reasonable for a wide range of applications, for extremely rapid evaporation (for example, laser-induced evaporation, Volkov & Strizhak (2019)), a more careful consideration of the evolution after deposition would be needed.
Assuming the contact line is pinned, the volume of the droplet, V^*(t^*), is given by

    V^*(t^*) = 2\pi \int_0^{R^*} r^* h^*(r^*, t^*)\,dr^*,   V^*(0) = V_0^*.    (2.9)

The total rate of mass loss due to evaporation, F^*(t^*), is given by

    F^*(t^*) = 2\pi \int_0^{R^*} r^* E^*(r^*)\,dr^* = 4D^*(c_s^* - c_\infty^*)R^*.    (2.10)

Thus, conservation of mass in the liquid phase is

    \frac{dV^*}{dt^*} = -\frac{F^*}{\rho^*} = -\frac{4D^*(c_s^* - c_\infty^*)R^*}{\rho^*},    (2.11)

so that

    V^*(t^*) = V_0^* - \frac{4D^*(c_s^* - c_\infty^*)R^* t^*}{\rho^*}.    (2.12)

In particular, the dryout time, that is, the time when the drop has fully evaporated, is

    t_f^* = \frac{\rho^* V_0^*}{4D^*(c_s^* - c_\infty^*)R^*}.    (2.13)
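As a sanity check on the linear-in-time volume law, the dryout time (2.13) can be evaluated for illustrative parameter values (all assumed here, not taken from the paper); the resulting timescale of a couple of minutes is plausible for a sub-microlitre water droplet.

```python
# Illustrative (assumed) dimensional values for a small water droplet;
# none of these numbers are taken from the paper.
rho = 998.0       # liquid density [kg/m^3]
D = 2.4e-5        # vapour diffusivity in air [m^2/s]
dc = 8.5e-3       # c_s - c_inf [kg/m^3] (roughly 50% relative humidity)
R = 1.0e-3        # contact-line radius [m]
V0 = 0.1e-9       # initial volume [m^3] (0.1 microlitres)

# Dryout time from (2.13).
tf = rho * V0 / (4.0 * D * dc * R)

# Consistency with the linear volume evolution (2.12): V(tf) should vanish.
V_at_tf = V0 - 4.0 * D * dc * R * tf / rho
print(tf, V_at_tf)
```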
2.2. Solute model

The droplet is assumed to be sufficiently thin that the transport of the solute is governed by the depth-averaged advection-diffusion equation

    \frac{\partial}{\partial t^*}(h^*\phi^*) + \frac{1}{r^*}\frac{\partial}{\partial r^*}\left[r^*\left(h^* u^* \phi^* - D_\phi^* h^* \frac{\partial \phi^*}{\partial r^*}\right)\right] = 0    (2.14)

for 0 < r^* < R^*, t^* > 0, where φ^*(r^*, t^*) is the depth-averaged solute concentration and D_φ^* is the solutal diffusion coefficient (Wray et al. 2014; Pham & Kumar 2017; Moore et al. 2021).

While there is an acknowledged effect of the solute particles eventually being trapped at and transported along the free surface (Kang et al. 2016; D'Ambrosio 2022), this effect is less pronounced for thin droplets, where the capture tends to occur closer to the contact line due to the stronger outward radial flow. Thus, we shall neglect its effects here, as our study concerns the interplay between gravity, surface tension and solute advection/diffusion. A more focused analysis of the final deposit profile would certainly need to account for such effects.

Equation (2.14) must be solved subject to the symmetry condition

    \frac{\partial \phi^*}{\partial r^*} = 0   at   r^* = 0,    (2.15)

and the condition that there can be no flux of solute particles through the pinned contact line,

    r^*\left(h^* u^* \phi^* - D_\phi^* h^* \frac{\partial \phi^*}{\partial r^*}\right) = 0   at   r^* = R^*.    (2.16)

Finally, we impose the initially uniform distribution of solute throughout the droplet, so that

    \phi^*(r^*, 0) = \phi_0^*   for   0 < r^* < R^*.    (2.17)
2.3. Non-dimensionalization

We assume that the fluid velocity is driven by evaporation and, for now, we retain both gravity and surface tension, so that the pertinent scalings are

    (r^*, z^*) = R^*(r, \delta z),   u^* = \frac{D^*(c_s^* - c_\infty^*)}{\delta\rho^* R^*}u,   t^* = t_f^* t,   \phi^* = \phi_0^*\phi,
    (h^*, h_0^*) = \delta R^*(h, h_0),   p^* = p_{atm}^* + \frac{\mu^* D^*(c_s^* - c_\infty^*)}{\delta^3\rho^* R^{*2}}p,   V^* = V_0^* V.    (2.18)

Note, in particular, that the choice of timescale fixes the dimensionless dryout time to be t = 1.

Upon substituting the scalings (2.18) into (2.3)–(2.5), we see that

    \frac{\partial h}{\partial t} + \frac{1}{4r}\frac{\partial}{\partial r}(rhu) = -\frac{1}{2\pi\sqrt{1 - r^2}},    (2.19)

    u = \frac{h^2}{3Ca}\frac{\partial}{\partial r}\left[-Bo\,h + \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial h}{\partial r}\right)\right],    (2.20)

for 0 < r < 1, 0 < t < 1, where the capillary and Bond numbers are defined by

    Ca = \frac{\mu^* D^*(c_s^* - c_\infty^*)}{\delta^4\rho^* R^*\sigma^*}   and   Bo = \frac{\rho^* g^* R^{*2}}{\sigma^*},    (2.21)

respectively.
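For orientation, the definitions in (2.21) can be evaluated for illustrative parameter values (all assumed, not taken from the paper); for a millimetric water droplet they suggest a small capillary number, with the Bond number growing from order one as the contact-line radius increases.

```python
# Illustrative (assumed) values for water in air; not taken from the paper.
mu = 1.0e-3       # viscosity [Pa s]
D = 2.4e-5        # vapour diffusivity [m^2/s]
dc = 8.5e-3       # c_s - c_inf [kg/m^3]
delta = 0.1       # droplet aspect ratio, from (2.1)
rho = 998.0       # density [kg/m^3]
sigma = 0.072     # surface tension [N/m]
g = 9.81          # gravitational acceleration [m/s^2]

def Ca(R):
    # Capillary number, (2.21).
    return mu * D * dc / (delta**4 * rho * R * sigma)

def Bo(R):
    # Bond number, (2.21).
    return rho * g * R**2 / sigma

for R in (1.0e-3, 5.0e-3):
    print(R, Ca(R), Bo(R))
```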
Under the scalings (2.18), the symmetry conditions (2.6) become

    rhu = \frac{\partial h}{\partial r} = 0   at   r = 0,    (2.22a,b)

while the contact line conditions (2.7) are

    h = rhu = 0   at   r = 1.    (2.23a,b)

The initial condition (2.8) becomes

    h(r, 0) = h_0(r)   for   0 < r < 1.    (2.24)

Finally, the dimensionless form of the conservation of liquid volume conditions (2.9) and (2.12) is

    1 - t = 2\pi \int_0^1 r h(r, t)\,dr.    (2.25)

After scaling, the solute transport equation (2.14) becomes

    \frac{\partial}{\partial t}(h\phi) + \frac{1}{4r}\frac{\partial}{\partial r}\left[r\left(hu\phi - \frac{h}{Pe}\frac{\partial \phi}{\partial r}\right)\right] = 0    (2.26)

for 0 < r < 1, 0 < t < 1, where the solutal Péclet number is

    Pe = \frac{D^*(c_s^* - c_\infty^*)}{\delta\rho^* D_\phi^*}.    (2.27)
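The solutal Péclet number (2.27) is typically large for colloidal solutes. A rough illustrative estimate (all values assumed, not taken from the paper, with D_φ^* taken from the Stokes-Einstein relation for a 100 nm particle) puts Pe in the hundreds:

```python
import math

# Illustrative (assumed) values; not taken from the paper.
D = 2.4e-5       # vapour diffusivity [m^2/s]
dc = 8.5e-3      # c_s - c_inf [kg/m^3]
delta = 0.1      # droplet aspect ratio
rho = 998.0      # liquid density [kg/m^3]

# Stokes-Einstein estimate of the solutal diffusivity for a 100 nm particle.
kB, T, mu, a = 1.38e-23, 293.0, 1.0e-3, 1.0e-7
D_phi = kB * T / (6.0 * math.pi * mu * a)

Pe = D * dc / (delta * rho * D_phi)   # solutal Peclet number, (2.27)
print(D_phi, Pe)
```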
The symmetry condition (2.15) and the no-flux boundary condition (2.16) become

    \frac{\partial \phi}{\partial r} = 0   at   r = 0,    (2.28)

and

    r\left(hu\phi - \frac{h}{Pe}\frac{\partial \phi}{\partial r}\right) = 0   at   r = 1,    (2.29)

respectively. Finally, the initial condition (2.17) becomes

    \phi(r, 0) = 1   for   0 < r < 1.    (2.30)
2.4. Integrated mass variable formulation

The assumption that the solute is dilute decouples the flow and solute transport problems, so that we may solve for h and u from (2.19)–(2.25) independently of the solute concentration, φ. We shall discuss the resulting flow solution shortly in §3.

First, however, we present a reformulation of the solute transport problem (2.26)–(2.30), which will greatly aid us in our asymptotic and numerical investigations. In this, we follow Moore et al. (2021, 2022) by introducing the integrated mass variable

    M(r, t) = \int_0^r s\,h(s, t)\,\phi(s, t)\,ds.    (2.31)

By integrating the advection-diffusion equation (2.26) from 0 to r and applying the no-flux condition (2.29), we find that

    \frac{\partial M}{\partial t} + \left[\frac{u}{4} + \frac{1}{4Pe}\left(\frac{1}{r} + \frac{1}{h}\frac{\partial h}{\partial r}\right)\right]\frac{\partial M}{\partial r} - \frac{1}{4Pe}\frac{\partial^2 M}{\partial r^2} = 0   for   0 < r, t < 1.    (2.32)

This must be solved subject to the boundary conditions

    M(0, t) = 0,   M(1, t) = \frac{1}{2\pi}   for   t > 0,    (2.33a,b)

where the latter condition dictates that mass is conserved along a radial ray, and replaces the no-flux condition (2.29). The initial condition (2.30) becomes

    M(r, 0) = \int_0^r s\,h(s, 0)\,ds   for   0 < r < 1.    (2.34)

Finally, we note that, once we have determined the integrated mass variable from (2.32)–(2.34), the solute mass m = φh can then be retrieved from

    m = \frac{1}{r}\frac{\partial M}{\partial r}.    (2.35)
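The boundary value M(1, t) = 1/(2π) in (2.33) encodes global conservation of solute. A quick illustrative check, with φ = 1 and the paraboloidal cap h = (2/π)(1 − r²) (the Bo → 0 limit of the profile found in §3, at t = 0), confirms the normalisation:

```python
import math

def h0(r):
    # Spherical-cap (Bo -> 0) profile at t = 0 in the thin-droplet scaling.
    return (2.0 / math.pi) * (1.0 - r * r)

def M(r_end, n=100_000):
    # Integrated mass variable (2.31) with phi = 1, via the midpoint rule.
    dr = r_end / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * dr
        total += s * h0(s) * dr
    return total

M1 = M(1.0)
print(M1, 1.0 / (2.0 * math.pi))   # the two values should agree
```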
3. Flow solution in the small-Ca limit

We now suppose that surface tension dominates viscous effects in the flow problem, that is, Ca ≪ 1 (note that, with Ca as defined in (2.21), neglecting the left-hand side of (2.20) corresponds to this limit). Importantly, this means that the problems for the free surface profile and the flow velocity decouple, an assumption that is valid for a wide range of different liquids and evaporation models in practice (Moore et al. 2021, 2022). Unlike these previous studies, however, we shall retain gravity in (2.20) to investigate what role it plays in the formation of the nascent coffee ring.

To this end, we neglect the left-hand side of (2.20), so that upon integrating and applying the symmetry condition (2.22), the contact line condition (2.23a) and the conservation of liquid volume condition (2.25), we deduce that

    h(r, t) = \frac{1 - t}{\pi}\frac{I_0(\sqrt{Bo})}{I_2(\sqrt{Bo})}\left(1 - \frac{I_0(\sqrt{Bo}\,r)}{I_0(\sqrt{Bo})}\right),    (3.1)

where I_ν(z) is the modified Bessel function of the first kind of order ν.

With the free surface found, the velocity is determined immediately from (2.19) and the no-flux condition (2.23b) to be

    u(r, t) = \frac{1}{rh}\left\{\frac{2}{\pi}\sqrt{1 - r^2} + \frac{4 I_0(\sqrt{Bo})}{\pi I_2(\sqrt{Bo})}\left[\frac{r^2 - 1}{2} + \frac{1}{\sqrt{Bo}\,I_0(\sqrt{Bo})}\left(I_1(\sqrt{Bo}) - r I_1(\sqrt{Bo}\,r)\right)\right]\right\}.    (3.2)

Notably, as in the surface tension-dominated regime where Bo → 0, time is separable in both the free surface and fluid velocity profiles, and so merely acts to scale the functional form. In particular, this means that the streamlines and pathlines coincide, which we shall exploit when considering the regime in which solutal diffusion is negligible in §5.2.
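Since the profile (3.1) appears repeatedly below, a short numerical sketch is useful. Here the modified Bessel functions are evaluated from their power series (stdlib only), and the volume constraint (2.25) is used as a check; this is an illustrative sketch, not code from the paper.

```python
import math

def besseli(nu, x):
    # Modified Bessel function I_nu(x) (integer nu) via its power series,
    # with a recursive term update and early termination.
    term = (x / 2.0) ** nu / math.factorial(nu)
    total, k = term, 1
    while term > 1e-17 * total:
        term *= (x / 2.0) ** 2 / (k * (k + nu))
        total += term
        k += 1
    return total

def h(r, t, Bo):
    # Quasi-steady free surface (3.1).
    b = math.sqrt(Bo)
    return ((1.0 - t) / math.pi) * (besseli(0, b) / besseli(2, b)) \
        * (1.0 - besseli(0, b * r) / besseli(0, b))

def volume(t, Bo, n=4000):
    # Dimensionless volume 2*pi*int_0^1 r h dr, which (2.25) says equals 1 - t.
    dr = 1.0 / n
    return 2.0 * math.pi * sum((i + 0.5) * dr * h((i + 0.5) * dr, t, Bo) * dr
                               for i in range(n))

vols = [volume(0.25, Bo) for Bo in (0.1, 10.0, 100.0)]
print(vols)   # each should be close to 1 - t = 0.75
```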
We display the scaled forms of the free surface and fluid velocity for various values of the Bond number in figure 2(a,b). For the droplet free surface profile, we see the expected transition from the spherical cap for Bo → 0 (Deegan et al. 2000) to the flat 'pancake' droplet for Bo → ∞ (Rienstra 1990). For each Bond number, the velocity is singular at the contact line, as expected for a diffusive evaporative flux (see, for example, Deegan et al. (2000)). We see that as the effect of gravity increases, the sharp increase in u occurs closer to the contact line, corresponding to the progressively smaller region in which surface tension effects are important.

Finally, since this will be important in our discussions of the secondary peaks seen in the solute mass profile in §5.2, we show the divergence of the fluid velocity in figure 2(c). For small Bond numbers, the divergence is monotonically increasing with r and, as with the velocity, singular at the contact line. However, for moderate and large Bond numbers, Bo ≳ 15, we see a clear change of behaviour, with a region of non-monotonic behaviour in the droplet interior. This behaviour is accentuated as Bo → ∞.
For future reference, the asymptotic behaviours of the free surface and fluid velocity as r → 1⁻ for Bo = O(1) are given by

    h = \theta_c(t; Bo)(1 - r) + O((1 - r)^2),    (3.3)

    u = \frac{2\chi}{\theta_c(t; Bo)}(1 - r)^{-1/2} + O\left((1 - r)^{1/2}\right),    (3.4)
Figure 2: (a) The quasi-steady droplet free surface, (b) the fluid velocity, and (c) the divergence of the velocity, displayed for Bo = 0.1 (black), 1 (dark purple), 10 (blue), 20 (cyan), 50 (green) and 100 (yellow). Notably, we see the transition from the spherical cap to the 'pancake' droplet profile as the effect of gravity increases. The divergence of the fluid velocity also shows a transition from a monotonic to a non-monotonic profile as the Bond number increases.
where

    \theta_c(t; Bo) = -\lim_{r \to 1^-}\frac{\partial h}{\partial r} = (1 - t)\psi(Bo),   \psi(Bo) = \frac{\sqrt{Bo}\,I_1(\sqrt{Bo})}{\pi I_2(\sqrt{Bo})},    (3.5)

is the leading-order contact angle in the thin-droplet limit and

    \chi = \frac{\sqrt{2}}{\pi}    (3.6)

is the dimensionless coefficient of the inverse square root singularity at the contact line in the evaporative flux (2.2). Note that we have chosen this notation to highlight the similarities with the previous analysis of Moore et al. (2022), who consider a surface tension-dominated droplet of arbitrary contact set.
On the other hand, if we take 1 − r = O(1) and consider the large-Bo limit of (3.1), (3.2), we find that

    h = h_0(t) + Bo^{-1/2}h_1(t) + O(Bo^{-1}),    (3.7)
    u = u_0(r, t) + Bo^{-1/2}u_1(r, t) + O(Bo^{-1}),    (3.8)

as Bo → ∞, where

    h_0(t) = \frac{1 - t}{\pi},   h_1(t) = \frac{2(1 - t)}{\pi},    (3.9a,b)

and

    u_0(r, t) = \frac{2\sqrt{1 - r^2}}{r(1 - t)}\left(1 - \sqrt{1 - r^2}\right),   u_1(r, t) = \frac{4}{r(1 - t)}\left(1 - \sqrt{1 - r^2}\right).    (3.10a,b)
Notably, in the droplet bulk, the droplet free surface h is flat to all orders: the aforementioned characteristic of 'pancake' droplets associated with large Bond numbers (Rienstra 1990). These expansions break down close to the contact line where surface tension effects become important. We find that, for 1 − r = Bo^{-1/2}\bar{r}, we have

    h = \bar{h}_0(\bar{r}, t) + Bo^{-1/2}\bar{h}_1(\bar{r}, t) + O(Bo^{-1}),    (3.11)
    u = Bo^{-1/4}\left[\bar{u}_0(\bar{r}, t) + Bo^{-1/4}\bar{u}_1(\bar{r}, t) + Bo^{-1/2}\bar{u}_2(\bar{r}, t) + O(Bo^{-3/4})\right]    (3.12)

as Bo → ∞, where

    \bar{h}_0(\bar{r}, t) = \frac{1 - t}{\pi}\left(1 - e^{-\bar{r}}\right),    (3.13)

    \bar{h}_1(\bar{r}, t) = \frac{2(1 - t)}{\pi}\left(1 - e^{-\bar{r}}\right) - \frac{(1 - t)\,\bar{r}}{2\pi}e^{-\bar{r}},    (3.14)
and

    \bar{u}_0(\bar{r}, t) = \frac{2\sqrt{2\bar{r}}}{(1 - t)(1 - e^{-\bar{r}})},    (3.15)

    \bar{u}_1(\bar{r}, t) = \frac{4}{1 - t} - \frac{4\bar{r}}{(1 - t)(1 - e^{-\bar{r}})},    (3.16)

    \bar{u}_2(\bar{r}, t) = \frac{3\bar{r}^{3/2}}{\sqrt{2}(1 - t)(1 - e^{-\bar{r}})} - \frac{4\sqrt{2\bar{r}}}{(1 - t)(1 - e^{-\bar{r}})} + \frac{\sqrt{2}\,\bar{r}^{3/2}e^{-\bar{r}}}{(1 - t)(1 - e^{-\bar{r}})^2}.    (3.17)

We note here that, as \bar{r} → 0, we retrieve the expected inverse square root singularity in the fluid velocity.
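The bulk expansion (3.7) with (3.9) is easy to test against the exact profile (3.1): at a fixed interior station, the relative error of the two-term 'pancake' approximation should shrink as Bo grows. A stdlib-only sketch (illustrative parameter choices):

```python
import math

def besseli(nu, x):
    # Power series for I_nu(x) with recursive term update (safe for x ~ 20).
    term = (x / 2.0) ** nu / math.factorial(nu)
    total, k = term, 1
    while term > 1e-17 * total:
        term *= (x / 2.0) ** 2 / (k * (k + nu))
        total += term
        k += 1
    return total

def h_exact(r, t, Bo):
    # Full quasi-steady profile (3.1).
    b = math.sqrt(Bo)
    return ((1 - t) / math.pi) * (besseli(0, b) / besseli(2, b)) \
        * (1 - besseli(0, b * r) / besseli(0, b))

def h_two_term(t, Bo):
    # Bulk 'pancake' expansion (3.7) with (3.9): h ~ h0 + Bo^(-1/2) h1.
    return ((1 - t) / math.pi) * (1.0 + 2.0 / math.sqrt(Bo))

t, r = 0.4, 0.5
errs = [abs(h_exact(r, t, Bo) - h_two_term(t, Bo)) / h_exact(r, t, Bo)
        for Bo in (100.0, 400.0)]
print(errs)   # the relative error should fall as Bo grows
```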
4. Solute transport in the large-Pe limit

Having fully determined the leading-order flow, we now seek to understand the transport of solute within the drop and to make predictions about the early stages of coffee ring formation. We follow the analyses of Moore et al. (2021, 2022) by considering the physically-relevant regime in which Pe ≫ 1. In this regime, in the bulk of the droplet, advection dominates solutal diffusion, with the latter only being relevant close to the contact line.

Previous studies of this problem have concentrated on surface tension-dominated drops (i.e. Bo → 0) and have shown how the competition between solutal advection and diffusion near the contact line leads to the early stages of coffee ring formation in drying droplets. In this analysis, we wish to investigate how this behaviour changes as we allow Bo to vary, which we pursue using a hybrid asymptotic-numerical approach. There are naturally several different asymptotic regimes depending on the relative sizes of Bo and Pe, but these broadly fall into two categories:

 i) intermediate Bond number, Bo = O(1), where the asymptotic structure of the solute transport depends solely on the large Péclet number;
 ii) large Bond number, Bo ≫ 1, where the asymptotic structure of the solute transport now depends on the relative sizes of Bo and Pe.

In the first regime, where Bo = O(1) and Pe ≫ 1, the asymptotic structure of the flow is a natural extension of the surface tension-dominated case considered in Moore et al. (2021). In the droplet bulk, where 1 − r = O(1), solute advection dominates diffusion. However, close to the contact line, a balance between solute advection and diffusion occurs when

    rhu\phi \sim \frac{rh}{Pe}\frac{\partial \phi}{\partial r} \implies 1 - r = O(Pe^{-2}).    (4.1)

We discuss the asymptotic solution for this regime in §4.1.

In the second regime, there are several different possibilities depending on the relative sizes of the boundary layer where surface tension enters the flow profile and the solutal diffusion boundary layer. The richest distinguished asymptotic limit is that in which these boundary layers are comparable. As detailed in §3, for large Bond number the free surface is flat in the bulk of the droplet, with the effect of surface tension restricted to a boundary layer at the contact line of size 1 − r = O(Bo^{-1/2}), where h = O(1) and u = O(Bo^{-1/4}). Turning to the solute transport equation (2.26), since h is order unity and u is square root bounded in this region, advection and diffusion are comparable when

    1 - r = O(Pe^{-2/3}).    (4.2)

Hence, in the most general limit, in which the sizes of the two boundary layers are comparable, we have

    \alpha = Bo^{-1/2}Pe^{2/3} = O(1).    (4.3)

The asymptotic analysis in this regime is somewhat more involved, so for brevity, we present the details in Appendix A.
4.1. Asymptotic solution when Bo = O(1)

In this section, we present the asymptotic solution of the solute transport problem as Pe → ∞ when Bo = O(1). The analysis herein is a natural extension of Moore et al. (2021). For the purposes of this section, we shall use the concentration form of the advection-diffusion equation (2.26)–(2.30) and, in particular, find the solution in terms of the solute mass m = φh, where h is given by (3.1).
4.1.1. Outer region

In the droplet bulk, where 1 − r = O(1), we seek a solution of the form m = m_0(r, t) + O(Pe^{-1}) as Pe → ∞. Substituting into (2.26), (2.30), we find that

    \frac{\partial m_0}{\partial t} + \frac{1}{4r}\frac{\partial}{\partial r}(r m_0 u) = 0   for   0 < r < 1,\ t > 0,    (4.4)

where u is given by (3.2), subject to m_0(r, 0) = h(r, 0). This is the usual advection equation, with solution given by

    m_0(r, t) = \frac{h(R, 0)}{J(R, t)},    (4.5)

where R is the initial location of the point that is at r at time t and J(R, t) is the Jacobian of the Eulerian-Lagrangian transformation, which satisfies Euler's identity,

    \frac{D}{Dt}(\log J) = \frac{1}{4r}\frac{\partial}{\partial r}(ru),   J(R, 0) = 1,    (4.6)

where D/Dt is the convective derivative.

A straightforward asymptotic analysis of (4.4) reveals that

    u\frac{\partial m_0}{\partial r} \sim -\frac{m_0}{r}\frac{\partial}{\partial r}(ru)    (4.7)

as r → 1⁻, so that m_0 = O(\sqrt{1 - r}), and hence the concentration φ_0 is square root singular. This sharp local concentration increase necessitates the inclusion of a diffusive boundary layer.
792
+ sharp local concentration increase necessitates the inclusion of a diffusive boundary layer.
793
+ 4.1.2. Inner region
794
+ Close to the contact line, we set
795
+ r = 1 − Pe−2ˆr,
796
+ h = Pe−2ˆh,
797
+ u = Peˆu,
798
+ m = Pe2 ˆm,
799
+ (4.8)
800
+ where the last scaling on the mass comes from global conservation of solute considerations (Moore et al.
801
+ 2021). We seek an asymptotic solution of the form ˆm = ˆm0(ˆr, t) + O(Pe−1) and find to leading order
802
+
803
+ ∂ˆr
804
+ ��
805
+
806
+ θc(t; Bo)
807
+
808
+ ˆr
809
+ − 1
810
+ ˆr
811
+
812
+ ˆm0 + ∂ ˆm0
813
+ ∂ˆr
814
+
815
+ = 0
816
+ in
817
+ ˆr > 0, t > 0
818
+ (4.9)
819
+ such that
820
+
821
+
822
+ θc(t; Bo)
823
+
824
+ ˆr
825
+ − 1
826
+ ˆr
827
+
828
+ ˆm0 + ∂ ˆm0
829
+ ∂ˆr
830
+ = 0
831
+ for
832
+ ˆr = 0.
833
+ (4.10)
834
+ It is straightforward to show that the solution to (4.9)–(4.10) is given by
835
+ ˆm0(ˆr, t) = C(t; Bo)ˆrexp
836
+
837
+
838
+
839
+ θc(t; Bo)
840
+
841
+ ˆr
842
+
843
+ ,
844
+ (4.11)
845
+ where, by pursuing a similar matching process to Moore et al. (2022), we find that the coefficient C(t; Bo)
846
+ is given by
847
+ C(t) =
848
+ 64χ4
849
+ 3θc(t; Bo)4 N(t; Bo),
850
+ (4.12)
851
+ where N(t; Bo) is the leading-order accumulated mass advected into the contact line region up to time t,
852
+ viz.
853
+ N(t; Bo) = 1
854
+ 4
855
+ � t
856
+ 0
857
+ m0(r, τ)u(r, τ) dτ.
858
+ (4.13)
859
+ It is worth noting that this solution follows directly from the Bo = 0 regime discussed in Moore et al. (2021,
860
+ 2022), with the alterations due to gravity entering into the accumulated mass flux into the contact line and
861
+ the leading order contact angle. In particular, we note that in the limit Bo → 0, since ψ = 4/π + O(Bo), this
862
+ yields the expected form found in the surface tension-dominated problem in Moore et al. (2022) (see §3.7.2
863
+ therein). We display the accumulated mass flux and the local contact angle for a wide range of Bond numbers
864
+ in figure 3. We see that as the influence of gravity increases, the acccumulated mass flux into the contact
865
+ line at a fixed percentage of the evaporation time is reduced from the surface tension-dominated regime. On
866
+ the other hand, the local contact angle increases, commensurate with the droplet profile transitioning from
867
+
+ M. R. Moore & A. W. Wray
+ Figure 3: (a) The accumulated mass flux, N(t; Bo) as defined by (4.13) and (b) the leading-order local
+ contact angle θc(t; Bo) as defined by (3.5), for Bo = 10−2 (purple), Bo = 10−1 (dark blue), Bo = 1
+ (light blue), Bo = 10 (green) and Bo = 102 (yellow).
+ a spherical cap to a ‘pancake’ droplet. We note that this combined behaviour leads to C(t; Bo) decreasing
+ as Bo increases. We discuss how these findings impact coffee ring formation in more detail in §5.1.1.
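As a quick numerical sanity check on the structure of the inner problem, the sketch below verifies that a profile of the form in (4.11) annihilates the combined advective-diffusive flux appearing in (4.9)-(4.10). The exponent factor 4χ/θc is a reconstruction from the garbled source, and the values of chi, theta and C are purely illustrative, not fitted values.

```python
import math

# Check that m0(rh) = C * rh * exp(-(4*chi/theta)*sqrt(rh)) gives zero combined
# flux v(rh)*m0 + dm0/drh when the effective inner velocity is
# v(rh) = 2*chi/(theta*sqrt(rh)) - 1/rh, i.e. the zero-flux structure of
# (4.9)-(4.10). chi, theta and C are illustrative values only.

chi, theta, C = 0.45, 1.2, 1.0
a = 4.0 * chi / theta

def m0(rh):
    return C * rh * math.exp(-a * math.sqrt(rh))

def v(rh):
    return 2.0 * chi / (theta * math.sqrt(rh)) - 1.0 / rh

def flux(rh, drh=1e-6):
    dm0 = (m0(rh + drh) - m0(rh - drh)) / (2.0 * drh)  # central difference
    return v(rh) * m0(rh) + dm0

residuals = [abs(flux(rh)) for rh in (0.1, 0.5, 1.0, 2.0, 5.0)]
print(max(residuals))  # at the level of the finite-difference error, i.e. ~0
```

The residual vanishes identically in exact arithmetic, so the printed value only reflects the truncation error of the central difference.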
+ 4.1.3. Composite solution
+ We may use van Dyke’s rule (Van Dyke 1964) to formulate a leading-order composite solution for the
+ solute mass that is valid throughout the drop by combining the leading-order-outer solution (4.5) and the
+ leading-order-inner solution (4.11), finding
+ mcomp(r, t) = mouter(r, t) + Pe2 ˆm0( Pe2(1 − r), t ).    (4.14)
+ 4.2. Comparisons between the numerical and asymptotic results
+ Our asymptotic predictions are compared to numerical simulations of the full advection-diffusion problem for
+ the integrated mass variable given by (2.32)–(2.34). The integrated mass variable is chosen over the solute
+ mass m or the concentration φ since it is better behaved close to the contact line. The numerical procedure
+ requires careful consideration of the thin diffusive boundary layer and we follow a similar approach to that
+ described for the surface tension-dominated problem by Moore et al. (2021). We give a summary of the
+ methodologies in Appendix B.
+ We begin by comparing the asymptotic predictions of the solute mass profiles to numerical solutions in
+ the regime where Bo = O(1). In figure 4, we display asymptotic (dashed, red) and numerical (solid, blue)
+ curves at 10% intervals of the total drying time for Pe = 102, Bo = 1 (a,b) and Pe = 102, Bo = 30 (c,d).
+ In each figure, we see excellent agreement between the simulations of the full system and the leading-order
+ composite solution (4.14). There is a clear formation of the expected coffee ring in the region near the contact
+ line, where solutal diffusion and advection interact. We see that increasing the Bond number in this regime
+ leads to a slight reduction of the size of the coffee ring.
+ This behaviour is reminiscent of the Bo = 0 regime considered previously by Moore et al. (2021). However,
+ in the later stages of the Pe = 102, Bo = 30 example, we see evidence of a qualitative difference in behaviour,
+ with the formation of another peak in the mass profile in the droplet interior (see inset in figure 4(c)).
+ Henceforth, we shall refer to the classical coffee ring as the primary peak and this new feature as the
+ secondary peak. The presence of the secondary peak depends on the Bond number, as there is no secondary
+ peak in any of the profiles when Bo = 1, but it also depends on the drying time, as the peak only develops
+ in the later stages of evaporation when Bo = 30 (between 60% and 70% of the drying time). Noticeably, the
+ secondary peak is significantly smaller in magnitude than the primary peak.
+ For larger Bond numbers, we compare the numerical results to the asymptotic predictions in Appendix
+ A. In figure 5, we display results for Pe = 102, Bo = 105 (α ≈ 0.07) (a,b) and Pe = 103, Bo = 104
+ (α = 1) (c,d). In each case, we display the composite profile for the solute mass given by (A 34). In each
+ figure, we see that after an initial transient the asymptotic predictions and numerical results are again in
+ excellent agreement. Moreover, we see further evidence of the existence of a secondary peak in the case
+
+ Gravity can lead to multiple peaks in the early stages of coffee ring formation
+ Figure 4: Profiles of the solute mass when an axisymmetric droplet evaporates under a diffusive evaporative
+ flux for (a,b) Pe = 102 and Bo = 1; (c,d) Pe = 102 and Bo = 30. In each figure, the bold, black curve
+ represents the initial mass profile, which corresponds to the droplet free surface profile (3.1). We also display
+ plots at time intervals of 0.1 up to t = 0.9, in which solid, blue curves represent the results from the numerical
+ solution of (2.32)–(2.34) and the dashed, red curves show the leading-order composite mass profile, given by
+ (4.14). The right-hand figures display a close-up of the profiles near the contact line. In (c), the inset shows
+ a close-up of the mass profile in the droplet interior at t = 0.9, where we see the clear formation of a secondary
+ peak.
+ Pe = 103, Bo = 104 regime, where the peak appears much earlier and is noticeably larger than that in
+ the previous example (cf. figure 4c, where Pe = 102, Bo = 30). However, we also note again the strong
+ dependence of the secondary peak on Bo and, possibly, Pe, as there is no evidence of such an interior peak
+ when Pe = 102, Bo = 105.
+ These findings prompt us to investigate this new feature more closely, alongside a discussion of how the
+ characteristics of the primary peak — and hence the classical coffee ring — depend on the Bond number.
+ 5. Properties of the two peaks
+ Given the excellent comparisons displayed in the previous section, we seek to use our asymptotic results to
+ investigate properties of the nascent coffee ring and, in particular, the new feature of these moderate-to-large
+ Bond number regimes: the secondary peak.
+ 5.1. Primary peak
+ We shall begin by discussing the effect of the Bond number on the primary peak. As in previous studies
+ of the surface tension-dominated regime, the formation of the primary peak is driven by the competing
+ diffusive and advective solute fluxes (Moore et al. 2021, 2022) and is always present in the large-Pe regime.
+ Furthermore, since all of the features of interest are well within the solutal diffusion boundary layer, we will
+
+ Figure 5: Profiles of the solute mass when an axisymmetric droplet evaporates under a diffusive evaporative
+ flux for (a,b) Pe = 102 and Bo = 105 (α ≈ 0.07); (c,d) Pe = 103 and Bo = 104 (α = 1). In each figure,
+ the bold, black curve represents the initial mass profile (2.34). We also display plots at time intervals of 0.1
+ up to t = 0.9 in which solid, blue curves represent the results from the numerical solution of (2.32)–(2.34)
+ and the dashed, red curves show the composite mass profiles, given by (A 33) for the integrated mass variable
+ and (A 34) for the solute mass, respectively. Note that in (c,d), we can clearly see the development of the
+ secondary peak behind the primary peak.
+ use the inner solution — as discussed in §4.1.2 in the Bo = O(1) regime and §A.2 in the large-Bo regime —
+ to do this.
+ 5.1.1. Bo = O(1) regime
+ When the Bond number is order unity, the analysis is a natural extension of that in Moore et al. (2021,
+ 2022). The local solute profile is dominated by the leading-order inner solution (4.11). Introducing the time-
+ dependent P´eclet number
+ Pet = Pe/(1 − t),    (5.1)
+ the nascent coffee ring profile may be seen to have the similarity form
+ ˆm0(R, t)/(Pe2t N(t; Bo)) = (2χ/(3ψ(Bo))) f( √R, 3, 4χ/ψ(Bo) ),    R = Pe2t (1 − r),    (5.2)
+ where ψ and χ retain their definitions from (3.5) and (3.6) as the initial local contact angle and the coefficient
+ of the evaporative flux singularity, respectively, and f(x, k, l) = lkxk−1e−lx/Γ(k) is the probability density
+ function of a gamma distribution. It is this functional form which describes the characteristic narrow, sharp
+ peak of the coffee ring.
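The gamma-distribution structure can be checked numerically. In the sketch below, the similarity profile is taken as g(R) = (2χ/(3ψ)) f(√R, 3, 4χ/ψ), a reading of (5.2) reconstructed so as to be consistent with the peak location and height quoted later in (5.3); the values ψ = 4/π and χ = √2/π are assumed Bo → 0 limits used purely for illustration.

```python
import math

# Locate the maximum of the similarity profile numerically and compare with the
# closed-form peak location psi**2/(4*chi**2) and height 16*chi**2/(3*e**2*psi**2)
# quoted in (5.3).

def f(x, k, l):
    # probability density function of a gamma distribution, as in the text
    return l**k * x**(k - 1) * math.exp(-l * x) / math.gamma(k)

def g(R, psi, chi):
    # reconstructed similarity profile of the nascent coffee ring
    return 2.0 * chi / (3.0 * psi) * f(math.sqrt(R), 3, 4.0 * chi / psi)

psi, chi = 4.0 / math.pi, math.sqrt(2.0) / math.pi  # illustrative Bo -> 0 values

Rs = [i * 1e-3 for i in range(1, 10001)]            # grid on 0 < R <= 10
R_star = max(Rs, key=lambda R: g(R, psi, chi))

print(R_star, psi**2 / (4.0 * chi**2))              # peak location, ~2 here
print(g(R_star, psi, chi), 16.0 * chi**2 / (3.0 * math.e**2 * psi**2))
```

With these parameter values the peak sits at R = 2, matching the zero-Bond-number location quoted below for figure 7.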
+ Since the definition of R only depends on the time-dependent P´eclet number, we can clearly illustrate the
+
+ Figure 6: The similarity profile (5.2) of the leading-order-inner solute mass profile for Bo = 10−2 (purple),
+ Bo = 10−1 (dark blue), Bo = 1 (light blue), Bo = 10 (green) and Bo = 102 (yellow).
+ effect of gravity by plotting the similarity profile (5.2) for a range of Bond numbers in figure 6. We see that,
+ as the effect of gravity increases, the height of the primary peak decreases, and the peak moves further from
+ the pinned contact line. Moreover, the shape of the primary peak tends towards a shallower, wider profile.
+ Notably, this behaviour is driven purely by changes in ψ(Bo); since, as we saw in figure 3a, the accumulated mass
+ flux into the contact line also decreases with the Bond number, this clearly acts to accentuate the behaviour.
+ We can expand upon these results by finding the leading-order asymptotic predictions of the primary peak
+ height and location, which are given by
+ rpeak,I(t; Bo) = 1 − ψ(Bo)2/(4Pe2t χ2),    mpeak,I(t; Bo) = 16Pe2t N(t; Bo)χ2/(3e2ψ(Bo)2),    (5.3)
+ respectively. Notably, while gravity only influences the location of the primary peak through the initial local
+ contact angle, ψ(Bo), the height depends on gravity through both the contact angle and the accumulated
+ mass flux, N(t; Bo). In particular, referring back to figure 3, this means that gravity has a stronger effect on
+ the peak height than its location.
+ We illustrate the veracity of these asymptotic predictions by comparing them to the corresponding numer-
+ ical results for Pe = 102 and a range of Bond numbers in figure 7. As anticipated from the comparisons of
+ the solute mass profiles, we see excellent agreement between the asymptotic predictions and the numerical
+ results. In particular, in figure 7a, we note that as the influence of gravity increases (i.e. Bo increases), the
+ coffee ring effect is inhibited: although a peak clearly still forms, it is lower for large Bond number at a similar
+ stage of the drying process. This effect varies nonlinearly with time (cf. figure 3a). For example, considering
+ the cases Bo = 1/2 and Bo = 30, after 50% of the drying time, the peak height is reduced by a factor of
+ ≈ 3.97, while at 60% of the drying time, the reduction is a factor of ≈ 3.85 and at 90% of the drying time,
+ it is ≈ 3.63.
+ Similarly, in figure 7b, we see that as the Bond number increases, the location of the primary peak moves
+ further from the contact line, and this shift grows significantly as the Bond number gets larger. For
+ Bo = 1/10, 1/2, 1 the location is almost indistinguishable from the zero-Bond number solution — where
+ Pe2t (1 − r) = 2 (Moore et al. 2022) — but for Bo = 30, this has increased to ≈ 6.79.
+ It is worth noting that in all this analysis, the P´eclet number simply acts to scale the above findings. For
+ a larger P´eclet number, the height of the primary peak increases, while it is located closer to the contact
+ line. This is precisely what is seen for the Bo = 0 regime (Moore et al. 2021).
+ 5.1.2. Large-Bo regime
+ In the large-Bo regime, given the size of the primary peak, we anticipate that the leading-order-inner
+ solution ˜M0(˜r, t) as given by (A 16) should reasonably capture the features of the primary peak. However,
+
+ Figure 7: Numerical (circles) and asymptotic predictions (solid lines) of (a) the height of the primary peak,
+ mpeak,I(t)/Pe2t, and (b) its location Pe2t (1 − rpeak,I(t)) in the Bo = O(1) regime as given by (5.3). For
+ each curve, Pe = 102, while the Bond number varies according to Bo = 1/10 (dark purple), Bo = 1/2 (blue),
+ Bo = 1 (green) and Bo = 30 (yellow).
+ Figure 8: Numerical (circles) and asymptotic predictions (solid lines) of (a) the height of the primary peak,
+ MI(t) = mpeak,I(t)/Pe2/3, and (b) its location ηI(t) = Pe2/3(1 − rpeak,I(t)), as given by (5.8)–(5.9). Results are
+ presented for Pe = 102, Bo = 103 (α ≈ 0.68, yellow), Pe = 103, Bo = 104 (α = 1, green), Pe = 104, Bo = 105
+ (α ≈ 1.47, blue) and Pe = 105, Bo = 105 (α ≈ 6.81, dark purple).
+ unlike its moderate-Bo counterpart, there is no simple similarity form for the solution in this regime, so that
+ we proceed more carefully.
+ We denote the height and location of the primary peak by
+ mpeak,I(t) = Pe2/3 MI(t),    rpeak,I(t) = 1 − Pe−2/3 ηI(t),    (5.4a,b)
+ respectively. By (2.35), the location of the maximum ηI(t) satisfies
+ 0 = ∂2 ˜M0/∂˜r2 (ηI(t), t).    (5.5)
+
+ Utilizing (A 13), we find that
+ ∂2 ˜M0/∂˜r2 (ηI(t), t) = −( ˜u0 − (1/˜h0) ∂˜h0/∂˜r ) ∂ ˜M0/∂˜r |(ηI(t),t) = 0.    (5.6)
+ Since ∂ ˜M0/∂˜r > 0 for ˜r > 0, we conclude
+ ˜u0(ηI(t), t) − (1/˜h0(ηI(t), t)) ∂˜h0/∂˜r (ηI(t), t) = 0,    (5.7)
+ so that
+ ηI(t) = (α/2) W0( (1 − t)2/(4α3) ),    (5.8)
+ where W0(x) is the Lambert-W function (i.e. the solution to w ew = x).
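Since W0 is easy to implement, the following sketch evaluates (5.8) with a simple Newton iteration on the defining identity; the value of α used here is illustrative (in the text α is set by Pe and Bo).

```python
import math

# Evaluate eta_I(t) = (alpha/2) * W0((1-t)**2 / (4*alpha**3)) from (5.8).
# W0 is computed by Newton iteration on w*exp(w) = x for the x >= 0 branch.

def lambert_w0(x, tol=1e-14):
    w = math.log(1.0 + x)  # reasonable starting guess for x >= 0
    for _ in range(100):
        e = math.exp(w)
        w_next = w - (w * e - x) / (e * (1.0 + w))
        if abs(w_next - w) < tol:
            return w_next
        w = w_next
    return w

def eta_I(t, alpha):
    return 0.5 * alpha * lambert_w0((1.0 - t) ** 2 / (4.0 * alpha ** 3))

alpha = 1.0  # illustrative value
w = lambert_w0(2.0)
print(w * math.exp(w))  # ~2, confirming the defining identity
print([round(eta_I(t, alpha), 5) for t in (0.0, 0.5, 0.9)])
```

The printed sequence decreases with t, consistent with the observation below that ηI(t) decreases as the droplet evaporates.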
+ With ηI(t) in hand, the corresponding height of the ring at the peak is then given by
+ MI(t) = −B0(t)/I(ηI(t), t) = ( N(t)/I(ηI(t), t) ) ( ∫0∞ I(s, t)−1 ds )−1,    (5.9)
+ where I(r, t) is given by (A 15) and N(t) is the leading-order accumulated mass flux into the boundary layer
+ (A 18). Note that, in this regime, N(t) is independent of α and, hence, the Bond number, but the function
+ I(r, t) does change with α.
+ In figure 8, we plot the asymptotic predictions of the location and height of the primary deposit peak
+ against the simulation results for a range of different P´eclet and Bond numbers (and, correspondingly, α).
+ There are several discernible features. After an initial transient, the location of the peak is captured extremely
+ well by the asymptotic prediction (5.8) for each case presented. This initial transient is primarily due to the
+ lack of a distinct peak at early stages of the drying process; a period of time is necessary for sufficient solute
+ to be advected to the contact line. This process takes longer for smaller P´eclet numbers, i.e. when diffusion
+ is relatively stronger. The height of the primary peak is captured quite well by the asymptotic prediction
+ (5.9), particularly for larger P´eclet numbers and as time increases. It is worth noting that the error in the
+ approximation of the height is O(Pe1/3), so for an improved estimation of the primary peak height, it would
+ be necessary to consider the first two inner solutions ˜M0(˜r, t) and ˜M1(˜r, t). While this is possible, the results
+ do not have a simple analytic form, so are not practical to work with. We also note that, as the droplet
+ evaporates, the primary peak both increases in size and moves closer to the contact line, i.e. MI(t) increases
+ and ηI(t) decreases as t increases.
+ 5.2. Secondary peak
+ As evidenced by the solute mass profiles, the behaviour of the secondary peak — and indeed, even its presence
+ — is more complex than that of the primary peak, which always forms in the large-Pe regime. We have seen,
+ for example in figure 4 in the Bo = O(1) regime, that the presence of the peak varies with both Bo and
+ drying time, while when Bo ≫ 1, we have also seen variation with Pe (and hence α), see for example figure
+ 5. This gives a clear indication that we need to treat this feature more carefully.
+ To begin, we will consider whether or not the secondary peak is present. We shall first fix the P´eclet
+ number and use the numerical results to produce a regime diagram in (Bo, t)-parameter space indicating
+ whether one or two peaks are present in the solute mass profile. We note here that these are the only options
+ that we have been able to find — we have found no instances of more than two peaks appearing.
+ We show the results for Pe = 102 in figure 9a. In the figure, solute profiles with one peak — i.e. only the
+ classical coffee ring — are denoted by blue circles, while solute profiles exhibiting two peaks are denoted
+ by red circles. We see a strong nonlinear dependence on both Bond number and dryout time. In particular,
+ there is a band of Bond numbers between around Bo ≈ 10 and Bo ≈ 30000 that may lead to secondary peak
+ formation, although the existence of a peak also depends strongly on t for a fixed Bond number. We note
+ that for Bo ≲ 10, there is only one peak for any t, in agreement with the classical Bo = 0 regime. Moreover,
+ for very large Bond number Bo ≳ 30000, again we see that there is only one peak.
+ We illustrate the effect of the P´eclet number by plotting the equivalent regime diagram for Pe = 103
+ in figure 9b. Remarkably, the onset of the secondary peak appears to be unaffected by the increase of the
+ P´eclet number, although the band of Bond numbers for which we see two peaks is significantly widened
+ towards larger Bo. Notably, however, the shape of the curve delineating between two peaks and one peak
+ for large Bond number appears to be independent of Pe; only its location has shifted.
+
+ Figure 9: (Bo, t)-regime diagram illustrating the presence of either one (blue circles) or two (red circles)
+ peaks in the solute mass profile for (a) Pe = 102 and (b) Pe = 103. The data is extracted from the numerical
+ simulations and demonstrates a clear band of Bond numbers for which two peaks may exist in the profile.
+ In each figure, the black curve denotes the asymptotic prediction of when the centre of the droplet changes
+ from a maximum to a minimum as given by (5.21).
+
+ Figure 10: Solute profiles for an evaporating droplet with Pe = 102 and Bo = 20. The deposit profile is
+ displayed on a doubly-logarithmic plot at 25% (a), 35% (b) and 75% (c) of the drying time in order to
+ catch the emergence of the secondary peak. In each of (a)–(c), the primary peak is indicated by a red circle,
+ while the secondary peak is indicated by a black circle (when it exists).
+ 5.2.1. Onset of the secondary peak
+ In this section, we seek to investigate some of the phenomena around the onset of the secondary peak in
+ more detail. We saw that for a fixed P´eclet number, there was a distinct switch from one to two peaks for
+ Bond number Bo ≈ 10 and that this switch appears to be independent of Pe. This suggests that secondary
+ peak formation is not a result of the interplay between solutal advection and diffusion that drives the classical
+ coffee ring.
+ In order to investigate the reasons behind the presence or absence of a secondary peak, in figure 10, we
+ plot numerical results for the solute profiles in a droplet with Pe = 102, Bo = 20 at 25%, 35% and 75% of
+ the drying time. In the figure, the primary and secondary peaks are indicated by the red and black circles,
+ respectively. We clearly see in figure 10a that at 25% of the drying time there is only one peak, but by 35%
+ of the drying time, the secondary peak has emerged close to the droplet centre. As the droplet evaporates
+ further to 75% of the drying time, the secondary peak has moved further towards the droplet contact line.
+ This particular example gives us a strong indication that the secondary peak initially arises from the centre
+ of the drop and, in particular, appears to be linked with a transition from the centre being a maximum in
+ the solute mass profile — as it is for the classical coffee ring of Deegan et al. (1997, 2000) — to a minimum.
+ To investigate this postulate, we consider the behaviour close to the droplet centre. To simplify things,
+ since the initial emergence of the secondary peak appears to be independent of the P´eclet number, we neglect
+ solutal diffusion completely, taking Pe = ∞, so that the solute mass m satisfies the first-order semi-linear
+ equation
+ ∂m/∂t + (1/(4r)) ∂(rmu)/∂r = 0,    m(r, 0) = h(r, 0),    (5.10)
+ where, since the emergence appears to be rooted in the region where Bo ≈ 10, we consider the moderate
+ Bond number regime and retain the full expressions for the droplet free surface h and fluid velocity u given
+ by (3.1)–(3.2).
+ We seek an asymptotic solution of (5.10) as r → 0. First, we note that for small arguments, the free surface
+ and velocity have the following asymptotic expansions:
+ h(r, t) ∼ (1 − t)[ H0(Bo) + H1(Bo)r2 + o(r2) ],    (5.11)
+ u(r, t) ∼ (1/(1 − t))[ U0(Bo)r + U1(Bo)r3 + o(r3) ],    (5.12)
+ as r → 0, where
+ H0(Bo) = (I0(√Bo) − 1)/(πI2(√Bo)),    (5.13)
+ H1(Bo) = −Bo/(4πI2(√Bo)),    (5.14)
+ U0(Bo) = (2√Bo − √Bo I0(√Bo) − 2I1(√Bo))/(√Bo(1 − I0(√Bo))),    (5.15)
+ U1(Bo) = −(Bo3/2 − √Bo I0(√Bo) + √Bo I0(√Bo)2 + 2I1(√Bo) − 2Bo I1(√Bo) − 2I0(√Bo)I1(√Bo))/(4√Bo(1 − I0(√Bo))2).    (5.16)
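These coefficients are straightforward to evaluate numerically. The sketch below uses the power series I_n(x) = Σ_k (x/2)^(2k+n)/(k!(k+n)!) and spot-checks the sign conditions quoted below, together with the small-Bo limit H0 → 2/π (the centre height of a unit-volume spherical cap); the limit U0 → 3/2 is inferred here from the series expansion of (5.15) in this scaling and is an assumption to be checked against (3.2).

```python
import math

# Evaluate H0, H1, U0, U1 of (5.13)-(5.16) via the modified Bessel series.

def besseli(n, x):
    # series with a recursive term update to avoid huge factorials
    term = (0.5 * x) ** n / math.factorial(n)
    total = term
    for k in range(1, 500):
        term *= 0.25 * x * x / (k * (k + n))
        total += term
        if term < total * 1e-17:
            break
    return total

def coeffs(Bo):
    s = math.sqrt(Bo)
    I0, I1, I2 = besseli(0, s), besseli(1, s), besseli(2, s)
    H0 = (I0 - 1.0) / (math.pi * I2)
    H1 = -Bo / (4.0 * math.pi * I2)
    U0 = (2.0 * s - s * I0 - 2.0 * I1) / (s * (1.0 - I0))
    U1 = -(Bo ** 1.5 - s * I0 + s * I0 ** 2 + 2.0 * I1
           - 2.0 * Bo * I1 - 2.0 * I0 * I1) / (4.0 * s * (1.0 - I0) ** 2)
    return H0, H1, U0, U1

for Bo in (0.01, 1.0, 30.0, 100.0):
    H0, H1, U0, U1 = coeffs(Bo)
    # sign conditions used in the text: each line should read Bo True True True
    print(Bo, H0 > 0.0, U0 > 0.0, 2.0 * U1 * H0 / U0 + H1 < 0.0)

print(coeffs(0.01)[0], 2.0 / math.pi)  # H0 approaches 2/pi as Bo -> 0
```

The recursive term update keeps every intermediate quantity within floating-point range even for the large arguments needed below.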
+ Now, by the symmetry of the problem, the droplet centre must be a critical point, so we seek a solution
+ of the form m = m0(t) + m1(t)r2 + o(r2) as r → 0. Upon substituting this ansatz and the above forms for h
+ and u into (5.10), straightforward calculation yields
+ m0(t) = H0(1 − t)U0/2,    (5.17)
+ m1(t) = (2U1H0/U0 + H1)(1 − t)U0 − (2U1H0/U0)(1 − t)U0/2.    (5.18)
+ Hence, given that initially the droplet has a maximum at its centre for any Bo, we deduce that the
+ maximum becomes a minimum at the critical time tc such that
+ m1(tc) = 0.    (5.19)
+ Since 2U1H0/U0 + H1 < 0, H0 > 0, U0 > 0 for all Bo, (5.19) only has solutions for Bo > Boc where
+ U1(Boc) = 0    =⇒    Boc ≈ 15.21.    (5.20)
+ When Bo > Boc, we may solve (5.19) explicitly to find
+ tc(Bo) = 1 − ( 2U1(Bo)H0(Bo)/(2U1(Bo)H0(Bo) + H1(Bo)U0(Bo)) )2/U0(Bo).    (5.21)
+ This critical curve is displayed as the solid black curve in figure 9 and we see that there is excellent
+ agreement between this prediction and the transition from one to two peaks. But what is causing the
+ transition? Since the phenomenon is independent of the P´eclet number, it is purely a result of the droplet
+ geometry and the evaporation-driven flow. In particular, we note that the critical Bond number Boc given by
+ (5.20) is linked to the change in sign of U1, which is equivalent to requiring that (1 − t)∇ · (uer) is decreasing
+ near r = 0. This correlates with the profiles of the divergence of u displayed in figure 2c, where we see this
+ change in sign clearly as the Bond number increases.
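Both quoted critical values can be reproduced directly from (5.13)-(5.16), (5.20) and (5.21); a minimal sketch (the bisection bracket [10, 20] is chosen by inspection of the sign change of U1):

```python
import math

# Reproduce Bo_c ~ 15.21 from U1(Bo_c) = 0 in (5.20), and t_c(Bo = 10^3) from
# (5.21), evaluating the modified Bessel functions by their power series.

def besseli(n, x):
    term = (0.5 * x) ** n / math.factorial(n)
    total = term
    for k in range(1, 500):
        term *= 0.25 * x * x / (k * (k + n))
        total += term
        if term < total * 1e-17:
            break
    return total

def U1(Bo):
    s = math.sqrt(Bo)
    I0, I1 = besseli(0, s), besseli(1, s)
    return -(Bo ** 1.5 - s * I0 + s * I0 ** 2 + 2.0 * I1
             - 2.0 * Bo * I1 - 2.0 * I0 * I1) / (4.0 * s * (1.0 - I0) ** 2)

def t_c(Bo):
    s = math.sqrt(Bo)
    I0, I1, I2 = besseli(0, s), besseli(1, s), besseli(2, s)
    H0 = (I0 - 1.0) / (math.pi * I2)
    H1 = -Bo / (4.0 * math.pi * I2)
    U0 = (2.0 * s - s * I0 - 2.0 * I1) / (s * (1.0 - I0))
    ratio = 2.0 * U1(Bo) * H0 / (2.0 * U1(Bo) * H0 + H1 * U0)
    return 1.0 - ratio ** (2.0 / U0)

lo, hi = 10.0, 20.0  # U1 changes sign once in this bracket
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if U1(mid) > 0.0 else (lo, mid)
Bo_c = 0.5 * (lo + hi)

print(Bo_c)         # close to the quoted value 15.21
print(t_c(1000.0))  # close to the quoted value 2.8e-10, cf. the discussion of figure 11
```

This provides an independent check that the critical time for Bo = 10^3 is so small that the centre of the droplet becomes a minimum almost immediately.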
+ Notably, considering the curve displayed in figure 9, we see that for Bo close to Boc, the secondary peak only
+ emerges very late in the dryout process, but as the Bond number increases, it appears almost instantaneously.
+ Hence, from this analysis alone, we might expect there to always be two peaks for Bo > Boc, but clearly this
+ is not the case. We now investigate why in more detail.
+ 5.2.2. Loss of the secondary peak
+ Given its clear variation with each of t, Bo and Pe, it is perhaps unsurprising that it is more challenging to
+ determine an analytical expression for the location of the right-hand boundary between two peaks and one
+ peak in figure 9, and unfortunately we have been unable to do so. However, it is relatively straightforward
+ to illustrate why the transition occurs by considering a specific example.
+ In figure 11, we plot solute mass profiles for Pe = 102 and Bo = 103 at 5%, 20%, 50% and 90% of the
+ drying time, indicating the primary and secondary peaks by red and black circles where appropriate. Note
+ that, for such a large Bond number, the critical time at which we would expect a secondary peak to be
+ present may be found from (5.21) to be tc ≈ 2.8 × 10−10. We see in figure 11a that, indeed, after 5% of the
+ drying time, the secondary peak has emerged and is visible close to the droplet centre — moreover, at this
+ stage, the primary peak associated with the coffee ring has yet to fully develop (so that the ‘one peak’ at
+ this stage in figure 9a is in fact the secondary peak!). However, by the time we reach 20% of the drying time,
+ both peaks are clearly visible, with the primary peak now approximately 50% larger than the secondary
+ peak.
+
+ Figure 11: Solute profiles for an evaporating droplet with Pe = 102 and Bo = 103 displayed on a doubly-
+ logarithmic plot at 5% (a), 20% (b), 50% (c) and 90% (d) of the drying time. In each figure, the primary
+ peak is indicated by a red circle, while the secondary peak is indicated by a black circle when either exists.
+ Increasing time further, we see that the primary peak continues to grow rapidly so that, by 50% of the
+ drying time, it is so large that it has subsumed the secondary peak into its upstream tail. That is, the
+ secondary peak is still present according to the Pe = ∞ theory, but due to the fact that Pe is actually finite
+ and the corresponding presence of the classical coffee ring, we do not see the secondary peak.
+ If we then increase t even further, we see that by 90% of the drying time, the secondary peak has re-emerged
+ from the lee of the primary peak. By this stage of the evaporation process, the primary peak has moved
+ significantly closer to the contact line — here 1 − rpeak,I ≈ 1.4 × 10−4, while the secondary peak is located
+ at 1 − r ≈ 4.8 × 10−2, so that it is sufficiently far behind the primary peak to be visible.
+ Thus, the loss of the secondary peak appears to be intrinsically tied to the location, size and shape of
+ the primary peak. Given that this behaviour largely occurs in the regime in which Bo ≫ 1, these properties
+ of the primary peak are given by (5.8), (5.9) and the derivative of (A 16), respectively. Clearly, therefore, the
+ behaviour is strongly dependent on t, Bo and Pe (cf. figure 8, for example).
+ 5.2.3. Properties of the secondary peak
+ Given its dependence on the various parameters of the model, discerning the properties of the secondary
+ peak analytically is challenging, particularly in the Bo = O(1) regime since, in this case, the peak tends to
+ be situated in the droplet bulk, so that we are unable to use the simpler forms of the inner solution described
+ in §4.1.2.
+ Hence, we utilize the numerical results to track the height mpeak,II(t) and location rpeak,II(t) of the
+ secondary peak when it exists and we display the results for several different values of Pe, Bo in figure
+ 12. In the figure, results for Pe = 102 and Pe = 103 are denoted by circles and squares, respectively. The
+ Bond number is represented by the colour, with results for Bo = 20 (purple), 50 (dark blue), 100 (light
+ blue) and 1000 (green). It is evident that for each of the Bond numbers represented, increasing the P´eclet
+ number appears to have negligible effect on both the size and location of the secondary peak. However, both
+
Figure 12: Numerical predictions of (a) the height of the secondary peak, m_peak,II(t), and (b) its location r_peak,II(t) for different values of Pe, Bo. The symbols denote different Péclet numbers: Pe = 100 (circles), Pe = 1000 (squares); while the colours denote different Bond numbers: Bo = 20 (purple), Bo = 50 (dark blue), Bo = 100 (light blue), Bo = 1000 (green).
properties do vary with the Bond number. In particular, as the Bond number increases, the secondary peak is situated closer to the contact line at the same stage of the drying process and, similarly, for a fixed Bond number, the peak gets closer to the contact line as the droplet evaporates. On the other hand, the variation of the secondary peak height with Bo is less straightforward, although for all of the displayed results, we see that the height of the secondary peak decreases as the droplet evaporates. This is in stark contrast to the primary peak, which always grows as more solute is transported to the contact line.
Thus, we conclude that the secondary peak is predominantly driven by the Bond number. Indeed, it is only for sufficiently large Bond numbers that we find a second peak at all, and the properties of that peak then depend strongly on the size of Bo. The only role played by the Péclet number appears to be in the disappearance of the secondary peak when it is subsumed by the primary peak, which is typically orders of magnitude larger and always closer to the contact line.
6. Summary and discussion
In this paper, we have performed a detailed asymptotic and numerical analysis of the effect of gravity on the famous coffee ring phenomenon observed in solute-laden droplets. In the physically-relevant limit of small droplet capillary number, Ca ≪ 1, and large solutal Péclet number, Pe ≫ 1, we identified two asymptotic regimes based on the size of the Bond number, Bo:
i) a moderate Bond number regime, where Bo = O(1);
ii) a large Bond number regime, Bo ≫ 1.
In the first of these regimes, gravity acts to flatten the droplet profile from the spherical cap of the zero-gravity problem, while reducing the liquid velocity. Moreover, the asymptotic structure of the solute transport follows exactly that presented by Moore et al. (2021) for surface tension-dominated droplets, with advection dominating in the droplet bulk, while the competition between advection and diffusion in a boundary layer of width O(Pe^{-2}) near the pinned contact line drives the nascent coffee ring. Gravity acts to modify the surface tension-dominated solution both through the accumulated mass flux of solute into the contact line and a parameter dependent on the local contact angle. In particular, as the Bond number increases, the height of the nascent coffee ring is reduced, which is consistent with the reduced flow velocity as Bo is increased. Moreover, the peak is situated further from the contact line.
Gravity can lead to multiple peaks in the early stages of coffee ring formation

To categorize the role of gravity more explicitly, we derived an approximate similarity profile, m̂0, for the nascent coffee ring profile, given by
m̂0(R, t)/( Pe_t^2 N(t; Bo) ) = √(3χψ(Bo)) f( √R, 3, √(χψ(Bo)) ),    R = Pe_t^2 (1 − r),    (6.1)
where Pe_t = Pe/(1 − t) is the time-dependent Péclet number; N(t; Bo) is the accumulated mass flux of solute at the contact line from the droplet bulk; χ is the coefficient of the inverse square root singularity in the evaporative flux at the contact line; ψ(Bo) is the leading-order initial local contact angle; and f(x, k, l) = l^k x^{k−1} e^{−lx}/Γ(k) is the probability density function of a gamma distribution. Clearly, the Bond number acts to scale the coffee ring profile through the accumulated mass flux, while it acts to change the shape of the profile through the initial contact angle ψ(Bo).
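The gamma-distribution form of (6.1) is easy to explore numerically. The sketch below (with an illustrative shape k = 3 as in (6.1), and a purely illustrative rate l = 2 standing in for the Bo-dependent rate parameter) checks that f integrates to one and that the profile in the similarity variable x = √R peaks at the gamma mode x = (k − 1)/l:

```python
import math

def gamma_pdf(x, k, l):
    # f(x, k, l) = l^k x^(k-1) exp(-l x) / Gamma(k), the gamma pdf appearing in (6.1)
    return l**k * x**(k - 1) * math.exp(-l * x) / math.gamma(k)

# midpoint-rule check that f integrates to one on [0, infinity)
dx = 1e-3
total = sum(gamma_pdf((i + 0.5) * dx, 3, 2) * dx for i in range(40000))

# the mode sits at x = (k - 1)/l, so the similarity profile peaks at R = ((k - 1)/l)^2
xs = [(i + 1) * dx for i in range(10000)]
mode = max(xs, key=lambda x: gamma_pdf(x, 3, 2))
```

Since the rate parameter in (6.1) depends on Bo, shifting Bo moves the mode of f and hence the location of the nascent coffee ring peak in the similarity variable.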
In the second regime, the Bond number is large, so that the droplet is approximately flat, with surface tension confined to a narrow region near the pinned contact line: a 'pancake' or 'puddle' droplet. Thus, the asymptotic analysis discussed above is no longer valid, since there are two competing boundary layers near the edge of the droplet: the diffusion boundary layer in the solute transport and the surface tension boundary layer in the droplet free surface profile (and, hence, the liquid velocity). We derived the resulting solute distribution in the most general regime in which the two boundary layers are comparable, which reduces to the assumption that α = Bo^{-1/2} Pe^{2/3} = O(1). Under this assumption, diffusion and advection balance in a region of size Pe^{-2/3} near the contact line, noticeably larger than in the moderate gravity regime. This is a further indication of gravity acting to shift the coffee ring further from the contact line; moreover, it tends to cause shallower solute profiles in the boundary layer region.
The asymptotic analysis in the large-Bond number regime is more challenging than that in the moderate Bond number regime and, in particular, the nascent coffee ring no longer collapses onto an approximate similarity profile. However, we were able to derive expressions for the location (5.8) and height (5.9) of the peak, demonstrating that it still depends strongly on the accumulated mass flux of solute into the contact line alongside the parameter α. In particular, increasing α leads to higher coffee rings that are located closer to the contact line.
In each regime, we demonstrated that our asymptotic predictions were in excellent agreement with numerical simulations of the full advection-diffusion problem for the solute mass distribution.
Alongside the anticipated nascent coffee ring driven by the competition between advection and diffusion of the solute, our asymptotic and numerical analysis also revealed a novel phenomenon: the solute profile may have a secondary peak. The secondary peak is characterized by being situated upstream of, and significantly smaller than, the primary coffee ring. Moreover, the presence of this peak strongly depends on the Bond number, Péclet number and evaporation time.
Further investigation revealed that, for a fixed Péclet number, there exists a band in (Bo, t)-space in which two peaks are present in the profile. We demonstrated that the onset of this band is independent of the Péclet number and is caused by the critical point at the centre of the droplet changing in type from a maximum (as in the spherical cap droplet in the Bo = 0 regime) to a minimum. When the critical point at the droplet centre changes type, an internal maximum forms downstream of the centre, and it is this that corresponds to the secondary peak. This behaviour only occurs above a critical Bond number, Bo_c ≈ 15.21, and then only after a given drying time, given by
t_c(Bo) = 1 − [ 2U1(Bo)H0(Bo) / ( 2U1(Bo)H0(Bo) + H1(Bo)U0(Bo) ) ]^{2/U0(Bo)}.    (6.2)
In particular, as Bo increases, t_c decreases, so the secondary peak emerges earlier in the evaporative process. These predictions were shown to be in excellent agreement with the numerical results and, remarkably, are independent of the Péclet number.
However, the above analysis suggests that a secondary peak exists for all Bo > Bo_c and t > t_c, something that is not always borne out by our numerical results. The reason for this discrepancy was shown to be the presence of the primary peak. In particular, as time increases, the secondary peak is located further from the droplet centre, so that it may be subsumed in the tail of the primary peak. For a fixed Bond number, this possibility was shown to depend strongly on both the Péclet number and the evaporation time; this is due to the fact that the size of the primary peak increases with both t and Pe, while the size of the secondary peak only varies with t.
Beyond this subsuming effect, however, we were able to demonstrate that the Péclet number plays a negligible role in the size and location of the secondary peak for a range of Bond numbers, suggesting that this feature may be reliably controlled simply by altering Bo.
In previous studies of coffee ring formation (e.g. Deegan et al. (2000); Popov (2005); Moore et al. (2021)), gravity has frequently been neglected under the assumption of small Bond number, which is reasonable for sufficiently small droplets. However, given that the Bond number may be increased in an experimental or industrial setting by steadily increasing the droplet radius, the influence of gravity may be of fundamental interest in applications that utilize droplet drying to adaptively control the shape of the residual deposit, such as colloidal patterning (Harris et al. 2007; Choi et al. 2010) and fabrication techniques using inkjet printing (Layani et al. 2009). Our analysis thus plays a dual role in the field. First, we have presented the first formal categorization of the role of gravity in the early stages of coffee ring formation and given a quantitative prediction of the resulting features of the solute profile. Second, we have found a novel phenomenon, the secondary peak, which may also be exploited in such processes, particularly when the size of the primary peak can be carefully controlled. This is particularly relevant given that the secondary peak emerges at a relatively moderate critical Bond number.
There are, naturally, limitations to our analysis. Throughout, we have assumed that the contact line remains pinned as the droplet evaporates. This has been shown to be a reasonable assumption for many configurations (see, for example, the experiments in Deegan et al. (2000); Kajiya et al. (2008); Howard et al. (2023)) and may further be enhanced by solute aggregation near the edge of the droplet (Orejon et al. 2011; Weon & Je 2013; Larson 2014). However, at late stages of the evaporation, the contact line may depin and become mobile, moving inwards towards the droplet centre. The contact line may then become pinned at a new location and the process may repeat. This behaviour is known as 'stick-slip' evaporation and represents an important class within the field that is beyond the scope of the present study, but it may represent an interesting future direction in terms of the effect of gravity, particularly given the presence of the secondary peak and its associated increased solute mass, which may promote re-pinning.
Another effect that we have neglected in the present analysis is the possibility of solute becoming trapped at the free surface of the droplet. If this occurs, the solute is then transported to the contact line along the free surface, and this has been suggested as an alternative mechanism for coffee ring formation (Kang et al. 2016). This behaviour has been demonstrated to occur for a wide variety of droplets, but it is more pronounced for droplets with large contact angles (Kang et al. 2016) or when vertical diffusion happens over a longer timescale than evaporation (D'Ambrosio 2022). Since we deal with the opposite case of a thin drop with fast vertical diffusion (i.e. so that the solute concentration may be assumed to be uniform across the droplet to leading order), we have not considered this phenomenon here. It would be interesting to see how such behaviour impacts the solute profile in the current case, although it should be noted that the aforementioned studies neglect gravity entirely.
A further aspect that would form the basis of an exciting future study surrounds the assumption that the solute is dilute in the droplet. Naturally, the build up of the solute in the coffee ring means that the concentration rapidly approaches levels where finite particle size effects can no longer be ignored. This has been analysed in detail for surface tension-dominated droplets in Moore et al. (2021, 2022) and a similar analysis would follow here with the inclusion of gravity. One possible aspect that would differentiate droplets where gravity is included is in the vicinity of the secondary peak. It is an interesting open question as to whether the dilute assumption may also break down in the vicinity of the secondary peak as well as the primary. Once finite particle size effects become important, there are a number of different approaches that can be followed to continue the analysis, such as a sharp transition between a dilute and a jammed region (Popov 2005), using a viscosity and solute diffusivity that vary with concentration (Kaplan & Mahadevan 2015) or through more complicated two-phase suspension models (see, for example, Guazzelli & Pouliquen 2018).
Our analysis has concentrated on a diffusive evaporative model, while there are many situations where other evaporative models may be appropriate. Examples include water evaporating on glass, which may more appropriately be modelled using a kinetic evaporative model (Murisic & Kondic 2011), droplets evaporating above a bath of the same liquid, where the evaporation is effectively constant (Boulogne et al. 2016), and situations where a mask is used to control the evaporative flux so that it is stronger towards the centre (Vodolazskaya et al. 2017). The analysis herein could readily be extended to such situations, although we note that for evaporative fluxes with different, including non-singular, behaviour close to the contact line, the size of the boundary layer regions near the contact line in which solutal diffusion and surface tension are relevant will change accordingly, as for surface tension-dominated droplets (Moore et al. 2021).
A future direction of interest would be to extend the analysis herein to non-axisymmetric droplets. Such droplets occur widely in applications, particularly in printing OLED/AMOLED screens (see, for example, Mai & Richerzhagen (2007); Huo et al. (2020)). It is well-known that droplet geometry plays a strong role in the behaviour of the evaporative flux (Sáenz et al. (2017); Wray & Moore (2023)) and the transient and final deposit profiles (Freed-Brown (2015); Sáenz et al. (2017); Moore et al. (2022)). It would be of significant theoretical and practical interest to explore the behaviour of the secondary peak in such problems as well.
Finally, we note that another context in which gravity may play an important role is that of binary/multi-component droplets, particularly in situations where the different fluids have different densities. Multi-component droplets occur widely, from commercial alcohols such as whiskey and ouzo (Tan et al. (2019); Carrithers et al. (2020)) to various inks (Shargaieva et al. (2020)). While it would certainly be of interest to extend the analysis presented here to such droplets, a careful treatment of the internal flow would be needed, as the multi-component nature of the droplet significantly complicates the dynamics (Li et al. (2019)).
Acknowledgments. MRM would like to acknowledge the support of EPSRC grant EP/X035646/1.
Declaration of Interests. The authors report no conflict of interests.
Appendix A. Matched asymptotic analysis in the limit of large Bo, α = O(1)
In this appendix, we present the asymptotic solution of the solute transport problem in the limit in which Bo, Pe ≫ 1 and
α = Bo^{-1/2} Pe^{2/3} = O(1).    (A 1)
For convenience, we choose to use Pe^{-2/3} as our small parameter in the asymptotic expansions. Moreover, it transpires that it is easier to analyse the integrated mass variable formulation of the problem (2.32)–(2.35).
A.1. Outer region
In the droplet bulk, 1 − r is O(1), and we recall from (3.9) that the droplet free surface h is flat to all orders and that the velocity is given by (3.10). Upon substituting these expressions into (2.32) and (2.34), and then
expanding M(r, t) = M0(r, t) + Pe^{-2/3} M1(r, t) + O(Pe^{-4/3}) as Pe → ∞, we find, to leading order,
∂M0/∂t + (u0/4) ∂M0/∂r = 0 for 0 < r, t < 1,    M0(r, 0) = r^2/(2π) for 0 < r < 1.    (A 2a, b)
This may be solved using the method of characteristics, yielding
M0(r, t) = (1 − t) r^2/(2π) + ( √(1 − t)(1 − √(1 − t))/π )( 1 − √(1 − r^2) ).    (A 3)
We see that this solution automatically satisfies the boundary condition (2.33a).
At O(Pe^{-2/3}), the problem for M1(r, t) is given by
∂M1/∂t + (u0/4) ∂M1/∂r = −(α u1/4) ∂M0/∂r for 0 < r < 1, 0 < t < 1,    M1(r, 0) = α r^2/π for 0 < r < 1.    (A 4a, b)
This may be solved in a similar manner using the method of characteristics, yielding
M1(r, t) = ( 2ακ(r, t)/π )( 1 − κ(r, t) ) log[ ( √(1 − t) − κ(r, t) )/( 1 − κ(r, t) ) ] + (α/π)( 1 − (1 − κ(r, t))^2 ),    (A 5)
where κ(r, t) = √(1 − t)( 1 − √(1 − r^2) ).
Expanding the leading-order solution (A 3) as we approach the contact line, we have
M0(r, t) ∼ √(1 − t)/π − (1 − t)/(2π) − ( √(2(1 − t))(1 − √(1 − t))/π ) √(1 − r) − ( (1 − t)/π )(1 − r) + O((1 − r)^{3/2})    (A 6)
as r → 1⁻. Notably, this means that the leading-order outer solute mass m0 is singular at the contact line, which gives a strong indication of the importance of diffusive effects local to the edge of the droplet. This is in stark contrast to the Bo = O(1) solution, where the outer solute mass is square-root bounded as r → 1⁻.
A similar expansion of (A 5) yields
M1(r, t) ∼ ( α√(1 − t)(1 − √(1 − t))/π ) log(1 − r) + ( α√(1 − t)(1 − √(1 − t))/π ) log( 2(1 − t)/(1 − √(1 − t))^2 ) + (α/π)( 1 − (1 − √(1 − t))^2 ) + O( √(1 − r) log(1 − r) )    (A 7)
as r → 1⁻. We can clearly see that this will necessitate an inner expansion that contains logarithmic terms; similar behaviour is displayed for surface tension-dominated drops under different evaporative fluxes (Moore et al. 2021).
Finally, if we expand the solute mass m ∼ m0 as Pe → ∞ in (2.35), we find
m0(r, t) = ( √(1 − t)/( π√(1 − r^2) ) )[ 1 − √(1 − t)( 1 − √(1 − r^2) ) ].    (A 8)
Whilst we could proceed to O(Pe^{-2/3}) in the solute mass expansion in the outer region, we shall not require this when constructing a composite profile that is valid to O(1) throughout the droplet, so we do not present it here.
A.2. Inner region
Recalling (3.11)–(3.12), (4.2) and (4.3), in order to retain a balance between the advective and diffusive effects in (2.32) close to the contact line, we set
r = 1 − Pe^{-2/3} r̃,    u = Pe^{-1/3} ũ,    h = h̃,    M = M̃,    m = Pe^{2/3} m̃    (A 9)
in (2.32)–(2.35). Note that we therefore have
h̃ = h̃0 + Pe^{-2/3} h̃1 + O(Pe^{-4/3}),    ũ = ũ0 + Pe^{-1/3} ũ1 + Pe^{-2/3} ũ2 + O(Pe^{-1})    (A 10)
as Pe → ∞, where
h̃0(r̃, t) = h̄0(r̃/α, t),    h̃1(r̃, t) = α h̄1(r̃/α, t),    (A 11)
with h̄0, h̄1 given by (3.13)–(3.14), and
ũ0(r̃, t) = √α ū0(r̃/α, t),    ũ1(r̃, t) = α ū1(r̃/α, t),    ũ2(r̃, t) = α^{3/2} ū2(r̃/α, t),    (A 12)
with ū0, ū1, ū2 given by (3.15)–(3.17).
Seeking an asymptotic expansion of the integrated mass of the form M̃ = M̃0 + Pe^{-1/3} M̃1 + Pe^{-2/3} log Pe^{-2/3} M̃2 + Pe^{-2/3} M̃3 + o(Pe^{-2/3}) as Pe → ∞, we find that the leading-order inner problem is given by
∂²M̃0/∂r̃² + ( ũ0 − (1/h̃0) ∂h̃0/∂r̃ ) ∂M̃0/∂r̃ = 0 for r̃ > 0, 0 < t < 1,    (A 13)
subject to the boundary condition M̃0(0, t) = 1/(2π) for 0 < t < 1 and, in order to match with the local expansion of the leading-order outer solution at the contact line (A 6), we must have
M̃0 → √(1 − t)/π − (1 − t)/(2π) as r̃ → ∞.    (A 14)
+ (A 14)
1938
+ Defining the integrating factor
1939
+ I(˜r, t) =
1940
+
1941
+ 1
1942
+ 1 − e−˜r/α
1943
+
1944
+ exp
1945
+
1946
+ 2
1947
+
1948
+ 2
1949
+ (1 − t)
1950
+ � ˜r
1951
+ 0
1952
+ √ξ
1953
+ 1 − e−ξ/α dξ
1954
+
1955
+ ,
1956
+ (A 15)
1957
we find that the solution is given by
M̃0(r̃, t) = 1/(2π) + B0(t) ∫_0^r̃ ds/I(s, t),    (A 16)
where
B0(t) = −(1/π)( 1 − √(1 − t) − t/2 ) [ ∫_0^∞ ds/I(s, t) ]^{−1}.    (A 17)
We note here that the first term on the right-hand side of B0(t) is simply the leading-order accumulated mass at the contact line as a function of time, N(t); that is,
N(t) = (1/4) ∫_0^t (m0 u0)(1⁻, τ) dτ = (1/π)( 1 − √(1 − t) − t/2 ).    (A 18)
It is worth noting the similarities between (A 18) and the equivalent expression for a surface tension-dominated drop evaporating under a constant evaporative flux (Freed-Brown 2015; Moore et al. 2021).
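The quadratures in (A 15)–(A 18) are straightforward to evaluate. The sketch below (with illustrative values α = 1, t = 0.5 and a truncated domain, none taken from the paper) accumulates the exponent of I(s, t) with the midpoint rule and then builds M̃0 from (A 16)–(A 17); by construction M̃0 decreases monotonically from 1/(2π) at the contact line to 1/(2π) − N(t) in the far field:

```python
import math

def inner_M0(alpha=1.0, t=0.5, s_max=40.0, n=4000):
    """Midpoint-rule sketch of the leading-order inner solution (A 16)-(A 17)."""
    ds = s_max / n
    mids = [(i + 0.5) * ds for i in range(n)]
    # exponent G(s) = (2*sqrt(2)/(1-t)) * int_0^s sqrt(xi)/(1 - exp(-xi/alpha)) dxi
    pref = 2.0 * math.sqrt(2.0) / (1.0 - t)
    invI, G = [], 0.0
    for s in mids:
        G += pref * math.sqrt(s) / (1.0 - math.exp(-s / alpha)) * ds
        # 1/I(s, t) = (1 - exp(-s/alpha)) * exp(-G); exp underflows harmlessly to 0
        invI.append((1.0 - math.exp(-s / alpha)) * math.exp(-G))
    J = sum(invI) * ds                                   # int_0^infty ds / I(s, t)
    N = (1.0 - math.sqrt(1.0 - t) - 0.5 * t) / math.pi   # accumulated mass flux (A 18)
    B0 = -N / J                                          # (A 17)
    M, acc = [], 0.0
    for v in invI:
        acc += v * ds
        M.append(1.0 / (2.0 * math.pi) + B0 * acc)       # (A 16)
    return M, N

M, N = inner_M0()
```

Because the exponent grows rapidly with s, 1/I decays fast and the truncation of the infinite integral is benign.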
At O(Pe^{-1/3}), we have
∂²M̃1/∂r̃² + ( ũ0 − (1/h̃0) ∂h̃0/∂r̃ ) ∂M̃1/∂r̃ = 4 ∂M̃0/∂t − ũ1 ∂M̃0/∂r̃ for r̃ > 0, 0 < t < 1,    (A 19)
subject to M̃1(0, t) = 0 for 0 < t < 1 and the far-field matching condition
M̃1 → −( √(2(1 − t))(1 − √(1 − t))/π ) √r̃ as r̃ → ∞.    (A 20)
While in practice it may be easier to find M̃1(r̃, t) from (A 19)–(A 20) numerically, for posterity, we state that this boundary value problem has solution
M̃1(r̃, t) = ∫_0^r̃ (1/I(s, t)) [ ∫_0^s ( 4 ∂M̃0/∂t − ũ1 ∂M̃0/∂r̃ ) I(σ, t) dσ ] ds + B1(t) ∫_0^r̃ ds/I(s, t),    (A 21)
where
B1(t) = −{ ∫_0^∞ [ (1/I(s, t)) ∫_0^s ( 4 ∂M̃0/∂t − ũ1 ∂M̃0/∂r̃ ) I(σ, t) dσ − √2 (1 − t) (∂M̃0/∂t)(1/√s) ] ds } [ ∫_0^∞ ds/I(s, t) ]^{−1}    (A 22)
is chosen to kill the O(1)-term in the far-field expansion of M̃1(r̃, t).
The O(Pe^{-2/3} log Pe^{-2/3}) problem is given by
∂²M̃2/∂r̃² + ( ũ0 − (1/h̃0) ∂h̃0/∂r̃ ) ∂M̃2/∂r̃ = 0 for r̃ > 0, 0 < t < 1,    (A 23)
subject to M̃2(0, t) = 0 for 0 < t < 1 and the far-field matching condition
M̃2 → α√(1 − t)(1 − √(1 − t))/π as r̃ → ∞.    (A 24)
The solution may be found in a similar manner to the leading-order problem, yielding
M̃2(r̃, t) = B2(t) ∫_0^r̃ ds/I(s, t),    (A 25)
where
B2(t) = ( α√(1 − t)(1 − √(1 − t))/π ) [ ∫_0^∞ ds/I(s, t) ]^{−1}.    (A 26)
Lastly, at O(Pe^{-2/3}), we have
∂²M̃3/∂r̃² + ( ũ0 − (1/h̃0) ∂h̃0/∂r̃ ) ∂M̃3/∂r̃ = 4 ∂M̃1/∂t − ũ1 ∂M̃1/∂r̃ − ũ2 ∂M̃0/∂r̃ − (1/h̃0)( (h̃1/h̃0) ∂h̃0/∂r̃ − ∂h̃1/∂r̃ ) ∂M̃0/∂r̃ − ∂M̃0/∂r̃ =: V(r̃, t)    (A 27)
for r̃ > 0, 0 < t < 1, subject to M̃3(0, t) = 0 for 0 < t < 1 and the far-field condition
M̃3 → −( (1 − t)/π ) r̃ + ( α√(1 − t)(1 − √(1 − t))/π )[ log r̃ + log( 2(1 − t)/(1 − √(1 − t))^2 ) ] + (α/π)( 1 − (1 − √(1 − t))^2 ) as r̃ → ∞.    (A 28)
The solution is given by
M̃3(r̃, t) = ∫_0^r̃ (1/I(s, t)) [ ∫_0^s V(σ, t) I(σ, t) dσ ] ds + B3(t) ∫_0^r̃ ds/I(s, t),    (A 29)
where
B3(t) = { −∫_1^∞ [ (1/I(s, t)) ∫_0^s V(σ, t) I(σ, t) dσ + (1 − t)/π − α√(1 − t)(1 − √(1 − t))/(πs) ] ds − ∫_0^1 (1/I(s, t)) [ ∫_0^s V(σ, t) I(σ, t) dσ ] ds − (1 − t)/π + (α/π)( 1 − (1 − √(1 − t))^2 ) + ( α√(1 − t)(1 − √(1 − t))/π ) log( 2(1 − t)/(1 − √(1 − t))^2 ) } [ ∫_0^∞ ds/I(s, t) ]^{−1}    (A 30)
has been chosen to satisfy the correct far-field behaviour.
We are now in a position to find the inner solution for the solute mass. By substituting the scalings (A 9) into (2.35), we see that
m̃ = −( 1/(1 − Pe^{-2/3} r̃) ) ∂M̃/∂r̃,    (A 31)
so that, expanding m̃ = m̃0 + Pe^{-1/3} m̃1 + Pe^{-2/3} log Pe^{-2/3} m̃2 + Pe^{-2/3} m̃3 as Pe → ∞, we have
m̃0 = −∂M̃0/∂r̃,    m̃1 = −∂M̃1/∂r̃,    m̃2 = −∂M̃2/∂r̃,    m̃3 = −∂M̃3/∂r̃ − r̃ ∂M̃0/∂r̃.    (A 32)
A.3. Composite solutions
We now have all of the necessary components needed to construct (additive) composite solutions for comparison to the numerical results.
To construct a composite solution for the integrated mass variable, we combine the first two outer solutions (A 3) and (A 5) and the first four inner solutions (A 16), (A 21), (A 25) and (A 29), and subtract the overlap terms given by (A 6)–(A 7), obtained using Van Dyke's matching rule (Van Dyke 1964), which yields
M_comp(r, t) = M0(r, t) + Pe^{-2/3} M1(r, t) + M̃0( Pe^{2/3}(1 − r), t ) + Pe^{-1/3} M̃1( Pe^{2/3}(1 − r), t ) + Pe^{-2/3} log Pe^{-2/3} M̃2( Pe^{2/3}(1 − r), t ) + Pe^{-2/3} M̃3( Pe^{2/3}(1 − r), t ) − { √(1 − t)/π − (1 − t)/(2π) − ( √(2(1 − t))(1 − √(1 − t))/π ) √(1 − r) − ( (1 − t)/π )(1 − r) + Pe^{-2/3} [ (α/π)( 1 − (1 − √(1 − t))^2 ) + ( α√(1 − t)(1 − √(1 − t))/π )( log(1 − r) + log( 2(1 − t)/(1 − √(1 − t))^2 ) ) ] }.    (A 33)
This composite solution is valid up to and including O(Pe^{-2/3}) throughout the whole of the droplet.
Similarly, for the solute mass, the equivalent composite profile is compiled by taking the first outer solution (A 8) as well as the first four inner solutions given by (A 32), so that, accounting for the overlap contributions,
m_comp(r, t) = m0(r, t) + Pe^{2/3} m̃0( Pe^{2/3}(1 − r), t ) + Pe^{1/3} m̃1( Pe^{2/3}(1 − r), t ) + log Pe^{-2/3} m̃2( Pe^{2/3}(1 − r), t ) + m̃3( Pe^{2/3}(1 − r), t ) − √(2(1 − t))(1 − √(1 − t))/( 2π√(1 − r) ) − (1 − t)/π.    (A 34)
We note that this composite solution is valid up to and including O(1) throughout the droplet.
Appendix B. Numerical solution of the solute transport problem
In this appendix, we outline the numerical scheme for solving the advection-diffusion problem (2.32)–(2.34) for the integrated mass variable M(r, t). As discussed previously, the integrated mass variable formulation is advantageous when solving numerically, since it is mass-preserving and has simple-to-implement Dirichlet boundary conditions.
Our numerical method is an adaptation of that discussed in Moore et al. (2021) for the Bo = 0 regime. We utilize central differences with gridpoints clustered close to the contact line, where there are rapid changes in behaviour associated with the coffee ring. We choose a uniform grid in the variable ζ ∈ [0, 1], where
r = ( 1 − ℓ^ζ )/( 1 − ℓ ),    (B 1)
and ℓ is taken to coincide with the smallest of the two boundary layers; that is, ℓ = κ(1 − t_c) where κ = min( Bo^{-1/2}, Pe^{-2/3} ) and t_c is the final computation time. Note that these boundary layers are in the context of large Bond number; when Bo = O(1), we have both increased the number of nodes in the discretization and chosen ℓ = Pe^{-2} to ensure we capture the diffusive boundary layer in this regime.
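The stretched grid can be sketched as follows. Here we read the garbled (B 1) as r = (1 − ℓ^ζ)/(1 − ℓ), which maps a uniform grid in ζ ∈ [0, 1] onto [0, 1] with points clustered near the contact line r = 1; the exponent form and the values of n and ℓ below are our assumptions for illustration:

```python
def clustered_grid(n=101, ell=1e-3):
    """Uniform grid in zeta on [0, 1] mapped to r = (1 - ell**zeta)/(1 - ell),
    so the spacing near the contact line r = 1 is roughly a factor ell smaller
    than the spacing near the droplet centre r = 0."""
    zeta = [i / (n - 1) for i in range(n)]
    return [(1.0 - ell**z) / (1.0 - ell) for z in zeta]

r = clustered_grid()
```

With ℓ set to the smaller boundary-layer width, this mapping guarantees several gridpoints inside the diffusive layer regardless of how thin it becomes.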
Even when it is present, the secondary peak does not exhibit such extreme behaviour, having a much shallower profile than the primary peak, so provided that the discretization is suitably fine, the secondary peak is captured well without special considerations. The resulting system is solved using ode15s in MATLAB, with complex-step differentiation used to compute the Jacobian (Shampine 2007).
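Complex-step differentiation, used above to build the Jacobian, is simple to illustrate: for analytic f, f′(x) ≈ Im f(x + ih)/h involves no difference of nearby values, so there is no subtractive cancellation and h can be taken extremely small. The snippet below is an illustrative Python translation of the idea (the paper's implementation is in MATLAB):

```python
import cmath
import math

def complex_step_deriv(f, x, h=1e-30):
    # f'(x) ~ Im(f(x + i*h)) / h: accurate to machine precision for analytic f
    return f(complex(x, h)).imag / h

# example: d/dx [exp(x) sin(x)] at x = 0.7
f = lambda z: cmath.exp(z) * cmath.sin(z)
approx = complex_step_deriv(f, 0.7)
exact = math.exp(0.7) * (math.sin(0.7) + math.cos(0.7))
```

Compared with a finite difference, whose step must balance truncation against round-off error, the complex step gives essentially exact derivatives, which is why it is attractive for Jacobians of stiff solvers such as ode15s.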
The veracity of the simulations has been confirmed by stringent convergence checks, alongside the excellent comparisons with the asymptotic results in both the order-unity Bond number regime and the large Bond number regime (cf. figures 4, 5).
REFERENCES
Barash, L. Yu., Bigioni, T. P., Vinokur, V. M. & Shchur, L. N. 2009 Evaporation and fluid dynamics of a sessile drop of capillary size. Physical Review E 79 (4), 046301.
Boucher, E. A. & Evans, M. J. B. 1975 Pendent drop profiles and related capillary phenomena. Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 346 (1646), 349–374.
Boulogne, François, Ingremeau, François & Stone, Howard A. 2016 Coffee-stain growth dynamics on dry and wet surfaces. Journal of Physics: Condensed Matter 29 (7), 074001.
Brutin, D. & Starov, V. 2018 Recent advances in droplet wetting and evaporation. Chemical Society Reviews 47 (2), 558–585.
Carrithers, Adam D., Brown, Martin J., Rashed, Mohamed Z., Islam, Sabina, Velev, Orlin D. & Williams, Stuart J. 2020 Multiscale self-assembly of distinctive weblike structures from evaporated drops of dilute American whiskeys. ACS Nano 14 (5), 5417–5425.
Cazabat, Anne-Marie & Guéna, Geoffroy 2010 Evaporation of macroscopic sessile droplets. Soft Matter 6 (12), 2591–2612.
Choi, S., Stassi, S., Pisano, A. P. & Zohdi, T. I. 2010 Coffee-ring effect-based three dimensional patterning of micro/nanoparticle assembly with a single droplet. Langmuir 26 (14), 11690–11698.
D'Ambrosio, Hannah-May 2022 On the evolution of and the deposition from an evaporating sessile droplet. PhD thesis, University of Strathclyde.
De Gennes, P.-G. 1985 Wetting: statics and dynamics. Rev. Mod. Phys. 57 (3), 827.
Deegan, R. D., Bakajin, O., Dupont, T. F., Huber, G., Nagel, S. R. & Witten, T. A. 1997 Capillary flow as the cause of ring stains from dried liquid drops. Nature 389 (6653), 827–829.
Deegan, R. D., Bakajin, O., Dupont, T. F., Huber, G., Nagel, S. R. & Witten, T. A. 2000 Contact line deposits in an evaporating drop. Phys. Rev. E 62 (1), 756–765.
Devlin, Nicole Raley, Loehr, Katherine & Harris, Michael T. 2016 The importance of gravity in droplet evaporation: A comparison of pendant and sessile drop evaporation with particles. AIChE Journal 62 (3), 947–955.
Edwards, A. M. J., Atkinson, P. S., Cheung, C. S., Liang, H., Fairhurst, D. J. & Ouali, F. F. 2018 Density-driven flows in evaporating binary liquid droplets. Physical Review Letters 121 (18), 184501.
2452
+ Freed-Brown, J. E. 2015 Deposition from evaporating drops: power laws and new morphologies in coffee stains.
2453
+ PhD thesis, University of Chicago.
2454
+ Guazzelli, ´E. & Pouliquen, O. 2018 Rheology of dense granular suspensions. J. Fluid Mech. 852, P1.
2455
+ Hampton, Marc A, Nguyen, Tuan AH, Nguyen, Anh V, Xu, Zhi Ping, Huang, Longbin & Rudolph, Victor
2456
+ 2012 Influence of surface orientation on the organization of nanoparticles in drying nanofluid droplets. Journal
2457
+ of colloid and interface science 377 (1), 456–462.
2458
+ Harris, D. J., Hu, H., Conrad, J. C. & Lewis, J. A. 2007 Patterning colloidal films via evaporative lithography.
2459
+ Phys. Rev. Lett. 98 (14), 148301.
2460
+ Hocking, LM 1983 The spreading of a thin drop by gravity and capillarity. Quar. J. Mech. Appl. Math. 36 (1),
2461
+ 55–69.
2462
+ Howard, NS, Archer, AJ, Sibley, DN, Southee, DJ & Wijayantha, KGU 2023 Surfactant control of coffee
2463
+ ring formation in carbon nanotube suspensions. Langmuir .
2464
+ Hu, H. & Larson, R. G. 2002 Evaporation of a sessile droplet on a substrate. J. Phys. Chem. B 106 (6), 1334–1344.
2465
+ Huo, Si-Tao, Shao, Li-Qin, Dong, Ting, Liang, Ji-Sheng, Bi, Ze-Tong, He, Mu, Li, Zhe, Gao, Zhuo &
2466
+ Song, Jing-Yao 2020 Real RGB printing AMOLED with high pixel per inch value. J. Soc. for Inf. Disp. 28 (1),
2467
+ 36–43.
2468
+ Kajiya, T., Kaneko, D. & Doi, M. 2008 Dynamical visualization of ‘coffee stain phenomenon’ in droplets of
2469
+ polymer solution via fluorescent microscopy. Langmuir 24, 12369–12374.
2470
+ Kang, S. J., Vandadi, V., Felske, J. D. & Masoud, H. 2016 Alternative mechanism for coffee-ring deposition
2471
+ based on active role of free surface. Phys. Rev. E 94 (6), 063104.
2472
+ Kaplan, C Nadir & Mahadevan, L 2015 Evaporation-driven ring and film deposition from colloidal droplets.
2473
+ Journal of Fluid Mechanics 781.
2474
+ Kolegov, KS & Lobanov, AI 2014 Mathematical modeling of fluid dynamics in evaporating drop with taking into
2475
+ account capillary and gravitational forces. Discrete and Continuous Models and Applied Computational Science
2476
+ (2), 375–380.
2477
+ Lacey, A. A. 1982 The motion with slip of a thin viscous droplet over a solid surface. Stud in App. Math. 67 (3),
2478
+ 217–230.
2479
+ Larson, R. G. 2014 Transport and deposition patterns in drying sessile droplets. AIChE Journal 60 (5), 1538–1571.
2480
+ Larsson, Christopher & Kumar, Satish 2022 Quantitative analysis of the vertical-averaging approximation for
2481
+ evaporating thin liquid films. Physical Review Fluids 7 (9), 094002.
2482
+
2483
+ 28
2484
+ M. R. Moore & A. W. Wray
2485
+ Layani, M., Gruchko, M., Milo, O., Balberg, I., Azulay, D. & Magdassi, S. 2009 Transparent conductive
2486
+ coatings by printing coffee ring arrays obtained at room temperature. ACS Nano 3 (11), 3537–3542.
2487
+ Li, Yaxing, Diddens, Christian, Lv, Pengyu, Wijshoff, Herman, Versluis, Michel & Lohse, Detlef 2019
2488
+ Gravitational effect in evaporating binary microdroplets. Physical review letters 122 (11), 114501.
2489
+ Lohse, Detlef, Zhang, Xuehua et al. 2015 Surface nanobubbles and nanodroplets. Reviews of modern physics
2490
+ 87 (3), 981.
2491
+ Mai, Tuan Anh & Richerzhagen, Bernold 2007 53.3: Manufacturing of 4th generation OLED masks with the
2492
+ Laser MicroJet® technology. In SID Symposium Digest of Technical Papers, , vol. 38, pp. 1596–1598. Wiley
2493
+ Online Library.
2494
+ Moore, M. R., Vella, D. & Oliver, J. M. 2021 The nascent coffee ring: how solute diffusion counters advection.
2495
+ J. Fluid Mech. 920, A54.
2496
+ Moore, M. R., Vella, D. & Oliver, J. M. 2022 The nascent coffee ring with arbitrary droplet contact set: an
2497
+ asymptotic analysis. arXiv preprint arXiv:2111.04854 .
2498
+ Murisic, N. & Kondic, L. 2011 On evaporation of sessile drops with moving contact lines. J. Fluid Mech. 679,
2499
+ 219–246.
2500
+ O’Brien, SBG 1991 On the shape of small sessile and pendant drops by singular perturbation techniques. Journal
2501
+ of Fluid Mechanics 233, 519–537.
2502
+ Oliver, J. M., Whiteley, J. P., Saxton, M. A., Vella, D., Zubkov, V. S. & King, J. R. 2015 On contact-line
2503
+ dynamics with mass transfer. Eur. J. Appl. Math. 26 (5), 671–719.
2504
+ Olver, F. W. J., Lozier, D. W., Boisvert, R. F. & Clark, C. W. 2010 NIST Handbook of Mathematical
2505
+ Functions. CUP.
2506
+ Orejon, D., Sefiane, K. & Shanahan, M. E. R. 2011 Stick–slip of evaporating droplets: substrate hydrophobicity
2507
+ and nanoparticle concentration. Langmuir 27 (21), 12834–12843.
2508
+ Padday, JF 1971 The profiles of axially symmetric menisci. Philosophical Transactions of the Royal Society of
2509
+ London. Series A, Mathematical and Physical Sciences 269 (1197), 265–293.
2510
+ Pham, T. & Kumar, S. 2017 Drying of droplets of colloidal suspensions on rough substrates. Langmuir 33 (38),
2511
+ 10061–10076.
2512
+ Popov, Yuri O 2005 Evaporative deposition patterns: spatial dimensions of the deposit. Physical Review E 71 (3),
2513
+ 036313.
2514
+ Pozrikidis, C 2012 Stability of sessile and pendant liquid drops. Journal of Engineering Mathematics 72 (1), 1–20.
2515
+ Pradhan, Tapan Kumar & Panigrahi, Pradipta Kumar 2017 Evaporation induced natural convection inside
2516
+ a droplet of aqueous solution placed on a superhydrophobic surface. Colloids and Surfaces A: Physicochemical
2517
+ and Engineering Aspects 530, 1–12.
2518
+ Rienstra, SW 1990 The shape of a sessile drop for small and large surface tension. Journal of Engineering Mathe-
2519
+ matics 24 (3), 193–202.
2520
+ S´aenz, P. J., Wray, A. W., Che, Z., Matar, O. K., Valluri, P., Kim, J. & Sefiane, K. 2017 Dynamics and
2521
+ universal scaling law in geometrically-controlled sessile drop evaporation. Nature Comm. 8, 14783.
2522
+ Sandu, Ion & Fleaca, Claudiu Teodor 2011 The influence of gravity on the distribution of the deposit formed
2523
+ onto a substrate by sessile, hanging, and sandwiched hanging drop evaporation. Journal of colloid and interface
2524
+ science 358 (2), 621–625.
2525
+ Shampine, L. F. 2007 Accurate numerical derivatives in MATLAB. ACM Trans. on Math. Software 33, 26.
2526
+ Shargaieva, Oleksandra, N¨asstr¨om, Hampus, Smith, Joel A, T¨obbens, Daniel, Munir, Rahim & Unger,
2527
+ Eva 2020 Hybrid perovskite crystallization from binary solvent mixtures: interplay of evaporation rate and
2528
+ binding strength of solvents. Materials Advances 1 (9), 3314–3321.
2529
+ Tan, Huanshu, Wooh, Sanghyuk, Butt, Hans-J¨urgen, Zhang, Xuehua & Lohse, Detlef 2019 Porous supra-
2530
+ particle assembly through self-lubricating evaporating colloidal ouzo drops. Nature communications 10 (1), 1–8.
2531
+ Van Dyke, M. 1964 Perturbation methods in fluid mechanics. Academic Press New York.
2532
+ Vodolazskaya, IV, Tarasevich, Yu et al. 2017 Modeling of mass transfer in a film of solution evaporating under
2533
+ the mask with holes. The European Physical Journal E 40 (10), 1–6.
2534
+ Volkov, RS & Strizhak, PA 2019 Measuring the temperature of a rapidly evaporating water droplet by planar
2535
+ laser induced fluorescence. Measurement 135, 231–243.
2536
+ Weon, B. M. & Je, J. H. 2013 Self-pinning by colloids confined at a contact line. Phys. Rev. Lett. 110 (2), 028303.
2537
+ Wilson, Stephen K & D’Ambrosio, Hannah-May 2023 Evaporation of sessile droplets. Annual Review of Fluid
2538
+ Mechanics 55.
2539
+ Wray, A. W. & Moore, M. R. 2023 Evaporation of non-circular droplets. J. Fluid Mech. p. (Under review).
2540
+ Wray, A. W., Papageorgiou, D. T., Craster, R. V., Sefiane, K. & Matar, O. K. 2014 Electrostatic suppres-
2541
+ sion of the “coffee stain effect”. Langmuir 30 (20), 5849–5858.
2542
+ Wray, A. W., Wray, P. S., Duffy, B. R. & Wilson, S. K. 2021 Contact-line deposits from multiple evaporating
2543
+ droplets. arXiv preprint arXiv:2103.07221 .
2544
+ Yariv, Ehud 2022 Shape of sessile drops at small contact angles. Journal of Fluid Mechanics 950, R4.
2545
+
W9FKT4oBgHgl3EQfoC5E/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
YNE2T4oBgHgl3EQfvAgF/content/tmp_files/2301.04085v1.pdf.txt ADDED
@@ -0,0 +1,1609 @@
1
+ MNRAS 000, 1–9 (2022)
2
+ Preprint 11 January 2023
3
+ Compiled using MNRAS LATEX style file v3.0
4
+ Propagating photo-z uncertainties: a functional derivative approach
5
+ Robert Reischke⋆1
6
+ 1 Ruhr University Bochum, Faculty of Physics and Astronomy, Astronomical Institute (AIRUB),
7
+ German Centre for Cosmological Lensing, 44780 Bochum, Germany
8
+ 11 January 2023
9
+ ABSTRACT
10
+ Photometric redshifts are a key ingredient in the analysis and interpretation of large-scale
11
+ structure (LSS) surveys. The accuracy and precision of these redshift estimates are directly
12
+ linked to the constraining power of photometric surveys. It is hence necessary to define preci-
13
+ sion and accuracy requirements for the redshift calibration so as not to infer biased results in the
14
+ final analysis. For weak gravitational lensing of the LSS the photometry culminates in the es-
15
+ timation of the source redshift distribution (SRD) in each of the tomographic bins used in the
16
+ analysis. The focus has been on shifts of the mean of the SRDs and how well the calibration
17
+ must be able to recover those. Since the estimated SRDs are usually given as a normalized
18
+ histogram with corresponding errors, it would be advantageous to propagate these uncertain-
19
+ ties accordingly to see whether the requirements of the given survey are indeed fulfilled. Here
20
+ we propose the use of functional derivatives to calculate the sensitivity of the final observ-
21
+ ables, e.g. the lensing angular power spectrum, with respect to the SRD at a specific redshift.
22
+ This allows the propagation of arbitrarily shaped small perturbations to the SRD, without hav-
23
+ ing to run the whole analysis pipeline for each realization again. We apply our method to a
24
+ EUCLID survey and demonstrate it with SRDs of the KV450 data set, recovering previous
25
+ results. Lastly, we note that, for cosmic shear, moments of order larger than two will probably
+ not be relevant when propagating redshift uncertainties.
27
+ Key words: cosmology: theory, large-scale structure of Universe, surveys, galaxies: photom-
28
+ etry
29
+ 1
30
+ INTRODUCTION
31
+ Cosmic shear, the weak gravitational lensing effect imprinted on
32
+ distant galaxies by the large-scale structure (LSS), is one of the pri-
+ mary science goals for EUCLID and Rubin-LSST. The blueprint for
34
+ these missions has been set by current stage-3 surveys, including
35
+ the Kilo-Degree Survey (Kuijken et al. 2019; Asgari et al. 2021,
36
+ KiDS), the Dark Energy Survey (Abbott et al. 2018; Gatti et al.
37
+ 2022, DES) or the Subaru Hyper Suprime-Cam (Hamana et al.
38
+ 2020, HSC), yielding tight constraints on the matter distribution
39
+ in the late Universe.
40
+ The cosmic shear signal is estimated by measuring the coher-
41
+ ent distortion of background galaxies. Since the intrinsic elliptic-
42
+ ity of galaxies is much larger than the lensing effect, millions of
43
+ galaxies are required to measure a significant signal. This renders a
44
+ complete spectroscopic survey unfeasible. Hence, one has to rely
45
+ on redshift estimates from photometry. In order to interpret the ob-
46
+ served ellipticity correlations, the photometric redshifts have to be
47
+ calibrated. There are different approaches for the calibration pro-
48
+ cedure on the market. These include the calibration with a spec-
49
+ troscopic reference sample (possibly with re-weighting) (e.g. Lima
50
+ et al. 2008; Newman 2008; Matthews & Newman 2010; Masters
51
+ ⋆ E-mail: reischke@astro.ruhr-uni-bochum.de
52
+ et al. 2015; Bonnett et al. 2016; McLeod et al. 2017; Hildebrandt
53
+ et al. 2020; Wright et al. 2020; Myles et al. 2021), using photome-
54
+ try measurements in conjunction with clustering measurements of
55
+ tracer populations (e.g. Sánchez & Bernstein 2019; van den Busch
56
+ et al. 2020; Alarcon et al. 2020) and self-organising maps (Wright
57
+ et al. 2020). It is also possible to partially self-calibrate the pho-
58
+ tometric redshifts in weak lensing data (e.g. Schaan et al. 2020).
59
+ In order to account for general shapes of the source-redshift dis-
60
+ tributions (SRDs) different mixture models have been employed
61
+ (see for example Rau et al. 2020). These Gaussian processes are
62
+ non-parametric, but they are by definition non-linear, which makes
63
+ their implementation in cosmology pipelines in general very diffi-
64
+ cult. Stölzner et al. (2021) used linear fit parameters to circumvent
65
+ this problem and to self-calibrate the data, as this can be implemented in
66
+ existing pipelines very easily.
67
+ Currently it is best practice to propagate the redshift uncer-
68
+ tainty in the SRDs by introducing shift parameters in the mean of
69
+ the distribution (Hildebrandt et al. 2021; Hikage et al. 2019; Abbott
70
+ et al. 2022). As the sensitivity of surveys rises, however, the re-
71
+ quirements on the SRD uncertainties become tighter as well. There-
72
+ fore, the contributions from higher order cumulants of the SRD be-
73
+ come important. As discussed above, previous works have focused
74
+ on Gaussian mixture models to self-calibrate the cosmic shear mea-
75
+ surement. In this paper we investigate the general sensitivity of the
76
+ © 2022 The Authors
77
+ arXiv:2301.04085v1 [astro-ph.CO] 10 Jan 2023
78
+
79
+ 2
80
+ Reischke
81
+ lensing power spectrum to perturbations in the SRD. In particu-
82
+ lar we are calculating the functional derivative of the cosmic shear
83
+ angular power spectrum with respect to the SRD at a particular
84
+ co-moving distance. This can then be mapped to a total error in
85
+ the cosmic shear power spectrum if a perturbation in the SRD in
86
+ a co-moving interval is applied. We take the constraint of the nor-
87
+ malisation of the SRD into account when calculating the functional
88
+ derivative. Therefore we can propagate arbitrary perturbations to
89
+ the SRDs (subject to some underlying covariance) and propagate
90
+ them into the Cℓ of cosmic shear. This allows us to estimate the dif-
91
+ ference in χ2 induced by the uncertainty in the SRD, without hav-
92
+ ing to run thousands of realizations of the analysis pipeline used.
93
+ By using a Fisher matrix for the cosmological parameters, this ∆χ2
94
+ can then be mapped to potential biases in cosmological parameters.
95
+ Here we study a rather idealised scenario by working in Fourier
96
+ space, assuming a Gaussian likelihood and ignoring intrinsic align-
97
+ ments. The method, however, easily generalises and including these
98
+ effects is straightforward.
99
+ We structure the paper as follows: In Section 2 we briefly
100
+ review cosmic shear basics and introduce the methodology used
101
+ by calculating the functional derivative of the weak lensing angu-
102
+ lar power spectrum. The results are presented in Section 3, where
103
+ we apply the procedure to a survey with EUCLIDs specifications
104
+ and to KiDS-VIKING-450 (KV450). We conclude in Section 4.
105
+ In the appendices we also investigate the possibility of an Edge-
106
+ worth expansion of the SRD (Appendix A), discuss photometric
107
+ galaxy clustering (Appendix B), the distribution of the mean and
108
+ standard deviation of the SRD in Appendix C, the general relation-
109
+ ship to observables (Appendix D), the functional derivative of the
110
+ non-Limber projection in Appendix E and intrinsic alignments (Ap-
111
+ pendix F).
112
+ 2
113
+ METHODOLOGY
114
+ In this section we present the basic methodology of our analysis.
115
+ In particular we describe the basics of cosmic shear and derive the
116
+ functional derivative of the lensing angular power spectrum with re-
117
+ spect to the SRDs.
118
+ 2.1
119
+ Cosmic shear basics
120
+ The equation for the cosmic shear power spectrum in tomographic
121
+ bins i and j in the Limber projection is (Limber 1954; Loverde &
122
+ Afshordi 2008)
123
+ C_\ell^{\kappa_i\kappa_j} = \int_0^{\chi_H} \frac{d\chi}{\chi^2}\, W_\kappa^{(i)}(\chi)\, W_\kappa^{(j)}(\chi)\, P_\delta\!\left(\frac{\ell+0.5}{\chi}, \chi\right) ,   (1)
139
+ where Pδ is the matter power spectrum, for which we use the em-
140
+ ulated spectrum from Mead et al. (2015). W(i)
141
+ κ (χ) is the lensing
142
+ weight of the i-th tomographic bin as given by:
143
+ W_\kappa^{(i)}(\chi) = \frac{3\Omega_{m0}}{2\chi_H^2} \frac{\chi}{a(\chi)} \int_\chi^{\chi_H} d\chi'\, n_s^{(i)}(\chi')\, \frac{\chi'-\chi}{\chi'} .   (2)
156
+ Here χ is the co-moving distance, a the scale factor, Ωm0 the matter
157
+ density parameter today, χH the Hubble radius and n(i)
158
+ s is the SRD
159
+ in the i-th tomographic bin which builds on photo-z measurements
160
+ and its calibration. It is normalized in each tomographic bin such
161
+ that
162
+ \int dz\, n_s^{(i)}(z) = 1 = \int d\chi\, n_s^{(i)}(z(\chi))\, \frac{dz}{d\chi} \equiv \int d\chi\, n_s^{(i)}(\chi) .   (3)
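A quick numerical check of this change of variables (Python, with a toy monotone mapping χ(z) standing in for the true distance-redshift relation) confirms that the normalization is preserved:

```python
import numpy as np

z = np.linspace(0.0, 3.0, 3001)
dz = z[1] - z[0]

# toy normalized SRD in redshift
n_z = np.exp(-0.5 * ((z - 0.9) / 0.25) ** 2)
n_z /= np.sum(n_z) * dz

# toy monotone distance-redshift relation (placeholder, not a real cosmology)
chi = z + 0.1 * z ** 2
dz_dchi = 1.0 / (1.0 + 0.2 * z)        # dz/dchi = 1 / (dchi/dz)

# n_s(chi) = n_s(z(chi)) dz/dchi, integrated on the (non-uniform) chi grid
n_chi = n_z * dz_dchi
norm_chi = np.sum(0.5 * (n_chi[1:] + n_chi[:-1]) * np.diff(chi))
print(norm_chi)   # ≈ 1
```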
173
+ Since photo-z is just an estimate of the true redshift, the estimated
174
+ source-redshift distribution, n(i)
175
+ s , is not exactly known. Here we in-
176
+ vestigate two approaches:
177
+ i) Use functional derivatives to evaluate the change of the lens-
178
+ ing power spectrum when perturbing the n(i)
179
+ s at different redshifts.
180
+ Given specific survey settings and precision goals, limits on the al-
181
+ lowed change of the n(i)
182
+ s can be determined, which in turn can be
183
+ mapped to changes in the cumulants or moments of the underlying
184
+ distribution (see Section 2.2).
185
+ ii) We expand the underlying source-redshift distribution in an
186
+ asymptotic Edgeworth series and investigate the requirements on
187
+ the cumulants directly in a Fisher analysis. The second approach is
188
+ not feasible for realistic SRDs (see Appendix A).
189
+ 2.2
190
+ Functional derivative of the lensing power spectrum
191
+ Here we wish to investigate the sensitivity of the weak lensing
192
+ power spectrum to the full shape of the source-redshift distribution
193
+ using functional derivatives. In particular we start by perturbing
194
+ n(i)
195
+ s (χ(z)) at a certain redshift z0, such that χ0 = χ(z0). The corre-
196
+ sponding perturbed lensing weight is thus
197
+ \Delta W_\kappa^{(i)}(\chi, \chi_0) = \frac{\delta W_\kappa^{(i)}(\chi)}{\delta n_s^{(i)}(\chi_0)}\, \Delta n_s^{(i)}(\chi_0) .   (4)
205
+ This expression evaluates how the lensing weight changes if the
206
+ source-redshift distribution is perturbed by an amount ∆n(i)
207
+ s at the
208
+ co-moving distance χ0 corresponding to the redshift z0.
209
+ Ultimately, we are interested in the change of the lensing
210
+ power spectrum, Equation (B1). First, by applying the Leibniz rule
211
+ \frac{\delta C_\ell^{(ij)}}{\delta n^{(a)}(\chi_0)} = \int dx\, \frac{\delta C_\ell^{(ij)}}{\delta W^{(a)}(x)} \frac{\delta W^{(a)}(x)}{\delta n^{(a)}(\chi_0)}
+ = \int dx\, \frac{\delta W^{(a)}(x)}{\delta n^{(a)}(\chi_0)}\, \frac{P_\delta\!\left(\frac{\ell+0.5}{x}, x\right)}{x^2} \left[ W^{(j)}(x)\,\delta^{D}_{ia} + W^{(i)}(x)\,\delta^{D}_{ja} \right] ,   (5)
237
+ The missing ingredient is the functional derivative of the lensing
238
+ kernel, for which we find
239
+ \frac{\delta W^{(i)}(x)}{\delta n_s^{(j)}(\chi_0)} = \frac{3\Omega_{m0}}{2\chi_H^2} \frac{x}{a(x)} \frac{\chi_0 - x}{\chi_0}\, \delta^{D}_{ij}\, \Theta(\chi_0 - x) .   (6)
252
+ Θ(x) is the Heaviside function to ensure that the functional deriva-
253
+ tive vanishes if the SRD is perturbed outside the integration bounds.
254
+ Using Equation (4) and Equation (5) we can write the change in
255
+ angular power spectrum ∆C(ij)
256
+ ℓ (χ′) due to a change in the source-
257
+ redshift distribution at co-moving distance χ0 as
258
+ \Delta C_{\ell,a}^{(ij)}(\chi_0) \equiv \frac{\delta C_\ell^{(ij)}}{\delta n^{(a)}(\chi_0)}\, \Delta n^{(a)}(\chi_0)
+ = \frac{3\Omega_{m0}}{2\chi_H^2}\, \Delta n(\chi_0) \int \frac{dx}{a(x)\,x} \frac{\chi_0 - x}{\chi_0}\, P_\delta\!\left(\frac{\ell+0.5}{x}, x\right) \left[ W^{(j)}(x)\,\delta^{D}_{ia} + W^{(i)}(x)\,\delta^{D}_{ja} \right] .   (7)
285
+ Integrating the perturbed lensing spectrum then gives the total per-
286
+ turbation:
287
+ \Delta C_{\ell,a}^{(ij)} \equiv \int d\chi_0\, \Delta C_{\ell,a}^{(ij)}(\chi_0) .   (8)
293
+ So far we have treated the function n(i)(z) as being completely free.
294
+ However, the functional derivative needs to respect the constraint
295
+ [Figure 1: n(i)_s(z) versus redshift z for the ten tomographic bins, colour-coded by bin index.]
+ Figure 1. Allowed perturbation for EUCLID to the SRD of the ten tomo-
325
+ graphic source bins. Solid lines show the fiducial SRD, while the bands
326
+ show the allowed perturbation to it.
327
+ given in Equation (3), thus limiting the possible variations of n(i)(z).
328
+ The normalization condition itself is again a function and we write
329
+ N[n_s^{(i)}] \equiv 1 - \int dz\, n_s^{(i)}(z) = 0 ,   (9)
335
+ this constraint can be implemented by first defining
336
+ n_s^{(i)}(z) \equiv \frac{f(z)}{\int dx'\, f(x')} ,   (10)
342
+ which will be normalized by construction. n(i)
343
+ s (z) is a functional of
344
+ f and we can now evaluate the functional derivative of C[n[ f]] as
345
+ an unconstrained derivative but evaluated at f = n. To avoid clutter
346
+ we ignore the sub- and superscripts in this part
347
+ \left.\frac{\delta C[n[f]]}{\delta f(x)}\right|_{f=n} = \int dx'\, \frac{\delta C[n]}{\delta n(x')} \left.\frac{\delta n(x')}{\delta f(x)}\right|_{f=n} .   (11)
359
+ With
360
+ \frac{\delta n(x')}{\delta f(x)} = \frac{\delta^{D}(x' - x)}{\int dy\, f(y)} - \frac{f(x')}{\left(\int dy\, f(y)\right)^2} ,   (12)
370
+ one finds
371
+ \frac{\delta C[n]}{\delta_1 n(x)} \equiv \left.\frac{\delta C[n[f]]}{\delta f(x)}\right|_{f=n} = \frac{\delta C[n]}{\delta n(x)} - \int dy\, \frac{\delta C[n]}{\delta n(y)}\, n(y) ,   (13)
382
+ where we denote that we want to keep the normalization fixed by
383
+ the variation δ1. This is a very intuitive expression: the first term
384
+ evaluates the standard functional derivative, while the second term
385
+ corrects this variation to respect the normalization.
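For a linear functional C[n] = ∫ dz c(z) n(z), whose unconstrained derivative is simply c(z), the constrained derivative of Equation (13) can be checked against a finite difference taken along the normalization-preserving family of Equation (10); the toy functions below are chosen purely for illustration.

```python
import numpy as np

z = np.linspace(0.0, 3.0, 3001)
dz = z[1] - z[0]
integrate = lambda y: np.sum(0.5 * (y[1:] + y[:-1])) * dz   # trapezoid rule

n = np.exp(-0.5 * ((z - 1.0) / 0.3) ** 2)
n /= integrate(n)                      # normalized toy SRD
c = z ** 2                             # toy response: C[n] = integral of c * n
C = lambda m: integrate(c * m)

# Eq. (13): subtract the n-weighted mean of the unconstrained derivative
grad1 = c - integrate(c * n)

# finite difference along the normalized family f -> f / (integral of f), Eq. (10)
phi = np.sin(z)                        # arbitrary perturbation shape
eps = 1.0e-6
renorm = lambda m: m / integrate(m)
fd = (C(renorm(n + eps * phi)) - C(renorm(n - eps * phi))) / (2.0 * eps)
pred = integrate(grad1 * phi)
print(fd, pred)   # agree to high accuracy
```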
386
+ 2.3
387
+ Fisher forecast
388
+ The next step is to set some requirement on the lensing power spec-
389
+ tra. Here we will look at the difference in the χ2, assuming a Gaus-
390
+ sian likelihood and thus setting a lower limit on the required accu-
391
+ racy of n(i)
392
+ s (z). For modes aℓm with zero mean and covariance Cℓ,
393
+ the ∆χ2 between multipoles ℓmin and ℓmax can be written as
394
+ \Delta\chi^2(\ell_{\min}, \ell_{\max}) = f_{\rm sky} \sum_{\ell=\ell_{\min}}^{\ell_{\max}} \frac{2\ell+1}{2}\, {\rm tr}\!\left[ \Delta C_\ell C_\ell^{-1} \Delta C_\ell C_\ell^{-1} \right] ,   (14)
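A direct implementation of this sum is straightforward; in the sketch below the array shapes and names are illustrative, with dC and C holding one tomographic covariance matrix per multipole.

```python
import numpy as np

def delta_chi2(ells, dC, C, f_sky):
    """Eq. (14): f_sky * sum_ell (2l+1)/2 * tr[dC_l C_l^-1 dC_l C_l^-1].

    ells: (n_ell,) multipoles; dC, C: (n_ell, n_bins, n_bins) matrices."""
    total = 0.0
    for ell, dCl, Cl in zip(ells, dC, C):
        M = dCl @ np.linalg.inv(Cl)
        total += 0.5 * (2.0 * ell + 1.0) * np.trace(M @ M)
    return f_sky * total

# toy check: C_l = identity, dC_l = 1 per-cent of C_l, two tomographic bins
ells = np.arange(10, 101)
C = np.broadcast_to(np.eye(2), (ells.size, 2, 2)).copy()
dC = 0.01 * C
print(delta_chi2(ells, dC, C, f_sky=0.3))
```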
408
+ [Figure 2: relative change in per-cent versus the order n of the central moment, colour-coded by tomographic bin index.]
+ Figure 2. Allowed relative change in per-cent of the central moment of the
437
+ SRD in each tomographic bin. The changes are calculated from the per-
438
+ turbed SRD distributions as shown in Figure 1.
439
+ note that Cℓ is the matrix with the components C(ij). The factor fsky
440
+ takes into account the observed sky fraction. Using Equation (8) we
441
+ rewrite the previous equation as a Riemann sum
442
+ \Delta\chi^2(\ell_{\min}, \ell_{\max}) = f_{\rm sky} \sum_{\ell=\ell_{\min}}^{\ell_{\max}} \frac{2\ell+1}{2} \sum_{r,s,i,j} {\rm tr}\!\left[ \frac{\delta C_\ell}{\delta_1 n^{(i)}(\chi_r)} C_\ell^{-1} \frac{\delta C_\ell}{\delta_1 n^{(j)}(\chi_s)} C_\ell^{-1} \right] D\chi_r\, D\chi_s\, \Delta n^{(i)}(\chi_r)\, \Delta n^{(j)}(\chi_s) ,   (15)
462
+ with the measure Dχr. If we define the Fisher matrix in this case
463
+ as:
464
+ F_{\alpha\beta} = f_{\rm sky} \sum_{\ell=\ell_{\min}}^{\ell_{\max}} \frac{2\ell+1}{2}\, {\rm tr}\!\left[ \frac{\delta C_\ell}{\delta_1 n_\alpha} C_\ell^{-1} \frac{\delta C_\ell}{\delta_1 n_\beta} C_\ell^{-1} \right] D\chi_{r(\alpha)}\, D\chi_{s(\beta)} ,   (16)
482
+ where we have labelled n(i)(χr) → nα. We then recover the difference in χ2
+ as a scalar product on the finite-dimensional Hilbert space of
+ shifts in the redshift distribution, where the Fisher matrix acts as a
+ norm-inducing metric:
486
+ \Delta\chi^2 = F(\Delta n, \Delta n) \equiv \Delta n^{\rm T} F \Delta n ,   (17)
488
+ where ∆n is the vector containing shifts of the components nα.
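Retaining only the diagonal of the Fisher matrix, the perturbation that spends a total ∆χ² = 1 equally over all components follows immediately from this quadratic form; a toy sketch (random diagonal entries purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                  # number of discretized shifts n_alpha
F_diag = rng.uniform(0.5, 5.0, N)       # toy diagonal Fisher entries F_aa

# spend Delta chi^2 = 1 equally: F_aa * dn_a^2 = 1/N for every component a
dn = 1.0 / np.sqrt(N * F_diag)

delta_chi2 = dn @ (F_diag * dn)         # Eq. (17) with diagonal F
print(delta_chi2)   # = 1 by construction
```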
489
+ The Fisher matrix, Equation (16), describes how well the
490
+ shifts nα can be determined by a measurement of the angular power
491
+ spectra Cα given certain survey settings. Clearly, if one would try to
492
+ measure all possible perturbations, neighbouring δn(χ) are strongly
493
+ correlated. This is, however, not the question we would like to ask
494
+ in this work. Instead, we want to look at the situation that we allow
495
+ any perturbation ∆n, irrespective of the correlation. Therefore, by
496
+ turning this argument around, we only use the diagonal part of the
497
+ Fisher matrix.
498
+ Lastly one should note that the functional derivative is strictly
499
+ defined as a limiting process for infinitesimally small perturbation
500
+ to the function at hand. The relation in general can be non-linear,
501
+ but as long as relative perturbations to the function are small with
502
+ respect to unity, these non-linear contributions are sub-dominant.
503
+ Especially for surveys with tight requirements on the SRDs this is
504
+ essentially always fulfilled.
505
+ [Figure 3: n(i)_s(z) versus redshift z for the five KV450 tomographic bins, colour-coded by bin index.]
+ Figure 3. Allowed perturbation for KV450 to the SRD for the 5 tomo-
535
+ graphic source bins. Solid lines show the fiducial SRD, while the bands
536
+ show the allowed perturbation to it.
537
+ 3
538
+ RESULTS
539
+ 3.1
540
+ Allowed Perturbations to the Source Redshift
541
+ Distribution
542
+ First we will look at the allowed perturbations to the SRD by
+ allowing for a total ∆χ2 of unity, corresponding to a one
544
+ σ shift of a linear model parameter. Clearly, there are many differ-
545
+ ent solutions ∆n that satisfy ∆χ2 = 1 subject to Equation (17). To
546
+ show the structure of the Fisher matrix we therefore distribute the
547
+ allowed ∆χ2 per ∆nα equally.
548
+ We will assume EUCLID specifications for the survey as
549
+ given in Blanchard et al. (2020) and assume ntomo = 10 tomo-
550
+ graphic bins and a sky fraction of 0.3. Furthermore, we will collect
551
+ multipoles between ℓmin = 10 and ℓmax = 3000. We then calculate
552
+ the diagonal Fisher matrix from Equation (16) and distribute the er-
553
+ rors equally as described above. This results into a possible realisa-
554
+ tion of ∆n yielding ∆χ2 = 1 subject to the constraint Equation (3).
555
+ Figure 1 shows the resulting perturbed SRDs. The solid lines show the
+ fiducial SRD, while the shaded areas show the allowed perturbations
+ that do not cause a bias of more than 1σ for a linear model parameter.
+ Lastly, the tomographic bin index is shown as a colour bar. The
+ general trend is very clear: the allowed perturbations become very
+ large within a small interval ∆χ around the mean of the distributions.
+ For most tomographic bins this coincides with the peak of the
+ distribution, as they are very close to Gaussian. Only for the first
+ and the last bin are these spikes slightly offset, since those
+ distributions are a bit more asymmetric. This already confirms that
+ the most important aspect of the SRDs in cosmic shear measurements is
+ to calibrate the mean redshift of each tomographic bin very well.
+ Furthermore, we observe that the spikes tend to be narrower at higher
+ redshifts, indicating that the uncertainty on the mean of the SRD is
+ more important at higher redshifts. We want to stress again that this
+ is just one realization of ∆n that produces ∆χ2 = 1, but by
+ distributing the errors equally, it is possible to see which
+ perturbations the final measurement is most sensitive to. However,
+ these uncertainties should not be taken at face value: they are
+ extreme values and only indicate a general trend.
575
+ Next, we use the perturbed SRDs to calculate their central moments µn:
+ µn ≡ E[(X − E[X])^n] = ∫ p(x) (x − µ)^n dx , (18)
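Equation (18) can be evaluated numerically for a tabulated SRD; the following is a minimal sketch (function and variable names are ours, not from the paper), using trapezoidal integration on a redshift grid:

```python
import numpy as np

def trapz(y, x):
    """Composite trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def central_moments(z, n_z, orders=(1, 2, 3, 4)):
    """Central moments mu_n = int p(x)(x - mu)^n dx of a tabulated SRD, Eq. (18)."""
    p = np.asarray(n_z, float)
    p = p / trapz(p, z)               # enforce unit normalisation
    mu = trapz(z * p, z)              # mean E[X]
    return {n: trapz((z - mu) ** n * p, z) for n in orders}
```

For a near-Gaussian bin the second moment returned here is the variance whose 10 per cent requirement is quoted below.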
581
+ Figure 4. ∆χ2 for 10^6 realisations of ∆n drawn from C^KV450_n(χ). We also show the 50th, 68th and 95th percentiles.
606
+ for a probability distribution function p(x) with mean µ. The
+ perturbed SRDs are used to calculate the change in the central moments
+ relative to the fiducial SRD. Figure 2 shows the resulting relative
+ change for all tomographic bins as a function of the order of the
+ central moment. Clearly, the first moment is most important, and while
+ the second one still needs to be known at the 10% level, all
+ higher-order moments are essentially unimportant. This is of course
+ reminiscent of the behaviour observed in Figure 1, where the
+ perturbations are such that they essentially fix the mean. It is of
+ course entirely possible to alter the shape of the distribution in a
+ different way and still achieve the desired accuracy.
+ Nonetheless, the results show that for the SRD in cosmic shear only
+ the mean redshift and the width are important, with the former
+ influencing the result far more strongly (by more than an order of
+ magnitude). In Appendix C we sample from the allowed changes in the
+ SRD and show the relative difference of the first two moments to
+ illustrate their scatter.
623
+ 3.2 Propagating Redshift Errors
+ In this section we will revisit the KV450 data for the SRD
+ (Hildebrandt et al. 2020). This data set is used since it includes a
+ covariance matrix from the direct calibration (DIR). For the
+ clustering redshifts (van den Busch et al. 2020) or the
+ self-organising maps (Wright et al. 2020) no bootstrap covariance has
+ been estimated so far.
+ For completeness the allowed perturbations are shown in Figure 3. Due
+ to the lower signal-to-noise ratio of the measurement, the allowed
+ perturbations are much larger than in the previous case. The features,
+ however, are very similar.
634
+ Since we are expressing everything in co-moving distance, the
+ covariance matrix needs to be transformed accordingly. Let C^KV450_n(z)
+ be the covariance matrix in n(z) space; the transformed covariance is
+ then
+ C^KV450_n(χ) = J^T C^KV450_n(z) J , (19)
+ where J is the Jacobian with components J^i_j = δ^i_j dz/dχ.
+ Alternatively, the Fisher matrix of the SRD perturbations can be
+ expressed in redshift space by the inverse transform.
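On a discrete grid the diagonal Jacobian makes Equation (19) a simple rescaling; a sketch (names are ours):

```python
import numpy as np

def cov_z_to_chi(cov_z, dz_dchi):
    """Transform an n(z)-space covariance into n(chi) space via Eq. (19).

    dz_dchi holds dz/dchi evaluated at each grid point, so the Jacobian
    J_ij = delta_ij * dz/dchi is diagonal and J^T C J reduces to an
    element-wise rescaling of rows and columns.
    """
    J = np.diag(np.asarray(dz_dchi, dtype=float))
    return J.T @ cov_z @ J
```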
650
+ Perturbations ∆n are now sampled from C^KV450_n(χ) and propagated to
+ obtain ∆χ2 according to Equation (14). If the redshift errors as given
+ in C^KV450_n(χ) are sufficiently small not to produce a significant
+ bias in cosmological parameters such as S8, we expect most
658
+ functional photo-z 5
+ Figure 5. The black histogram shows the induced shifts by the photo-z uncertainty in the Ωm0–σ8 plane, derived from the ∆χ2 of Figure 4. In red we show the contour from the Fisher matrix for KV450 enclosing the 1σ confidence interval.
694
+ realisations (i.e. 68%; Hildebrandt et al. 2020) to yield ∆χ2 < 1.
+ Figure 4 shows the resulting distribution of ∆χ2 for the 10^6
+ realisations of ∆n for KV450. The vertical dashed lines show the
+ 50th, 68th and 95th percentiles. It is clear from this plot that the
+ precision of the SRD used in KV450 is high enough not to yield any
+ spurious detection in the final parameter constraints, since the 68th
+ percentile is still well below unity.
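The sampling step just described — drawing ∆n from the transformed covariance and propagating it through the quadratic form ∆χ² = ∆n^T F ∆n of Equation (14) — can be sketched as follows (a simplified illustration; names are ours):

```python
import numpy as np

def dchi2_samples(cov, fisher, n_samples=100_000, seed=0):
    """Draw dn ~ N(0, cov) and propagate to dchi2 = dn^T F dn."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)                    # cov = L L^T
    dn = rng.standard_normal((n_samples, cov.shape[0])) @ L.T
    return np.einsum('si,ij,sj->s', dn, fisher, dn)
```

Taking `np.percentile` of the returned samples at 50, 68 and 95 then reproduces the vertical lines of Figure 4.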
701
+ One could now further propagate these uncertainties into cosmological
+ parameters using the corresponding Fisher matrix. For a given shift in
+ the SRD, ∆n, the corresponding shifts in the cosmological parameters,
+ ∆θ, can be calculated as:
+ ∆θ^i = −(F^−1)^i_α F^α_β ∆n^β , (20)
709
+ where Greek indices run over the perturbations in the SRD, while
+ Latin indices label cosmological parameters. Here we assumed the
+ summation convention. F^i_α hence is the mixed pseudo Fisher matrix:
+ F^i_α = −E[ (∂ ln L/∂θ_i) (δ ln L/δn^α) Dχr(α) ] (21)
+ and its inverse is a pseudo-inverse. Since the inversion of this
+ matrix is not necessarily stable, we choose to go another route here.
723
+ Since the distribution of ∆χ2 is known, we are interested in samples
+ of cosmological parameters with the same ∆χ2 with respect to the
+ best-fit value. For a Gaussian posterior in one dimension this would
+ amount to a distribution such that the absolute value of each sample
+ is fixed to √∆θ2. We sample from a standard Gaussian distribution and
+ modify its width by √∆θ2. This Gaussian is then mapped into the frame
+ of the cosmological parameters under consideration via the Cholesky
+ decomposition of the Fisher matrix of the cosmological parameters. In
+ Figure 5 we apply this procedure
736
+ to the ∆χ2 distribution of KV450 (Figure 4). Each dot represents one
+ sample of the ∆χ2 distribution, with its value shown as a colour bar.
+ It can be seen as the geodesic distance to the fiducial value for the
+ cosmological parameters in the parameter manifold (Giesel et al.
+ 2021). The red contours depict the expected 1, 2, 3σ confidence
+ regions from the Fisher forecast for KV450. Since in the original
+ analysis more than the two parameters used here were varied, we
+ re-scale the ∆χ2 accordingly, in particular by the χ2 quantile
+ function χ2_k(p), where k = 10 is the number of parameters in the
+ actual analysis (Hildebrandt et al. 2017) and p = 0.68. This is done
+ in order to obtain a fair comparison. It is clear from
748
+ Figure 6. Induced scatter in the S8 = σ8 √(Ωm0/0.3) parameter. This is directly derived from the samples of Figure 5. The scatter is roughly 15 per cent of the statistical error budget reported in Hildebrandt et al. (2017, 2020).
771
+ the plot that all samples for the photometric redshift distribution
+ lie well within the 1σ contour. Furthermore, it should be noted that
+ we are considering a very idealised forecast with two free parameters
+ and no systematics here. The procedure, however, can be generalized to
+ any number of parameters. Furthermore, one can apply the same analysis
+ to a full Monte-Carlo-Markov-Chain (MCMC) by matching those samples
+ which are ∆χ2 away from the maximum likelihood of the MCMC. Lastly,
+ the samples from Figure 5 can be mapped to S8 = σ8 √(Ωm0/0.3).
+ Figure 6 shows the resulting histogram of the scatter due to the
+ photo-z uncertainties. Comparing this to ∆S8 = 0.076 at 68% confidence
+ (Hildebrandt et al. 2020) shows that the scatter induced by the
+ redshift uncertainties as sampled from the KV450 SRD covariance has a
+ small effect on the overall error budget. In Hildebrandt et al. (2017)
+ a Fisher matrix method for the shifts of the mean of the SRDs was
+ investigated as a source of systematics, which found results similar
+ to the ones presented here. The main difference between the two
+ methods is that we allow for general perturbations to the redshift
+ distribution (provided their correlation is given). Generalizing the
+ procedure in Hildebrandt et al. (2017) to moments higher than the
+ variance is bound to fail (see Appendix A). However, we would also
+ conclude that even for EUCLID, the analysis of the first two moments
+ is probably sufficient.
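The mapping used for Figure 5 — a standard Gaussian direction scaled so that each sample carries its prescribed ∆χ², then rotated into parameter space with the Cholesky factor of the parameter covariance — can be sketched as follows (an illustration under our own conventions, not the paper's code):

```python
import numpy as np

def parameter_offsets(dchi2, cov_params, seed=1):
    """Map dchi2 values to parameter-space offsets with fixed Mahalanobis radius."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov_params)             # cov = L L^T
    u = rng.standard_normal((dchi2.size, cov_params.shape[0]))
    # rescale each random direction so |u|^2 equals the requested dchi2
    u *= np.sqrt(dchi2)[:, None] / np.linalg.norm(u, axis=1, keepdims=True)
    return u @ L.T                                  # offsets in parameter space
```

By construction each offset satisfies ∆θ^T C⁻¹ ∆θ = ∆χ², i.e. samples with the same ∆χ² lie on the same confidence contour.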
795
+ In Appendix C the mean and standard deviation of each SRD in the five
+ tomographic bins are shown for the realisations used in this section,
+ as sampled from the DIR covariance matrix. Figure C1 shows behaviour
+ very similar to what we found in Figure 2: the mean scatters less at
+ higher redshifts, while the standard deviation scatters roughly
+ equally for most of the bins.
801
+ We close the section with a general discussion about using ∆χ2 versus
+ uncertainties directly on the parameters. It is in general
+ advantageous to make accuracy assessments for the SRD using the ∆χ2
+ and not by inverting the Fisher matrix for the parameters of interest
+ to obtain the corresponding shift values. The reason is that ∆χ2 is an
+ invariant quantity, while shifts in parameter space depend on the
+ specific model choice. The only caveat of the ∆χ2 is that the number
+ of parameters must be taken into account; this is, however, much
+ easier than calculating the Fisher matrix.
810
814
+ 4 CONCLUSIONS
+ In this paper we have analysed the dependence of the cosmic shear
+ angular power spectrum on the SRD. This has been done by employing
+ functional derivatives of the cosmic shear Cℓ with respect to the SRD
+ at a fixed co-moving distance χ0. By integrating over the introduced
+ error we estimated the ∆χ2 induced by arbitrary uncertainties in the
+ SRD. We applied our method to a cosmic shear survey with EUCLID
+ specifications and to KV450, for which a covariance of the SRD
+ estimate is available. Our main findings can be summarised as follows:
825
+ (i) Allowed perturbations of the SRD are such that they preserve the
+ mean of the underlying distribution. If they do, they can be rather
+ large, even for a survey like EUCLID. This is in line with the common
+ practice of using only shifted means of the underlying redshift
+ distribution.
830
+ (ii) In order to achieve the accuracy required for EUCLID, the mean of
+ the redshift distribution needs to be determined to between 1 and 0.01
+ per cent, depending on the tomographic bin under consideration. The
+ variance of the SRD is still important at the 10 per cent level. There
+ is still some sensitivity left in the skewness, but all other moments
+ are not relevant.
836
+ (iii) We performed a simplistic analysis of the KV450 SRDs to check
+ whether they fulfil the requirements and found that the uncertainties,
+ in this very idealised scenario, only yield biases of up to 1σ in the
+ final constraints. In a full analysis this bias would be even smaller,
+ thus confirming the redshift calibration used in KV450.
841
+ (iv) Even for EUCLID it is most likely not necessary to investigate
+ moments of the redshift distribution with n > 2. This conclusion could
+ change for different settings and self-calibration methods.
844
+ (v) The procedure outlined here has the advantage of being
+ computationally very cheap, since the functional derivatives only need
+ to be computed once. It is then only a matter of sampling from the
+ underlying SRD and propagating these perturbations with the previously
+ calculated functional derivative. It is hence not necessary to push
+ thousands of realisations of the SRD through the analysis pipeline.
851
+ The method outlined here can thus be used to analyse whether a
852
+ perturbation in the SRD still fulfills the requirements of a given
853
+ experiment so that no biases of model parameters are introduced. It
854
+ allows for arbitrary perturbations to the SRD without requiring a fit
855
+ to the actual distribution. We intend to apply the presented method
856
+ to the updated SRDs of KiDS in the future.
857
+ For the interested reader, Appendices A–E discuss various aspects of
+ the analysis which could be refined in future work. In particular, we
+ look at the Edgeworth expansion of the SRD in Appendix A, i.e. an
+ expansion in the cumulants of the underlying SRDs. However, we find
+ that, even for a realistic setting, the Edgeworth expansion cannot
+ reproduce the original SRDs if cumulants with n > 2 are considered.
864
+ Data Availability: The data underlying this article will be
865
+ shared on reasonable request to the corresponding author.
866
+ ACKNOWLEDGMENTS
867
+ RR would like to thank Hendrik Hildebrandt and Björn Malte
868
+ Schäfer for insightful discussions and comments on the manuscript.
869
+ RR is supported by the European Research Council (Grant No.
870
+ 770935).
871
+ REFERENCES
+ Abbott T. M. C., et al., 2018, Phys. Rev. D, 98, 043526
+ Abbott T. M. C., et al., 2022, Phys. Rev. D, 105, 023520
+ Alarcon A., Sánchez C., Bernstein G. M., Gaztañaga E., 2020, Monthly Notices of the Royal Astronomical Society, 498, 2614
+ Asgari M., et al., 2021, Astron. Astrophys., 645, A104
+ Blanchard A., et al., 2020, Astron. Astrophys., 642, A191
+ Blinnikov S., Moessner R., 1998, Astron. Astrophys. Suppl. Ser., 130, 193
+ Bonnett C., et al., 2016, Physical Review D, 94, 042005
+ Gatti M., et al., 2022, Mon. Not. Roy. Astron. Soc., 510, 1223
+ Giesel E., Reischke R., Schäfer B. M., Chia D., 2021, JCAP, 01, 005
+ Hamana T., et al., 2020, Publications of the Astronomical Society of Japan, 72, 16
+ Hikage C., et al., 2019, Publications of the Astronomical Society of Japan, 71, 43
+ Hildebrandt H., et al., 2017, Mon. Not. Roy. Astron. Soc., 465, 1454
+ Hildebrandt H., et al., 2020, A&A, 633, A69
+ Hildebrandt H., et al., 2021, Astron. Astrophys., 647, A124
+ Kuijken K., et al., 2019, A&A, 625, A2
+ Lima M., Cunha C. E., Oyaizu H., Frieman J., Lin H., Sheldon E. S., 2008, Monthly Notices of the Royal Astronomical Society, 390, 118
+ Limber D. N., 1954, ApJ, 119, 655
+ Loverde M., Afshordi N., 2008, Phys. Rev. D, 78, 123506
+ Masters D., et al., 2015, ApJ, 813, 53
+ Matthews D. J., Newman J. A., 2010, ApJ, 721, 456
+ McLeod M., Balan S. T., Abdalla F. B., 2017, Monthly Notices of the Royal Astronomical Society, 466, 3558
+ Mead A. J., Peacock J. A., Heymans C., Joudaki S., Heavens A. F., 2015, MNRAS, 454, 1958
+ Myles J., et al., 2021, MNRAS, 505, 4249
+ Newman J. A., 2008, The Astrophysical Journal, 684, 88
+ Rau M. M., Wilson S., Mandelbaum R., 2020, Monthly Notices of the Royal Astronomical Society, 491, 4768
+ Schaan E., Ferraro S., Seljak U., 2020, J. Cosmol. Astropart. Phys., 2020, 001
+ Stölzner B., Joachimi B., Korn A., Hildebrandt H., Wright A. H., 2021, Astron. Astrophys., 650, A148
+ Sánchez C., Bernstein G. M., 2019, Monthly Notices of the Royal Astronomical Society, 483, 2801
+ Wright A. H., Hildebrandt H., Busch J. L. v. d., Heymans C., 2020, A&A, 637, A100
+ van den Busch J. L., et al., 2020, A&A, 642, A200
913
+ APPENDIX A: EDGEWORTH EXPANSION
+ In this section we employ an Edgeworth expansion for the photo-z
+ distribution. The Edgeworth expansion is an asymptotic expansion (in
+ contrast to the Gram-Charlier expansion). We start from the
+ characteristic function (Blinnikov & Moessner 1998)
+ ϕ^(j)_Z(t) = E_{n^(j)(z)}[e^{itZ}] , (A1)
+ i.e. the Fourier transform of the probability density n^(j)(z). With
+ the definition of the moments ˜µn, the Taylor expansion of the
+ characteristic function is
+ ϕ^(j)_Z(t) = 1 + Σ_{n≥1} (˜µn/n!) (it)^n . (A2)
+ The logarithm of the characteristic function is the generating
+ function of the cumulants κn:
+ κn = (1/i^n) (d^n/dt^n) log ϕ^(j)_Z(t) |_{t=0} . (A3)
945
949
+ Using this definition one can relate the cumulants to the moments
+ κn = n! Σ_{{km}} (−1)^{r−1} (r − 1)! Π_{m=1}^{n} (1/km!) (˜µm/m!)^{km} , (A4)
+ where {km} denotes the set of all solutions to the Diophantine
+ equation
+ Σ_{a=1}^{n} a ka − n = 0 . (A5)
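For the low orders used throughout this appendix, the moment–cumulant relation (A4) reduces to familiar closed forms, κ1 = µ, κ2 = µ2, κ3 = µ3 and κ4 = µ4 − 3µ2²; a minimal sketch (function name is ours):

```python
def cumulants_from_central_moments(mean, m2, m3, m4):
    """First four cumulants from the mean and central moments m2..m4.

    These are the n <= 4 special cases of the general relation (A4).
    """
    return {1: mean, 2: m2, 3: m3, 4: m4 - 3.0 * m2 ** 2}
```

For a Gaussian, µ4 = 3µ2², so κ3 and κ4 vanish, consistent with the statement below that the near-Gaussian bins have κn ≈ 0 for n > 2.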
971
+ If a distribution is then expanded as an asymptotic series around a
+ normal distribution one finds
+ n(z) = [1/√(2πκ2)] exp[−(z − κ1)²/(2κ2)]
+ × { 1 + Σ_{s=1}^{∞} κ2^{s/2} Σ_{{km}} He_{s+2r}(z/κ2^{1/2}) Π_{m=1}^{s} (1/km!) [λ_{m+2}/(m + 2)!]^{km} }
+ ≡ nG(z)(1 + Eg(z)) , (A6)
+ where λn ≡ κn/κ2^{n/2}. We are now interested in the sensitivity of
+ the distribution with respect to its cumulants. Here the cases
+ n = 1, 2 are a bit special:
1012
+ ∂n(z)/∂κ1 = n(z) (z − κ1)/κ2 (A7)
+ and for κ2
+ ∂n(z)/∂κ2 = [1/(2κ2^{1/2})] { n(z) [ (z − κ1)²/κ2 − 1 ] + nG(z) ∂Eg(z)/∂κ2^{1/2} } , (A8)
1034
+ where
+ ∂Eg(z)/∂κ2^{1/2} = Σ_{s=1}^{∞} Σ_{{km}} κ2^{s/2} P(s, {km})
+ × { He_{2r+s}(z/κ2^{1/2}) [ s/κ2^{1/2} − Σ_{a}^{s} ka(2a + 2)/κ2^{ka(a+1)+1/2} ] − (2r + s) He_{2r+s−1}(z/κ2^{3/2}) z/κ2 } , (A9)
+ where we also defined the product:
+ P(s, {km}) ≡ Π_{m=1}^{s} (1/km!) [ κ_{m+2}/((m + 2)! κ2^{2m+2}) ]^{km} . (A10)
1090
+ For all cumulants with n ≥ 3 one finds:
+ ∂n(z)/∂κn = nG(z) Σ_{s=1}^{∞} Σ_{{km}} κ2^{s/2} P(s, {km}) He_{2r+s}(z/κ2^{1/2}) k_{n−2}/κn . (A11)
+ It should be noted, however, that the Edgeworth expansion is not a
+ convergent series but rather an asymptotic expansion. One therefore
+ needs to check whether the expansion is a good approximation of the
+ underlying distribution.
1114
+ In this case one can define the ordinary Fisher matrix using partial
+ derivatives:
+ F_{κ^(i)_m κ^(j)_n} = f_sky Σ_{ℓ=ℓmin}^{ℓmax} [(2ℓ + 1)/2] tr[ (∂Cℓ/∂κ^(i)_m) C_ℓ^{−1} (∂Cℓ/∂κ^(j)_n) C_ℓ^{−1} ] , (A12)
+ where κ^(i)_m is the m-th cumulant of the source-redshift distribution
+ in the i-th tomographic bin.
1141
+ Figure A1 shows the fiducial redshift distributions for EUCLID and
+ their Edgeworth-expanded approximations as solid and dashed lines,
+ respectively. The top plot uses the expansion up to κ3, while the
+ bottom plot sums contributions up to κ6.
+ Figure A1. SRD for EUCLID in all 10 tomographic bins. Solid lines represent the fiducial SRD, while dashed lines represent their respective Edgeworth expansion. Cumulants up to order n = 3, 6 are used respectively.
+ For all but the
1200
+ first and last tomographic bin, the Edgeworth series is a good
+ approximation. This is expected, as those bins are essentially
+ Gaussian and therefore κn ≈ 0 for n > 2. The first tomographic bin
+ experiences boundary effects at z = 0 and is therefore slightly
+ skewed. This effect is even larger for the last tomographic bin, which
+ has a very long tail to high redshifts. While the first bin can still
+ be described by the Edgeworth expansion and the series converges, the
+ 10th bin shows negative probability in the Edgeworth series already at
+ third order. The situation becomes worse if higher-order cumulants are
+ included. This goes to show that even for such an idealized case as
+ the EUCLID forecast, the use of the Edgeworth expansion can be very
+ dangerous.
1212
+ For the case n = 2 we show the Pearson correlation coefficient of the
+ joint covariance matrix between the first three cumulants in each
+ tomographic bin and four cosmological parameters in Figure A2. We
+ observe some correlations between the first and second moment of each
+ tomographic bin. There is a very strong correlation between the first
+ and second moments of two different redshift bins. Furthermore, one
+ can see that parameters controlling the amplitude of the lensing
+ spectrum are anti-correlated with the mean. We want to stress again,
+ however, that the expansion, even in this case, is not convergent, and
+ results obtained with n > 2 must thus be taken with care.
1223
1227
+ Figure A2. Pearson correlation coefficient for the joint covariance matrix of the first two cumulants of the EUCLID-like survey and four cosmological parameters.
1326
+ APPENDIX B: PHOTOMETRIC GALAXY CLUSTERING
+ For photometric galaxy clustering, the procedure can simply be adapted
+ by changing the weight function (up to galaxy bias, which we absorb
+ into the power spectrum). Again using the Limber projection:
+ C^{g_i g_j}_ℓ = ∫_0^{χH} (dχ/χ²) W^(i)_g(χ) W^(j)_g(χ) Pgg((ℓ + 0.5)/χ, χ) , (B1)
+ with the galaxy power spectrum Pgg and corresponding weights given by:
+ W^(i)_g(χ) = n^(i)_g(χ) , (B2)
+ therefore the functional derivative takes the very simple form
+ δC^{g_i g_j}_ℓ / δn^a(χ0) = [Pgg((ℓ + 0.5)/χ0, χ0)/χ0²] [ n^(j)(χ0) δ^D_{ia} + n^(i)(χ0) δ^D_{ja} ] . (B3)
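Equation (B1) is a single one-dimensional integral per multipole and is cheap to evaluate on a distance grid; a toy sketch (the power-spectrum callable `p_gg` is a stand-in for a real emulator or Boltzmann-code output, and the names are ours):

```python
import numpy as np

def trapz(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def limber_cl_gg(ell, chi, n_i, n_j, p_gg):
    """Limber projection for photometric clustering, Eq. (B1).

    chi      : comoving-distance grid
    n_i, n_j : clustering weights W_g = n_g(chi) per bin, Eq. (B2)
    p_gg     : callable p_gg(k, chi) for the galaxy power spectrum
    """
    integrand = n_i * n_j / chi ** 2 * p_gg((ell + 0.5) / chi, chi)
    return trapz(integrand, chi)
```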
1371
+ APPENDIX C: DISTRIBUTION OF THE MEAN AND VARIANCE
+ We show the relative difference of the mean redshift and the standard
+ deviation of the SRD for each tomographic bin. As before we
+ distinguish between the EUCLID survey settings and KV450. In
+ particular, we sample from the diagonal covariance obtained from the
+ functional Fisher matrix as described in Section 3 for the former,
+ while we use the DIR covariance for the latter.
+ The top plots of Figure C1 show the distribution of the mean and the
+ standard deviation and are generally in good agreement with Figure 2:
+ the mean must be known below the per cent level for most bins, while
+ the standard deviation needs to be determined to roughly 10 per cent.
+ It should be noted that Figure 2 considers the extreme case where we
+ look exactly at the envelope shown in Figure 1.
+ Finally, the bottom two plots show the same for KV450, where we find
+ much wider errors on the mean and standard deviation, a few per cent
+ and a few tens of per cent respectively. The general trend, however,
+ is the same: high redshift bins are more important than lower redshift
+ bins.
1391
+ APPENDIX D: RELATIONSHIPS TO OBSERVABLES
+ Real surveys usually do not use the angular power spectra as a final
+ statistic. This is for example due to incomplete sky coverage, masking
+ effects, variable depth or simply the dimensionality of the data
+ vector. All these factors require a sufficient summary statistic. Very
+ commonly used ones are the correlation function or band powers (or
+ similarly pseudo-Cℓ). All of these are essentially linear
+ transformations of the pure angular power spectrum Cℓ and assume the
+ following general form:
+ O[Cℓ] = ∫ dℓ Cℓ W_O(ℓ) , (D1)
1404
1408
+ Figure C1. Distribution of the relative deviation of the mean and the variance of the SRD, n^(i)_s(z). Top: for the EUCLID survey settings with realisations from the inverse of the diagonal Fisher matrix used in Figure 1. Bottom: for KV450 using the samples generated from the DIR bootstrap covariance.
1507
+ where O is some observable of interest and W_O(ℓ) is the associated
+ kernel defining the transformation. Again by the chain rule, the
+ functional derivative of this new observable with respect to the SRD
+ is readily available:
+ δO[Cℓ]/δn(χ0) = ∫ dx (δO/δCℓ(x)) (δCℓ(x)/δn(χ0)) , (D2)
+ where we dropped all the indices for less clutter. For band powers,
+ C_l, this would for example assume the following form:
+ δC_l[Cℓ]/δn(χ0) = (1/N_l) ∫ dℓ ℓ S_ℓ δCℓ/δn(χ0) , (D3)
+ where S_ℓ is the band power response function and N_l is the
+ normalisation. For the two-point correlation function ξ± one finds:
+ δξ±(θ)/δn(χ0) = (1/2π) ∫ dℓ ℓ J_{0,4}(ℓθ) δCℓ/δn(χ0) . (D4)
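Because Equation (D3) uses the same kernel as the band-power definition itself, one routine serves for both the observable and its functional derivative; a sketch with a top-hat response (the discrete response and normalisation below are our simplified stand-ins):

```python
import numpy as np

def trapz(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def band_power(ells, c_ell, s_ell):
    """Band power (1/N_l) * int dl l S_l C_l; apply to dC_l/dn for Eq. (D3)."""
    norm = trapz(ells * s_ell, ells)               # normalisation N_l
    return trapz(ells * s_ell * c_ell, ells) / norm
```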
1539
+ APPENDIX E: NON-LIMBER Cℓ
+ The Limber projection used for the Cℓ is not valid on large angular
+ scales, where it must be replaced by the full expression. In full
+ generality, for any tracers i and j of the matter density
+ C^{ij}_ℓ = (2/π) ∫ dk k² I_{ℓ,k,i}[n_i] I_{ℓ,k,j}[n_j] , (E1)
+ where the functional I_{ℓ,k,i}[n_i] is given by
+ I_{ℓ,k,i}[n_i] = ∫ dχ W_i[n_i] √Pii(k, χ) jℓ(χk) . (E2)
+ Here Pii is the auto power spectrum of the tracer i and Wi is its
+ associated weight. Thus we find:
+ δC^{ij}_ℓ / δn^a = ∫ dk k² [ I_{ℓ,k,i} (δI_{ℓ,k,j}/δn^a) δ^D_{ja} + I_{ℓ,k,j} (δI_{ℓ,k,i}/δn^a) δ^D_{ia} ] , (E3)
+ where
+ δI_{ℓ,k,i} / δn_i = ∫ dχ (δWi/δn) √Pii(k, χ) jℓ(χk) . (E4)
+ The derivative of the weight function is calculated as before.
1585
+ APPENDIX F: INTRINSIC ALIGNMENTS
+ In this work, we have ignored intrinsic alignments (IA). Their inclusion is, however, straightforward by noting that the IA angular power spectrum is simply given by
+ C^{II}_\ell = \int_0^{\chi_H} \frac{\mathrm{d}\chi}{\chi^2} \, n^{(i)}_s(\chi) \, n^{(j)}_s(\chi) \, P_{II}\!\left( \frac{\ell + 0.5}{\chi}, \chi \right) \; , \quad \mathrm{(F1)}
+ where P_{II} is the IA power spectrum, which summarises the reaction of galaxy shapes to the ambient LSS at the two-point level. The functional derivative then proceeds in the same way as in Appendix B. For the GI term of intrinsic alignments, one proceeds as before for cosmic shear (compare Section 2).
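Eq. (F1) is a standard Limber integral and can be sketched as a one-dimensional quadrature; the callables for n_s and P_II are placeholder assumptions here.

```python
def limber_cl_ii(ell, n_i, n_j, p_ii, chi_h, steps=2000):
    """Trapezoid sketch of Eq. (F1):
    C^II_l = int_0^chi_H dchi / chi^2 n_i(chi) n_j(chi) P_II((l + 0.5)/chi, chi).
    n_i, n_j and p_ii are user-supplied callables (assumptions of this sketch)."""
    h = chi_h / steps
    total = 0.0
    for s in range(1, steps + 1):  # skip chi = 0, where 1/chi^2 diverges
        chi = s * h
        f = n_i(chi) * n_j(chi) * p_ii((ell + 0.5) / chi, chi) / chi ** 2
        total += f if s < steps else 0.5 * f
    return total * h

# With n(chi) = chi and P_II = 1 the integrand is 1, so the result is chi_H = 1.
val = limber_cl_ii(100, lambda c: c, lambda c: c, lambda k, c: 1.0, 1.0)
```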
+ MNRAS 000, 1–9 (2022)
+
YNE2T4oBgHgl3EQfvAgF/content/tmp_files/load_file.txt ADDED
 
_NE1T4oBgHgl3EQfCwKv/content/tmp_files/2301.02869v1.pdf.txt ADDED
@@ -0,0 +1,453 @@
+ The 42nd Asian Conference on Remote Sensing (ACRS2021)
+ 22-24th November, 2021 in Can Tho University, Can Tho city, Vietnam
+
+ Deep Learning-Based UAV Aerial Triangulation without Image Control Points
+
+ Jiageng Zhong1, Ming Li1, Jiangying Qin1, Hanqi Zhang1
+ 1State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
+ Email: zhongjiageng@whu.edu.cn, lisouming@whu.edu.cn, jy_qin@whu.edu.cn, hqzhang@whu.edu.cn
+
+ KEY WORDS: aerial triangulation; Unmanned Aerial Vehicle; Convolutional Neural Network; image matching
+
+ ABSTRACT: The emerging drone aerial survey has the advantages of low cost, high efficiency, and flexible use. However, UAVs are often equipped with cheap POS systems and non-metric cameras, and their flight attitudes are easily disturbed. Realizing large-scale POS-supported UAV mapping without image control points therefore faces many technical problems. The most fundamental core technique is accurately realizing the absolute orientation of images through aerial triangulation. In traditional aerial triangulation, image matching algorithms are constrained to varying degrees by preset prior knowledge. In recent years, deep learning has developed rapidly in the field of photogrammetric computer vision. It has surpassed the performance of traditional handcrafted features in many aspects and has shown stronger stability in image-based navigation and positioning tasks, with better resistance to unfavorable factors such as blur, illumination changes, and geometric distortion. After introducing the key technologies of aerial triangulation without image control points, this paper proposes a new drone image registration method based on deep learning image features to solve the problem of high mismatch rates in traditional methods. It adopts SuperPoint as the feature detector and uses the strong generalization ability of CNNs to extract precise feature points from UAV images, thereby achieving high-precision aerial triangulation. Experimental results show that, under the same pre-processing and post-processing conditions, this method achieves suitable precision more efficiently than the traditional method based on the SIFT algorithm, and can meet the requirements of UAV aerial triangulation without image control points in large-scale surveys.
+
+ 1. INTRODUCTION
+
+ The UAV-based aerial triangulation using POS (position and orientation system) is extremely rapid and can easily cover the survey area. The POS provides measurements of the position and orientation of the camera so that each image and pixel can be georeferenced to the Earth without the need for image control points; the most important and most commonly used data are the positions collected from Global Navigation Satellite Systems (GNSS). Although the drone aerial survey has the advantages of low cost and high efficiency, achieving high accuracy remains a problem. One important factor affecting global accuracy is the precision of feature matching, which directly influences the precision of the entire registration. Therefore, accurate feature extraction is the basic and key technique in aerial triangulation.
+
+ Feature extraction consists of keypoint detection and description and has been used in computer vision tasks for a long time. A keypoint or feature can be described as a specific meaningful structure, but it is not clear what the relevant keypoints are for an arbitrary input image (Mukherjee et al., 2015). The function of a feature detector is to detect keypoints and compute their corresponding descriptors. In the past decades, feature detectors have been an active area of research. Among the many detectors, SIFT (Scale-Invariant Feature Transform) (Lowe, 2004) is the most representative and influential one. SIFT aims to handle image rotation, affine transformations, intensity changes, and viewpoint changes when matching features (Karami et al., 2017). It generally includes two major steps. It first convolves the image with Gaussian filters at various scales and finds scale-invariant keypoints by estimating scale-space extrema. Then, for each keypoint, a local image descriptor is computed based on image gradient magnitudes and orientations (Lowe, 2004). There are many other SIFT-like detectors, such as SURF (Speeded-Up Robust Features) (Bay et al., 2008) and ORB (Oriented FAST and Rotated BRIEF) (Rublee et al., 2011), which are more efficient than SIFT.
+
+ With the rapid development of deep learning methods and the increasing demand for better feature detection, new feature detectors have emerged, most of which are based on convolutional neural networks. Different from the classical algorithms, deep learning approaches can learn abstract image features from high-dimensional data in an end-to-end fashion instead of relying on handcrafted features such as distinctive corners (Zeiler and Fergus, 2014). As convolutional neural networks learn features based on supervision, their performance heavily relies on ground truth information (Bojanić et al., 2019). In other words, the key is usually a large dataset of 2D ground truth locations labeled by human annotators. Unlike these approaches, a novel detector named SuperPoint (DeTone et al., 2018) is supervised by itself and works well for matching tasks.
+
+ In this paper, a new aerial survey method without image control points is proposed, in which SuperPoint replaces the traditional feature detector in the aerial triangulation workflow. Through experiments from multiple perspectives, it is shown that the new method achieves suitable precision more efficiently than those based on classic feature detectors.
+
+ 2. METHODOLOGY
+
+ 2.1 Big Picture
+
+ Referring to COLMAP's incremental Structure-from-Motion pipeline (Schonberger and Frahm, 2016), our method can be divided into three stages, as shown in Figure 1. The first stage is to prepare the UAV images and the corresponding position data, which contain latitude and longitude information. Note that the position data should be converted to Gauss-Kruger projection coordinates.
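As an illustration of this conversion step, the sketch below implements a spherical transverse Mercator approximation to the Gauss-Kruger projection with a 500 km false easting; the function name and the choice of central meridian are our assumptions, and a real pipeline would use the ellipsoidal series or a library such as pyproj.

```python
import math

def gauss_krueger(lat_deg, lon_deg, lon0_deg, radius=6371000.0):
    """Spherical transverse Mercator sketch of the Gauss-Kruger projection.

    lat_deg/lon_deg: geographic coordinates in degrees.
    lon0_deg: central meridian of the zone.
    Returns (easting, northing) in metres, with a 500 km false easting.
    """
    lat = math.radians(lat_deg)
    dlon = math.radians(lon_deg - lon0_deg)
    b = math.cos(lat) * math.sin(dlon)
    # atanh(b) gives the east-west distance on the transverse cylinder
    easting = 500000.0 + radius * 0.5 * math.log((1.0 + b) / (1.0 - b))
    northing = radius * math.atan2(math.tan(lat), math.cos(dlon))
    return easting, northing

# A point on the central meridian maps to x = false easting, y = R * lat
e, n = gauss_krueger(30.0, 114.0, 114.0)
```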
+
+ Figure 1. A flow chart of our aerial survey method
+
+ The second stage is correspondence search, which finds overlap in the input images and identifies projections of the same points in overlapping images. For each image, the first step is to extract features that are invariant under radiometric and geometric changes. In traditional aerial triangulation, SIFT (Lowe, 2004) is mostly applied; here it is replaced by SuperPoint, which can also output L2-normalized fixed-length descriptors. As for feature matching, a matcher that combines Lowe's ratio test (Lowe, 2004) and Nearest Neighbor search is adopted to improve the matching accuracy. Based on matched features, images that cover the same scene part are discovered. So the output of the second stage is a set of potentially overlapping image pairs and their associated feature correspondences (Schonberger and Frahm, 2016). In addition, there is typically a geometric verification step that uses projective geometry to verify the matches.
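The consensus idea behind geometric verification can be illustrated with a deliberately simplified RANSAC that fits a 2-D translation to the putative matches (COLMAP-style pipelines verify pairs with full epipolar geometry; the one-point translation model and function names here are our simplification).

```python
import random

def ransac_translation(matches, thresh=3.0, iters=100, seed=0):
    """Toy geometric verification: RANSAC fit of a 2-D translation.

    matches: list of ((x1, y1), (x2, y2)) putative correspondences.
    Returns (best_translation, inlier_indices).
    """
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)
        tx, ty = x2 - x1, y2 - y1  # one-point model hypothesis
        inliers = [i for i, ((a, b), (c, d)) in enumerate(matches)
                   if (c - a - tx) ** 2 + (d - b - ty) ** 2 <= thresh ** 2]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers

# 8 matches consistent with a (10, -5) shift plus 2 gross outliers
good = [((x, y), (x + 10.0, y - 5.0)) for x, y in
        [(0, 0), (3, 1), (5, 9), (7, 2), (2, 8), (9, 9), (4, 4), (6, 7)]]
bad = [((0, 0), (50.0, 50.0)), ((1, 1), (-30.0, 2.0))]
t, inliers = ransac_translation(good + bad)
```

The verified pair is kept only if enough inliers survive; the two outliers above are rejected by the consensus check.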
+
+ The third stage is mainly carried out in four steps. Based on the output of the second stage, new images can be registered. Next, as newly registered images increase the scene coverage, new scene points can be triangulated and added to the scene structure. Then, considering the possible accumulation of errors during reconstruction, BA (Bundle Adjustment) (Triggs et al., 1999) is applied to refine the camera and point parameters by minimizing the reprojection error. Finally, outliers are filtered. This iterative strategy can significantly improve completeness and accuracy (Schonberger and Frahm, 2016).
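The quantity minimised by bundle adjustment can be sketched as follows. This toy refines only one camera's translation with a derivative-free coordinate search rather than the full joint optimisation over cameras and points of Triggs et al. (1999); the pinhole model and synthetic scene are our assumptions.

```python
def project(point, cam_t, f=1000.0):
    """Pinhole projection of a 3-D point seen by a camera translated by cam_t."""
    x, y, z = (p - t for p, t in zip(point, cam_t))
    return f * x / z, f * y / z

def reprojection_error(cam_t, points3d, obs):
    """Sum of squared pixel residuals -- the quantity bundle adjustment minimises."""
    err = 0.0
    for p, (u, v) in zip(points3d, obs):
        pu, pv = project(p, cam_t)
        err += (pu - u) ** 2 + (pv - v) ** 2
    return err

def refine_camera(cam_t, points3d, obs, step=0.5, min_step=1e-7):
    """Derivative-free stand-in for the BA update: coordinate search with a
    shrinking step on the camera translation only (points held fixed)."""
    t = list(cam_t)
    best = reprojection_error(t, points3d, obs)
    while step > min_step:
        improved = False
        for i in range(3):
            for d in (step, -step):
                cand = list(t)
                cand[i] += d
                e = reprojection_error(cand, points3d, obs)
                if e < best:
                    t, best, improved = cand, e, True
        if not improved:
            step *= 0.5
    return tuple(t), best

# Synthetic scene: observations rendered from the true camera translation
points3d = [(0.0, 0.0, 10.0), (4.0, 0.0, 12.0), (0.0, 4.0, 14.0),
            (-4.0, -4.0, 16.0), (3.0, -3.0, 11.0)]
true_t = (0.5, -0.4, 0.8)
obs = [project(p, true_t) for p in points3d]
t_hat, err = refine_camera((0.0, 0.0, 0.0), points3d, obs)
```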
+
+ Through the workflow above, the aerial survey without image control points is completed. Specifically, all the images are aligned and a sparse point cloud of the survey area is formed.
+
+
+ 2.2 SuperPoint Network
+
+ SuperPoint is a fully-convolutional neural network with an encoder-decoder architecture that operates on a full-sized image. Its structure is shown in Figure 2. A shared encoder first processes the input image and then branches into two decoders, one for interest point detection and the other for interest point description. This strategy is quite different from traditional systems, which first detect keypoints and then compute the descriptors.
+
+ Figure 2. Structure of SuperPoint Network (DeTone et al., 2018)
+
+ It should be noted that SuperPoint adopts a self-supervised training strategy. It first trains a base detector called MagicPoint under supervision from a synthetic dataset, where keypoints can be determined unambiguously. Then the detector capacity is extended to real images using homographic adaptation. Finally, a keypoint descriptor is computed by an additional subnetwork.
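The interest point decoder shown in Figure 2 ends with a softmax over 65 channels (64 pixel positions per 8x8 cell plus a "no keypoint" dustbin) followed by a reshape to full resolution. A minimal sketch of that post-processing step, with toy dimensions of our choosing:

```python
import math

def detector_head_to_heatmap(logits, hc, wc, cell=8):
    """Convert SuperPoint's (65, Hc, Wc) detector logits into a full-resolution
    (Hc*8, Wc*8) keypoint heatmap: per-cell channel softmax, drop the 65th
    "dustbin" channel, then unfold each 64-vector into its 8x8 pixel cell."""
    h, w = hc * cell, wc * cell
    heat = [[0.0] * w for _ in range(h)]
    for cy in range(hc):
        for cx in range(wc):
            scores = [logits[c][cy][cx] for c in range(65)]
            m = max(scores)
            exp = [math.exp(s - m) for s in scores]
            z = sum(exp)
            probs = [e / z for e in exp[:64]]  # drop dustbin channel
            for k, p in enumerate(probs):
                heat[cy * cell + k // cell][cx * cell + k % cell] = p
    return heat

# One 8x8 cell (Hc = Wc = 1): a strong logit in channel 10 -> pixel (1, 2)
logits = [[[0.0]] for _ in range(65)]
logits[10][0][0] = 8.0
heat = detector_head_to_heatmap(logits, 1, 1)
```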
+
+ 3. EXPERIMENTS
+
+ 3.1 Data Preparation
+
+ In this section, we present experimental results of the traditional method (based on SIFT) and our method for comparison. All experiments in this paper are based on two datasets collected in Chongqing. Each dataset contains a UAV image sequence and the corresponding POS data of a scene. The GSD (ground sample distance) of the aerial images is 0.2 m, and the GNSS standard horizontal and vertical precision is 1 cm and 3 cm respectively. In addition, there are also known ground points for precision checks. To be more specific, Scene 1 and Scene 2 contain 24 images and 6 images respectively. The heading overlap and side overlap rates were set to 80% and 60% respectively. These settings meet the specification for small-scale topographic mapping.
+
+ 3.2 System Runtime
+
+ The run-times of SuperPoint and SIFT are measured using an RTX 2060 GPU. The SuperPoint architecture is implemented with the PyTorch deep learning library (Paszke et al., 2019). The average run-times of the different algorithms are shown in Table 1. As the inference of the deep model is done in a single forward propagation step, the run-time of a single forward pass is measured to be about 148 ms, while SIFT takes about 368 ms to process one image. It can be seen that SuperPoint executes more efficiently than SIFT and may be applied in real-time surveying.
+
+ Table 1. Mean execution times
+ Algorithm    SuperPoint   SIFT
+ Run-time     148 ms       368 ms
+
+ 3.3 Feature Extraction and Matching
+
+ Several comparative experiments are carried out for qualitative and quantitative evaluation of the performance of the keypoint detector and descriptor generator on the datasets.
+
+ The appearance and distribution of keypoints from different detectors are intuitively demonstrated in Figure 3; the image is from the dataset of Scene 1. It can be observed that SuperPoint produces fewer feature points than SIFT, and their spatial distribution is more dispersed. For instance, there is a tract of farmland in the top left of
+ the images, where SIFT can hardly extract keypoints while SuperPoint still extracts well-distributed ones. In contrast, for tree or residential areas with rich texture information, SIFT produces denser points.
+
+ (1) SuperPoint   (2) SIFT
+ Figure 3. The appearance and distribution of keypoints from different detectors
+
+ (1) SuperPoint   (2) SIFT
+ Figure 4. Qualitative result of matching
+
+ Then, to evaluate the performance of the descriptors, the extracted features are matched by a matcher that combines Lowe's ratio test (Lowe, 2004) and Nearest Neighbor search. The ratio test checks whether matches are ambiguous and should be removed, because the probability that a match is correct can be estimated from the ratio of the distance to the closest neighbor over the distance to the second closest (Lowe, 2004). A qualitative comparison of SuperPoint versus SIFT is shown in Figure 4, with the distance ratio set to 0.7. SuperPoint tends to produce a larger number of correct matches that densely cover the image, while there are several mismatches in the result of SIFT.
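The combined matcher described above can be sketched as follows; the brute-force nearest-neighbour search and the toy 2-D descriptors are our simplifications (real descriptors are high-dimensional and searched with optimised structures).

```python
import math

def match_descriptors(desc1, desc2, ratio=0.7):
    """Nearest-neighbour matching with Lowe's ratio test: keep a match only if
    the closest descriptor is clearly better than the second closest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    matches = []
    for i, d1 in enumerate(desc1):
        ranked = sorted(range(len(desc2)), key=lambda j: dist(d1, desc2[j]))
        best, second = ranked[0], ranked[1]
        if dist(d1, desc2[best]) < ratio * dist(d1, desc2[second]):
            matches.append((i, best))
    return matches

# Toy L2-normalised 2-D descriptors: desc1[0] has one clear partner,
# desc1[1] sits between two near-identical candidates and is rejected.
desc1 = [(1.0, 0.0), (0.712, 0.702)]
desc2 = [(0.995, 0.0998), (0.0, 1.0), (0.707, 0.707), (0.717, 0.697)]
matches = match_descriptors(desc1, desc2)
```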
+
+ A statistical analysis of the quality of the descriptors is performed. Figure 5 shows the match rates and mismatch rates under different distance ratios for real image data. The match rate is defined as the ratio of matched keypoints to all keypoints, and the mismatch rate as the ratio of falsely matched keypoints to matched keypoints.
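The two rates defined above can be computed directly once ground-truth correspondences are available (the availability of such ground truth is an assumption of this sketch).

```python
def match_rates(num_keypoints, matches, true_pairs):
    """Match rate = matched keypoints / all keypoints;
    mismatch rate = falsely matched keypoints / matched keypoints."""
    match_rate = len(matches) / num_keypoints if num_keypoints else 0.0
    wrong = sum(1 for m in matches if m not in true_pairs)
    mismatch_rate = wrong / len(matches) if matches else 0.0
    return match_rate, mismatch_rate

# 10 keypoints, 4 matches of which 1 is false -> rates 0.4 and 0.25
mr, mmr = match_rates(10, [(0, 0), (1, 1), (2, 2), (3, 9)],
                      {(0, 0), (1, 1), (2, 2), (3, 3)})
```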
+
+ Figures 5(a) and 5(b) show that SuperPoint has higher match rates and lower mismatch rates in most cases. For the SIFT detector, there are too many mismatches to estimate the pose when the distance ratio is greater than 0.8. The ratio is typically set between 0.5 and 0.8 in practice, and in this range SuperPoint can achieve zero mismatches. Therefore, it is reasonable to conclude that SuperPoint produces better descriptors.
+
+ (a) Match rate   (b) Mismatch rate
+ Figure 5. The statistical results of feature matching
+
+ Table 2. Relative orientation error
+ Algorithm            SuperPoint     SIFT
+ Reprojection Error   0.1454 pixel   0.1008 pixel
+
+ A relative orientation process recovers the relative translation and angular relationships between two successive overlapping images (Tjahjadi and Agustina, 2019). In this paper, the reprojection error of relative orientation is used as the metric for evaluating the quality of keypoints. Table 2 displays the errors of the image pair in Figure 4. SIFT performs better on this metric, as SuperPoint has a higher reprojection error. This is likely because SIFT performs extra sub-pixel localization, while SuperPoint does not perform this step.
+
+ 3.4 Aerial Triangulation
+
+ Using our new aerial survey method described in Section 2, aerial triangulation without image control points is carried out on the datasets of Scene 1 and Scene 2. The reprojection errors in bundle adjustment are displayed in Table 3, and the camera position errors in Table 4. Owing to its extra sub-pixel localization, the SIFT-based method reaches higher reprojection precision. As for camera position, our method has slightly higher precision, presumably because the keypoints extracted by SuperPoint are distributed more evenly.
+
+ Table 3. Reprojection errors in bundle adjustment
+            Our Method    SIFT
+ Scene 1    0.387 pixel   0.332 pixel
+ Scene 2    0.412 pixel   0.353 pixel
+
+ Table 4. Camera position errors
+           ERROR      Our Method   SIFT
+ Scene 1   X error    0.603 m      0.530 m
+           Y error    0.716 m      1.004 m
+           Z error    0.202 m      0.166 m
+           XY error   0.936 m      1.136 m
+           XYZ error  0.958 m      1.148 m
+ Scene 2   X error    0.451 m      0.425 m
+           Y error    0.570 m      0.422 m
+           Z error    0.791 m      1.043 m
+           XY error   0.727 m      0.599 m
+           XYZ error  1.074 m      1.203 m
+
+
+ Table 5. Error of the checkpoint
+ ERROR       Our Method   SIFT
+ X error     -1.821 m     -2.444 m
+ Y error     -2.217 m     -1.635 m
+ Z error     -3.755 m     -3.925 m
+ XY error    2.870 m      2.940 m
+ XYZ error   4.726 m      4.904 m
+
+ A known point in Scene 2 is used as the checkpoint for the precision check, and Table 5 displays the checkpoint errors. The comparison result is consistent with Table 4. The experimental results illustrate that our method is likely to reach higher precision than the traditional SIFT-based method, which confirms that learned representations for descriptor matching outperform hand-tuned representations.
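The XY and XYZ entries in Tables 4 and 5 are the Euclidean norms of the per-axis errors, which can be verified directly; the check below uses the "Our Method" column of Table 5.

```python
import math

def combined_errors(ex, ey, ez):
    """XY and XYZ errors as Euclidean norms of the per-axis errors."""
    xy = math.hypot(ex, ey)
    return xy, math.sqrt(xy ** 2 + ez ** 2)

# Checkpoint row of Table 5 ("Our Method"): X = -1.821 m, Y = -2.217 m, Z = -3.755 m
xy, xyz = combined_errors(-1.821, -2.217, -3.755)
```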
+
+ 4. CONCLUSION
+
+ This paper presents a new aerial survey method without image control points, which adopts SuperPoint as the feature detector. A series of comparative experiments illustrates that our method has obvious advantages in efficiency, keypoint distribution, and matching quality, and that it achieves suitable precision. It can therefore be concluded that our method is capable of meeting the application requirements of aerial triangulation. Future work will evaluate the performance of our method more comprehensively with further experiments.
+
+ This paper has shown that the deep learning method outperforms traditional methods in many aspects; therefore, we believe that deep learning-based aerial surveying has a promising future.
+
+ ACKNOWLEDGEMENTS
+
+ This research was funded by the National Key R&D Program of China, grant number 2018YFB0505400, the National Natural Science Foundation of China (NSFC), grant number 41901407, and the LIESMARS Special Research Funding.
+
+ REFERENCES
+
+ Bay, H., Ess, A., Tuytelaars, T. and Van Gool, L. 2008. Speeded-up robust features (SURF). Computer Vision and Image Understanding, 110(3), pp.346-359.
+
+ Bojanić, D., Bartol, K., Pribanić, T., Petković, T., Donoso, Y. D. and Mas, J. S. 2019. On the comparison of classic and deep keypoint detector and descriptor methods. In: 2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA), pp. 64-69.
+
+ DeTone, D., Malisiewicz, T. and Rabinovich, A. 2018. SuperPoint: Self-supervised interest point detection and description. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 224-236.
+
+ Karami, E., Prasad, S. and Shehata, M. 2017. Image matching using SIFT, SURF, BRIEF and ORB: performance comparison for distorted images. arXiv preprint arXiv:1710.02726.
+
+ Lowe, D. G. 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), pp.91-110.
+
+ Mukherjee, D., Wu, Q. J. and Wang, G. 2015. A comparative experimental study of image feature detectors and descriptors. Machine Vision and Applications, 26(4), pp.443-466.
+
+ Paszke, A., Gross, S., et al. 2019. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, pp.8026-8037.
+
+ Rublee, E., Rabaud, V., Konolige, K. and Bradski, G. 2011. ORB: An efficient alternative to SIFT or SURF. In: 2011 International Conference on Computer Vision, pp. 2564-2571.
+
+ Schonberger, J. L. and Frahm, J. M. 2016. Structure-from-motion revisited. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4104-4113.
+
+ Tjahjadi, M. E. and Agustina, F. 2019. Fast and stable direct relative orientation of UAV-based stereo pair. International Journal of Advances in Intelligent Informatics, 5(1), pp.24-39.
+
+ Triggs, B., McLauchlan, P. F., Hartley, R. I. and Fitzgibbon, A. W. 1999. Bundle adjustment—a modern synthesis. In: International Workshop on Vision Algorithms, pp. 298-372.
+
+ Zeiler, M. D. and Fergus, R. 2014. Visualizing and understanding convolutional networks. In: European Conference on Computer Vision, pp. 818-833.
+
_NE1T4oBgHgl3EQfCwKv/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,318 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf,len=317
2
+ page_content='The 42nd Asian Conference on Remote Sensing (ACRS2021) 22-24th November, 2021 in Can Tho University, Can Tho city, Vietnam Deep Learning-Based UAV Aerial Triangulation without Image Control Points Jiageng Zhong1, Ming Li1, Jiangying Qin1, Hanqi Zhang1 1State Key Laboratory of Information Engineering in Surveying Mapping and Remote Sensing, Wuhan University, Wuhan 430079 China, Email: zhongjiageng@whu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
3
+ page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
4
+ page_content='cn, lisouming@whu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
5
+ page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
6
+ page_content='cn, jy_qin@whu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
7
+ page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
8
+ page_content='cn, hqzhang@whu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
9
+ page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
10
+ page_content='cn KEY WORDS: aerial triangulation;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
11
+ page_content=' Unmanned Aerial Vehicle;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
12
+ page_content=' Convolutional Neural Network;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
13
+ page_content=' image matching ABSTRACT: The emerging drone aerial survey has the advantages of low cost, high efficiency, and flexible use.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
14
+ page_content=' However, UAVs are often equipped with cheap POS systems and non-measurement cameras, and their flight attitudes are easily affected.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
15
+ page_content=' How to realize the large-scale mapping of UAV image-free control supported by POS faces many technical problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
16
+ page_content=' The most basic and important core technology is how to accurately realize the absolute orientation of images through advanced aerial triangulation technology.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
17
+ page_content=' In traditional aerial triangulation, image matching algorithms are constrained to varying degrees by preset prior knowledge.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
18
+ page_content=' In recent years, deep learning has developed rapidly in the field of photogrammetric computer vision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
19
+ page_content=' It has surpassed the performance of traditional handcrafted features in many aspects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
20
+ page_content=' It has shown stronger stability in image-based navigation and positioning tasks, especially it has better resistance to unfavorable factors such as blur, illumination changes, and geometric distortion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
21
+ page_content=' Based on the introduction of the key technologies of aerial triangulation without image control points, this paper proposes a new drone image registration method based on deep learning image features to solve the problem of high mismatch rate in traditional methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
22
+ page_content=' It adopts SuperPoint as the feature detector, uses the superior generalization performance of CNN to extract precise feature points from the UAV image, thereby achieving high-precision aerial triangulation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
23
+ page_content=' Experimental results show that under the same pre-processing and post-processing conditions, compared with the traditional method based on the SIFT algorithm, this method achieves suitable precision more efficiently, which can meet the requirements of UAV aerial triangulation without image control points in large-scale surveys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
24
+ page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
25
1. INTRODUCTION

UAV-based aerial triangulation using a POS (position and orientation system) is extremely rapid and can easily cover the survey area. The POS provides measurements of the position and orientation of the camera so that each image and pixel can be georeferenced to the Earth without the need for image control points; the most important and most commonly used data are the position data collected from Global Navigation Satellite Systems (GNSS). Although drone aerial surveys have the advantages of low cost and high efficiency, achieving high accuracy remains a problem. One important factor affecting global accuracy is the precision of feature matching, which directly influences the precision of the entire registration. Therefore, accurate feature extraction is the basic and key technique in aerial triangulation.

Feature extraction consists of keypoint detection and description and has long been used in computer vision tasks. A keypoint, or feature, can be described as a specific meaningful structure, but it is not clear what the relevant keypoints are for an arbitrary input image (Mukherjee et al., 2015). The function of a feature detector is to detect keypoints and compute their corresponding descriptors. Over the past decades, feature detectors have been an active area of research. Among the many detectors, SIFT (Scale-Invariant Feature Transform) (Lowe, 2004) is the most representative and influential. SIFT aims to handle image rotation, affine transformations, intensity changes, and viewpoint changes when matching features (Karami et al., 2017). It generally includes two major steps. First, it convolves the image with Gaussian filters at various scales and finds scale-invariant keypoints by detecting scale-space extrema. Then, for each keypoint, a local image descriptor is computed from image gradient magnitudes and orientations (Lowe, 2004).
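The gradient computation underlying SIFT's orientation histograms can be illustrated with a minimal pure-Python sketch (this is only the per-pixel building block, not Lowe's full 128-dimensional descriptor; the 3x3 intensity patch is hypothetical):

```python
import math

def gradient_mag_ori(img, x, y):
    """Gradient magnitude and orientation (radians) at pixel (x, y)
    from central finite differences, as used when accumulating
    SIFT-style orientation histograms."""
    dx = img[y][x + 1] - img[y][x - 1]
    dy = img[y + 1][x] - img[y - 1][x]
    return math.hypot(dx, dy), math.atan2(dy, dx)

# Hypothetical 3x3 intensity patch around a keypoint: a vertical edge,
# so the gradient points horizontally.
patch = [
    [10.0, 20.0, 30.0],
    [10.0, 20.0, 30.0],
    [10.0, 20.0, 30.0],
]
m, o = gradient_mag_ori(patch, 1, 1)
print(m, o)  # 20.0 0.0
```

In the full algorithm these per-pixel magnitudes and orientations are accumulated, Gaussian-weighted, into the histogram bins that form the descriptor.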
There are many other SIFT-like detectors, such as SURF (Speeded-Up Robust Features) (Bay et al., 2008) and ORB (Oriented FAST and Rotated BRIEF) (Rublee et al., 2011), which are more efficient than SIFT.

With the rapid development of deep learning methods and the increasing demand for better feature detection, new feature detectors have emerged, most of which are based on convolutional neural networks. Different from classical algorithms, deep learning approaches can learn abstract image features from high-dimensional data in an end-to-end fashion instead of relying on handcrafted features such as distinctive corners (Zeiler and Fergus, 2014). As convolutional neural networks learn features under supervision, their performance heavily relies on ground-truth information (Bojanić et al., 2019). In other words, the key is usually a large dataset of 2D ground-truth locations labeled by human annotators. Unlike these approaches, a novel detector named SuperPoint (DeTone et al., 2018) is supervised by itself and works well for matching tasks.

The 42nd Asian Conference on Remote Sensing (ACRS2021), 22-24th November, 2021 in Can Tho University, Can Tho city, Vietnam

In this paper, a new aerial survey method without image control points is proposed, in which SuperPoint replaces the traditional feature detector in the aerial triangulation flow. Experiments from multiple perspectives show that the new method achieves suitable precision more efficiently than methods based on classic feature detectors.
2. METHODOLOGY

2.1 Big Picture

Following COLMAP's incremental Structure-from-Motion pipeline (Schonberger and Frahm, 2016), our method can be divided into three stages, as shown in Figure 1. The first stage prepares the UAV images and the corresponding position data, which contain latitude and longitude information. Note that the position data should be converted to the Gauss-Kruger projection.

Figure 1. A flow chart of our aerial survey method

The second stage is correspondence search, which finds overlap among the input images and identifies projections of the same points in overlapping images. For each image, the first step is to extract features that are invariant under radiometric and geometric changes. In traditional aerial triangulation, SIFT (Lowe, 2004) is mostly applied; here it is replaced by SuperPoint, which also outputs L2-normalized fixed-length descriptors.
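L2 normalization makes descriptor distances comparable across keypoints, which is what allows a single distance-ratio threshold to work during matching. A minimal sketch (the 4-D vector is a toy stand-in for SuperPoint's real 256-D descriptor):

```python
import math

def l2_normalize(vec):
    """Scale a descriptor so its Euclidean (L2) norm equals 1."""
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec]

desc = l2_normalize([3.0, 0.0, 4.0, 0.0])  # norm is 5.0
print(desc)  # [0.6, 0.0, 0.8, 0.0]
```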
For feature matching, a matcher that combines Lowe's ratio test (Lowe, 2004) with nearest-neighbor search is adopted to improve matching accuracy. Based on the matched features, images that cover the same part of the scene are discovered, so the output of the second stage is a set of potentially overlapping image pairs and their associated feature correspondences (Schonberger and Frahm, 2016). In addition, geometric verification typically uses projective geometry to verify the matches.

The third stage is carried out in four main steps. Based on the output of the second stage, new images can be registered. Next, as newly registered images increase scene coverage, new scene points can be triangulated and added to the scene structure. Then, considering the possible accumulation of error during reconstruction, BA (bundle adjustment) (Triggs et al., 1999) is applied to refine camera parameters and point parameters by minimizing the reprojection error.
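The residual that bundle adjustment minimizes is the image-plane distance between an observed feature and the projection of its triangulated 3-D point. A minimal pinhole-projection sketch of that residual, with the camera at the origin and hypothetical intrinsics and points:

```python
import math

def project(point3d, fx, fy, cx, cy):
    """Project a 3-D point given in camera coordinates with a
    simple pinhole model (focal lengths fx, fy; principal point cx, cy)."""
    X, Y, Z = point3d
    return (fx * X / Z + cx, fy * Y / Z + cy)

def reprojection_error(observed, point3d, fx, fy, cx, cy):
    """Euclidean pixel distance between the observed feature location
    and the reprojected 3-D point."""
    u, v = project(point3d, fx, fy, cx, cy)
    return math.hypot(observed[0] - u, observed[1] - v)

# Hypothetical observation and triangulated point: the point projects to
# (600, 400) but was observed at (592, 400), an 8-pixel residual.
err = reprojection_error((592.0, 400.0), (1.0, 0.0, 10.0),
                         1000.0, 1000.0, 500.0, 400.0)
print(err)  # 8.0
```

BA jointly adjusts the camera and point parameters so that the sum of squared residuals of this kind, over all observations, is minimized.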
Finally, outliers are filtered. This iterative strategy can significantly improve completeness and accuracy (Schonberger and Frahm, 2016). Through the workflow above, the aerial survey without image control points is completed. Specifically, all the images are aligned and a sparse point cloud of the survey area is formed.
2.2 SuperPoint Network

SuperPoint is a fully-convolutional neural network with an encoder-decoder architecture that operates on a full-sized image. Its structure is shown in Figure 2. A shared encoder first processes the input image and then branches into two decoders, one for interest point detection and the other for interest point description. This strategy is quite different from traditional systems, which first detect keypoints and then compute descriptors.
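In the published SuperPoint design (DeTone et al., 2018), the detector decoder outputs an H/8 x W/8 x 65 tensor: each spatial cell holds softmax scores for its 8x8 block of pixels (64 bins) plus one "no interest point" dustbin channel, and dropping the dustbin and reshaping recovers a full-resolution H x W heatmap. The shape bookkeeping can be sketched as follows (a sketch of the published design, not the authors' code):

```python
def detector_head_shapes(h, w):
    """Tensor shapes through SuperPoint's interest-point decoder:
    an H/8 x W/8 grid of cells, each with 64 pixel bins plus one
    dustbin channel, reshaped back to a dense H x W heatmap."""
    assert h % 8 == 0 and w % 8 == 0, "input sides must be multiples of 8"
    cells = (h // 8, w // 8, 65)        # raw decoder output
    no_dustbin = (h // 8, w // 8, 64)   # after dropping the dustbin
    heatmap = (h, w)                    # after reshaping 64 -> 8x8 pixels
    return cells, no_dustbin, heatmap

print(detector_head_shapes(480, 640))
# ((60, 80, 65), (60, 80, 64), (480, 640))
```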
Figure 2. Structure of the SuperPoint network (DeTone et al., 2018)

It should be noted that SuperPoint adopts a self-supervised training strategy. It first trains a base detector called MagicPoint with supervision from a synthetic dataset in which keypoints can be determined unambiguously. The detector is then extended to real images using homographic adaptation. Finally, a keypoint descriptor is computed by an additional subnetwork.
3. EXPERIMENTS

3.1 Data Preparation

In this section, we present experimental results of the traditional method (based on SIFT) and our method for comparison. All experiments in this paper are based on two datasets collected in Chongqing. Each dataset contains a UAV image sequence and the corresponding POS data of one scene. The GSD (ground sample distance) of the aerial images is 0.2 m, and the GNSS standard horizontal and vertical precision is 1 cm and 3 cm, respectively. In addition, ground known points are available for precision checks. Specifically, Scene 1 and Scene 2 contain 24 images and 6 images, respectively. The heading overlap and side overlap rates were set to 80% and 60%, respectively. These settings meet the specification for small-scale topographic mapping.
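The GSD and overlap rates together fix the image footprint and the camera spacing: the ground footprint of one image side is GSD times the pixel count, and an 80% overlap means consecutive exposures advance by 20% of that footprint. A minimal sketch of this bookkeeping; the 4000-pixel image side is a hypothetical illustration, not a figure from the paper:

```python
def exposure_spacing(gsd_m, pixels, overlap):
    """Ground footprint of one image side (metres) and the camera
    advance needed to keep the given fractional overlap."""
    footprint = gsd_m * pixels
    advance = footprint * (1.0 - overlap)
    return footprint, advance

# Hypothetical 4000-pixel image side at the paper's 0.2 m GSD with
# the 80% heading overlap used here.
f, a = exposure_spacing(0.2, 4000, 0.80)
print(f, a)  # footprint 800 m, advance about 160 m
```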
3.2 System Runtime

The run-times of SuperPoint and SIFT are measured on an RTX 2060 GPU. The SuperPoint architecture is implemented with the PyTorch deep learning library (Paszke et al., 2019). The average run-time of each algorithm is shown in Table 1. As inference of the deep model is done in a single forward propagation step, the run-time of a single forward pass is measured to be about 148 ms, while SIFT takes about 368 ms to process one image. SuperPoint therefore executes more efficiently than SIFT and may be applicable to real-time surveying.

Table 1. Mean execution times

Algorithm    SuperPoint    SIFT
Run-time     148 ms        368 ms
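Mean per-image run-times of this kind can be collected with a simple averaging harness; the sketch below times an arbitrary callable over repeated runs (the workload is a placeholder, not either detector):

```python
import time

def mean_runtime_ms(fn, runs=10):
    """Average wall-clock execution time of fn() in milliseconds."""
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs * 1000.0

# Placeholder workload standing in for a feature extractor call.
t = mean_runtime_ms(lambda: sum(range(10000)))
print(f"{t:.3f} ms per call")
```

Note that a fair measurement of GPU inference also requires synchronizing the device before stopping the clock, since GPU kernels launch asynchronously.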
3.3 Feature Extraction and Matching

Several comparative experiments are carried out for qualitative and quantitative evaluation of the keypoint detector and descriptor generator on the datasets. The appearance and distribution of keypoints from the two detectors are demonstrated in Figure 3; the image is from the Scene 1 dataset. SuperPoint produces fewer feature points than SIFT, and their locations are more dispersed. For instance, in the tract of farmland in the top left of the images, SIFT can hardly extract keypoints, while SuperPoint extracts more well-distributed ones. Conversely, in tree or residential areas with rich texture information, SIFT produces denser points.

(1) SuperPoint (2) SIFT
Figure 3. The appearance and distribution of keypoints from different detectors

(1) SuperPoint (2) SIFT
Figure 4. Qualitative result of matching

Then, to evaluate the performance of the descriptors, the extracted features are matched by a matcher that combines Lowe's ratio test (Lowe, 2004) with nearest-neighbor search. The ratio test checks whether a match is ambiguous and should be removed, because the probability that a match is correct can be estimated from the ratio of the distance to the closest neighbor over the distance to the second closest (Lowe, 2004).
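The matcher described above can be sketched in pure Python: for each query descriptor, find the two nearest neighbors by Euclidean distance and keep the match only if the closest/second-closest distance ratio is below the threshold. The tiny 2-D descriptors are toy stand-ins for the real ones:

```python
import math

def ratio_test_match(desc_a, desc_b, ratio=0.7):
    """Nearest-neighbour matching with Lowe's ratio test.
    Returns (index_in_a, index_in_b) pairs that pass the test."""
    matches = []
    for i, da in enumerate(desc_a):
        # Distances from da to every descriptor in the other image.
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:  # unambiguous nearest neighbour
            matches.append((i, best[1]))
    return matches

# Toy descriptors: a[0] clearly matches b[0]; a[1] sits between b[1] and
# b[2], so the ratio test rejects it as ambiguous.
a = [(0.0, 0.0), (5.0, 5.0)]
b = [(0.1, 0.0), (5.0, 5.2), (5.2, 5.0)]
print(ratio_test_match(a, b))  # [(0, 0)]
```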
A qualitative comparison of SuperPoint and SIFT is shown in Figure 4, with the distance ratio set to 0.7. SuperPoint tends to produce a larger number of correct matches that densely cover the image, while the SIFT result contains several mismatches.
A statistical analysis of descriptor quality is also performed. Figure 5 shows the match rates and mismatch rates under different distance ratios on real image data. The match rate is defined as the ratio of matched keypoints to all keypoints, and the mismatch rate as the ratio of falsely matched keypoints to all matched keypoints.
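Under these definitions, both rates follow directly from the keypoint, match, and false-match counts. A minimal sketch with hypothetical counts:

```python
def match_rates(num_keypoints, num_matched, num_false):
    """Match rate = matched keypoints / all keypoints;
    mismatch rate = falsely matched keypoints / matched keypoints."""
    return num_matched / num_keypoints, num_false / num_matched

# Hypothetical counts: 1000 keypoints, 400 matched, 8 matched falsely.
mr, mmr = match_rates(num_keypoints=1000, num_matched=400, num_false=8)
print(mr, mmr)  # 0.4 0.02
```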
Figures 5(a) and 5(b) show that SuperPoint has higher match rates and lower mismatch rates in most cases. For the SIFT detector, there are too many mismatches to estimate the pose when the distance ratio is greater than 0.8. The ratio is typically set between 0.5 and 0.8 in applications, and in this range SuperPoint achieves zero mismatches. It is therefore reasonable to conclude that SuperPoint produces better descriptors.
(a) Match rate (b) Mismatch rate
Figure 5. The statistical results of feature matching

Table 2. Relative orientation error

Algorithm            SuperPoint     SIFT
Reprojection error   0.1454 pixel   0.1008 pixel

A relative orientation process recovers the relative translation and angular relationships between two successive overlapping images (Tjahjadi and Agustina, 2019). In this paper, the reprojection error of relative orientation is used as the metric for evaluating keypoint quality. Table 2 displays the errors for the image pair in Figure 4. SIFT performs better on this metric, as SuperPoint has a higher reprojection error. This is likely because SIFT performs extra sub-pixel localization, while SuperPoint does not perform this step.
3.4 Aerial Triangulation

Using the new aerial survey method described in Section 2, aerial triangulation without image control points is carried out on the Scene 1 and Scene 2 datasets. The reprojection errors in bundle adjustment are displayed in Table 3, and the camera position errors in Table 4. Due to its extra sub-pixel localization, the SIFT-based method reaches higher reprojection precision. For camera position, however, our method has slightly higher precision, presumably because the keypoints extracted by SuperPoint are distributed more evenly.
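The XY and XYZ entries in Table 4 are consistent with combining the per-axis errors in quadrature. A sketch that reproduces the Scene 1 entries for our method (X 0.603 m, Y 0.716 m, Z 0.202 m):

```python
import math

def combined_errors(ex, ey, ez):
    """Planimetric (XY) and total (XYZ) error obtained by combining
    the per-axis position errors in quadrature."""
    exy = math.hypot(ex, ey)
    exyz = math.sqrt(ex * ex + ey * ey + ez * ez)
    return exy, exyz

# Scene 1, our method, per-axis errors from Table 4.
exy, exyz = combined_errors(0.603, 0.716, 0.202)
print(round(exy, 3), round(exyz, 3))  # 0.936 0.958
```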
Table 3. Reprojection errors in bundle adjustment

          Our Method    SIFT
Scene 1   0.387 pixel   0.332 pixel
Scene 2   0.412 pixel   0.353 pixel

Table 4. Camera position errors

Scene     ERROR        Our Method   SIFT
Scene 1   X error      0.603 m      0.530 m
          Y error      0.716 m      1.004 m
          Z error      0.202 m      0.166 m
          XY error     0.936 m      1.136 m
          XYZ error    0.958 m      1.148 m
Scene 2   X error      0.451 m      0.425 m
          Y error      0.570 m      0.422 m
          Z error      0.791 m      1.043 m
          XY error     0.727 m      0.599 m
          XYZ error    1.074 m      1.203 m
193
+ page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
194
+ page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
195
+ page_content='7 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
196
+ page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/_NE1T4oBgHgl3EQfCwKv/content/2301.02869v1.pdf'}
197
The 42nd Asian Conference on Remote Sensing (ACRS2021) 22-24th November, 2021 in Can Tho University, Can Tho city, Vietnam

Table 5. Error of the checkpoint

ERROR        Our Method    SIFT
X error      -1.821 m      -2.444 m
Y error      -2.217 m      -1.635 m
Z error      -3.755 m      -3.925 m
XY error      2.870 m       2.940 m
XYZ error     4.726 m       4.904 m

A known point in Scene 2 is used as the checkpoint for the precision check, and Table 5 displays the checkpoint error. The comparison result is consistent with Table 4. The experimental results illustrate that our method is likely to reach higher precision than the traditional SIFT-based method, confirming that learned representations for descriptor matching outperform hand-tuned representations.
4. CONCLUSION

This paper presents a new aerial survey method without image control points, which adopts SuperPoint as the feature detector. A series of comparative experiments illustrates that our method has clear advantages in efficiency, keypoint distribution, and matching quality, and that it can achieve suitable precision. We therefore conclude that our method can meet the application requirements of aerial triangulation. Future work will evaluate the performance of our method more comprehensively with additional experiments. This paper has shown that the deep learning method outperforms traditional methods in many aspects; we therefore expect deep learning-based aerial surveying to have a promising future.
ACKNOWLEDGEMENTS

This research was funded by the National Key R&D Program of China (grant number 2018YFB0505400), the National Natural Science Foundation of China (NSFC, grant number 41901407), and the LIESMARS Special Research Funding.
REFERENCES

Bay, H., Ess, A., Tuytelaars, T. and Van Gool, L. 2008. Speeded-up robust features (SURF). Computer Vision and Image Understanding, 110(3), pp. 346-359.

Bojanić, D., Bartol, K., Pribanić, T., Petković, T., Donoso, Y. D. and Mas, J. S. 2019. On the comparison of classic and deep keypoint detector and descriptor methods. In: 2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA), pp. 64-69.

DeTone, D., Malisiewicz, T. and Rabinovich, A. 2018. SuperPoint: Self-supervised interest point detection and description. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 224-236.

Karami, E., Prasad, S. and Shehata, M. 2017. Image matching using SIFT, SURF, BRIEF and ORB: performance comparison for distorted images. arXiv preprint arXiv:1710.02726.

Lowe, D. G. 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), pp. 91-110.

Mukherjee, D., Wu, Q. J. and Wang, G. 2015. A comparative experimental study of image feature detectors and descriptors. Machine Vision and Applications, 26(4), pp. 443-466.

Paszke, A., Gross, S., et al. 2019. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, pp. 8026-8037.

Rublee, E., Rabaud, V., Konolige, K. and Bradski, G. 2011. ORB: An efficient alternative to SIFT or SURF. In: 2011 International Conference on Computer Vision, pp. 2564-2571.

Schonberger, J. L. and Frahm, J. M. 2016. Structure-from-motion revisited. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4104-4113.

Tjahjadi, M. E. and Agustina, F. 2019. Fast and stable direct relative orientation of UAV-based stereo pair. International Journal of Advances in Intelligent Informatics, 5(1), pp. 24-39.

Triggs, B., McLauchlan, P. F., Hartley, R. I. and Fitzgibbon, A. W. 1999. Bundle adjustment—a modern synthesis. In: International Workshop on Vision Algorithms, pp. 298-372.

Zeiler, M. D. and Fergus, R. 2014. Visualizing and understanding convolutional networks. In: European Conference on Computer Vision, pp. 818-833.
btAyT4oBgHgl3EQfwPm0/content/tmp_files/2301.00646v1.pdf.txt ADDED
@@ -0,0 +1,623 @@
Addressing the Selection Bias in Voice Assistance: Training Voice Assistance Model in Python with Equal Data Selection

KASHAV PIYA, Augustana College, USA
SRIJAL SHRESTHA, Augustana College, USA
CAMERAN FRANK, Augustana College, USA
ESTEPHANOS JEBESSA, Augustana College, USA
TAUHEED KHAN MOHD, Augustana College, USA
In recent times, voice assistants have become a part of our day-to-day lives, allowing information retrieval through voice synthesis, voice recognition, and natural language processing. These voice assistants can be found in many modern-day devices from Apple, Amazon, Google, and Samsung. This project focuses primarily on virtual assistance in natural language processing. Natural language processing is a form of AI that helps machines understand people and create feedback loops. This project will use deep learning to create a voice recognizer, training the model in Google Colaboratory on Common Voice and data collected from the local community. After recognizing a command, the AI assistant will be able to perform the most suitable action and then give a response.

The motivation for this project comes from the race and gender bias that exists in many virtual assistants. The computer industry is dominated primarily by men, and because of this, many of the products produced do not account for women. This bias has an impact on natural language processing. This project will utilize various open-source projects to implement machine learning algorithms and train the assistant to recognize different types of voices, accents, and dialects. The goal of this project is to use voice data from underrepresented groups to build a voice assistant that can recognize voices regardless of gender, race, or accent.

Increasing the representation of women in the computer industry is important for the industry's future. By representing women in the initial study of voice assistants, it can be shown that women play a vital role in the development of this technology. In line with related work, this project will use first-hand data from the college population and middle-aged adults to train a voice assistant to combat gender bias.

Additional Key Words and Phrases: Voice Assistance, Machine Learning, Virtual Assistance, Artificial Intelligence, Selection Bias, Sample Population, Python 3.10, Pyttsx3, PyTorch, JSON
1 INTRODUCTION

The first-ever voice-activated consumer product was released to the public in 1922. It was known as "Radio Rex." This product was a toy doghouse with a dog inside it. When someone said "Rex" next to the doghouse, the dog would jump out. This voice-activated toy was created even before modern computers existed. [1]
Authors' addresses: Kashav Piya, kashavpiya19@augustana.edu; Srijal Shrestha, srijalshrestha18@augustana.edu; Cameran Frank, cameranfrank18@augustana.edu; Estephanos Jebessa, estephanosjebessa19@augustana.edu; Tauheed Khan Mohd, tauheedkhanmohd@augustana.edu; all at Augustana College, 639 38th St, Rock Island, Illinois, USA, 61201.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2023 Association for Computing Machinery.
Manuscript submitted to ACM

arXiv:2301.00646v1 [eess.AS] 20 Dec 2022
Since the development of that toy, there has been considerable progress in voice recognition, natural language processing, and machine learning. A voice assistant, also known as an intelligent personal assistant or a connected speaker, is based on natural language speech recognition. It has recently risen in popularity and has been marketed and used by Apple, Amazon, Google, and Samsung. Voice assistants are now widely found in most modern-day devices a person would use.

Voice assistants are multi-purpose; one of their main purposes is to carry out a search using a voice command entered by the user as input. They are also used for information retrieval by voice synthesis. They use a variety of voice recognition techniques, language processing algorithms, and voice synthesis to listen for specific voice commands, which may include wake words, tasks, and queries, and to return relevant information or perform a specific function as requested by the user. These assistants can be software-based, which allows them to be integrated into a wide range of devices such as laptops, mobile devices, and speakers, or they can be designed into a standalone device like the Amazon Echo or Amazon Alexa Wall Clock. [2]
These voice assistants work remarkably well and are quite fascinating, which raises the question: what goes on under the hood of these innovations, and how do they work the way they do?

In short, voice assistants use artificial intelligence and voice recognition to deliver the result the user is looking for efficiently and precisely. The user provides a command to the voice assistant, called an intent. Through voice recognition, these intents can be understood by the virtual assistant. Here, voice recognition allows the speaker to speak into a device that takes the analog signal from the speaker and converts it into a digital signal, which is then processed by the computer to match it with words or phrases and recognize the command. Machine learning also plays a large part in this, as the computer needs to be taught to recognize the speaker's words by feeding it a database of words and syllables in each language to match against the digital signals. This process is known as pattern recognition. Additionally, these devices gather a lot of information from previously received commands to improve themselves using machine learning. [3]
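The pattern-recognition step described above, matching a decoded utterance against a vocabulary of known command words and phrases, can be sketched in Python. The command list and function names below are hypothetical illustrations, not part of the project's actual code:

```python
import difflib

# Hypothetical command vocabulary the assistant has been taught.
COMMANDS = ["set a reminder", "play music", "check the weather", "tell a joke"]

def recognize_command(transcript):
    """Match a decoded transcript against the known command phrases.

    The acoustic model has already turned the digital signal into text;
    this step matches that text to the closest command in the vocabulary,
    or returns None when nothing is close enough.
    """
    matches = difflib.get_close_matches(transcript.lower(), COMMANDS,
                                        n=1, cutoff=0.6)
    return matches[0] if matches else None

print(recognize_command("play some music"))          # -> play music
print(recognize_command("open the pod bay doors"))   # -> None
```

A production system would match against acoustic features rather than plain strings, but the fuzzy-lookup shape of the step is the same.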
70
+ There are two main approaches to voice assistants: task-oriented and knowledge-oriented.
+ Most voice assistants these days can combine both the task-oriented and the knowledge-oriented workflow to
+ complete all the tasks that a user may ask the voice assistant to carry out. A task-oriented approach will most likely handle
+ something as simple as filling out a form, whereas a knowledge-oriented approach may include answering questions such
+ as who the President of the United States of America is, or finding out what engine is in a Ford F50, which is a technical
+ specification of a product. [4]
+ The task-oriented approach/workflow is largely self-explanatory, as it uses goals and tasks to achieve whatever
+ the user wants or needs. This approach usually requires the voice assistant to use a different application, such as clock,
+ weather, web browser, or music apps, to help complete its tasks. Examples would be asking a voice assistant to
+ set a reminder to take medicine at 6 PM, playing music using Spotify, and so on. This approach does not require the virtual
+ voice assistant to search massive databases for knowledge. These tasks are often known as skills, and various assistants
+ allow different skills to be installed according to the user's preferences. [5]
+ A knowledge-oriented approach/workflow, on the other hand, requires the use of analytical data to help
+ users complete their tasks. [6] Unlike a task-oriented approach, this approach focuses on using online databases
+ to get related information, in addition to already recorded knowledge, to help users complete tasks. An example of a
+ knowledge-oriented approach would be a question that requires searching the internet, such as
+ what is the capital of the state of Illinois, or who invented the telephone?
+ Manuscript submitted to ACM
+
+ Addressing the Selection Bias in Voice Assistance: Training Voice Assistance Model in Python with Equal Data Selection
+ Furthermore, there are two types of artificial intelligence (AI): in general, there is weak AI and there is strong
+ AI. [7] Many systems such as Siri, Alexa, Cortana, and Bixby can only perform certain tasks
+ that were defined when the AI was built. These types of AIs are called weak AI. Other machines or
+ systems have a mind of their own and can make decisions or take actions on their own without human interference.
+ These types of machines are called strong AI. Given the differences between strong and weak AI, the voice
+ assistant this project is building is an example of weak AI.
+ This project's virtual voice assistant will include a variety of features such as greeting the user, fetching information
+ about a person, an object, or anything else in general from the internet, providing the time, opening web browsers,
+ playing music, and so on. It might also include additional features such as opening the web camera to take pictures,
+ forecasting the weather, logging off from the user's personal computer, telling a joke, and many other features.
+ The field of virtual assistance has many avenues to consider, from providing help with technology to connecting
+ people through the usage of technology. When studying what exactly a virtual assistant is, the field that was decided
+ on was virtual assistance in natural language processing, which means the technology can understand people more
+ accurately. Natural Language Processing is a form of AI that gives machines the ability to not just read but to understand
+ and interpret human language. With NLP, machines can make sense of written or spoken text and perform tasks
+ including speech recognition, sentiment analysis, and automatic text summarization [8]. Therefore, not only does
+ natural language processing help humans, it also helps with machine learning, in the sense that NLP will continue to
+ provide more data to better the analysis of speech and create a feedback loop.
+ The English language is an extremely hard language to understand and speak, especially if English is not one's first
+ language. Not only is the English vernacular hard to comprehend and execute, but there is also a form of
+ language barrier in the different dialects that people possess. A person's dialect can be a communication inhibitor in
+ many languages, not just in English, but in this project the focus is on the English language [9].
+ For this project, synthetic voices were originally used instead of human voices, from which data was collected. A
+ synthetic voice is produced through text to speech, whereas a human voice is pre-recorded:
+ 'Its use involves recording, in advance, a text read aloud by a human being.' The usage of a synthetic voice gives
+ flexibility, as synthetic voices have a high capacity for reading textual content and can generate voice constantly. But "there is a
+ disadvantage in expressing social signs as it cannot express emotions, intentions, and attitudes through modulation of
+ the voice" [10].
+ The computer industry is primarily dominated by men, and because of this extremely one-sided representation
+ in the field, a multitude of the products that are produced do not account for women. One of the fields that this bias
+ impacts is natural language processing. Because of the lack of women in the industry, the identification percentage
+ for female voices is lower than that of male voices, which motivated the analysis and research of this topic of
+ selection bias in voice assistance [11]. Data augmentation by controlling the gender attribute is an effective technique
+ for mitigating gender bias in NLP processes.
+ 2 RELATED WORK
+ 2.1 How Does Voice Assistant Work?
+ Kashav Piya, Srijal Shrestha, Cameran Frank, Estephanos Jebessa, and Tauheed Khan Mohd
+ Fig. 1. How Does Voice Assistant Work?
+ Voice assistance has now been defined, but how does it work? A voice assistant uses speech recognition along with
+ other identification of speech components to help the machine process the voice. Then, the speech is rendered into its
+ textual representation based on extracted patterns. Following that, the program isolates the most important words,
+ or the action, also known as the anticipated intent. If the intent is not clear, the voice assistant is programmed to ask
+ more questions. It then retrieves information through API calls to access the relevant knowledge base. Finally, it relays the
+ information back to the human user through text to speech, or fulfills the necessary action. Voice assistants rely on
+ Natural Language Processing and other machine learning algorithms to perform at their best and overcome the challenges
+ they face. [12]
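+ The pipeline above (recognize speech, extract the intent, call an API, then respond or ask a follow-up question) can
+ be sketched with stub components. All function names, the intent label, and the canned API reply below are
+ illustrative assumptions, not the project's implementation.

```python
# Illustrative sketch of the voice-assistant pipeline described above.
# Each stage is a stub; a real assistant would plug in a speech recognizer,
# an NLP intent model, real API clients, and a text-to-speech engine.
def speech_to_text(audio) -> str:
    return audio  # stand-in: pretend the audio is already transcribed

def extract_intent(text: str) -> str:
    return "get_weather" if "weather" in text.lower() else "unclear"

def call_api(intent: str) -> str:
    # stand-in for an API call to a knowledge base or weather service
    return "Sunny, 72F" if intent == "get_weather" else ""

def respond(text: str, intent: str) -> str:
    if intent == "unclear":
        return "Could you rephrase that?"  # ask a follow-up question
    return f"The forecast is: {call_api(intent)}"

query = speech_to_text("What is the weather for tomorrow?")
print(respond(query, extract_intent(query)))  # The forecast is: Sunny, 72F
```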
+ 2.2 Speech Recognition
+ How does a voice assistant understand what a person is talking about? Voice assistants use fast decoding
+ algorithms that allow real-time continuous speech recognition systems to provide instant responses. [13] Speech
+ recognition has been a part of our day-to-day life for more than 10 years now, and it keeps becoming more and more
+ advanced. The beauty of speech-to-text models goes unnoticed in this process, as the machines make it look seamless.
+ Most speech recognition algorithms use a mix of Natural Language Processing and deep learning techniques to parse
+ the user query, get an appropriate response, and present it back to the user in whichever form the user desires.
+ All the big companies behind Google's Assistant, Amazon's Alexa, Apple's Siri, and others use the same techniques, just
+ with some different variations. [14]
+
+ 2.2.1 Signal Processing for Speech Recognition. Audio signals originate from objects that vibrate to produce sound waves.
+ When an object vibrates, the air molecules oscillate to and from their rest position and transmit energy to the
+ neighboring molecules. This transmission of energy from one molecule to another in turn produces
+ a sound wave. [15]
+ There are a few terms one should be familiar with when talking about sound and signal processing:
+ • Amplitude: the maximum displacement of the air molecules from the rest position
+ • Crest and Trough: the crest is the highest point in the wave, whereas the trough is the lowest point
+ • Wavelength: the distance between two successive crests or troughs
+ • Cycle: every audio signal traverses in the form of cycles; one complete upward movement and downward
+ movement of the signal forms a cycle
+ • Frequency: how fast a signal is changing over a period of time, i.e., the number of cycles per second
+ Additionally, there are two different types of signals: digital and analog. A digital signal is a discrete
+ representation of a signal over a period of time, where a finite number of samples exists between any two time intervals,
+ whereas an analog signal is a continuous representation of a signal over a period of time, which implies that there is an
+ infinite number of samples between any two given time intervals. [16]
+ For this project, audio signals are needed for the voice assistant, so there is a question of how to store a
+ signal that has an infinite number of samples, since audio starts as an analog signal. This project will convert the
+ memory-hogging analog signal into a digital signal to make it more convenient
+ to work with.
+ To convert a signal from analog to digital, a technique called sampling is used, which selects a
+ certain number of samples per second from the analog signal and makes storing and processing the signal memory
+ efficient. In analog-to-digital sampling, "an input signal is converted from some continuously varying physical value (e.g.
+ pressure in air, or frequency or wavelength of light), by some electro-mechanical device into a continuously varying
+ electrical signal." [17]
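+ As a toy illustration of sampling, a continuous tone can be reduced to a finite list of samples. The sample rate,
+ tone frequency, and duration below are arbitrary assumptions for the sketch, not the project's settings.

```python
import math

# Sample a continuous 440 Hz tone at an 8000 Hz sampling rate for 10 ms.
# The "analog" signal is modeled as a function of continuous time; sampling
# evaluates it only at discrete instants, giving a finite, storable list.
SAMPLE_RATE = 8000      # samples per second (assumed for illustration)
FREQ = 440.0            # tone frequency in Hz
DURATION = 0.01         # seconds

def analog(t: float) -> float:
    return math.sin(2 * math.pi * FREQ * t)

samples = [analog(n / SAMPLE_RATE) for n in range(int(SAMPLE_RATE * DURATION))]
print(len(samples))  # 80 discrete samples instead of infinitely many points
```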
+ 2.2.2 Feature Extraction Techniques for Audio Signals. For this project's model to use the audio signals, features
+ must be extracted from the audio in the time domain and the frequency domain. In the time domain, the audio signal is
+ represented by amplitude as a function of time, and the features are the amplitudes recorded at different time intervals. On
+ the other hand, in the frequency domain, the audio signal represents amplitude as a function of frequency, where the
+ features are the amplitudes recorded at different frequencies.
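+ The frequency-domain features can be sketched with a hand-rolled discrete Fourier transform over a short sampled
+ tone. The sample rate and tone frequency are illustrative assumptions; a real pipeline would use an FFT library
+ such as numpy or librosa rather than this O(n²) loop.

```python
import cmath
import math

# Toy discrete Fourier transform to extract frequency-domain features
# (per-bin amplitudes) from a short sampled signal.
def dft_magnitudes(samples):
    n = len(samples)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(samples))) / n
            for k in range(n // 2)]

SAMPLE_RATE = 64  # one second of audio at 64 Hz, so bin index = Hz
tone = [math.sin(2 * math.pi * 8 * i / SAMPLE_RATE) for i in range(SAMPLE_RATE)]

mags = dft_magnitudes(tone)            # frequency-domain features
peak_bin = max(range(len(mags)), key=mags.__getitem__)
print(peak_bin)  # 8 -> the dominant frequency of the tone is 8 Hz
```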
+ 2.3 Papers Regarding this Topic
+ In machine learning, when one trains a model, there is always a chance for problems when applying the model to the
+ population. This problem arises in big data when the population is too large for machines with limited computability.
+ To deal with this problem, a sample population can be used. A sample population refers to a subset of the
+ population that represents the whole population. [18] This project will apply machine learning to a sample population.
+ But there is a chance for selection bias: "selection bias is a systematic error that results in differences between a study
+ population and a target population; selection bias primarily affects the external validity of the results of a study".
+ This means that results thought to be true for the sample population may not be true for the actual population.
+ If the sample over-represents certain outcomes, the model will keep reporting those outcomes regardless of the input. A sample
+ can misrepresent the overall population simply because of how it was collected.
+
+ Likewise, voice assistants use sample data to analyze a user's speech. "Currently speech recognition has significant
+ race and gender biases. It is just another form of AI that performs worse for women and non-white people. Currently, it
+ is designed to understand white male voices well" [19]. This is a significant problem because these days voice assistants
+ are a crucial part of people's lives, from setting alarms to transportation. Low accuracy in voice recognition would mean
+ severe consequences in people's lives [19]. In the paper Empirical Analysis of Bias in Voice-based Personal Assistants,
+ the authors check the accuracy of relevant voice assistants such as Google Assistant and Siri. They look at the different
+ accents of Brazilian Portuguese and how accuracy was lower for certain accents than for others. They also found
+ variation in the quality of recognition based on gender. [20]
+ So, taking the two papers' ideas, to tackle this problem and move forward on this topic, data was gathered and the
+ model was trained not only on the prevalent male voice data sets but also on voices gathered manually and from the Common-Voice
+ data sets by Mozilla for female and minority voices. This allows samples to be created in proportion to demographic
+ indicators of the country. The process for machine learning will be as follows:
+ • Filter the words that the user says
+ • Digitize the user's speech into a format that the machine can read
+ • Analyze the user's speech for meaning
+ • Decide what the user needs based on previous input and algorithms
+ As discussed above, data sets for voice recognition must be sampled so that each voice has an equal probability of being
+ included in the sample. This allows for less racial and gender bias in the models, so that
+ everyone has their voice heard.
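+ Equal-probability sampling across groups can be sketched as a stratified draw. The group labels, file names, and
+ counts below are illustrative assumptions, not the project's actual data.

```python
import random

# Stratified sampling sketch: draw the same number of voice samples from each
# demographic group so every group is equally represented in the training set.
def balanced_sample(groups: dict, per_group: int, seed: int = 0) -> list:
    rng = random.Random(seed)
    sample = []
    for name, voices in groups.items():
        sample.extend(rng.sample(voices, per_group))  # equal draw per group
    return sample

voices = {
    "female": [f"f{i}.wav" for i in range(100)],
    "male":   [f"m{i}.wav" for i in range(300)],   # over-represented source
}
subset = balanced_sample(voices, per_group=50)
print(len(subset))  # 100 files, 50 from each group
```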
+ Additionally, the papers "Dangerous Skills: Understanding and Mitigating Security Risks of Voice-Controlled Third-Party
+ Functions on Virtual Personal Assistant Systems" [21] and "Your Voice Assistant is Mine: How to Abuse Speakers
+ to Steal Information and Control Your Phone" talk about the development of voice assistants in speech recognition and
+ IoT, and how with this development come more vulnerabilities. They discuss voice-based remote attacks and permission
+ bypassing. These problems are extremely dangerous and can be misused to expose people's information. These
+ papers advise disallowing zero-permission access and adding context-aware information collection and
+ analysis features to the voice assistant, as well as focusing on enforcing restrictions on the specific
+ operations that a particular process can perform [22].
+ 3 EXPERIMENTAL SETUP
+ The data used is partially from Common-Voice, an online open source of voice recordings in multiple languages.
+ Since this project is specifically for the English language, the data set collected was English. However, this
+ data set, as expected, has more male voices than female voices. Therefore, to even out the distribution between the
+ voices represented, the rest of the data was collected via outreach into the Augustana College community, contacting
+ over 200 female students (ages ranging from 18 to 24 years old) and receiving about 65 voice samples with 4-7 minutes
+ of voice recording from each sample. The data set collected was then doubled, converted, and combined into .wav files.
+ After the data was collected and manipulated, it was applied to a gender recognition model in order to numerically
+ identify each voice via its frequency; according to ASHA (the American Speech-Language-Hearing Association), the
+ average range for an adult woman is 165 to 255 Hz and the average range for an adult man is 85 to 155 Hz [23]. There is an
+ inherent technical problem down to the fact that females generally have higher-pitched voices. Female voices tend to be
+ quieter and sound more "breathy", and are more easily masked by noise, like a fan or traffic in the background, which
+ makes them harder for speech recognition systems to handle.
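+ Using the ASHA ranges above as decision boundaries, the frequency-based categorization can be sketched as follows.
+ The thresholds come from the text; the function itself is an illustrative assumption about how the boundaries are
+ applied.

```python
# Classify an estimated fundamental frequency using the ASHA average ranges
# quoted above (165-255 Hz adult female, 85-155 Hz adult male). Frequencies
# outside both ranges are left undetermined.
def classify_voice(freq_hz: float) -> str:
    if 165 <= freq_hz <= 255:
        return "female"
    if 85 <= freq_hz <= 155:
        return "male"
    return "undetermined"

print(classify_voice(210.0))  # female
print(classify_voice(120.0))  # male
print(classify_voice(160.0))  # undetermined (between the two ranges)
```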
+ In order to run the data through the training model for speech recognition, the full data set from Common-Voice was
+ not needed; in fact, only segments of each statement are used, so it was necessary to parse the data into a
+ smaller form.
+ 4 FRAMEWORK
+ For machine learning, Python is known to be the most common language used due to its versatility, readability, and
+ abundant packages. So, this project uses Python 3.10 as the main programming language, along with
+ the following Python packages and libraries:
+ • SpeechRecognition (will only be used initially to create and test the basic functionalities of the project and will
+ be later replaced by this project's own model): it helps understand what the human is saying and converts the
+ speech into text.
+ • Pyttsx3: a simple text-to-speech conversion library in Python, which will be used to give this project's voice
+ assistant a voice.
+ • Wikipedia: a Python package that extracts information and data from Wikipedia, the multilingual
+ online encyclopedia used by many people.
+ • Capture: it helps to capture images from the camera.
+ • Datetime: an inbuilt module in Python that works with dates and times.
+ • Os: it provides functions to interact with the operating system.
+ • Webbrowser: an inbuilt Python module that allows you to open pages in a web browser.
+ • JSON: a module that helps to store and exchange data.
+ • PyJokes: it gives the user a random joke.
+ • PyAudio: it allows Python to play and record audio across different platforms.
+ • Pywhatkit: it can access YouTube in order to play a video.
+ • Librosa: a package for analyzing sound, audio, and music.
+ • Soundfile: a package that can read and write sound files.
+ • Numpy: NumPy provides a powerful N-dimensional array object, sophisticated functions, tools for integrating
+ C/C++ and Fortran code, useful linear algebra, Fourier transform, and random number capabilities, and much
+ more.
+ • BeautifulSoup: a Python library for pulling data out of HTML and XML files.
+ • Pandas: a fast, powerful, flexible, and easy-to-use open source data analysis and manipulation tool, built
+ on top of the Python programming language.
+ This project will use all these modules and packages to create the basic functionalities of the voice assistant. For
+ example, it will use the Wikipedia package to get information regarding a person or a company. This function
+ grabs the first few sentences from the corresponding Wikipedia page, which is generally the broad introduction
+ to the subject.
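+ The "first few sentences" step can be sketched independently of the Wikipedia lookup itself. The naive splitting
+ rule below is a simplifying assumption; real sentence segmentation (and the Wikipedia package's own summary
+ function) is more involved.

```python
import re

# Return the first few sentences of an article introduction, mimicking the
# summary behavior described above. Splits naively on sentence-ending
# punctuation; a real implementation would use a proper sentence tokenizer.
def first_sentences(text: str, count: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(sentences[:count])

intro = ("Ada Lovelace was an English mathematician. "
         "She worked on the Analytical Engine. "
         "She is often regarded as the first programmer.")
print(first_sentences(intro))
# Ada Lovelace was an English mathematician. She worked on the Analytical Engine.
```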
+ Then this project will have a voice recognizer, which will replace the existing SpeechRecognition module as well
+ as the Google API. The program uses deep learning with PyTorch to create this project's voice recognition. The data
+ to train this project's model comes from the open-source Mozilla Common-Voice as well as a few more sets from Kaggle, with added
+ voices from volunteers to help balance the data and reduce the biases while training for the wake word as well as the
+ general voice recognition itself. This project utilizes Google Colaboratory to train this project's model. Finally, if time
+ permits, the goal is to build a simple user interface, either with Flask or with other Python libraries, for a visual
+ element to the program.
+ 5 METHODS
+ As stated in the experimental setup, the Common-Voice data set planned for the training model has
+ 46% recognized male voices and 16% female voices. Therefore, the goal of this project is to come up with an effective solution to
+ this disparity.
+ When training a speech recognition model, different types of voice data can be used. Depending on the type of
+ interaction one is looking to build, and how robust that interaction should be, different types of voice data might
+ be required. Although there are several easily available sources of speech data, such as public speech corpora or
+ pre-packaged data sets, it is almost always necessary to cooperate with a data services provider to collect one's own
+ speech data, either remotely or in person. When gathering one's own data, it is easy to tailor the speech data set to
+ include variables such as language, speaker demographics, audio requirements, and collection size. [24]
+ The Bureau of Labor Statistics (BLS) projects computer science research jobs will grow 19 percent by 2026. However,
+ in the United States, the percentage of women that receive a bachelor's degree in computer science is still only 18
+ percent. There is currently a high demand for computer scientists in the professional industry, but despite this fact, the
+ industry remains male-dominated in the United States. For example, in this year's Senior Inquiry class for Computer
+ Science, there is a 1:6 female-to-male ratio [25].
+ Computer technology first emerged during World War II, and continuing into the 1960s, women made up most of the
+ computing workforce. However, by 1970 women only accounted for about 14 percent of bachelor's degrees in computer science.
+ In 1984 that number rose to 37 percent. The percentage of women in computer science has since declined to 18 percent.
+ That decline began around the same time personal computers started showing up in homes. According to NPR, personal computers
+ were marketed almost exclusively to men, and families were more likely to buy computers for boys than girls.
+ Computers are now commonplace both in classrooms and on individuals as personal assistants. It is hard, however, to
+ explain the exact reason why women are not as present in this major. There are organizations now that are researching
+ and improving ways to increase the number of women in the computer science major. It is said that one of the
+ reasons why women tend to trend away from the computer science field is the marketing of the industry in the
+ past, tailored to those with the geek persona and the social innuendos of what being a geek used to mean. [26]
+ One of the reasons why women should be represented in the computer industry is that increasing the inclusion of women
+ is a sound business strategy. "A study by Deloitte found that women's choices account for up to 85 percent of buying
+ decisions nationwide, and that diversity drives innovation. Though it is still commonplace to find boards and project
+ teams without a female member, the integration of female perspectives will naturally lead to higher revenues and a
+ better understanding of consumer marketplaces." [27]
+ Therefore, part of the purpose of this project is to strive to increase the attractiveness of the potential that computer science
+ avenues provide for women, by increasing the representation of women in the initial study of voice assistants. If women
+ can see that they play an essential role in the initial makeup of the products that are generated, there is a possibility
+ that more women will be inclined to join the computer science industry.
+ Speech recognition data can be classified into three categories:
+ • Controlled: Scripted speech data
+ • Semi-controlled: Scenario-based speech data
+ • Natural: Unscripted or conversational speech data
+ For this project, the primary types of voice data used are semi-controlled data and natural, unscripted conversational
+ speech data. For semi-controlled data, when developers need a natural sampling of different ways to ask for the same
+ thing, or a greater diversity of command intentions, scenario-based voice data is collected (i.e. asking for different
+ things). As a result, scenario-based speech data adds variety to what is said as well as how it is said.
+ On the other hand, the most "natural" kind of speech is unscripted or conversational speech data, which is a recording
+ of a conversation between two or more speakers. Unscripted speech data, like spontaneous speech, occurs in a variety
+ of formats in the real world. For example, this information could be captured in the form of phone conversations or
+ recordings of individuals conversing in a busy room. If a developer is looking for conversational data on a given topic
+ (for example, music), two speakers might be asked to conduct a conversation about it.
+ Fig. 2. Train-Loss Graph
+ To compensate for this gender disparity, data augmentation is performed on the voice data set in order to artificially
+ increase the diversity of the data set and to increase its size. This technique is performed by changing the
+ pitch, changing the speed, injecting noise, and/or adding reverb to the audio data.
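+ The noise-injection and speed-change augmentations can be sketched on a raw list of samples. The noise level and
+ speed factor below are arbitrary assumptions; a production pipeline would use librosa or a similar audio library
+ with proper resampling.

```python
import random

# Two simple augmentations over a list of audio samples:
# - inject_noise: add small random noise to each sample
# - change_speed: crude resampling by index striding
def inject_noise(samples, level=0.005, seed=0):
    rng = random.Random(seed)
    return [s + rng.uniform(-level, level) for s in samples]

def change_speed(samples, factor=2):
    # factor=2 keeps every second sample, playing the clip twice as fast
    return samples[::factor]

clip = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
noisy = inject_noise(clip)
fast = change_speed(clip)
print(len(noisy), len(fast))  # 6 3
```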
+ The model was trained on about 30,000 separate lines of data over 7 epochs, with a learning rate of
+ 5e-1 and a batch size of 15.
+ The train loss graph shows that the speech recognition model has more than enough data to train on, to the point of
+ almost over-fitting, which is also supported by the test loss graph.
+ As with the train loss graph, the test loss graph leads to the same conclusion: the speech recognition
+ model might be slightly over-fitting, since the train loss is very low compared to the test loss, which is much higher than
+ the train loss itself. However, the test loss is not too large, which makes the model usable.
+ Finally, after half of the training, the learning rate started slowing down as the model received more and more
+ training with each epoch.
+
+ Fig. 3. Test-Loss Graph
+ Fig. 4. Training Learning Graph
+ 6 RESULTS
+ The goal of this project is to create a voice-operated virtual assistant using Python that works similarly to modern
+ popular voice assistants such as Siri, Alexa, or Bixby. This project's personal AI voice assistant can understand voice
+ commands using speech recognition in Python and is able to perform multiple tasks, using the pre-programmed
+ functions built into it as well as training data.
+ At the end of the project, the AI voice assistant will be able to recognize voices and their commands
+ and perform the most suitable actions. This process of voice recognition is done by breaking down audio into individual
+ sounds, then converting them into a digital format and using machine learning algorithms and models to find
+ the word for each sound. The words are then used by the voice assistant to check whether anything related to them has been
+ pre-programmed; if not, it will perform the most suitable action. The assistant will be speech-enabled, so
+ after recognizing a statement or a question, it will call the necessary function to execute the task and then give a response
+ based on the algorithm.
+ The project incorporates features such as sending text messages, sending emails, opening songs,
+ telling the time, telling jokes, surfing the internet for information, forecasting the weather, and more. The final
+ project will not be limited to the features listed above, and will possess more, as well as having a clearer vision of what
+ this project's personal voice AI can do.
+ Fig. 5. Cer graph
+ Fig. 6. Wer graph
+ For the model that was trained, the WER (Word Error Rate) was reduced to about 50%, which is not so great when
+ compared to models made by large-scale companies, such as Google's 4.9% WER and Microsoft's 5.1%. Also, the
+ trained model's CER (Character Error Rate) came to about 20%, which means one out of every 5 characters was predicted
+ incorrectly, which is not the best.
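+ Word Error Rate can be computed as the word-level edit distance between the reference transcript and the model's
+ hypothesis, divided by the number of reference words; CER is the same computation over characters. A minimal sketch
+ (the example sentences are illustrative):

```python
# Minimal Word Error Rate: Levenshtein distance over words, divided by the
# reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn first i reference words into first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("set a reminder at six", "set the reminder at six"))  # 0.2
```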
+ Fig. 7. Pitch Estimator graph
+ Part of this project was to see if the computer could correctly determine whether a voice is female, using the data
+ set collected from Augustana College, where every one of the 62 voice samples is female. Using the frequency ranges
+ reported by ASHA and mentioned earlier in the paper as the boundaries for the determination, the computer categorized 34
+ out of the 62 voice samples as female voices, meaning that the computer correctly estimated about 55% of the samples.
+ Figure 7 is a sample of one of the correctly categorized voice samples, showing the frequency boundary ranges for both
+ male and female voices, with the red line being the voice sample's calculated frequency. This result supports the hypothesis
+ that computers simply have a hard time distinguishing female voice samples regardless of the frequency ranges.
+ One of the biggest challenges is training the assistant to recognize different types of voices, accents, and dialects. This
+ problem is counteracted by researching effective machine learning algorithms to implement and train on the necessary voice
+ data. This project utilizes various open-source projects to complete the voice recognizer, which can then be incorporated
+ into this project's own voice assistant.
+ One of the primary goals of this project is to build and train a model that utilizes more voice data from underrepresented
+ groups, as there is a huge gender and race bias in most virtual assistants. This can be inferred
+ because the training data used in some of the original voice assistants consists mainly of Caucasian and Asian
+ males. The female group, as well as people from other backgrounds who might have different accents or dialects, are
+ extremely underrepresented. Those models store public data to try to improve the accuracy of the voice recognizer
+ in their voice assistants; this project instead tries to incorporate such data from the beginning. In the end, the voice assistant will be
+ able to recognize voices regardless of their gender, race, or accent.
+ Another goal is the addition of two-factor authentication for added security for the user and their data. The two-
+ factor authentication will gate access to the history of all the commands provided by the user. The collection of data is
+ done for the functionality of the virtual assistant, and it also allows the user to keep track of the commands they
+ have used. The goal was to have this authentication ready for the final project; however, it is now part of a future
+ endeavor. A downside is that voice-based authentication may not be viable, because it requires each user to train their voice
+ specifically for the virtual assistant. This process would also require much more data storage and processing on the
+ programmer's part.
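One common way to realize the token half of such a scheme is the standard TOTP construction (RFC 6238). The sketch below is hypothetical, not the project's implementation, and uses only the Python standard library:

```python
# Hedged sketch of a token-based second factor via TOTP (RFC 6238/4226).
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    counter = int(for_time) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, code: str, now: int, step: int = 30) -> bool:
    # Accept one time-step of clock skew on either side.
    return any(totp(secret, now + d * step) == code for d in (-1, 0, 1))
```

With the RFC 6238 test secret `b"12345678901234567890"` at time 59, `totp(..., digits=8)` yields the published vector "94287082".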
+ This project demonstrates that gender and race biases exist in many virtual assistants and tries to address this problem by
+ training on more voice data from underrepresented groups. Males, especially white and Asian men, are disproportionately
+ Manuscript submitted to ACM
+ [Figure 7 residue: "pitch estimation" plot; legend — Measured Frequency, Female min/max Frequency, Male min/max Frequency; x-axis: frequency (Hz), 0–500]
+ Addressing the Selection Bias in Voice Assistance: Training Voice Assistance Model in Python with Equal Data Selection
+ overrepresented in Computer Science education and careers. Because of the male-dominated employment milieu, many
+ women abandon Computer Science careers. Researchers have found many hurdles for women in Computer Science
+ courses at the academic level, although efforts to address these issues have differed depending on geographic region and
+ educational level.
+ To combat this over-representation, gender depiction standards, as expressed through look, name, and voice, as well
+ as a review procedure, are critical changes that must be introduced. Users are predisposed to infer gender from a voice,
+ and often ascribe one to voices that are supposed to be neutral, as the makers of the gender-neutral voice assistant Q
+ discovered [28].
+ 7 DISCUSSION
+ Virtual assistants take in voice commands and process that data into commands or useful information. In this age, many
+ developed countries can already be considered aged societies [29]. This causes many changes, such as a growing
+ proportion of seniors relative to the rest of the population, so assistive technology such as a virtual assistant can
+ help with the problems associated with it. In the article "Home-Assistant Robot for an Aging Society," the authors
+ describe how assistive technology helps with labor support, healthy-lifestyle support, and household and care
+ support. With regard to virtual assistants, they give a list of activities performed by an IRT home-assistant robot that
+ models 3D space in a vector space and uses voice recognition with reinforcement learning to do several tasks:
+ • Performing Chores in a Home Environment
+ • Deformable Object Detection through image-based learning
+ • Geometrical Object Modeling and Its Application to Position Estimation
+ • Manipulation of Daily Tools and Appliances
+ • Integration of Basic Tasks Into Sequential Behavior
+ • Failure Detection and Recovery
+ The IRT home-assistant robot works with voice commands to learn tasks. One can guide it to perform physical
+ activities such as carrying items, pushing objects, collecting items, and sweeping. For example, it can take images of
+ clothing and, with vector data, learn about wrinkles in clothing in order to pick up clothes. With more data, the robot
+ can do a task without any commands. Likewise, it uses environment recognition to do similar tasks at several locations
+ in the house using commands. The implementation of this assistive technology can increase the ease and productivity of
+ performing home activities. These assistants help not only older people but also physically disabled and blind people,
+ as they do not require learning to type; users can simply speak and ask the assistant questions.
+ In the expected results, this project discussed some of the outcomes of its virtual assistant. Again, it is
+ still in the beginning phase; however, the main goal is to make the assistant able to do tasks like Siri or Alexa.
+ "Virtual assistants are regularly used to make online interfaces more user friendly. It also generates positive responses
+ from Internet users leading to a more interpersonal shopping experience, greater pleasure, and customer flow" [30].
+ In line with the related work, this project will aim to tackle the problem of the under-representation of women's
+ voices and dialects in training. This project will collect first-hand data through participatory surveys and train
+ our model based on that. Then this project will use data from online databases to further train our voice assistant.
+ By doing this, this project can avoid the gender bias present in voice assistants. To confront the bias against dialects, this
+ project will also collect voice-command data from a diverse audience. This helps create a proportional sample
+ representing the target population of college students and middle-aged adults.
+ Currently the project is not a full application; it was presented with a GUI (Graphical User Interface). In the
+ future the goal is to implement a startup application with a better GUI that runs as soon as the computer is turned
+ on. Since the application at the moment is only a GUI that only receives commands, there
+ are no security risks. The program was also implemented so that the voice assistant only listens when the
+ listen button is pressed. In the future the plan is to make it possible to activate it by voice from the start by
+ using a wake word. Doing so will make it easier to use, but the goal is for the voice assistant not to collect sensitive
+ information when the wake word is triggered by accident. To counter that, the plan is to add a mute feature that
+ prevents the assistant from listening. The future plan is also to add an indicator for when the voice assistant is
+ actively listening, as well as to keep logs of when listening occurred and to send a text alert through a mobile device.
+ When developing the full application there will be many potential security risks to consider, and the plan is to add
+ several measures to manage those risks. One of the ways would be to personalize the voice assistant to one account
+ unless another one is linked.
+ The future plan also includes adding a feature to require two-factor authentication in order to access the device use
+ and history. This will help mitigate potential risks of identity theft or user impersonation. It will also add an extra
+ layer of security to protect sensitive data from theft. The two-factor authentication used will be a pattern and/or
+ pass-code and a token, or biometric data and a token. For the token, there will be a token generator or an authentication
+ app. For the pattern and/or pass-code, this project will make sure that people can input it through their phones. As for
+ biometric data, this project will add a component to the voice assistant so that it learns the user's tone and pitch if allowed.
+ As for the future of the project, the voice assistant will keep gaining features aimed at making
+ day-to-day activities easier for the user. Our research on speech recognition will continue to grow, as will
+ experiments with new alternatives such as adding more layers to our neural network, using audiobooks to train our model,
+ and using other available architectures such as ctcdecode (which can only be used in a Linux environment), which
+ should improve our word error rate dramatically. The end goal of this project is ambitious, considering that there are many
+ speech-recognition systems out already that perform very well and have resources that cannot be compared to ours,
+ but there will be improvements in the speech-recognition system with more training and additional libraries.
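The wake-word, mute, and logging behaviors described above can be sketched as a small state machine. Everything here is hypothetical (the wake word, class, and method names are illustrative, not the project's code):

```python
# Minimal sketch of the planned listening controls: a wake word gates
# listening, a mute switch overrides it, and listening sessions are logged.
import time

class ListeningController:
    WAKE_WORD = "hey assistant"   # placeholder wake word

    def __init__(self):
        self.muted = False
        self.listening = False
        self.log = []             # timestamps of listening sessions

    def hear(self, transcript: str) -> bool:
        """Start listening only if the wake word is heard while unmuted."""
        if self.muted or self.WAKE_WORD not in transcript.lower():
            return False
        self.listening = True
        self.log.append(time.time())  # would also drive the indicator / text alert
        return True

    def mute(self):
        self.muted = True
        self.listening = False
```

The log doubles as the command-history source that the two-factor check would protect.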
+ REFERENCES
+ [1] S. Subhash, P. N. Srivatsa, S. Siddesh, A. Ullas, and B. Santhosh, "Artificial intelligence-based voice assistant," in 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4), pp. 593–596, IEEE, 2020.
+ [2] F. Nasirian, M. Ahmadian, and O.-K. D. Lee, "AI-based voice assistant systems: evaluating from the interaction and trust perspectives," 2017.
+ [3] O. Kudina, ""Alexa, who am I?": Voice assistants and hermeneutic lemniscate as the technologically mediated sense-making," Human Studies, vol. 44, no. 2, pp. 233–253, 2021.
+ [4] V. Chattaraman, W.-S. Kwon, J. E. Gilbert, and K. Ross, "Should AI-based, conversational digital assistants employ social- or task-oriented interaction style? A task-competency and reciprocity perspective for older adults," Computers in Human Behavior, vol. 90, pp. 315–330, 2019.
+ [5] M. Schmidt and P. Braunger, "Towards a speaking style-adaptive assistant for task-oriented applications," Studientexte zur Sprachkommunikation: Elektronische Sprachsignalverarbeitung 2018, pp. 143–150, 2018.
+ [6] A. Bernaras, "Problem-oriented and task-oriented models of design in the CommonKADS framework," in Artificial Intelligence in Design '94, pp. 499–516, Springer, 1994.
+ [7] S. Bringsjord and B. Schimanski, "What is artificial intelligence? Psychometric AI as an answer," in IJCAI, pp. 887–893, Citeseer, 2003.
+ [8] G. G. Hendrix, E. D. Sacerdoti, D. Sagalowicz, and J. Slocum, "Developing a natural language interface to complex data," ACM Transactions on Database Systems (TODS), vol. 3, no. 2, pp. 105–147, 1978.
+ [9] J. B. Wold, "Difficulties in learning English as a second or foreign language," 2006.
+ [10] G. Beller and X. Rodet, "Content-based transformation of the expressivity in speech," in Proceedings of the 16th ICPhS, pp. 2157–2160, Citeseer, 2007.
+ [11] A. Caliskan, "Detecting and mitigating bias in natural language processing," 2021.
+ [12] K. Chowdhary, "Natural language processing," Fundamentals of Artificial Intelligence, pp. 603–649, 2020.
+ [13] D. R. Reddy, "Speech recognition by machine: A review," Proceedings of the IEEE, vol. 64, no. 4, pp. 501–531, 1976.
+ [14] J. Meyer, L. Dentel, and F. Meunier, "Speech recognition in natural background noise," PLoS ONE, vol. 8, no. 11, p. e79279, 2013.
+ [15] J. Laroche, "Time and pitch scale modification of audio signals," in Applications of Digital Signal Processing to Audio and Acoustics, pp. 279–309, Springer, 2002.
+ [16] D.-S. Kim, S.-Y. Lee, and R. M. Kil, "Auditory processing of speech signals for robust speech recognition in real-world noisy environments," IEEE Transactions on Speech and Audio Processing, vol. 7, no. 1, pp. 55–69, 1999.
+ [17] Crowcroft, "Analog to digital conversion: Sampling," 1998.
+ [18] G. D. Israel, "Determining sample size," 1992.
+ [19] J. P. Bajorek, "Voice recognition still has significant race and gender biases," Harvard Business Review, vol. 10, 2019.
+ [20] L. Lima, V. Furtado, E. Furtado, and V. Almeida, "Empirical analysis of bias in voice-based personal assistants," in Companion Proceedings of the 2019 World Wide Web Conference, pp. 533–538, 2019.
+ [21] N. Zhang, X. Mi, X. Feng, X. Wang, Y. Tian, and F. Qian, "Dangerous skills: Understanding and mitigating security risks of voice-controlled third-party functions on virtual personal assistant systems," in 2019 IEEE Symposium on Security and Privacy (SP), pp. 1381–1396, IEEE, 2019.
+ [22] W. Diao, X. Liu, Z. Zhou, and K. Zhang, "Your voice assistant is mine: How to abuse speakers to steal information and control your phone," in Proceedings of the 4th ACM Workshop on Security and Privacy in Smartphones & Mobile Devices, pp. 63–74, 2014.
+ [23] S. Watson, "The unheard female voice: Women are more likely to be talked over and unheeded. But SLPs can help them speak up and be heard," 2019.
+ [24] "3 types of speech recognition data (and what they're used for)," Mar 2022.
+ [25] A. Zilberman and L. Ice, "Why computer occupations are behind strong STEM employment growth in the 2019–29 decade," Computer, vol. 4, no. 5,164.6, pp. 11–5, 2021.
+ [26] L. Carter, "Why students with an apparent aptitude for computer science don't choose to major in computer science," ACM SIGCSE Bulletin, vol. 38, no. 1, pp. 27–31, 2006.
+ [27] C. staff, "Women in computer science: Getting involved in STEM," 2022.
+ [28] M. Robison, "Voice assistants have a gender bias problem. What can we do about it?," 2020.
+ [29] K. Yamazaki, R. Ueda, S. Nozawa, M. Kojima, K. Okada, K. Matsumoto, M. Ishikawa, I. Shimoyama, and M. Inaba, "Home-assistant robot for an aging society," Proceedings of the IEEE, vol. 100, no. 8, pp. 2429–2441, 2012.
+ [30] M. Holzwarth, C. Janiszewski, and M. M. Neumann, "The influence of avatars on online consumer shopping behavior," Journal of Marketing, vol. 70, no. 4, pp. 19–36, 2006.
+
btAyT4oBgHgl3EQfwPm0/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
ctAzT4oBgHgl3EQfZ_wC/content/tmp_files/2301.01359v1.pdf.txt ADDED
@@ -0,0 +1,1405 @@
+ arXiv:2301.01359v1 [math.NT] 3 Jan 2023
+ PROOFS OF MODULO 11 AND 13 CYLINDRIC KANADE-RUSSELL CONJECTURES FOR A2 ROGERS–RAMANUJAN TYPE IDENTITIES
+ ALI KEMAL UNCU
+ Abstract. We present proofs of two new families of sum-product identities arising from the cylindric partitions
+ paradigm. Most of the presented expressions, the related sum-product identities, and the ingredients for
+ the proofs were first conjectured by Kanade–Russell in the spirit of Andrews–Schilling–Warnaar identities of
+ the A2 Rogers–Ramanujan type. We follow the footsteps of Kanade–Russell while we alter the computations
+ heavily to accomplish our goals.
+ 1. Introduction
+ There is an ever-growing synergy between number theory, combinatorics, q-series, and affine Lie algebras
+ that has led to groundbreaking techniques and beautiful mathematical discoveries. Among these are the Rogers–
+ Ramanujan type identities, where an infinite q-series is equal to an infinite product with a modular structure.
+ Having first appeared at the intersection of number theory and combinatorics, the Rogers–Ramanujan identities
+ have been of great interest. These sum-product identities have been studied, proved, and generalized in many
+ different ways over the years [3, 6, 13, 14, 17, 22, 24, 25, 37]. These identities also naturally arose in many
+ other fields, including mathematical physics [10], representation theory of affine Lie algebras and vertex operator
+ algebras [31, 32], knot theory in relation to the colored Jones polynomials [8], and algebraic geometry [15].
+ For a non-negative integer L and formal variables a and q, let the q-Pochhammer symbol be (a; q)_L :=
+ (1 − a)(1 − aq) · · · (1 − aq^{L−1}), and let (a; q)_∞ := lim_{L→∞}(a; q)_L, θ(a; q) := (a, q/a; q)_∞; for formal
+ variables a_1, . . . , a_k, define the shorthand notation θ(a_1, a_2, . . . , a_k; q) := θ(a_1; q)θ(a_2; q) · · · θ(a_k; q).
+ The Rogers–Ramanujan identities are as follows [38].
+ Theorem 1.1 (Rogers–Ramanujan identities).
+ (1.1) \sum_{n \geq 0} \frac{q^{n^2}}{(q;q)_n} = \frac{1}{\theta(q; q^5)} \quad \text{and} \quad \sum_{n \geq 0} \frac{q^{n^2+n}}{(q;q)_n} = \frac{1}{\theta(q^2; q^5)}.
+ The reciprocal q-Pochhammer products on the right-hand side of (1.1) have the ±1 and ±2 residue classes
+ modulo 5, respectively. We call these modulo 5 identities.
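The identities (1.1) can be checked to any finite order as formal power series. The sketch below (mine, not from the paper) compares truncated coefficients of the first identity; the right-hand side 1/θ(q; q⁵) expands as the generating function for partitions into parts ≡ ±1 (mod 5):

```python
# Truncated power-series check of the first Rogers-Ramanujan identity.
N = 40  # truncation order

def times_inv(c, k):
    # multiply the truncated series c by 1/(1 - q^k), in place
    for n in range(k, N):
        c[n] += c[n - k]

def lhs():
    total = [0] * N
    n = 0
    while n * n < N:
        term = [0] * N
        term[n * n] = 1              # q^{n^2}
        for k in range(1, n + 1):    # divide by (q; q)_n
            times_inv(term, k)
        total = [a + b for a, b in zip(total, term)]
        n += 1
    return total

def rhs():
    c = [0] * N
    c[0] = 1
    for k in range(1, N):
        if k % 5 in (1, 4):          # parts congruent to +-1 mod 5
            times_inv(c, k)
    return c

assert lhs() == rhs()
```

The coefficient of q⁴ on both sides is 2, matching the two gap-2 partitions (4) and (3, 1) of 4.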
+ A composition c of n is a finite list of non-negative integers that sum up to n. A partition is a composition
+ where no element of the list (the elements are called parts) is zero and the parts are ordered in a non-increasing order.
+ We define the size of a composition π as the sum of all its parts and denote this by |π|. We denote the number
+ of parts in a composition π by #(π). A composition (resp. partition) of size n is called "a composition
+ (resp. partition) of n." The empty list is considered the unique composition/partition of 0 with 0 parts.
+ For example, (2, 0, 2) is a composition with 3 parts, and (1, 1, 1, 1), (4, 3, 1), and (2, 2) are partitions of 4, 8,
+ and 4, respectively.
+ MacMahon [33] and Schur [39] gave combinatorial interpretations of the Rogers–Ramanujan identities independently.
+ Date: January 5, 2023.
+ 2010 Mathematics Subject Classification. Primary 05A15; Secondary 05A17, 05A19, 11B65, 11P84, 17B65, 68R05.
+ Key words and phrases. Cylindric partitions, Partition identities, Rogers–Ramanujan identities, Andrews–Schilling–Warnaar identities.
+ Research of the author is partly supported by EPSRC grant number EP/T015713/1 and partly by FWF grant P-34501N.
+ Theorem 1.2 (Combinatorial interpretation of Rogers–Ramanujan identities). Let i = 0 or 1. For every
+ natural number n, the number of partitions of n such that the difference between two consecutive parts is at
+ least 2 and the smallest part is strictly greater than i is equal to the number of partitions of n into parts
+ congruent to ±(1 + i) mod 5.
+ Gordon [24] presented a wide generalization of Theorem 1.2 to all odd moduli ≥ 5.
+ Theorem 1.3 (Gordon's identities, 1961). Let r and i be integers such that r ≥ 2 and 1 ≤ i ≤ r. The number
+ of partitions π = (π_1, π_2, . . . , π_s) of n such that π_j − π_{j+r−1} ≥ 2 for all j, with at most i − 1 appearances
+ of 1 as a part in π, is equal to the number of partitions of n whose parts are not congruent to 0, ±i mod 2r + 1.
+ The Rogers–Ramanujan identities correspond to the cases r = i = 2 and r = 2, i = 1.
+ Andrews found the q-series counterpart to Gordon's identities [3].
+ Theorem 1.4 (Andrews–Gordon identities, 1974). Let r ≥ 2 and 1 ≤ i ≤ r be two integers. We have
+ (1.2) \sum_{n_1 \geq \cdots \geq n_{r-1} \geq 0} \frac{q^{n_1^2 + \cdots + n_{r-1}^2 + n_i + \cdots + n_{r-1}}}{(q;q)_{n_1}} \begin{bmatrix} n_1 \\ n_1 - n_2 \end{bmatrix}_q \cdots \begin{bmatrix} n_{r-2} \\ n_{r-2} - n_{r-1} \end{bmatrix}_q = \frac{\theta(q^i; q^{2r+1}) \, (q^{2r+1}; q^{2r+1})_\infty}{(q;q)_\infty},
+ where, for two integers n and m,
+ \begin{bmatrix} m+n \\ m \end{bmatrix}_q := \begin{cases} \dfrac{(q;q)_{m+n}}{(q;q)_m (q;q)_n} & \text{for } m, n \geq 0, \\ 0 & \text{otherwise,} \end{cases}
+ is the classical q-binomial coefficient.
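The q-binomial coefficient defined above is a polynomial in q and can be computed from the q-Pascal recurrence [a, b]_q = [a−1, b−1]_q + q^b [a−1, b]_q. A small sketch (not from the paper):

```python
# Gaussian binomial coefficient [a choose b]_q as a list of coefficients in q,
# via the q-Pascal recurrence.
def q_binom(a: int, b: int) -> list:
    if b < 0 or b > a:
        return [0]
    if b == 0 or b == a:
        return [1]
    left = q_binom(a - 1, b - 1)
    right = q_binom(a - 1, b)
    out = [0] * (b * (a - b) + 1)   # degree of [a, b]_q is b(a - b)
    for i, c in enumerate(left):
        out[i] += c
    for i, c in enumerate(right):
        out[i + b] += c             # the q^b shift
    return out
```

For example, `q_binom(4, 2)` returns `[1, 1, 2, 1, 1]`, i.e. 1 + q + 2q² + q³ + q⁴; setting q = 1 (summing the list) recovers the ordinary binomial coefficient.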
+ Note that the Rogers–Ramanujan identities are the particular cases of (1.2) where r = i = 2, and r = 2 and
+ i = 1. Interested readers can get a great overview of the history of the Rogers–Ramanujan identities, their
+ significance, and some generalizations in the recent book of Sills [40].
+ The identities (1.2) can be proven by the Bailey machinery coming from the world of q-series. This powerful
+ mechanism starts with a pair of q-expressions, called a Bailey pair, that satisfies a pre-defined relation, and
+ modifies this pair iteratively (using the Bailey lemma or one of its generalizations) to make a new Bailey pair (see
+ [2, 4, 9, 40]). That way, by starting with the pair related to the Rogers–Ramanujan identities, a whole infinite
+ chain of identities (1.2) can be acquired. The identities (1.2) are certain characters related to the affine Lie algebra
+ A_1^{(1)}, and we thus refer to them as A1 Rogers–Ramanujan identities. The original Bailey mechanism was later
+ extended to A_{n−1} for general n [34, 35]. However, these works did not yield A_{n−1} Rogers–Ramanujan identities.
+ In their influential paper, Andrews, Schilling and Warnaar [7] were able to describe an A2 Bailey lemma
+ and the associated Bailey machinery. They found several infinite families of identities. One of their modulo 7
+ identities is as follows.
+ Theorem 1.5 (Andrews–Schilling–Warnaar, 1999).
+ (1.3) \sum_{r_1, s_1 \geq 0} \frac{q^{r_1^2 - r_1 s_1 + s_1^2 + r_1 + s_1}}{(q;q)_{r_1}} \begin{bmatrix} 2r_1 \\ s_1 \end{bmatrix}_q = \frac{1}{\theta(q^2, q^3, q^3; q^7)}.
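Identity (1.3) can also be verified numerically to finite order. This sketch (mine, not from the paper) expands both sides as truncated power series; the product side 1/θ(q², q³, q³; q⁷) counts partitions into parts ≡ ±2 (mod 7) in one colour and parts ≡ ±3 (mod 7) in two colours:

```python
# Truncated check of the Andrews-Schilling-Warnaar modulo 7 identity (1.3).
N = 25  # truncation order

def times_inv(c, k):
    # multiply the truncated series c by 1/(1 - q^k), in place
    for n in range(k, N):
        c[n] += c[n - k]

def q_binom(a, b):
    # Gaussian binomial as a coefficient list, via the q-Pascal recurrence
    if b < 0 or b > a:
        return [0]
    if b == 0 or b == a:
        return [1]
    left, right = q_binom(a - 1, b - 1), q_binom(a - 1, b)
    out = [0] * (b * (a - b) + 1)
    for i, c in enumerate(left):
        out[i] += c
    for i, c in enumerate(right):
        out[i + b] += c
    return out

lhs = [0] * N
for r1 in range(8):                      # larger r1 exceeds the truncation
    base = [0] * N
    base[0] = 1
    for k in range(1, r1 + 1):           # 1/(q; q)_{r1}
        times_inv(base, k)
    for s1 in range(2 * r1 + 1):
        e = r1 * r1 - r1 * s1 + s1 * s1 + r1 + s1
        for i, c in enumerate(q_binom(2 * r1, s1)):
            for n in range(N - e - i):   # empty range once e + i >= N
                lhs[e + i + n] += c * base[n]

rhs = [0] * N
rhs[0] = 1
for k in range(1, N):
    mult = {2: 1, 5: 1, 3: 2, 4: 2}.get(k % 7, 0)
    for _ in range(mult):                # two colours for the +-3 classes
        times_inv(rhs, k)

assert lhs == rhs
```

For instance, both sides have coefficient 3 at q⁴: the partitions {4}, {4′}, {2 + 2}.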
+ Andrews–Schilling–Warnaar found several very general families of sum-product identities. Of particular
+ interest to representation theory, the product sides of these identities are character formulas of the W3 algebra
+ multiplied by an extra factor (q; q)_∞^{−1} [21]. These formulas do not yield manifestly positive sum sides for the
+ character formulas because of this extra factor.
+ For example, one of Andrews–Schilling–Warnaar's modulo 10 identities, after clearing the extra factor
+ (q; q)_∞^{−1}, is as follows.
+ Theorem 1.6 (Andrews–Schilling–Warnaar, 1999).
+ (1.4) (q;q)_\infty \sum_{\substack{r_1 \geq r_2 \geq 0 \\ s_1 \geq s_2 \geq 0}} \frac{q^{r_1^2 - r_1 s_1 + s_1^2 + r_2^2 - r_2 s_2 + s_2^2 + r_1 + r_2 + s_1 + s_2}}{(q;q)_{r_1 - r_2} (q;q)_{r_2} (q;q)_{s_1 - s_2} (q;q)_{s_2} (q;q)_{r_2 + s_2 + 1}} = \frac{1}{\theta(q^2, q^3, q^3, q^4, q^4, q^5; q^{10})}.
+ Recall Euler's Pentagonal Number Theorem [5]:
+ (1.5) (q;q)_\infty = \sum_{i=-\infty}^{\infty} (-1)^i q^{i(3i+1)/2}.
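A quick sketch (not from the paper) verifying (1.5) to finite order, expanding the product (q; q)_∞ one factor at a time and comparing with the generalized pentagonal exponents:

```python
# Truncated check of Euler's pentagonal number theorem (1.5).
N = 60  # truncation order

euler = [0] * N
euler[0] = 1
for k in range(1, N):                    # multiply by (1 - q^k)
    for n in range(N - 1, k - 1, -1):
        euler[n] -= euler[n - k]

pent = [0] * N
for i in range(-10, 11):                 # covers all exponents below N
    e = i * (3 * i + 1) // 2             # generalized pentagonal numbers
    if 0 <= e < N:
        pent[e] += 1 if i % 2 == 0 else -1

assert euler == pent
```

The first coefficients 1 − q − q² + q⁵ + q⁷ − · · · appear as expected.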
+ (−1)iqi(3i+1)/2.
172
+ Although it is easy to see that the right-hand side of (1.4) has positive coefficients, in light of (1.5) this is not
173
+ directly visible on the left-hand side. In contrast, both sides of (1.3) are manifestly positive. The manifestly
174
+ positive sum representations give insight to the structure of certain modules for the affine Lie algebra A(1)
175
+ 2 .
176
+ These mentioned character of standard modules for the affine Lie algebra A(1)
177
+ 2 . Interested readers can find
178
+ more on this connection in [7, 29, 28, 31, 32].
179
+ Recently, the discovery of manifestly positive identities for these character formulas through a scheme with
+ combinatorial roots has attracted attention and led to many new Rogers–Ramanujan type identities.
+ In 1997, Gessel and Krattenthaler [23] defined cylindric partitions in the context of non-intersecting lattice
+ paths. Borodin [11] gave univariate product formulas for the generating functions of the number of cylindric
+ partitions. Foda and Welsh [21] proved the A1 Rogers–Ramanujan identities using the combinatorics of cylindric
+ partitions. This led to Corteel's combinatorial proof of the Rogers–Ramanujan identities using cylindric
+ partitions [17]. In 2019, Corteel and Welsh [18] derived functional equations for the bivariate generating
+ functions for the number of cylindric partitions using the largest part statistic. While doing so, they
+ also gave a new proof of Andrews–Schilling–Warnaar's modulo 7 A2 Rogers–Ramanujan identities (including
+ (1.3)) and a fifth missing identity which was originally conjectured by Feigin–Foda–Welsh [20]. All these
+ modulo 7 identities have manifestly positive sum sides. [18] has been the catalyst for the recent developments.
+ Ablinger and the author [1] implemented the Corteel–Welsh functional equations related to cylindric partitions
+ in their symbolic computation package qFunctions to be able to exploit this combinatorial idea using
+ formal manipulation and computer algebra techniques. Corteel, Dousse and the author [19] later proved the
+ modulo 8 identities that arise from the cylindric partitions paradigm with the help of this implementation.
+ One such identity is as follows (see Theorem 1.6 in [19]).
+ Theorem 1.7 (Corteel–Dousse–U., 2021).
+ (1.6) \sum_{\substack{r_1 \geq s_1 \geq r_2 \geq 0 \\ r_1 \geq s_2 \geq 0}} \frac{q^{r_1^2 - r_1 s_1 + s_1^2 + r_2^2 + s_2^2 + s_1 s_2 + r_1 + r_2 + s_1 + s_2}}{(q;q)_{r_1}} \begin{bmatrix} r_1 \\ s_1 \end{bmatrix}_q \begin{bmatrix} r_1 \\ s_2 \end{bmatrix}_q \begin{bmatrix} s_1 \\ r_2 \end{bmatrix}_q = \frac{1}{\theta(q^2, q^3, q^3, q^4, q^4, q^5; q^{10})}.
+ Unlike (1.4), (1.6) has a manifestly positive sum-side. Shortly after [19], in late 2021, Warnaar [42] came
+ up with many beautiful conjectures for manifestly positive sum-sides related to higher moduli (not divisible
+ by 3). In 2022, Tsuchioka [41] proved manifestly positive sum-sides for modulus 6 using finite automata and
+ automated proofs. He was also able to analyze the structure of the relevant level 3 standard modules for the affine
+ Lie algebra A_2^{(1)}.
+ In a different vein, Bridges and the author studied weighted versions of cylindric partitions as well as
+ cylindric partitions into distinct parts in [12].
+ Earlier in 2022, Kanade and Russell [29] aimed (and succeeded) at conjecturing A2 Rogers–Ramanujan
+ type identities in the form of Andrews–Schilling–Warnaar instead of aiming for manifestly positive sum-sides.
+ They were able to make explicit claims for each modulus ≥ 5. They proved the cases for moduli 5, 6, 7, 8 and
+ 10. Their exploration came to an end due to increasing computational difficulties.
+ In this paper, we approach the conjectures of Kanade–Russell by changing the computational techniques
+ used. We prove all modulo 11 and 13 A2 Rogers–Ramanujan identities coming from the cylindric partitions
+ paradigm. Two such identities are as follows:
+ Theorem 1.8.
+ \sum_{\substack{r_1 \geq r_2 \geq r_3 \geq 0 \\ s_1 \geq s_2 \geq s_3 \geq 0}} \frac{q^{r_1^2 - r_1 s_1 + s_1^2 + r_2^2 - r_2 s_2 + s_2^2 + r_3^2 + r_3 s_3 + s_3^2 + r_1 + r_2 + r_3 + s_1 + s_2 + s_3}}{(q;q)_{r_1-r_2} (q;q)_{r_2-r_3} (q;q)_{r_3} (q;q)_{s_1-s_2} (q;q)_{s_2-s_3} (q;q)_{s_3} (q;q)_{r_3+s_3+1}} = \frac{1}{(q;q)_\infty} \frac{1}{\theta(q^2, q^3, q^3, q^4, q^4, q^5, q^5; q^{11})}.
+ Theorem 1.9.
+ \sum_{\substack{r_1 \geq r_2 \geq r_3 \geq 0 \\ s_1 \geq s_2 \geq s_3 \geq 0}} \frac{q^{r_1^2 - r_1 s_1 + s_1^2 + r_2^2 - r_2 s_2 + s_2^2 + r_3^2 - r_3 s_3 + s_3^2 + r_1 + r_2 + r_3 + s_1 + s_2 + s_3}}{(q;q)_{r_1-r_2} (q;q)_{r_2-r_3} (q;q)_{r_3} (q;q)_{s_1-s_2} (q;q)_{s_2-s_3} (q;q)_{s_3} (q;q)_{r_3+s_3+1}} = \frac{1}{(q;q)_\infty} \frac{1}{\theta(q^2, q^3, q^3, q^4, q^4, q^5, q^5, q^6, q^6; q^{13})}.
+ The organization of this paper is as follows. In Section 2, we introduce cylindric partitions, the relevant
+ results, and the conjectures of Kanade–Russell, some cases of which we prove. Section 3 is dedicated to
+ rewording the conjectures and to the description of the proof methodology. In Sections 4 and 5 we present
+ the proofs of the modulo 11 and 13 A2 Rogers–Ramanujan identities in Andrews–Schilling–Warnaar form,
+ respectively. We outline some natural questions and mathematical challenges that arise from this work in
+ Section 6. Section 7 is reserved for a discussion of how the computerized proofs have been carried out in earlier
+ work [19, 29] and in this paper, and what future improvements can be made to take us further mathematically.
+ Acknowledgement
+ The author would like to thank the cylindric partitions workshop group that came together in November
+ 2022 in Linz for all the stimulating discussions. In particular, the author would like to thank Shashank Kanade
+ for suggesting that the researchers working on cylindric partitions should come together and join forces in the
+ first place, and for all his comments on this manuscript. The author would also like to thank Christian
+ Koutschan for his encouragement of the author in the necessary implementations.
+ Research of the author is partly supported by EPSRC grant number EP/T015713/1 and partly by FWF
+ grant P-34501N.
+ 2. Necessary definitions
+ We shall start with the definition of a cylindric partition.
+ Definition 2.1. A cylindric partition is made up of a composition c = (c_1, c_2, . . . , c_r), called the profile, with r
+ parts, and a vector π = (π^{(1)}, π^{(2)}, . . . , π^{(r)}) consisting of r partitions π^{(i)} = (π^{(i)}_1, π^{(i)}_2, . . . ) that satisfy the
+ inequalities
+ π^{(i)}_j ≥ π^{(i+1)}_{j+c_{i+1}}   and   π^{(r)}_j ≥ π^{(1)}_{j+c_1}.
+ For example, the vector partition π = {(1, 1, 1, 1), (4, 3, 1), (2, 2)} together with the profile (2, 0, 2) is a
+ cylindric partition. Note that the same vector partition can also satisfy the cylindric partition inequalities
+ with different profiles; for example, π is also a cylindric partition for profiles (2, 0, 0), (2, 0, 1), etc. We
+ define the total size of a cylindric partition π as the sum of the sizes of all the partitions included, and we
+ denote the total size, once again, by |π|.
+ For a given profile c, let P_c be the set of all cylindric partitions with profile c, and let
+ F_c(z, q) := \sum_{\pi \in P_c} z^{\max(\pi)} q^{|\pi|},
+ the bivariate generating function for cylindric partitions, where the exponents of z and q keep record of the
+ largest part's size and the total size of π, respectively. Borodin [11] showed that, when z = 1, the generating
+ functions F_c(z, q) have a product formula.
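As a quick mechanical check of Definition 2.1, the inequalities can be implemented literally. The sketch below is illustrative (the function name and the tested configuration are our own choices, not taken from the paper); parts beyond the end of a partition are treated as zero.

```python
def is_cylindric(profile, pi):
    """Test the inequalities of Definition 2.1 for a vector partition `pi`
    against `profile` = (c_1, ..., c_r)."""
    r = len(profile)

    def part(p, j):  # j-th part, 1-indexed; parts past the end are 0
        return p[j - 1] if 1 <= j <= len(p) else 0

    # each component must itself be a weakly decreasing partition
    if any(any(p[t] < p[t + 1] for t in range(len(p) - 1)) for p in pi):
        return False
    span = max(len(p) for p in pi) + max(profile) + 1
    for i in range(r):            # compare pi^(i) with pi^(i+1), cyclically
        nxt = (i + 1) % r
        off = profile[nxt]        # c_{i+1}; the wrap-around pair uses c_1
        if any(part(pi[i], j) < part(pi[nxt], j + off)
               for j in range(1, span + 1)):
            return False
    return True

# One configuration of the partitions (4,3,1), (2,2), (1,1,1,1) that
# satisfies the stated inequalities for the profile (2, 0, 2):
print(is_cylindric((2, 0, 2), ((4, 3, 1), (2, 2), (1, 1, 1, 1))))  # → True
```

Padding with zeros up to `span` suffices because both sides of every remaining inequality are zero beyond that point.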
+
+ MOD 11 AND 13 A2 ROGERS–RAMANUJAN TYPE IDENTITIES
+ Theorem 2.2 (Borodin, 2007). Let r and l be positive integers, and let c = (c_1, c_2, . . . , c_r) be a composition
+ of l. Define m := r + l and s(i, j) := c_i + c_{i+1} + · · · + c_j. Then,
+ (2.1)
+ F_c(1, q) = \frac{1}{(q^m; q^m)_\infty} \prod_{i=1}^{r} \prod_{j=i}^{r} \prod_{k=1}^{c_i} \frac{1}{(q^{k+j-i+s(i+1,j)}; q^m)_\infty} \prod_{i=2}^{r} \prod_{j=2}^{i} \prod_{k=1}^{c_i} \frac{1}{(q^{m-k+j-i-s(j,i-1)}; q^m)_\infty}.
+ Focusing on replacing the largest part in a given cylindric partition, Corteel–Welsh [18] defined a q-difference
+ equation for F_c(z, q). This functional equation relates F_c(z, q) to other generating functions F_{c*}(z, q) where
+ #(c) = #(c*) and |c| = |c*|. Let c = (c_1, . . . , c_r) (with the convention that c_0 = c_r) be a given composition
+ and define I_c to be the set of indices of the non-zero entries in c. Given a non-empty subset J ⊆ I_c, the
+ composition c(J) = (c_1(J), . . . , c_r(J)) is defined by:
+ (2.2)
+ c_i(J) := \begin{cases} c_i - 1 & \text{if } i \in J \text{ and } i-1 \notin J,\\ c_i + 1 & \text{if } i \notin J \text{ and } i-1 \in J,\\ c_i & \text{otherwise.} \end{cases}
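The move c ↦ c(J) of (2.2) is easy to code directly. The following sketch (the function name is our own) uses the cyclic convention c_0 = c_r for the membership test i − 1 ∈ J.

```python
def profile_move(c, J):
    """Return c(J) as in (2.2).  `c` is a tuple (c_1, ..., c_r) and `J` a
    non-empty set of 1-based indices of non-zero entries of c."""
    r = len(c)
    assert J and all(1 <= i <= r and c[i - 1] != 0 for i in J)
    out = []
    for i in range(1, r + 1):
        prev = i - 1 if i > 1 else r      # cyclic convention: index 0 means r
        ci = c[i - 1]
        if i in J and prev not in J:
            ci -= 1
        elif i not in J and prev in J:
            ci += 1
        out.append(ci)
    return tuple(out)

print(profile_move((2, 0, 2), {1}))       # → (1, 1, 2)
```

Note that each move preserves both the number of parts and the total |c|, matching the statement that #(c) = #(c(J)) and |c| = |c(J)|.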
+ Then the explicit q-difference equation that F_c(z, q) satisfies is as follows.
+ Theorem 2.3 (Corteel–Welsh, 2019). For any profile c,
+ (2.3)
+ F_c(z, q) = \sum_{\emptyset \subset J \subseteq I_c} (-1)^{|J|-1} \frac{F_{c(J)}(zq^{|J|}, q)}{1 - zq^{|J|}},
+ with the initial conditions F_c(0, q) = F_c(z, 0) = 1.
+ Let c be a profile and c′ be a cyclic shift of c. There is a clear one-to-one correspondence between cylindric
380
+ partitions in Pc and Pc′ by cyclically shifting the vector of partitions counted in Pc. This is enough to see that
381
+ the generating functions for these sets of cylindric partitions are equal, i.e. Fc(z, q) = Fc′(z, q). Therefore, we
382
+ can cyclically shift the profiles and lower the number of (seemingly different) generating functions that appear
383
+ in the coupled system of q-difference equations.
384
+ We can also normalize (2.3) and get an equivalent q-difference equation. For example, let
+ G_c(z, q) := (zq; q)_\infty F_c(z, q).
+ The equation (2.3) is then equivalent to
+ (2.4)
+ G_c(z, q) = \sum_{\emptyset \subset J \subseteq I_c} (-1)^{|J|-1} (zq; q)_{|J|-1} G_{c(J)}(zq^{|J|}, q),
+ with the initial conditions G_c(0, q) = G_c(z, 0) = 1. This q-difference equation (2.4), with polynomial
+ coefficients, in practice played a central role in the proofs of the modulo 7 and modulo 8 identities for cylindric
+ partitions with 3-part profiles in [18] and [19], respectively. Weighted versions of (2.3) and (2.4) were later
+ presented in [12].
+ In [29], Kanade–Russell decided to change the initial conditions of (2.4) slightly. While this does not change
+ the q-difference equations, it led to the conjectural discovery of explicit formulas for most of these 3-part
+ profile cylindric partition generating functions. Let
+ (2.5)
+ H_c(z, q) := \frac{(zq; q)_\infty}{(q; q)_\infty} F_c(z, q).
+ Then H_c(z, q) satisfies the same q-difference equation as G_c(z, q), namely
+ (2.6)
+ H_c(z, q) = \sum_{\emptyset \subset J \subseteq I_c} (-1)^{|J|-1} (zq; q)_{|J|-1} H_{c(J)}(zq^{|J|}, q),
+ with the initial conditions H_c(0, q) = 1/(q; q)_\infty and H_c(z, 0) = 1.
+ From this point forward we only focus on cylindric partition profiles with 3 parts. Let k ≥ 2, let
+ (2.7)
+ ρ = (ρ_1, ρ_2, . . . , ρ_{k-1}) ∈ Z^{k-1}, and σ = (σ_1, σ_2, . . . , σ_{k-1}) ∈ Z^{k-1},
+
+ ALI KEMAL UNCU
+
+ and define
+ (2.8)
+ S_{3k-1}(z; ρ|σ) = \sum_{r, s \in \mathbb{Z}_{\ge 0}^{k-1}} z^{r_1} \frac{q^{\sum_{i=1}^{k-1} r_i^2 - r_i s_i + s_i^2 + \rho_i r_i + \sigma_i s_i}}{\prod_{i=1}^{k-2} (q; q)_{r_i - r_{i+1}} (q; q)_{s_i - s_{i+1}}} \cdot \frac{q^{2 r_{k-1} s_{k-1}}}{(q; q)_{r_{k-1}} (q; q)_{s_{k-1}} (q; q)_{r_{k-1}+s_{k-1}+1}},
+ (2.9)
+ S_{3k}(z; ρ|σ) = \sum_{r, s \in \mathbb{Z}_{\ge 0}^{k-1}} z^{r_1} \frac{q^{\sum_{i=1}^{k-1} r_i^2 - r_i s_i + s_i^2 + \rho_i r_i + \sigma_i s_i}}{\prod_{i=1}^{k-2} (q; q)_{r_i - r_{i+1}} (q; q)_{s_i - s_{i+1}}} \cdot \frac{1}{(q; q)_{r_{k-1}+s_{k-1}} (q; q)_{r_{k-1}+s_{k-1}+1}} \begin{bmatrix} r_{k-1}+s_{k-1} \\ r_{k-1} \end{bmatrix}_{q^3},
+ (2.10)
+ S_{3k+1}(z; ρ|σ) = \sum_{r, s \in \mathbb{Z}_{\ge 0}^{k-1}} z^{r_1} \frac{q^{\sum_{i=1}^{k-1} r_i^2 - r_i s_i + s_i^2 + \rho_i r_i + \sigma_i s_i}}{\prod_{i=1}^{k-2} (q; q)_{r_i - r_{i+1}} (q; q)_{s_i - s_{i+1}}} \cdot \frac{1}{(q; q)_{r_{k-1}} (q; q)_{s_{k-1}} (q; q)_{r_{k-1}+s_{k-1}+1}}.
+ Let
+ (2.11)
+ e_i = (\underbrace{0, 0, . . . , 0}_{i}, 1, 1, . . . , 1) ∈ Z^{k-1}   and   δ_i := (δ_{ij})_{1 ≤ j ≤ k-1} ∈ Z^{k-1}.
+ It is easy to see that
+ (2.12)
+ S_m(zq^n; ρ|σ) = S_m(z; ρ + nδ_1 | σ)
+ for any n ∈ Z.
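Identity (2.12) can be seen term by term: replacing z by zq^n in any of (2.8)–(2.10) only affects the factor z^{r_1}, since

```latex
(zq^{n})^{r_1}\, q^{\rho_1 r_1} \;=\; z^{r_1}\, q^{(\rho_1 + n)\, r_1},
```

which is exactly the effect of replacing ρ_1 by ρ_1 + n, i.e. of replacing ρ by ρ + nδ_1.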
+ Kanade–Russell conjectured that for any fixed k ≥ 3, the H_{(c_1,c_2,c_3)}(z, q) can be expressed as linear
+ combinations of the S_m(z; ρ|σ) functions. Precisely, they claimed the following.
+ Conjecture 2.4 (Kanade–Russell, 2022). Let k ≥ 3 and |c| + 3 = m = 3k + {−1, 0, 1}. Using cyclic symmetries,
+ assume that c_1 ≥ c_2, c_3. If c_2, c_3 ≤ k − 1, then
+ (2.13)
+ H_{(c_1,c_2,c_3)}(z, q) = \begin{cases} S_m(z; e_{c_2}|e_{c_3}) − q S_m(z; e_{c_2−1}|e_{c_3−1}), & c_2, c_3 > 0,\\ S_m(z; e_{c_2}|e_0), & c_3 = 0,\\ S_m(z; e_0|e_{c_3}) − q(1 − z) S_m(z; e_0 + δ_1|e_{c_3−1}), & c_2 = 0, c_3 ≠ 0, \end{cases}
+ where e_i and δ_i are defined as in (2.11).
+ The explicit claims that Conjecture 2.4 provides do not cover all the functions H_c(z, q) with |c| + 3 = m.
+ It does, however, provide enough claims to recover explicit expressions for the rest. How to find the conjectural
+ S_m(z; ρ|σ) equivalents of the other H_c(z, q) that appear in the coupled q-difference equation system is
+ explained in [29]. The profiles related to the functions to be recovered are called “under-the-line” profiles by
+ Kanade–Russell. We will also call these profiles as such, without explaining anything about the line itself.
+ The functions related to these under-the-line profiles can involve shifts of z in the S_m(z; ρ|σ) language. These
+ shifts are inherited from the q-difference equations (2.6). One can use (2.12) to clear all the shifts in z.
+ Therefore, from now on, in all our expressions we translate any z-shift of S_m(z; ρ|σ) using (2.12) and this
+ way ignore any and all shifts in z. To further emphasize this, moving forward we suppress the variable z from
+ our notation and write
+ S_m(ρ|σ) := S_m(z; ρ|σ).
+ A proof of Conjecture 2.4 (and of its extension to all 3-part profiles with total m − 3) requires one to show
+ that the expressions in S_m(ρ|σ) are the correct expressions for the respective H_c(z, q) functions. This can be
+ done by showing that the expressions in S_m(ρ|σ) satisfy the same recurrence relations specified by (2.6) and
+ that the respective initial conditions hold. In [29], it is already proven that, for c_2, c_3 ≤ k − 1, the conjectural
+ formulas of (2.13) all satisfy the necessary initial conditions
+ (2.14)
+ H_c(z, 0) = 1   and   H_c(0, q) = 1/(q; q)_\infty.
+ It was noted in [29, Lemma 9.1, Lemma 9.2] that the S_m(ρ|σ) functions satisfy the following list of
+ recurrences.
+ Lemma 2.5 (Kanade–Russell, 2022). Let k ≥ 3, let m = 3k + {−1, 0, 1} and let δ_i := (δ_{ij})_{1≤j≤k−1} ∈ Z^{k−1},
+ where δ_{ij} is the Kronecker delta function. The following recurrence relations hold for all 1 ≤ i ≤ k − 2:
+ (R_1^{(i)}(ρ|σ))   S_m(ρ|σ) − S_m(ρ + δ_i − δ_{i+1} | σ) − zq^{i+ρ_1+···+ρ_i} S_m(ρ + 2(δ_1 + · · · + δ_i) | σ − (δ_1 + · · · + δ_i)) = 0,
+ (R_2^{(i)}(ρ|σ))   S_m(ρ|σ) − S_m(ρ | σ + δ_i − δ_{i+1}) − q^{i+σ_1+···+σ_i} S_m(ρ − (δ_1 + · · · + δ_i) | σ + 2(δ_1 + · · · + δ_i)) = 0.
+ i. If m ≡ −1 (mod 3),
+ a) and if σ_{k−1} = 0, then
+ (R_3(ρ|σ))   S_m(ρ|σ) − S_m(ρ | σ + δ_{k−1}) − qS_m(ρ + δ_{k−1} | σ + δ_{k−1}) + qS_m(ρ + δ_{k−1} | σ + δ_{k−2} + δ_{k−1}) = 0;
+ b) and if ρ_{k−1} = 0, then
+ (R_4(ρ|σ))   S_m(ρ|σ) − S_m(ρ + δ_{k−1} | σ) − qS_m(ρ + δ_{k−1} | σ + δ_{k−1}) + qS_m(ρ + δ_{k−2} + δ_{k−1} | σ + δ_{k−1}) = 0.
+ ii. If m ≡ 0 (mod 3), then
+ (R_3(ρ|σ))   S_m(ρ|σ) − (1 + q)S_m(ρ + δ_{k−1} | σ + δ_{k−1}) + qS_m(ρ + 2δ_{k−1} | σ + 2δ_{k−1})
+      − zq^{k−1+ρ_1+···+ρ_{k−1}} S_m(ρ + 2(δ_1 + · · · + δ_{k−1}) | σ − (δ_1 + · · · + δ_{k−1}))
+      − q^{k−1+σ_1+···+σ_{k−1}} S_m(ρ − (δ_1 + · · · + δ_{k−2}) + 2δ_{k−1} | σ + 2(δ_1 + · · · + δ_{k−1})) = 0,
+ (R_4(ρ|σ))   S_m(ρ|σ) − (1 + q)S_m(ρ + δ_{k−1} | σ + δ_{k−1}) + qS_m(ρ + 2δ_{k−1} | σ + 2δ_{k−1})
+      − zq^{k−1+ρ_1+···+ρ_{k−1}} S_m(ρ + 2(δ_1 + · · · + δ_{k−1}) | σ − (δ_1 + · · · + δ_{k−2}) + 2δ_{k−1})
+      − q^{k−1+σ_1+···+σ_{k−1}} S_m(ρ − (δ_1 + · · · + δ_{k−1}) | σ + 2(δ_1 + · · · + δ_{k−1})) = 0.
+ iii. If m ≡ 1 (mod 3), then
+ (R_3(ρ|σ))   S_m(ρ|σ) − S_m(ρ | σ + δ_{k−1}) − qS_m(ρ + δ_{k−1} | σ + δ_{k−1}) + qS_m(ρ + δ_{k−1} | σ + 2δ_{k−1})
+      − q^{k−1+σ_1+···+σ_{k−1}} S_m(ρ − (δ_1 + · · · + δ_{k−1}) | σ + 2(δ_1 + · · · + δ_{k−1})) = 0,
+ (R_4(ρ|σ))   S_m(ρ|σ) − S_m(ρ + δ_{k−1} | σ) − qS_m(ρ + δ_{k−1} | σ + δ_{k−1}) + qS_m(ρ + 2δ_{k−1} | σ + δ_{k−1})
+      − zq^{k−1+ρ_1+···+ρ_{k−1}} S_m(ρ + 2(δ_1 + · · · + δ_{k−1}) | σ − (δ_1 + · · · + δ_{k−1})) = 0.
+ Then they made the following claim (see [29, Conjecture 9.3]).
+ Conjecture 2.6 (Kanade–Russell, 2022). In each modulus m ≥ 5, the relations (R_1^{(i)}(ρ|σ))–(R_4(ρ|σ)) are
+ enough to prove the recurrences necessary for the proof of Conjecture 2.4.
+ We find this conjecture highly sensible. For all m ≥ 5, the explicit S_m's are 2⌊m/3⌋-fold sums. The same is
+ true for the number of distinct functional equations (R_1^{(i)}(ρ|σ))–(R_4(ρ|σ)). One can easily check that these
+ relations are distinct by comparing the first two terms on each left-hand side. Each second term corresponds to
+ a canonical shift in one of the summation variables. One would expect every relation that the S_m(ρ|σ)
+ functions satisfy to be translated and recovered as a combination of the relations (R_1^{(i)}(ρ|σ))–(R_4(ρ|σ)).
+ Hence, if the claims of Conjecture 2.4 are correct, for any fixed profile c the coupled q-difference equations (2.6)
+ written using the explicit claims of (2.13) (together with the “under-the-line” expressions) can be recovered
+ as combinations of the relations (R_1^{(i)}(ρ|σ))–(R_4(ρ|σ)).
+ 3. Proof Methodology
+ Conjecture 2.6 can be rephrased as a set inclusion question. Let m = 3k + {−1, 0, 1} with k ≥ 3, let ρ and σ
+ be as in (2.7), and let 1 ≤ i ≤ k − 2. Define
+ (3.1)
+ I_m := ⟨R_1^{(i)}(ρ|σ), R_2^{(i)}(ρ|σ), R_3(ρ|σ), R_4(ρ|σ)⟩,
+ the ideal generated by the left-hand sides of the recurrences (R_1^{(i)}(ρ|σ))–(R_4(ρ|σ)) as polynomials in the ring
+ Z((q, z))[S_m(ρ|σ)]. Here, which R_3(ρ|σ) and R_4(ρ|σ) are to be included in I_m is determined by the residue
+ class of m modulo 3. Recall that ρ and σ are integer vectors with k − 1 entries. Therefore, the ring
+ Z((q, z))[S_m(ρ|σ)] is a formal polynomial ring defined on a countable set of variables.
+ For any given fixed m = 3k + {−1, 0, 1} with k ≥ 3, let H_m be the set of all the coupled q-difference
+ equations (2.6) for the profiles c with |c| + 3 = m. Any relation in H_m can be written in terms of the S_m(ρ|σ)
+ functions using Conjecture 2.4 (and the paragraph below it). Let S_m be the set of all relations in H_m
+ written in the conjectural S_m(ρ|σ) form.
+ Now we can state Conjecture 2.6 in its equivalent form:
+ Conjecture 3.1. Let m = 3k + {−1, 0, 1} with k ≥ 3 be fixed.
661
+ ∀h ∈ Sm, we have h ∈ Im.
662
+ The infinite set {R_1^{(i)}(ρ|σ), R_2^{(i)}(ρ|σ), R_3(ρ|σ), R_4(ρ|σ) : 1 ≤ i ≤ k − 2, ρ, σ ∈ Z^{k−1}} that spans I_m has
+ non-trivial relations within itself, and not all the elements of this set are generators of I_m. However, we do not
+ know an exact pattern of which elements are related at the moment. Nevertheless, it is easy to see
+ that I_m is generated by infinitely many elements since ρ, σ ∈ Z^{k−1}.
+ On the other hand, for any fixed m, the S_m(ρ|σ) functions that appear within the formulas from S_m
+ make up a finite list. One can easily find explicit bounds for the entries of the vectors ρ and σ such that every
+ S_m(ρ|σ) that appears in S_m is within the bounds. This observation suggests that, instead of attempting to
+ prove Conjecture 3.1, we can go after a stronger conjecture that is more suitable for computations.
+ To that end, let [N] := {−N, . . . , −1, 0, 1, . . . , N} and define
+ I_{m,N} := ⟨{R_1^{(i)}(ρ|σ), R_2^{(i)}(ρ|σ), R_3(ρ|σ), R_4(ρ|σ) : ρ, σ ∈ [N]^{k−1}}⟩ ⊂ I_m.
+ With this definition we form the stronger conjecture:
677
+ Conjecture 3.2. Let m = 3k + {−1, 0, 1} with k ≥ 3 be fixed. There is some N ∈ N such that
678
+ ∀h ∈ Sm, we have h ∈ Im,N.
679
+ Since Im,N ⊂ Im, it is clear that Conjecture 3.2 implies Conjecture 3.1.
680
+ Finally, we have transferred the open problems into a linear algebra setting, and we can approach them as
+ such. Let m and N be fixed. We can order all the S_m(ρ|σ) that appear in the spanning set of I_{m,N} and write
+ them in a column vector ⃗s. Then the matrix A is uniquely defined by
+ A⃗s = ⃗0_A,
+ where ⃗0_A is the column vector with the same number of rows as A. Every row of A corresponds to a functional
+ relation R_j(ρ|σ) ∈ {R_1^{(i)}(ρ|σ), R_2^{(i)}(ρ|σ), R_3(ρ|σ), R_4(ρ|σ) : ρ, σ ∈ [N]^{k−1}}, and every column of A
+ corresponds to the coefficients of an S_m(ρ|σ). Also observe that A is a finite dimensional matrix with entries
+ in Z[q, z].
+ One can use Gaussian elimination on A. Any non-trivial relation within the functional relations
+ (R_1^{(i)}(ρ|σ))–(R_4(ρ|σ)) within the defining bounds of A would yield zero rows. Let B be the matrix consisting
+ of the non-zero rows of A after the Gaussian elimination is performed. It should still be clear that
+ B⃗s = ⃗0_B.
+ Moreover, the ideal I_{m,N} is generated by the equations that appear in B⃗s = ⃗0.
+ Therefore, for any element h ∈ S_m one can check whether that element is in I_{m,N} by simply writing that
+ relation as a row vector ⃗h with respect to the vector ⃗s (i.e., ⃗h is defined by h := [⃗h⃗s = 0]), adding the row
+ vector ⃗h to B, and performing Gaussian elimination on this new matrix. If the Gaussian elimination yields a
+ zero row, this means that ⃗h is a linear combination of rows of B, or equivalently that h ∈ I_{m,N}. If no zero
+ row appears, then h ∉ I_{m,N}.
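The membership test described above is plain linear algebra. The toy sketch below (names are our own) illustrates the idea over the rationals; in the actual computation the entries live in Z[q, z] and one works over the field of rational functions, but the rank-based test is the same.

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over the rationals."""
    rows = [list(map(Fraction, r)) for r in rows]
    rk, col, ncols = 0, 0, len(rows[0]) if rows else 0
    while rk < len(rows) and col < ncols:
        piv = next((i for i in range(rk, len(rows)) if rows[i][col]), None)
        if piv is None:
            col += 1
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(rk + 1, len(rows)):
            f = rows[i][col] / rows[rk][col]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[rk])]
        rk += 1
        col += 1
    return rk

def in_row_span(B, h):
    """h lies in the row span of B iff appending it does not raise the rank,
    i.e. iff elimination produces a zero row."""
    return rank(B + [h]) == rank(B)

B = [[1, 2, 0], [0, 1, 3]]
print(in_row_span(B, [1, 3, 3]))   # row1 + row2 → True
print(in_row_span(B, [0, 0, 1]))   # → False
```

The "zero row appears" criterion of the text is exactly the rank comparison here: appending a dependent row cannot increase the rank.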
+ This approach is clearly algorithmic. Furthermore, termination of the algorithm and a definitive answer
+ upon termination are both guaranteed. To top it all off, the explicit combination of the (R_1^{(i)}(ρ|σ))–(R_4(ρ|σ))
+ functional equations that is equivalent to a given h ∈ S_m is also easy to find. One only needs to use an
+ augmented version of A where one more column is added to keep track of the names of the relations
+ (R_1^{(i)}(ρ|σ))–(R_4(ρ|σ)) while doing the row reductions.
+ After these considerations, the proof of Conjecture 3.2 (and consequently of Conjectures 2.4, 2.6, and 3.1)
+ comes down to experimentally identifying an N and being able to perform the Gaussian elimination calculations.
+ 4. Modulo 11 Identities
713
+ Let m = 11 (= 3k − 1 with k = 4); for this family of identities, ρ, σ ∈ Z^3. There are a total of 15
+ essentially unique 3-part compositions of 8 that appear in the coupled q-difference system (2.6). Conjecture 2.4
+ suggests that the following sum representations for H_c(z, q) hold for all but one of these:
+ (4.1)
+ H(8,0,0)(z, q) = S11((1, 1, 1) | (1, 1, 1)),
+ H(7,1,0)(z, q) = S11((0, 1, 1) | (1, 1, 1)),
+ H(7,0,1)(z, q) = S11((1, 1, 1) | (0, 1, 1)) − q(1 − z)S11((2, 1, 1) | (1, 1, 1)),
+ H(6,2,0)(z, q) = S11((0, 0, 1) | (1, 1, 1)),
+ H(6,1,1)(z, q) = S11((0, 1, 1) | (0, 1, 1)) − qS11((1, 1, 1) | (1, 1, 1)),
+ H(6,0,2)(z, q) = S11((1, 1, 1) | (0, 0, 1)) − q(1 − z)S11((2, 1, 1) | (0, 1, 1)),
+ H(5,3,0)(z, q) = S11((0, 0, 0) | (1, 1, 1)),
+ H(5,2,1)(z, q) = S11((0, 0, 1) | (0, 1, 1)) − qS11((0, 1, 1) | (1, 1, 1)),
+ H(5,1,2)(z, q) = S11((0, 1, 1) | (0, 0, 1)) − qS11((1, 1, 1) | (0, 1, 1)),
+ H(5,0,3)(z, q) = S11((1, 1, 1) | (0, 0, 0)) − q(1 − z)S11((2, 1, 1) | (0, 0, 1)),
+ H(4,3,1)(z, q) = S11((0, 0, 0) | (0, 1, 1)) − qS11((0, 0, 1) | (1, 1, 1)),
+ H(4,2,2)(z, q) = S11((0, 0, 1) | (0, 0, 1)) − qS11((0, 1, 1) | (0, 1, 1)),
+ H(4,1,3)(z, q) = S11((0, 1, 1) | (0, 0, 0)) − qS11((1, 1, 1) | (0, 0, 1)),
+ H(3,3,2)(z, q) = S11((0, 0, 0) | (0, 0, 1)) − qS11((0, 0, 1) | (0, 1, 1)).
+ Only H(4,4,0)(z, q) is missing a claimed formula, and that formula can be recovered from the q-difference
+ equations (2.6). We know that H(4,4,0)(z, q) satisfies
+ (4.2)
748
+ H(4,4,0)(z, q) + (1 − qz)H(4,1,3)(q2z, q) − H(4,3,1)(qz, q) − H(5,0,3)(qz, q) = 0.
749
+ Using the conjectured series equivalents (4.1) of H(4,1,3)(z, q), H(4,3,1)(z, q) and H(5,0,3)(z, q), we see that
+ (4.3)
+ H(4,4,0)(z, q) = (S11(qz; (0, 0, 0)|(0, 1, 1)) − qS11(qz; (0, 0, 1)|(1, 1, 1)))
+ − (1 − qz)(S11(q^2z; (0, 1, 1)|(0, 0, 0)) − qS11(q^2z; (1, 1, 1)|(0, 0, 1)))
+ + (S11(qz; (1, 1, 1)|(0, 0, 0)) − q(1 − qz)S11(qz; (2, 1, 1)|(0, 0, 1))).
+ Notice that we used the shifts in the variable z in (4.3). We clear these shifts by employing (2.12). This yields
755
+ an explicit claim for H(4,4,0)(z, q):
756
+ (4.4)
757
+ H(4,4,0)(z, q) = S11((1, 0, 0) | (0, 1, 1)) − qS11((1, 0, 1) | (1, 1, 1)) + qzS11((2, 1, 1) | (0, 0, 0)),
758
+ with no shifts in z, where the S11(ρ|σ) functions fit the forms in Lemma 2.5.
759
+ We can also see that H(4,4,0)(z, q) satisfies the necessary initial conditions (2.14). The initial condition
+ H(4,4,0)(z, 0) = 1 is immediate by (4.4) and (2.8). We can also see that H(4,4,0)(0, q) = 1/(q; q)∞ by plugging
+ z = 0 into (4.2) and using the already proven initial conditions (2.14) for the other functions in (4.2).
+ Our proof routine explained in Section 3 can start once all the normalized generating functions H_c(z, q)
+ are (conjecturally) translated into the S11((a1, a2, a3)|(b1, b2, b3)) language. It is easy to see that the following
+ four recurrences,
+ H(8,0,0)(z, q) − H(7,1,0)(qz, q) = 0,
770
+ H(7,0,1)(z, q) − H(6,1,1)(qz, q) + (1 − qz)H(7,1,0)(q2z, q) − H(8,0,0)(qz, q) = 0,
771
+ H(6,0,2)(z, q) − H(5,1,2)(qz, q) + (1 − qz)H(6,1,1)(q2z, q) − H(7,0,1)(qz, q) = 0,
772
+ H(5,0,3)(z, q) − H(4,1,3)(qz, q) + (1 − qz)H(5,1,2)(q2z, q) − H(6,0,2)(qz, q) = 0,
773
+ trivialize to 0 = 0 once the terms on the left-hand sides are written in S11((a1, a2, a3)|(b1, b2, b3)) using (4.1)
774
+ and (2.12). Therefore, these relations are trivially in I11, the ideal generated by the functional relations of the
775
+ S11((a1, a2, a3)|(b1, b2, b3)) series.
776
+ Recall that we used the coupled q-difference equation (4.2) to make an explicit claim for H(4,4,0)(z, q).
777
+ Hence, the functional relation of H(4,4,0)(z, q) also trivializes to 0 = 0 once written in the claimed S11(ρ|σ)
778
+ forms. The very claim (4.4) is instrumental in proving that the q-difference equations satisfied by H(5,3,0)(z, q),
779
+ H(4,3,1)(z, q), and H(4,1,3)(z, q) in S11((a1, a2, a3)|(b1, b2, b3)) language are elements of I11.
780
+ Next, we look at the q-difference equation satisfied by H(7,1,0)(z, q) from (2.6):
781
+ H(7,1,0)(z, q) − H(7,0,1)(qz, q) − H(6,2,0)(qz, q) + (1 − qz)H(6,1,1)(q2z, q) = 0.
782
+ After the use of (4.1) and (2.12), we see that this q-difference equation is equivalent to the following conjectural
783
+ form
784
+ S11((0, 1, 1)|(1, 1, 1)) − S11((1, 0, 1)|(1, 1, 1)) − qzS11((2, 1, 1)|(0, 1, 1)) = 0.
785
+ This is nothing but the relation R_1^{(1)}((0, 1, 1)|(1, 1, 1)) of (R_1^{(i)}(ρ|σ)) given in Lemma 2.5. Hence, this
+ relation is also within I_{11} and covered by the relations of the S11((a1, a2, a3)|(b1, b2, b3)) series.
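As an independent numerical sanity check (not part of the proof), one can truncate the 6-fold sum (2.8) for m = 11 and verify that the relation above vanishes to any fixed order in q. The sketch below uses exact rational power-series arithmetic; the truncation bounds are our own choices, picked so that all discarded terms lie above the cutoff.

```python
from fractions import Fraction
from itertools import product

N = 8  # work modulo q^(N+1)

def mul(a, b):
    c = [Fraction(0)] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j <= N:
                    c[i + j] += ai * bj
    return c

def inv(a):  # power-series inverse; needs a[0] != 0
    b = [Fraction(0)] * (N + 1)
    b[0] = 1 / a[0]
    for n in range(1, N + 1):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1)) / a[0]
    return b

def poch(n):  # (q; q)_n, truncated
    a = [Fraction(0)] * (N + 1)
    a[0] = Fraction(1)
    for k in range(1, n + 1):
        f = [Fraction(0)] * (N + 1)
        f[0] = Fraction(1)
        if k <= N:
            f[k] = Fraction(-1)
        a = mul(a, f)
    return a

def S11(rho, sig, z=Fraction(1), B=5):
    """Truncation of (2.8) with k = 4; B bounds r_1 and s_1 (safe for N = 8
    since omitted terms have q-exponent at least 27)."""
    tot = [Fraction(0)] * (N + 1)
    for r1, r2, r3, s1, s2, s3 in product(range(B + 1), repeat=6):
        if not (r1 >= r2 >= r3 and s1 >= s2 >= s3):
            continue
        e = sum(r * r - r * s + s * s + p * r + t * s
                for r, s, p, t in zip((r1, r2, r3), (s1, s2, s3), rho, sig))
        e += 2 * r3 * s3
        if e > N:
            continue
        den = mul(mul(poch(r1 - r2), poch(r2 - r3)),
                  mul(mul(poch(s1 - s2), poch(s2 - s3)),
                      mul(poch(r3), mul(poch(s3), poch(r3 + s3 + 1)))))
        term = mul([Fraction(0)] * e + [z ** r1] + [Fraction(0)] * (N - e),
                   inv(den))
        tot = [x + y for x, y in zip(tot, term)]
    return tot

# R_1^{(1)}((0,1,1)|(1,1,1)) at z = 1:  S − S(shifted) − q·S(...) should vanish
a = S11((0, 1, 1), (1, 1, 1))
b = S11((1, 0, 1), (1, 1, 1))
c = S11((2, 1, 1), (0, 1, 1))
lhs = [a[n] - b[n] - (c[n - 1] if n else 0) for n in range(N + 1)]
print(all(x == 0 for x in lhs))  # → True
```

Ordering the summation variables is harmless here: terms of (2.8) with r_i < r_{i+1} or s_i < s_{i+1} vanish anyway, since 1/(q; q)_n = 0 for negative n.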
+ As a second explicit example, consider the q-difference equation satisfied by H(6,1,1)(z, q),
790
+ H(6,1,1)(z, q) − H(7,1,0)(qz, q) − H(6,0,2)(qz, q) − H(5,2,1)(qz, q) + (1 − qz)H(7,0,1)(q2z, q)
791
+ + (1 − qz)H(6,2,0)(q2z, q) + (1 − qz)H(5,1,2)(q2z, q) − (1 − qz)(1 − q2z)H(6,1,1)(q3z, q) = 0.
792
+ Employing (4.1) and (2.12), we get the (conjecturally) equivalent form
793
+ S11((0, 1, 1)|(0, 1, 1)) − S11((1, 0, 1)|(0, 1, 1)) − S11((1, 1, 1)|(1, 1, 1))
794
+ + (1 − qz)S11((2, 0, 1)|(1, 1, 1)) − qzS11((2, 1, 1)|(0, 0, 1)) + q2z(1 − qz)S11((3, 1, 1)|(0, 1, 1)) = 0.
795
+ This relation can be checked to be the side-by-side addition of
+ R_1^{(1)}((0, 1, 1)|(0, 1, 1)) − (1 − qz)R_1^{(1)}((1, 1, 1)|(1, 1, 1)) + qzR_2^{(1)}((2, 1, 1)|(−1, 1, 1)) ∈ I_{11}.
+ We can one-by-one write down the remaining 8 recurrences, their S11(ρ|σ) equivalents, and which combination
+ of (R_1^{(i)}(ρ|σ))–(R_4(ρ|σ)) is equivalent to the functional equations in the S11(ρ|σ) form. This way we prove
+ that these relations are all included in the ideal I_{11}. We need to say that these relations get messier, pages
+ long, and not hand-verifiable. Printing them would be a waste of paper, so instead we keep them in the digital
+ realm for interested readers to check easily, or to print as they wish. To that end, similar to how it was
+ handled in [29], we include text files M11RecHXYZ Explicit.txt in the ancillary files portion of the arXiv
+ submission and on the author’s website [36]. Here XYZ is to be replaced by the relevant profile’s digits, such as 620 for the profile
+ (6, 2, 0). One can check that the elements of I11 given in these text files are equivalent to the q-difference
809
+ equations (2.6) satisfied by H(X,Y,Z)(z, q) after they are translated to S11(ρ|σ) form using (4.1), (4.4) and
810
+ (2.12). The functional equation names are reflected in the text files as RX[{Y},{{a1,a2,a3},{b1,b2,b3}}],
+ with X and Y replaced by 1 or 2, to denote R_X^{(Y)}((a1, a2, a3)|(b1, b2, b3)), or as RZ[{{a1,a2,a3},{b1,b2,b3}}],
+ with Z replaced by 3 or 4, to denote R_3((a1, a2, a3)|(b1, b2, b3)) and R_4((a1, a2, a3)|(b1, b2, b3)), respectively.
+
+ The definitions of these functional equations can be found in Lemma 2.5 for m = 11. A guide document that
+ explicitly lists each R functional relation for modulus 11 is given in the M11R text file. One can also see that
+ the largest entry within ρ = (a1, a2, a3) and σ = (b1, b2, b3) of the relations (R_1^{(i)}(ρ|σ))–(R_4(ρ|σ)) for the
+ modulo 11 case given in the additional documents is 6. This proves the following theorem and its corollary.
+ Theorem 4.1. Conjecture 3.2 is correct for m = 11 and N = 6.
823
+ Corollary 4.2. Conjecture 3.1 is correct for m = 11.
824
+ Corollary 4.2 is equivalent to the following theorem:
825
+ Theorem 4.3. The claimed expressions (4.1) and (4.4) hold.
826
+ Observe that Theorem 4.3 adds a new supporting case to Conjecture 2.6.
827
+ Now that the main conjectures are proven for the modulus 11 cases, we can specialize z = 1 and see the 15
828
+ sum-product identities coming from the cylindric partitions paradigm.
829
+ Theorem 4.4. The following identities hold:
+ (4.5)
+ \sum_{\substack{r_1 \ge r_2 \ge r_3 \ge 0\\ s_1 \ge s_2 \ge s_3 \ge 0}} \frac{q^{r_1^2 - r_1 s_1 + s_1^2 + r_2^2 - r_2 s_2 + s_2^2 + r_3^2 + r_3 s_3 + s_3^2}\, p_c(r_1, r_2, r_3, s_1, s_2, s_3, q)}{(q; q)_{r_1-r_2}(q; q)_{r_2-r_3}(q; q)_{r_3}(q; q)_{s_1-s_2}(q; q)_{s_2-s_3}(q; q)_{s_3}(q; q)_{r_3+s_3+1}}
+ = \frac{1}{(q; q)_\infty} \, \frac{1}{\theta(q^{i_1}, q^{i_2}, q^{i_3}, q^{i_4}, q^{i_5}, q^{i_6}, q^{i_7}; q^{11})},
+ where the polynomials p_c(r_1, r_2, r_3, s_1, s_2, s_3, q) and the 7-tuples (i_1, i_2, i_3, i_4, i_5, i_6, i_7) for each profile
+ are given in the following table:
+ Profile c | p_c(r_1, r_2, r_3, s_1, s_2, s_3, q) | (i_1, i_2, i_3, i_4, i_5, i_6, i_7)
+ (8, 0, 0) | q^{r_1+r_2+r_3+s_1+s_2+s_3} | (2, 3, 3, 4, 4, 5, 5)
+ (7, 1, 0) | q^{r_2+r_3+s_1+s_2+s_3} | (1, 2, 3, 4, 4, 5, 5)
+ (7, 0, 1) | q^{r_1+r_2+r_3+s_2+s_3} | (1, 2, 3, 4, 4, 5, 5)
+ (6, 2, 0) | q^{r_3+s_1+s_2+s_3} | (1, 2, 2, 3, 4, 5, 5)
+ (6, 1, 1) | q^{r_2+r_3+s_2+s_3}(1 − q^{r_1+s_1+1}) | (1, 1, 3, 3, 4, 5, 5)
+ (6, 0, 2) | q^{r_1+r_2+r_3+s_3} | (1, 2, 2, 3, 4, 5, 5)
+ (5, 3, 0) | q^{s_1+s_2+s_3} | (1, 2, 2, 3, 3, 4, 5)
+ (5, 2, 1) | q^{r_3+s_2+s_3}(1 − q^{r_2+s_1+1}) | (1, 1, 2, 3, 4, 4, 5)
+ (5, 1, 2) | q^{r_2+r_3+s_3}(1 − q^{r_1+s_2+1}) | (1, 1, 2, 3, 4, 4, 5)
+ (5, 0, 3) | q^{r_1+r_2+r_3} | (1, 2, 2, 3, 3, 4, 5)
+ (4, 3, 1) | q^{s_2+s_3}(1 − q^{r_3+s_1+1}) | (1, 1, 2, 3, 3, 4, 5)
+ (4, 2, 2) | q^{r_3+s_3}(1 − q^{r_2+s_2+1}) | (1, 1, 2, 2, 4, 4, 5)
+ (4, 1, 3) | q^{r_2+r_3}(1 − q^{r_1+s_3+1}) | (1, 1, 2, 3, 3, 4, 5)
+ (3, 3, 2) | q^{s_3}(1 − q^{r_3+s_2+1}) | (1, 1, 2, 2, 3, 5, 5)
+ ---------------------------------------------------------------
+ (4, 4, 0) | q^{r_1}(q^{s_2+s_3} − q^{r_3+s_1+s_2+s_3+1} + q^{r_1+r_2+r_3+1}) | (1, 2, 2, 3, 3, 4, 4)
+ In Theorem 4.4, we chose to put the profile (4, 4, 0) related sum-product identity under a line in the table.
898
+ This is to indicate that this identity is not a direct claim made by combining (2.4) with z = 1 and (2.1). We
899
+ first recovered a formula for H(4,4,0)(z, q) as a combination of S11((a1, a2, a3)|(b1, b2, b3)) series and then made
900
+ this claim. This line also has the added benefit that it aligns us with Kanade–Russell’s language as this is the
901
+ sum-product identity related to the under-the-line Hc(z, q) function, which we chose not to directly define.
902
+ The sum sides are the expressions (4.1) and (4.4) with z = 1 written explicitly using (2.8) with k = 4. The
903
+ product sides follow from (2.5) with z = 1 followed by (2.1). The product related to the first profile, (8, 0, 0),
904
+ on the table is presented in the introduction as Theorem 1.8.
905
+ Observe that the products that appear on the right-hand side of (4.5) related to the profiles (c1, c2, c3) and
+ (c1, c3, c2) are the same. This symmetry of the generating functions has been observed and noted before, for
+ example in [19, Corollary 2.2]. The symmetry is visible on the sum side of Theorem 4.4 too: one can get
+ the “other” sum by merely swapping the variables r and s. Note that this is a byproduct of setting z = 1,
+ and this similarity does not exist on the sum side for generic z. In that light, this theorem, consisting of 15
+ sum-product identities, actually provides a total of 10 essentially unique sum-product identities.
+ We also note that among these identities the ones related to profiles (8, 0, 0), (6, 1, 1), (4, 2, 2), (3, 3, 2) are
915
+ the i = 1, . . . , 4 cases of (5.28), respectively, and (5, 3, 0) and (6, 2, 0) are the σ = 0 and 1 cases of (5.29),
916
+ respectively, of [7, Theorem 5.3].
917
+ 5. Modulo 13 Identities
918
+ Similar to Section 4, we start by listing the explicit claims of Conjecture 2.4 for the modulus m = 13 family.
919
+ (5.1)
+ H(10,0,0)(z, q) = S13((1, 1, 1)|(1, 1, 1)),
+ H(9,1,0)(z, q) = S13((0, 1, 1)|(1, 1, 1)),
+ H(9,0,1)(z, q) = S13((1, 1, 1)|(0, 1, 1)) − q(1 − z)S13((2, 1, 1)|(1, 1, 1)),
+ H(8,2,0)(z, q) = S13((0, 0, 1)|(1, 1, 1)),
+ H(8,1,1)(z, q) = S13((0, 1, 1)|(0, 1, 1)) − qS13((1, 1, 1)|(1, 1, 1)),
+ H(8,0,2)(z, q) = S13((1, 1, 1)|(0, 0, 1)) − q(1 − z)S13((2, 1, 1)|(0, 1, 1)),
+ H(7,3,0)(z, q) = S13((0, 0, 0)|(1, 1, 1)),
+ H(7,2,1)(z, q) = S13((0, 0, 1)|(0, 1, 1)) − qS13((0, 1, 1)|(1, 1, 1)),
+ H(7,1,2)(z, q) = S13((0, 1, 1)|(0, 0, 1)) − qS13((1, 1, 1)|(0, 1, 1)),
+ H(7,0,3)(z, q) = S13((1, 1, 1)|(0, 0, 0)) − q(1 − z)S13((2, 1, 1)|(0, 0, 1)),
+ H(6,3,1)(z, q) = S13((0, 0, 0)|(0, 1, 1)) − qS13((0, 0, 1)|(1, 1, 1)),
+ H(6,2,2)(z, q) = S13((0, 0, 1)|(0, 0, 1)) − qS13((0, 1, 1)|(0, 1, 1)),
+ H(6,1,3)(z, q) = S13((0, 1, 1)|(0, 0, 0)) − qS13((1, 1, 1)|(0, 0, 1)),
+ H(5,3,2)(z, q) = S13((0, 0, 0)|(0, 0, 1)) − qS13((0, 0, 1)|(0, 1, 1)),
+ H(5,2,3)(z, q) = S13((0, 0, 1)|(0, 0, 0)) − qS13((0, 1, 1)|(0, 0, 1)),
+ H(4,3,3)(z, q) = S13((0, 0, 0)|(0, 0, 0)) − qS13((0, 0, 1)|(0, 0, 1)).
+ There are six profiles that are not covered by Conjecture 2.4. Once again, using (2.6) explicit claims for
953
+ the normalized generating functions related to the number of cylindric partitions with these profiles can be
954
+ recovered. We make the claims in the following succession.
955
+ First we look at the q-difference equation (2.6) that H(7,3,0)(z, q) satisfies:
956
+ (5.2)
957
+ H(7,3,0)(z, q) − H(7,2,1)(qz, q) − H(6,4,0)(qz, q) + (1 − qz)H(6,3,1)(q2z, q) = 0.
958
+ By writing the S13((a1, a2, a3)|(b1, b2, b3)) equivalents for the functions in (5.1) and using (2.12), we get
959
+ H(6,4,0)(z, q) = S13((−1, 0, 0)|(1, 1, 1)) − S13((0, 0, 1)|(0, 1, 1)) + qS13((0, 1, 1)|(1, 1, 1))
960
+ (5.3)
961
+ + (1 − z)S13((1, 0, 0)|(0, 1, 1)) − q(1 − z)S13((1, 0, 1)|(1, 1, 1)).
962
+ Note that we did not use the q-difference equation of H(6,4,0)(z, q) to make a claim for its formula. In
963
+ Section 4, there was only a single missing formula. That allowed us to use the q-difference equation for that
964
+ very function and get a formula in S11((a1, a2, a3)|(b1, b2, b3))’s with no backwards shifts (i.e. z ↦ z/q, which
965
+ also reflects as negative indices in the first variable a1). This may not be possible in general. The q-difference
966
+ equation H(6,4,0)(z, q) satisfies is
967
+ (5.4)
968
+ H(6,4,0)(z, q) − H(6,3,1)(qz, q) − H(5,5,0)(qz, q) + (1 − qz)H(5,4,1)(q2z, q) = 0.
969
+ The conjectural formulas (5.1) do not cover H(5,5,0)(z, q). Hence, we cannot fully translate H(6,4,0)(z, q) to
970
+ a formula made up of S13((a1, a2, a3)|(b1, b2, b3)) series. Nevertheless, as also noted in [29], we can recover
971
+ formulas for all the missing functions using other recurrences and backwards shifts in a1.
972
+
+ In fact, the recurrences (5.4) and (5.3) can be put together to claim a formula for H(5,5,0)(z, q). After
+ similar considerations, we claim
+ H(5,5,0)(z, q) = S13((−2, 0, 0)|(1, 1, 1)) − S13((−1, 0, 1)|(0, 1, 1)) + qS13((−1, 1, 1)|(1, 1, 1))
978
+ (5.5)
979
+ + (1 − z/q − z)S13((0, 0, 0)|(0, 1, 1)) − (q − z − qz)S13((0, 0, 1)|(1, 1, 1))
980
+ − q(1 − z)zS13((1, 0, 0)|(1, 1, 1)) − (1 − z)S13((1, 0, 1)|(0, 0, 1))
981
+ + q(1 − z)S13((1, 1, 1)|(0, 1, 1)) + (1 − z)(1 − qz)S13((2, 0, 0)|(0, 0, 1))
982
+ − q(1 − z)(1 − qz)S13((2, 0, 1)|(0, 1, 1)).
983
+ We point out that the coefficients in the claimed H(5,5,0)(z, q) formula can now be seen to be Laurent
+ polynomials. This is a byproduct of the backwards shifts in z.
Using the q-difference equation for H(6,3,1)(z, q),

(5.6)  H(6,3,1)(z, q) − H(7,3,0)(qz, q) − H(6,2,2)(qz, q) − H(5,4,1)(qz, q) + (1 − qz)H(7,2,1)(q^2 z, q)
       + (1 − qz)H(6,4,0)(q^2 z, q) + (1 − qz)H(5,3,2)(q^2 z, q) − (1 − qz)(1 − q^2 z)H(6,3,1)(q^3 z, q) = 0,

(5.1) and (5.3), we claim that

(5.7)  H(5,4,1)(z, q) = S13((−1, 0, 0)|(0, 1, 1)) − qS13((−1, 0, 1)|(1, 1, 1)) − zS13((0, 0, 0)|(1, 1, 1))
                        − S13((0, 0, 1)|(0, 0, 1)) + qS13((0, 1, 1)|(0, 1, 1)) + (1 − z)S13((1, 0, 0)|(0, 0, 1))
                        − q(1 − z)S13((1, 0, 1)|(0, 1, 1)).
994
Using the q-difference equation for H(5,3,2)(z, q),

(5.8)  H(5,3,2)(z, q) − H(6,3,1)(qz, q) − H(5,2,3)(qz, q) − H(4,4,2)(qz, q) + (1 − qz)H(6,2,2)(q^2 z, q)
       + (1 − qz)H(5,4,1)(q^2 z, q) + (1 − qz)H(4,3,3)(q^2 z, q) − (1 − qz)(1 − q^2 z)H(5,3,2)(q^3 z, q) = 0,

(5.1) and (5.7), we claim that

(5.9)  H(4,4,2)(z, q) = S13((−1, 0, 0)|(0, 0, 1)) − qS13((−1, 0, 1)|(0, 1, 1)) − zS13((0, 0, 0)|(0, 1, 1))
                        − S13((0, 0, 1)|(0, 0, 0)) + qzS13((0, 0, 1)|(1, 1, 1)) + qS13((0, 1, 1)|(0, 0, 1))
                        + (1 − z)S13((1, 0, 0)|(0, 0, 0)) − qz(1 − z)S13((1, 0, 0)|(1, 1, 1))
                        − q(1 − z)S13((1, 0, 1)|(0, 0, 1)).
1004
Then, by the q-difference equation for H(5,2,3)(z, q),

(5.10)  H(5,2,3)(z, q) − H(6,2,2)(qz, q) − H(5,1,4)(qz, q) − H(4,3,3)(qz, q) + (1 − qz)H(6,1,3)(q^2 z, q)
        + (1 − qz)H(5,3,2)(q^2 z, q) + (1 − qz)H(4,4,2)(q^2 z, q) − (1 − qz)(1 − q^2 z)H(5,2,3)(q^3 z, q) = 0,

together with (5.1) and (5.9), we claim that

(5.11)  H(5,1,4)(z, q) = S13((−1, 0, 1)|(0, 0, 0)) − qS13((−1, 1, 1)|(0, 0, 1)) − S13((0, 0, 0)|(0, 0, 0))
                         + (1 − z)S13((0, 0, 0)|(0, 0, 1)) − (1 − q)S13((0, 0, 1)|(0, 0, 1))
                         − q(1 − z)S13((0, 0, 1)|(0, 1, 1)) + qS13((0, 1, 1)|(0, 1, 1))
                         + (1 − z)S13((1, 0, 0)|(0, 0, 1)) − q(1 − z)zS13((1, 0, 0)|(0, 1, 1))
                         − (1 − z)S13((1, 0, 1)|(0, 0, 0)) − q(1 − z)S13((1, 0, 1)|(0, 1, 1))
                         + q^2(1 − z)zS13((1, 0, 1)|(1, 1, 1)) + (1 − z)S13((1, 1, 1)|(0, 0, 0))
                         + q(1 − z)S13((1, 1, 1)|(0, 0, 1)) + (1 − z)(1 − qz)S13((2, 0, 0)|(0, 0, 0))
                         − q^2 z(1 − z)(1 − qz)S13((2, 0, 0)|(1, 1, 1)) − (1 − z)(1 − qz)S13((2, 0, 1)|(0, 0, 0))
                         − q(1 − z)(1 − qz)S13((2, 0, 1)|(0, 0, 1)) − q^2 z(1 − z)S13((2, 1, 1)|(0, 0, 1)).
1019
ALI KEMAL UNCU

Finally, by replacing the formulas in (5.1) and (5.11) in

(5.12)  H(6,0,4)(z, q) − H(7,0,3)(qz, q) − H(5,1,4)(qz, q) + (1 − qz)H(6,1,3)(q^2 z, q) = 0,
1025
we conjecture that

(5.13)  H(6,0,4)(z, q) = S13((0, 0, 1)|(0, 0, 0)) − qS13((0, 1, 1)|(0, 0, 1)) − S13((1, 0, 0)|(0, 0, 0))
                         + (1 − qz)S13((1, 0, 0)|(0, 0, 1)) − (1 − q)S13((1, 0, 1)|(0, 0, 1))
                         − q(1 − qz)S13((1, 0, 1)|(0, 1, 1)) + qS13((1, 1, 1)|(0, 1, 1))
                         + (1 − qz)S13((2, 0, 0)|(0, 0, 1)) − q^2 z(1 − qz)S13((2, 0, 0)|(0, 1, 1))
                         − (1 − qz)S13((2, 0, 1)|(0, 0, 0)) − q(1 − qz)S13((2, 0, 1)|(0, 1, 1))
                         + q^3 z(1 − qz)S13((2, 0, 1)|(1, 1, 1)) + S13((2, 1, 1)|(0, 0, 0))
                         + q(1 − qz)S13((2, 1, 1)|(0, 0, 1)) + (1 − qz)(1 − q^2 z)S13((3, 0, 0)|(0, 0, 0))
                         − q^3 z(1 − qz)(1 − q^2 z)S13((3, 0, 0)|(1, 1, 1)) − (1 − qz)(1 − q^2 z)S13((3, 0, 1)|(0, 0, 0))
                         − q(1 − qz)(1 − q^2 z)S13((3, 0, 1)|(0, 0, 1)) − q^3 z(1 − qz)S13((3, 1, 1)|(0, 0, 1)).
1036
We can prove that the later claimed H(6,4,0)(z, q), H(5,5,0)(z, q), H(5,4,1)(z, q), H(4,4,2)(z, q), H(5,1,4)(z, q), and H(6,0,4)(z, q) satisfy the initial conditions Hc(0, q) = 1/(q; q)∞ and Hc(z, 0) = 1, in succession, from (5.2), (5.4), (5.6), (5.8), (5.10), and (5.12), respectively. To prove the Hc(0, q) = 1/(q; q)∞ initial condition we need to first shift z ↦ z/q in all but the last of these functional equations.

The q-difference equations for H(10,0,0)(z, q), H(9,0,1)(z, q), H(8,0,2)(z, q), and H(7,0,3)(z, q) become tautologies once translated into S13 form using (5.1) and (2.12). The q-difference equations for H(7,3,0)(z, q), H(6,4,0)(z, q), H(6,3,1)(z, q), H(5,3,2)(z, q), H(5,2,3)(z, q), and H(6,0,4)(z, q) are the recurrences used to define the missing Hc(z, q) functions in the modulo 13 family (see (5.2), (5.4), (5.6), (5.8), (5.10), and (5.12), respectively). Hence, these equations also trivialize once the relevant functions are written in their claimed S13 forms using (5.1), (5.3), (5.5), (5.7), (5.9), (5.11), and (5.13) together with (2.12).
1046
After the considerations above, we end up with 12 non-trivial coupled q-difference equations to prove. Showing that these q-difference equations, written in their claimed S13((a1, a2, a3)|(b1, b2, b3)) forms, belong to the ideal I13, which is generated by the relations of the S13((a1, a2, a3)|(b1, b2, b3))s (see Lemma 2.5), is done by the method outlined in Section 3. Explicit linear combinations of the R1^(i)(ρ|σ)–R4(ρ|σ) equivalents of these 12 functional equations in S13 form can, once again, be found in the ancillary files portion of the arXiv and on the author’s website [36] under the file names M13RecHXYZ Explicit.txt. Here XYZ is to be replaced by the relevant profile’s digits, such as 910 for the profile (9, 1, 0). One can check that the elements of I13 given in these text files are equivalent to the q-difference equations (2.6) satisfied by H(X,Y,Z)(z, q) after they are translated to S13(ρ|σ) form using (5.1), (5.3), (5.5), (5.7), (5.9), (5.11), and (5.13) and (2.12). The recurrence names are reflected in the text as RX[{Y},{{a1,a2,a3},{b1,b2,b3}}], for X and Y to be replaced by 1 or 2 to denote RX^(Y)((a1, a2, a3)|(b1, b2, b3)), or as RZ[{{a1,a2,a3},{b1,b2,b3}}], for Z to be replaced by 3 or 4 to denote R3((a1, a2, a3)|(b1, b2, b3)) and R4((a1, a2, a3)|(b1, b2, b3)), respectively. Finally, a guide document that explicitly lists each R functional relation for modulo 13 is given in the M13R text file.

This tedious, error-prone, and impossible-by-hand calculation proves the following theorem and its corollary.
1062
Theorem 5.1. Conjecture 3.2 is correct for m = 13 and N = 6.

Corollary 5.2. Conjecture 3.1 is correct for m = 13.

Corollary 5.2 is equivalent to the following theorem:

Theorem 5.3. The claimed expressions (5.1), (5.3), (5.5), (5.7), (5.9), (5.11), and (5.13) hold.

As before, Theorem 5.3 adds another new witness to Corollary 2.6 and increases our confidence in it. Now that the main conjectures are proven for the modulus 13 cases, we can set z = 1 and see the 22 sum-product identities coming from the cylindric partitions paradigm.
1069
Theorem 5.4. The following identities hold:

(5.14)  Σ_{r1≥r2≥r3≥0, s1≥s2≥s3≥0} q^{r1^2−r1s1+s1^2+r2^2−r2s2+s2^2+r3^2−r3s3+s3^2} pc(r1, r2, r3, s1, s2, s3, q)
        / [(q; q)_{r1−r2} (q; q)_{r2−r3} (q; q)_{r3} (q; q)_{s1−s2} (q; q)_{s2−s3} (q; q)_{s3} (q; q)_{r3+s3+1}]
        = 1/(q; q)∞ · 1/θ(q^{i1}, q^{i2}, q^{i3}, q^{i4}, q^{i5}, q^{i6}, q^{i7}, q^{i8}, q^{i9}; q^{13}),

where the polynomials pc(r1, r2, r3, s1, s2, s3, q) and the 9-tuples (i1, i2, i3, i4, i5, i6, i7, i8, i9) for each profile are given in the following table:
1092
Profile c    | pc(r1, r2, r3, s1, s2, s3, q)                                       | (i1, i2, i3, i4, i5, i6, i7, i8, i9)
(10, 0, 0)   | q^{r1+r2+r3+s1+s2+s3}                                               | (2, 3, 3, 4, 4, 5, 5, 6, 6)
(9, 1, 0)    | q^{r2+r3+s1+s2+s3}                                                  | (1, 2, 3, 4, 4, 5, 5, 6, 6)
(9, 0, 1)    | q^{r1+r2+r3+s2+s3}                                                  | (1, 2, 3, 4, 4, 5, 5, 6, 6)
(8, 2, 0)    | q^{r3+s1+s2+s3}                                                     | (1, 2, 2, 3, 4, 5, 5, 6, 6)
(8, 1, 1)    | q^{r2+r3+s2+s3}(1 − q^{r1+s1+1})                                    | (1, 1, 3, 3, 4, 5, 5, 6, 6)
(8, 0, 2)    | q^{r1+r2+r3+s3}                                                     | (1, 2, 2, 3, 4, 5, 5, 6, 6)
(7, 3, 0)    | q^{s1+s2+s3}                                                        | (1, 2, 2, 3, 3, 4, 5, 6, 6)
(7, 2, 1)    | q^{r3+s2+s3}(1 − q^{r2+s1+1})                                       | (1, 1, 2, 3, 4, 4, 5, 6, 6)
(7, 1, 2)    | q^{r2+r3+s3}(1 − q^{r1+s2+1})                                       | (1, 1, 2, 3, 4, 4, 5, 6, 6)
(7, 0, 3)    | q^{r1+r2+r3}                                                        | (1, 2, 2, 3, 3, 4, 5, 6, 6)
(6, 3, 1)    | q^{s2+s3}(1 − q^{r3+s1+1})                                          | (1, 1, 2, 3, 3, 4, 5, 5, 6)
(6, 2, 2)    | q^{r3+s3}(1 − q^{r2+s2+1})                                          | (1, 1, 2, 2, 4, 4, 5, 5, 6)
(6, 1, 3)    | q^{r2+r3}(1 − q^{r1+s3+1})                                          | (1, 1, 2, 3, 3, 4, 5, 5, 6)
(5, 3, 2)    | q^{s3}(1 − q^{r3+s2+1})                                             | (1, 1, 2, 2, 3, 4, 5, 5, 6)
(5, 2, 3)    | q^{s3}(1 − q^{r3+s2+1})                                             | (1, 1, 2, 2, 3, 4, 5, 5, 6)
(4, 3, 3)    | 1 − q^{r3+s3+1}                                                     | (1, 1, 2, 2, 3, 3, 5, 6, 6)
(6, 4, 0)    | q^{s2+s3}(q^{−r1+s1} − q^{r3} + q^{r2+r3+s1+1})                     | (1, 2, 2, 3, 3, 4, 4, 5, 6)
(6, 0, 4)    | q^{r3} − q^{r2+r3+s3+1} − q^{r1} + q^{2r1+r2+r3} + q^{r1+r2+r3+s2+s3+1}
             |   + (1 − q)q^{r1+s3}(1 − q^{r3} − q^{r3+s3+1})
             |   − (1 − q)q^{2r1}(q^{r3} − q^{s3} − q^{r2+r3+s3+1} + q^{r1+r2+r3+s3+3}
             |       + q^{s2+s3+2} + q^{r3+s2+s3+1} − q^{r3+s1+s2+s3+3})
             |   + (1 − q)(1 − q^2)q^{r3}(1 − q^{r3} − q^{r3+s3+1} + q^{s1+s2+s3+3})  | (1, 2, 2, 3, 3, 4, 4, 5, 6)
(5, 5, 0)    | q^{−2r1+s1+s2+s3} − q^{−r1+r3+s2+s3} − q^{s2+s3−1} + q^{r3+s1+s2+s3}
             |   + q^{−r1+r2+r3+s1+s2+s3+1}                                          | (1, 2, 2, 3, 3, 4, 4, 5, 5)
(5, 4, 1)    | q^{−r1+s2+s3} − q^{−r1+r3+s1+s2+s3+1} − q^{s1+s2+s3} − q^{r3+s3}
             |   + q^{r2+r3+s2+s3+1}                                                 | (1, 1, 2, 3, 3, 4, 4, 5, 6)
(5, 1, 4)    | q^{−r1+r3} − q^{−r1+r2+r3+s3+1} + q^{r2+r3+s2+s3+1} − (1 − q)q^{r3+s3} − 1 | (1, 1, 2, 3, 3, 4, 4, 5, 6)
(4, 4, 2)    | q^{−r1+s3} − q^{−r1+r3+s2+s3+1} − q^{s2+s3} − q^{r3} + q^{r3+s1+s2+s3+1}
             |   + q^{r2+r3+s3+1}                                                    | (1, 1, 2, 2, 3, 4, 4, 6, 6)
1168
Once we ignore the symmetries between the variables r and s, Theorem 5.4 proves 16 essentially unique sum-product identities. It can easily be seen that within the under-the-line identities we do not see these symmetries. The product related to the first profile on the table, (10, 0, 0), is presented in the introduction as Theorem 1.9.

We also note that, among these identities, the ones related to the profiles (10, 0, 0), (8, 1, 1), (6, 2, 2), (4, 3, 3) are the i = 1, . . . , 4 cases of (5.22), respectively, and (7, 3, 0) and (8, 2, 0) are the σ = 0 and 1 cases of (5.23), respectively, of [7, Theorem 5.1].
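As a sanity check, each row of the table can be verified numerically as a truncated q-series identity. The following Python sketch (our own illustrative code, not part of the paper's Mathematica toolchain; all helper names are ours) checks the profile (10, 0, 0) row, i.e. Theorem 1.9, to order q^11, using θ(a; q) = (a; q)∞ (q/a; q)∞ so that both sides become products of geometric series.

```python
# Verify the profile (10, 0, 0) row of Theorem 5.4 as a q-series mod q^N.
# Polynomials in q are lists of integer coefficients, truncated at q^N.
N = 12

def mul(a, b):
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def geom(m):
    """Series of 1/(1 - q^m) mod q^N."""
    c = [0] * N
    for k in range(0, N, m):
        c[k] = 1
    return c

def inv_poch(n):
    """Series of 1/(q; q)_n = prod_{i=1}^n 1/(1 - q^i)."""
    c = [1] + [0] * (N - 1)
    for i in range(1, n + 1):
        c = mul(c, geom(i))
    return c

# Left-hand side: the triple sum with p_(10,0,0) = q^(r1+r2+r3+s1+s2+s3).
lhs = [0] * N
B = 4  # summation bound; larger indices force the exponent past q^N
for r1 in range(B):
 for r2 in range(r1 + 1):
  for r3 in range(r2 + 1):
   for s1 in range(B):
    for s2 in range(s1 + 1):
     for s3 in range(s2 + 1):
        e = (r1*r1 - r1*s1 + s1*s1 + r2*r2 - r2*s2 + s2*s2
             + r3*r3 - r3*s3 + s3*s3 + r1 + r2 + r3 + s1 + s2 + s3)
        if e >= N:
            continue
        den = [1] + [0] * (N - 1)
        for n in (r1 - r2, r2 - r3, r3, s1 - s2, s2 - s3, s3, r3 + s3 + 1):
            den = mul(den, inv_poch(n))
        for k in range(N - e):
            lhs[e + k] += den[k]

# Right-hand side: 1/(q; q)_inf times 1/theta(q^2, q^3, ..., q^6; q^13);
# with theta(a; q) = (a; q)_inf (q/a; q)_inf, everything is a product of
# factors 1/(1 - q^m).
rhs = [1] + [0] * (N - 1)
for m in range(1, N):            # 1/(q; q)_inf
    rhs = mul(rhs, geom(m))
for i in (2, 3, 3, 4, 4, 5, 5, 6, 6):
    for base in (i, 13 - i):     # 1/theta(q^i; q^13)
        for m in range(base, N, 13):
            rhs = mul(rhs, geom(m))

print(lhs)
print(rhs)
```

Raising N and the bound B together checks further coefficients; the truncation of the sum is safe because the quadratic form pushes any skipped term past order N.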
1175
6. Future Directions
1179
There are many mathematical questions that arose from the recent studies on cylindric partitions. It is relevant to mention some of the future directions we plan to pursue.

The approach outlined in [29] and in this paper attempts to prove sum representations for all the normalized generating functions Hc(z, q) in one stroke for any fixed |c| with #(c) = 3. The proof requires hefty calculations after the under-the-line sums are recovered. Then, by setting z = 1 and using (2.1), we prove sum-product identities for all profiles within a cylindric partition system for a fixed modulus, again in one stroke. Therefore, to prove A2 Rogers–Ramanujan identities we first prove a more general and more complicated combinatorial connection with a free variable z. The success of this method depends on the completion of these calculations, which is virtually impossible by hand.

Warnaar [43] mentioned that he built the necessary theory of the Bailey machinery for profiles with 3 parts. This machinery will allow us to prove one sum-product identity at a time. This is wonderful to hear and a great advancement in mathematics. Sadly, it comes with its own shortcomings. Warnaar acknowledged that this Bailey machinery cannot prove any under-the-line identity at the moment. It can only find the sum-product relations related to the z = 1 specializations of Conjecture 2.13. This is similar to the situation of the original Andrews–Schilling–Warnaar paper, where, for example, in the modulo 7 case the Bailey machinery could not reach the under-the-line identity related to the profile (2, 2, 0), which was later proven in [18].

Be that as it may, we plan to investigate ways to simplify the calculations necessary to prove the identities as a whole in one stroke for the free z case by adding the extra information we gather from Warnaar's results. At the very least, for the z = 1 specialization, we should pursue ways to prove under-the-line identities using the Bailey-machinery-proven over-the-line identities.
1199
There are other sum-product identities that are not visible through the cylindric partitions paradigm. These identities do not have related cylindric partition profiles attached to them either. Similar to the under-the-line identities, we discover and prove these sum representations using the proven relations in the cylindric partitions system. For example, there are the following two modulo 10 examples similar to (1.4):

(6.1)  Σ_{r1≥r2≥0, s1≥s2≥0} q^{r1^2−r1s1+s1^2+r2^2−r2s2+s2^2} q^{s1+s2}(1 + q^{r1+r2+1})
       / [(q; q)_{r1−r2} (q; q)_{s1−s2} (q; q)_{r2} (q; q)_{s2} (q; q)_{r2+s2+1}]
       = 1/(q; q)∞ · 1/θ(q, q, q^3, q^4, q^4, q^4; q^{10}),

(6.2)  Σ_{r1≥r2≥0, s1≥s2≥0} q^{r1^2−r1s1+s1^2+r2^2−r2s2+s2^2} q^{s1+s2}(1 − q^{r1+r2+1})
       / [(q; q)_{r1−r2} (q; q)_{s1−s2} (q; q)_{r2} (q; q)_{s2} (q; q)_{r2+s2+1}]
       = 1/(q; q)∞ · 1/θ(q^2, q^2, q^2, q^3, q^3, q^3; q^{10}).
1233
All the products associated to principal characters of modulo 10 A2 Rogers–Ramanujan identities are covered by the products that appear in (2.1). The identities (6.1) and (6.2) are outside of this system and appear, so to say, on the dark side of the cylinder. We hope to find a cylindric partition interpretation of these identities in the future. Nevertheless, we plan to present the proofs of these identities using q-theoretic means in an upcoming paper.
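Both identities can be checked numerically as truncated q-series. The sketch below (illustrative code with ad hoc helper names, assuming the quadratic exponent written above) compares both sides of (6.1) and (6.2) up to q^11.

```python
# Check identities (6.1) and (6.2) as q-series truncated at order N.
# Polynomials in q are lists of integer coefficients mod q^N.
N = 12

def mul(a, b):
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def geom(m):
    """Series of 1/(1 - q^m) mod q^N."""
    c = [0] * N
    for k in range(0, N, m):
        c[k] = 1
    return c

def inv_poch(n):
    """Series of 1/(q; q)_n."""
    c = [1] + [0] * (N - 1)
    for i in range(1, n + 1):
        c = mul(c, geom(i))
    return c

def lhs(sign):
    """Double sum of (6.1) (sign=+1) or (6.2) (sign=-1)."""
    out = [0] * N
    for r1 in range(4):
     for r2 in range(r1 + 1):
      for s1 in range(4):
       for s2 in range(s1 + 1):
          e = r1*r1 - r1*s1 + s1*s1 + r2*r2 - r2*s2 + s2*s2 + s1 + s2
          if e >= N:
              continue
          den = [1] + [0] * (N - 1)
          for n in (r1 - r2, s1 - s2, r2, s2, r2 + s2 + 1):
              den = mul(den, inv_poch(n))
          num = [0] * N            # numerator 1 + sign * q^(r1+r2+1)
          num[0] = 1
          if r1 + r2 + 1 < N:
              num[r1 + r2 + 1] += sign
          t = mul(num, den)
          for k in range(N - e):
              out[e + k] += t[k]
    return out

def rhs(tup):
    """1/(q; q)_inf times 1/theta(q^i for i in tup; q^10)."""
    out = [1] + [0] * (N - 1)
    for m in range(1, N):
        out = mul(out, geom(m))
    for i in tup:
        for base in (i, 10 - i):
            for m in range(base, N, 10):
                out = mul(out, geom(m))
    return out

print(lhs(+1) == rhs((1, 1, 3, 4, 4, 4)))   # (6.1)
print(lhs(-1) == rhs((2, 2, 2, 3, 3, 3)))   # (6.2)
```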
1238
It is still highly relevant to find manifestly positive sum representations for any one of the identities mentioned here. We are looking for ways to see the positivity of the series coefficients. In [7], Andrews–Schilling–Warnaar suggest applying hypergeometric transformations to eliminate the (q; q)∞ factor that appears in the identities (such as (1.4)) to get a manifestly positive representation. That suggestion is limited and might not be widely applicable, especially for the under-the-line identities.

In the study of symmetric cylindric partitions [12], another two fundamental modulo 8 partition theoretic identity families, namely the Göllnitz–Gordon and little Göllnitz identities, showed up. The Göllnitz–Gordon identities are known to be related to the level 2 modules of the affine Lie algebra A_5^(2) [27]. This raises new questions of whether, similar to the symmetric partitions paradigm, we can also relate symmetric cylindric partitions to character formulas of some affine Lie algebras. The product formula analogous to (2.1) for the count of symmetric cylindric partitions is present in [12]. At the moment, the relation of these products to affine Lie algebra character formulas is fuzzy, and there are no general conjectural series representations for symmetric cylindric partitions either. We plan to study these objects further.
1254
Finally, we plan to pursue sum representations of the generating functions for cylindric partitions with profiles of more than 3 parts. The product representation (2.1) and the functional equations (2.3) apply regardless of the size and length of the profiles. So far, we are only able to prove and conjecture sum representations for profiles with up to 3 parts.
1263
7. Comments on Computations

In the computerized proofs of [19], we made extensive use of [1] and [30]. Those proofs had three main steps: finding a recurrence relation (over the exponent of z) for the claimed sum formulas of the (normalized) generating functions of cylindric partitions; uncoupling the q-difference equation system laid out by (2.3) to get a recurrence satisfied by the coefficients of the z's in the true generating functions of cylindric partitions; and comparing recurrences (taking greatest common divisors of recurrences as operators if needed) to show that both sequences satisfy the same recurrences with the same initial conditions. Once the critical mass of proved identities was reached, the rest of the identities were shown by series manipulations guided by (2.3). That way we showed that all the claimed sums and the true combinatorial generating functions were the same. This proof required two hefty algorithms, namely the creative telescoping algorithm and Gröbner bases calculations, to find the recurrence of a given hypergeometric sum dependent on a discrete variable and to uncouple a coupled system of recurrences, respectively.
1275
We tried using the same method to prove some claims Warnaar [42] made for cylindric partitions with 3-part profiles where the modulus is not divisible by 3. We quickly saw that the creative telescoping calculations were not terminating (in any definition of reasonable time). This is due to the increasing number of nested summations in these conjectures. However, the uncoupling of recurrences could still be performed.
1279
Kanade–Russell's approach [29] to proving that the claimed series representations for the bivariate generating functions of cylindric partitions are the true generating functions is a fresh take on things. It is somewhat backwards compared to the proofs of [19], in the sense that we first extend our conjectural identities using the explicit conjectures of Conjecture 2.13 and series manipulations, and then prove all these conjectural identities by showing that the coupled relations are satisfied and that we still satisfy the initial conditions. The key idea of reducing coupled q-difference equations with the functional relations of the claimed hypergeometric sums was also used in [16] in a different context. Moreover, this approach replaces (the old bottleneck) creative telescoping with the contiguous relations of Lemma 2.5. However, rewriting the coupled relations of (2.3) in the new language as a linear combination of terms in the ideal Im (see Section 3) with coefficients in Z((q, z)) is highly non-trivial. Kanade [26] mentioned that they found these linear combinations by first making an ansatz for a single case at a time and then solving for undetermined coefficients. The identification of the minimal necessary ansatz is practically impossible. They also mentioned that each hard-case proof of the modulo 10 calculations took about 8 hours to terminate on a home computer. With the matrix reduction approach of this paper, we are
1291
+ about 8 hours to terminate on a home computer. With the matrix reduction approach of this paper, we are
1292
+ order of 2 faster in the modulo 10 cases. This is basically because once we reduce a matrix, we can use it
1293
+ repeadetly for all the functional relations, whereas the previous approach needs to make a single ansatz and
1294
+ solve if for all cases individually. It is with this speed upgrade that we could prove the new modulo 11 and
1295
+ modulo 13 cases. On the other hand, modulo 9 and modulo 12 cases are still open. This is likely due to the
1296
+ extra degree of complication the q-binomial coefficients in (2.9)’s introduce. As the order of the recurrences
1297
+ the Sm(ρ|σ) satisfy increases, the systems we need to reduce also become larger.
1298
Mathematica's Gaussian elimination function RowReduce insists on calculating the reduced row echelon form of matrices. This is not only unnecessary, it also overcomplicates the calculations by introducing large rational function expressions for the upper triangular coefficients. This forced us to implement our own Gaussian elimination algorithm within the Mathematica computer algebra system. This basic implementation sorts and performs row elimination of a matrix with entries in a polynomial ring with integer coefficients, such as Z[q, z], without introducing rational functions, and it terminates when a row echelon matrix (a triangular system of equations) is reached. This function will be made a part of the impending next version release of the qFunctions package. As a side note, we implemented a naive parallelization of this elimination, but we have not seen any benefits of splitting the calculations yet.
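A minimal sketch of this kind of fraction-free elimination (our own illustration in Python, not the actual qFunctions implementation): represent Z[q] polynomials as coefficient lists and replace the division step of Gaussian elimination by cross-multiplication, so every row stays polynomial.

```python
# Toy fraction-free row elimination over Z[q].  A polynomial is a list of
# integer coefficients; no rational functions ever appear.

def p_mul(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def p_sub(a, b):
    n = max(len(a), len(b))
    c = [(a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)
         for i in range(n)]
    while len(c) > 1 and c[-1] == 0:   # trim trailing zero coefficients
        c.pop()
    return c

def is_zero(p):
    return all(x == 0 for x in p)

def row_echelon(mat):
    """Eliminate below each pivot by cross-multiplication (no division)."""
    rows, cols = len(mat), len(mat[0])
    pivot_row = 0
    for col in range(cols):
        for r in range(pivot_row, rows):        # find a usable pivot
            if not is_zero(mat[r][col]):
                mat[pivot_row], mat[r] = mat[r], mat[pivot_row]
                break
        else:
            continue
        piv = mat[pivot_row][col]
        for r in range(pivot_row + 1, rows):
            f = mat[r][col]
            if is_zero(f):
                continue
            # row_r := piv * row_r - f * row_pivot, still in Z[q]
            mat[r] = [p_sub(p_mul(piv, mat[r][c]),
                            p_mul(f, mat[pivot_row][c]))
                      for c in range(cols)]
        pivot_row += 1
    return mat

# Example: a 3x3 matrix over Z[q]; [0, 1] stands for the polynomial q.
q, one = [0, 1], [1]
M = row_echelon([[one, q, [1, 1]],
                 [q, one, q],
                 [[1, 1], q, one]])
# entries below the pivots are now the zero polynomial
print([[is_zero(M[r][c]) for c in range(3)] for r in range(3)])
```

The trade-off of this scheme is coefficient growth instead of rational functions, which in practice is the cheaper of the two for these systems.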
1308
We should also acknowledge that there are at least two crucial optimizations waiting to be implemented to aid proofs of families in the cylindric partitions scheme and other similar schemes. The first task that should be done is to keep track of nullified relations and to remove the contributions of the nullspace in later calculations. To put it in concrete terms, at the moment we do not know if N = 6 is the minimal number to prove Theorems 4.1 and/or 5.1. We know that it is a sufficient number. By removing any and all nullified relations we would only see a minimal representation (dependent on the choice of N) of these recurrences as elements in the ideals Im, and that can give us an idea of what the optimal bound for N is supposed to be in general. The second pending addition is the dynamic extension of the matrix to be reduced. At the moment, we fix an N experimentally, hoping that it is enough to show that the relations of interest are in the nullspace of this matrix. This is in the same spirit as making a fixed ansatz. Row reduction as a preprocessing step helps with the repeated calculations. Having an echelon system boosts the speed of later calculations immensely. If the chosen N is not enough, then we need to pick a larger N and start all over. This requires performing the row reduction of the matrix for the new N once more as a subproblem. This should be changed by extending the already triangularized matrix for N to N + 1 and doing the row reduction again only for the added relations. The incrementality of the matrix would also carry us to the minimal necessary N for any given m (assuming that Conjecture 3.2 is correct) naturally.
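The proposed incremental scheme can be sketched as follows: keep the echelon rows found so far and reduce each newly added relation against them, so enlarging N only costs the reduction of the new rows. Below is a toy version over Q with exact rational arithmetic (the real setting would be matrices over Z[q, z]; all names here are ours).

```python
# Incremental echelon basis: relations are added one at a time and each
# new row is reduced against the rows already kept, instead of
# re-reducing the whole matrix from scratch.
from fractions import Fraction

def reduce_row(basis, row):
    """Reduce row against the echelon basis; return the remainder."""
    row = [Fraction(x) for x in row]
    for lead, brow in basis:
        if row[lead] != 0:
            f = row[lead] / brow[lead]
            row = [x - f * y for x, y in zip(row, brow)]
    return row

def add_relation(basis, row):
    """Insert a new relation; return True if it was independent."""
    row = reduce_row(basis, row)
    for lead, x in enumerate(row):
        if x != 0:
            basis.append((lead, row))
            basis.sort(key=lambda p: p[0])   # keep leads in order
            return True
    return False  # row lies in the span: a "nullified" relation

basis = []
print(add_relation(basis, [1, 2, 0]))
print(add_relation(basis, [0, 1, 1]))
print(add_relation(basis, [1, 3, 1]))  # dependent on the first two
```

Tracking which added rows come back as nullified is exactly the bookkeeping needed to locate the minimal sufficient N.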
1327
References

[1] J. Ablinger and A. K. Uncu. qFunctions - a Mathematica package for q-series and partition theory applications. Submitted. arXiv:1910.12410, 2019.
[2] A. Agrawal, G. E. Andrews, and D. Bressoud. The Bailey lattice. J. Indian Math. Soc., 51:57–73, 1987.
[3] G. E. Andrews. An analytic generalization of the Rogers-Ramanujan identities for odd moduli. Proc. Nat. Acad. Sci. USA, 71:4082–4085, 1974.
[4] G. E. Andrews. q-series: their development and application in analysis, number theory, combinatorics, physics, and computer algebra. Vol. 66, CBMS Regional Conference Series in Mathematics. Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 1986. pp. xii+130.
[5] G. E. Andrews. The Theory of Partitions. Cambridge University Press, 1984.
[6] G. E. Andrews. On the proofs of the Rogers-Ramanujan identities. In q-Series and Partitions, pages 1–14. Springer-Verlag, New York, 1989.
[7] G. E. Andrews, A. Schilling, and S. O. Warnaar. An A2 Bailey lemma and Rogers-Ramanujan-type identities. J. Amer. Math. Soc., 12(3):677–702, 1999.
[8] C. Armond and O. T. Dasbach. Rogers-Ramanujan type identities and the head and tail of the colored Jones polynomial. arXiv:1106.3948 [math.GT].
[9] W. N. Bailey. Identities of the Rogers-Ramanujan type. Proc. London Math. Soc., 50(2):1–10, 1949.
[10] R. J. Baxter. Rogers-Ramanujan identities in the hard hexagon model. J. Stat. Phys., 26:427–452, 1981.
[11] A. Borodin. Periodic Schur process and cylindric partitions. Duke Math. J., 140(3):391–468, 2007.
[12] W. Bridges and A. K. Uncu. Weighted cylindric partitions. J. Algebraic Combin., 56(4):1309–1337, 2022.
[13] D. M. Bressoud. A generalization of the Rogers-Ramanujan identities for all moduli. J. Comb. Th. A, 27:64–68, 1979.
[14] D. M. Bressoud. An easy proof of the Rogers-Ramanujan identities. J. Number Th., 16:235–241, 1983.
[15] C. Bruschek, H. Mourtada, and J. Schepers. Arc spaces and Rogers-Ramanujan identities. Ramanujan J., 30:9–38, 2013.
[16] S. Chern. Linked partition ideals, directed graphs and q-multi-summations. Electron. J. Combin., 27(3): Paper No. 3.33, 29 pp., 2020.
[17] S. Corteel. Rogers-Ramanujan identities and the Robinson-Schensted-Knuth correspondence. Proc. Amer. Math. Soc., 145(5):2011–2022, 2017.
[18] S. Corteel and T. A. Welsh. The A2 Rogers–Ramanujan identities revisited. Annals of Combinatorics, 23(3):683–694, 2019.
[19] S. Corteel, J. Dousse, and A. K. Uncu. Cylindric partitions and some new A2 Rogers–Ramanujan identities. Proc. Amer. Math. Soc., 150(2):481–497, 2022.
[20] B. Feigin, O. Foda, and T. A. Welsh. Andrews–Gordon type identities from combinations of Virasoro characters. Ramanujan J., 17(1):33–52, 2008.
[21] O. Foda and T. A. Welsh. Cylindric partitions, W_r characters and the Andrews-Gordon-Bressoud identities. J. Phys. A, 49(16):164004, 37, 2016.
[22] A. M. Garsia and S. C. Milne. A Rogers-Ramanujan bijection. J. Combin. Theory Ser. A, 31:289–339, 1981.
[23] I. M. Gessel and C. Krattenthaler. Cylindric partitions. Trans. Amer. Math. Soc., 349(2):429–479, 1997.
[24] B. Gordon. A combinatorial generalisation of the Rogers-Ramanujan identities. Amer. J. Math., 83:393–399, 1961.
[25] M. J. Griffin, K. Ono, and S. O. Warnaar. A framework of Rogers–Ramanujan identities and their arithmetic properties. Duke Math. J., 165(8):1475–1527, 2016.
[26] S. Kanade. Private communications.
[27] S. Kanade. Structure of certain level 2 standard modules for A_5^(2) and Göllnitz–Gordon identities. Ramanujan J., 45(3):873–893, 2018.
[28] S. Kanade. On the A2 Andrews–Schilling–Warnaar identities. Preprint.
[29] S. Kanade and M. C. Russell. Completing the A2 Andrews–Schilling–Warnaar identities. arXiv:2203.05690 [math.CO].
[30] C. Koutschan. Advanced applications of the holonomic systems approach. PhD thesis, RISC, Johannes Kepler University, Linz, 2009.
[31] J. Lepowsky and R. L. Wilson. The structure of standard modules, I: Universal algebras and the Rogers-Ramanujan identities. Invent. Math., 77:199–290, 1984.
[32] J. Lepowsky and R. L. Wilson. The structure of standard modules, II: The case A_1^(1), principal gradation. Invent. Math., 79:417–442, 1985.
[33] P. A. MacMahon. Combinatory Analysis, volume 2. Cambridge University Press, New York, NY, USA, 1916.
[34] S. C. Milne and G. M. Lilly. The A_ℓ and C_ℓ Bailey transform and lemma. Bull. Amer. Math. Soc., 26:258–263, 1992.
[35] S. C. Milne and G. M. Lilly. Consequences of the A_ℓ and C_ℓ Bailey transform and lemma. Discrete Math., 139:319–346, 1995.
[36] A. K. Uncu. <https://drive.google.com/drive/folders/1qRLIfX8JVIzxkKQCCaYfg4i84X_l_-fo> Last accessed January 3, 2023.
[37] A. Pascadi. Several new product identities in relation to two-variable Rogers–Ramanujan type sums and mock theta functions. arXiv:2009.05878, 2020.
[38] L. J. Rogers and S. Ramanujan. Proof of certain identities in combinatory analysis. Cambr. Phil. Soc. Proc., 19:211–216, 1919.
[39] I. Schur. Ein Beitrag zur Additiven Zahlentheorie und zur Theorie der Kettenbrüche. S.-B. Preuss. Akad. Wiss. Phys. Math. Klasse, pages 302–321, 1917.
[40] A. V. Sills. An invitation to the Rogers-Ramanujan identities. CRC Press, 2017.
[41] S. Tsuchioka. An example of A2 Rogers-Ramanujan bipartition identities of level 3. arXiv:2205.04811 [math.RT].
[42] S. O. Warnaar. The A2 Andrews-Gordon identities and cylindric partitions. arXiv:2111.07550 [math.CO].
[43] S. O. Warnaar. Private communications.

Johann Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, Altenbergerstraße 69, A-4040 Linz, Austria
Email address: akuncu@ricam.oeaw.ac.at

University of Bath, Faculty of Science, Department of Computer Science, Bath, BA2 7AY, UK
Email address: aku21@bath.ac.uk