jackkuo committed
Commit 955a718 · verified · 1 Parent(s): e61704c

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. -dE5T4oBgHgl3EQfRg6t/content/tmp_files/2301.05522v1.pdf.txt +286 -0
  2. -dE5T4oBgHgl3EQfRg6t/content/tmp_files/load_file.txt +159 -0
  3. -tE3T4oBgHgl3EQfSwk7/content/2301.04435v1.pdf +3 -0
  4. -tE3T4oBgHgl3EQfSwk7/vector_store/index.pkl +3 -0
  5. .gitattributes +53 -0
  6. 3tAzT4oBgHgl3EQfD_po/vector_store/index.pkl +3 -0
  7. 4dE2T4oBgHgl3EQfOQZ5/content/tmp_files/2301.03746v1.pdf.txt +2602 -0
  8. 4dE2T4oBgHgl3EQfOQZ5/content/tmp_files/load_file.txt +0 -0
  9. 4dFJT4oBgHgl3EQfjyxF/content/tmp_files/2301.11576v1.pdf.txt +3435 -0
  10. 4dFJT4oBgHgl3EQfjyxF/content/tmp_files/load_file.txt +0 -0
  11. 7tAzT4oBgHgl3EQfE_oc/content/tmp_files/2301.01001v1.pdf.txt +1527 -0
  12. 7tAzT4oBgHgl3EQfE_oc/content/tmp_files/load_file.txt +0 -0
  13. 7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf +3 -0
  14. 7tE1T4oBgHgl3EQfBwI8/vector_store/index.faiss +3 -0
  15. 89E1T4oBgHgl3EQf7wWX/content/tmp_files/2301.03538v1.pdf.txt +1370 -0
  16. 89E1T4oBgHgl3EQf7wWX/content/tmp_files/load_file.txt +0 -0
  17. 8dAzT4oBgHgl3EQfgfzo/vector_store/index.faiss +3 -0
  18. BdAzT4oBgHgl3EQfv_7e/content/tmp_files/2301.01717v1.pdf.txt +0 -0
  19. BdAzT4oBgHgl3EQfv_7e/content/tmp_files/load_file.txt +0 -0
  20. C9AyT4oBgHgl3EQf4fqw/vector_store/index.pkl +3 -0
  21. CdFAT4oBgHgl3EQftB4M/vector_store/index.faiss +3 -0
  22. CtE0T4oBgHgl3EQfyQKJ/content/2301.02657v1.pdf +3 -0
  23. CtE1T4oBgHgl3EQf9wbT/vector_store/index.faiss +3 -0
  24. D9E4T4oBgHgl3EQfGQye/vector_store/index.pkl +3 -0
  25. F9AzT4oBgHgl3EQfHPuf/content/2301.01042v1.pdf +3 -0
  26. F9AzT4oBgHgl3EQfHPuf/vector_store/index.pkl +3 -0
  27. FNAzT4oBgHgl3EQfw_7Q/vector_store/index.pkl +3 -0
  28. FNE4T4oBgHgl3EQf7A5n/content/tmp_files/2301.05336v1.pdf.txt +2041 -0
  29. FNE4T4oBgHgl3EQf7A5n/content/tmp_files/load_file.txt +0 -0
  30. GNFIT4oBgHgl3EQfWStC/content/2301.11239v1.pdf +3 -0
  31. H9FAT4oBgHgl3EQfuR4A/content/tmp_files/2301.08668v1.pdf.txt +2795 -0
  32. H9FAT4oBgHgl3EQfuR4A/content/tmp_files/load_file.txt +0 -0
  33. JNFLT4oBgHgl3EQfJy82/content/tmp_files/2301.12005v1.pdf.txt +2861 -0
  34. JNFLT4oBgHgl3EQfJy82/content/tmp_files/load_file.txt +0 -0
  35. OdFAT4oBgHgl3EQfyx46/content/tmp_files/2301.08694v1.pdf.txt +1636 -0
  36. OdFAT4oBgHgl3EQfyx46/content/tmp_files/load_file.txt +0 -0
  37. OtAyT4oBgHgl3EQfUfen/content/2301.00127v1.pdf +3 -0
  38. PdE0T4oBgHgl3EQfkAEs/vector_store/index.faiss +3 -0
  39. PdFQT4oBgHgl3EQfYjZ5/content/2301.13312v1.pdf +3 -0
  40. PdFQT4oBgHgl3EQfYjZ5/vector_store/index.faiss +3 -0
  41. PdFQT4oBgHgl3EQfYjZ5/vector_store/index.pkl +3 -0
  42. PtA0T4oBgHgl3EQfDP9d/content/2301.02000v1.pdf +3 -0
  43. PtA0T4oBgHgl3EQfDP9d/vector_store/index.faiss +3 -0
  44. PtA0T4oBgHgl3EQfDP9d/vector_store/index.pkl +3 -0
  45. PtE0T4oBgHgl3EQfTwCq/content/tmp_files/2301.02241v1.pdf.txt +2158 -0
  46. PtE0T4oBgHgl3EQfTwCq/content/tmp_files/load_file.txt +0 -0
  47. PtFPT4oBgHgl3EQfnzX3/content/2301.13132v1.pdf +3 -0
  48. PtFPT4oBgHgl3EQfnzX3/vector_store/index.pkl +3 -0
  49. QdE4T4oBgHgl3EQf-w6Q/content/tmp_files/2301.05366v1.pdf.txt +468 -0
  50. QdE4T4oBgHgl3EQf-w6Q/content/tmp_files/load_file.txt +335 -0
-dE5T4oBgHgl3EQfRg6t/content/tmp_files/2301.05522v1.pdf.txt ADDED
@@ -0,0 +1,286 @@
1
+ Hyperparameter Optimization as a Service on INFN
2
+ Cloud
3
+ Matteo Barbetti1,2 and Lucio Anderlini2
4
+ 1 Department of Information Engineering, University of Florence,
5
+ via Santa Marta 3, Firenze, Italy
6
+ 2 Istituto Nazionale di Fisica Nucleare, Sezione di Firenze,
7
+ via G. Sansone 1, Sesto Fiorentino (FI), Italy
8
+ E-mail: Matteo.Barbetti@fi.infn.it
9
+ Abstract.
10
+ The simplest and often most effective way of parallelizing the training of complex
11
+ machine learning models is to execute several training instances on multiple machines, possibly
12
+ scanning the hyperparameter space to optimize the underlying statistical model and the learning
13
+ procedure. Often, such a meta learning procedure is limited by the ability of accessing securely
14
+ a common database organizing the knowledge of the previous and ongoing trials. Exploiting
15
+ opportunistic GPUs provided in different environments represents a further challenge when
16
+ designing such optimization campaigns. In this contribution we discuss how a set of RestAPIs
17
+ can be used to access a dedicated service based on INFN Cloud to monitor and possibly
18
+ coordinate multiple training instances, with gradient-less optimization techniques, via simple
19
+ HTTP requests. The service, named Hopaas (Hyperparameter OPtimization As A Service), is
20
+ made of a web interface and sets of APIs implemented with a FastAPI back-end running through
21
+ Uvicorn and NGINX in a virtual instance of INFN Cloud. The optimization algorithms are
22
+ currently based on Bayesian techniques as provided by Optuna. A Python front-end is also
23
+ made available for quick prototyping. We present applications to hyperparameter optimization
24
+ campaigns performed combining private, INFN Cloud and CINECA resources.
25
+ 1. Introduction
26
+ In the last decade, machine learning (ML) has become an incredibly valuable tool in practically
27
+ every field of application, from scientific research to industry.
28
+ Increasingly complex models
29
+ achieve surprising results in a wide range of applications, such as image generation [1], language
30
+ modelling [2] or medical diagnosis [3]. Most of the ML techniques rely on the optimization of
31
+ an objective function with respect to some internal parameters, describing the performance of
32
+ the algorithm. Usually, when the optimum of the objective function is a minimum, the names
33
+ cost or loss function are adopted.
34
+ The fastest iterative optimization techniques rely on the
35
+ (Stochastic) Gradient Descent technique [4]. Unfortunately, for a wide class of optimization
36
+ problems, the gradient of the loss function with respect to the model parameters is extremely
37
+ expensive to compute or cannot be defined at all. For example, optimization problems involving
38
+ noisy loss functions in contexts where analytical derivatives cannot be computed cannot rely on
39
+ gradient-descent techniques, requiring the adoption of slower, often heuristic, methods. A widely
40
+ adopted option is to define a surrogate model describing the variations of the loss function across
41
+ the parameter space together with its uncertainty, driving the optimization algorithm to explore
42
+ those regions where improvements were not statistically excluded from previous evaluations.
43
+ arXiv:2301.05522v1 [cs.DC] 13 Jan 2023
44
+
45
+ Techniques adopting this approach are referred to as Bayesian optimization (BO) methods and
46
+ have been an active area of research in ML for the last decade [5, 6, 7, 8, 9].
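To make the idea concrete, the following minimal Python sketch shows gradient-free Bayesian optimization of a noisy toy objective with Optuna, the library later adopted by the Hopaas back-end; the objective function and search range are purely illustrative.

import random
import optuna

# Toy noisy objective: no usable gradient, so a surrogate-based (Bayesian) sampler is appropriate.
def objective(trial):
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2 + random.gauss(0.0, 0.1)

study = optuna.create_study(direction="minimize")  # TPE-based Bayesian sampler by default
study.optimize(objective, n_trials=50)             # sequential trials here; Hopaas distributes them
print(study.best_params)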
47
+ Tuning the performance of ML models may benefit from the optimization of the
48
+ hyperparameters, defined as all those parameters that are not learned during the model training
49
+ procedure but encode some arbitrariness in the architecture of the model itself or in the procedure
50
+ to train it [5]. In practice, hyperparameter optimization (HPO) studies require training the
51
+ model multiple times to explore the hyperparameter space.
52
+ Since training ML models is
53
+ computationally expensive, HPO campaigns should focus as much as possible on those regions
54
+ of the hyperparameter space where the model performs better to reduce the time needed for
55
+ finding the best configuration. On the other hand, the loss is often a noisy function of the
56
+ hyperparameters as multiple training procedures may result in different performance because of
57
+ the intrinsic randomness of the stochastic gradient-descent techniques.
58
+ Exploring the hyperparameter space requires many independent trainings,
59
+ or trials,
60
+ that can run in parallel on different computing resources.
61
+ In general, accessing more
62
+ resources enables the exploration of larger hyperparameter spaces, possibly resulting in better
63
+ models.
64
+ Opportunistic access to compute resources may provide a valuable contribution to
65
+ HPO campaigns.
66
+ Unfortunately, coordinating studies on resources from different providers, each with its own
67
+ restrictions and regulations, challenges the adoption of existing HPO services.
68
+ In this document, we propose Hopaas (Hyperparameter OPtimization As A Service),
69
+ implementing a set of RestAPIs to orchestrate HPO studies across multiple computing instances.
70
+ Computing nodes from multiple HPC centers can contribute dynamically to the same optimization
71
+ study, requesting from the Hopaas server a set of hyperparameters to test and then sending
72
+ back the outcome of the training procedure.
73
+ Several trials of one or more studies can be
74
+ tracked and monitored through the web interface provided by the Hopaas service. A reference
75
+ implementation, with a server instance1 deployed on INFN Cloud resources and a simple client
76
+ package [10] wrapping the RestAPIs to Python functions, is also discussed.
77
+ 2. Hopaas API specification
78
+ We refer to a trial as a single training attempt with a specific set of hyperparameters to test. A
79
+ study represents an optimization session and includes a collection of trials. In practice, a study
80
+ is unambiguously defined by the set of hyperparameters to optimize, the range of values in which
81
+ to search for the optimum, and the modality in which this search is carried out (e.g., grid search,
82
+ Bayesian methods [5], or evolutionary algorithms [11]).
83
+ The core activity of the Hopaas service is to manage distributed optimization studies. A set
84
+ of RestAPIs is designed to create trials, finalize them, and update the service on intermediate
85
+ values of the objective function to enable early termination of the trial before the conclusion
86
+ of the training. The APIs named ask, tell and should_prune implement these actions upon
87
+ POST HTTP requests, with user authentication based on an API token in the request path.
88
+ A computing node ready to test a set of hyperparameters will query the Hopaas server via
89
+ the ask API, including in the request body all the information needed to define an optimization
90
+ study unambiguously. The Hopaas server will define a new trial, possibly assigning it to an
91
+ existing study, or creating a new one, and providing a unique identifier of the new trial and the
92
+ set of hyperparameters to evaluate as part of the HTTP response.
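As an illustration of the ask step, the sketch below sends a hypothetical request with Python and the requests library; the body layout (title, search space, direction) and the response keys are assumptions made for illustration, not the documented Hopaas schema, while the URL pattern follows the api/ask/user_token path shown in Figure 2.

import requests

# Hypothetical ask request: field names are illustrative assumptions, not the documented schema.
ask_body = {
    "title": "GAN-hyperparameters",                       # identifies the optimization study
    "properties": {
        "learning_rate": {"low": 1.0e-5, "high": 1.0e-2},  # range to explore
        "batch_size": [128, 256, 512],                      # discrete choices
    },
    "direction": "minimize",
}
reply = requests.post("https://hopaas.cloud.infn.it/api/ask/<user_token>", json=ask_body)
trial = reply.json()  # expected to carry a unique trial identifier and the sampled hyperparameters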
93
+ Usually, the evaluation of the set of hyperparameters consists of training a model defined by
94
+ those hyperparameters and computing the resulting value of the objective function. The evaluated
95
+ performance metric may correspond to the loss function computed during the training procedure
96
+ but, in general, it can be any numerical score obtained by processing a given set of hyperparameters.
97
+ Once the evaluation is completed, the computing node will finalize the trial using the tell API,
98
+ 1 Visit https://hopaas.cloud.infn.it for additional details.
99
+ [Figure 1. A Hopaas server orchestrating multiple studies across multiple sites: computing nodes
+ at different sites (including on-premises resources) process trials of Study A and Study B,
+ exchanging ask and tell POST requests with the Hopaas server, which records the trials and the
+ evolution of the loss for each study.]
133
+ whose body will include the unique identifier of the trial and the final evaluation of the objective
134
+ function.
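Continuing the illustration, a hypothetical tell request finalizing the trial could look like the sketch below; the field names are again assumptions, only the path pattern comes from Figure 2.

import requests

# Hypothetical tell request: trial identifier and final objective value (assumed field names).
tell_body = {
    "trial_id": "a1b2c3d4",   # unique identifier returned by the ask API
    "loss": 0.1234,           # final evaluation of the objective function
}
requests.post("https://hopaas.cloud.infn.it/api/tell/<user_token>", json=tell_body)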
135
+ The Hopaas server may serve multiple ask requests from different sources, assigning them
136
+ to the same or different studies, while updating the surrogate model each time a new evaluation is
137
+ made available by querying the tell API. A schematic representation of the orchestration of
138
+ studies in multiple sites is reported in Figure 1.
139
+ Depending on the specific ML algorithm, intermediate evaluations of the objective function
140
+ can be accessed during the training procedure and used to abort non-promising trials (pruning)
141
+ without wasting computing power on completing the training procedure. Optionally, the
142
+ computing node may update the Hopaas server with intermediate evaluations of the objective
143
+ function by querying the should_prune API for monitoring and pruning purposes. The body of
144
+ a should_prune request will contain the unique identifier of the trial, the intermediate value of
145
+ the loss function and an integer number encoding the progress of the training procedure, named
146
+ the step. The HTTP response will indicate whether the trial should be terminated early, or whether it
147
+ is sufficiently likely to result in an improvement over the previous tests.
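A hypothetical should_prune request, sent from inside the training loop, might look like the following sketch; the endpoint name, the field names and the response format are assumptions made for illustration.

import requests

# Hypothetical should_prune request with an intermediate loss value at a given training step.
prune_body = {
    "trial_id": "a1b2c3d4",   # unique identifier of the running trial
    "loss": 0.4567,           # intermediate value of the objective function
    "step": 10,               # integer encoding the progress of the training
}
reply = requests.post("https://hopaas.cloud.infn.it/api/should_prune/<user_token>", json=prune_body)
if reply.json().get("should_prune"):   # assumed boolean field in the response
    print("Non-promising trial: stop the training early")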
148
+ A reference Python front-end was developed to facilitate access to the Hopaas
149
+ service from Python applications [10]. While Python is a primary choice for many scientific
150
+ applications, it should be noted that the client simply wraps the RestAPIs into classes and
151
+ functions, as the Hopaas protocol is designed to be language-agnostic, relying on widely adopted
152
+ web communication standards. In addition, the Hopaas client is also framework-agnostic since
153
+ the evaluation of the objective function for a given set of hyperparameters can be implemented
154
+ with any framework and environment.
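As a purely illustrative sketch of how such a thin client can be used, the snippet below wraps the ask/tell cycle into a study and trial abstraction; the class names, constructor arguments and context-manager behaviour are assumptions, not the documented interface of the package in [10].

# Illustrative only: names and behaviour below are assumptions, not the documented client API [10].
from hopaas_client import Client, Study   # hypothetical imports

def train_model(learning_rate):
    """Placeholder for the user's training procedure, returning the objective value."""
    return learning_rate * 100.0

client = Client(server="https://hopaas.cloud.infn.it", token="<api_token>")
study = Study(name="GAN-hyperparameters", client=client, direction="minimize",
              properties={"learning_rate": (1.0e-5, 1.0e-2)})

with study.trial() as trial:                        # ask: receive a set of hyperparameters
    trial.loss = train_model(trial.learning_rate)   # tell: finalize the trial with the score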
155
+ 3. Implementation
156
+ The reference implementation for the Hopaas service running on INFN Cloud relies on
157
+ containerized applications orchestrated with docker-compose. The web server implementing
158
+ the RestAPIs is a scalable set of Uvicorn instances running an application based on the FastAPI
159
+ framework. The BO algorithms are provided by integrating the back-end with Optuna, while
160
+ [Figure 2. Workflow of an optimization study with a client-server approach based on RestAPIs:
+ a computing instance sends POST requests to https://hopaas.cloud.infn.it/api/ask/user_token and
+ /api/tell/user_token (authorized users only), and the server answers with HTTP 200 responses
+ carrying the trial parameters (ask) or an empty body (tell). Steps: (1) set-up of the optimization
+ study (e.g. title, min/max, sampler, search space); (2) creation/loading of the optimization study
+ on the server, where the same set-up allows loading an existing study; (3) parameters sampled
+ according to the study set-up, defining an optimization trial; (4) retrieval of the trial
+ parameters (both constant and optimizable); (5) objective function computation (e.g. closed
+ formula, machine learning score); (6) optimization study updated with the trial results, with
+ parallel tests enabled by gradient-less optimization strategies; (7a) the computing instance is
+ ready to test a new trial set; (7b) the server is ready to provide a new trial set, with
+ score-driven suggestions enabled by default. The campaign is framework-agnostic.]
203
+ future extensions to additional frameworks are planned. The access to the Uvicorn instances
204
+ from the Internet is mediated by an NGINX reverse proxy accessed via the encrypted HTTPS
205
+ protocol. A PostgreSQL instance is part of the docker-compose configuration to provide shared
206
+ persistency to the multiple instances of the web application back-end.
207
+ The workflow of the
208
+ interaction between the Hopaas server and computing nodes is depicted in Figure 2.
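For intuition only, a minimal ask/tell back-end combining FastAPI and Optuna could be structured as in the sketch below; it is not the Hopaas implementation (a single hard-coded study, in-memory state instead of PostgreSQL, no authentication, no pruning).

# Minimal illustrative sketch, not the actual Hopaas back-end.
import optuna
from fastapi import FastAPI

app = FastAPI()
study = optuna.create_study(direction="minimize")  # single hard-coded study for illustration
open_trials = {}                                   # in-memory registry; Hopaas uses PostgreSQL

@app.post("/api/ask/{token}")
async def ask(token: str):
    trial = study.ask()
    open_trials[trial.number] = trial
    params = {"learning_rate": trial.suggest_float("learning_rate", 1.0e-5, 1.0e-2, log=True)}
    return {"trial_id": trial.number, "params": params}

@app.post("/api/tell/{token}")
async def tell(token: str, body: dict):
    study.tell(open_trials.pop(body["trial_id"]), body["loss"])
    return {}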
209
+ The same Hopaas server is designed to serve web-based user access. A web application,
210
+ developed in HTML, CSS and JavaScript, is shipped to the client browser as defined by a set
211
+ of web-specific APIs in Uvicorn. The web pages of the front-end provide dynamic visualizations
212
+ by fetching data from specialized APIs at regular intervals. Plots showing the evolution of the
213
+ loss reported by different studies and trials are obtained with the Chartist library.
214
+ The user authentication and authorization procedure of the web application is managed
215
+ relying on access tokens as defined by the OAuth2 standard, using the INFN GitLab instance
216
+ as identity provider.
217
+ Support for INDIGO IAM is also planned for the future [12].
218
+ Once
219
+ authenticated, users can generate multiple API tokens through the web application. Each API
220
+ token has a validity period defined at generation and can be revoked at any time. Tokens with
221
+ shorter validity are more appropriate for usage in public or untrusted contexts.
222
+ 4. Tuning the LHCb ultra-fast simulation models with Hopaas and Marconi 100
223
+ Machine Learning is an important research area in High Energy Physics (HEP), with first
224
+ applications dating back to the 1990s. Recent years have witnessed an explosion in the use of
225
+ ML techniques in HEP to face the computational challenges raised by the upcoming and future
226
+ runs of the Large Hadron Collider (LHC) [13]. With an increasing number, complexity and
227
+ range of applications of the ML models, HPO is becoming popular in HEP [14], and specialized
228
+ frameworks targeting distributed computing are being developed [7].
229
+ The reference implementation of the Hopaas service presented here has been successfully used
230
+ for HEP applications and, in particular, to optimize the parameterizations for Lamarr [15], a
231
+ novel LHCb ultra-fast simulation framework. Most of the parameterizations of Lamarr rely
232
+ on Generative Adversarial Networks (GANs) [16], advanced algorithms taken from Computer
237
+ Vision that have been demonstrated to reproduce well the distributions obtained from
238
+ standard simulation techniques [17, 18]. Adversarial models are particularly sensitive to the
239
+ choice of the hyperparameter configuration and require intensive optimization campaigns to
240
+ model accurately the target distributions.
241
+ Several optimization studies have been orchestrated by the Hopaas service using diverse
242
+ computing instances, from scientific providers (like INFN, CERN and CINECA) and from
243
+ commercial cloud providers (like GCP or AWS). Most of the resources have been provided by
244
+ the CINECA supercomputer Marconi 100, with a custom network configuration to enable
245
+ the communication with the Hopaas server [19]. Hopaas was able to coordinate dozens of
246
+ optimization studies with hundreds of trials per study, collected from more than twenty concurrent
247
+ and diverse computing nodes.
248
+ 5. Conclusion and future work
249
+ Hyperparameter tuning and Bayesian methods for gradient-less optimization provide an effective
250
+ and simple means of exploiting opportunistic compute resources to improve ML models.
251
+ Unfortunately, environment variability and constraints set by different resource providers make
252
+ the application of existing HPO services challenging.
253
+ With Hopaas, we propose a solution
254
+ designed to require the addition of the thinnest possible layer in the model training application,
255
+ querying a central service via HTTPS and minimal RestAPIs. A reference implementation with
256
+ a server instance running on INFN Cloud and a Python client was presented and tested in a real-
257
+ world application to coordinate hyperparameter optimization campaigns on multiple resource
258
+ providers including INFN, CERN, and CINECA. In the future we will improve the quality of
259
+ the Web User Interface, for example enabling custom model documentation and sharing among
260
+ multiple users, and introduce support for multi-objective optimization.
261
+ Acknowledgments
262
+ We would like to thank Doina Cristina Duma and the rest of the INFN Cloud group for the
263
+ technical support in the deployment and test of Hopaas. We acknowledge enlightening and
264
+ motivating discussions with Diego Ciangottini, Stefano Dal Pra, Piergiulio Lenzi and Daniele
265
+ Spiga, especially on future applications and developments.
266
+ References
267
+ [1] Ramesh A et al. 2021 PMLR’21 pp 8821–8831
268
+ [2] Brown T et al. 2020 NeurIPS’20 pp 1877–1901
269
+ [3] Richens J G, Lee C M and Johri S 2020 Nat. Comm. 11 3923
270
+ [4] Orr G and Müller K 2003 Neural Networks: Tricks of the Trade (Springer Berlin Heidelberg)
271
+ [5] Bergstra J et al. 2011 NeurIPS’11 pp 2546–2554
272
+ [6] Bergstra J, Yamins D and Cox D 2013 PMLR’13 pp 115–123
273
+ [7] Liaw R et al. 2018 (Preprint 1807.05118)
274
+ [8] Akiba T et al. 2019 KDD’19 pp 2623–2631 (Preprint 1907.10902)
275
+ [9] Head T et al. 2021 URL https://doi.org/10.5281/zenodo.5565057
276
+ [10] Barbetti M and Anderlini L 2023 URL https://doi.org/10.5281/zenodo.7528502
277
+ [11] Tani L et al. 2021 EPJ C 81 170
278
+ [12] Spiga D et al. 2020 EPJ Web Conf. 245 07020
279
+ [13] Albertsson K et al. 2018 J. Phys.: Conf. Ser. 1085 022008
280
+ [14] Wulff E, Girone M and Pata J 2022 (Preprint 2203.01112)
281
+ [15] Anderlini L et al. 2022 PoS ICHEP2022 233
282
+ [16] Goodfellow I et al. 2014 NeurIPS’14 pp 2672–2680 (Preprint 1406.2661)
283
+ [17] Anderlini L et al. (LHCb) 2022 (Preprint 2204.09947)
284
+ [18] Ratnikov F et al. 2023 Nucl. Instrum. Meth. A 1046 167591
285
+ [19] Mariotti M, Spiga D and Boccali T 2021 PoS ISGC2021 002
286
+
-dE5T4oBgHgl3EQfRg6t/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,159 @@
152
+ page_content=' 2014 NeurIPS’14 pp 2672–2680 (Preprint 1406.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE5T4oBgHgl3EQfRg6t/content/2301.05522v1.pdf'}
153
+ page_content='2661) [17] Anderlini L et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE5T4oBgHgl3EQfRg6t/content/2301.05522v1.pdf'}
154
+ page_content=' (LHCb) 2022 (Preprint 2204.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE5T4oBgHgl3EQfRg6t/content/2301.05522v1.pdf'}
155
+ page_content='09947) [18] Ratnikov F et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE5T4oBgHgl3EQfRg6t/content/2301.05522v1.pdf'}
156
+ page_content=' 2023 Nucl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE5T4oBgHgl3EQfRg6t/content/2301.05522v1.pdf'}
157
+ page_content=' Instrum.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE5T4oBgHgl3EQfRg6t/content/2301.05522v1.pdf'}
158
+ page_content=' Meth.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE5T4oBgHgl3EQfRg6t/content/2301.05522v1.pdf'}
159
+ page_content=' A 1046 167591 [19] Mariotti M, Spiga D and Boccali T 2021 PoS ISGC2021 002' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE5T4oBgHgl3EQfRg6t/content/2301.05522v1.pdf'}
-tE3T4oBgHgl3EQfSwk7/content/2301.04435v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9698b1111af46792707f9cb9e621a2521fbdff4c2965d65b760d8e560db49091
3
+ size 380973
-tE3T4oBgHgl3EQfSwk7/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:834e2ff47d583e9e7db750f33d2c2a7b1ba30bf8f1e31f457a45e83ca86a2f11
3
+ size 206787
.gitattributes CHANGED
@@ -7164,3 +7164,56 @@ JNFIT4oBgHgl3EQfZCuh/content/2301.11251v1.pdf filter=lfs diff=lfs merge=lfs -tex
7164
  PdE0T4oBgHgl3EQfkAEs/content/2301.02466v1.pdf filter=lfs diff=lfs merge=lfs -text
7165
  g9AzT4oBgHgl3EQf4P7n/content/2301.01843v1.pdf filter=lfs diff=lfs merge=lfs -text
7166
  q9E3T4oBgHgl3EQf8gt0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7167
+ u9FLT4oBgHgl3EQfji8Y/content/2301.12111v1.pdf filter=lfs diff=lfs merge=lfs -text
7168
+ 8dAzT4oBgHgl3EQfgfzo/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7169
+ CtE1T4oBgHgl3EQf9wbT/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7170
+ utFLT4oBgHgl3EQfjS8b/content/2301.12110v1.pdf filter=lfs diff=lfs merge=lfs -text
7171
+ WtE3T4oBgHgl3EQfbgpl/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7172
+ cdAyT4oBgHgl3EQfwfli/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7173
+ ctFQT4oBgHgl3EQfijaX/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7174
+ oNE5T4oBgHgl3EQfIQ6_/content/2301.05448v1.pdf filter=lfs diff=lfs merge=lfs -text
7175
+ 7tE1T4oBgHgl3EQfBwI8/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7176
+ b9FIT4oBgHgl3EQfmyvP/content/2301.11311v1.pdf filter=lfs diff=lfs merge=lfs -text
7177
+ GNFIT4oBgHgl3EQfWStC/content/2301.11239v1.pdf filter=lfs diff=lfs merge=lfs -text
7178
+ _NFRT4oBgHgl3EQfsjc9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7179
+ PdE0T4oBgHgl3EQfkAEs/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7180
+ RtE2T4oBgHgl3EQfBwaC/content/2301.03606v1.pdf filter=lfs diff=lfs merge=lfs -text
7181
+ PtA0T4oBgHgl3EQfDP9d/content/2301.02000v1.pdf filter=lfs diff=lfs merge=lfs -text
7182
+ ctE5T4oBgHgl3EQfEw7I/content/2301.05417v1.pdf filter=lfs diff=lfs merge=lfs -text
7183
+ ntE5T4oBgHgl3EQfIA7J/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7184
+ wNFJT4oBgHgl3EQffyxP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7185
+ vNAyT4oBgHgl3EQf0vmi/content/2301.00724v1.pdf filter=lfs diff=lfs merge=lfs -text
7186
+ _NFRT4oBgHgl3EQfsjc9/content/2301.13624v1.pdf filter=lfs diff=lfs merge=lfs -text
7187
+ CdFAT4oBgHgl3EQftB4M/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7188
+ hNAyT4oBgHgl3EQf-voB/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7189
+ b9E4T4oBgHgl3EQfow0v/content/2301.05186v1.pdf filter=lfs diff=lfs merge=lfs -text
7190
+ ftE1T4oBgHgl3EQfywUm/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7191
+ PdFQT4oBgHgl3EQfYjZ5/content/2301.13312v1.pdf filter=lfs diff=lfs merge=lfs -text
7192
+ utFLT4oBgHgl3EQfjS8b/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7193
+ vNAyT4oBgHgl3EQf0vmi/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7194
+ CtE0T4oBgHgl3EQfyQKJ/content/2301.02657v1.pdf filter=lfs diff=lfs merge=lfs -text
7195
+ -tE3T4oBgHgl3EQfSwk7/content/2301.04435v1.pdf filter=lfs diff=lfs merge=lfs -text
7196
+ odE0T4oBgHgl3EQfZwBs/content/2301.02325v1.pdf filter=lfs diff=lfs merge=lfs -text
7197
+ adAzT4oBgHgl3EQfLPsZ/content/2301.01109v1.pdf filter=lfs diff=lfs merge=lfs -text
7198
+ ptE2T4oBgHgl3EQfKgYC/content/2301.03702v1.pdf filter=lfs diff=lfs merge=lfs -text
7199
+ g9AzT4oBgHgl3EQf4P7n/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7200
+ k9E3T4oBgHgl3EQf5wuC/content/2301.04784v1.pdf filter=lfs diff=lfs merge=lfs -text
7201
+ 7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf filter=lfs diff=lfs merge=lfs -text
7202
+ WdFOT4oBgHgl3EQf7jS_/content/2301.12963v1.pdf filter=lfs diff=lfs merge=lfs -text
7203
+ oNE5T4oBgHgl3EQfIQ6_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7204
+ ptE2T4oBgHgl3EQfKgYC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7205
+ u9FLT4oBgHgl3EQfji8Y/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7206
+ b9AyT4oBgHgl3EQfwfkG/content/2301.00647v1.pdf filter=lfs diff=lfs merge=lfs -text
7207
+ OtAyT4oBgHgl3EQfUfen/content/2301.00127v1.pdf filter=lfs diff=lfs merge=lfs -text
7208
+ QtE2T4oBgHgl3EQfsAh_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7209
+ tNE5T4oBgHgl3EQfKw5g/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7210
+ PtFPT4oBgHgl3EQfnzX3/content/2301.13132v1.pdf filter=lfs diff=lfs merge=lfs -text
7211
+ S9E4T4oBgHgl3EQfmA05/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7212
+ PtA0T4oBgHgl3EQfDP9d/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7213
+ k9E3T4oBgHgl3EQf5wuC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7214
+ WdFOT4oBgHgl3EQf7jS_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7215
+ m9E0T4oBgHgl3EQfZQCv/content/2301.02319v1.pdf filter=lfs diff=lfs merge=lfs -text
7216
+ PdFQT4oBgHgl3EQfYjZ5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
7217
+ F9AzT4oBgHgl3EQfHPuf/content/2301.01042v1.pdf filter=lfs diff=lfs merge=lfs -text
7218
+ wNFJT4oBgHgl3EQffyxP/content/2301.11558v1.pdf filter=lfs diff=lfs merge=lfs -text
7219
+ b9AyT4oBgHgl3EQfwfkG/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
3tAzT4oBgHgl3EQfD_po/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4d93cd40198d60c640eb1c41d687de0958b05d41bf477a763166f544f7da5345
3
+ size 232238
4dE2T4oBgHgl3EQfOQZ5/content/tmp_files/2301.03746v1.pdf.txt ADDED
@@ -0,0 +1,2602 @@
Total energy-shaping control for mechanical systems via Control-by-Interconnection

Joel Ferguson¹
arXiv:2301.03746v1 [eess.SY] 10 Jan 2023

Abstract— Application of IDA-PBC to mechanical systems has received much attention in recent decades, but its application is still limited by the solvability of the so-called matching conditions. In this work, it is shown that total energy-shaping control of under-actuated mechanical systems has a control-by-interconnection interpretation. Using this interpretation, alternate matching conditions are formulated that define constraints on the added energy, rather than the total closed-loop energy. It is additionally shown that, for systems that are under-actuated degree one with the mass matrix depending on a single coordinate, the kinetic energy matching conditions resolve to ODEs which can be evaluated numerically. Using this approach, controllers are proposed for the benchmark cart-pole and acrobot systems.
I. INTRODUCTION

Energy-based methods for controlling nonlinear physical systems have been shown to be effective in a variety of physical domains [1]. Such methods consider the energy and structure of the system to be controlled to derive control strategies that exploit the natural system behaviours. Interconnection and Damping Assignment, Passivity-Based Control (IDA-PBC) is one such control methodology where the control input is designed such that the closed loop can be interpreted as an alternate physical system with a different energy, interconnection and damping structure [2].

While IDA-PBC has been applied to a broad range of systems, particular attention has been given to mechanical systems, which exhibit a rich canonical structure [3], [4], [5]. In the case of fully-actuated systems, IDA-PBC allows a user to arbitrarily modify the potential and kinetic energy of the closed-loop system [6], a process known as total energy shaping [3]. For under-actuated systems, however, application of IDA-PBC is limited by the need to find solutions to a set of PDEs, the so-called matching conditions. Much research effort has been committed to solving these equations, with solutions posed in several special cases [4], [5], [6]. This design methodology has been applied to a number of benchmark examples such as the cart-pole, acrobot and spider crane, amongst others.

Control-by-Interconnection (CbI) describes a sub-class of energy-based control methods that falls under the umbrella of IDA-PBC [7], [8]. Under this scheme, the controller is assumed to be a passive system that is interconnected with a passive plant to be controlled via the passive input-output pair. Casimirs, conserved quantities between the control sub-system and the plant, can be constructed to help shape the energy of the closed-loop system. It is known that potential energy shaping of fully-actuated mechanical systems falls into the class of CbI [7], [9]. Control of underactuated mechanical systems has been explored in the context of CbI by applying nonlinear PID controllers to both the standard and alternate passive outputs [10], [11], [12]. The idea of using PID control for the stabilisation of passive systems was formalised in [13], where a general characterisation of all passive outputs of a given system was provided.

In this work, the connection between IDA-PBC and CbI for under-actuated mechanical systems is explored. Using the bond graph formalism (see [14] for an introduction), a control sub-system is proposed that allows for shaping of the kinetic and potential energies of the closed-loop system. By representing the controller as a passive interconnection, the requisite matching conditions are reformulated in terms of the added mass and added potential energy. Equivalence between the CbI scheme and IDA-PBC is then established for mechanical systems by identifying Casimirs relating the controller states to those of the plant. Finally, using the reformulated matching conditions, it is shown that, in the case that the mass matrix depends on only one coordinate, the kinetic energy matching conditions can be formulated as an ODE that can be evaluated numerically for implementation.

Notation. Function arguments are declared upon definition and are omitted for subsequent use. 0_{n×m} denotes an n×m zeros matrix whereas I_n denotes an n×n identity matrix. For mappings H: R^n → R, we denote the transposed gradient as ∇H := (∂H/∂x)^⊤. For P = P^⊤ ∈ R^{n×n}, λ_min[P], λ_max[P] denote the minimum and maximum (real) eigenvalues of P, respectively.

¹ Joel Ferguson is with the School of Engineering, The University of Newcastle, Australia. Email: joel.ferguson@newcastle.edu.au
II. BACKGROUND AND PROBLEM FORMULATION

In this section a number of key concepts necessary for the subsequent developments are briefly revised.

A. Control-by-interconnection

In this work we consider input-state-output port-Hamiltonian systems (ISO-PHS) of the form

  ẋ_p = F_p(x_p) ∇_{x_p}H_p(x_p) + G_p(x_p) u_p
  y_p = G_p^⊤(x_p) ∇_{x_p}H_p(x_p)        (1)

where x_p ∈ R^p is the state of the plant, F_p(x_p) ∈ R^{p×p} is the combined interconnection and damping matrix satisfying F_p(x_p) + F_p^⊤(x_p) ≤ 0, H_p(x_p) ∈ R is the Hamiltonian, u_p ∈ R^m is the input, G_p(x_p) ∈ R^{p×m} is the input mapping matrix and y_p ∈ R^m is the natural passive output corresponding to the input u_p.

CbI assumes that the controller is a passive system that is interconnected with the plant (1) via a passive interconnection. In this work, we consider a controller subsystem with two input-output ports, described by the ISO-PHS

  [ẋ_c; −y_{c1}; −y_{c2}] = K(x_c) [∇_{x_c}H_c; u_{c1}; u_{c2}],
  K(x_c) := [K_{11}(x_c), K_{12}(x_c), K_{13}(x_c); K_{21}(x_c), K_{22}(x_c), K_{23}(x_c); K_{31}(x_c), K_{32}(x_c), K_{33}(x_c)]        (2)

where x_c ∈ R^c is the state of the controller, H_c(x_c) ∈ R is the controller Hamiltonian, u_{c1}, y_{c1} ∈ R^m and u_{c2}, y_{c2} ∈ R^r are passive input-output pairs and K(x_c) ∈ R^{(c+m+r)×(c+m+r)} satisfies K(x_c) + K^⊤(x_c) ≤ 0 [15].

The controller system (2) can be interconnected with the plant (1) via the passive interconnection

  u_p = −y_{c1},  u_{c1} = y_p,        (3)

resulting in the closed-loop dynamics

  [ẋ_p; ẋ_c; −y_{c2}] = F_cl [∇_{x_p}H_p; ∇_{x_c}H_c; u_{c2}],
  F_cl := [F_p + G_p K_{22} G_p^⊤, G_p K_{21}, G_p K_{23}; K_{12} G_p^⊤, K_{11}, K_{13}; K_{32} G_p^⊤, K_{31}, K_{33}]        (4)

where u_{c2}, y_{c2} is a passive input-output pair of the interconnected system. Noting that K + K^⊤ ≤ 0, the closed-loop interconnection and damping structure F_cl satisfies F_cl + F_cl^⊤ ≤ 0 also.

In the case of stabilisation, the objective is to construct the controller functions H_c(x_c), K(x_c) to ensure the existence of Casimirs which statically relate the controller states to functions of the plant states,

  x_c = f_c(x_p),        (5)

for f_c(x_p) ∈ R^c. The Casimir functions and controller initial conditions are then designed to assign a desirable minimum to the total energy function

  W(x_p) = H(x_p) + H_c(x_c)|_{x_c = f_c(x_p)}.        (6)

It is noted that the Lyapunov candidate W(x_p) can be generalised to a function of H, H_c and the Casimirs x_c − f_c(x_p) [15]. Methods for ensuring the existence of, and constructing, Casimirs have been reported in [7] and the references therein.
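As an aside, the closed-loop structure (4) is easy to check numerically. The following minimal Python sketch is not taken from the paper: the dimensions and matrices are arbitrary placeholders, used only to illustrate that assembling F_cl from dissipative F_p and K preserves the negative semi-definite symmetric part.

```python
import numpy as np

# Hypothetical dimensions: plant state p_dim, controller state c_dim, ports m and r.
p_dim, c_dim, m, r = 4, 2, 1, 1
rng = np.random.default_rng(0)

def dissipative(n):
    """Random matrix whose symmetric part is negative semi-definite."""
    J = rng.standard_normal((n, n)); J = J - J.T      # skew-symmetric part
    R = rng.standard_normal((n, n)); R = R @ R.T      # positive semi-definite part
    return J - R

Fp = dissipative(p_dim)                  # plant structure, Fp + Fp^T <= 0
Gp = rng.standard_normal((p_dim, m))     # plant input matrix
K = dissipative(c_dim + m + r)           # controller structure (2), K + K^T <= 0

# Block slices of K ordered as in (2): [x_c | u_c1 | u_c2]
c, u1, u2 = slice(0, c_dim), slice(c_dim, c_dim + m), slice(c_dim + m, c_dim + m + r)
K11, K12, K13 = K[c, c], K[c, u1], K[c, u2]
K21, K22, K23 = K[u1, c], K[u1, u1], K[u1, u2]
K31, K32, K33 = K[u2, c], K[u2, u1], K[u2, u2]

# Closed-loop structure matrix F_cl of (4) after the interconnection (3)
Fcl = np.block([
    [Fp + Gp @ K22 @ Gp.T, Gp @ K21, Gp @ K23],
    [K12 @ Gp.T,           K11,      K13     ],
    [K32 @ Gp.T,           K31,      K33     ],
])

print("max eigenvalue of Fcl + Fcl^T:", np.linalg.eigvalsh(Fcl + Fcl.T).max())  # <= 0 up to round-off
```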
B. Underactuated mechanical systems

The primary objective of this work is to apply CbI to the class of underactuated mechanical systems, described by the dynamics

  [q̇; ṗ] = [0_{n×n}, I_n; −I_n, 0_{n×n}] [∇_q H; ∇_p H] + [0_{n×m}; G] u
  H(q, p) = ½ p^⊤ M^{-1}(q) p + V(q),  T(q, p) := ½ p^⊤ M^{-1}(q) p
  y = G^⊤ ∇_p H,        (7)

where q ∈ R^n, p ∈ R^n are the configuration and momentum vectors, respectively, u ∈ R^m is the input, M(q) = M^⊤(q) > 0 is the inertia matrix and y is the natural passive output corresponding to the input u. The input mapping matrix G is assumed to be constant and have the structure

  G = [I_m; 0_{(n−m)×m}],        (8)

where n − m < n is the degree of underactuation of the system¹. The Hamiltonian H(q, p) is the sum of the kinetic energy T(q, p) and the potential energy V(q), which allows the gradient of H with respect to q to be written as

  ∇_q H(q, p) = ∇_q T(q, p) + ∇_q V(q).        (9)

A full-rank left-annihilator for the input mapping matrix (8) is defined as

  G^⊥ = [0_{(n−m)×m}, I_{(n−m)}],        (10)

which satisfies G^⊥ G = 0_{(n−m)×m}.

In the subsequent development, we will require an alternate representation of the gradient of the kinetic energy with respect to configuration, ∇_q T(q, p). Noting that the kinetic energy is quadratic in p, the gradient ∇_q T(q, p) can always be factored into the form

  ∇_q T(q, p) = E(q, p) M^{-1}(q) p,        (11)

for some matrix E(q, p) ∈ R^{n×n}. This has been previously noted in [16] using the Christoffel symbols. Note, however, that the matrix E(q, p) is non-unique; in this work we will use the representation given by

  ∇_q T(q, p) = ½ (∂/∂q [M^{-1}(q) p])^⊤ p = E(q, p) M^{-1}(q) p,
  E(q, p) := ½ (∂/∂q [M^{-1}(q) p])^⊤ M(q).        (12)

In constructing a CbI interpretation of total energy shaping it is useful to define a virtual input-output pair for the system (7) by defining the input

  u_v = G u,        (13)

which allows the system to be written similarly to a fully-actuated system as

  [q̇; ṗ] = [0_{n×n}, I_n; −I_n, 0_{n×n}] [∇_q H; ∇_p H] + [0_{n×n}; I_n] u_v
  y_v = ∇_p H,        (14)

where u_v, y_v ∈ R^n. From the definition (13), it is clear that any input u_v must satisfy

  G^⊥ u_v = 0_{(n−m)×1},        (15)

which will be ensured in the subsequent control design. Assuming that (15) holds, the input u can be recovered as a function of u_v by

  u = G^⊤ u_v.        (16)

The advantage of constructing the virtual input-output pair is that the virtual output now describes the full velocity vector,

  y_v = M^{-1}(q) p = q̇,        (17)

a property that will be exploited when shaping the potential energy.

¹ The assumed structure of G requires that the first m configuration coordinates are chosen to be collocated with the actuators. This class of dynamics falls into the broader class of ISO-PHS (1). For a more general input mapping matrix Ḡ(q) ∈ R^{n×m}, there exists a change of coordinates recovering the structure (8) if the columns of Ḡ(q) are involutive.
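The factorisation (11)-(12) can be verified numerically with finite differences. The inertia matrix below is a generic placeholder (not one of the benchmark systems); the check confirms that E(q,p)M^{-1}(q)p reproduces ∇_q T(q,p).

```python
import numpy as np

def M(q):
    # Hypothetical 2-DOF inertia matrix depending on q[1] only
    c = np.cos(q[1])
    return np.array([[2.0 + c, 1.0 + 0.5 * c],
                     [1.0 + 0.5 * c, 1.0]])

def Minv(q):
    return np.linalg.inv(M(q))

def jac_Minv_p(q, p, eps=1e-6):
    """d/dq [M^{-1}(q) p] by central differences, shape (n, n)."""
    n = len(q)
    J = np.zeros((n, n))
    for k in range(n):
        dq = np.zeros(n); dq[k] = eps
        J[:, k] = (Minv(q + dq) @ p - Minv(q - dq) @ p) / (2 * eps)
    return J

q, p = np.array([0.3, -0.7]), np.array([1.2, 0.4])

gradT = 0.5 * jac_Minv_p(q, p).T @ p        # nabla_q T = 1/2 (d(M^-1 p)/dq)^T p
E = 0.5 * jac_Minv_p(q, p).T @ M(q)         # E(q, p) as defined in (12)
print(np.allclose(gradT, E @ Minv(q) @ p))  # True: nabla_q T = E M^{-1} p, cf. (11)
```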
C. IDA-PBC for underactuated mechanical systems

IDA-PBC is a control design methodology whereby the control signal is designed such that the closed-loop dynamics have a port-Hamiltonian (pH) structure. When applied to underactuated mechanical systems, the target closed-loop dynamics have the structure

  [q̇; ṗ] = [0_{n×n}, M^{-1}(q) M_d(q); −M_d(q) M^{-1}(q), J_2(q, p) − G K_d G^⊤] [∇_q H_d; ∇_p H_d]
  H_d(q, p) = ½ p^⊤ M_d^{-1}(q) p + V_d(q),        (18)

where M_d(q) = M_d^⊤(q) > 0 and V_d(q) are the desired closed-loop inertia matrix and potential energy, respectively, J_2(q, p) = −J_2^⊤(q, p) is skew-symmetric and K_d = K_d^⊤ ≥ 0 is a tuning parameter used for damping injection. If V_d is minimised at the target configuration, H_d qualifies as a Lyapunov function for the closed-loop system [4].

The complexity of applying IDA-PBC to underactuated systems lies in satisfying the so-called matching conditions. These conditions require that the dynamics of the open-loop and closed-loop systems agree on the subspace perpendicular to the control signal. The structure chosen for the closed-loop system in (18) ensures that the dynamics of q agree with (7). Comparing the dynamics of p results in the condition

  G^⊥ [∇_q H − M_d(q) M^{-1}(q) ∇_q H_d − J_2(q, p) ∇_p H_d] = 0_{(n−m)×1},        (19)

which defines a PDE that should be solved for M_d(q), V_d(q), J_2(q, p). Noting the structure of the Hamiltonians, this PDE can be separated into the components involving p and those that do not:

  G^⊥ [∇_q T − M_d(q) M^{-1}(q) ∇_q T_d − J_2(q, p) M_d^{-1}(q) p] = 0_{(n−m)×1}
  G^⊥ [∇_q V − M_d(q) M^{-1}(q) ∇_q V_d] = 0_{(n−m)×1}.        (20)

These expressions are known as the kinetic energy and potential energy matching equations, respectively.
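For a candidate triple (M_d, V_d, J_2), the matching condition (19) can at least be checked pointwise before attempting an analytic solution. The sketch below reports the residual of (19) on random samples; every model function here is a hypothetical placeholder rather than a worked design, so a non-zero residual simply means the candidate does not match.

```python
import numpy as np

n, m = 2, 1
G = np.vstack([np.eye(m), np.zeros((n - m, m))])            # structure (8)
G_perp = np.hstack([np.zeros((n - m, m)), np.eye(n - m)])   # annihilator (10)

def grad(f, x, eps=1e-6):
    """Central-difference gradient of a scalar function."""
    g = np.zeros_like(x)
    for k in range(len(x)):
        d = np.zeros_like(x); d[k] = eps
        g[k] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

# Hypothetical open-loop data and a candidate closed-loop design
M  = lambda q: np.array([[2.0 + np.cos(q[1]), 0.3], [0.3, 1.0]])
V  = lambda q: 9.81 * np.cos(q[1])
Md = lambda q: np.array([[1.5, 0.2], [0.2, 0.8]])            # candidate inertia (constant)
Vd = lambda q: 10.0 * (q[0] ** 2 + q[1] ** 2)                # candidate potential
J2 = lambda q, p: np.zeros((n, n))                           # candidate free term

def residual(q, p):
    H  = lambda qq: 0.5 * p @ np.linalg.inv(M(qq)) @ p + V(qq)
    Hd = lambda qq: 0.5 * p @ np.linalg.inv(Md(qq)) @ p + Vd(qq)
    dHd_dp = np.linalg.inv(Md(q)) @ p
    return G_perp @ (grad(H, q) - Md(q) @ np.linalg.inv(M(q)) @ grad(Hd, q)
                     - J2(q, p) @ dHd_dp)

rng = np.random.default_rng(1)
worst = max(np.abs(residual(rng.standard_normal(n), rng.standard_normal(n))).max()
            for _ in range(100))
print("largest residual of (19) over the samples:", worst)   # ~0 only for a matching design
```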
D. Contributions

The objective of this work is to construct a CbI interpretation of IDA-PBC when applied to underactuated mechanical systems. The contributions of this work are threefold:

C.1 ISO-PHS with Casimirs that statically relate states are considered, and a closed-form solution to remove the Casimirs by reducing the dimension of the state vector is proposed. This solution can be applied to the closed-loop dynamics of CbI implementations of the form (4) to describe the resulting dynamics as a function of x_p only.

C.2 A CbI controller for underactuated mechanical systems of the form (2) is proposed and the resulting closed loop is shown to be equivalent to the well-known dynamics (18). The CbI interpretation generates alternate matching conditions to the expressions (20), describing constraints on the added mass and added potential energy.

C.3 Using the alternate matching conditions, it is shown that the kinetic energy matching equations reduce to ODEs in the special case of underactuation degree one where the mass matrix is a function of only one configuration coordinate. It is demonstrated that numerical methods can be utilised in such cases to avoid solving these expressions analytically.

E. Related works

Significant attention has been given to solving the matching equations (20) in recent decades. In [3], [17] it was shown that if the system is under-actuated degree one and the mass matrix depends on a single un-actuated coordinate, the kinetic energy matching condition can be simplified to an ODE. Using a novel parametrisation of J_2(q, p), a general solution for under-actuated degree one systems was proposed in [4] under the assumption that the mass matrix depends only on the actuated coordinates. This approach was extended in [18] using a momentum transformation to simplify the matching equations. More recently, solutions to the potential energy matching equations were considered in [19] under the assumption that the mass matrix and potential energy functions depend on only one variable. Finally, a general solution for the special case of 2 degree-of-freedom systems was proposed in [6]. The studies [20], [21] considered the effects of friction on the closed-loop stability under IDA-PBC.

In recent works, alternate approaches to constructing solutions to the matching equations have been explored. The existence of conservative forces that cannot be factorised into a skew-symmetric matrix J_2(q, p) was investigated in [22], which resulted in alternate matching equations. Implicit system representations were used in [23] to construct solutions in an over-parameterised space where the closed-loop dynamics were subject to constraints. By working in the larger dimension, a solution to an under-actuated degree 2 crane system was proposed. Pfaffian differential equations were utilised in [24], which resulted in the kinetic energy PDEs being converted to an alternate form which admits simpler solutions.

Some authors have investigated the possibility of avoiding the matching equations altogether by considering the control signal to be a CbI. The work [10] relied on a Lagrangian structure and several technical assumptions to verify the existence of a second passive output corresponding to the input u. Using this second output, a stabilising control was designed that ensured stability without requiring a solution to the matching PDEs. A similar approach was proposed in [12], [11], where a second passive output was utilised and the control was assumed to have a PID structure.
III. CASIMIR REDUCTION

In this section, a method for reducing the state dimension of ISO-PHS with Casimirs is derived. The reduction method applies to general ISO-PHS with Casimirs and can be directly applied to the resulting closed-loop dynamics of CbI schemes of the form (4) to describe the system as a function of x_p only. In the sequel, the reduction method will be used to show equivalence between the CbI controller for underactuated mechanical systems and the IDA-PBC dynamics (18). Before introducing the state reduction solution, a useful lemma is required.

Lemma 1: Consider a square block matrix of arbitrary dimension

  A = [A_{11}, A_{12}; A_{21}, A_{22}]        (21)

and assume that A_{22} is invertible. If the symmetric component of A is negative semi-definite, A + A^⊤ ≤ 0, the symmetric component of the Schur complement

  A_{11} − A_{12} A_{22}^{-1} A_{21}        (22)

is negative semi-definite also.

Proof: First note that the Schur complement of A can be computed via the congruence

  [I, −A_{21}^⊤ A_{22}^{-⊤}] A [I; −A_{22}^{-1} A_{21}] = A_{11} − A_{12} A_{22}^{-1} A_{21}.        (23)

The symmetric component of this expression is negative semi-definite as A + A^⊤ ≤ 0. ∎

The solution for reducing the dimension of ISO-PHS which exhibit Casimirs is now introduced. This development applies to systems of the form

  [ẋ_1; ẋ_2; −y] = F(x) [∇_{x_1}H; ∇_{x_2}H; u],
  F(x) := [F_{11}(x), F_{12}(x), F_{13}(x); F_{21}(x), F_{22}(x), F_{23}(x); F_{31}(x), F_{32}(x), F_{33}(x)]        (24)

where x ∈ R^{p+c} is the state of the system, which has been partitioned into x_1 ∈ R^p and x_2 ∈ R^c, H(x_1, x_2) ∈ R is the Hamiltonian, F(x) is the full-rank interconnection and damping matrix satisfying F(x) + F^⊤(x) ≤ 0, u ∈ R^m is the input and y ∈ R^m is the corresponding passive output. It is assumed that the system contains a Casimir and the states have been partitioned such that the Casimir can be written as

  x_2 = f_c(x_1),        (25)

where f_c(x_1) ∈ R^c is differentiable.

The first step in constructing a minimal system representation is defining a new set of coordinates given by

  w = x_2 − f_c(x_1) = 0_{c×1},        (26)

which is identically equal to zero by construction. The system (24) can be described in the coordinates (x_1, w) by

  [ẋ_1; −y; ẇ] = [I_p, 0_{p×c}, 0_{p×m}; 0_{m×p}, 0_{m×c}, I_m; −∂f_c/∂x_1, I_c, 0_{c×m}] [ẋ_1; ẋ_2; −y]
               = [I_p, 0_{p×c}, 0_{p×m}; 0_{m×p}, 0_{m×c}, I_m; −∂f_c/∂x_1, I_c, 0_{c×m}] F [I_p, 0_{p×m}, −∂^⊤f_c/∂x_1; 0_{c×p}, 0_{c×m}, I_c; 0_{m×p}, I_m, 0_{m×c}] [∇_{x_1}H_r; u; ∇_w H_r]
               = F̄ [∇_{x_1}H_r; u; ∇_w H_r],  F̄ := [F̄_{11}, F̄_{12}, F̄_{13}; F̄_{21}, F̄_{22}, F̄_{23}; F̄_{31}, F̄_{32}, F̄_{33}],        (27)

where

  H_r(x_1, w) := H(x_1, w + f_c(x_1))
  F̄_{11}(x_1) = F_{11}(x)|_{x_2=f_c(x_1)}
  F̄_{12}(x_1) = F_{13}(x)|_{x_2=f_c(x_1)}
  F̄_{13}(x_1) = [F_{12}(x) − F_{11}(x) ∂^⊤f_c/∂x_1]|_{x_2=f_c(x_1)}
  F̄_{21}(x_1) = F_{31}(x)|_{x_2=f_c(x_1)}
  F̄_{22}(x_1) = F_{33}(x)|_{x_2=f_c(x_1)}
  F̄_{23}(x_1) = [F_{32}(x) − F_{31}(x) ∂^⊤f_c/∂x_1]|_{x_2=f_c(x_1)}
  F̄_{31}(x_1) = [F_{21}(x) − (∂f_c/∂x_1) F_{11}(x)]|_{x_2=f_c(x_1)}
  F̄_{32}(x_1) = [F_{23}(x) − (∂f_c/∂x_1) F_{13}(x)]|_{x_2=f_c(x_1)}
  F̄_{33}(x_1) = [F_{22}(x) − F_{21}(x) ∂^⊤f_c/∂x_1 − (∂f_c/∂x_1) F_{12}(x) + (∂f_c/∂x_1) F_{11}(x) ∂^⊤f_c/∂x_1]|_{x_2=f_c(x_1)}.        (28)

Recalling that w is identically equal to zero, ẇ is also equal to zero. Consequently, the final row of (27) is a constraint that needs to be resolved to construct a minimal system representation. There are two methods of reduction that will be considered. Firstly, in the transformed coordinates (27) it can occur that one or more columns of F̄_{⋆3} are identically zero. Without loss of generality, it is assumed that the first d columns of F̄_{⋆3} are equal to zero. To remove these zero columns and the corresponding rows, the full-rank matrix B is defined as

  B = [0_{d×(c−d)}; I_{(c−d)}],        (29)

which acts to select the non-zero columns of F̄_{⋆3}. The zero columns and corresponding rows are removed from (27) by

  [ẋ_1; −y; B^⊤ẇ] = F̄_B [∇_{x_1}H_r; u; B^⊤∇_w H_r],
  F̄_B := [F̄_{11}, F̄_{12}, F̄_{13}B; F̄_{21}, F̄_{22}, F̄_{23}B; B^⊤F̄_{31}, B^⊤F̄_{32}, B^⊤F̄_{33}B],        (30)

which does not modify the system dynamics. Note that as F̄ + F̄^⊤ ≤ 0, F̄_B + F̄_B^⊤ ≤ 0 also. Using this representation, a method for resolving the remaining constraint equations is presented under the assumption that B^⊤F̄_{33}B is full rank.
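Lemma 1 is easy to spot-check numerically: draw a random matrix with negative semi-definite symmetric part, form the Schur complement (22), and inspect the eigenvalues of its symmetric part. The snippet below is only such a spot check with arbitrary dimensions.

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 3, 2

# Random A with A + A^T <= 0: a skew-symmetric part minus a Gram matrix
S = rng.standard_normal((n1 + n2, n1 + n2)); S = S - S.T
R = rng.standard_normal((n1 + n2, n1 + n2)); R = R @ R.T
A = S - R

A11, A12 = A[:n1, :n1], A[:n1, n1:]
A21, A22 = A[n1:, :n1], A[n1:, n1:]          # A22 is a.s. invertible here

schur = A11 - A12 @ np.linalg.inv(A22) @ A21                 # Schur complement (22)
print(np.linalg.eigvalsh(schur + schur.T).max() <= 1e-10)    # True: symmetric part is nsd
```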
Proposition 1: Consider the pH system (24) with Casimir (25). If the matrix B^⊤F̄_{33}B is full rank for all x_1, the system can be described by a reduced-order model

  [ẋ_1; −y] = F_r(x_1) [∇_{x_1}H_r; u],        (31)

where

  H_r(x_1) = H(x_1, f_c(x_1))
  F_r(x_1) = [F̄_{11}, F̄_{12}; F̄_{21}, F̄_{22}] − [F̄_{13}B; F̄_{23}B] (B^⊤F̄_{33}B)^{-1} [B^⊤F̄_{31}, B^⊤F̄_{32}]        (32)

and F_r(x_1) satisfies F_r(x_1) + F_r^⊤(x_1) ≤ 0.

Proof: The expression B^⊤ẇ, defined in (30), is identically equal to 0_{(c−d)×1} by construction. Note, however, that the gradient B^⊤∇_w H_r is not necessarily equal to zero. Assuming that B^⊤F̄_{33}B is full rank, the expression B^⊤∇_w H_r can be described as

  B^⊤∇_w H_r = −(B^⊤F̄_{33}B)^{-1} [B^⊤F̄_{31}, B^⊤F̄_{32}] [∇_{x_1}H_r; u].        (33)

Substituting this expression into the dynamics (30) resolves to the reduced dynamics (31). To verify that F_r + F_r^⊤ ≤ 0, note that F_r is the Schur complement of F̄_B, which satisfies F̄_B + F̄_B^⊤ ≤ 0. It follows that F_r + F_r^⊤ ≤ 0 by application of Lemma 1. ∎

Proposition 1 showed that an ISO-PHS that exhibits a Casimir function can be described in a reduced state-space. The class of dynamics that are derived from application of CbI (4) and that result in a Casimir of the form (5) falls into the class of systems (24). The following corollary tailors the Casimir reduction to this important sub-class of dynamics.

Corollary 1: If the closed-loop dynamics of a CbI scheme (4) exhibit a Casimir of the form (5), the system can be equivalently expressed in the form (31) where

  x_1 = x_p
  H_r(x_p) = H_p(x_p) + H_c(f_c(x_p))
  F̄_{11} = F_p + G_p K_{22} G_p^⊤
  F̄_{12} = G_p K_{23}
  F̄_{13} = G_p K_{21} − (F_p + G_p K_{22} G_p^⊤) ∂^⊤f_c/∂x_p
  F̄_{21} = K_{32} G_p^⊤
  F̄_{22} = K_{33}
  F̄_{23} = K_{31} − K_{32} G_p^⊤ ∂^⊤f_c/∂x_p
  F̄_{31} = K_{12} G_p^⊤ − (∂f_c/∂x_p)(F_p + G_p K_{22} G_p^⊤)
  F̄_{32} = K_{13} − (∂f_c/∂x_p) G_p K_{23}
  F̄_{33} = K_{11} − K_{12} G_p^⊤ ∂^⊤f_c/∂x_p − (∂f_c/∂x_p) G_p K_{21} + (∂f_c/∂x_p)(F_p + G_p K_{22} G_p^⊤) ∂^⊤f_c/∂x_p        (34)

and B is suitably chosen as per (29) using the expressions F̄_{⋆3}. The arguments have been dropped from the definitions of F̄_{⋆⋆}(x_p) for the sake of readability.

Proof: The result follows from direct application of Proposition 1 to the dynamics (4). ∎
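Since the reduction in Proposition 1 and Corollary 1 is purely algebraic, it can be scripted directly. The sketch below is an illustration under stated assumptions, not a controller design: it uses random dissipative placeholder matrices, a hypothetical linear Casimir map f_c(x_p) = C x_p, and B = I (no identically-zero columns), then builds the blocks (34) and the reduced structure matrix (32).

```python
import numpy as np

rng = np.random.default_rng(3)
p_dim, c_dim, m = 4, 2, 1

def dissipative(n):
    J = rng.standard_normal((n, n)); J = J - J.T
    R = rng.standard_normal((n, n)); R = R @ R.T
    return J - R

Fp = dissipative(p_dim)
Gp = rng.standard_normal((p_dim, m))
K = dissipative(c_dim + m + m)                     # controller (2) with r = m here
c, u1, u2 = slice(0, c_dim), slice(c_dim, c_dim + m), slice(c_dim + m, c_dim + 2 * m)
K11, K12, K13 = K[c, c], K[c, u1], K[c, u2]
K21, K22, K23 = K[u1, c], K[u1, u1], K[u1, u2]
K31, K32, K33 = K[u2, c], K[u2, u1], K[u2, u2]

C = rng.standard_normal((c_dim, p_dim))            # Jacobian of a linear Casimir x_c = C x_p
Fp_cl = Fp + Gp @ K22 @ Gp.T

# Blocks (34); rows/columns ordered as (x_p, u_c2, w)
F11, F12, F13 = Fp_cl, Gp @ K23, Gp @ K21 - Fp_cl @ C.T
F21, F22, F23 = K32 @ Gp.T, K33, K31 - K32 @ Gp.T @ C.T
F31, F32 = K12 @ Gp.T - C @ Fp_cl, K13 - C @ Gp @ K23
F33 = K11 - K12 @ Gp.T @ C.T - C @ Gp @ K21 + C @ Fp_cl @ C.T

# Reduced structure matrix (32) with B = I (assumes F33 is invertible)
top = np.block([[F11, F12], [F21, F22]])
left = np.vstack([F13, F23])
right = np.hstack([F31, F32])
Fr = top - left @ np.linalg.inv(F33) @ right
print("Fr + Fr^T nsd:", np.linalg.eigvalsh(Fr + Fr.T).max() <= 1e-9)
```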
+ MECHANICAL SYSTEMS
898
+ In this section, a control-by-interconnection scheme for
899
+ under-actuated mechanical systems is presented. A dynamic
900
+ 2-port control system is introduced with the intention that it
901
+ will be interconnected to the plant (14) via one of the ports.
902
+ The controller states are constructed to be statically related to
903
+ the plant states after interconnection, resulting in Casimirs.
904
+ By applying Proposition 1, the closed-loop dynamic are
905
+ defined in a reduced space in which the dynamics coincide
906
+ with standard total energy-shaping control (18).
907
+ The proposed CbI scheme is shown in Figure 1. The
908
+ intention of this control subsystem is to interconnect with the
909
+ plant (14) via the uc1, yc1 power port. The second input uc2
910
+ is available for subsequent control design, such as damping
911
+ injection. The terms M(qa2), E(qa2, pa) are the plant mass
912
+ matrix (7) and factorisation of the kinetic energy gradient
913
+ (12) evaluated at the controller states whereas J(qa2, pa) =
914
+ −J(qa2, pa)⊤ ∈ Rn×n is a skew-symmetric matrix to be
915
+ chosen. The three-port storage element Ha(qa1, qa2, pa) has
916
+ states qa1, qa2, pa ∈ Rn and energy function similar to
917
+ mechanical systems,
918
+ Ha(qa1, qa2, pa) = 1
919
+ 2p⊤
920
+ a M −1
921
+ a (qa2)pa
922
+
923
+ ��
924
+
925
+ :=Ta(qa2,pa)
926
+ + Vd(qa2) − V (qa1)
927
+
928
+ ��
929
+
930
+ :=Va(qa1,qa2)
931
+ ,
932
+ (35)
933
+
934
+ Fig. 1: Mass shaping as CbI for under-actuated mechanical
935
+ systems.
936
+ where M −1
937
+ a (qa2) is the inverse added mass, Ta(qa2, pa) is
938
+ the added kinetic energy, Vd(qa2) is the desired closed-loop
939
+ potential energy, V (qa1) is the plant potential energy function
940
+ (7) evaluated at the plant state qa1 and Va(qa1, qa2) is the
941
+ total added potential energy. It is important to note that,
942
+ although M −1
943
+ a (qa2) is represented as a matrix inverse, it
944
+ need not be invertible nor positive. Indeed, it will be shown
945
+ in subsequent developments that the key requirement is that
946
+ M −1
947
+ d (q) := M −1(q) + M −1
948
+ a (q)
949
+ (36)
950
+ should be positive definite.
951
+ In subsequent analysis it will be shown that the intercon-
952
+ nection of the control system with the plant (14) via the
953
+ interconnection
954
+ uv = −yc1
955
+ uc1 = yv,
956
+ (37)
957
+ yields Casimirs
958
+
959
+
960
+ qa1
961
+ qa2
962
+ pa
963
+
964
+ � =
965
+
966
+
967
+ In
968
+ 0n×n
969
+ In
970
+ 0n×n
971
+ 0n×n
972
+ In
973
+
974
+
975
+
976
+ q
977
+ p
978
+
979
+
980
+ ��
981
+
982
+ fc(xp)
983
+ .
984
+ (38)
985
+ Assuming the Casimirs exist, some intuition regarding the
986
+ construction of the control system in Figure 1 can be
987
+ provided. Both qa1, qa2 were constructed to be equal to q.
988
+ Firstly ˙qa1 is equal to ˙q by interconnection with the plant
989
+ virtual output yv via a 1-junction. To verify a similar relation
990
+ for qa2, assume that qa2 = q, pa = p holds which results
991
+ in ∇paHa = M −1
992
+ a (q)p and uc1 + ∇paHa = M −1
993
+ d p. With
994
+ this in mind, the transformer can be seen to reconstruct the
995
+ velocity ˙q = M −1(q)p for the bottom 1-junction, resulting
996
+ in ˙qa1 = ˙q.
997
+ To construct a Casimir pa = p, first note from (7) and
998
+ (12) that the plant momentum dynamics can be expressed as
999
+ ˙p = −∇qV − E(q, p)M −1(q)p + uv.
1000
+ (39)
1001
+ The control structure acts to remove these forces from the
1002
+ plant via the right side of the control structure and re-
1003
+ introduce them via the top 0-junction where they are shared
1004
+ with the dynamics of pa. The ˙qa1 bond acts to cancel the
1005
+ gravity term from the plant −∇qV . Recalling that the bottom
1006
+ 1-junction has flow equal to M −1(q)p, the right-side gyrator
1007
+ cancels the term −E(q, p)M −1(q)p from the plant. The left-
1008
+ side gyrator then re-introduces the force −E(q, p)M −1(q)p
1009
+ via the top 0-junction where it is shared between ˙p and ˙pa,
1010
+ establishing the desired Casimir.
1011
+ The claimed Casimir (38) is now formalised in the follow-
1012
+ ing Proposition. For this development, note that the gradients
1013
+ of the added energy Ha(·) satisfy
1014
+ ∇qa1Ha = −∇qa1V
1015
+ ∇qa2Ha = ∇qa2Ta + ∇qa2Vd
1016
+ ∇qa2Ta = 1
1017
+ 2
1018
+ ∂⊤
1019
+ ∂qa2
1020
+
1021
+ M −1
1022
+ a (qa2)pa
1023
+
1024
+ pa
1025
+ ∇paHa = M −1
1026
+ a (qa2)pa
1027
+ (40)
1028
+ and the expressions A(·), B(·), C(·) in Figure 1 can be
1029
+ evaluated as
1030
+ A(qa2, pa, uc1) =M −1(qa2)Md(qa2) [uc1 + ∇paHa]
1031
+ B(qa2, pa, uc1) =∇qa2Ha − E⊤(qa2, pa)∇paHa
1032
+ − M(qa2)J(qa2, pa)Md(qa2)
1033
+ × [uc1 + ∇paHa]
1034
+ C(qa2, pa, uc1) =Md(qa2)M −1(qa2)∇qa2Ha
1035
+ − Md(qa2)M −1(qa2)E⊤(qa2, pa)∇paHa
1036
+ − Md(qa2)J(qa2, pa)Md(qa2)
1037
+ × [uc1 + ∇paHa] ,
1038
+ (41)
1039
+ with Md(·) defined in (36). To ensure that the Casimir exists,
1040
+ a number of requirements are imposed on the selection of
1041
+ the added inverse mass M −1
1042
+ a (q) and closed-loop potential
1043
+ energy Vd(q) which are equivalent of the standard matching
1044
+ conditions used in IDA-PBC (20).
1045
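Before imposing the Casimir conditions, the only structural requirement introduced so far is that the sum (36) be positive definite. The small scan below, with purely hypothetical matrices that are not a design from the paper, illustrates that M_a^{-1} may itself be indefinite while M_d^{-1} stays positive definite over the configurations of interest.

```python
import numpy as np

def Minv(q1):
    # Hypothetical plant inverse inertia (2-DOF, depends on one coordinate)
    c = np.cos(q1)
    M = np.array([[2.0 + c, 1.0 + 0.5 * c], [1.0 + 0.5 * c, 1.0]])
    return np.linalg.inv(M)

def Mainv(q1):
    # Hypothetical added inverse mass: indefinite on its own (det < 0)
    return np.array([[0.5, -0.4], [-0.4, -0.1]])

grid = np.linspace(-np.pi, np.pi, 61)
min_eig = min(np.linalg.eigvalsh(Minv(q1) + Mainv(q1)).min() for q1 in grid)
print("smallest eigenvalue of Md^{-1} over the grid:", min_eig)  # must stay > 0 for (36)
```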
+ Proposition 2: Consider the control system in Figure 1
1046
+ and assume that it is interconnected to the plant (14) via the
1047
+ interconnection (37). If M −1
1048
+ a (qa), Vd(qa) are chosen such
1049
+ that
1050
+ G⊥C(qa2, pa, uc1)|qa2=q,pa=p = G⊥∇qV
1051
+ (42)
1052
+ and the controller states are initialised as qa1(0) = qa2(0) =
1053
+ q(0), pa(0) = p(0), the Casimir (38) holds for all time.
1054
+ Proof: Consider that at some time instant T (38) holds,
1055
+ implying that
1056
+ qa1(T) = qa2(T) = q(T), pa(T) = p(T).
1057
+ (43)
1058
+ It is shown that if (42) is satisfied, then the derivatives of
1059
+ the states also agree
1060
+ ˙qa1(T) = ˙qa2(T) = ˙q(T), ˙pa(T) = ˙p(T),
1061
+ (44)
1062
+
1063
+ establishing the existence of a Casimir for all future time.
1064
+ We proceed by first establishing the relationship for the
1065
+ configuration vector. From (37) and (14) uc = M −1(q)p
1066
+ which establishes ˙qa1(T) = ˙q(T). The input uc = M −1(q)p
1067
+ is substituted into A(·) (41) to find
1068
+ A|t=T = M −1(qa2)Md(qa2)
1069
+
1070
+ M −1(q)p + M −1
1071
+ a (qa2)pa
1072
+
1073
+ |t=T
1074
+ = M −1(q)p|t=T
1075
+ = ˙q|t=T ,
1076
+ (45)
1077
+ confirming that ˙qa2(T) = ˙q(T).
1078
+ Next we consider the behaviour of the momentum states.
1079
+ First note that, from the bond graph in Figure 1 and the
1080
+ definition (16), the plant input uv is given by
1081
+ u = G⊤uv
1082
+ = G⊤ [Guc2 − C(qa2, pa, uc1) + ∇qa1V (qa1)]
1083
+ = uc2 − G⊤ [C(qa2, pa, uc1) − ∇qa1V (qa1)] .
1084
+ (46)
1085
+ Using the control definition (46) and the condition (42), the
1086
+ plant dynamics (7) can be expanded as
1087
+ ˙p = − ∇qT(q, p) −
1088
+ �G⊤∇qV (q)
1089
+ G⊥∇qV (q)
1090
+
1091
+ + Gu
1092
+ = − ∇qT(q, p) −
1093
+
1094
+ G⊤∇qV (q)
1095
+ G⊥∇qV (q)
1096
+
1097
+ + G
1098
+
1099
+ uc2 − G⊤ [C(qa2, pa, uc1) − ∇qa1V (qa1)]
1100
+
1101
+ = − ∇qT(q, p) + Guc2
1102
+
1103
+ �G⊤ {C(qa2, pa, uc1) + ∇qV (q) − ∇qa1V (qa1)}
1104
+ G⊥∇qV (q)
1105
+
1106
+ (47)
1107
+ Note that at time T, ∇qV (q)|t=T
1108
+ = ∇qa1V (qa1)|t=T .
1109
+ Additionally recall the assumption (42) which allows the
1110
+ simplification
1111
+ ˙p|t=T = − ∇qT(q, p) − C(qa2, pa, uc1) + Guc2.
1112
+ (48)
1113
+ Recalling the identity (45), the dynamics of pa at time T can
1114
+ be expanded to
1115
+ ˙pa = Gv − E(qa2, pa)A(·)|t=T − C(qa2, pa, uc1)|t=T
1116
+ = Gv − ∇qT(q, p)|t=T − C(qa2, pa, uc1)|t=T ,
1117
+ (49)
1118
+ which agrees with (48). As (48) and (49) agree at time T,
1119
+ (44) is verified for the momentum states. If at the initial time
1120
+ t = 0 we have qa(0) = q(0), pa(0) = p(0), it follows that
1121
+ qa(t) = q(t) and pa(t) = p(t) for all time via integration,
1122
+ completing the proof.
1123
+ Proposition 2 has established that the Casimir (38) holds
1124
+ under some technical assumptions that will be verified in
1125
+ subsequent design. Before proceeding, we note that the
1126
+ control subsystem in Figure 1 can be written in the form
1127
+ (2) with
1128
+ xc =
1129
+
1130
+ q⊤
1131
+ a1
1132
+ q⊤
1133
+ a2
1134
+ p⊤
1135
+ a
1136
+ �⊤
1137
+ Hc(qa1, qa2, pa) = Ha(qa1, qa2, pa)
1138
+ K11(qa2, pa) =
1139
+
1140
+
1141
+ 0n×n
1142
+ 0n×n
1143
+ 0n×n
1144
+ 0n×n
1145
+ 0n×n
1146
+ M −1Md
1147
+ 0n×n
1148
+ −MdM −1
1149
+ D − D⊤ + MdJMd
1150
+
1151
+
1152
+ K12(qa2, pa) =
1153
+
1154
+
1155
+ In
1156
+ M −1Md
1157
+ MdJMd − D⊤
1158
+
1159
+
1160
+ K13 =
1161
+
1162
+
1163
+ 0n×m
1164
+ 0n×m
1165
+ G
1166
+
1167
+
1168
+ K21(qa2, pa) =
1169
+
1170
+ −In
1171
+ −MdM −1
1172
+ D + MdJMd
1173
+
1174
+ K31 =
1175
+
1176
+ 0m×n
1177
+ 0m×n
1178
+ −G⊤�
1179
+ K22(qa2, pa) = MdJMd
1180
+ K23 = G
1181
+ K32 = −G⊤
1182
+ K33 = 0m×m
1183
+ D(qa2, pa) = MdM −1E⊤.
1184
+ (50)
1185
+ In the subsequent developments it is assumed that the
1186
+ requisite (42) of Proposition 2 holds, implying qa1(t) =
1187
+ qa2(t) = q(t), pa(t) = p(t). Condition (42) will be verified
1188
+ by choice of M −1
1189
+ a
1190
+ and Vd. Assuming the Casimir holds, the
1191
+ expressions for A(·), B(·), C(·) in (41) can be simplified to
1192
+ A(q, p) =M −1(q)p
1193
+ B(q, p) =∇qTa(q, p) + ∇qVd(q, p) − E⊤(q, p)M −1
1194
+ a (q)p
1195
+ − M(q)J(q, p)p
1196
+ C(q, p) =Md(q)
1197
+
1198
+ M −1(q)∇qTa(q, p) + M −1(q)∇qVd(q, p)
1199
+ −M −1(q)E⊤(q, p)M −1
1200
+ a (q)p − M −1(q)J(q, p)
1201
+
1202
+ (51)
1203
+ Recalling the definition of ∇qaTa in (40), it is noted that
1204
+ C(·) contains some terms which are quadratic in p and some
1205
+ that are functions of q only. The function C(·) is divided into
1206
+ C(q, p) =CKE(q, p) + CP E(q)
1207
+ CKE(q, p) =
1208
+
1209
+ M −1
1210
+ a (q) + M −1(q)
1211
+ �−1
1212
+
1213
+ ��
1214
+
1215
+ Md(q)
1216
+ {Y (q, p) − J(q, p)} p
1217
+ CP E(q) =
1218
+
1219
+ M −1
1220
+ a (q) + M −1(q)
1221
+ �−1 M −1(q)∇qVd,
1222
+ (52)
1223
+ where KE represents kinetic energy, PE represents poten-
1224
+ tial energy and Y is defined as
1225
+ Y (q, p) =1
1226
+ 2M −1(q)∂⊤
1227
+ ∂q
1228
+
1229
+ M −1
1230
+ a (q)p
1231
+
1232
+ − 1
1233
+ 2
1234
+
1235
+ ∂q
1236
+
1237
+ M −1(q)p
1238
+
1239
+ M −1
1240
+ a (q).
1241
+ (53)
1242
+ As Y is linear in p it can be written as
1243
+ Y (q, p) =
1244
+ n
1245
+
1246
+ i=1
1247
+ piY i(q),
1248
+ (54)
1249
+
1250
+ where
1251
+ Y i(q) =1
1252
+ 2M −1(q)∂⊤
1253
+ ∂q
1254
+
1255
+ M −1
1256
+ a (q)ei
1257
+
1258
+ − 1
1259
+ 2
1260
+
1261
+ ∂q
1262
+
1263
+ M −1(q)ei
1264
+
1265
+ M −1
1266
+ a (q).
1267
+ (55)
1268
+ The
1269
+ key
1270
+ constraint
1271
+ for
1272
+ control
1273
+ design
1274
+ is
1275
+ choosing
1276
+ M −1
1277
+ a (q), Vd(q) satisfying the matching condition (42). From
1278
+ the definition of C(·) in (51), the constraint equation is
1279
+ a function of both M −1
1280
+ a (q) and
1281
+
1282
+ M −1
1283
+ a (q) + M −1(q)
1284
+ �−1,
1285
+ making direct design of this matrix difficult. To simplify
1286
+ the design process, an alternate characterisation of (42) is
1287
+ introduced.
1288
+ In the following proposition, the inverse mass matrix,
1289
+ inverse added mass matrix, interconnection matrix and Y (·)
1290
+ are partitioned as
1291
+
1292
+ m11(q)
1293
+ m⊤
1294
+ 21(q)
1295
+ m21(q)
1296
+ m22(q)
1297
+
1298
+ =
1299
+ �G⊤
1300
+ G⊥
1301
+
1302
+ M −1(q)
1303
+
1304
+ G
1305
+ G⊥⊤�
1306
+
1307
+ ma11(q)
1308
+ m⊤
1309
+ a21(q)
1310
+ ma21(q)
1311
+ ma22(q)
1312
+
1313
+ =
1314
+ �G⊤
1315
+ G⊥
1316
+
1317
+ M −1
1318
+ a (q)
1319
+
1320
+ G
1321
+ G⊥⊤�
1322
+
1323
+ J11(q, p)
1324
+ −J⊤
1325
+ 21(q, p)
1326
+ J21(q, p)
1327
+ J22(q, p)
1328
+
1329
+ =
1330
+ �G⊤
1331
+ G⊥
1332
+
1333
+ J(q, p)
1334
+
1335
+ G
1336
+ G⊥⊤�
1337
+ �Y11(q, p)
1338
+ Y12(q, p)
1339
+ Y21(q, p)
1340
+ Y22(q, p)
1341
+
1342
+ =
1343
+ �G⊤
1344
+ G⊥
1345
+
1346
+ Y (q, p)
1347
+
1348
+ G
1349
+ G⊥⊤�
1350
+ .
1351
+ (56)
1352
+ Using the above definitions, an alternate characterisation of
1353
+ (42) is presented.
1354
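The decomposition (52)-(53) can also be checked numerically: with ∇_q T_a and E evaluated by finite differences, Y(q,p)p should equal M^{-1}∇_q T_a − M^{-1}E^⊤M_a^{-1}p. The matrices below are placeholders chosen only for this consistency check, not a worked design.

```python
import numpy as np

def Minv(q):
    c = np.cos(q[1])
    return np.linalg.inv(np.array([[2.0 + c, 0.3], [0.3, 1.0]]))

def Mainv(q):
    s = np.sin(q[1])
    return np.array([[0.5 + 0.1 * s, -0.2], [-0.2, 0.4]])

def jac_vec(f, q, eps=1e-6):
    """Jacobian d f(q)/dq of a vector-valued function."""
    n = len(q)
    J = np.zeros((len(f(q)), n))
    for k in range(n):
        d = np.zeros(n); d[k] = eps
        J[:, k] = (f(q + d) - f(q - d)) / (2 * eps)
    return J

q, p = np.array([0.2, -0.5]), np.array([0.7, -1.1])
M = np.linalg.inv(Minv(q))

dMinv_p  = jac_vec(lambda qq: Minv(qq) @ p, q)       # d/dq [M^{-1}(q) p]
dMainv_p = jac_vec(lambda qq: Mainv(qq) @ p, q)      # d/dq [Ma^{-1}(q) p]

grad_Ta = 0.5 * dMainv_p.T @ p                       # nabla_q Ta, cf. (40)
E = 0.5 * dMinv_p.T @ M                              # E(q, p) from (12)
Y = 0.5 * Minv(q) @ dMainv_p.T - 0.5 * dMinv_p @ Mainv(q)   # (53)

lhs = Y @ p
rhs = Minv(q) @ grad_Ta - Minv(q) @ E.T @ Mainv(q) @ p
print(np.allclose(lhs, rhs, atol=1e-6))              # consistency of (52)-(53)
```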
Proposition 3: The matching condition (42) is satisfied if:

• The added mass matrix M_a^{-1}(q) is chosen such that

  D(q) [Y^i(q) + Y^{i⊤}(q)] D^⊤(q) = 0_{(n−m)×(n−m)}        (57)

for all i ∈ {1, ..., n}, where

  D(q) = [(m_{21} + m_{a21})(m_{11} + m_{a11})^{-1}, −I_{n−m}].        (58)

• The desired potential energy V_d(q) satisfies

  s_1(q) G^⊥∇_q V = −s_2(q) G^⊤∇_q V_d − s_3(q) G^⊥∇_q V_d = −D(q) M^{-1}(q) ∇_q V_d,        (59)

where

  s_1(q) = (m_{22} + m_{a22}) − (m_{21} + m_{a21})(m_{11} + m_{a11})^{-1}(m_{21}^⊤ + m_{a21}^⊤)        (60)

is the Schur complement of M^{-1}(q) + M_a^{-1}(q) and

  s_2(q) = (m_{21} + m_{a21})(m_{11} + m_{a11})^{-1} m_{11} − m_{21}
  s_3(q) = (m_{21} + m_{a21})(m_{11} + m_{a11})^{-1} m_{21}^⊤ − m_{22}.        (61)

Proof: From the graph in Figure 1 and the interconnection (37), the virtual input u_v is given by

  u_v = G u = G u_{c2} − C + ∇_q V.        (62)

Recalling the structure of G and that G^⊥u_v = 0_{(n−m)×1} by (15), the un-actuated rows of (62) are exactly the matching condition (42). Collecting the terms u, u_{c2} and left-multiplying by M_a^{-1} + M^{-1} results in

  (M_a^{-1} + M^{-1}) G (u − u_{c2}) = (M_a^{-1} + M^{-1}) {−C_KE − C_PE + ∇_q V}.        (63)

Due to the structure of G in (8), the left-hand side of (63) is annihilated by D(·), defined in (58), since D(M_a^{-1} + M^{-1})G = 0_{(n−m)×m}. Left-multiplying (63) by D(·) and separating the components into those relating to the kinetic and potential energies results in

  0_{(n−m)×1} = −D (M_a^{-1} + M^{-1}) C_KE        (64)
  0_{(n−m)×1} = −D (M_a^{-1} + M^{-1}) {C_PE − ∇_q V}.        (65)

Using (52), (65) is expanded to

  0_{(n−m)×1} = D (M_a^{-1} + M^{-1}) ∇_q V − D M^{-1} ∇_q V_d,        (66)

which can be seen to agree with (59) after expanding.

Now considering the constraint on the kinetic energy expression (64), the definition (52) is substituted to find

  0_{(n−m)×n} = D {−Y + J}.        (67)

Using the relevant definitions, the first component of (67) can be solved for J_{21}(q, p) as

  J_{21}(q, p) = (m_{21} + m_{a21})(m_{11} + m_{a11})^{-1}(−Y_{11} + J_{11}) + Y_{21}.        (68)

Substituting this expression back into the second component of (67) reveals the constraint

  0_{(n−m)×(n−m)} = (m_{21} + m_{a21})(m_{11} + m_{a11})^{-1}(−Y_{12} − J_{21}^⊤) − (−Y_{22} + J_{22})
                 = (m_{21} + m_{a21})(m_{11} + m_{a11})^{-1}(−Y_{12} − Y_{21}^⊤) − (m_{21} + m_{a21})(m_{11} + m_{a11})^{-1}(−Y_{11} + J_{11})^⊤(m_{11} + m_{a11})^{-1}(m_{21}^⊤ + m_{a21}^⊤) − (−Y_{22} + J_{22})
                 = −D [−Y_{11} + J_{11}^⊤, −Y_{12} − Y_{21}^⊤; 0_{(n−m)×m}, −Y_{22} + J_{22}] D^⊤.        (69)

The term J_{22} is taken as

  J_{22} = −½ D [−Y_{11} + Y_{11}^⊤ + J_{11}^⊤ − J_{11}, −Y_{12} − Y_{21}^⊤; Y_{12}^⊤ + Y_{21}, −Y_{22} + Y_{22}^⊤] D^⊤        (70)

to solve the skew-symmetric part of this expression, where J_{11} ∈ R^{m×m} is a free skew-symmetric term. The symmetric part of (69) must also be equal to zero, implying that

  D [Y + Y^⊤] D^⊤ = 0_{(n−m)×(n−m)}.        (71)

Finally, noting that this must be true for each p_i, the condition (57) follows. ∎
+ Remark 1: The expression (57) implicitly defines a set of PDEs that must be satisfied by any choice of M_a^{-1}(q). From the definition of Y^i in (55), the first m equations describe partial differential equations involving the partial derivatives of m_{a11}, m_{a21}. The remaining n − m equations describe partial differential equations involving the partial derivatives of m_{a21}, m_{a22}. This structure can be useful for resolving the equations into a standard representation for solving.
1508
+ Corollary 2: In the special case of under-actuation degree 1, if M^{-1}, M_a^{-1} are functions of only one configuration variable q_i, the kinetic energy matching equations (57) can be reduced to the set of ODEs
+ \frac{d}{dq_i} \begin{bmatrix} m_{a21}^\top \\ m_{a22} \end{bmatrix} = g\!\left( m_{a11}, \frac{d}{dq_i} m_{a11}, M^{-1}, \frac{d}{dq_i} M^{-1} \right),    (72)
+ where g(·) ∈ R^n is a function implicitly defined by the matching conditions (57) and m_{a11}(q_i) can be chosen freely.
1532
+ Proof: Assuming that the mass matrix M^{-1} is a function only of a single configuration variable q_i, we will also impose that the added mass M_a^{-1} is a function only of the same variable. As a consequence, the matching expression (57) is now only a function of the single variable q_i. Notably, all partial derivatives of M_a^{-1} with respect to q_k, where k ≠ i, are equal to zero.
+ Noting Remark 1, the first n − 1 expressions of (57) produce differential equations involving the partial derivatives of m_{a11}, m_{a21}. The dimension of m_{a21} is 1 × (n − 1), so the first n − 1 equations can be solved simultaneously to find an expression for \frac{d}{dq_i} m_{a21}^\top. The nth expression of (57) can then be resolved for an expression for \frac{d}{dq_i} m_{a22}, which has dimension 1. Combining these expressions, the matching equations (57) can be resolved into an ODE of the form (72).
1556
+ Remark 2: Corollary 2 describes situations in which the kinetic energy matching equations can be reduced to an ODE. The solution, however, will depend on the choice of m_{a11}(q_i) and may not be globally defined. This poses the question of how the function m_{a11}(q_i) should be chosen to ensure an appropriate solution M_a^{-1}(q_i), which is itself a nonlinear control problem.
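+ As a concrete illustration of Corollary 2 and the concern raised in Remark 2, the minimal Python sketch below integrates an ODE of the form (72) and records where positivity of M^{-1} + M_a^{-1} is lost. The functions matching_rhs and Minv, and the chosen initial values, are illustrative placeholders rather than quantities taken from this paper.
+ # Sketch: integrate a reduced kinetic-energy matching ODE of the form (72)
+ # numerically and record the interval on which M^{-1}(q_i) + M_a^{-1}(q_i)
+ # stays positive definite. `matching_rhs`, `Minv`, `ma11` are hypothetical,
+ # system-specific placeholders.
+ import numpy as np
+ from scipy.integrate import solve_ivp
+ 
+ def Minv(qi):
+     # placeholder inverse mass matrix, a function of a single coordinate q_i
+     return np.linalg.inv(np.array([[2.0, np.cos(qi)], [np.cos(qi), 1.0]]))
+ 
+ def ma11(qi):
+     return 0.0          # free function, here chosen constant (cf. Remark 2)
+ 
+ def matching_rhs(qi, y):
+     # y = [ma21, ma22]; stands in for the implicitly defined g(.) in (72)
+     return -0.1 * y     # placeholder dynamics for illustration only
+ 
+ y0 = np.array([-2.0, 8.0])   # assumed values of (ma21, ma22) at q_i = 0
+ sol = solve_ivp(matching_rhs, [0.0, 0.5], y0, dense_output=True, max_step=1e-2)
+ 
+ for qi in np.linspace(0.0, 0.5, 51):
+     ma21, ma22 = sol.sol(qi)
+     Ma_inv = np.array([[ma11(qi), ma21], [ma21, ma22]])
+     lam_min = np.linalg.eigvalsh(Minv(qi) + Ma_inv)[0]
+     if lam_min <= 0.0:
+         print(f"positivity lost at q_i ~ {qi:.2f}")
+         break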
1564
+ The results of Corollary 1 describe the degrees of freedom
1565
+ that exist when constructing a solution to the added inverse
1566
+ mass matrix. Similar degrees of freedom exist in the defini-
1567
+ tion of the closed-loop potential energy that can be exploited
1568
+ to ensure positivity of the chosen function. The following
1569
+ Corollary defines a free function that can be utilised to this
1570
+ effect.
1571
+ Corollary 3: Suppose that there exists a full rank matrix-valued function K(q) ∈ R^{m×m} such that the integral
+ Γ(q) = \int K(q) G^\top \left( M_a^{-1}(q) + M^{-1}(q) \right) M(q) \, dq    (73)
+ exists. The desired closed-loop potential energy can be chosen as
+ V_d(q) = V_m(q) + V_f(Γ(q)),    (74)
+ where V_m(·) must be chosen to satisfy the potential energy matching conditions (59) and V_f(·) is a free function that does not impact the matching equations. Consequently, the matching equation (59) can be equivalently written as
+ s_1(q) G^\perp \nabla_q V = -s_2(q) G^\top \nabla_q V_m - s_3(q) G^\perp \nabla_q V_m = -D(q) M^{-1}(q) \nabla_q V_m.    (75)
1592
+ Proof: Computing the gradient of V_d results in
+ \nabla_q V_d = \nabla_q V_m + \frac{\partial^\top Γ}{\partial q} \nabla_Γ V_f = \nabla_q V_m + M \left( M_a^{-1} + M^{-1} \right) G K^\top \nabla_Γ V_f.    (76)
+ From the definition of D(q) in (58), we have the identity
+ D(q) M^{-1} M(q) \left( M_a^{-1}(q) + M^{-1}(q) \right) G = \begin{bmatrix} I_m & \star \end{bmatrix} G = 0_{(n-m)\times m}.    (77)
+ Substituting the expression (76) into (59) and noting the above expression results in the simplified matching equation (75).
1617
+ Remark 3: Vf(·) is a free function precisely because Γ is
1618
+ an integral of the passive output yc2. The potential energy
1619
+ Vf could be alternatively constructed as a capacitor element
1620
+ added to the input uc2 in Figure 1.
1621
+ Now we arrive at one of the key results of this work, the
1622
+ equivalence of the proposed CbI scheme and total energy-
1623
+ shaping control of underactuated mechanical systems. As-
1624
+ suming that the CbI scheme has been constructed to satisfy
1625
+ the required matching conditions to ensure the existence of a
1626
+ Casimir of the form qa = q, pa = p, Proposition 1 is applied
1627
+ to reconstruct the reduced closed-loop structure (18).
1628
+ Proposition 4: Consider the underactuated mechanical system with virtual input (14) and assume that Ma(q), Vd(q) are chosen such that the conditions of Proposition 3 are satisfied in some neighbourhood of a point (q, p) = (q⋆, 0_{n×1}). If the control signal is chosen as
+ u(q, p) = v - G^\top \left\{ M_d(q) M^{-1}(q) \left[ -E^\top(q, p) M_a^{-1}(q) p + \nabla_{q_a} H_a(q_a, p_a) - M(q) J(q, p) p \right] - \nabla_q V \right\}    (78)
+ where
+ M_d(q) = \left( M_a^{-1}(q) + M^{-1}(q) \right)^{-1},    (79)
+ the following hold:
+ • The closed-loop dynamics have the form
+ \begin{bmatrix} \dot q \\ \dot p \end{bmatrix} = \begin{bmatrix} 0_{n\times n} & M^{-1}(q) M_d(q) \\ -M_d(q) M^{-1}(q) & J_2(q, p) \end{bmatrix} \begin{bmatrix} \nabla_q H_d \\ \nabla_p H_d \end{bmatrix} + \begin{bmatrix} 0_{n\times m} \\ G \end{bmatrix} v
+ H_d(q, p) = \frac{1}{2} p^\top M_d^{-1}(q) p + V_d(q)
+ y = G^\top \nabla_p H_d,    (80)
+ where
+ J_2(q, p) = M_d(q) \left[ J(q, p) + M^{-1}(q) \left( E(q, p) - E^\top(q, p) \right) M^{-1}(q) \right] M_d(q) + M_d(q) M^{-1}(q) E^\top(q, p) - E(q, p) M^{-1}(q) M_d(q).    (81)
+ • If M_d(q), V_d(q) satisfy
+ M_d(q) > 0, \quad V_d(q) > 0    (82)
+ in some neighbourhood of (q, p) = (q⋆, 0_{n×1}), (q⋆, 0_{n×1}) is a stable equilibrium of the closed-loop system for v = 0_{m×1}.
+ • If the input signal v is used for damping injection
+ v = -K_d G^\top y    (83)
+ for some positive K_d ∈ R^{m×m} and the equilibrium (q, p) = (q⋆, 0_{n×1}) is locally detectable from the output y, the point (q⋆, 0_{n×1}) is asymptotically stable.
1701
+ Proof: Interconnection of the mechanical system with the control subsystem results in a closed-loop of the form (4), where x_c, H_c and K_{⋆⋆} are defined in (50) and
+ x_p = \begin{bmatrix} q \\ p \end{bmatrix}, \quad F_p = \begin{bmatrix} 0_{n\times n} & I_n \\ -I_n & 0_{n\times n} \end{bmatrix}, \quad G_p = \begin{bmatrix} 0_{n\times n} \\ I_n \end{bmatrix}.    (84)
+ From (38), we have that
+ \frac{\partial f_c}{\partial x_p} = \begin{bmatrix} I_n & 0_{n\times n} \\ I_n & 0_{n\times n} \\ 0_{n\times n} & I_n \end{bmatrix}.    (85)
+ To verify the claim, Corollary 1 is applied, which requires a suitable definition of B. Expanding the definitions of \bar F_{⋆3} from (34) reveals
+ \bar F_{13} = \begin{bmatrix} 0_{n\times n} & 0_{n\times n} \\ -I_n & 0_{n\times n} \\ I_n - M_d M^{-1} & D \end{bmatrix}, \quad \bar F_{23} = \begin{bmatrix} 0_{n\times n} & 0_{n\times n} & 0_{n\times n} \end{bmatrix}, \quad \bar F_{33} = \begin{bmatrix} 0_{n\times n} & 0_{n\times n} & 0_{n\times n} \\ 0_{n\times n} & 0_{n\times n} & I_n \\ 0_{n\times n} & -I_n & 0_{n\times n} \end{bmatrix},    (86)
+ resulting in the choice
+ B = \begin{bmatrix} 0_{n\times n} & 0_{n\times n} \\ I_n & 0_{n\times n} \\ 0_{n\times n} & I_n \end{bmatrix}.    (87)
+ Expanding the expression B^\top \bar F_{33} B results in
+ B^\top \bar F_{33} B = \begin{bmatrix} 0_{n\times n} & -I_n \\ I_n & 0_{n\times n} \end{bmatrix},    (88)
+ which is invertible, ensuring that Corollary 1 can be applied. Expanding the definitions of F_r in (32) results in the reduced dynamics
+ \begin{bmatrix} \dot q \\ \dot p \\ -y \end{bmatrix} = \underbrace{\begin{bmatrix} 0_{n\times n} & M^{-1} M_d & 0_{n\times n} \\ -M_d M^{-1} & \bar J_2 & G \\ 0_{n\times n} & -G^\top & 0_{n\times n} \end{bmatrix}}_{F_r} \begin{bmatrix} \nabla_q H_d \\ \nabla_p H_d \\ v \end{bmatrix}
+ \bar J_2 = M_d J M_d + D - D^\top + M_d M^{-1} D^\top - D M^{-1} M_d,    (89)
+ which agrees with (80) when substituting in the definition for D in (50). Stability and asymptotic stability of the point (q⋆, 0_{n×1}) follows from Proposition 1 of [17].
1827
+ Remark 4: From Proposition 4 it is clear that M_a^{-1}(q) does not need to be a positive matrix. Rather, the closed-loop mass M_d^{-1} must be positive to ensure stability of the system. In cases where M_a^{-1} is positive, the control sub-system in Figure 1 is passive.
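+ As a small numerical illustration of (79) and the positivity requirement in (82), the sketch below forms Md from a given mass matrix and added inverse mass and checks its minimum eigenvalue; the matrices used are assumed placeholder data, not values prescribed by the paper.
+ # Sketch: form the closed-loop mass Md = (Ma^{-1} + M^{-1})^{-1} from (79)
+ # and check the positivity condition Md > 0 required in (82).
+ import numpy as np
+ 
+ def Md_from(M, Ma_inv):
+     return np.linalg.inv(Ma_inv + np.linalg.inv(M))
+ 
+ M = np.array([[2.0, 1.0], [1.0, 1.0]])         # placeholder open-loop mass matrix
+ Ma_inv = np.array([[0.0, -2.0], [-2.0, 8.0]])  # placeholder added inverse mass
+ 
+ Md = Md_from(M, Ma_inv)
+ print("Md =", Md)
+ print("Md > 0:", bool(np.all(np.linalg.eigvalsh(Md) > 0.0)))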
1837
+ V. EXAMPLE APPLICATIONS
1838
+ In this section the matching conditions derived in Propo-
1839
+ sition 3 are used to construct stabilising control laws for
1840
+ the cart-pole and acrobot systems. In both cases, the mass
1841
+ matrix depends on only one configuration variable, so the
1842
+ kinetic energy matching conditions can be reduced to ODEs
1843
+ as detailed in Corollary 2. This enables the solutions to be
1844
+ constructed numerically, removing the need to analytically
1845
+ solve the equations.
1846
+ Both
1847
+ examples
1848
+ were
1849
+ prepared
1850
+ in
1851
+ Matlab
1852
+ 2022a
1853
+ and
1854
+ the
1855
+ source
1856
+ code
1857
+ is
1858
+ available
1859
+ via
1860
+ https://github.com/JoelFerguson/Underactuated Mechanical CbI.
1861
+ A. Cart-pole example
1862
+ The cart-pole system, shown in Figure 2, attempts to balance the pole of length ℓ and mass mp in the upright position by applying a force F to the cart with mass mc. The state q1 describes the horizontal displacement of the cart whereas q2 describes the angle of the pole from vertical in the clockwise direction.
+ Fig. 2: The cart-pole system attempts to balance the pole in the upright position by regulating the force F.
+ The cart-pole system can be written as a pH system of the form (7) with
+ q = \begin{bmatrix} q_1 \\ q_2 \end{bmatrix}, \quad M(q) = \begin{bmatrix} m_c + m_p & m_p l \cos q_2 \\ m_p l \cos q_2 & m_p l^2 \end{bmatrix}, \quad V(q) = m_p g l \cos q_2, \quad G = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.    (90)
+ In the subsequent control design, the parameters mc = mp = l = 1, g = 9.8 have been used.
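+ For reference in the numerical steps that follow, the cart-pole data in (90) with the stated parameters can be encoded as in the short sketch below; the helper names are illustrative choices, not part of the original implementation.
+ # Cart-pole model data from (90) with mc = mp = l = 1, g = 9.8.
+ import numpy as np
+ 
+ mc, mp, l, g = 1.0, 1.0, 1.0, 9.8
+ G = np.array([[1.0], [0.0]])       # input matrix
+ G_perp = np.array([[0.0, 1.0]])    # left annihilator of G
+ 
+ def M(q2):
+     # mass matrix of the cart-pole, a function of q2 only
+     return np.array([[mc + mp, mp * l * np.cos(q2)],
+                      [mp * l * np.cos(q2), mp * l**2]])
+ 
+ def V(q2):
+     # open-loop potential energy
+     return mp * g * l * np.cos(q2)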
1890
+
1891
+ The mass matrix of the cart-pole system depends only on q2, the unactuated coordinate. The added inverse mass is assumed to be a function of q2 also, allowing it to be written as
+ M_a^{-1}(q_2) = \begin{bmatrix} m_{a11}(q_2) & m_{a21}^\top(q_2) \\ m_{a21}(q_2) & m_{a22}(q_2) \end{bmatrix}.    (91)
+ As noted in Corollary 2, the kinetic energy matching equations (57) can be reduced to an ODE as both M^{-1}, M_a^{-1} are functions of only one variable. The associated ODE is of the form (72) for q_i = q_2, where m_{a11}(q_2) is a free function to be chosen. The ODE can be evaluated using numerical solvers.
1914
+ Before solving the ODE associated with the kinetic energy matching equations, consideration should be given to how the resulting mass matrix impacts the closed-loop potential energy Vd. Recalling (74), the closed-loop potential energy is composed of a free term Γ(·) and a term Vm(q) which must satisfy (75), where s1, s2, s3 are defined in (60), (61). As the potential V, M^{-1}, M_a^{-1} are all functions of only q2, Vm is also assumed to be a function of q2 only, reducing (75) to the ODE
+ \nabla_{q_2} V_m = -\frac{s_1(q_2)}{s_3(q_2)} \nabla_{q_2} V,    (92)
+ which can be evaluated numerically once a solution for M_a^{-1}(q_2), and hence s_1(·), s_3(·), are found. Noting that the vector field \nabla_{q_2} V is divergent from the point q2 = 0, the closed-loop vector field \nabla_{q_2} V_m should reverse the direction locally. This is ensured if the ratio s_1(q)/s_3(q) is positive in some neighbourhood of the origin. Recalling that s_1(q) is the Schur complement of M^{-1} + M_a^{-1}, which is necessarily positive, it is required that s_3(q) be positive in some neighbourhood of q2 = 0. The values
+ m_{a11}(0) = 0, \quad m_{a21}(0) = -2, \quad m_{a22}(0) = 8,    (93)
+ were chosen, which result in s_1(0) = 1, s_3(0) = 1 and \lambda_{\min}\left( M^{-1}(0) + M_a^{-1}(0) \right) = 0.917 > 0.
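+ The values in (93) can be checked directly from the partitions (56) and the definitions (60), (61); a minimal sketch of this check is given below, with the cart-pole data of (90) re-declared so that it is self-contained. Since G = [1; 0] here, the partitioned blocks coincide with the matrix entries.
+ # Sketch: check the choice (93) at q2 = 0 using (56), (60) and (61).
+ import numpy as np
+ 
+ def M(q2):
+     return np.array([[2.0, np.cos(q2)], [np.cos(q2), 1.0]])  # (90), mc=mp=l=1
+ 
+ G = np.array([[1.0], [0.0]])
+ G_perp = np.array([[0.0, 1.0]])
+ 
+ Minv0 = np.linalg.inv(M(0.0))
+ m11 = (G.T @ Minv0 @ G).item()
+ m21 = (G_perp @ Minv0 @ G).item()
+ m22 = (G_perp @ Minv0 @ G_perp.T).item()
+ 
+ ma11, ma21, ma22 = 0.0, -2.0, 8.0              # the values chosen in (93)
+ 
+ s1 = (m22 + ma22) - (m21 + ma21) * (m21 + ma21) / (m11 + ma11)  # (60)
+ s3 = (m21 + ma21) * m21 / (m11 + ma11) - m22                    # (61)
+ 
+ Ma_inv0 = np.array([[ma11, ma21], [ma21, ma22]])
+ lam_min = np.linalg.eigvalsh(Minv0 + Ma_inv0)[0]
+ print(s1, s3, lam_min > 0.0)   # expect s1 = s3 = 1 and a positive eigenvalue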
1951
+ The added inverse mass matrix can now be found by numerically evaluating the ODE (72). The term m_{a11}(q_2) is a free function that was chosen to be constant, m_{a11}(q_2) = 0, \partial_{q_2} m_{a11} = 0, for this example. The resulting functions for m_{a21}(q_2), m_{a22}(q_2) were found to exist on the interval q_2 ∈ [−0.48, 0.48] and are shown in Figure 3. From Proposition 4, M^{-1}(q_2) + M_a^{-1}(q_2) should be positive to ensure stability, so the minimum eigenvalue of this expression is shown in the same figure.
+ The closed-loop potential energy V_m(q_2) can now be obtained numerically by evaluating the ODE (92). The terms s_1(·), s_3(·) are evaluated using the solutions to M_a^{-1} shown in Figure 3. The resulting function V_m(q_2) is shown in Figure 4. As expected, the function is positive in some neighbourhood of q_2 = 0 due to the choice of the added mass at q_2 = 0 in (93).
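+ A minimal sketch of the numerical evaluation of (92) is given below; s1 and s3 are placeholder functions standing in for the values obtained from the kinetic-energy solution, so the output only illustrates the procedure.
+ # Sketch: integrate the potential-energy ODE (92) for the cart-pole, treating
+ # s1(q2) and s3(q2) as given (placeholder) functions from the Ma solution.
+ import numpy as np
+ from scipy.integrate import solve_ivp
+ 
+ mp, l, g = 1.0, 1.0, 9.8
+ 
+ def grad_V(q2):
+     return -mp * g * l * np.sin(q2)   # d/dq2 of V(q2) = mp*g*l*cos(q2)
+ 
+ def s1(q2):
+     return 1.0                        # placeholder; from (60) and the Ma solution
+ 
+ def s3(q2):
+     return 1.0                        # placeholder; from (61) and the Ma solution
+ 
+ def rhs(q2, Vm):
+     return -s1(q2) / s3(q2) * grad_V(q2)
+ 
+ sol = solve_ivp(rhs, [0.0, 0.45], [0.0], dense_output=True, max_step=1e-3)
+ print(sol.sol(0.3))                   # Vm at q2 = 0.3 under these assumptions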
1971
+ Fig. 3: A solution for the inverse added mass M_a^{-1} was found to exist for the cart-pole system on the domain q2 between ±0.48 radians.
+ Fig. 4: Solution for the added potential energy Vm(q2) for the cart-pole system.
2011
+ The proposed functions of M_a^{-1}, V_m can be used to construct a controller to stabilise the pendulum in the upright position. To ensure stability of q_1 = 0 also, the free term V_f(Γ(q)), defined in (74), is constructed. The function Γ(·) is defined by the integral (73), where K(q) is a free function chosen to ensure solvability. Noting that M^{-1}, M_a^{-1} are functions of q_2 only, the parametrisation
+ \begin{bmatrix} \beta_1(q_2) & \beta_2(q_2) \end{bmatrix} = G^\top \left( M_a^{-1}(q_2) + M^{-1}(q_2) \right) M(q_2)    (94)
+ is introduced. The free function is chosen as K(q_2) = \frac{1}{\beta_1(q_2)}, resulting in
+ Γ(q) = \int \begin{bmatrix} 1 & \frac{\beta_2(q_2)}{\beta_1(q_2)} \end{bmatrix} dq = q_1 + \int \frac{\beta_2(q_2)}{\beta_1(q_2)} \, dq_2,    (95)
+ which can be solved numerically from the initial condition Γ(0_{2×1}) = 0.
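+ A sketch of the quadrature in (95) is shown below, assuming a hypothetical beta function that returns (β1(q2), β2(q2)) from the numerically obtained added-mass solution; the returned values here are placeholders.
+ # Sketch: evaluate Gamma(q) in (95) by numerical quadrature of beta2/beta1,
+ # where beta1, beta2 come from (94).
+ import numpy as np
+ from scipy.integrate import quad
+ 
+ def beta(q2):
+     # placeholder for G^T (Ma^{-1}(q2) + M^{-1}(q2)) M(q2); returns (beta1, beta2)
+     return 1.0, -2.0 * np.cos(q2)
+ 
+ def Gamma(q1, q2):
+     integrand = lambda s: beta(s)[1] / beta(s)[0]
+     return q1 + quad(integrand, 0.0, q2)[0]
+ 
+ print(Gamma(0.1, 0.2))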
2046
+ 1
2047
+ 2κΓ(q)2 with κ = 5 for simulation. A contour plot of the
2048
+ resulting closed-loop potential energy is shown in Figure (5).
2049
+ Note that a minimum has been assigned to q = 02×1. As a
2050
+ final control design stage, damping is injected via the new
2051
+ passive input/output pair with
2052
+ v = −5G⊤(M −1
2053
+ a
2054
+ + M −1)p.
2055
+ (96)
2056
+ The complete control signal is defined by the expression (46).
2057
+ The cart-pole system was simulated for 5 seconds from ini-
2058
+ tial conditions q(0) = (0, 0.3), p(0) = (0, 0). The resulting
2059
+ state evolution and closed-loop energy Hd is shown in Figure
2060
+
2061
+ state evolution and closed-loop energy Hd is shown in Figure 6.
+ Fig. 5: Contour plot of the closed-loop potential energy Vd(q) = Vm(q2) + Vf(Γ(q)) for the cart-pole system on log scale.
+ As expected, the proposed controller stabilises the origin and the closed-loop energy Hd decreases monotonically.
2081
+ Fig. 6: Numerical simulation of cart-pole system in closed-loop with CbI scheme.
2108
+ B. Acrobot example
2109
+ The acrobot system, shown in Figure 7, consists of 2 links with an actuator supplying an input torque τ fixed between the base and second links. The base link has displacement q2, measured from vertical, length ℓ2, mass m2, moment of inertia Jℓ2 and centre of mass ℓc2 from the base pivot point. The actuated link has displacement q1 measured relative to the base link, length ℓ1, mass m1, moment of inertia Jℓ1 and centre of mass ℓc1 from the actuated pivot point. The control objective of this system is to stabilise the upright equilibrium position (q1, q2) = (0, 0).
+ Fig. 7: The acrobot system attempts to balance in the vertical position by manipulating the torque generated by an actuator between the two links.
+ The acrobot system can be written as a pH system of the form (7) with
+ M(q) = \begin{bmatrix} c_2 & c_2 + c_3 \cos q_1 \\ c_2 + c_3 \cos q_1 & c_1 + c_2 + 2 c_3 \cos q_1 \end{bmatrix}, \quad V(q) = c_4 g \cos q_2 + c_5 g \cos(q_1 + q_2), \quad G = \begin{bmatrix} 1 \\ 0 \end{bmatrix},    (97)
+ where
+ c_1 = m_2 \ell_{c2}^2 + m_1 \ell_2^2 + J_{\ell 2}, \quad c_2 = m_1 \ell_{c1}^2 + J_{\ell 1}, \quad c_3 = m_1 \ell_2 \ell_{c1}, \quad c_4 = m_2 \ell_{c2} + m_1 \ell_1, \quad c_5 = m_1 \ell_{c1}.    (98)
+ For the purposes of simulation, we take the values g = 9.8, c1 = 2.3333, c2 = 5.3333, c3 = 2, c4 = 3, c5 = 2 which were previously used in [5], [25].
2150
+ In this example, the total energy-shaping controller proposed in [5] is reconstructed as a CbI control scheme by solving the matching conditions of Proposition 3. In that work, the closed-loop mass matrix was chosen to be the constant matrix
+ M_d^{-1} = \begin{bmatrix} 0.3385 & -0.9997 \\ -0.9997 & 5.9058 \end{bmatrix},    (99)
+ which will be recovered in subsequent computations.
2165
+ The mass matrix of the acrobot system depends only on q1, the actuated coordinate. The added inverse mass matrix is assumed to be a function of only q1 also, resulting in the structure
+ M_a^{-1}(q_1) = \begin{bmatrix} m_{a11}(q_1) & m_{a21}^\top(q_1) \\ m_{a21}(q_1) & m_{a22}(q_1) \end{bmatrix}.    (100)
+ As the system is underactuated degree 1 and the mass matrix is a function of only one variable, the kinetic energy matching equations can be reduced to an ODE as per Corollary 2. The resulting ODE has the form (72) with q_i = q_1 and where m_{a11}(q_1) is a free function.
2185
+
2186
+ In order to recover the result (99), this free function m_{a11}(q_1) is chosen as
+ m_{a11}(q_1) = G^\top \left( M_d^{-1} - M^{-1}(q_1) \right) G = 0.3385 - \frac{c_1 + c_2 + 2 c_3 \cos q_1}{c_1 c_2 - c_3^2 \cos^2(q_1)}.    (101)
+ The initial conditions m_{a12}(0), m_{a22}(0) are similarly defined as
+ m_{a12}(0) = G^\perp \left( M_d^{-1} - M^{-1}(0) \right) G = -0.1313
+ m_{a22}(0) = G^\perp \left( M_d^{-1} - M^{-1}(0) \right) G^{\perp\top} = 5.2743.    (102)
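+ The values in (101) and (102) can be reproduced from (97)-(99); the short sketch below performs this check, with names chosen for illustration only.
+ # Sketch: reproduce the acrobot initialisation (101)-(102) from the target
+ # closed-loop mass (99) and the model data (97)-(98).
+ import numpy as np
+ 
+ c1, c2, c3 = 2.3333, 5.3333, 2.0
+ G = np.array([[1.0], [0.0]])
+ G_perp = np.array([[0.0, 1.0]])
+ Md_inv = np.array([[0.3385, -0.9997], [-0.9997, 5.9058]])   # (99)
+ 
+ def M(q1):
+     return np.array([[c2, c2 + c3 * np.cos(q1)],
+                      [c2 + c3 * np.cos(q1), c1 + c2 + 2.0 * c3 * np.cos(q1)]])
+ 
+ D0 = Md_inv - np.linalg.inv(M(0.0))
+ ma11_0 = (G.T @ D0 @ G).item()           # free function at q1 = 0, cf. (101)
+ ma12_0 = (G_perp @ D0 @ G).item()        # approx -0.1313, cf. (102)
+ ma22_0 = (G_perp @ D0 @ G_perp.T).item() # approx 5.2743, cf. (102)
+ print(ma11_0, ma12_0, ma22_0)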
2213
+ The added inverse mass was evaluated numerically and the results are shown in Figure 8. As the previously reported solution (99) is globally defined, it is unsurprising that the inverse added mass is also globally defined. As expected, the minimum eigenvalue of M^{-1} + M_a^{-1} is constant also.
+ Fig. 8: A solution for the added mass M_a^{-1} for the acrobot was found to exist globally.
2249
+ Solving the potential energy PDE (75) is difficult due to the open-loop potential energy being a function of both q1 and q2. This dependence implies that Vm cannot be resolved directly using an ODE solver. Considering the structure of V in (97), it is proposed that the closed-loop energy Vm has the structure
+ V_m(q) = f_1(q_1) \sin(q_2) + f_2(q_1) \cos(q_2),    (103)
+ which has derivatives
+ \nabla_{q_1} V_m = \frac{\partial f_1}{\partial q_1} \sin(q_2) + \frac{\partial f_2}{\partial q_1} \cos(q_2), \quad \nabla_{q_2} V_m = f_1(q_1) \cos(q_2) - f_2(q_1) \sin(q_2).    (104)
+ The open-loop potential energy has gradients
+ \nabla_{q_1} V = -c_5 g \sin(q_1) \cos(q_2) - c_5 g \cos(q_1) \sin(q_2), \quad \nabla_{q_2} V = -c_4 g \sin(q_2) - c_5 g \sin(q_1) \cos(q_2) - c_5 g \cos(q_1) \sin(q_2).    (105)
+ Substituting the expressions (104) and (105) into (59) and matching coefficients results in the system of equations
+ \begin{bmatrix} \frac{\partial f_1}{\partial q_1} \\ \frac{\partial f_2}{\partial q_1} \end{bmatrix} = \frac{1}{s_2(q_1)} \begin{bmatrix} c_4 g s_1(q_1) + c_5 g s_1(q_1) \cos(q_1) + s_3(q_1) f_2(q_1) \\ c_5 g s_1(q_1) \sin(q_1) - s_3(q_1) f_1(q_1) \end{bmatrix},    (106)
+ which can be evaluated numerically. The values of f1, f2 at the origin should be chosen to ensure that the origin is an equilibrium point and Vm is positive in q2. Considering the expressions (105), (106), the origin is an equilibrium for f1(0) = 0. The energy function (103) is locally positive with respect to q1 for f2(0) negative. For the purpose of simulation, f2(0) = −50 was used. The resulting function Vm is shown in Figure 9.
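+ A minimal sketch of the numerical integration of (106) with the initial values f1(0) = 0, f2(0) = −50 is given below; s1, s2, s3 are placeholders for the functions obtained from the kinetic-energy solution, so the result only illustrates the procedure.
+ # Sketch: integrate the coupled ODEs (106) for f1(q1), f2(q1).
+ import numpy as np
+ from scipy.integrate import solve_ivp
+ 
+ g = 9.8
+ c4, c5 = 3.0, 2.0
+ 
+ def s1(q1): return 1.0      # placeholder
+ def s2(q1): return 1.0      # placeholder
+ def s3(q1): return 1.0      # placeholder
+ 
+ def rhs(q1, f):
+     f1, f2 = f
+     df1 = (c4 * g * s1(q1) + c5 * g * s1(q1) * np.cos(q1) + s3(q1) * f2) / s2(q1)
+     df2 = (c5 * g * s1(q1) * np.sin(q1) - s3(q1) * f1) / s2(q1)
+     return [df1, df2]
+ 
+ sol = solve_ivp(rhs, [0.0, np.pi], [0.0, -50.0], dense_output=True, max_step=1e-2)
+ print(sol.sol(np.linspace(0.0, np.pi, 5)))   # samples of f1, f2 under these assumptions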
2294
+ Fig. 9: Solution for the added potential energy Vm(q2).
2295
+ Considering Figure 9, it is clear that Vm is not positive definite with respect to the origin. Note, however, that q2 = 0 has been stabilised. To ensure stability of q1 = 0 also, the free term Vf(Γ(q)), defined in (74), is constructed. The function Γ(·) is defined by the integral (73), where K(q) is a free function chosen to ensure solvability. Noting that M^{-1}, M_a^{-1} are functions of q1 only, the parametrisation
+ \begin{bmatrix} \beta_1(q_1) & \beta_2(q_1) \end{bmatrix} = G^\top \left( M_a^{-1}(q_1) + M^{-1}(q_1) \right) M(q_1)    (107)
+ is introduced. The free function is chosen as K(q_1) = \frac{1}{\beta_2(q_1)}, resulting in
+ Γ(q) = \int \begin{bmatrix} \frac{\beta_1(q_1)}{\beta_2(q_1)} & 1 \end{bmatrix} dq = \int \frac{\beta_1(q_1)}{\beta_2(q_1)} \, dq_1 + q_2,    (108)
+ which can be solved numerically from the initial condition Γ(0_{2×1}) = 0. The function Vf(·) was taken as Vf(Γ(q)) = \frac{1}{2} \kappa Γ(q)^2 with κ = 250 for simulation. A contour plot of the resulting closed-loop potential energy on a log scale is shown in Figure 10. Note that a minimum has been assigned to q = 0_{2×1}. As a final control design stage, damping is injected via the new passive input/output pair with
+ v = -5\, G^\top \left( M_a^{-1} + M^{-1} \right) p.    (109)
+ The complete control signal is defined by the expression (46). The acrobot system was simulated for 20 seconds from initial conditions q(0) = (0, 0.5), p(0) = (0, 0). The resulting
2344
+
2345
+ Fig. 10: Contour plot of the closed-loop potential energy Vd(q) = Vm(q) + Vf(Γ(q)) on log scale for the acrobot system.
2379
+ state evolution and closed-loop energy Hd is shown in Figure
2380
+ 11. As expected, the proposed controller stabilises the origin
2381
+ and the closed-loop energy Hd decreases monotonically.
2382
+ Fig. 11: Numerical simulation of acrobot system in closed-loop with CbI scheme.
2417
+ VI. CONCLUSIONS AND FUTURE WORKS
2418
+ In this work total energy shaping has been shown to
2419
+ have a CbI interpretation which results in alternate matching
2420
+ equations related to the added inverse mass. These equations
2421
+ were utilised to construct controllers for the cart-pole and
2422
+ acrobot systems, both of which have the property that the
2423
+ mass matrix depends on only one variable, using numerical
2424
+ methods. While the proposed approach is effective, a number
2425
+ of technical aspects of this approach require further investi-
2426
+ gation. In particular:
2427
+ • As detailed in Corollary 2, the kinetic energy matching equations can be posed as ODEs in the special case that the mass matrix depends on only one configuration variable. This property allows the matching equations to be evaluated numerically using ODE solvers. Further
2432
+ investigation into solving the matching equations in the
2433
+ case that the mass matrix is a function of multiple
2434
+ configuration variables is required. In some cases it
2435
+ may be possible to decouple the dependence on each
2436
+ coordinate, recovering equivalent ODEs. Alternatively,
2437
+ the numerical evaluation of the matching PDEs should
2438
+ be investigated.
2439
+ • When evaluating the kinetic energy matching equations
2440
+ in (72), the term ma11(qi) is a free function that can
2441
+ be used to control the resulting added inverse mass.
2442
+ As seen in the cart-pole example of Section V-A, poor
2443
+ choice of this function results in the solution only being
2444
+ defined on a small domain. Conversely, in the acrobot
2445
+ example of Section V-A this term was chosen to ensure
2446
+ a global solution to the matching equations. Choice of
2447
+ this function defines a nonlinear control problem that
2448
+ should be investigated to ensure desirable behaviour of
2449
+ the result.
2450
+ • In both examples of Section V the controllers were
2451
+ designed to stabilise the origin of the respective sys-
2452
+ tems. While this was achieved and verified numerically,
2453
+ asymptotic stability was not established. Asymptotic
2454
+ stability requires that the passive output of the closed-
2455
+ loop system is zero-state detectable, a task that is non-
2456
+ trivial for underactuated systems. Further investigation
2457
+ into methods for injecting damping into the unactuated
2458
+ momentum channels of the closed-loop system is re-
2459
+ quired. It is hoped that the CbI interpretation of the
2460
+ controller shown in Figure 1 may provide new insight
2461
+ into how this might be achieved.
2462
+ REFERENCES
2463
+ [1] R. Ortega and E. Garcia-Canseco, “Interconnection and damping
2464
+ assignment passivity-based control: A survey,” European Journal of
2465
+ control, vol. 10, no. 5, pp. 432–450, 2004. [Online]. Available:
2466
+ http://www.sciencedirect.com/science/article/pii/S094735800470391X
2467
+ [2] R. Ortega, A. Van der Schaft, B. Maschke, and G. Escobar, “Inter-
2468
+ connection and damping assignment passivity-based control of port-
2469
+ controlled Hamiltonian systems,” Automatica, vol. 38, no. 4, pp. 585–
2470
+ 596, 2002.
2471
+ [3] F. G´omez-Estern, R. Ortega, F. R. Rubio, and J. Aracil, “Stabilization
2472
+ of a class of underactuated mechanical systems via total energy
2473
+ shaping,” Proceedings of the IEEE Conference on Decision and
2474
+ Control, vol. 2, no. December, pp. 1137–1143, 2001.
2475
+ [4] J. Acosta, R. Ortega, and A. Astolfi, “Interconnection and damping
2476
+ assignment passivity-based control of mechanical systems with un-
2477
+ deractuation degree one,” IEEE Transactions on Automatic Control,
2478
+ vol. 50, no. 12, pp. 1936–1955, 2005.
2479
+ [5] A. D. Mahindrakar, A. Astolf, R. Ortega, and G. Viola, “Further
2480
+ constructive results on interconnection and damping assignment
2481
+ control of mechanical systems: The Acrobot example,” International
2482
+ Journal of Robust and Nonlinear Control, vol. 18, no. July, pp.
2483
+ 557–569, 2010. [Online]. Available: http://onlinelibrary.wiley.com/
2484
+ doi/10.1002/rnc.1553/abstract
2485
+ [6] P. Arpenti, F. Ruggiero, and V. Lippiello, “A Constructive Methodol-
2486
+ ogy for the IDA-PBC of Underactuated 2-DoF Mechanical Systems
2487
+ with Explicit Solution of PDEs,” International Journal of Control,
2488
+ Automation and Systems, vol. 20, no. 1, pp. 283–297, 2022.
2489
+ [7] R. Ortega, A. van der Schaft, F. Casta˜nos, and A. Astolfi, “Con-
2490
+ trol by interconnection and standard passivity-based control of port-
2491
+
2492
+ Hamiltonian systems,” IEEE Transactions on Automatic Control,
2493
+ vol. 53, no. 11, pp. 2527–2542, 2008.
2494
+ [8] R. Ortega and L. P. Borja, “New results on Control by Interconnection
2495
+ and Energy-Balancing Passivity-Based Control of port-hamiltonian
2496
+ systems,” Proceedings of the IEEE Conference on Decision and
2497
+ Control, vol. 2015-Febru, no. February, pp. 2346–2351, 2014.
2498
+ [9] V. Duindam, A. Macchelli, S. Stramigioli, and H. Bruyninckx,
2499
+ Modeling
2500
+ and
2501
+ Control
2502
+ of
2503
+ Complex
2504
+ Physical
2505
+ Systems:
2506
+ The
2507
+ Port-Hamiltonian Approach.
2508
+ Berlin Heidelberg: Springer-Verlag,
2509
+ 2009.
2510
+ [Online].
2511
+ Available:
2512
+ https://books.google.com.au/books?hl=
2513
+ en&lr=&id=qFraVEzCTnUC&oi=fnd&pg=PR3&dq=Modeling+and+
2514
+ Control+of+Complex+Physical+Systems&ots=UpR M62tbR&sig=
2515
+ eNCMg94gK6gpY lm 0Ny6UDuVDY
2516
+ [10] A. Donaire, R. Mehra, R. Ortega, S. Satpute, J. G. Romero, F. Kazi,
2517
+ and N. M. Singh, “Shaping the Energy of Mechanical Systems
2518
+ Without Solving Partial Differential Equations,” IEEE Transactions
2519
+ on Automatic Control, vol. 61, no. 4, pp. 1051–1056, 2016.
2520
+ [11] J. G. Romero, A. Donaire, and R. Ortega, “Global Stabilisation of
2521
+ Underactuated Mechanical Systems via PID Passivity-Based Control,”
2522
+ pp. 1–27, 2016. [Online]. Available: http://arxiv.org/abs/1610.06999
2523
+ [12] J. G. Romero, A. Donaire, R. Ortega, and P. Borja, “Global
2524
+ Stabilisation of Underactuated Mechanical Systems via PID Passivity-
2525
+ Based Control,” IFAC-PapersOnLine, vol. 50, no. 1, pp. 9577–9582,
2526
+ 2017.
2527
+ [Online].
2528
+ Available:
2529
+ https://doi.org/10.1016/j.ifacol.2017.08.
2530
+ 1674
2531
+ [13] M. Zhang, P. Borja, R. Ortega, Z. Liu, and H. Su, “PID Passivity-
2532
+ Based Control of Port-Hamiltonian Systems,” IEEE Transactions on
2533
+ Automatic Control, vol. PP, no. 99, pp. 1–1, 2017.
2534
+ [14] P. J. Gawthrop and G. P. Bevan, “Bond-Graph Modeling: A tutorial
2535
+ introduction for control engineers,” IEEE Control Systems Magazine,
2536
+ vol. 27, no. 2, pp. 24–45, 2007.
2537
+ [15] A. van der Schaft, L2-Gain and Passivity Techniques in Nonlinear
2538
+ Control, 3rd ed.
2539
+ Springer, 2017.
2540
+ [16] R. Reyes-B´aez, A. van der Schaft, and B. Jayawardhana, “Tracking
2541
+ Control of Fully-actuated port-Hamiltonian Mechanical Systems
2542
+ via Sliding Manifolds and Contraction Analysis,” in Proc. IFAC
2543
+ World Congress.
2544
+ Toulouse, France: Elsevier, 2017, pp. 8256–8261.
2545
+ [Online]. Available: https://doi.org/10.1016/j.ifacol.2017.08.1395
2546
+ [17] R. Ortega, M. W. Spong, F. Gómez-Estern, and G. Blankenstein,
2547
+ “Stabilization of a class of underactuated mechanical systems via
2548
+ interconnection and damping assignment,” IEEE Transactions on
2549
+ Automatic Control, vol. 47, no. 8, pp. 1218–1233, 2002.
2550
+ [18] G. Viola, R. Ortega, and R. Banavar, “Total energy shaping control
2551
+ of mechanical systems: simplifying the matching equations via coor-
2552
+ dinate changes,” IEEE Transactions on Automatic Control, vol. 52,
2553
+ no. 6, pp. 1093–1099, 2007.
2554
+ [19] M. Ryalat and D. S. Laila, “A simplified IDA-PBC design for
2555
+ underactuated mechanical systems with applications,” European
2556
+ Journal of Control, vol. 27, pp. 1–16, 2016. [Online]. Available:
2557
+ http://dx.doi.org/10.1016/j.ejcon.2015.12.001
2558
+ [20] F. G´omez-Estern and A. van der Schaft, “Physical damping in IDA-
2559
+ PBC
2560
+ controlled
2561
+ underactuated
2562
+ mechanical
2563
+ Systems,”
2564
+ European
2565
+ Journal
2566
+ of
2567
+ Control,
2568
+ vol.
2569
+ 10,
2570
+ no.
2571
+ 5,
2572
+ pp.
2573
+ 451–468,
2574
+ 2004.
2575
+ [Online]. Available: http://www.sciencedirect.com/science/article/pii/
2576
+ S0947358004703921
2577
+ [21] J. Sandoval, R. Kelly, and V. Santibanez, “Interconnection and damp-
2578
+ ing assignment passivity-based control of a class of underactuated
2579
+ mechanical systems with dynamic friction,” International Journal of
2580
+ Robust and Nonlinear Control, vol. 21, no. 7, pp. 738–751, 2010.
2581
+ [22] A. Donaire, R. Ortega, and J. G. Romero, “Simultaneous interconnec-
2582
+ tion and damping assignment passivity-based control of mechanical
2583
+ systems using dissipative forces,” Systems & Control Letters, vol. 94,
2584
+ pp. 118–126, 2016.
2585
+ [23] O. B. Cieza and J. Reger, “IDA-PBC for underactuated mechanical
2586
+ systems in implicit port-hamiltonian representation,” 2019 18th Euro-
2587
+ pean Control Conference, ECC 2019, pp. 614–619, 2019.
2588
+ [24] M. R. J. Harandi and H. D. Taghirad, “Solution of matching equations
2589
+ of
2590
+ IDA-PBC
2591
+ by
2592
+ Pfaffian
2593
+ differential
2594
+ equations,”
2595
+ International
2596
+ Journal of Control, pp. 1–11, 2021. [Online]. Available: https:
2597
+ //doi.org/10.1080/00207179.2021.1972345
2598
+ [25] A. Donaire, J. G. Romero, R. Ortega, B. Siciliano, and M. Crespo,
2599
+ “Robust IDA-PBC for underactuated mechanical systems subject to
2600
+ matched disturbances,” International Journal of Robust and Nonlinear
2601
+ Control, vol. 27, no. 6, pp. 1000–1016, 2017.
2602
+
4dE2T4oBgHgl3EQfOQZ5/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
4dFJT4oBgHgl3EQfjyxF/content/tmp_files/2301.11576v1.pdf.txt ADDED
@@ -0,0 +1,3435 @@
1
+ arXiv:2301.11576v1 [math.PR] 27 Jan 2023
2
+ EMPIRICAL PROCESS
3
+ SAMPLED ALONG A STATIONARY PROCESS
4
+ GUY COHEN AND JEAN-PIERRE CONZE
5
+ Abstract. Let (Xℓ)ℓ∈Zd be a real random field (r.f.)
6
+ indexed by Zd with common
7
+ probability distribution function F. Let (z_k)_{k=0}^{\infty} be a sequence in Z^d. The empirical
+ process obtained by sampling the random field along (z_k) is \sum_{k=0}^{n-1} [1_{X_{z_k} \le s} - F(s)].
11
+ We give conditions on (zk) implying the Glivenko-Cantelli theorem for the empirical
12
+ process sampled along (zk) in different cases (independent, associated or weakly corre-
13
+ lated random variables). We consider also the functional central limit theorem when the
14
+ Xℓ’s are i.i.d.
15
+ These conditions are examined when (zk) is provided by an auxiliary stationary pro-
16
+ cess in the framework of “random ergodic theorems”.
17
+ Contents
18
+ Introduction
19
+ 2
20
+ 1.
21
+ General results on the empirical process along a sub-sequence
22
+ 3
23
+ 1.1.
24
+ Preliminaries
25
+ 3
26
+ 1.2.
27
+ A Glivenko-Cantelli type theorem
28
+ 10
29
+ 1.3.
30
+ A sufficient condition for a FCLT for the sampled empirical process
31
+ 12
32
+ 2.
33
+ Local times for ergodic sums
34
+ 15
35
+ 2.1.
36
+ Auxiliary general results
37
+ 15
38
+ 2.2.
39
+ Non centered cocycles
40
+ 22
41
+ 2.3.
42
+ Counterexamples
43
+ 23
44
+ 3.
45
+ Examples
46
+ 27
47
+ 3.1.
48
+ Random walks
49
+ 27
50
+ 3.2.
51
+ Extensions of the r.w. case
52
+ 32
53
+ 3.3.
54
+ Step functions over rotations
55
+ 34
56
+ Date: January 30, 2023.
57
+ 2010 Mathematics Subject Classification. Primary: 60F05, 28D05, 22D40, 60G50; Secondary: 47B15,
58
+ 37A25, 37A30.
59
+ Key words and phrases. Empirical process, sampling along a stationary process, local times, Glivenko-
60
+ Cantelli theorem, functional central limit theorem, random walks.
61
65
+ 4.
66
+ About limit theorems along ergodic sums
67
+ 37
68
+ 4.1.
69
+ Glivenko-Cantelli theorem along ergodic sums
70
+ 37
71
+ 4.2.
72
+ Discussion: universal sequences
73
+ 39
74
+ References
75
+ 40
76
+ Introduction
77
+ For a sequence (Xk) of real i.i.d. random variables with common probability distribution
78
+ function F, the empirical process is defined by �n−1
79
+ k=0 [1Xk≤s − F(s)]. Recall two classical
80
+ results.
81
+ (A) the Glivenko-Cantelli theorem: a.s. the sequence of empirical distribution functions F_n(s) := \frac{1}{n} \sum_{k=0}^{n-1} 1_{X_k \le s} converges uniformly to F, i.e. \sup_s |F_n(s) - F(s)| \to 0;
88
+ (B) a functional central limit theorem (FCLT): if the r.v.s X_k have a common distribution F over [0, 1], then the process \frac{1}{\sqrt n} \sum_{k=0}^{n-1} [1_{X_k \le s} - F(s)] converges weakly to a Brownian bridge in the space of cadlag functions on [0, 1].
96
+ In this paper we study the extension of these results when the process is sampled along a
97
+ subsequence, analogously to what is done for limit theorems in random scenery.
98
+ In the sequel, for d ≥ 1, (Xℓ)ℓ∈Zd will be a real random field (r.f.) indexed by Zd defined
99
+ on a probability space (Ω, F, P) with common probability distribution function F. The
100
+ expectation on (Ω, P) is denoted by E. We consider in particular the case of a r.f. of i.i.d.
101
+ r.v.’s or of stationary associated r.v.’s.
102
+ Let (zk)∞
103
+ k=0 be a sequence in Zd. The process obtained by sampling the random field along
104
+ (zk) is Wn(s) := �n−1
105
+ k=0[1Xzk≤s − F(s)].
106
+ We will call Wn(s) “empirical process sampled along (zk)”, or simply “sampled empirical
107
+ process”. A general question is whether the above results (A), (B) extend to the sampled
108
+ empirical process Wn(s), in particular when (zk) is given by another stationary process
109
+ with values in Zd.
110
+ In Section 1, we give conditions on (zk) implying that (A) and (B) are still valid for
111
+ an empirical process sampled along (zk) in different cases: independent, associated or
112
+ weakly correlated random variables. The conditions are expressed in terms of the following
113
+ quantities associated to the sequence (zk) in Zd: local time, maximal local time and
114
+ number of self-intersections (up to time n) defined, for n ≥ 1, by
115
+ Nn(ℓ) := #{0 ≤ k ≤ n − 1 : zk = ℓ},
116
+ Mn := max
117
+
118
+ Nn(ℓ), Vn := #{0 ≤ j, k ≤ n − 1 : zj = zk}.
119
+ (1)
120
+
121
123
+ They satisfy \sum_{\ell} N_n(\ell) = n and n \le V_n = \sum_{\ell} N_n^2(\ell) \le n M_n \le n^2.
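+ As a purely illustrative aside (not part of the paper), the quantities in (1) and the inequalities above can be checked numerically for a sample path of a simple random walk; the sketch below is one such check.
+ # Sketch: local times N_n(l), maximal local time M_n and number of
+ # self-intersections V_n for a simple-random-walk sample in Z.
+ from collections import Counter
+ import random
+ 
+ random.seed(0)
+ n = 1000
+ steps = [random.choice((-1, 1)) for _ in range(n)]
+ z = [0]
+ for s in steps[:-1]:
+     z.append(z[-1] + s)                   # z_0, ..., z_{n-1}
+ 
+ N = Counter(z)                            # local times N_n(l)
+ M_n = max(N.values())                     # maximal local time
+ V_n = sum(v * v for v in N.values())      # V_n = sum_l N_n(l)^2
+ 
+ assert sum(N.values()) == n and n <= V_n <= n * M_n <= n * n
+ print(M_n, V_n)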
127
+ In the other sections, (zk) is given by a stationary process (or equivalently by the sequence
128
+ (Skf(x))k≥1 of ergodic sums of a function f over a dynamical system).
129
+ The conditions found in Section 1 lead to study the local times, maximum number of visits,
130
+ number of self-intersections for the sequence (Skf(x)). General remarks are presented in
131
+ Section 2. Then in Section 3, we consider two families of examples: random walks and
132
+ some ergodic sums over a rotation.
133
+ The Glivenko-Cantelli theorem along ergodic sums (extension of (A)) is strongly related
134
+ to random ergodic theorems, in particular to results in [23] and [25]. This is discussed in
135
+ the last Section 4.
136
+ Finally let us mention the quenched FCLT for the 2-parameters process
137
+ Wn(s, t) :=
138
+ [nt]−1
139
+
140
+ k=0
141
+ [1XZk(x)≤s − F(s)], (s, t) ∈ [0, 1]2.
142
+ When (Xℓ) is a r.f. of i.i.d. r.v.’s indexed by Z2 and when the sampling is provided by a
143
+ 2-dimension centered random walk (Zk) with a moment of order 2, the weak convergence
144
+ for a.e. x toward a Kiefer-M¨uller process can be shown. This will be the content of a
145
+ forthcoming paper.
146
+ Acknowledgements. Part of this research was done during visits of the first author to
147
+ the IRMAR at the University of Rennes 1 and of the second author to the Center for
148
+ Advanced Studies in Mathematics at Ben Gurion University. The authors are grateful to
149
+ their hosts for their support.
150
+ 1. General results on the empirical process along a sub-sequence
151
+ 1.1. Preliminaries. In this subsection, results on the empirical process along a sub-
152
+ sequence are shown for independent variables, as well for some of them for wider classes
153
+ (associated, PDQ and weakly correlated random variables). We start by recalling some
154
+ notions and auxiliary results.
155
+ 1) Associated variables
156
+ Definition (cf. [17]): A finite set of real random variables T = (T1, T2, . . . , Tn) is said to
157
+ be associated if Cov[f(T), g(T)] ≥ 0, for every coordinate-wise non-decreasing functions
158
+ f = f(x1, ..., xn) and g = g(x1, ..., xn) for which E[f(T)], E[g(T)], E[f(T) g(T)] exist. An
159
+ infinite set of random variables is associated if any finite subset of it is associated.
160
+ Association of random variables is preserved under taking subsets and forming unions of
161
+ independent sets (of associated random variables). In particular a family of independent
162
+ variables is associated.
163
+
164
+ 4
165
+ GUY COHEN AND JEAN-PIERRE CONZE
166
+ Clearly, orthogonal associated random variables are independent. Examples of (non inde-
167
+ pendent) stationary associated processes with absolutely summable series of correlations
168
+ are provided by some Ising models. References to such examples of stationary Zd random
169
+ fields which satisfies the FKG inequalities and with absolutely summable correlations
170
+ can be found in Newman’s paper [26]. Notice that the FKG inequalities expresses the
171
+ association property of the r.v.’s.
172
+ 2) PQD variables
173
+ Two r.v.’s X, Y are called (cf. [24]) positively quadrant dependent (PQD) if,
174
+ P(X > x, Y > y) ≥ P(X > x) P(Y > y), ∀x, y ∈ R.
175
+ The property is preserved by centering. Any pairwise associated r.v.’s are pairwise PQD.
176
+ Pairwise independent random variables are pairwise PQD associated.
177
+ Two random variables X and Y are PQD if and only if for every non-decreasing functions
178
+ f and g, Cov(f(X), g(Y )) ≥ 0 (whenever the covariance exists) ([17, Theorem 4.4]).
179
+ 3) We will use the following results:
180
+ a) Maximal inequality of Newman and Wright [27, Inequality (12)]:
181
+ If (Wi) is a sequence of centered associated, square integrable random variables, it holds:
182
+ P( max
183
+ 1≤j≤n |
184
+ j
185
+
186
+ i=1
187
+ Wi| ≥ λ ∥
188
+ n
189
+
190
+ i=1
191
+ Wi∥2) ≤ 2P(|
192
+ n
193
+
194
+ i=1
195
+ Wi| ≥ (λ −
196
+
197
+ 2) ∥
198
+ n
199
+
200
+ i=1
201
+ Wi∥2), ∀λ ≥ 0.
202
+ (2)
203
+ b) Hoeffding’s identity (see [2, Theorem 3.1])
204
+ Let X, Y be random variables with finite second moments. For any absolutely continuous
205
+ functions f, g on R, such that E[f 2(X) + g2(Y )] < ∞, it holds
206
+ Cov(f(X), g(Y )) =
207
+ � ∞
208
+ −∞
209
+ � ∞
210
+ −∞
211
+ f ′(x)g′(y)[P(X > x, Y > y) − P(X > x)P(Y > y)]dxdy.
212
+ In particular, if X, Y are PDQ random variables, if |f ′|, |g′| ≤ M a.e., we have
213
+ |Cov(f(X), g(Y ))| ≤ M2Cov(X, Y ).
214
+ 4) Uniformity in the analogues of Glivenko-Cantelli theorem will follow from the lemma:
215
+ Lemma 1.1. [7, Lemma, p 140] Let Fn, F be a family of right continuous distributions on
216
+ R. Assume that, for each point x in a dense countable set Q ⊂ R, we have Fn(x) → F(x).
217
+ Let J be the set of jumps of F and assume that Fn(x)−Fn(x−) → F(x)−F(x−) for every
218
+ x ∈ J. Then Fn(x) → F(x) uniformly in R.
219
+ A strong law of large numbers
220
+ First we state a law of large numbers for bounded r.v.’s valid under weak hypotheses.
221
+
222
+ EMPIRICAL PROCESS SAMPLED ALONG A STATIONARY PROCESS
223
+ 5
224
+ Let (Uℓ)ℓ∈Zd be a r.f. indexed by Zd of square integrable r.v’s on a probability space
225
+ (Ω, F, P). Let (zk)k≥0 be a sequence in Zd, d ≥ 1, with numbers of self-intersections
226
+ Vn, n ≥ 1. The partial sums along (zk) are denoted by Sn :=
227
+ n−1
228
+
229
+ k=0
230
+ Uzk.
231
+ By the Cauchy-Schwarz inequality, if �
232
+ ℓ supr |⟨Ur+ℓ, Ur⟩| < +∞, if holds for a finite
233
+ constant C0:
234
+
235
+ n−1
236
+
237
+ i=0
238
+ Uzi∥2
239
+ 2 =
240
+
241
+
242
+
243
+ r
244
+ Nn(r + ℓ)Nn(r)⟨Ur+ℓ, Ur⟩ ≤ Vn
245
+
246
+
247
+ sup
248
+ r |⟨Ur+ℓ, Ur⟩| = C0Vn.
249
+ (3)
250
+ In particular if the r.f. is stationary and the series of correlations is absolutely summable
251
+ (i.e., �
252
+ ℓ∈Zd |⟨X0, Xℓ⟩| < +∞), then the spectral density of the r.f.
253
+ exists and is the
254
+ continuous non-negative function ρ on Td with Fourier coefficients
255
+
256
+ Td e2πi⟨ℓ,t⟩ ρ(t) dt =
257
+ ⟨X0, Xℓ⟩ and it holds:
258
+ ∥Sn∥2
259
+ 2 = ∥
260
+ n−1
261
+
262
+ i=0
263
+ Uzi∥2
264
+ 2 ≤ Vn
265
+
266
+
267
+ |⟨Uℓ, U0⟩|.
268
+ (4)
269
+ Proposition 1.2. Suppose the r.v.’s Uℓ on (Ω, P) centered and uniformly bounded by the
270
+ same constant K, ∥Uℓ∥∞ ≤ K, ∀ℓ. Assume that (zk) is such that
271
+ Vn ≤ C1
272
+ n2
273
+ (log n)β , for constants C1, β.
274
+ (5)
275
+ 1) Then, if β > 1 and
276
+
277
+ ℓ∈Zd
278
+ sup
279
+ r∈Zd |⟨Ur+ℓ, Ur⟩| < +∞,
280
+ 2) or if β > ζ for some ζ ∈ [1, 2] and the r.f. (Uℓ) is stationary with
281
+
282
+ ℓ∈Zd
283
+ |⟨Uℓ, U0⟩|ζ < ∞,
284
+ the (strong) LLN holds: Sn(ω)
285
+ n
286
+ → 0, for P-a.e ω.
287
+ Proof. 1) For convenience, if t is in R+, we define St as S[t]. From (3) it follows
288
+
289
+ (|Sn|
290
+ n )2 dP ≤ C0
291
+ Vn
292
+ n2 ≤ C0C1
293
+ 1
294
+ (log n)β .
295
+ Therefore, putting β = 1 + η and α = 1 − η/2 (which implies αβ > 1) we have
296
+
297
+ k
298
+
299
+ (|S2kα|
300
+ 2kα )2 dP ≤ C0C1
301
+
302
+ k
303
+ 1
304
+ (log 2kα)β = C′ �
305
+ k
306
+ 1
307
+ kαβ < +∞;
308
+ hence:
309
+ lim
310
+ k→+∞
311
+ S2kα
312
+ 2kα = 0, a.e.
313
+ For n ≥ 1, let kn be such that 2(kn)α ≤ n < 2(kn+1)α (that is: kn = [(log2 n)1/α]).
314
+ We put qn := 2(kn+1)α − 2(kn)α and pn = n − 2(kn)α ≤ qn.
315
+
316
+ 6
317
+ GUY COHEN AND JEAN-PIERRE CONZE
318
+ For qn, the following estimate holds: qn = 2(kn)α(2(kn+1)α−(kn)α − 1) ∼ C′′ 2(kn)α
319
+ (kn)1−α.
320
+ Using the uniform boundedness of the r.v.’s, we can write:
321
+ |Sn
322
+ n − S2(kn)α
323
+ 2(kn)α | = |S2(kn)α + �2(kn)α+pn
324
+ i=2(kn)α
325
+ Uzi
326
+ 2(kn)α + pn
327
+ − S2(kn)α
328
+ 2(kn)α | = |2(kn)α �2(kn)α+pn
329
+ i=2(kn)α
330
+ Uzi − pnS2(kn)α
331
+ 2(kn)α(2(kn)α + pn)
332
+ |
333
+ ≤ 2(kn)α �2(kn)α+pn
334
+ i=2(kn)α
335
+ |Uzi| + pn|S2(kn)α|
336
+ 2(kn)α(2(kn)α + pn)
337
+ ≤ 2(kn)α �2(kn)α+qn
338
+ i=2(kn)α |Uzi| + qn|S2(kn)α|
339
+ 2(kn)α(2(kn)α)
340
+ ≤ qnK2(kn)α + qn|S2(kn)α|
341
+ 2(kn)α(2(kn)α)
342
+ =
343
+ qn
344
+ 2(kn)α (K + |S2(kn)α|
345
+ 2(kn)α ). ≤
346
+ C
347
+ 2(kn)1−α(K + |S2(kn)α|
348
+ 2(kn)α ) → 0.
349
+ 2) We consider now the stationary case. Since ζ = 1 is special case of 1), we assume
350
+ ζ ∈]1, 2]. We put β = ζ + η, where η is > 0 in view of the hypothesis.
351
+ First, suppose that ζ = 2. Then under the hypothesis, the r.f. has a spectral measure νϕ
352
+ absolutely continuous with respect to the Lebesgue measure λ on the torus with a density
353
+ ρ ∈ L2(dt) given by the Fourier series ρ(t) = �
354
+ ℓ∈Zd⟨Uℓ, U0⟩ e2iπ⟨ℓ,t⟩.
355
+ Using the inequality λ{ρ > Mn} ≤ M−2
356
+ n ∥ρ∥2
357
+ 2, we can write:
358
+ ∥Sn∥2
359
+ 2
360
+ n2
361
+ = 1
362
+ n2
363
+
364
+ Td |
365
+ n−1
366
+
367
+ j=0
368
+ e2πi⟨zj,t⟩|2 dνϕ(t) ≤ Mn
369
+ n2
370
+
371
+ Td |
372
+ n−1
373
+
374
+ j=0
375
+ e2πi⟨zj,t⟩|2 dt +
376
+
377
+ ρ>Mn
378
+ ρ dt
379
+
380
+ Mn
381
+ Vn
382
+ n2 + (λ{ρ > Mn})
383
+ 1
384
+ 2∥ρ∥2 ≤ Mn
385
+ Vn
386
+ n2 + M−1
387
+ n ∥ρ∥2
388
+ 2.
389
+ Taking Mn = (log n)1+ 1
390
+ 2 η, we obtain the bound
391
+ 1
392
+ n2 ∥
393
+ n−1
394
+
395
+ j=0
396
+ Uzj∥2
397
+ 2 ≤
398
+ C
399
+ (log n)1+ 1
400
+ 2 η
401
+ and then we finish the proof as in 1).
402
+ Now, suppose that
403
+
404
+ ℓ∈Zd
405
+ |⟨Uℓ, U0⟩|ζ < ∞ with 1 < ζ < 2. The spectral density ρ exists
406
+ and is in L2(λ), since
407
+
408
+ ℓ∈Zd
409
+ |⟨Uℓ, U0⟩|2 < ∞. Moreover it belongs to Lζ′(λ) where ζ, ζ′ are
410
+ conjugate exponents (see: [31], p. 102, or [21] Th. 31.22), and it satisfies:
411
+ ∥ρ∥ζ′ ≤ (
412
+
413
+ ℓ∈Zd
414
+ |⟨Uℓ, U0⟩|ζ)1/ζ.
415
+ H¨older’s inequality implies:
416
+
417
+ ρ>Mn ρ dt ≤ (λ{ρ > Mn})1/ζ∥ρ∥ζ′. As
418
+ λ{ρ > Mn} ≤ M−ζ′
419
+ n
420
+
421
+ ρζ′ dt = M−ζ′
422
+ n
423
+ ∥ρ∥ζ′
424
+ ζ′,
425
+
426
+ EMPIRICAL PROCESS SAMPLED ALONG A STATIONARY PROCESS
427
+ 7
428
+ it follows:
429
+
430
+ ρ>Mn
431
+ ρ dt ≤ M−ζ′/ζ
432
+ n
433
+ ∥ρ∥1+ζ′/ζ
434
+ ζ′
435
+ .
436
+ Therefore, we obtain
437
+ 1
438
+ n2
439
+
440
+ Td |
441
+ n−1
442
+
443
+ j=0
444
+ e2πi⟨zj,t⟩|2 dνϕ(t) ≤ Mn
445
+ Vn
446
+ n2 +
447
+
448
+ ρ>Mn
449
+ ρ dt ≤ Mn
450
+ Vn
451
+ n2 + M−ζ′/ζ
452
+ n
453
+ ∥ρ∥1+ζ′/ζ
454
+ ζ′
455
+ .
456
+ Now we take Mn such that : Mn/(log n)β = M−ζ′/ζ
457
+ n
458
+ , i.e. Mn = (log n)β/ζ′. We get
459
+ 1
460
+ n2 ∥
461
+ n−1
462
+
463
+ j=0
464
+ Uzj∥2
465
+ 2 ≤
466
+ C
467
+ (log n)β(1−1/ζ′) =
468
+ C
469
+ (log n)β/ζ =
470
+ C
471
+ (log n)1+η/ζ with η > 0,
472
+ and the end of the proof is as above.
473
+
474
+ Remarks 1.3. 1) Let us give an example of a non stationary r.f. (Uℓ) which satisfies
475
+ Condition 1) of the previous proposition.
476
+ We take (Uℓ = Vℓ Wℓ, ℓ ∈ Zd), where (Vℓ) and (Wℓ) are two r.f.’s independent from
477
+ each other, with (Vℓ) centered stationary and such that �
478
+ ℓ∈Zd |⟨Vℓ, V0⟩| < ∞, and (Wℓ)
479
+ satisfying supℓ,p |⟨Wℓ+p, Wℓ⟩| < ∞.
480
+ The r.f. (Wℓ) can be viewed as a (multiplicative) noise (which can be non stationary)
481
+ independent from the r.f. (Uℓ). Clearly the condition in 1) is satisfied.
482
+ 2) For a stationary r.f. (Uℓ) with a bounded spectral density (but with a series of corre-
483
+ lations which may be not absolutely summable), then like in 1) the condition β > 1 is
484
+ sufficient for the conclusion of the theorem.
485
+ Now, we give a pointwise bound for the sampled sums, first for i.i.d. r.v.’s, then for a
486
+ stationary random field (Uℓ)ℓ∈Zd of associated r.v.’s.
487
+ Proposition 1.4. 1) Suppose that the r.v.’s Uℓ, ℓ ∈ Zd, are i.i.d., centered, uniformly
488
+ bounded by a constant K, ∥U0∥∞ ≤ K, and that E|U0|2 = 1. Then it holds
489
+ lim sup
490
+ n
491
+ |Sn|
492
+ √Vn (2 log log n)
493
+ 1
494
+ 2 ≤ K, P-a.e.
495
+ (6)
496
+ If Vn = o(n2 (log log n)−1), then lim
497
+ n
498
+ Sn
499
+ n = 0, P-a.e.
500
+ 2) Suppose the random field stationary and the r.v.’s Uℓ centered associated.
501
+ a) For all ε > 0, it holds, with σn := ∥ �n−1
502
+ i=0 Uzi∥2:
503
+ lim sup
504
+ n
505
+ |Sn|
506
+ σn (log σn)
507
+ 1
508
+ 2+ε ≤ 1, P-a.e.
509
+ (7)
510
+ b) If moreover the r.f. has a summable series of correlations, then, for all ε > 0,
511
+ |Sn| = O(
512
+
513
+ Vn (log n)
514
+ 1
515
+ 2+ε), P-a.e.
516
+ (8)
517
+
518
+ 8
519
+ GUY COHEN AND JEAN-PIERRE CONZE
520
+ If Vn ≤ Cn2 (log n)−(1+η)) for some constants C, η > 0, then lim
521
+ n
522
+ Sn
523
+ n = 0, P-a.e.
524
+ Proof.
525
+ A) Recall that σn = ∥ �n−1
526
+ i=0 Uzi∥2. In case 2) we may assume ∥U0∥2 = 1, and
527
+ then in all cases σn ≤ n and by association σn ≥ n
528
+ 1
529
+ 2. We have in case 1) σn = √Vn and
530
+ in case 2b), for associated variables, by (4): σn ≤ (�
531
+ p⟨Up, U0⟩)
532
+ 1
533
+ 2 √Vn. By association, σn
534
+ is non-decreasing and tends to infinity.
535
+ For ρ > 1, let nk = nk(ρ) be a strictly increasing sequence of integers such that ρk <
536
+ σnk ≤ ρk+1. Since 1 ≤ σ2
537
+ k+1 − σ2
538
+ k ≤ 1 + 2k, such a sequence exists after a certain rank. By
539
+ the choice of (nk) we have
540
+ ρk < σnk ≤ ρk+1 < σnk+1 ≤ ρk+2.
541
+ (9)
542
+ Moreover, we have σnk+1/σnk ≤ ρ2 and, since σn ≤ n, nk ≥ ρk.
543
+ Let (λn) be a non
544
+ decreasing sequence of positive numbers such that
545
+ λnk >
546
+
547
+ 2, lim supk λnk+1/λnk ≤ 1,
548
+
549
+ k P
550
+ ��� �nk−1
551
+ i=0 Uzi
552
+ �� ≥ (λnk −
553
+
554
+ 2) ∥ �nk−1
555
+ i=0 Uzi∥2
556
+
557
+ < ∞.
558
+ (10)
559
+ By the previous inequalities and by Newman-Wright’s inequality (2) for the sequence of
560
+ centered associated random variables 1 (Wi) = (Uzi), we have
561
+
562
+ k
563
+ P(
564
+ max
565
+ 0≤j≤nk−1
566
+ ��
567
+ j
568
+
569
+ i=0
570
+ Uzi
571
+ �� ≥ λnk ∥
572
+ nk−1
573
+
574
+ i=0
575
+ Uzi∥2) ≤ 2
576
+
577
+ k
578
+ P(|
579
+ nk−1
580
+
581
+ j=0
582
+ Uzj| ≥ (λnk−
583
+
584
+ 2)∥
585
+ nk−1
586
+
587
+ j=0
588
+ Uzj∥2) < +∞.
589
+ By the Borel-Cantelli lemma, it follows:
590
+ lim sup
591
+ k
592
+ max0≤j≤nk+1−1
593
+ �� �j
594
+ i=0 Uzi
595
+ ��
596
+ λnk+1 σnk+1
597
+ ≤ 1, P-a.e.
598
+ Hence P-a.e.
599
+ lim sup
600
+ k
601
+ max0≤j<nk+1−1 | �j
602
+ i=0 Uzi|
603
+ λnkσnk
604
+ ≤ lim sup
605
+ k
606
+ �λnk+1
607
+ λnk
608
+ σnk+1
609
+ σnk
610
+
611
+ ≤ ρ2.
612
+ (11)
613
+ Observe that, if |Si| > ρ2λiσi, for some i ∈ [nk, nk+1[, then max0≤j<nk+1 |Sj| > ρ2λnkσnk.
614
+ This shows:
615
+ {|Sn| > ρ2λnσn, i.o.} ⊂ { max
616
+ 0≤j<nk+1 |Sj| > ρ2λnkσnk, i.o.}.
617
+ By this inclusion and (11) it follows: lim sup
618
+ n
619
+ | �n−1
620
+ i=0 Uzi|
621
+ λnσn
622
+ ≤ ρ2, P-a.e.
623
+ Taking ρ = ρn with ρn ↓ 1, we obtain
624
+ lim sup
625
+ n
626
+ | �n−1
627
+ i=0 Uzi|
628
+ λnσn
629
+ ≤ 1, P-a.e.
630
+ (12)
631
+ B) Choice of a sequence (λk) such that (10) is satisfied.
632
+ 1as it is a subset of a set of associated r.v.’s
633
+
634
+ EMPIRICAL PROCESS SAMPLED ALONG A STATIONARY PROCESS
635
+ 9
636
+ Case 1)
637
+ Suppose that the Uk’s are i.i.d. r.v.’s. Recall that if (Wj, j ≥ 1) are centered bounded
638
+ sequence of independent random variables on (Ω, P), for any finite sum of the Wj’s it
639
+ holds by Hoeffding’s inequality for differences of martingale ([20]), for every ε > 0:
640
+ P(|
641
+
642
+ j
643
+ Wj| > ε) ≤ 2 exp(−1
644
+ 2
645
+ ε2
646
+
647
+ j ∥Wj∥2∞
648
+ ).
649
+ (13)
650
+ We apply it to the family (Nn(ℓ)Uℓ, ℓ ∈ Zd). From the hypotheses, we have:
651
+
652
+
653
+ ∥Nn(ℓ)Uℓ∥2
654
+ ∞ ≤ K2 �
655
+
656
+ N2
657
+ n(ℓ) = K2Vn.
658
+ With ε = (λ −
659
+
660
+ 2)√Vn, (13) implies:
661
+ P
662
+ ��� �
663
+
664
+ Nn(ℓ) Uℓ
665
+ �� ≥ (λ −
666
+
667
+ 2)
668
+
669
+ Vn
670
+
671
+ ≤ 2 exp
672
+
673
+ − 1
674
+ 2(λ −
675
+
676
+ 2)2 Vn
677
+ K2Vn
678
+
679
+ = 2 exp
680
+
681
+
682
+ 1
683
+ 2K2(λ −
684
+
685
+ 2)2�
686
+ .
687
+ Let c, δ be such that c > δ > K2. In the previous inequality, we take
688
+ λ = λn = (2c log log n)
689
+ 1
690
+ 2.
691
+ Let k(c, δ) be such that λnk >
692
+
693
+ 2 and c(1−
694
+ 2
695
+ √c log log nk ) ≥ δ > 1, for k ≥ k(c, δ). We have:
696
+
697
+
698
+ k=k(c,δ)
699
+ P
700
+ ���
701
+ nk−1
702
+
703
+ i=1
704
+ Uzi
705
+ �� ≥ (λnk −
706
+
707
+ 2) ∥
708
+ nk−1
709
+
710
+ i=1
711
+ Uzi∥2
712
+
713
+ ≤ 2
714
+
715
+
716
+ k=k(c,δ)
717
+ exp
718
+
719
+
720
+ 1
721
+ 2K2(λnk −
722
+
723
+ 2)2�
724
+
725
+ 2
726
+ exp K2
727
+
728
+
729
+ k=k(c,δ)
730
+ exp
731
+
732
+ − c
733
+ K2 log log nk)(1 −
734
+ 2
735
+ √c log log nk
736
+ )
737
+
738
+
739
+ 2
740
+ exp K2
741
+
742
+
743
+ k=k(c,δ)
744
+ 1
745
+ (k log ρ)
746
+ δ
747
+ K2 < ∞.
748
+ Now we can apply (12). It follows:
749
+ lim sup
750
+ n
751
+ | �n−1
752
+ i=0 Uzi|
753
+
754
+ 2c(log log n)Vn
755
+ ≤ 1, P-a.e.
756
+ Taking c = cn with cn ↓ K2, we get (6).
757
+ Case 2)
758
+ For general associated r.v.’s, we use simply that P
759
+ ��� �n−1
760
+ i=0 Uzi
761
+ �� ≥ λ ∥ �n−1
762
+ i=0 Uzi∥2
763
+
764
+
765
+ 1
766
+ λ2.
767
+ We take λn = (log σn)
768
+ 1
769
+ 2 +ε, with ε > 0. By (9) we have λnk ≥ (k log ρ)
770
+ 1
771
+ 2+ε, and therefore,
772
+ for a constant C1: �
773
+ k
774
+ 1
775
+ λ2nk ≤ C1
776
+
777
+ k k−(1+2ε) < +∞; hence condition (10).
778
+
779
+ 10
780
+ GUY COHEN AND JEAN-PIERRE CONZE
781
+ Moreover we have k log ρ ≤ log σnk ≤ log σnk+1 ≤ (k + 2) log ρ; hence
782
+ λnk+1
783
+ λnk
784
+ =
785
+ �log σnk+1
786
+ log σnk)
787
+ � 1
788
+ 2+ε ≤ (1 + 2
789
+ k)
790
+ 1
791
+ 2+ε → 1.
792
+ By (12), this proves (7) in 2a)
793
+ For case 2b) we have σ2
794
+ n ≤ Vn
795
+
796
+ p⟨Up, U0⟩ and σn ≤ n, hence it yields (8). The last
797
+ conclusion in case 2b) is now clear. Remark that it follows also from Proposition 1.2.
798
+
799
+ 1.2. A Glivenko-Cantelli type theorem.
800
+ Empirical process
801
+ Let us consider a random field of r.v.’s (Xℓ, ℓ ∈ Zd) on (Ω, F, P) with common distribution
802
+ function F. Let (zk) ⊂ Zd be a sequence with self-intersections (Vn).
803
+ Notation. We say that (Xℓ, ℓ ∈ Zd) satisfies a Glivenko-Cantelli theorem along a sequence
804
+ (zk) in Zd if
805
+ lim
806
+ n sup
807
+ s | 1
808
+ n
809
+ n
810
+
811
+ k=1
812
+ 1(−∞,s](Xzk(ω)) − F(s)| = 0, for P-a.e.ω.
813
+ We show now a Glivenko-Cantelli theorem along a sequence (zk) under various hypotheses
814
+ on (zk) and on (Xℓ) (mixing, i.i.d., associated or PQD).
815
+ Le (Xℓ, ℓ ��� Zd) be a r.f. Denoting by σ(Xℓ) the σ-algebra generated by the random
816
+ variable Xℓ, we define a coefficient of mixing by
817
+ γ(ℓ) := sup
818
+ r∈Zd
819
+ sup
820
+ A∈σ(Xr), B∈σ(Xℓ+r)
821
+ |P(A ∩ B) − P(A)P(B)|.
822
+ (14)
823
+ Observe that for every s, t ∈ R it holds:
824
+ sup
825
+ r∈Zd |⟨1Xr≤s − P(Xr ≤ s), 1Xℓ+r≤t − P(Xℓ+r ≤ t)⟩| ≤ γ(ℓ), ∀ℓ ∈ Zd.
826
+ (15)
827
+ By (15) and Proposition 1.2, we get:
828
+ Theorem 1.5. Let (zk) be such that Vn ≤ C1
829
+ n2
830
+ (log n)β , for constants C1 > 0, β.
831
+ If �
832
+ ℓ∈Zd γ(ℓ) < +∞ and β > 1,
833
+ or if the r.f. is stationary and �
834
+ ℓ∈Zd γ(ℓ)ζ < +∞, for some ζ ∈ [1, 2] and β > ζ,
835
+ then (Xℓ, ℓ ∈ Zd) satisfies a Glivenko-Cantelli theorem along (zk).
836
+ Using Proposition 1.4, we consider now the i.i.d. and associated cases.
837
+ Theorem 1.6. a) If (Xℓ)ℓ∈Zd is a r.f. of i.i.d. r.v.’s, then under the condition Vn =
838
+ o(n2 (log log n)−1) it satisfies a Glivenko-Cantelli theorem along (zk).
839
+ b) If (Xℓ)ℓ∈Zd is a strictly stationary r.f. of associated r.v.’s such that �
840
+ ℓ⟨Xℓ, X0⟩ con-
841
+ verges, then, under the condition Vn = O(n2 log−(1+η) n) for some η > 0, for a.e. ω, we
842
+
843
+ EMPIRICAL PROCESS SAMPLED ALONG A STATIONARY PROCESS
844
+ 11
845
+ have for each continuity point s of F:
846
+ lim
847
+ n
848
+ 1
849
+ n
850
+ n−1
851
+
852
+ k=0
853
+ 1(−∞,s](Xzk(ω)) = F(s).
854
+ (16)
855
+ If F is continuous, the convergence is uniform in s.
856
+ Proof.
857
+ a) Denote by Fn(s)(ω) the means
858
+ 1
859
+ n
860
+ �n−1
861
+ k=0 1(−∞,s](Xzk(ω)). Let Q be a dense
862
+ countable set of continuity points of F.
863
+ For every s ∈ Q, by the assumption on Vn and Proposition 1.4, there is a null set N(s)
864
+ such that, for a sequence εn tending to 0, for every ω ̸∈ N(s),
865
+ |Fn(s)(ω) − F(s)| ≤ εn(Vn log log n)− 1
866
+ 2|
867
+ n−1
868
+
869
+ k=0
870
+
871
+ 1(−∞,s](Xzk) − F(s)
872
+
873
+ | → 0.
874
+ Then Fn(s)(ω) → F(s) for every ω outside the null set N := ∪s∈QN(s) and for s ∈ Q.
875
+ Similarly by Proposition 1.4, for every s in the set J of jumps of F, we have Fn(s)(ω) −
876
+ Fn(s−)(ω) → F(s) − F(s−) a.e. As J is countable, this convergence holds for every s ∈ J
877
+ and ω ̸∈ ˜N, where ˜N is a null set.
878
+ Outside the null set N ∪ ˜N, Lemma 1.1 applied with Q and J implies the result.
879
+ b) We consider now the case of a strictly stationary random field of associated r.v.’s. Let
880
+ s be a continuity point of the common distribution F. For every ǫ > 0 there exists δ > 0,
881
+ such that F(s + δ) − F(s − δ) ≤ ǫ. As in [30], for δ > 0 and s, define the approximated
882
+ step function hδ,s by hδ,s(x) = 0, if x ≤ s − δ and hδ,s(x) = 1 + x−s
883
+ δ
884
+ if s − δ ≤ x ≤ s,
885
+ otherwise, hδ,s(x) = 1. It is a non decreasing continuous function with h′
886
+ δ,s(x) = 1/δ for
887
+ s−δ < x < s. It follows from the above Hoeffding’s identity applied to this approximated
888
+ step function (see [2]):
889
+ Cov(hδ,s(Xℓ), hδ,s(X0)) ≤ δ−2⟨Xℓ, X0⟩,
890
+ Cov(hδ,s+δ(Xℓ), hδ,s+δ(X0)) ≤ δ−2⟨Xℓ, X0⟩.
891
+ By association and non decreasing,
892
+
893
+ hδ,s(Xℓ)
894
+
895
+ as well as
896
+
897
+ hδ,s+δ(Xℓ)
898
+
899
+ are stationary r.f.s
900
+ of associated r.v.’s, and we may apply Proposition 1.4 to their centered versions (also
901
+ associated). The condition simply reads, for τ = s, s + δ:
902
+
903
+
904
+ Cov(hδ,τ(Xℓ), hδ,τ(X0)) ≤ δ−2 �
905
+
906
+ ⟨Xℓ, X0⟩ < ∞.
907
+ We put Sn = �n−1
908
+ k=0 hδ,s(Xzk) and Sn = �n−1
909
+ k=0 hδ,s+δ(Xzk).
910
+ By hδ,s+δ(x) ≤ 1{x>s} ≤
911
+ hδ,s(x), it holds Sn ≤ �n−1
912
+ k=0 1(s,∞)(Xzk) ≤ Sn. Hence by Proposition 1.4, we have almost
913
+ everywhere 1
914
+ nSn → E[hδ,s+δ(X0)] and 1
915
+ nSn → E[hδ,s(X0)]. Since
916
+ E[hδ,s(X0)] ≤ F(s) − F(s − δ) + 1 − F(s) ≤ ǫ + 1 − F(s),
917
+ E[hδ,s+δ(X0)] ≥ 1 − F(s + δ) = 1 − F(s) − (F(s + δ) − F(s)) ≥ 1 − F(s) − ǫ,
918
+
919
+ 12
920
+ GUY COHEN AND JEAN-PIERRE CONZE
921
+ we conclude
922
+ 1 − F(s) − ǫ ≤ lim inf
923
+ n
924
+ 1
925
+ n
926
+ n−1
927
+
928
+ k=0
929
+ 1(s,∞)(Xzk) ≤ lim sup
930
+ n
931
+ 1
932
+ n
933
+ n−1
934
+
935
+ k=0
936
+ 1(s,∞)(Xzk) ≤ 1 − F(s) + ǫ.
937
+ Subtracting the 1’s and taking ǫ → 0, we get (16).
938
+
939
+ PQD variables.
940
+ The result shown for associated variables can be extended to the class of PDQ variables,
941
+ but with a stronger condition on the local times of the sequence (zk).
942
+ Proposition 1.7. Let (Uℓ) be a centered stationary random field of pairwise PQD r.v.’s
943
+ such that �
944
+ ℓ⟨Uℓ, U0⟩ converges. Let (zk) be a sequence of points with maximal local times
945
+ sequence (Mn). If �
946
+ n≥1
947
+ Mn
948
+ n2 < +∞, then 1
949
+ n(Uz0 + · · · + Uzn−1) converges a.e. to 0.
950
+ Proof. We apply the following result of [4]: let (Yj : j ≥ 1) be a sequence of pairwise
951
+ centered PQD r.v.’s.
952
+ with finite variance. If �
953
+ j≥1 j−2Cov(Yj, �j
954
+ i=1 Yi) converges and
955
+ supj E|Yj| < ∞, then n−1 �n
956
+ i=1 Yi → 0 a.e.
957
+ Taking for Yj the (still) pairwise PQD r.v.’s Uzj, we get the result, since Cov(Uzj, Uz1 +
958
+ · · · + Uzj) ≤ Mj
959
+
960
+ ℓ⟨U0, Uℓ⟩.
961
+
962
+ Now, we consider the empirical distribution.
963
+ Theorem 1.8. Let (Xℓ) be a centered strictly stationary random field of pairwise PQD
964
+ r.v.’s with distribution function F such that �
965
+ ℓ⟨Xℓ, X0⟩ converges. Let (zk) be a sequence
966
+ of points with maximal local times sequence (Mn). If �
967
+ n≥1
968
+ Mn
969
+ n2 < +∞, then for each
970
+ continuity point s of F, we have for a.e. ω: limn
971
+ 1
972
+ n
973
+ �n−1
974
+ k=0 1(−∞,s](Xzk(ω)) = F(s).
975
+ In particular, if F is continuous, the above convergence is uniform over s.
976
+ Proof. The r.f.s hδ,s(Xℓ) and hδ,s+δ(Xℓ) are still stationary pairwise PQD. The proof is
977
+ analogous to the proof of Theorem 1.6. For the last statement, we use Lemma 1.1.
978
+
979
+ Remark. If Mn = O(n (log n)−(1+η)), then Vn = O(n2 (log n)−(1+η)).
980
+ If Vn ≤ C
981
+ n2
982
+ (log n)β , with β > 2, then
983
+
984
+ n≥1
985
+ Mn
986
+ n2 < +∞.
987
+ As shown in Section 3, �
988
+ n≥1
989
+ Mn
990
+ n2 converges when the sampling is done along random
991
+ walks, but diverges in some examples of sampling along “deterministic” random walks.
992
+ 1.3. A sufficient condition for a FCLT for the sampled empirical process.
993
+ After a Glivenko-Cantelli theorem for sampled empirical processes, we consider now the
994
+ Functional Central Limit Theorem (FCLT). Let (zk) be in Zd, d ≥ 1, with the associated
995
+ quantities Nn(ℓ), Mn and Vn defined by (1).
996
+ Before restricting to a r.f. of i.i.d. r.v.’s, first we examine the variance in the more general
997
+ situation where the series of correlations is absolutely summable.
998
+
999
+ EMPIRICAL PROCESS SAMPLED ALONG A STATIONARY PROCESS
1000
+ 13
1001
+ Kernel associated to a sequence (zk) and variance.
1002
+ Let Kn be the kernel (which is a real even function on Td depending on n ≥ 0) defined
1003
+ by the equivalent formulas:
1004
+ Kn(t)
1005
+ =
1006
+ |
1007
+ n−1
1008
+
1009
+ k=0
1010
+ e2πi⟨zk,t⟩|2 = n + 2
1011
+ n−1
1012
+
1013
+ k=1
1014
+ n−k−1
1015
+
1016
+ j=0
1017
+ cos(2π⟨zk+j − zj, t⟩) = |
1018
+
1019
+ ℓ∈Zd
1020
+ Nn(ℓ) e2πi⟨ℓ,t⟩|2
1021
+ =
1022
+ n + 2
1023
+
1024
+
1025
+ �n−1
1026
+
1027
+ k=1
1028
+ n−k−1
1029
+
1030
+ j=0
1031
+ 1zk+j−zj=ℓ
1032
+
1033
+ cos(2π⟨ℓ, t⟩).
1034
+ (17)
1035
+ If (Xℓ, ℓ ∈ Zd) is a stationary r.f. such that �
1036
+ ℓ∈Zd |⟨Xℓ, X0⟩| < +∞, the spectral density
1037
+ ρ is continuous and we have:
1038
+
1039
+ |
1040
+ n−1
1041
+
1042
+ k=0
1043
+ Xzk|2dP =
1044
+
1045
+ Td Kn(t) ρ(t) dt ≤ ∥ρ∥∞Vn ≤ (
1046
+
1047
+ ℓ∈Zd
1048
+ |⟨Xℓ, X0⟩|)Vn.
1049
+ One can ask if there is an asymptotic variance, i.e., a limit for the normalised quantity
1050
+ V −1
1051
+ n
1052
+
1053
+ |
1054
+ n−1
1055
+
1056
+ k=0
1057
+ Xzk|2dP which is bounded if the series of correlations is absolutely summable.
1058
+ The existence of asymptotic variance is shown in [11] in the case of summation along a
1059
+ random walk. We will come back to the question of its positivity for transient random
1060
+ walks in Subsection 3.1.
1061
+ Functional Central limit Theorem in the i.i.d. case
1062
+ The following result gives a sufficient condition for a Functional Central limit Theorem
1063
+ (FCLT) along a sequence (zk) in the i.i.d. case.
1064
+ The standard Brownian bridge process W 0(s) is the centered Gaussian process W 0(s) :=
1065
+ W(s)−sW(1) in C(0, 1), where W(s) is the Wiener process. It has the properties W 0(0) =
1066
+ W 0(1) = 0 and E[W 0(s1)W 0(s2)] = s1 ∧ s2 − s1s2.
1067
+ Let (Xk)k∈Zd be i.i.d. random variables with common probability distribution F in [0, 1].
1068
+ We put W 0
1069
+ F = W 0 ◦ F. Let Yn(s) be the random element in D[0, 1] defined by
1070
+ Yn(s) =
1071
+ 1
1072
+ √Vn
1073
+ n−1
1074
+
1075
+ k=0
1076
+ [1Xzk≤s − F(s)] =
1077
+ 1
1078
+ √Vn
1079
+
1080
+ ℓ∈Zd
1081
+ Nn(ℓ) [1Xℓ≤s − F(s)].
1082
+ Theorem 1.9. Yn(s) →D[0,1] W 0
1083
+ F(s), if (zk) satisfies the condition
1084
+ lim
1085
+ n
1086
+ M2
1087
+ n
1088
+ Vn
1089
+ = 0,
1090
+ (18)
1091
+ Proof.
1092
+ The result follows from the Cram´er-Wold device, if we prove convergence of the
1093
+ finite dimensional distributions and tightness. The variance is
1094
+ E[Yn(s)]2 = 1
1095
+ Vn
1096
+
1097
+
1098
+ N2
1099
+ n(ℓ) E[1Xℓ≤s − F(s)]2 = σ2(s) = F(s)(1 − F(s)).
1100
+ (19)
1101
+
1102
+ 14
1103
+ GUY COHEN AND JEAN-PIERRE CONZE
1104
+ 1) Finite dimensional distributions. The convergence follows from Lindeberg’s theorem
1105
+ for triangular arrays of independent random variables as in [3, thm 7.2]. The Lindeberg’s
1106
+ condition for the triangular array of independent r.v.’s
1107
+ �Nn(ℓ)[1Xℓ≤s − F(s)]
1108
+ √Vn
1109
+
1110
+ ℓ,n follows
1111
+ from
1112
+ 1
1113
+ Vn
1114
+
1115
+
1116
+
1117
+ {Nn(ℓ)|1Xℓ≤s−F (s)|≥ ε√Vn}
1118
+ N2
1119
+ n(ℓ) |1Xℓ≤s − F(s)|2dP
1120
+
1121
+ 1
1122
+ Vn
1123
+
1124
+
1125
+ N2
1126
+ n(ℓ)
1127
+
1128
+ {supℓ Nn(ℓ)|1X0≤s−F (s)|≥ε√Vn}
1129
+ |1X0≤s − F(s)|2dP → 0,
1130
+ for every ε > 0, since Vn = �
1131
+ ℓ N2
1132
+ n(ℓ) and supℓ Nn(ℓ)
1133
+ √Vn
1134
+ → 0, by assumption.
1135
+ For the correlation of the process taken at s1 and s2, it holds by independence:
1136
+ E[Yn(s1)Yn(s2)]
1137
+ =
1138
+ 1
1139
+ Vn
1140
+
1141
+ ℓ1,ℓ2
1142
+ Nn(ℓ1)Nn(ℓ2)E[(1Xℓ1≤s1 − F(s1))(1Xℓ2≤s2 − F(s2))]
1143
+ =
1144
+ 1
1145
+ Vn
1146
+
1147
+
1148
+ N2
1149
+ n(ℓ)(F(s1 ∧ s2) − F(s1)F(s2)) = F(s1 ∧ s2) − F(s1)F(s2).
1150
+ This proves the convergence in distribution: Yn(s) → W 0
1151
+ F(s) for every s.
1152
+ Let us show now the convergence of the finite dimensional distributions. Starting with
1153
+ the asymptotic distribution of aYn(s1) + bYn(s2), by the above computation, we have
1154
+ E[(aYn(s1) + bYn(s2))2] =
1155
+ a2F(s1)(1 − F(s1)) + b2F(s2)(1 − F(s2)) + 2ab(F(s1 ∧ s2) − F(s1)F(s2)).
1156
+ (20)
1157
+ As above, it is easily seen that Lindeberg’s condition is satisfied for the triangular array
1158
+ defined by aYn(s1)+bYn(s2). It means that the asymptotic distribution of aYn(s1)+bYn(s2)
1159
+ is centered Gaussian with variance as computed above.
1160
+ Note that E[(aW 0(s1) + bW 0(s2))2] is also given by (20) above.
1161
+ Similarly, for every s1 ≤ · · · ≤ sr, it holds
1162
+ (Yn(s1), · · · , Yn(sr)) →dist (W 0
1163
+ F(s1), . . . , W 0
1164
+ F(sr)).
1165
+ Tightness. First we suppose F continuous. Following the method of [3], it is enough to
1166
+ show that for s ≤ t ≤ r, uniformly in n,
1167
+ E[(Yn(t) − Yn(s))2(Yn(r) − Yn(t))2] ≤ C(F(r) − F(s))2.
1168
+ Putting F(u, v) := F(v) − F(u), f(ℓ, u, v) := 1u<Xℓ≤v − F(u, v), for u < v, we compute
1169
+ E[(Yn(t) − Yn(s))2(Yn(r) − Yn(t))2] = 1
1170
+ V 2
1171
+ n
1172
+ E
1173
+ �� �
1174
+
1175
+ Nn(ℓ)f(ℓ, s, t)
1176
+ �2� �
1177
+
1178
+ Nn(ℓ)f(ℓ, t, r)
1179
+ �2�
1180
+ .
1181
+
1182
+ EMPIRICAL PROCESS SAMPLED ALONG A STATIONARY PROCESS
1183
+ 15
1184
+ By expansion and independence, the above expression is sum of three types of terms:
1185
+ 1
1186
+ V 2
1187
+ n
1188
+
1189
+
1190
+ N4
1191
+ n(ℓ) [A], 1
1192
+ V 2
1193
+ n
1194
+
1195
+ ℓ1,ℓ2
1196
+ N2
1197
+ n(ℓ1)N2
1198
+ n(ℓ2) [B], 1
1199
+ V 2
1200
+ n
1201
+
1202
+ ℓ1̸=ℓ2
1203
+ N2
1204
+ n(ℓ1)N2
1205
+ n(ℓ2) [C],
1206
+ with A = E[f 2(ℓ, s, t)f 2(ℓ, t, r)], B = E[f 2(ℓ1, s, t)] E[f 2(ℓ2, t, r)],
1207
+ C = E[f(ℓ1, s, t)f(ℓ1, t, r))] E[f(ℓ2, s, t)f(ℓ2, t, r))].
1208
+ By stationarity and since the intervals do not overlap, we have
1209
+ A
1210
+ = F(s, t)F 2(t, r) + F 2(s, t)F(t, r) − 3F 2(s, t)F 2(t, r),
1211
+ B
1212
+ = F(s, t)(1 − F(s, t)) · F(t, r)(1 − F(t, r)), C = F 2(s, t)F 2(t, r).
1213
+ Since 0 ≤ F(s, t), F(t, r), F(s, r) ≤ 1 and F(s, t), F(t, r) ≤ F(s, r), it follows
1214
+ A ≤ 2F 3(s, r) ≤ 2F 2(s, r), B ≤ F 2(s, r), C ≤ F 4(s, r) ≤ F 2(s, r).
1215
+ Recall that Vn = �
1216
+ ℓ N2
1217
+ n(ℓ). Using ∥ · ∥ℓ4 ≤ ∥ · ∥ℓ2 for the bound of the first term, we have
1218
+ for some fixed constant C > 0:
1219
+ E[(Yn(t) − Yn(s))2(Yn(r) − Yn(t))2] ≤ C(F(r) − F(s))2, ∀n.
1220
+ Hence by [3, Theorem 15.6], for non decreasing continuous F, the sequence of processes
1221
+ (Yn(s)) is tight in D(0, 1). This proves that, if F is continuous, then Yn →D(0,1) (W 0 ◦ F).
1222
+ Now, for a general F a classical method is to use a generalized inverse. Let us describe it
1223
+ briefly. We consider first the uniform empirical process. Let (ζk) be uniformly distributed
1224
+ i.i.d. r.v.’s. Denote the empirical process along (zk) with respect to (ζk) by Un(s). By
1225
+ applying what we have just proved for a continuous distribution, Un(s) →D(0,1) W 0(s).
1226
+ Now let F −1(t) := inf{s : t ≤ F(s)}. We get P(F −1(ζ0) ≤ s) = P(X0 ≤ s) = F(s), so
1227
+ Yn(s) =dist. Un(F(s)). We may then proceed as in Billingsley ([3, Theorem 5.1]) to deduce
1228
+ the FCLT for Yn(s) with W 0(F(s)) as limit.
1229
+
1230
+ 2. Local times for ergodic sums
1231
+ In the previous section about limit theorems for the empirical process sampled along (zk),
1232
+ we have found sufficient conditions on the quantities Vn and Mn associated to (zk). When
1233
+ (zk) is given by a “cocycle”, zk = Skf(x), one can ask if these conditions are satisfied.
1234
+ We start with some general facts and construct counterexamples for which condition (18)
1235
+ is not satisfied. In the next section, we will discuss two very different examples of cocycles:
1236
+ first the case of random walks, then “stationary random walks” generated by a rotation.
1237
+ 2.1. Auxiliary general results.
1238
+ First we introduce some notation and make general remarks.
1239
+
1240
+ 16
1241
+ GUY COHEN AND JEAN-PIERRE CONZE
1242
+ Notation 2.1. Let (X, B, µ) be a probability space and T a measure preserving trans-
1243
+ formation acting on X such that the dynamical system (X, B, µ, T) is ergodic.
1244
+ Let f be a measurable function on X with values in Zd, d ≥ 1. Its ergodic sums generated
1245
+ by the iteration of T, denoted by fk (or Skf), are
1246
+ fk(x) :=
1247
+ k−1
1248
+
1249
+ j=0
1250
+ f(T jx), k ≥ 1, f0(x) = 0.
1251
+ The sequence (fk(x), k ≥ 1) can be viewed as a “stationary random walk” defined on
1252
+ (X, B, µ). It will be called a “cocycle” and denoted by (µ, T, f) or simply (T, f).
1253
+ For x ∈ X, we put (cf. (1)) N0(x, ℓ) = 0 and, for n ≥ 1,
1254
+ Nn(T, f, x, ℓ)
1255
+ :=
1256
+ #{1 ≤ k ≤ n : fk(x) = ℓ}, ℓ ∈ Zd,
1257
+ Mn(T, f, x)
1258
+ :=
1259
+ max
1260
+ ℓ∈Zd Nn(T, f, x, ℓ),
1261
+ Vn(T, f, x)
1262
+ :=
1263
+ #{1 ≤ j, k ≤ n : fj(x) = fk(x)} =
1264
+
1265
+ ℓ∈Zd
1266
+ N2
1267
+ n(x, ℓ).
1268
+ Most of the time, we will omit T and f in the notation and write simply Nn(x, ℓ), Mn(x),
1269
+ Vn(x). We have �
1270
+ ℓ Nn(x, ℓ) = n and n ≤ Vn(x) ≤ n Mn(x).
1271
+ A question is to know if the following conditions hold for a.e. x:
1272
+ Vn(x) = o(n2 (log log n)−1) or Vn(x) ≤ C1
1273
+ n2
1274
+ (log n)β , with β > 1,
1275
+ (21)
1276
+ lim
1277
+ n
1278
+ M2
1279
+ n(x)
1280
+ Vn(x) = 0.
1281
+ (22)
1282
+ For a random walk this is related to a question studied in [16] and later in [15]: How
1283
+ many times does the walk revisit the most frequently visited site in the first n steps?
1284
+ Cylinder map.
1285
+ We denote by ˜Tf the map (sometimes called cylinder map) (x, ℓ) →
1286
+ (Tx, ℓ + f(x)) acting on X × Zd, endowed with the infinite invariant measure ˜µ defined
1287
+ as the product of µ by the counting measure on Zd.
1288
+ For ϕ : X × Zd → R the ergodic sums for the cylinder map are
1289
+ ˜Snϕ(x, ℓ) :=
1290
+ n−1
1291
+
1292
+ k=0
1293
+ ϕ( ˜T k
1294
+ f (x, ℓ)) =
1295
+ n−1
1296
+
1297
+ k=0
1298
+ ϕ(T kx, ℓ + fk(x)).
1299
+ With ϕ0 := 1X×{0}, it holds
1300
+ ˜Snϕ0(x, −ℓ) =
1301
+ n−1
1302
+
1303
+ k=0
1304
+ 1X×{0}(T kx, −ℓ + fk(x)) = #{0 ≤ k ≤ n − 1 : fk(x) = ℓ}.
1305
+ Therefore, ˜Snϕ0(x, −ℓ) = Nn(ℓ)(x).
1306
+
1307
+ EMPIRICAL PROCESS SAMPLED ALONG A STATIONARY PROCESS
1308
+ 17
1309
+ Recurrence/transience. It can be shown that a cocycle (µ, T, f) (over an ergodic dynamical
1310
+ system) is either recurrent or transient. For f with values in Zd, it means that either
1311
+ Skf(x) = 0 infinitely often for a.e. x, or Skf(x) = 0 finitely often for a.e. x. In the latter
1312
+ case, we have limk |Skf(x)| = +∞, µ-a.e.
1313
+ Let Rn(x) = {ℓ ∈ Zd : fk(x) = ℓ for some k ≤ n} be the “range” of the cocycle, i.e., the
1314
+ set of points visited by fk(x) up to time n.
1315
+ In [14] the following result is shown (for the general case of a cocycle with values in a
1316
+ countable group): let G be a countable group and f : X → G. If the cocycle (T, f) is
1317
+ recurrent, then Card(Rn(x)) = o(n) for a.e. x. If it is transient, there exists c > 0 such
1318
+ that Card(Rn(x)) ∼ c n for a.e. x.
1319
+ Using the lemma below, this implies for a.e. x:
1320
+ lim inf
1321
+ n
1322
+ Vn(x)
1323
+ n
1324
+ > 0 in the transient case, Vn(x)
1325
+ n
1326
+ → +∞ in the recurrent case.
1327
+ (23)
1328
+ To show (23) we use the following inequality valid for a general sequence (zk):
1329
+ Lemma 2.2. If A is a non empty subset in Zd, we have:
1330
+ Vn ≥
1331
+ ��n−1
1332
+ k=0 1zk ∈A
1333
+ �2
1334
+ Card(A)
1335
+ .
1336
+ (24)
1337
+ Proof. Cauchy-Schwarz inequality implies:
1338
+ n−1
1339
+
1340
+ k=0
1341
+ 1zk ∈A =
1342
+
1343
+ ℓ∈A
1344
+ n−1
1345
+
1346
+ k=0
1347
+ 1zk =ℓ ≤ (
1348
+
1349
+ ℓ∈A
1350
+ (
1351
+ n−1
1352
+
1353
+ k=0
1354
+ 1zk =ℓ)2)
1355
+ 1
1356
+ 2 (Card(A))
1357
+ 1
1358
+ 2 ≤ V
1359
+ 1
1360
+ 2
1361
+ n (Card(A))
1362
+ 1
1363
+ 2.
1364
+
1365
+ If zk = Skf(x), this show (23). Indeed by taking A = Rn(x) we get
1366
+ Vn(x) ≥
1367
+ n2
1368
+ Card(Rn(x)).
1369
+ (25)
1370
+ Lemma 2.3. The following formulas hold for quantities defined in 2.1.
1371
+ Vn(x)
1372
+ =
1373
+ n + 2
1374
+ n−1
1375
+
1376
+ k=1
1377
+ n−k−1
1378
+
1379
+ j=0
1380
+ (1fk(T jx)=0),
1381
+ (26)
1382
+ =
1383
+ 2[Nn−1(Tx, 0) + Nn−2(T 2x, 0) + ... + N1(T n−1x, 0)] + n, n ≥ 2,
1384
+ (27)
1385
+ Mn(x)
1386
+ =
1387
+ max[Nn(x, 0), 1 +
1388
+ max
1389
+ 1≤k≤n−1 Nn−k(T kx, 0)] ≤ 1 +
1390
+ max
1391
+ 0≤k≤n−1 Nn(T kx, 0),
1392
+ (28)
1393
+ =
1394
+ Mn−1(Tx) + 1ℓ(n−1,Tx)=0 ≤ Mn−1(Tx) + 1.
1395
+ (29)
1396
+ Proof. a) From fk(x) = f(x) + fk−1(Tx), k ≥ 1, it follows
1397
+ Nn(x, ℓ) = Nn−1(Tx, ℓ − f(x)) + 1f(x)=ℓ, n ≥ 1.
1398
+ (30)
1399
+
1400
+ 18
1401
+ GUY COHEN AND JEAN-PIERRE CONZE
1402
+ Therefore we have:
1403
+
1404
+ ℓ∈Zd
1405
+ N2
1406
+ n(x, ℓ) =
1407
+
1408
+ ℓ∈Zd
1409
+ [Nn−1(Tx, ℓ − f(x)) + 1f(x)=ℓ]2 =
1410
+
1411
+ ℓ∈Zd
1412
+ [Nn−1(Tx, ℓ) + 1ℓ=0]2
1413
+ =
1414
+
1415
+ ℓ̸=0
1416
+ [Nn−1(Tx, ℓ)]2 + [Nn−1(Tx, 0) + 1]2 =
1417
+
1418
+
1419
+ [Nn−1(Tx, ℓ)]2 + 2Nn−1(Tx, 0) + 1.
1420
+ Hence the relation
1421
+ Vn(x) = Vn−1(Tx) + 2Nn−1(Tx, 0) + 1.
1422
+ (31)
1423
+ We have V1(x) = 1 and by the previous relation we get by induction (26) and (27).
1424
+ b) For x ∈ X, let ℓ(n, x) (a most visited site by Sk(x) up to time n) be defined by
1425
+ ℓ(n, x)
1426
+ :=
1427
+ 0, if Nn(x, 0) ≥ Nn(x, ℓ), for all ℓ ̸= 0,
1428
+ else
1429
+ :=
1430
+ ℓ1, if ℓ1 is such that Mn(x) = Nn(x, ℓ1) > Nn(x, 0).
1431
+ Let pn(x) ∈ [1, n] be the first visit of Sk(x) to ℓ(n, x) for k = 1, ..., n.
1432
+ By definition
1433
+ Mn(x) = Nn(x, ℓ(n, x)).
1434
+ We have Mn(x) = Nn(x, 0) if ℓ(n, x) = 0, else Mn(x) = Nn−pn(x)(T pn(x)x, 0) + 1, by the
1435
+ cocycle relation Spn(x)+k(x) − Spn(x)(x) = Sk(T pn(x)x). This implies:
1436
+ Mn(x) ≤ max[Nn(x, 0), Nn−pn(x)(T pn(x)x, 0) + 1] ≤ max[Nn(x, 0), max
1437
+ 1≤k≤n Nn−k(T kx, 0) + 1].
1438
+ It follows (noticing that N0(x, 0) = 0):
1439
+ Mn(x) ≤ 1 +
1440
+ max
1441
+ 0≤k≤n−1 Nn−k(T kx, 0) ≤ 1 +
1442
+ max
1443
+ 0≤k≤n−1 Nn(T kx, 0).
1444
+ (32)
1445
+ This shows (28).
1446
+ c) Observe also the following relation: by (30) we have:
1447
+ Mn(x)
1448
+ =
1449
+ sup
1450
+
1451
+ [Nn−1(Tx, ℓ − f(x)) + 1ℓ−f(x)=0] = sup
1452
+
1453
+ [Nn−1(Tx, ℓ) + 1ℓ=0]
1454
+ =
1455
+ max [sup
1456
+ ℓ̸=0
1457
+ Nn−1(Tx, ℓ), Nn−1(Tx, 0) + 1].
1458
+ If ℓ(n − 1, Tx) = 0, then Nn−1(Tx, 0) ≥ supℓ̸=0 Nn−1(Tx, ℓ). If ℓ(n − 1, Tx) ̸= 0, then
1459
+ Nn−1(Tx, 0) < supℓ̸=0 Nn−1(Tx, ℓ). This shows (29).
1460
+ Remark 2.4. By (28), if Kn is a uniform bound over x of Nn(x, 0), then Mn(x) ≤ Kn.
1461
+ Likewise, if Nn(x, 0) ≤ Kn, for a.e. x, then Mn(x) ≤ Kn, for a.e. x.
1462
+ Now we show that the set of x ∈ X such that limn
1463
+ M2
1464
+ n(x)
1465
+ Vn(x) = 0 has measure 0 or 1.
1466
+ Lemma 2.5. It holds: lim
1467
+ n [M2
1468
+ n(x)
1469
+ Vn(x) − M2
1470
+ n(Tx)
1471
+ Vn(Tx) ] = 0.
1472
+ If T is ergodic, there is a constant γ ∈ [0, 1] such that lim sup
1473
+ n
1474
+ M2
1475
+ n(x)
1476
+ Vn(x) = γ for a.e. x.
1477
+
1478
+ EMPIRICAL PROCESS SAMPLED ALONG A STATIONARY PROCESS
1479
+ 19
1480
+ Proof. We use (31) and (29). Putting ε = 1ℓ(n−1,Tx)=0, we have:
1481
+ |M2
1482
+ n(x)
1483
+ Vn(x) − M2
1484
+ n−1(Tx)
1485
+ Vn−1(Tx) | = |M2
1486
+ n−1(Tx) + ε(2Mn−1(Tx) + 1)
1487
+ Vn−1(Tx) + 2Nn−1(Tx, 0) + 1 − M2
1488
+ n−1(Tx)
1489
+ Vn−1(Tx) |
1490
+ = |ε(2Mn−1(Tx) + 1)
1491
+ Vn(x)
1492
+ − (2Nn−1(Tx, 0) + 1)
1493
+ Vn(x)
1494
+ M2
1495
+ n−1(Tx)
1496
+ Vn−1(Tx) |
1497
+ ≤ 2Mn−1(Tx) + 1
1498
+ Vn(x)
1499
+ + 2Nn−1(Tx, 0) + 1
1500
+ Vn(x)
1501
+ ≤ 4Mn−1(Tx)
1502
+ Vn(x)
1503
+ +
1504
+ 2
1505
+ Vn(x) ≤
1506
+ 4
1507
+ √n +
1508
+ 2
1509
+ Vn(x).
1510
+ For the last inequality we use that either Mn(x) ≥ √n, hence Mn(x)
1511
+ Vn(x) ≤
1512
+ 1
1513
+ Mn(x) ≤
1514
+ 1
1515
+ √n, or
1516
+ Mn(x) < √n, hence Mn(x)
1517
+ Vn(x) ≤
1518
+ √n
1519
+ n =
1520
+ 1
1521
+ √n. Therefore,
1522
+ |M2
1523
+ n(x)
1524
+ Vn(x) − M2
1525
+ n−1(Tx)
1526
+ Vn−1(Tx) | → 0.
1527
+ (33)
1528
+ Observe now that
1529
+ Mn(x) = Mn−1(x) + εn, where εn = 0 or = 1,
1530
+ and εn = 1 if and only if there is ℓn such that
1531
+ Mn−1(x) = #{1 ≤ k ≤ n − 1 : fk(x) = ℓn} and fn(x) = ℓn.
1532
+ We have
1533
+ M2
1534
+ n(x) = M2
1535
+ n−1(x) + cn, with cn = εn(1 + 2Mn−1(x))
1536
+ and Nn(x, ℓ) = Nn−1(x, ℓ) + ε′
1537
+ n(ℓ), with ε′
1538
+ n(ℓ) = 1fn(x)=ℓ and �
1539
+ ℓ∈Zd ε′
1540
+ n(ℓ) = 1.
1541
+ Therefore,
1542
+ Vn(x) =
1543
+
1544
+ ℓ∈Zd
1545
+ (Nn−1(x, ℓ) + ε′
1546
+ n(ℓ))2 = Vn−1(x) + 2
1547
+
1548
+ ℓ∈Zd
1549
+ ε′
1550
+ n(ℓ)Nn(x, ℓ)) + 1,
1551
+ 0 ≤ Vn(x) = Vn−1(x) + dn, with dn ≤ 2Mn(x) + 1.
1552
+ |M2
1553
+ n(x)
1554
+ Vn(x) − M2
1555
+ n−1(x)
1556
+ Vn−1(x) | = |M2
1557
+ n−1(x) + cn
1558
+ Vn−1(x) + dn
1559
+ − M2
1560
+ n−1(x)
1561
+ Vn−1(x) | ≤
1562
+ cn
1563
+ Vn(x) +
1564
+ dn
1565
+ Vn(x)
1566
+ M2
1567
+ n−1(x)
1568
+ Vn−1(x)
1569
+ ≤ εn(1 + 2Mn−1(x)
1570
+ Vn(x)
1571
+ + 2Mn(x) + 1
1572
+ Vn(x)
1573
+ M2
1574
+ n−1(x)
1575
+ Vn−1(x) ≤
1576
+ 2
1577
+ Vn(x) + 4Mn(x)
1578
+ Vn(x)
1579
+
1580
+ 2
1581
+ Vn(x) + 4
1582
+ √n → 0.
1583
+ From (33) and the convergence above, it follows limn [ M2
1584
+ n(x)
1585
+ Vn(x) − M2
1586
+ n(Tx)
1587
+ Vn(Tx) ] = 0. By ergodicity
1588
+ of T, this shows the lemma
1589
+
1590
+ Case of a coboundary
1591
+ The case when f is coboundary degenerates. Indeed, the following holds:
1592
+ Proposition 2.6. If f is a coboundary we have:
1593
+ a) lim infn
1594
+ Mn(x)
1595
+ n
1596
+ > 0, for a.e. x;
1597
+ b) there is a constant β > 0 such that
1598
+ 1
1599
+ n2Vn(x) → β, for a.e. x;
1600
+ c) for a.e. x, lim infn
1601
+ M2
1602
+ n(x)
1603
+ Vn(x) > 0.
1604
+
1605
+ 20
1606
+ GUY COHEN AND JEAN-PIERRE CONZE
1607
+ Proof. Suppose that f is coboundary, f = TΦ − Φ. Since f has values in Zd and T is
1608
+ ergodic, for all component Φj of Φ, e2πiΦj is a constant. It follows that Φ has also its
1609
+ values in Zd up to an additive constant and we can assume that Φ has values in Zd.
1610
+ a) We have lim infn
1611
+ Mn(x)
1612
+ n
1613
+ ≥ lim infn
1614
+ 1
1615
+ nNn(x, 0) > 0, for a.e. x. The positivity results the
1616
+ following simple argument:
1617
+ For R ≥ 1, let AR denote the set ∪ℓ:∥ℓ∥≤R(Φ = ℓ). Since, for each ℓ, by Birkhoff’s theorem,
1618
+ limn
1619
+ 1
1620
+ n
1621
+
1622
+ 0≤k≤n−1 1Φ(T kx)=ℓ = µ(Φ = ℓ), it holds
1623
+ 1
1624
+ nNn(x, 0) ≥
1625
+
1626
+ ℓ∈AR
1627
+ 1(Φ=ℓ)(x) 1
1628
+ n
1629
+ n−1
1630
+
1631
+ k=0
1632
+ 1(Φ=ℓ)(T kx) →
1633
+
1634
+ ℓ∈AR
1635
+ 1(Φ=ℓ)(x) µ(Φ = ℓ).
1636
+ Therefore, for every R ≥ 1, lim infn
1637
+ Mn(x)
1638
+ n
1639
+ ≥ lim infn
1640
+ Nn(x,0)
1641
+ n
1642
+ ≥ �
1643
+ ℓ∈AR 1(Φ=ℓ)(x) µ(Φ = ℓ),
1644
+ and the limit when R → ∞ at right is > 0, for a.e. x.
1645
+ b) For Vn, we have:
1646
+ Vn(f, x)
1647
+ =
1648
+
1649
+ ℓ∈Zd
1650
+ N2
1651
+ n(x, ℓ) =
1652
+
1653
+ ℓ∈Zd
1654
+ #{0 ≤ k ≤ n − 1 : Φ(T kx) − Φ(x) = ℓ}2
1655
+ =
1656
+
1657
+ ℓ∈Zd
1658
+ #{0 ≤ k ≤ n − 1 : Φ(T kx) = ℓ}2 =
1659
+
1660
+ ℓ∈Zd
1661
+
1662
+
1663
+ 0≤k≤n−1
1664
+ 1Φ(T kx)=ℓ
1665
+ �2,
1666
+ hence: 1
1667
+ n2
1668
+
1669
+ ℓ∈AR
1670
+
1671
+
1672
+ 0≤k≤n−1
1673
+ 1Φ(T kx)=ℓ
1674
+ �2 =
1675
+
1676
+ ℓ∈AR
1677
+ �1
1678
+ n
1679
+
1680
+ 0≤k≤n−1
1681
+ 1Φ(T kx)=ℓ
1682
+ �2 →
1683
+
1684
+ ℓ∈AR
1685
+ (µ(Φ = ℓ))2.
1686
+ This implies, for every R ≥ 1,
1687
+ lim inf
1688
+ n
1689
+ 1
1690
+ n2
1691
+
1692
+ ℓ∈Zd
1693
+
1694
+
1695
+ 0≤k≤n−1
1696
+ 1Φ(T kx)=ℓ
1697
+ �2 ≥ lim
1698
+ n
1699
+ 1
1700
+ n2
1701
+
1702
+ ℓ∈AR
1703
+
1704
+
1705
+ 0≤k≤n−1
1706
+ 1Φ(T kx)=ℓ
1707
+ �2 =
1708
+
1709
+ ℓ∈AR
1710
+ (µ(Φ = ℓ))2.
1711
+ It follows: lim inf
1712
+ n
1713
+ 1
1714
+ n2
1715
+
1716
+ ℓ∈Zd
1717
+
1718
+
1719
+ 0≤k≤n−1
1720
+ 1Φ(T kx)=ℓ
1721
+ �2 ≥
1722
+
1723
+ ℓ∈Zd
1724
+ µ(Φ = ℓ)2. For the complementary
1725
+ of AR, it holds:
1726
+
1727
+ ℓ:∥ℓ∥>R
1728
+ � �
1729
+ 0≤k<n
1730
+ 1Φ(T kx)=ℓ
1731
+ �2 =
1732
+
1733
+ 0≤j,k<n
1734
+
1735
+ ℓ:∥ℓ∥>R
1736
+ 1Φ(T jx)=ℓ 1Φ(T kx)=ℓ
1737
+
1738
+
1739
+ 0≤j,k<n
1740
+ (
1741
+
1742
+ ℓ:∥ℓ∥>R
1743
+ 1Φ(T jx)=ℓ) (
1744
+
1745
+ ℓ:∥ℓ∥>R
1746
+ 1Φ(T kx)=ℓ) ≤
1747
+
1748
+ 0≤j,k<n
1749
+ 1Ac
1750
+ R(T jx)1Ac
1751
+ R(T kx) =
1752
+ � �
1753
+ 0≤k<n
1754
+ 1Ac
1755
+ R(T kx)
1756
+ �2.
1757
+ It follows for the upper bound:
1758
+ lim sup
1759
+ n
1760
+ 1
1761
+ n2
1762
+
1763
+ ℓ∈Zd
1764
+
1765
+
1766
+ 0≤k≤n−1
1767
+ 1Φ(T kx)=ℓ
1768
+ �2
1769
+ ≤ lim
1770
+ n
1771
+
1772
+ ℓ∈AR
1773
+ �1
1774
+ n
1775
+
1776
+ 0≤k≤n−1
1777
+ 1Φ(T kx)=ℓ)2 + lim
1778
+ n
1779
+ �1
1780
+ n
1781
+
1782
+ 0≤k<n
1783
+ 1Ac
1784
+ R(T kx)
1785
+ �2
1786
+ =
1787
+
1788
+ ℓ∈AR
1789
+ (µ(Φ = ℓ))2 + µ(Ac
1790
+ R)2
1791
+
1792
+ R→∞
1793
+
1794
+ ℓ∈Zd
1795
+ µ(Φ = ℓ)2.
1796
+
1797
+ EMPIRICAL PROCESS SAMPLED ALONG A STATIONARY PROCESS
1798
+ 21
1799
+ This shows b) with β = �
1800
+ ℓ∈Zd µ(Φ = ℓ)2 > 0.
1801
+ c) Follows from a) and b).
1802
+
1803
+ Proposition 2.7. There is a constant β ≥ 0 such that, for a.e. x, limn
1804
+ Vn(x)
1805
+ n2
1806
+ = β. We
1807
+ have β > 0 if and only if the cocycle (T, f) is a coboundary.
1808
+ Proof. The case of a coboundary follows from Proposition 2.6.
1809
+ Suppose now that the cocycle is not a coboundary. From (26), we can write
1810
+ Vn(x)
1811
+ n2
1812
+ = 1
1813
+ n + 2
1814
+ n
1815
+ n−1
1816
+
1817
+ k=1
1818
+ 1
1819
+ n
1820
+ n−k−1
1821
+
1822
+ j=0
1823
+ (1fk(T jx)=0)
1824
+ ≤ 1
1825
+ n + 2
1826
+ n
1827
+ n−1
1828
+
1829
+ k=1
1830
+ 1
1831
+ n
1832
+ n−1
1833
+
1834
+ j=0
1835
+ (1fk(T jx)=0) = 1
1836
+ n + 2
1837
+ n
1838
+ n−1
1839
+
1840
+ j=1
1841
+ Nn(T jx, 0)
1842
+ n
1843
+ .
1844
+ We will show that 1
1845
+ n
1846
+ n−1
1847
+
1848
+ j=0
1849
+ Nn(T jx, 0)
1850
+ n
1851
+ tend to 0 a.e.
1852
+ By the ergodic theorem of Dunford and Schwarz (in the space of infinite measure X × Z)
1853
+ applied to ˜Tf and φ0 = 1X×{0}, which is bounded and in Lp(X × Z), for every p ≥ 1, we
1854
+ get a function ˜φ0(x) which is ˜Tf-invariant and in L1(X × Z) and
1855
+ lim
1856
+ n
1857
+ Nn(x, 0)
1858
+ n
1859
+ = ˜φ0(x), a.s.
1860
+ As f is not a coboundary, ˜φ0 is zero a.e. (cf. for instance [12].)
1861
+ Observe that ∥ supn≥L
1862
+ Nn(x,0)
1863
+ n
1864
+ ∥2 → 0, as L goes to +∞. Indeed, for every 0 < ε ≤ 1,
1865
+ letting Aε,L := {x : supn≥L
1866
+ Nn(x,0)
1867
+ n
1868
+ > ε}, we have µ(Aε,L) → 0, when L → +∞. Since
1869
+ Nn(x,0)
1870
+ n
1871
+ ≤ 1, it follows, for L big enough:
1872
+ � �
1873
+ sup
1874
+ n≥L
1875
+ (Nn(x, 0)
1876
+ n
1877
+ )
1878
+ �2 dµ ≤ ε2 + µ(Aε,L) ≤ 2ε.
1879
+ We put Λn(x) := sup
1880
+ s≥n
1881
+ Ns(x, 0)
1882
+ s
1883
+ . By the previous observation, we have limn ∥Λn∥2 = 0.
1884
+ Let us consider the following maximal function for the action of T:
1885
+ ˜Λn(x) = sup
1886
+ 1≤r<∞
1887
+ 1
1888
+ r
1889
+ r−1
1890
+
1891
+ j=0
1892
+ Λn(T jx) = sup
1893
+ 1≤r<∞
1894
+ 1
1895
+ r
1896
+ r−1
1897
+
1898
+ j=0
1899
+ sup
1900
+ s≥n
1901
+ Ns(T jx, 0)
1902
+ s
1903
+ .
1904
+ (34)
1905
+ From a classical maximal inequality, we have ∥˜Λn∥2 ≤ 2∥Λn∥2 → 0.
1906
+ Observe also that, from the definition of ˜Λn in (34), the following inequalities hold:
1907
+ ˜Λn(x) ≥ sup
1908
+ r,s≥n
1909
+ 1
1910
+ r
1911
+ r−1
1912
+
1913
+ j=0
1914
+ Ns(T jx, 0)
1915
+ s
1916
+ ≥ 1
1917
+ n
1918
+ n−1
1919
+
1920
+ j=0
1921
+ Nn(T jx, 0)
1922
+ n
1923
+ .
1924
+
1925
+ 22
1926
+ GUY COHEN AND JEAN-PIERRE CONZE
1927
+ The sequence sup
1928
+ r,s≥n
1929
+ 1
1930
+ r
1931
+ r−1
1932
+
1933
+ j=0
1934
+ Ns(T jx, 0)
1935
+ s
1936
+ is non negative and decreasing. Since ∥˜Λn∥2 → 0,
1937
+ the L2-norm of its limit in (X, µ) is zero. The result follows.
1938
+
1939
+ Remark 2.8. (see also section 4 and [25])
1940
+ Let (Uℓ)ℓ∈Zd be a r.f. of square integrable r.v.’s on a probability space (Ω, F, P) stationary
1941
+ in the weak sense and such that �
1942
+ ℓ |⟨Uℓ, U0⟩| < +∞. By (4) and Proposition 2.7, if f is
1943
+ not a coboundary, it holds
1944
+ 1
1945
+ n2 ∥
1946
+ n−1
1947
+
1948
+ k=0
1949
+ Ufk(x)∥2
1950
+ 2 ≤ C Vn(x)
1951
+ n2
1952
+ → 0, for µ-a.e. x.
1953
+ Another result of norm convergence whose proof is like the proof of Proposition 1.2 is the
1954
+ following. Suppose that the r.f. is stationary. Let ϕ be an observable on the dynamical
1955
+ system (Ω, P, θ) with a spectral measure νϕ. We have:
1956
+
1957
+
1958
+ |
1959
+ n−1
1960
+
1961
+ j=0
1962
+ ϕ ◦ θzj|2 dP =
1963
+
1964
+ T1 |
1965
+ n−1
1966
+
1967
+ j=0
1968
+ e2πizjt|2 dνϕ(t).
1969
+ Assume that νϕ is absolutely continuous with respect to the Lebesgue measure on the
1970
+ torus, and let ρ ∈ L1(dt) such that dνϕ(t) = ρ(t)dt. For ε > 0 there is M such that
1971
+
1972
+ ρ>M ρ dt < ε. We have
1973
+ 1
1974
+ n2
1975
+
1976
+ Td |
1977
+ n−1
1978
+
1979
+ j=0
1980
+ e2πi⟨zj,t⟩|2 dνϕ(t)
1981
+
1982
+ M
1983
+ n2
1984
+
1985
+ Td |
1986
+ n−1
1987
+
1988
+ j=0
1989
+ e2πi⟨zj,t⟩|2 dt +
1990
+
1991
+ ρ>M
1992
+ ρ dt ≤ M Vn
1993
+ n2 + ε.
1994
+ This shows that Vn
1995
+ n2 → 0 implies 1
1996
+ n2
1997
+
1998
+
1999
+ |
2000
+ n−1
2001
+
2002
+ j=0
2003
+ ϕ ◦ θzj|2 dP → 0. This is satisfied by every
2004
+ ϕ ∈ L2(P), if the dynamical system has a Lebesgue spectrum.
2005
+ In particular, taking zk = fk(x), by Proposition 2.7, if f is not a coboundary, it holds
2006
+ 1
2007
+ n2
2008
+
2009
+
2010
+ |
2011
+ n−1
2012
+
2013
+ j=0
2014
+ ϕ(θfj(x)ω)|2 dP(ω) → 0, for a.e. x.
2015
+ When the spectral density is square integrable, as we have seen in Proposition 1.2, the
2016
+ pointwise convergence holds under quantitative hypothesis on the sequence (zk).
2017
+ 2.2. Non centered cocycles.
2018
+ In an ergodic dynamical system (X, µ, T), if f : X → R is an integrable function with
2019
+ µ(f) > 0, by the ergodic theorem for the ergodic sums ST
2020
+ n f(x) = �n−1
2021
+ k=0 f(T kx), it holds
2022
+ for a.e. x: limn
2023
+ 1
2024
+ nSnf(x) > 0 and therefore limn ST
2025
+ n f(x) = +∞. If f has values in Z, as
2026
+
2027
+ EMPIRICAL PROCESS SAMPLED ALONG A STATIONARY PROCESS
2028
+ 23
2029
+ the process ST
2030
+ n f(x) visits finitely often each site, one can think there is a chance that the
2031
+ following condition is satisfied:
2032
+ lim
2033
+ n
2034
+ M2
2035
+ n(T, f, x)
2036
+ Vn(T, f, x) = 0.
2037
+ (35)
2038
+ A case where (35) is satisfied is the following: let X be a topological compact space,
2039
+ T : X → X a continuous map, which is uniquely ergodic with µ as unique invariant
2040
+ measure. Let f : X → Z be an integrable function such that µ(f) ̸= 0. Assume f to be
2041
+ Riemann-integrable (i.e. such that, for every ε > 0, there are two continuous functions
2042
+ ψ0, ψ1 with ψ0 ≤ f ≤ ψ1 and µ(ψ1 − ψ0) ≤ ε).
2043
+ Then, the ergodic means of f converge uniformly, and this implies the existence of N
2044
+ such that 1
2045
+ n|ST
2046
+ n f(x)| ≥ 1
2047
+ 2|µ(f)| > 0 for n ≥ N and every x. It follows that the number of
2048
+ visits of ST
2049
+ n f(x) to 0 is ≤ N, for every x. By remark 2.4, Mn(x) ≤ N, for every x, and a
2050
+ fortiori (35) is satisfied.
2051
+ Nevertheless, we will see that (35) may fail in non uniform cases: there are dynamical
2052
+ systems and sets B of positive measure such that, for f = 1B,
2053
+ lim sup
2054
+ n
2055
+ M2
2056
+ n(T, f, x)
2057
+ Vn(T, f, x) = 1.
2058
+ (36)
2059
+ 2.3. Counterexamples.
2060
+ In this subsection, we construct a transient counterexample, and also a recurrent coun-
2061
+ terexample with a function f of null integral such that (36) is satisfied.
2062
+ To construct these counterexamples, we start by considering a general ergodic dynamical
2063
+ system (X, µ, T) and a measurable set B ⊂ X of positive measure. Let TB be the induced
2064
+ map on B, R(x) = RB(x) = inf{k ≥ 1 : T kx ∈ B} the first return time of x in B and
2065
+ Rn(x) = RB
2066
+ n (x) := �n−1
2067
+ k=0 R(T k
2068
+ Bx) the n-th return time of x in B.
2069
+ We take x ∈ B. If f is a function such that f = 0 outside B, the position of the sums
2070
+ up to time Rn−1(x) are the positions of the ergodic sums STB
2071
+ n f for the induced map up
2072
+ to time n, that is:
2073
+ {f(x), f(x) + f(TBx), ..., f(x) + f(TBx) + ... + f(T n−1
2074
+ B
2075
+ x)}.
2076
+ For a site ℓ, the number of visits up to time Rn−1(x) of the ergodic sums for T is
2077
+ NRn−1(x)(x, ℓ) =
2078
+ n−1
2079
+
2080
+ k=0
2081
+ RB(T k
2082
+ Bx) 1STB
2083
+ k
2084
+ f(x)=ℓ
2085
+ and therefore
2086
+ VRn−1(x)(T, x) =
2087
+
2088
+
2089
+ [
2090
+ n−1
2091
+
2092
+ k=0
2093
+ RB(T k
2094
+ Bx) 1S
2095
+ TB
2096
+ k
2097
+ f(x)=ℓ]2.
2098
+ (37)
2099
+
2100
+ 24
2101
+ GUY COHEN AND JEAN-PIERRE CONZE
2102
+ Case f = 1B. Clearly �n−1
2103
+ k=0 f(T k
2104
+ Bx) = n. For the map T, the ergodic sums of f are
2105
+ incremented by 1 when and only when the iterates T jx visit the set B. Otherwise, they
2106
+ stay fixed. The times of visits in B, for x ∈ B, are 0, R(x), R(x) + R(TBx), .... We have:
2107
+ for x ∈ B,
2108
+ Rn−1(x)+t
2109
+
2110
+ j=0
2111
+ f(T jx) = n, for t = 0, ..., Rn(x) − Rn−1(x) − 1.
2112
+ For Nn(T, x, ℓ) = Nn(T, f, x, ℓ), it holds:
2113
+ Nn(T, x, ℓ)
2114
+ =
2115
+ 0, if n < Rℓ(x),
2116
+ =
2117
+ t, if n = Rℓ(x) + t, with 0 ≤ t < Rℓ+1(x) − Rℓ(x),
2118
+ =
2119
+ Rℓ+1(x) − Rℓ(x) = R(T ℓ
2120
+ Bx), if n ≥ Rℓ+1(x).
2121
+ For L ≥ 1, we have for the time preceding the L-th return to the basis for f = 1B:
2122
+ MRL(x)−1(T, f, x) = max
2123
+ ℓ≤L R(T ℓ
2124
+ Bx), VRL(x)−1(T, f, x) =
2125
+
2126
+ ℓ≤L
2127
+ R2(T ℓ
2128
+ Bx).
2129
+ (38)
2130
+ In order to compute an explicit example, it is easier to start from a given map S and
2131
+ construct a special flow T over this map.
2132
+ Let ϕ : X → N be integrable and ≥ 1. The (discrete time) special map T = Tϕ is defined
2133
+ on ˜X := {(x, k), x ∈ X, k = 0, ..., ϕ(x) − 1} ⊂ X × R,
2134
+ by T(x, k) := (x, k + 1), if 0 ≤ k < ϕ(x) − 1, := (Sx, 0), if k = ϕ(x) − 1.
2135
+ Let ˜µ be the probability measure defined on ˜X by ˜µ(A × {k}) = µ(ϕ)−1 µ(A), for k ≥ 0
2136
+ and A ⊂ {x : k ≤ ϕ(x) − 1}. It is Tϕ-invariant. The space X can be identified with the
2137
+ subset B = {(x, 0), x ∈ X} of ˜X with normalized measure. The set B is the basis and
2138
+ ϕ − 1 the roof function of the special map Tϕ.
2139
+ As for the map S we will take an ergodic rotation, the special flow Tϕ will be also ergodic
2140
+ for the measure ˜µ on ˜X.
2141
+ Observe that the recurrence time R(x) = RB(x) for the special flow in the basis B is ϕ(x)
2142
+ and the n-th return time of x in B is Rn(x) = RB
2143
+ n (x) = �n−1
2144
+ k=0 ϕ(Skx).
2145
+ For S, let us take a rotation S = Sα on X = T/Z by α mod 1, where α is irrational. We
2146
+ denote by qn the denominators of α. We will construct the measure preserving transfor-
2147
+ mation which is the special flow (with discrete time) over Sα with a roof function ϕ such
2148
+ that, for cocycle generated by 1B in the system ( ˜X, ˜µ, T), lim inf
2149
+ n
2150
+ Vn(T, x)
2151
+ M2
2152
+ n(T, x) = 1.
2153
+ We will use the next lemma with p = pn, q = qn, the numerators and denominators of α.
2154
+ Lemma 2.9. Let p, q ≥ 1, (p, q) = 1, be such that |α − p/q| < 1/q2. For every x, there is
2155
+ a value 0 ≤ i < q such that x + iα mod 1 ∈ [0, 2/q].
2156
+ More generally, for every interval I of length 2/q, for every x, there is a value 0 ≤ i < q
2157
+ such that x + iα mod 1 ∈ I.
2158
+
2159
+ EMPIRICAL PROCESS SAMPLED ALONG A STATIONARY PROCESS
2160
+ 25
2161
+ Proof. It is well known that there is exactly one value of jα mod 1, for 0 ≤ j < q, in each
2162
+ interval [ ℓ
2163
+ q, ℓ+1
2164
+ q [, ℓ = 0, ..., q − 1. Let us recall a proof. For j = 0, jα ∈ [0, 1/q[. The map
2165
+ j → ℓj = jp mod q, which is injective, is a permutation of the set {1, ..., q − 1} onto itself.
2166
+ We have α = p/q + γ, with |γ| < 1/q2.
2167
+ Assuming γ > 0, it follows: jα mod 1 ∈ [ ℓj
2168
+ q , ℓj
2169
+ q + j
2170
+ q2] ⊂ [ ℓj
2171
+ q , ℓj+1
2172
+ q [, for j = 1, ..., q − 1. The
2173
+ case γ < 0 is treated the same way.
2174
+ Now let us prove the first point. Let x be in [0, 1[. There is i0 ∈ {0, ..., q − 1} such that
2175
+ x = i0
2176
+ q + θ, with 0 ≤ θ < 1/q. By the claim, there is i ∈ [0, q[ such that iα mod 1 ∈
2177
+ [ q−i0
2178
+ q , q−i0+1
2179
+ q
2180
+ ]. Hence x + iα mod 1 ∈ [θ, 1
2181
+ q + θ] ⊂ [0, 2
2182
+ q].
2183
+
2184
+ Let (λn) be an increasing sequence of positive integers which will be subjected below to
2185
+ growth conditions. First we assume that it satisfies the condition:
2186
+ qλn+1 ≥ 3qλn, ∀n ≥ 1.
2187
+ (39)
2188
+ Denote by Jn the interval Jn = [
2189
+ 3
2190
+ qλn+1
2191
+ , 3
2192
+ qλn
2193
+ ]. For the roof function, we take, with εn =
2194
+ 1
2195
+ n2,
2196
+ ϕ = 1 +
2197
+
2198
+ n≥1
2199
+ ⌊εnqλn⌋1Jn.
2200
+ The function ϕ is integrable:
2201
+
2202
+ ϕdµ ≤ 1 + 3 �
2203
+ n εn. Observe also that, by (39), the length
2204
+ of Jn is > 2/qλn and that (εnqλn) is not decreasing for n ≥ 2 .
2205
+ Let x be in the basis. By construction, the orbit of x under the iteration of Tϕ is that
2206
+ of the rotation Sα until it enters the set Bc, complementary of B at some time. Then it
2207
+ stays in this set, until it reaches the roof and comes down to the basis. Then the dynamic
2208
+ is that of the rotation, until again Sj
2209
+ αx falls in the set ϕ > 1 and so on.
2210
+ Let Wn(x) be the first visit of Sjx in Jn. By lemma 2.9, we have Wn(x) ≤ qλn.
2211
+ Now we choose f to get a transient counterexample and a recurrent one.
2212
+ Transient counterexample.
2213
+ We take f = 1 on the basis and 0 outside.
2214
+ The sequence (λn) is taken such that
2215
+ qλn ≥ n (qλn−1)2, n ≥ 1.
2216
+ (40)
2217
+ By (38), we obtain (recall that now TB, the induced map in the basis B, is the rotation
2218
+ S = Sα and R(T j
2219
+ Bx) = ϕ(Sj
2220
+ αx)):
2221
+ MRWn(x)(x)−1(T, x)
2222
+ =
2223
+ max
2224
+ j≤Wn(x) ϕ(Sjx),
2225
+ (41)
2226
+ VRWn(x)(x)−1(T, x)
2227
+ =
2228
+
2229
+ j≤Wn(x)
2230
+ ϕ2(Sjx).
2231
+ (42)
2232
+ In the above formula, ϕ(Sjx) is either 1 or (for some k ≤ n−1) 1+⌊εkqλk⌋ ≤ 1+εn−1qλn−1,
2233
+ excepted for the last term which is 1 + ⌊εnqλn⌋.
2234
+
2235
+ 26
2236
+ GUY COHEN AND JEAN-PIERRE CONZE
2237
+ The maximum in (41) (given by the first visit to Jn) is 1 + ⌊εnqλn⌋ ≥ εnqλn. As we have
2238
+ seen, this first visit for the iterates Sjx occurs at a time ≤ qλn. It follows by (40):
2239
+ VRWn(x)(x)−1(T, x)
2240
+ M2
2241
+ RWn(x)(x)−1(T, x)
2242
+
2243
+ qλn
2244
+ (εn−1 qλn−1)2
2245
+ (εn qλn − 1)2 + 1 ≤ (εn−1
2246
+ εn
2247
+ )2 (qλn−1)2
2248
+ qλn
2249
+ 1
2250
+ (1 − (εn qλn)−1)2 + 1
2251
+
2252
+ 2 (
2253
+ n
2254
+ n − 1)2 (qλn−1)2
2255
+ qλn
2256
+ + 1 ≤ 4
2257
+ n + 1, for n big enough.
2258
+ This shows: lim sup
2259
+ n
2260
+ M2
2261
+ n(T, f, x)
2262
+ Vn(T, f, x) = 1. The result is proved for x in the basis B, but is
2263
+ satisfied for a.e. x ∈ ˜X, since lim sup
2264
+ n
2265
+ M2
2266
+ n(T, f, x)
2267
+ Vn(T, f, x) is a.e. constant by ergodicity of the
2268
+ special flow and Lemma 2.5.
2269
+ Remark that Skf(x) → +∞ for every point x.The sequence (Nn(x, 0)) is bounded for
2270
+ every x, but not uniformly in x.
2271
+ Recurrent counterexample.
2272
+ In order to obtain a recurrent counterexample, we now use a special cocycle over a rotation
2273
+ by α (with α bpq) studied later (see Subsection 3.3).
2274
+ Let f defined on the basis by f(x) = 1[0, 1
2275
+ 2[(x) − 1[ 1
2276
+ 2,1[(x) and 0 outside, and Skf(x) =
2277
+ �k−1
2278
+ i=0 f(x + iα mod 1). By (37), we have
2279
+ VRn−1(x)(T, f, x)
2280
+ =
2281
+
2282
+
2283
+ [
2284
+ n−1
2285
+
2286
+ k=0
2287
+ ϕ(x + kα) 1Skf(x)=ℓ]2
2288
+ =
2289
+
2290
+
2291
+ [
2292
+ n−1
2293
+
2294
+ k=0
2295
+ (1 +
2296
+
2297
+ j
2298
+ εjqλj1Jj(x + kα)) 1Skf(x)=ℓ]2.
2299
+ Observe that for a constant C, 1 + �
2300
+ j<n εjqλj1Jj(x + kα)) ≤ Cqλn−1. Using the bounds
2301
+ for the special function f and α bpq, this implies:
2302
+ VRWn−1(x)(T, f, x)
2303
+
2304
+
2305
+
2306
+ [
2307
+ qλn
2308
+
2309
+ k=0
2310
+ (1 +
2311
+
2312
+ j<n
2313
+ εjqλj1Jj(x + kα)) 1Skf(x)=ℓ]2
2314
+
2315
+ C2 �
2316
+
2317
+ [
2318
+ qλn
2319
+
2320
+ k=0
2321
+ qλn−1 1Skf(x)=ℓ]2
2322
+
2323
+ C2q2
2324
+ λn−1
2325
+
2326
+
2327
+ [
2328
+ qλn
2329
+
2330
+ k=0
2331
+ 1Skf(x)=ℓ]2 ≤ C2q2
2332
+ λn−1 q2
2333
+ λn/
2334
+
2335
+ log qλn.
2336
+ Put Ln = SWn(x)f(x) for the site visited by the cocycle when Sjx enters Jn. We have
2337
+ MRWn(x)(T, f, x) ≥ NRWn(x)(T, f, x, Ln(x)) = εnqλn.
2338
+
2339
+ EMPIRICAL PROCESS SAMPLED ALONG A STATIONARY PROCESS
2340
+ 27
2341
+ Hence:
2342
+ 0 ≤
2343
+ VRWn(x)(T, f, x)
2344
+ M2
2345
+ RWn(x)(T, f, x) − 1
2346
+
2347
+ C2 q2
2348
+ λn−1 q2
2349
+ λn
2350
+
2351
+ log qλn
2352
+ 1
2353
+ ε2nq2
2354
+ λn
2355
+ = C2 n4q2
2356
+ λn−1
2357
+
2358
+ log qλn
2359
+ .
2360
+ Now, we choose a growth condition on (λn) stronger than (40), such that the above bound
2361
+ tends to 0.
2362
+ This shows the result for x in the basis, hence on the whole space using again Lemma 2.5.
2363
+ 3. Examples
2364
+ In general, for a dynamical system (X, µ, T) and a cocycle (T, f), it seems difficult to get
2365
+ a precise estimate of the quantities Nn(x, ℓ), Mn(x), Vn(x). In this section we present two
2366
+ types of cocycles for which this is possible, first in the case of strong stochastic properties,
2367
+ in particular for the classical case of random walks, then when they are generated by step
2368
+ functions over rotations.
2369
+ 3.1. Random walks.
2370
+ 1-dimensional cocycle satisfying the LIL.
2371
+ We start be a remark on the the law of iterated logarithm (LIL). Suppose that (T, f) is
2372
+ a 1-dimensional cocycle which satisfies the LIL. Then for a constant c1 > 0, for a.e. x,
2373
+ the inequality |fn(x)| > c1 (n ln ln n)
2374
+ 1
2375
+ 2 is satisfied only for finitely many values of n. This
2376
+ implies that, for a.e. x, there is N(x) such that |fn(x)| ≤ (c1 n ln ln n)
2377
+ 1
2378
+ 2, for n ≥ N(x);
2379
+ so that, for N(x) ≤ k < n, |fk(x)| ≤ (c1 k ln ln k)
2380
+ 1
2381
+ 2 ≤ (c1 n ln ln n)
2382
+ 1
2383
+ 2.
2384
+ Therefore we have Card(Rn(x)) ≤ c2(x) (n ln ln n)
2385
+ 1
2386
+ 2, with an a.e. finite constant c2(x).
2387
+ In dimension 1, by (25), we get that for a.e. x there is c(x) > 0 such that
2388
+ Vn(x) ≥ C(x) n
2389
+ 3
2390
+ 2 (ln ln n)− 1
2391
+ 2.
2392
+ The case where a LIL is valid includes the case of a 1-dimensional r.w. centered with
2393
+ finite variance, but also the class of cocycles for which a martingale method can be used.
2394
+ Random walks.
2395
+ Now we consider sequences given by a random walk. For random walks in Zd, the quan-
2396
+ tities Vn(x), Mn(x) have been studied in many papers since the 50’s. Mn(x) is called
2397
+ “maximal multiplicity of points on a random walk” by Erd¨os and Taylor [16]. Below, we
2398
+ give a brief survey of several results for r.w.s. First we recall some definitions.
2399
+ Let (ζi)i≥0 be a sequence of i.i.d. random vectors on a probability space (X, µ) with
2400
+ values in Zd and common probability distribution ν. The associated random walk (r.w.)
2401
+ Z = (Zn) in Zd starting from 0 is defined by Z0 := 0,
2402
+ Zn := ζ0 + ... + ζn−1, n ≥ 1.
2403
+
2404
+ 28
2405
+ GUY COHEN AND JEAN-PIERRE CONZE
2406
+ A r.w. can be seen as a special case of cocycle. Indeed, the r.v.’s ζi can be viewed as the
2407
+ coordinate maps on (X, µ) obtained as (Zd)Z equipped with the product measure ν⊗Z
2408
+ and with the shift T acting on the coordinates. We have ζi = ζ0 ◦ T i and the cocycle
2409
+ relation Zn+n′ = Zn + Zn′ ◦ T n, ∀n, n′ ≥ 0.
2410
+ Let S := {ℓ ∈ Zd : P(ζ0 = ℓ) > 0} be the support of ν and L the sub-lattice of Zd
2411
+ generated by S. Let D be the sub-lattice of Zd generated by {ℓ − ℓ′, ℓ, ℓ′ ∈ S}.
2412
+ For simplicity (and without loss of generality) in what follows we will assume that the
2413
+ random walk Z is aperiodic (L = Zd). We exclude also the “deterministic” case (i.e.,
2414
+ when P(ζ0 = ℓ) = 1 for some ℓ ∈ Zd) in dimension 1 (the deterministic case in higher
2415
+ dimension is excluded by aperiodicity).
2416
+ Notice that all the pointwise limits or bounds mentioned now for random walks are
2417
+ a.s.
2418
+ statements.
2419
+ These bounds will show that conditions (22), (21) are satisfied by
2420
+ Vn(x), Mn(x) a.s. for on random walks under mild assumptions.
2421
+ Recurrence/transience.
2422
+ Recall that a r.w. Z = (Zn) is recurrent if �∞
2423
+ n=1 µ(Zn = 0) = +∞ and otherwise transient.
2424
+ Recurrence occurs if and only if µ(Zn = 0 infinitely often) = 1, and transience if and only
2425
+ if µ(Zn = 0 infinitely often) = 0 (cf. [8], [9]).
2426
+ For an aperiodic r.w. Z in dimension d with a moment of order 2 (for d = 1, a moment
2427
+ of order 1 suffices), for d = 1, 2, Z is recurrent if and only if it is centered. For d ≥ 3, it
2428
+ is always transient.
2429
+ Variance.
2430
+ Let (Xℓ, ℓ ∈ Zd) be a stationary centered r.f. with summable correlation and spectral
2431
+ density ρ. We have
2432
+ 1
2433
+ n∥
2434
+ n−1
2435
+
2436
+ k=1
2437
+ XZk(x)∥2
2438
+ 2 =
2439
+
2440
+ Td
2441
+ 1
2442
+ n|
2443
+ n−1
2444
+
2445
+ k=0
2446
+ e2πi⟨Zk(x),t⟩|2 dt =
2447
+
2448
+ Td
2449
+ 1
2450
+ nKn(x, t) ρ(t) dt,
2451
+ where, using (17) with zk = Zk(x) and Zk(x) − Zj(x) = Zk(T jx), 1
2452
+ nKn reads
2453
+ 1
2454
+ nKn(x, t)
2455
+ = 1 + 2
2456
+
2457
+
2458
+ �n−1
2459
+
2460
+ k=1
2461
+ 1
2462
+ n
2463
+ n−k−1
2464
+
2465
+ j=0
2466
+ 1Zk(T jx)=ℓ
2467
+
2468
+ e2πi⟨ℓ,t⟩.
2469
+ (43)
2470
+ As already recalled, the existence of the asymptotic variance limn Vn(x)−1 �
2471
+ | �n−1
2472
+ k=0 XZk(x)|2dµ
2473
+ has been shown in [11] and the positivity of the limit has been discussed.
2474
+ The asymptotic variance may be zero in case a coboundary condition is satisfied. An
2475
+ interesting situation is that of the sums along a transient (non deterministic) r.w., where
2476
+ the asymptotic variance is always > 0. Below we will recall briefly a proof.
2477
+
2478
+ EMPIRICAL PROCESS SAMPLED ALONG A STATIONARY PROCESS
2479
+ 29
2480
+ Transient case
2481
+ For a transient random walk we use the following general result (Lemma 3.14 in [11]):
2482
+ Lemma 3.1. If (X, µ, T) is an ergodic dynamical system and (ϕk)k≥1 a sequence of func-
+ tions in L1(X, µ) such that \sum_{k≥1} ∥ϕ_k∥_1 < ∞, then
+ \lim_n \frac{1}{n} \sum_{k=1}^{n-1} \sum_{j=0}^{n-k-1} ϕ_k(T^j x) = \sum_{k=1}^{\infty} \int ϕ_k \, dµ, for a.e. x.   (44)
+ Therefore, for a transient random walk, we obtain for µ-a.e. x:
+ \lim_n \frac{V_n(x)}{n} = 1 + 2 \lim_n \sum_{k=1}^{n-1} \frac{1}{n} \sum_{j=0}^{n-k-1} 1_{Z_k(T^j x)=0} = 1 + 2 \sum_{k=1}^{\infty} µ(Z_k = 0) < +∞
+ and the normalisation for the variance is by n up to a finite constant factor.
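+ As a numerical illustration of this normalisation (our own sketch, with the simple walk on Z^3 taken as an assumed example of a transient r.w.), one can compare V_n(x)/n along a single trajectory with a Monte Carlo estimate of 1 + 2 Σ_{k≥1} µ(Z_k = 0); both stabilise around the same constant.
+ ```python
+ import numpy as np
+ from collections import Counter
+
+ rng = np.random.default_rng(1)
+
+ def srw(n, d=3):
+     """Return Z_0 = 0, Z_1, ..., Z_n for the simple random walk on Z^d."""
+     axes = rng.integers(0, d, size=n)
+     signs = rng.choice([-1, 1], size=n)
+     steps = np.zeros((n, d), dtype=int)
+     steps[np.arange(n), axes] = signs
+     return np.vstack([np.zeros((1, d), dtype=int), np.cumsum(steps, axis=0)])
+
+ n, d, trials = 20_000, 3, 200
+
+ # pathwise quantity V_n / n along one trajectory
+ counts = Counter(map(tuple, srw(n, d)))
+ ratio = sum(c * c for c in counts.values()) / n
+
+ # Monte Carlo estimate of 1 + 2 * sum_{k>=1} mu(Z_k = 0)
+ returns = np.mean([np.sum(np.all(srw(n, d)[1:] == 0, axis=1)) for _ in range(trials)])
+ print(ratio, 1 + 2 * returns)
+ ```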
2523
+ Variance in the non deterministic transient case.
2524
+ Now we recall the proof of the positivity of the asymptotic variance.
+ Let Ψ(t) = E[e^{2πi⟨ζ_0,t⟩}], t ∈ T^d. Observe that Ψ(t) ̸= 1 for t ̸= 0 in T^d when the r.w. is
+ aperiodic, and |Ψ(t)| < 1 for t ̸∈ Γ_1, where Γ_1 is the closed subgroup {t ∈ T^d : e^{2πi⟨r,t⟩} = 1, ∀r ∈ D}.
+ We put, for t ∈ T^d \setminus \{0\} and 0 ≤ λ < 1,
+ Φ(t) := \frac{1 − |Ψ(t)|^2}{|1 − Ψ(t)|^2} = \Re e \Big[ \frac{1 + Ψ(t)}{1 − Ψ(t)} \Big],
+ Φ_λ(t) := \frac{1 − λ^2 |Ψ(t)|^2}{|1 − λΨ(t)|^2} = −1 + 2 \sum_{k=0}^{\infty} λ^k \Re e(Ψ(t)^k) = −1 + 2 \sum_{k=0}^{\infty} \sum_{\ell} λ^k µ(Z_k = \ell) \cos(2π⟨\ell, t⟩),
+ where the last relation follows from \Re e(Ψ(t)^k) = \Re e(E[e^{2πi⟨Z_k,t⟩}]) = \sum_{\ell} µ(Z_k = \ell) \cos(2π⟨\ell, t⟩).
+ We put Φ(0) = 0. The function Φ is even, non-negative and Φ(t) = 0 only on Γ_1, which is
2544
+ ̸= Td when the r.w. is non deterministic (if the r.w. is deterministic, µ(ζ0 = ℓ) = 1 for
2545
+ some ℓ ∈ Zd and this implies |Ψ(t)| ≡ 1, but this case is excluded). Therefore Φ is ̸= 0
2546
+ a.e. for the Lebesgue measure on Td.
2547
+ Proposition 3.2. (cf. [28]) Let Z = (Zn) be a transient aperiodic random walk in Zd.
+ There is a non-negative constant M such that the Fourier coefficients of \frac{1}{n} K_n converge
+ to those of Φ + M δ_0, and \lim_n \int \frac{1}{n} K_n \, ρ \, dt > 0.
2554
+ Proof. We use that, if (Zn) is transient, then for all ℓ ∈ Zd we have \sum_{k=1}^{\infty} µ(Z_k = ℓ) < +∞.
+ Therefore, the series I(ℓ) := −1_{ℓ=0} + \sum_{k=0}^{\infty} [µ(Z_k = ℓ) + µ(Z_k = −ℓ)] converges and, by
+ (43) and Lemma 3.1, the even functions \frac{1}{n} K_n(x, ·) satisfy:
+ \int_{\mathbb{T}^d} \frac{1}{n} K_n(x, ·) \cos 2π⟨ℓ, ·⟩ \, dt = −1_{ℓ=0} + \sum_{k=0}^{n-1} \frac{1}{n} \sum_{j=0}^{n-k-1} [1_{Z_k(T^j x)=ℓ} + 1_{Z_k(T^j x)=−ℓ}] \xrightarrow[n\to\infty]{} I(ℓ).
2575
+
2576
2578
+ Note that above the sum over k is written starting from 0. By letting λ tend to 1 in the relation
+ −1_{ℓ=0} + \sum_{k=0}^{\infty} λ^k [µ(Z_k = ℓ) + µ(Z_k = −ℓ)] = \int_{\mathbb{T}^d} \cos 2π⟨ℓ, ·⟩ \Big[ −1 + 2 \Re e \Big( \frac{1}{1 − λΨ(·)} \Big) \Big] dt = \int_{\mathbb{T}^d} \cos 2π⟨ℓ, t⟩ \, Φ_λ(t) \, dt,
+ we get, since the left sum tends to I(ℓ):
+ I(ℓ) = \lim_{λ↑1} \int_{\mathbb{T}^d} \cos 2π⟨ℓ, t⟩ \, Φ_λ(t) \, dt.
+ Taking ℓ = 0 in the previous formula, it follows from Fatou's lemma:
+ I(0) = 1 + 2 \sum_{k=1}^{\infty} µ(Z_k = 0) = \lim_{λ↑1} \int_{\mathbb{T}^d} Φ_λ(t) \, dt ≥ \int_{\mathbb{T}^d} \lim_{λ↑1} Φ_λ(t) \, dt = \int_{\mathbb{T}^d} Φ(t) \, dt.
+ This shows the integrability of Φ on T^d and we can write, with a constant M ≥ 0,
+ I(0) = \lim_{λ↑1} \int_{\mathbb{T}^d} Φ_λ(t) \, dt = \int_{\mathbb{T}^d} \lim_{λ↑1} Φ_λ(t) \, dt + M = \int_{\mathbb{T}^d} Φ(t) \, dt + M.
2621
+ Let U_η be the ball of radius η > 0 centered at 0. By aperiodicity of the r.w., Ψ(t) ̸= 1 for
+ t in U_η^c, the complement of U_η in T^d. This implies \sup_{t∈U_η^c} \sup_{λ<1} Φ_λ(t) < +∞.
+ Therefore, we get: \lim_{λ↑1} \int_{U_η^c} \cos 2π⟨ℓ, t⟩ \, Φ_λ(t) \, dt = \int_{U_η^c} \cos 2π⟨ℓ, t⟩ \, Φ(t) \, dt, hence:
+ I(ℓ) = \int_{U_η^c} \cos 2π⟨ℓ, t⟩ \, Φ(t) \, dt + \lim_{λ↑1} \int_{U_η} \cos 2π⟨ℓ, t⟩ \, Φ_λ(t) \, dt, ∀η > 0,
+ which can be written:
+ −\int_{U_η} \cos 2π⟨ℓ, ·⟩ \, Φ \, dt = I(ℓ) − \int_{\mathbb{T}^d} \cos 2π⟨ℓ, ·⟩ \, Φ \, dt − \lim_{λ↑1} \int_{U_η} \cos 2π⟨ℓ, ·⟩ \, Φ_λ \, dt.   (45)
2652
+ Let ε > 0. By positivity of Φ_λ, we have, for η(ε) small enough:
+ (1 − ε) \int_{U_{η(ε)}} Φ_λ \, dt ≤ \int_{U_{η(ε)}} \cos 2π⟨ℓ, ·⟩ \, Φ_λ \, dt ≤ (1 + ε) \int_{U_{η(ε)}} Φ_λ \, dt;
+ By subtracting \int_{U_{η(ε)}} \cos 2π⟨ℓ, t⟩ \, Φ(t) \, dt in the previous inequalities and (45), we get:
+ (1 − ε) \int_{U_{η(ε)}} Φ_λ \, dt − \int_{U_{η(ε)}} \cos 2π⟨ℓ, ·⟩ \, Φ \, dt
+ ≤ I(ℓ) − \int_{\mathbb{T}^d} \cos 2π⟨ℓ, ·⟩ \, Φ \, dt − \lim_{λ↑1} \int_{U_{η(ε)}} \cos 2π⟨ℓ, ·⟩ \, Φ_λ \, dt + \int_{U_{η(ε)}} \cos 2π⟨ℓ, ·⟩ \, Φ_λ \, dt
+ ≤ (1 + ε) \int_{U_{η(ε)}} Φ_λ \, dt − \int_{U_{η(ε)}} \cos 2π⟨ℓ, ·⟩ \, Φ \, dt;
2690
+
2691
2693
+ As we can choose λ such that
+ \Big| − \lim_{λ↑1} \int_{U_{η(ε)}} \cos 2π⟨ℓ, ·⟩ \, Φ_λ \, dt + \int_{U_{η(ε)}} \cos 2π⟨ℓ, ·⟩ \, Φ_λ \, dt \Big| ≤ ε,
+ we obtain:
+ −ε + (1 − ε) \int_{U_{η(ε)}} Φ_λ \, dt − \int_{U_{η(ε)}} \cos 2π⟨ℓ, ·⟩ \, Φ \, dt
+ ≤ I(ℓ) − \int_{\mathbb{T}^d} \cos 2π⟨ℓ, ·⟩ \, Φ \, dt ≤ ε + (1 + ε) \int_{U_{η(ε)}} Φ_λ \, dt − \int_{U_{η(ε)}} \cos 2π⟨ℓ, ·⟩ \, Φ \, dt.
2719
+ For ε small enough, \int_{U_{η(ε)}} \cos 2π⟨ℓ, ·⟩ \, Φ \, dt can be made arbitrarily small, as well as ε \sup_{λ<1} \int_{U_η} Φ_λ \, dt,
+ since Φ is integrable and \sup_{λ<1} \int_{\mathbb{T}^d} Φ_λ \, dt < ∞.
+ This shows that I(ℓ) − \int_{\mathbb{T}^d} \cos 2π⟨ℓ, ·⟩ \, Φ \, dt − \int_{U_{η(ε)}} Φ_λ \, dt can be made arbitrarily small for
+ ε > 0 small and λ close to 1. The same is true for ℓ = 0 and also for the difference
+ \Big[ I(ℓ) − \int_{\mathbb{T}^d} \cos 2π⟨ℓ, ·⟩ \, Φ \, dt − \int_{U_{η(ε)}} Φ_λ \, dt \Big] − \Big[ I(0) − \int_{\mathbb{T}^d} Φ \, dt − \int_{U_{η(ε)}} Φ_λ \, dt \Big]
+ = \Big[ I(ℓ) − \int_{\mathbb{T}^d} \cos 2π⟨ℓ, ·⟩ \, Φ \, dt \Big] − \Big[ I(0) − \int_{\mathbb{T}^d} Φ \, dt \Big] = \Big[ I(ℓ) − \int_{\mathbb{T}^d} \cos 2π⟨ℓ, ·⟩ \, Φ \, dt \Big] − M.
+ Therefore I(ℓ) = \int_{\mathbb{T}^d} \cos 2π⟨ℓ, t⟩ \, Φ(t) \, dt + M for all ℓ, and the Fourier coefficients of \frac{1}{n} K_n
+ converge to those of Φ + M δ_0.
+ As the non-negative sequence (\frac{1}{n} K_n) is bounded in L^1-norm and the density ρ is continuous,
+ this proves \int \frac{1}{n} K_n ρ \, dt → \int Φ ρ \, dt + M ρ(0). Moreover, the limit is > 0 since both
+ Φ and ρ are not 0 a.e.
+ It is shown in [28] that M = 0 for d > 1.
2763
+
2764
+ Behaviour of Mn(x).
+ In the transient case (d ≥ 3) (at least for a simple r.w.), Erdős and Taylor (1960) proved
+ that for a constant γ > 0 depending on the dimension,
+ \lim_n \frac{M_n(x)}{\log n} = γ.
+ Recurrent case
+ In dimension 1, H. Kesten has shown that \limsup_n \frac{M_n}{\sqrt{n \ln\ln n}} = \frac{\sqrt{2}}{σ}. Therefore in
+ dimension 1, we have the following lower and upper bounds for Vn:
+ C_1(x)\, n^{3/2} (\ln\ln n)^{-1/2} ≤ V_n(x) ≤ C_2(x)\, n^{3/2} (\ln\ln n)^{1/2}.
+ Dimension d = 2.
+ There is a deterministic rate (law of large numbers): for a constant C_0,
+ \frac{\int V_n \, dµ}{n \log n} → C_0 \quad and \quad \frac{V_n(x)}{n \log n} → C_0, for a.e. x.
2796
+
2797
2799
+ For a planar simple random walk, Erdős and Taylor [16] have shown:
+ \limsup_n \frac{M_n(x)}{(\log n)^2} ≤ \frac{1}{π}.   (46)
+ The result has been extended by Dembo, Peres, Rosen and Zeitouni [15], who proved for
+ an aperiodic centered random walk on Z2 with moments of all orders:
+ \lim_n \frac{M_n(x)}{(\log n)^2} = \frac{1}{2π \det(Γ)^{1/2}},
+ where Γ is the covariance matrix associated to the random walk.
+ As shown in the proof in [15], it suffices to suppose that the 2-dimensional r.w. is aperiodic.
+ Moreover, the proof for the upper bound is based on the local limit theorem, which uses
+ only the existence of the moment of order 2. Therefore, assuming the existence of the
+ moment of order 2, the upper bound (46) holds.
+ It follows in this case that there exists C(x), a.e. finite, such that:
+ \frac{M_n^2(x)}{V_n(x)} ≤ C(x) \frac{(\log n)^3}{n}.
2827
+ 3.2. Extensions of the r.w. case.
2828
+ 1) Consequence of the Local Limit Theorem (LLT).
2829
+ The Local Limit Theorem, when it is satisfied by the cocycle (T, f), gives some pointwise
+ information on Vn(x). For example, if d = 2, the following lemma holds:
+ Lemma 3.3. Suppose that the LLT holds and d = 2. Then, for every ε > 0, there is an
+ integrable function C (depending on ε), such that:
+ N_n(x, 0) ≤ C(x) (\ln n)^{2+ε}, \quad V_n(x) ≤ C(x)\, n\, (\log n)^{2+ε}.   (47)
+ Proof. By the LLT, it holds, for n ≥ 1,
+ \int N_n(·, 0) \, dµ = \sum_{k=1}^{n} \int 1_{f_k = 0} \, dµ ≤ C \sum_{k=1}^{n} \frac{1}{k} ≤ C \ln n.
+ Let ε be a positive constant. Putting Γ(x) = \sum_{n=1}^{\infty} n^{-(2+ε)} N_{2^n}(x, 0), we have:
+ \int Γ(x) \, dµ(x) ≤ C \sum_{n=1}^{\infty} n^{-(2+ε)} n = C \sum_{n=1}^{\infty} n^{-(1+ε)} < +∞,
+ so that N_{2^n}(x, 0) ≤ Γ(x)\, n^{2+ε}, where Γ is integrable.
+ If 2^{k_n} ≤ n < 2^{k_n+1}, then with p = 2 + ε, we have for n big enough,
+ \frac{N_n(x, 0)}{(\log_2 n)^p} ≤ \frac{N_{2^{k_n+1}}(x, 0)}{k_n^p} ≤ \frac{(k_n + 1)^p}{k_n^p} \cdot \frac{N_{2^{k_n+1}}(x, 0)}{(k_n + 1)^p} = (1 + 1/k_n)^p \, Γ(x) ≤ 2Γ(x).
2871
+ = (1 + 1/kn)p Γ(x) ≤ 2Γ(x).
2872
+
2873
2875
+ For Vn(x), by (27) we have:
+ \int V_n(x) \, dµ(x) = 2 \sum_{k=1}^{n-1} \int N_{n-k}(x, 0) \, dµ(x) + n = O(n \log n).
+ As above for Nn(x, 0), the pointwise bound (47) follows.
2884
+
2885
+ Among examples of cocycles satisfying an LLT, there are the r.w.'s (but with more precise
+ results as recalled above), but also cocycles generated by functions with values in Zd
+ depending on a finite number of coordinates over a sub-shift of finite type endowed with
+ a Gibbs measure ([19], [18]).
2889
+ 2) Functions depending on a finite number of coordinates on a Bernoulli scheme.
2890
+ Now we try to bound Mn(x) in a situation which slightly extends that of random walks.
+ Suppose that (X, µ, T) is a Bernoulli scheme with X = I^{\mathbb{N}}, where I is a finite set. Let
+ f : x → f(x_1, ..., x_r) be a centered function from X to Zd, d ≥ 1, depending on a finite
+ number of coordinates.
+ Let us consider the generalized random walk (Zn) defined by the sequence of ergodic sums
+ Z_n(x) = f_n(x) = X_0(x) + ... + X_{n-1}(x), where X_k(x) = f(T^k x).
+ Lemma 3.4. For all m ≥ 1 and for constants C_m, C'_m independent of ℓ, we have:
+ \int N_n^m(·, ℓ) \, dµ = \int \Big[ \sum_{k=1}^{n} 1_{f_k = ℓ} \Big]^m dµ \; ≤ C_m n^{m/2}, for d = 1; \; ≤ C'_m (\log n)^m, for d = 2.
2910
+ Proof. We bound the sum �
2911
+ 1≤k1<k2<...<km≤n µ(fk1 = ℓ, fk2 = ℓ, ..., fkm = ℓ).
2912
+ For r ≤ k1 < k2 < ... < km ≤ n, writing Xk instead of T kf, we have:
2913
+ µ(X0 + ... + Xk1−1 = ℓ, Xk1 + ... + Xk2−1 = 0, ..., Xkm−1 + ... + Xkm−1 = 0)
2914
+ =
2915
+
2916
+ a1,1,...,ar,1,...,a1,m,...,ar,m∈S
2917
+ µ[
2918
+ X0 + ... + Xk1−r = ℓ − (a1,1 + ... + ar,1), Xk1−r+1 = a1,1,..., Xk1−1 = ar,1,
2919
+ Xk1 + ... + Xk2−r = −(a1,2 + ... + ar,2), Xk2−r+1 = a1,2, ..., Xk2−1 = ar,2, ...
2920
+ Xkm−1 + ... + Xkm−r = −(a1,m + ... + ar,m), Xkm−r+1 = a1,m, ..., Xkm−1 = ar,m];
2921
+ which is less than:
2922
+
2923
+ a1,1,...,ar,1,...,a1,m,...,ar,m∈S
2924
+ µ[X0 + ... + Xk1−r = ℓ − (a1,1 + ... + ar,1),
2925
+ Xk1 + ... + Xk2−r = −(a1,2 + ... + ar,2), ..., Xkm−1 + ... + Xkm−r = −(a1,m + ... + ar,m)].
2926
+
2927
2929
+ Now inside [.] the events are independent. By independence and stationarity, the preceding
2930
+ sum is
2931
+
2932
+ a1,1,...,ar,1,...,a1,m,...,ar,m∈S
2933
+ µ[X0 + ... + Xk1−r = ℓ − (a1,1 + ... + ar,1)]
2934
+ µ[X0 + ... + Xk2−k1−r = −(a1,2 + ... + ar,2)] ...
2935
+ µ[ X0 + ... + Xkm−km−1−r = −(a1,m + ... + ar,m)].
2936
+ With τn(k) = supℓ µ(X0 + ... + Xk−1 = ℓ) 1[0,n](k), we get the following bound
2937
+
2938
+ 1≤k1<k2<...<km≤n
2939
+ µ(fk1 = ℓ, fk2 = ℓ, ..., fkm = ℓ)
2940
+ =
2941
+
2942
+ r≤k1<k2<...<km≤n
2943
+ µ(X1 + ... + Xk1−1 = ℓ, Xk1 + ... + Xk2−1 = 0, ..., Xkm−1 + ... + Xkm−1 = 0)
2944
+ ≤ srm
2945
+
2946
+ r≤k1<k2<...<km≤n
2947
+ τn(k1 − r) τn(k2 − k1 − r) ... τn(km − km−1 − r)
2948
+ ≤ Csrm �
2949
+ k
2950
+ (τn ∗ τn ∗ ... ∗ τn)(k) ≤ Csrm(
2951
+
2952
+ k
2953
+ τn(k))m.
2954
+ We have s^{rm} terms in the first sum, where s denotes the cardinality |S|.
2955
+ Now we can use, as for the usual r.w., convolution and the local limit theorem.
2956
+
2957
+ From the lemma, it follows easily in the recurrent case that for a.e. x, for all ε > 0,
+ if d = 1, M_n(x) = o(n^{1/2+ε}) and, if d = 2, M_n(x) = o(n^ε).
+ In the transient case, if there is a moment of order η for some η > 0, then M_n(x) = o(n^ε)
+ for all ε > 0. For these estimates, in both cases, see [11].
+ A question is to extend the previous results to a larger class of functions depending weakly
+ on the far coordinates. A difficulty for such an extension is that explicit bounds in the LLT
+ are not always available.
2966
+ 3.3. Step functions over rotations.
2967
+ Now we take X = T^r, r ≥ 1, endowed with µ, the uniform measure, and we consider
+ cocycles over rotations. When they are centered, such cocycles are strongly recurrent and
+ therefore the associated quantities Vn and Mn are big. The difficult part is to bound them
+ from above. We will give an example where an upper bound can be obtained.
+ Let Tα be the rotation by an irrational α. For f : X → Zd, recall that the cylinder map
+ (cf. Subsection 2.1) is \tilde{T}_{f,α} = \tilde{T}_α : X × Zd → X × Zd defined by \tilde{T}_α(x, ℓ) = (x + α, ℓ + f(x)).
+ Non centered step cocycles over a rotation.
+ Let f be a non centered function with a finite number of values in Zd. Suppose
+ that f is Riemann integrable, which amounts to assuming that, for the uniform measure of
+ the torus, the measure of the set of discontinuity points of f is zero.
2977
+
2978
2980
+ Then by a remark in Subsection 2.2, Mn(x) is bounded uniformly in x and n. Therefore,
2981
+ for Vn(x), the bounds n ≤ Vn(x) ≤ Cn are satisfied.
2982
+ Centered step cocycles over a 1-dimensional rotation.
2983
+ The interesting situation is that of centered functions. We will consider the case r = 1
2984
+ and when the irrational number α has bounded partial quotients.
2985
+ Recall that an irrational α with continued fraction expansion [0; a1, a2, ..., an, ...] is said
2986
+ to have bounded partial quotients (bpq) if supn an < +∞. The set of bpq numbers has
2987
+ Lebesgue measure zero and Hausdorff dimension 1.
2988
+ In the sequel of this subsection, α will be an irrational bpq number (for instance a qua-
2989
+ dratic irrational) and f a centered function with values in Z and bounded variation.
2990
+ By Denjoy-Koksma inequality, there is a logarithmic bound for the cocycle (Tα, f): |fn(x)| ≤
2991
+ C ln n, for a constant C.
2992
+ The cocycle is strongly recurrent to 0 (and this is true for d ≥ 1 if f centered has values
2993
+ in Zd, when its components have bounded variation).
2994
+ This makes the corresponding
2995
+ maximum Mn(x) big. Nevertheless, we will see that condition (18) is satisfied, at least
2996
+ for a special example.
2997
+ Lower bound.
+ Lower bound for Vn and variance, case d = 1.
+ For a general sequence (zk), we can obtain a lower bound for Vn by an elementary method
+ when there is an upper bound for the variance defined below.
+ Lemma 3.5. Defining the mean m_n and the variance σ_n^2 by
+ m_n = \frac{1}{n} \sum_{k=1}^{n} z_k, \quad σ_n^2 = \frac{1}{n} \sum_{k=1}^{n} (z_k − m_n)^2,
+ we have
+ V_n ≥ \frac{1}{9} \frac{n^2}{σ_n}, if σ_n > 1.   (48)
+ Proof. Suppose that σ_n > 0. For λ > 1, let ∆_λ := [−λσ_n + m_n, λσ_n + m_n] ∩ Z. We have:
+ σ_n^2 ≥ \frac{1}{n} \sum_{k=0}^{n-1} (z_k − m_n)^2 1_{z_k ∈ ∆_λ^c} ≥ \frac{1}{n} \sum_{k=0}^{n-1} 1_{z_k ∈ ∆_λ^c} \, λ^2 σ_n^2.
+ Therefore: \sum_{k=0}^{n-1} 1_{z_k ∈ ∆_λ} ≥ n(1 − λ^{-2}). As Card(∆_λ) ≤ 2λσ_n + 1, it follows by (24):
+ V_n ≥ \frac{(1 − λ^{-2})^2}{2λσ_n + 1} n^2.
+ For λ = 2 we get: V_n ≥ \frac{9/16}{4σ_n + 1} n^2 ≥ \frac{9}{80} \frac{n^2}{σ_n}, if σ_n > 1; hence (48).
3050
+
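+ The bound (48) is easy to test numerically; the sketch below is ours (an integer-valued sequence z_k chosen only for illustration, with V_n computed as Σ_ℓ N_n(ℓ)^2 in accordance with the discussion above and inequality (24)).
+ ```python
+ import numpy as np
+ from collections import Counter
+
+ def check_lower_bound(z):
+     """Check V_n >= n^2/(9*sigma_n) from Lemma 3.5 for an integer sequence z."""
+     n = len(z)
+     m = np.mean(z)
+     sigma = np.sqrt(np.mean((np.asarray(z) - m) ** 2))
+     V = sum(c * c for c in Counter(z).values())     # V_n = sum_l N_n(l)^2
+     bound = n * n / (9 * sigma) if sigma > 1 else None
+     return V, bound
+
+ rng = np.random.default_rng(2)
+ z = np.cumsum(rng.choice([-1, 1], size=5000))       # a centered walk on Z
+ V, bound = check_lower_bound(z.tolist())
+ print(V, bound, bound is not None and V >= bound)
+ ```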
3051
+
3052
3054
+ If z_k is given by ergodic sums, i.e., z_k = f_k(x), let
+ m_n(x) := \frac{1}{n} \sum_{k=1}^{n} f_k(x), \quad σ_n^2(x) = \frac{1}{n} \sum_{k=1}^{n} (f_k(x) − m_n(x))^2.
+ By [5, Proposition 13], for α bpq and f with bounded variation, it holds σ_n^2(x) ≤ C \ln n.
+ Using (48) and V_n(x) ≤ n M_n(x), this gives a lower bound for V_n(x) and M_n(x):
+ V_n(x) ≥ c \frac{n^2}{\sqrt{\ln n}}, \quad M_n(x) ≥ c \frac{n}{\sqrt{\ln n}}.   (49)
+ Below we will get an estimate from above in the following example.
+ Example 3.6. f = 1_{[0, 1/2)} − 1_{[1/2, 1)} and α bpq.
3084
+ Upper bound for the example (3.6).
+ For f as above and α bpq, we have by [1], for some constant C_1 > 0,
+ ∥N_n(·, 0)∥_∞ = ∥\tilde{S}_n(1_{T^1 × \{0\}})(·, 0)∥_∞ ≤ \frac{C_1 n}{\sqrt{\log n}}.   (50)
+ Remark that the bound (50) is obtained in [1] as the limit of ∥N_n(·, 0)∥_p, the L^p-norm
+ of N_n(·, 0), as p goes to ∞. Therefore the bound holds for the norm ∥·∥_{ess sup}, but it can
+ be easily replaced by the uniform norm as written above. Indeed, for any x, there is a
+ neighborhood V(x) of x such that for y ∈ V(x), |N_n(x, 0) − N_n(y, 0)| ≤ 1 (at most one
+ jump in V(x)). As one can find y ∈ V(x) satisfying N_n(y, 0) ≤ \frac{C_1 n}{\sqrt{\log n}}, the same inequality
+ holds for x, with C_1 replaced by 2C_1.
+ Using Remark 2.4, it follows:
+ M_n(x) ≤ C_1 \frac{n}{\sqrt{\log n}}.   (51)
+ By (51) and since V_n(x) ≤ n M_n(x), we obtain
+ V_n(x) ≤ C_1 \frac{n^2}{\sqrt{\log n}}.   (52)
+ From (49), (51) and (52), it follows: V_n(x) ≍ n^2/\sqrt{\log n} and M_n(x) ≍ n/\sqrt{\log n}, where
+ a_n ≍ b_n for two sequences (a_n) and (b_n) means c\, a_n ≤ b_n ≤ C\, a_n, ∀n ≥ 1, with two positive
+ constants c, C.
+ Therefore we get in this special example 3.6:
+ \frac{M_n^2(x)}{V_n(x)} ≤ \Big( \frac{C_1 n}{\sqrt{\log n}} \Big)^2 \Big/ \frac{c\, n^2}{\sqrt{\log n}} = \frac{C_1^2}{c} \frac{1}{\sqrt{\log n}} → 0.   (53)
+ Condition (18) of Theorem 1.9 is satisfied in this example, as well as the condition of
+ Theorem 1.6 a), hence a Glivenko-Cantelli theorem along (Snf(x)) for i.i.d. r.v.'s.
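+ The orders of magnitude in (49)–(53) can be observed numerically. The following sketch is ours (the golden mean is taken as an assumed bpq number and the starting point is arbitrary): it computes the ergodic sums S_k f(x) of Example 3.6 and prints M_n and V_n normalised by n/√log n and n²/√log n; the ratios stay of order one.
+ ```python
+ import numpy as np
+ from collections import Counter
+
+ alpha = (np.sqrt(5) - 1) / 2            # golden mean: bounded partial quotients
+ x0 = 0.1234
+ for n in (10_000, 100_000, 400_000):
+     orbit = (x0 + alpha * np.arange(n)) % 1.0
+     f = np.where(orbit < 0.5, 1, -1)    # f = 1_[0,1/2) - 1_[1/2,1)
+     S = np.cumsum(f)                    # S_1 f(x), ..., S_n f(x)
+     N = Counter(S.tolist())             # local times of the cocycle
+     M_n = max(N.values())
+     V_n = sum(c * c for c in N.values())
+     print(n, M_n / (n / np.sqrt(np.log(n))), V_n / (n**2 / np.sqrt(np.log(n))))
+ ```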
3125
+
3126
3128
+ But the sufficient conditions for the Glivenko-Cantelli theorems 1.5, 1.6 b), 1.8 are not
3129
+ satisfied by this cocycle and more generally, in view of the lower bound (49), by a cocycle
3130
+ defined by step functions over a bpq irrational rotation.
3131
+ 4. About limit theorems along ergodic sums
3132
+ 4.1. Glivenko-Cantelli theorem along ergodic sums.
3133
+ The Glivenko-Cantelli theorem recalled in the introduction is a (pointwise) law of large
3134
+ numbers uniform over a set of functions (here the indicators of intervals).
3135
+ When the
3136
+ r.v.’s Xk are i.i.d., the proof is an easy consequence of the strong law of large numbers
3137
+ applied to the sequence of i.i.d. bounded r.v.’s (1Xk≤s). Using Birkhoff’s ergodic theorem,
3138
+ the Glivenko-Cantelli theorem has been extended to the setting of a strictly stationary
3139
+ sequence (Xk) of random variables. More precisely, formulated in terms of dynamical
3140
+ systems, the following holds:
3141
+ Let (Y, A, ν) be a probability space and S an ergodic measure preserving transformation on
3142
+ Y . For any measurable function ϕ : Y → R, let us consider the strictly stationary sequence
3143
+ (Xk) defined by Xk = ϕ◦Sk, k ≥ 0. Then the sequence of empirical distribution functions
3144
+ satisfies: for ν a.e. y ∈ Y, \sup_s \Big| \frac{1}{n} \sum_{k=0}^{n-1} 1_{X_k(y) ≤ s} − F(s) \Big| → 0, where F(s) = ν(ϕ ≤ s).
3148
+ Observe that the result is an application of Birkhoff’s theorem and Lemma 1.1 recalled
3149
+ in Section 1. Its extension to the non ergodic case has been formulated by Tucker [29],
3150
+ the distribution function F(s) being replaced by the conditional distribution function
3151
+ E(1_{ϕ≤s}|J), where J is the σ-algebra of S-invariant sets. In other words, we have:
+ for ν a.e. y ∈ Y, \lim_{n→∞} \sup_s \Big| \frac{1}{n} \sum_{k=0}^{n-1} 1_{ϕ(S^k y) ≤ s} − E(1_{ϕ≤s}|J)(y) \Big| = 0.
3160
+ The above formula relies on the ergodic decomposition which can be used in the proof.
3161
+ In the previous framework, for a process, a Glivenko-Cantelli like theorem sampled along
3162
+ a sequence generated by a dynamical system can be obtained as follows:
3163
+ As in Subsection 2, let T be an ergodic measure preserving transformation on a probability
3164
+ space (X, B, µ) and f a measurable function on X with values in Zd, d ≥ 1.
3165
+ Let us take a second system (Ω, P, θ), where θ = (θℓ)ℓ∈Zd is a Zd-action preserving P.
3166
+ The skew product associated to the cocycle (T, f) and θ is the map Tθ,f : (x, ω) →
+ (Tx, θ_{f(x)}ω) from X × Ω to itself. By iteration we get:
+ T_{θ,f}^k(x, ω) = (T^k x, θ_{f_k(x)}ω).
3170
+ For example, as Zd-action, we can take a Zd-Bernoulli shift (Ω, P, (θℓ)ℓ∈Zd), with P a
3171
+ product measure and θ the shift on the coordinates. If X0 is the first coordinate map,
3172
+ then (Xℓ) = (X0 ◦ θℓ) is a family of i.i.d. r.v.’s indexed by Zd.
3173
+
3174
3176
+ In general, let Iθ,f denote the conditional expectation with respect to the σ-algebra of
+ Tθ,f-invariant sets. The ergodic theorem for Tθ,f shows that, for ψ ∈ L1(µ × P),
+ \lim_n \frac{1}{n} \sum_{k=0}^{n-1} ψ(T^k x, θ_{f_k(x)}ω) = I_{θ,f}(ψ)(x, ω), for µ × P-a.e. (x, ω).   (54)
3187
+ If ϕ is a measurable function on Ω, putting ψ_s(x, ω) = 1_{I_s}(ϕ(ω)), where I_s is the half-line
+ ]−∞, s], we have
+ ψ_s(T_{θ,f}^k(x, ω)) = 1_{I_s}(ϕ(θ_{f_k(x)}ω)).
+ By the quoted Tucker's result, the convergence in (54) for each ψ_s, s ∈ R, can be strength-
+ ened into a uniform convergence with respect to s:
+ for µ × P-a.e. (x, ω), \sup_s \Big| \frac{1}{n} \sum_{k=0}^{n-1} 1_{I_s}(ϕ(θ_{f_k(x)}ω)) − I(ψ_s)(x, ω) \Big| → 0.
+ Therefore, by the Fubini theorem, there is a "sampled" version of the Glivenko-Cantelli
+ theorem for the empirical process of a stationary sequence:
+ Proposition 4.1. For µ-a.e. x, we have
+ \sup_s \Big| \frac{1}{n} \sum_{k=0}^{n-1} 1_{I_s}(ϕ(θ_{f_k(x)}ω)) − I(ψ_s)(x, ω) \Big| → 0, for P-a.e. ω.
3204
+ When Tθ,f is ergodic, if ψ ∈ L1(µ × P), we have I_{θ,f}(ψ)(x, ω) = \int ψ \, dµ \, dP, for µ ×
+ P-a.e. (x, ω), and the centering I(ψ_s)(x, ω) is given by the distribution function F(s) =
+ P(ϕ ≤ s). In this case, for a.e. x, a Glivenko-Cantelli theorem with the usual centering
+ holds for the empirical process sampled along the sequence (zn) given by zn = Snf(x)
+ (with a set of ω's of P-measure 1 depending on x).
3211
+ The lemma below shows, as it is known, that ergodicity of the cylinder map ˜Tf implies
3212
+ ergodicity of the skew map Tθ,f. Let us sketch a proof.
3213
+ Lemma 4.2. Suppose that the cocycle (T, f) is recurrent and the map ˜Tf ergodic. If the
3214
+ action of Zd by θ on (Ω, P) is ergodic, then Tθ,f is ergodic on (X × Ω, µ × P).
3215
+ Proof. : Let Φ be a Tθ,f invariant measurable function on X × Ω:
3216
+ Φ(Tx, θf(x)ω) = Φ(x, ω), for a.e. (x, ω).
3217
+ For a.e. x, there is a set Ω0
3218
+ x of full P-measure in Ω such that Φ(Tx, θf(x)ω) = Φ(x, ω), for
3219
+ all ω ∈ Ω0
3220
+ x. As Zd is countable, for a.e. x, there is a set Ωx of full measure such that
3221
+ Φ(Tx, θf(x)θℓω) = Φ(x, θℓω), for all ω ∈ Ωx.
3222
+ Let ω ∈ Ωx. The function ϕω(x, ℓ) := Φ(x, θℓω) on X × Zd is measurable, ˜Tf-invariant:
3223
+ ϕω( ˜Tf(x, ℓ))
3224
+ =
3225
+ ϕω(Tx, ℓ + f(x)) = Φ(Tx, θℓ+f(x)ω)
3226
+ =
3227
+ Φ(Tx, θf(x)θℓω) = Φ(x, θℓω) = ϕω(x, ℓ).
3228
+ It follows from the ergodicity of ˜Tf that there is a constant cω such that ϕω(x, ℓ) = cω for
3229
+ a.e. x. Therefore Φ coincides a.e. with a function ψ on Ω which is θ-invariant, hence a
3230
+ constant by the assumption of ergodicity of the action of Zd on Ω.
3231
+
3232
+
3233
3235
+ With Fubini’s argument, we get a Glivenko-Cantelli theorem for a.e. x, if we can show
3236
+ that the skew map Tθ,f is ergodic.
3237
+ There are many examples of cylinder flows \tilde{T}_f which are shown to be ergodic in the literature,
+ providing examples via Lemma 4.2. For instance, we can take for T an irrational
+ rotation and f = 1_{[0, 1/2)} − 1_{[1/2, 1)}. The cocycle (T, f) is ergodic and the above version of
+ the Glivenko-Cantelli theorem applies for any stationary sequence (Xk) (with a conditional
+ distribution if the stationary sequence is not ergodic). See also examples for which the
+ skew map is ergodic in [25].
3245
+ 4.2. Discussion: universal sequences.
3246
+ The weakness in the approach of the previous subsection for a sampled Glivenko-Cantelli
3247
+ theorem along ergodic sums (Skf(x), k ≥ 0) is that it yields a set of x’s of µ-measure
3248
+ 1 depending on the dynamical system (Ω, P, θ) and on ϕ. One can try to reinforce the
3249
+ statement by introducing a notion of “universal property”.
3250
+ In this direction, the LLN for sums sampled along ergodic sums is closely related in the
3251
+ following way to the random ergodic theorems which have been studied in several papers.
3252
+ First, let us call "universally good" a sequence (zk) such that, for every dynamical system
+ (Ω, P, θ), for every ϕ ∈ L1(P), the sequence \frac{1}{n} \sum_{k=0}^{n-1} ϕ ◦ θ_{z_k} converges P-a.e.
+ We say that (T, f) is a "(pointwise) good averaging cocycle" (or a universally representative
+ sampling scheme) if, for µ-a.e. x, the sequence (Skf(x)) is universally good, i.e., for every
+ dynamical system (Ω, P, θ), for every ϕ ∈ L1(P), \frac{1}{n} \sum_{k=0}^{n-1} ϕ ◦ θ_{S_k f(x)} converges P-a.e.
3263
+ The definition of a “mean good averaging cocycle” is similar, changing the above conver-
3264
+ gence into convergence in L2(P)-norm, for every ϕ in L2(P).
3265
+ A question which has been studied is to find mean or pointwise good averaging cocycles. In
3266
+ the first direction, examples and counterexamples of mean good averaging 1-dimensional
3267
+ cocycles are studied in [25],
3268
+ For pointwise convergence, there are 1-dimensional examples given by cocycles with a
3269
+ drift. In [23], the following result is shown: the cocycle defined by a random walk with a
3270
+ moment of order 2 is a pointwise good averaging cocycle if and only if it is not centered.
3271
+ Moreover it is shown that any ergodic integrable integer-valued stochastic process with
3272
+ nonzero mean is universally representative for bounded stationary processes. The proofs
3273
+ are based on the recurrence time theorem ([6]).
3274
+ Notice that a related, but different, notion can be introduced by restricting the dynamical
3275
+ system (Ω, P, θ) to belong to a given class C of dynamical systems.
3276
+ Let us call "pointwise good for a class C of dynamical systems" a sequence (zk) such that,
+ for every dynamical system (Ω, P, θ) in the class C, for every ϕ ∈ L1(P),
+ \lim_n \frac{1}{n} \sum_{k=0}^{n-1} ϕ ◦ θ_{z_k} = \int ϕ \, dP, P-a.e. There is a similar property for the mean convergence.
3285
+
3286
3288
+ This can be also expressed for a class of random fields satisfying a condition on the decay
3289
+ of correlations.
3290
+ For example, by Remark 2.8, every cocycle with values in Zd which is not a coboundary is
+ a mean good averaging cocycle for the stationary r.f.s on Zd such that \sum_{ℓ} |⟨U_ℓ, U_0⟩| < +∞.
+ If (zk) is pointwise universally good for a class C, clearly we get the Glivenko-Cantelli
+ property for any dynamical system (Ω, P, θ) in C and every measurable function ϕ, i.e.:
+ \sup_s \Big| \frac{1}{n} \sum_{k=0}^{n-1} 1_{I_s}(ϕ(θ_{z_k}ω)) − P(ϕ ≤ s) \Big| → 0, for P-a.e. ω.   (55)
3300
+ As we see, there are two different approaches of the notion of universal sequences for a
3301
+ law of large numbers: either we ask for a LLN along such a sequence for every dynamical
3302
+ system (Ω, P, θ) and all functions in L1(P) or we fix a class of dynamical systems, or a
3303
+ class of functions in L1(P). In the latter case, the condition on the sequence (zk) may be
3304
+ expressed in a quantitative way. Let us give a known example and recall the proof.
3305
+ Proposition 4.3. Let (zk) be a strictly increasing sequence of positive integers. If the
3306
+ sequence satisfies: for a finite constant C, zk ≤ Ck, ∀k ≥ 1, then (zk) is a pointwise good
3307
+ averaging sequence for the class C of dynamical systems (Ω, P, θ) with Lebesgue spectrum.
3308
+ Proof. There is a dense set of functions ϕ ∈ L1(P) such that
+ \frac{1}{n} \sum_{k=0}^{n-1} ϕ(θ_{z_k}ω) converges P-a.e.   (56)
3316
+ Indeed, by the SLLN for orthogonal random variables, (56) is satisfied by ϕ ∈ L2(P) such
3317
+ that ⟨ϕ, ϕ ◦ θk⟩ = 0, ∀k. The Lebesgue spectrum property implies that such functions
3318
+ span a dense linear space in L2(P), hence in L1(P).
3319
+ Moreover, the space of functions ϕ such that (56) holds is closed by the ergodic maximal
3320
+ lemma in view of the assumption on (zk). Therefore (56) is satisfied by every ϕ ∈ L1(P).
3321
+
3322
+ To finish, we recall the following example which shows that the behaviour may depend
3323
+ on the properties of the dynamical system (Ω, P, θ) (cf. [13]):
3324
+ Let (Ω, F, P) be the interval [0, 1] endowed with the Borel σ-algebra and the Lebesgue
3325
+ measure and take f = 1[0, 1
3326
+ 2]. Denote by T the class of invertible measure preserving
3327
+ transformations on this space. It can be shown that there are increasing sequences (zk)
3328
+ of positive integers satisfying the conditions of the previous proposition such that, for a
3329
+ dense Gδ of elements in T with continuous spectrum, the ergodic means of f along (zk)
3330
+ do not converge P-a.e.
3331
+ References
3332
+ [1]
3333
+ Aaronson, J., Bromberg, M. and Nakada, H.: Discrepancy skew products and affine random
3334
+ walks, Israel J. Math. 221 (2017), no. 2, 973-1010.
3335
+
3336
3338
+ [2]
3339
+ Ambrose L.: Functional generalizations of Hoeffding’s covariance lemma and a formula for
3340
+ Kendall’s tau, Statistics and Probability Letters, 122 (2017), 218-226.
3341
+ [3]
3342
+ Billingsley P.: Convergence of probability measures, 1rst ed. John Wiley & Sons, Inc., 1968.
3343
+ [4]
3344
+ Birkel, T.: A note on the strong law of large numbers for positively dependent random variables,
3345
+ Statist. Probab. Lett. 7 (1988), no. 1, 17-20.
3346
+ [5]
3347
+ Borda, B.: On the distribution of Sudler products and Birkhoff sums for the irrational rotation,
3348
+ Arkiv 2021.
3349
+ [6]
3350
+ Bourgain J., Furstenberg H., Katznelson Y. and Ornstein D.: Appendix on return-time se-
3351
+ quences. Publ. Math. IHES, (69) 42-45, 1989.
3352
+ [7]
3353
+ Chung, K.L: A course in probability theory, 3rd ed. (2001), Acad. Press, Inc., San Diego, CA.
3354
+ [8]
3355
+ Chung, K. L., Fuchs, W. H. J. On the distribution of values of sums of random variables, Mem.
3356
+ Amer. Math. Soc. 6 (1951), 157-168.
3357
+ [9]
3358
+ Chung, K. L., Ornstein, D., On the recurrence of sums of random variables, Bull. Amer. Math.
3359
+ Soc. 68 (1962), 30-32.
3360
+ [10]
3361
+ Cohen, G., Conze, J.-P.: On the quenched functional CLT in random sceneries, to appear in
3362
+ Studia Matematica.
3363
+ [11]
3364
+ Cohen, G., Conze, J.-P.: CLT for random walks of commuting endomorphisms on compact
3365
+ abelian groups, J. Theoret. Probab. 30 (2017), no. 1, 143-195.
3366
+ [12]
3367
+ Conze, J. P., Remarques sur les transformations cylindriques et les ´equations fonctionnelles,
3368
+ Pub. s´em. math. info. de Rennes, 1976, fasc. 2, http://www.numdam.org/actas/PSMIR/
3369
+ [13]
3370
+ Conze, J.-P., Convergence des moyennes ergodiques pour des sous-suites, Bull. Soc. Math. Fr.,
3371
+ M´emoire 35, (1973), p. 7-15.
3372
+ [14]
3373
+ Deligiannidis G., Gouezel, S., Kosloff, Z.: Boundary of the range of a random walk and the
3374
+ Folner property, Electron. J. Probab. 26: 1-39 (2021). DOI: 10.1214/21-EJP667
3375
+ [15]
3376
+ Dembo, A.; Peres, Y.; Rosen, J.; Zeitouni, O.: Thick points for planar Brownian motion and
3377
+ the Erdos-Taylor conjecture on random walk. Acta Math. 186 (2001), no. 2, 239-270.
3378
+ [16]
3379
+ Erdos, P.; Taylor, S. J.: Some problems concerning the structure of random walk paths. Acta
3380
+ Math. Acad. Sci. Hungar. 11 (1960), 137–162.
3381
+ [17]
3382
+ Esary, J. D.; Proschan, F.; Walkup, D. W., Association of random variables, with applications,
3383
+ Ann. Math. Statist. 38 (1967), 1466-1474.
3384
+ [18]
3385
+ Gou¨ezel S.: Berry-Esseen theorem and local limit theorem for non uniformly expanding maps,
3386
+ Ann. Inst. H. H. Poincar´e PR 41 (2005), 997–1024
3387
+ [19]
3388
+ Guivarc’h, Y. et Hardy, J.: Th´eor`emes limites pour une classe de chaˆınes de Markov et appli-
3389
+ cations aux diff´eomorphismes d’Anosov, Ann. Inst. H. Poincar´e 24 (1) (1988), 73-98.
3390
+ [20]
3391
+ Hoeffding W.: Probability Inequalities for Sums of Bounded Random Variables, Journal of the
3392
+ AMS vol. 58 (1963), no. 301, 13-30.
3393
+ [21]
3394
+ Hewitt, E.; Ross, K. A.: Abstract harmonic analysis, Vol. II. Die Grundlehren der mathematis-
3395
+ chen Wissenschaften, Band 152 Springer-Verlag, New York-Berlin 1970.
3396
+ [22]
3397
+ Kesten, H.: An iterated logarithm law for local time. Duke Math. J. 32 (1965), 447-456.
3398
+ [23]
3399
+ Lacey, M.; Petersen, K.; Wierdl, M.; Rudolph, D.: Random ergodic theorems with universally
3400
+ representative sequences., Ann. I. H. Poincar´e Prob. Stat. 30 (1994), no. 3, 353-395.
3401
+ [24]
3402
+ Lehmann, E. L.: Some concepts of dependence, Ann. Math. Statist. 37 (1963), 1137-1153.
3403
+ [25]
3404
+ Lema´nczyk, M.; Lesigne, E.; Parreau, F.; Voln´y, D.; Wierdl, M.: Random ergodic theorems and
3405
+ real cocycles, Israel J. Math. 130 (2002), 285-321.
3406
+ [26]
3407
+ Newman, C.M.: Normal fluctuations and the FKG inequalities, Comm. Math. Phys. 74 (1980),
3408
+ no. 2, 119-128.
3409
+
3410
3412
+ [27]
3413
+ Newman, C.M. and Wright, A.L.: An invariance principle for certain dependent sequences, Ann.
3414
+ Probab. 9 (1981), no. 4, 671-675.
3415
+ [28]
3416
+ Spitzer, F.: Principles of random walk, The University Series in Higher Mathematics D. Van
3417
+ Nostrand Co., Inc., Princeton, N.J.-Toronto-London (1964). doi: 10.1007/978-1-4757-4229-9
3418
+ [29]
3419
+ Tucker, H.: A generalization of the Glivenko-Cantelli theorem, Annals of Math. Stat., vol. 20,
3420
+ no 3, p. 828-830 (1959).
3421
+ [30]
3422
+ Yu Hao, A Glivenko-Cantelli lemma and weak convergence for empirical processes of associated
3423
+ sequences, Probab. Theory Related Fields 95 (1993), no. 3, 357-370.
3424
+ [31]
3425
+ Zygmund, A.: Trigonometric series, Second edition, Cambridge University Press, London-New
3426
+ York 1968.
3427
+ Guy Cohen,
3428
+ School of Electrical Engineering,
3429
+ Ben-Gurion University, Israel
3430
+ Email address: guycohen@bgu.ac.il
3431
+ Jean-Pierre Conze,
3432
+ IRMAR, CNRS UMR 6625,
3433
+ University of Rennes, Campus de Beaulieu, 35042 Rennes Cedex, France
3434
+ Email address: conze@univ-rennes1.fr
3435
+
4dFJT4oBgHgl3EQfjyxF/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
7tAzT4oBgHgl3EQfE_oc/content/tmp_files/2301.01001v1.pdf.txt ADDED
@@ -0,0 +1,1527 @@
1
+ arXiv:2301.01001v1 [math.DG] 3 Jan 2023
2
+ On a Class of Generalized Berwald Manifolds
3
+ A. Tayebi and F. Eslami
4
+ January 4, 2023
5
+ Abstract
6
+ The class of generalized Berwald metrics contains the class of Berwald metrics. In this
7
+ paper, we characterize two-dimensional generalized Berwald (α, β)-metrics with vanishing
8
+ S-curvature. Let F = αφ(s), s = β/α, be a two-dimensional generalized Berwald (α, β)-
9
+ metric on a manifold M. Suppose that F has vanishing S-curvature. We show that one of
10
+ the following holds: (i) if F is a regular metric, then it reduces to a Riemannian metric of
11
+ isotropic sectional curvature or a locally Minkowskian metric; (ii) if F is an almost regular
12
+ metric that is not Riemannian nor locally Minkowskian, then we find the explicit form of
13
+ φ = φ(s) which obtains a generalized Berwald metric that is neither a Berwald nor Lands-
14
+ berg nor a Douglas metric. This provides a generalization of Szab´o rigidity theorem for
15
+ the class of (α, β)-metrics. In the following, we prove that left invariant Finsler surfaces
16
+ with vanishing S-curvature must be Riemannian surfaces of constant sectional curvature.
17
+ Finally, we construct a family of odd-dimensional generalized Berwald Randers metrics.
18
+ Keywords: Generalized Berwald metric, Berwald metric, (α, β)-metric, S-curvature.1
19
+ 1
20
+ Introduction
21
+ A Finsler metric F on a manifold M is called a generalized Berwald metric if there exists a
22
+ covariant derivative ∇ on M such that the parallel translations induced by ∇ preserve the
23
+ Finsler function F [18][19][20][26]. In this case, F is called a generalized Berwald metric on
24
+ M and (M, F) is called a generalized Berwald manifold. If the covariant derivative ∇ is also
25
+ torsion-free, then F reduces to a Berwald metric.
26
+ Therefore, the class of Berwald metrics
27
+ belongs to the class of generalized Berwald metrics. The importance of the class of generalized
28
+ Berwald manifolds lies in the fact that these manifolds may have a rich isometry group [16][17].
29
+ For some progress about the class of generalized Berwald manifolds, see [2], [4], [20], [25], [26],
30
+ [27] and [28].
31
+ To find generalized Berwald metrics, one can consider the class of (α, β)-metrics. An (α, β)-
32
+ metric is a Finsler metric defined by F := αφ(s), s = β/α, where φ = φ(s) is a smooth function
33
+ on a symmetric interval (−b0, b0) with certain regularity, α =
34
+
35
+ aij(x)yiyj is a Riemannian
36
+ metric and β = bi(x)yi is a 1-form on the base manifold. The simplest (α, β)-metrics are the
37
+ Randers metrics F = α+β which were discovered by G. Randers when he studied 4-dimensional
38
+ general relativity. These metrics have been widely applied in many areas of natural sciences,
39
+ including physics, biology, psychology, etc [5]. In [26], Vincze proved that a Randers metric
40
+ F = α + β is a generalized Berwald metric if and only if dual vector field β♯ is of constant
41
+ 12010 Mathematics subject Classification: 53C60, 53C25.
42
+ 1
43
+
44
46
+ Riemannian length. In [20], Tayebi-Barzegari generalized Vincze’s result for (α, β)-metrics and
47
+ showed that an (α, β)-metric satisfying the so-called sign property is a generalized Berwald
48
+ metric if and only if β♯ is of constant Riemannian length. Then, Vincze showed that an (α, β)-
49
+ metric satisfying the regularity property φ′(0) ̸= 0 is a generalized Berwald metric if and only
50
+ if β♯ is of constant Riemannian length [25]. In [4], Bartelmeß-Matveev proved that a Finsler
51
+ metric is a generalized Berwald metric if and only if it is monochromatic.
52
+ Generally, two-dimensional Finsler metrics have some different and special Riemannian and
53
+ non-Riemannian curvature properties from the higher dimensions. For example, Bartelmeß-
54
+ Matveev showed that except for torus and the Klein bottle, the other closed 2-dimensional
55
+ manifolds can not have non-Riemannian generalized Berwald metrics [4]. In [28], Vincze et al.
56
+ showed that a connected generalized Berwald surface is a Landsberg surface if and only if it
57
+ is a Berwald surface. The S-curvature is constructed by Shen for given comparison theorems
58
+ on Finsler manifolds [5]. An interesting problem is to study generalized Berwald metrics with
59
+ vanishing S-curvature. Here, we characterize the class of two-dimensional generalized Berwald
60
+ (α, β)-metrics with vanishing S-curvature and prove the following.
61
+ Theorem 1.1. Let F = αφ(s), s = β/α, be a two-dimensional generalized Berwald (α, β)-
62
+ metric on a connected manifold M. Suppose that F has vanishing S-curvature. Then one of
63
+ the following holds:
64
+ (i) If F is a regular metric, then it reduces to a Riemannian metric of isotropic sectional
65
+ curvature or a locally Minkowskian metric;
66
+ (ii) If F is an almost regular metric that is not Riemannian nor locally Minkowskian, then φ
67
+ is given by
68
+ φ = c \exp \Big( \int_0^s \frac{kt + q\sqrt{b^2 − t^2}}{1 + kt^2 + qt\sqrt{b^2 − t^2}} \, dt \Big),   (1.1)
+ where c > 0, q > 0, and k are real constants, and β satisfies
+ r_{ij} = 0, \quad s_i = 0.   (1.2)
84
+ In this case, F is neither a Berwald nor Landsberg nor a Douglas metric.
85
+ It is remarkable that, for a generalized Berwald (α, β)-metric F = αφ(s), s = β/α, on an
86
+ n-dimensional manifold M, we show that S = 0 if and only if F is Riemannian or β is a Killing
87
+ form with constant length (see Lemma 3.2).
88
+ The celebrated Szab´o rigidity theorem states that every 2-dimensional Berwald surface is
89
+ either locally Minkowskian or Riemannian (see [13]).
90
+ Every Berwald metric has vanishing
91
+ S-curvature [22]. Then, Theorem 1.1 is an extension of Szab´o’s result for (α, β)-metrics.
92
+ The condition of generalized Berwaldness can not be dropped from the assumption of The-
93
+ orem 1.1. There are many two-dimensional (α, β)-metrics with vanishing S-curvature which
94
+ are not Riemannian nor locally Minkowskian. For example, let us consider Shen’s Fish-Tank
95
+ Randers metric F = α + β on R2 given by the following:
+ F = \frac{\sqrt{(xv − yu)^2 + (u^2 + v^2)(1 − x^2 − y^2)}}{1 − x^2 − y^2} + \frac{yu − xv}{1 − x^2 − y^2}.
+ F has vanishing S-curvature [14] while it is not a generalized Berwald metric (∥β∥_α = \sqrt{x^2 + y^2}).
106
+ This metric is not Riemannian. Also, F has vanishing flag curvature while it is not locally
107
+ Minkowskian.
108
+ In Theorem 1.1, the vanishing of S-curvature is necessary. For example, see the following.
109
+
110
112
+ Example 1. Let us consider G := {(x, y) ∈ R2|y > 0} and define a multiplication on G by
113
+ (x1, y1) ∗ (x2, y2) := (x2y1 + x1, y1y2), for (xi, yi) ∈ G, i = 1, 2. (G, ∗) is a Lie group [8]. One
114
+ can introduce a Randers metric F on G by
115
+ F = \frac{\sqrt{2dx^2 + 2dxdy + 2dy^2}}{y} + \frac{dx + dy}{y}.   (1.3)
+ Then
+ a_{11} = a_{22} = \frac{2}{y^2}, \quad a_{21} = a_{12} = \frac{1}{y^2}, \quad b_1 = b_2 = \frac{1}{y},
+ a^{11} = a^{22} = \frac{2}{3} y^2, \quad a^{12} = a^{21} = −\frac{1}{3} y^2, \quad b^1 = b^2 = \frac{y}{3}.
+ It is easy to see that α is a positive-definite Riemannian metric. Also, we get
+ b := ∥β∥_α = \sqrt{a^{ij} b_i b_j} = \sqrt{b_i b^i} = \sqrt{b_1 b^1 + b_2 b^2} = \sqrt{\frac{2}{3}}.
147
+ It follows that F is a positive-definite Randers metric on G. In [26], Vincze showed that a
148
+ Randers metric F = α + β is a generalized Berwald metric if and only if β is of constant length
149
+ with respect to α. Then, the Randers metric defined by (1.3) is a generalized Berwald metric.
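+ The constancy of ∥β∥_α, which is exactly Vincze's criterion quoted here, can be checked numerically; the following short script is our illustration (not part of the paper) and evaluates ∥β∥_α at several points of G, comparing it with √(2/3).
+ ```python
+ import numpy as np
+
+ def beta_norm(y):
+     """alpha-length of beta for Example 1 at a point with second coordinate y."""
+     a = np.array([[2.0, 1.0], [1.0, 2.0]]) / y**2   # (a_ij)
+     b = np.array([1.0, 1.0]) / y                    # (b_i), i.e. beta = (dx + dy)/y
+     return float(np.sqrt(b @ np.linalg.inv(a) @ b))
+
+ for y in (0.5, 1.0, 3.0):
+     print(y, beta_norm(y), np.sqrt(2.0 / 3.0))      # constant, equal to sqrt(2/3)
+ ```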
150
+ We have
151
+ dβ = 1
152
+ y2dx ∧ dy ̸= 0.
153
+ Thus β is not closed which implies that F can not be a Berwald metric. We have
154
+ s_{11} = s_{22} = 0, \quad s_{12} = \frac{1}{y^2} = −s_{21}, \quad s_1 = −\frac{1}{3y}, \quad s_2 = \frac{1}{3y},   (1.4)
162
+ where s_{ij} and s_i are defined by (3.1) and (3.2). We claim that F does not have vanishing
+ S-curvature. On the contrary, suppose S = 0. Then by Lemma 3.2.1 in [5], we have
+ r_{ij} + b_i s_j + b_j s_i = 0. Contracting it with b^i yields
+ r_j + b^2 s_j = 0.
+ By considering r_i + s_i = 0 and b^2 = 2/3, we must have s_i = 0, which contradicts (1.4).
167
+ Then, the Randers metric (1.3) is a generalized Berwald metric with B ̸= 0, L ̸= 0, D ̸= 0,
168
+ S ̸= 0 and E ̸= 0. We claim that F is not R-quadratic. On the contrary, let the Finsler metric
169
+ F defined by (1.3) be R-quadratic. By Theorem 6.2.1 in [5], if the two-dimensional Randers
170
+ metric (1.3) is R-quadratic then it has constant S-curvature S = 3cF, for some real constant
171
+ c. In [23], Xu-Deng proved that every homogeneous Finsler metric of isotropic S-curvature has
172
+ vanishing S-curvature. Thus, we must have S = 0 which is a contradiction. Therefore, the
173
+ Randers metric (1.3) is not R-quadratic.
174
+ Theorem 1.1 may not be held for an (α, β)-metric of constant S-curvature. For example, at
175
+ a point x = (x1, x2) ∈ R2 and in the direction y = (y1, y2) ∈ TxR2, consider the Riemannian
176
+ metric α = α(x, y) and one form β = β(x, y) as follows
177
+ α(x, y) := \sqrt{(y^1)^2 + e^{2x^1}(y^2)^2}, \quad β(x, y) := y^1.   (1.5)
+ Then s_{ij} = 0 and r_{ij} = b^2 a_{ij} − b_i b_j, which yield r_i + s_i = 0. If φ = φ(s) satisfies
+ Φ = −6k \frac{φ∆^2}{b^2 − s^2},   (1.6)
186
+
187
189
+ for some constant k, then F = αφ(β/α) has constant S-curvature S = 3kF (see [6]). Here,
190
+ ∆ = ∆(b, s) and Φ = Φ(b, s) defined by (3.3) and (3.4), respectively. The existence of regular
191
+ solution of (1.6) for arbitrary k ∈ R, when α and β are given by (1.5), is proved in [10]. It is
192
+ easy to see that F is a generalized Berwald metric while it is not a Berwald metric.
193
+ We must mention that Theorem 1.1 does not hold for Finsler metrics of dim(M) ≥ 3.
194
+ Denote generic tangent vectors on S3 as u∂/∂x + v∂/∂y + w∂/∂z. The Finsler functions for
195
+ Bao-Shen’s Randers metrics F = α + β are given by the following
196
+ F = \frac{\sqrt{K(cu − zv + yw)^2 + (zu + cv − xw)^2 + (xv + cw − yu)^2}}{1 + x^2 + y^2 + z^2} ± \frac{\sqrt{K − 1}\,(cu − zv + yw)}{1 + x^2 + y^2 + z^2},
+ where K > 1 is a real constant. For these metrics, we have
+ b := ∥β∥_α = \sqrt{1 − \frac{1}{K}}.
210
+ One can see that, the one form β is not closed, and then F can not be a Douglas metric. This
211
+ family of Randers metrics is generalized Berwald metrics with S = 0 which are not Berwaldian.
212
+ In [4], it is proved that every 3-dimensional closed manifold admits a non-Riemannian
213
+ generalized Berwald metric. However, the closeness condition is very restrictive. Indeed, for
214
+ every manifold M with dim(M) ≥ 3, one can construct generalized Berwald (α, β)-metrics that
215
+ are not Berwaldian.
216
+ Example 2. The projective spherical metric on R3 is given by the following
217
+ α := \frac{\sqrt{(1 + ||X||^2)||Y||^2 − ⟨X, Y⟩^2}}{1 + ⟨X, X⟩}, \quad X ∈ R3, \; Y ∈ T_X R3,   (1.7)
+ where ⟨ , ⟩ and ||·|| denote the Euclidean inner product and norm on R3, respectively. Put
+ X = (x, y, z) and Y = (u, v, w). Suppose that β = κ(b_1 u + b_2 v + b_3 w) is a Killing 1-form of α,
+ where 0 < κ < 1. By a simple calculation, we get
+ b_1 = \frac{A^1_2 y + A^1_3 z + C^1}{1 + ⟨X, X⟩}, \quad b_2 = \frac{A^2_1 x + A^2_3 z + C^2}{1 + ⟨X, X⟩}, \quad b_3 = \frac{A^3_1 x + A^3_2 y + C^3}{1 + ⟨X, X⟩},   (1.8)
+ where A = (A^i_j) is an antisymmetric real matrix, and C = (C^i) is a constant vector in R3. Let
+ us put
+ C = (0, 1, 0), \quad A^1_2 = A^2_3 = 0, \quad A^1_3 = 1.   (1.9)
253
+ In this case, β is a Killing 1-form with ∥β∥_α = κ < 1 which is not closed. According to
254
+ Shen’s theorem in [12], a regular (α, β)-metric is a Berwald metric if and only if β is parallel
255
+ with respect to α. Using the Riemannian metric (1.7) and 1-form β satisfying (1.8) and (1.9),
256
+ one can construct generalized Berwald (α, β)-metrics which are not Berwaldian.
257
+ Homogeneous Finsler manifolds are those Finsler manifolds (M, F) that the orbit of the
258
+ natural action of I(M, F) on M at any point of M is the whole M. For the class of homogeneous
259
+ generalized Berwald (α, β)-metrics, we prove the following.
260
+ Corollary 1.1. Let F = αφ(s), s = β/α, be a two-dimensional homogeneous generalized
261
+ Berwald (α, β)-metric on a manifold M. Then F has isotropic S-curvature if and only if it is
262
+ a Riemannian metric of constant sectional curvature or a locally Minkowskian metric.
263
+
264
266
+ Let G be a connected Lie group with a bi-invariant Finsler metric ¯F, and H a closed
267
+ subgroup of G. Denote the Lie algebras of G and H as g and h, respectively. Let ρ be the natural
268
+ projection from G to M = G/H. Then there exists a uniquely defined G-invariant metric F
269
+ on M such that for any g ∈ G, the tangent map ρ_* : (T_g G, \bar{F}(g, ·)) → (T_{π(g)} M, F(π(g), ·)) is
278
+ a submersion (see Lemma 3.1 in [24]). The G-invariant Finsler metric F is called the normal
279
+ homogeneous metric induced by ¯F. The pair (M, F) is said the normal homogeneous space
280
+ induced by ρ : G → M = G/H and ¯F. For normal homogeneous generalized Berwald metrics,
281
+ we have the following.
282
+ Corollary 1.2. Every two-dimensional normal homogeneous generalized Berwald (α, β)-metric
283
+ is a Riemannian metric of non-negative constant sectional curvature or a locally Minkowskian
284
+ metric.
285
+ A Finsler metric F on a manifold M is called a generalized normal homogeneous metric
286
+ if for every x, y ∈ M, there exists a δ(x)-translation of (M, F) sending x to y (see [30]). In
287
+ this paper, we consider two-dimensional homogeneous generalized Berwald Randers metric of
288
+ generalized normal-type and prove the following.
289
+ Corollary 1.3. Every two-dimensional generalized normal homogeneous generalized Berwald
290
+ Randers metric must be a Riemannian metric of non-negative constant sectional curvature or
291
+ a locally Minkowskian metric.
292
+ Let (G, ·) be a connected Lie group with identity element e, and λg denotes the left trans-
293
+ lation by g ∈ G. A Finsler metric F on G is called a left invariant Finsler metric if it satisfies
294
+ F ◦ (λg)∗ = F for any g ∈ G. In [2], Aradi proved that left invariant Finsler metrics are
295
+ generalized Berwald metrics. For this class of Finsler metrics, we have the following.
296
+ Theorem 1.2. Every left invariant Finsler surface has vanishing S-curvature if and only if it
297
+ is a Riemannian metric of constant sectional curvature.
298
+ By considering Theorems 1.1 and 1.2, it seems that two-dimensional generalized Berwald
299
+ (α, β)-metrics with vanishing S-curvature may be only Riemannian. But we do not find a short
300
+ proof for this conjecture.
301
+ A Finsler metric F on a manifold M is called an isotropic Berwald metric, if its Berwald
302
+ curvature is given by
303
+ By(u, v, w) = cF −1�
304
+ h(u, v)
305
+
306
+ w − gy(w, ℓ)ℓ
307
+
308
+ + h(v, w)
309
+
310
+ u − gy(u, ℓ)ℓ
311
+
312
+ +h(w, u)
313
+
314
+ v − gy(v, ℓ)ℓ
315
+
316
+ + 2FCy(u, v, w)ℓ
317
+
318
+ .
319
+ (1.10)
320
+ where hy(u, v) = gy(u, v)−F −2(y)gy(y, u)gy(y, v) is the angular form in direction y, C denotes
321
+ the Cartan torsion of F and c ∈ C∞(M). Every Berwald metric is an isotropic Berwald metric
322
+ with c = 0. The Funk metrics are isotropic Berwald metrics with c = 1/2. As a straightforward
323
+ conclusion of Theorem 1.2, one can get the following.
324
+ Corollary 1.4. Every left invariant isotropic Berwald surface must be a Riemannian surface
325
+ of constant sectional curvature.
326
+
327
329
+ 2
330
+ Preliminaries
331
+ Let M be an n-dimensional C∞ manifold, TM = \bigcup_{x∈M} T_xM the tangent space and TM_0 :=
+ TM − {0} the slit tangent space of M. Let (M, F) be a Finsler manifold. The following
+ quadratic form g_y : T_xM × T_xM → R is called the fundamental tensor:
+ g_y(u, v) := \frac{1}{2} \frac{∂^2}{∂s∂t} \Big[ F^2(y + su + tv) \Big]_{s=t=0}, \quad u, v ∈ T_xM.
+ Let x ∈ M and F_x := F|_{T_xM}. To measure the non-Euclidean feature of F_x, one can define
+ C_y : T_xM × T_xM × T_xM → R by
+ C_y(u, v, w) := \frac{1}{2} \frac{d}{dt} \Big[ g_{y+tw}(u, v) \Big]_{t=0}, \quad u, v, w ∈ T_xM.
+ The family C := {C_y}_{y∈TM_0} is called the Cartan torsion.
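+ The definition of g_y can be made concrete with a finite-difference computation; the sketch below is our illustration (a Minkowski-Randers norm F(y) = |y| + ⟨b, y⟩ is chosen only as an assumed example). It approximates g_y(e_i, e_j) = ½ ∂²/∂s∂t F²(y + s e_i + t e_j)|_{s=t=0} and checks that the resulting matrix is symmetric and positive definite.
+ ```python
+ import numpy as np
+
+ def fundamental_tensor(F, y, h=1e-4):
+     """Central-difference approximation of g_y(e_i, e_j) = 1/2 d^2 F^2/ds dt."""
+     n = len(y)
+     F2 = lambda v: F(v) ** 2
+     g = np.zeros((n, n))
+     for i in range(n):
+         for j in range(n):
+             ei, ej = np.eye(n)[i], np.eye(n)[j]
+             g[i, j] = (F2(y + h*ei + h*ej) - F2(y + h*ei - h*ej)
+                        - F2(y - h*ei + h*ej) + F2(y - h*ei - h*ej)) / (8 * h * h)
+     return g
+
+ b = np.array([0.3, 0.1])                       # |b| < 1, so F is a Randers norm
+ F = lambda y: np.linalg.norm(y) + b @ y
+ g = fundamental_tensor(F, np.array([1.0, 2.0]))
+ print(g, np.linalg.eigvalsh(g))                # symmetric, positive definite
+ ```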
358
+ For a Finsler manifold (M, F), its induced spray on TM is denoted by G = G(x, y) which
359
+ in a standard coordinate (xi, yi) for TM0 is given by G = yi∂/∂xi − 2Gi(x, y)∂/∂yi, where
360
+ Gi := 1
361
+ 4gil� ∂2F 2
362
+ ∂xk∂yl yk − ∂F 2
363
+ ∂xl
364
+
365
+ .
366
+ For a Finsler metric F on an n-dimensional manifold M, the Busemann-Hausdorff volume form
367
+ dVF = σF(x)dx1 · · · dxn is defined by
368
+ σF(x) :=
369
+ Vol
370
+
371
+ Bn(1)
372
+
373
+ Vol
374
+
375
+ (yi) ∈ Rn �� F
376
+
377
+ yi ∂
378
+ ∂xi|x
379
+
380
+ < 1
381
+ �.
382
+ Let G^i denote the geodesic coefficients of F in the same local coordinate system. Then for
+ y = y^i ∂/∂x^i|_x ∈ T_xM, the S-curvature is defined by
+ S(y) := \frac{∂G^i}{∂y^i}(x, y) − y^i \frac{∂}{∂x^i} \Big[ \ln σ_F(x) \Big].   (2.1)
392
+ For a vector y ∈ TxM0, the Berwald curvature By : TxM × TxM × TxM → TxM is defined
393
+ by By(u, v, w) := B^i_{jkl}(y) uj vk wl ∂/∂xi|x, where B^i_{jkl} := ∂³Gi/(∂yj ∂yk ∂yl).
399
+ F is called a Berwald metric if B = 0. Every Berwald metric satisfies S = 0 (see [22]).
400
+ For y ∈ TxM, define the Landsberg curvature Ly : TxM × TxM × TxM → R by
401
+ Ly(u, v, w) := −(1/2) gy( By(u, v, w), y ).
407
+ A Finsler metric F is called a Landsberg metric if L = 0.
408
+ Taking a trace of the Berwald curvature B gives us the mean Berwald curvature E, which is
+ defined by Ey : TxM × TxM → R, where
+ Ey(u, v) := (1/2) Σ_{i,j=1}^{n} g^{ij}(y) gy( By(u, v, ∂i), ∂j ),   (2.2)
+ where {∂i} is a basis for TxM at x ∈ M. In local coordinates, Ey(u, v) := Eij(y) ui vj, where
+ Eij := (1/2) B^m_{mij}.
425
+
426
428
+ Taking a horizontal derivative of the mean Berwald curvature E along Finslerian geodesics
+ gives us the H-curvature H = H(x, y), which is defined by Hy = Hij dxi ⊗ dxj, where
430
+ Hij := Eij|mym.
431
+ Here, “|” denotes the horizontal covariant differentiation with respect to the Berwald connection
432
+ of F.
433
+ For a non-zero vector y ∈ TxM0, one can define Dy : TxM × TxM × TxM → TxM by
434
+ Dy(u, v, w) := D^i_{jkl}(y) uj vk wl ∂/∂xi|x, where
+ D^i_{jkl} := ∂³/(∂yj ∂yk ∂yl) [ Gi − ( 2/(n + 1) ) (∂Gm/∂ym) yi ].   (2.3)
449
+ D is called the Douglas curvature. F is called a Douglas metric if D = 0.
450
+ For a non-zero vector y ∈ TxM0, the Riemann curvature is a family of linear transformations
+ Ry : TxM → TxM, defined by Ry(u) := R^i_k(y) uk ∂/∂xi, where
+ R^i_k(y) = 2 ∂Gi/∂xk − ( ∂²Gi/(∂xj ∂yk) ) yj + 2Gj ∂²Gi/(∂yj ∂yk) − (∂Gi/∂yj)(∂Gj/∂yk).   (2.4)
463
+ The family R := {Ry}y∈TM0 is called the Riemann curvature.
464
+ For a flag P := span{y, u} ⊂ TxM with the flagpole y, the flag curvature K = K(P, y) is
465
+ defined by
466
+ K(x, y, P) := gy( u, Ry(u) ) / [ gy(y, y) gy(u, u) − gy(y, u)² ].   (2.5)
473
+ The flag curvature K(x, y, P) is a function of tangent planes P = span{y, v} ⊂ TxM. A Finsler
474
+ metric F is of scalar flag curvature if K(x, y, P) = K(x, y) is independent of P. Also, F is
475
+ called of isotropic and constant flag curvature if K = K(x) and K = constant, respectively.
476
+ Throughout this paper, we use the Berwald connection on Finsler manifolds. The pullback
477
+ bundle π∗TM admits a unique linear connection, called the Berwald connection. Let (M, F)
478
+ be an n-dimensional Finsler manifold. Let {ej} be a local frame for π∗TM, {ωi, ωn+i} be the
479
+ corresponding local coframe for T ∗(TM0) and {ωi
480
+ j} be the set of local Berwald connection forms
481
+ with respect to {ej}. Then the connection forms are characterized by the structure equations
482
+ as follows
483
+ • Torsion freeness:
484
+ dωi = ωj ∧ ω^i_j.   (2.6)
487
+ • Almost metric compatibility:
488
+ dgij − gkj ω^k_i − gik ω^k_j = −2Lijk ωk + 2Cijk ω^{n+k},   (2.7)
+ where ωi := dxi and ω^{n+k} := dyk + yj ω^k_j.
494
+ The horizontal and vertical covariant derivations with respect to the Berwald connection re-
495
+ spectively are denoted by “|” and “, ”. For more details, one can see [13].
496
+
497
499
+ 3
500
+ Proof of Theorems 1.1 and 1.2
501
+ For an (α, β)-metric, let us define bi;j by bi;j θj := dbi − bj θ^j_i, where θi := dxi and θ^j_i := γ^j_{ik} dxk
+ denote the Levi-Civita connection forms of α. Let
+ rij := (1/2)(bi;j + bj;i),   sij := (1/2)(bi;j − bj;i),   ri0 := rij yj,   r00 := rij yi yj,   rj := bi rij,   (3.1)
+ si0 := sij yj,   sj := bi sij,   s^i_j := a^{im} smj,   s^i_0 := s^i_j yj,   r0 := rj yj,   s0 := sj yj.   (3.2)
524
+ Put
525
+ Q := φ′/(φ − sφ′),   ∆ := 1 + sQ + (b² − s²)Q′,   Θ := (Q − sQ′)/(2∆),   (3.3)
+ Φ := −(Q − sQ′)(n∆ + 1 + sQ) − (b² − s²)(1 + sQ)Q′′,   (3.4)
+ Ψ := φ′′ / ( 2[ (φ − sφ′) + (b² − s²)φ′′ ] ),
540
+ where b := ∥β∥α. Let Gi = Gi(x, y) and Ḡi_α = Ḡi_α(x, y) denote the geodesic coefficients of F and α,
+ respectively, in the same coordinate system. By definition, we have
+ Gi = Gi_α + αQ s^i_0 + α^{-1}(r00 − 2Qα s0)(Θ yi + αΨ bi).   (3.5)
+ Clearly, if β is parallel with respect to α, that is, rij = sij = 0, then Gi = Gi_α = γ^i_{jk}(x) yj yk are
551
+ quadratic in y. In this case, F reduces to a Berwald metric.
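To make the quantities in (3.3)-(3.4) concrete, here is a small symbolic sketch (added for illustration, not taken from the original paper) that evaluates Q, ∆, Θ, Φ and Ψ for the Randers choice φ(s) = 1 + s; it uses the equivalent expression Ψ = Q′/(2∆), which agrees with the φ′′ formula above.

```python
import sympy as sp

s, b, n = sp.symbols('s b n', positive=True)
phi = 1 + s  # illustrative Randers case F = alpha + beta

Q     = sp.diff(phi, s) / (phi - s*sp.diff(phi, s))
Delta = 1 + s*Q + (b**2 - s**2)*sp.diff(Q, s)
Theta = (Q - s*sp.diff(Q, s)) / (2*Delta)
Phi   = -(Q - s*sp.diff(Q, s))*(n*Delta + 1 + s*Q) \
        - (b**2 - s**2)*(1 + s*Q)*sp.diff(Q, s, 2)
Psi   = sp.diff(Q, s) / (2*Delta)   # equals phi''/(2[(phi - s*phi') + (b^2 - s^2)*phi''])

print(sp.simplify(Q))      # 1
print(sp.simplify(Delta))  # s + 1
print(sp.simplify(Theta))  # 1/(2*(s + 1))
print(sp.simplify(Phi))    # -(n + 1)*(s + 1), possibly printed in expanded form
print(sp.simplify(Psi))    # 0
```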
552
+ For an (α, β)-metric F = αφ(s), s = β/α, on an n-dimensional manifold M, the S-curvature
553
+ is given by
554
+ S = [ 2Ψ − f′(b)/(b f(b)) ] (r0 + s0) − [ Φ/(2α∆²) ] (r00 − 2αQ s0),   (3.6)
+ where
+ f(b) := ∫₀^π sin^{n−2}(t) T(b cos t) dt / ∫₀^π sin^{n−2}(t) dt,   T(s) := φ (φ − sφ′)^{n−2} [ (φ − sφ′) + (b² − s²)φ′′ ].
573
+ For more details, see [6].
574
+ To prove Theorem 1.1, we need the following key lemma.
575
+ Lemma 3.1. ([4][20]) Let F = αφ(s), s = β/α, be an (α, β)-metric on a manifold M. Then
576
+ F is a generalized Berwald metric if and only if β has constant length with respect to α.
577
+ Proof. In [20], Tayebi-Barzegari showed that every (α, β)-metric F = αφ(s), s = β/α, with
578
+ sign property is a generalized Berwald metric if and only if the dual vector field β♯ is of
579
+ constant Riemannian length. In [9], Ivanov proved that a two-dimensional Finsler metric is a
580
+ generalized Berwald metric if and only if it is monochromatic, i.e., such that any two of its tangent
+ spaces are isomorphic as normed spaces. In [4], Bartelmeß-Matveev extended
+ this result to n-dimensional Finsler metrics. It follows that an (α, β)-metric is a generalized
+ Berwald metric if and only if its dual vector field β♯ is of constant Riemannian length.
584
+ In [6], Cheng-Shen characterized (α, β)-metrics with isotropic S-curvature on a manifold M
585
+ of dimension n ≥ 3. They later found that their result holds only for the class of (α, β)-metrics
+ whose one-forms have constant length. Here, we give a characterization of the class of generalized
587
+ Berwald metrics with vanishing S-curvature.
588
+
589
591
+ Lemma 3.2. Let F = αφ(s), s = β/α, be a generalized Berwald (α, β)-metric on an n-
592
+ dimensional manifold M. Then S = 0 if and only if F is Riemannian or β is a Killing form
593
+ with constant length.
594
+ Proof. Let b := ∥β∥α = √( a^{mj} bj bm ) = √( bm b^m ). Then the following holds:
+ ∂b/∂xi = (1/b) b^m b_{m|i} = (1/b)(ri + si).   (3.7)
602
+ By Lemma 3.1, we have b = constant. Then, by (3.7) we obtain ri + si = 0. In this case, (3.6)
603
+ reduces to the following:
+ S = − [ Φ/(2α∆²) ] (r00 − 2αQ s0).   (3.8)
608
+ By (3.8), S = 0 if and only if Φ = 0 or β satisfies
609
+ r00 = 2αQs0.
610
+ (3.9)
611
+ In the case of Φ = 0, F reduces to a Riemannian metric (see Proposition 2.2 in [11]). Now,
612
+ suppose that (3.9) holds. We are going to simplify (3.9). To this aim, one can change the y-coordinates
+ (yi), i = 1, · · · , n, at a point to the polar coordinates (s, uA), where A = 2, · · · , n (see [6]).
+ For an arbitrary fixed point x ∈ M, let us take an orthonormal basis ei at x such that the
+ Riemannian metric is written as α = √( Σ_{i=1}^{n} (yi)² ) and its related one-form is given by β = b y¹,
618
+ where b := ||β||α. Let us fix an arbitrary number s such that |s| < b. Define
619
+ ᾱ = √( Σ_{A=2}^{n} (yA)² ).
627
+ (yA)2.
628
+ Then, by β = sα we get
629
+ y1 =
630
+ s
631
+
632
+ b2 − s2 ¯α,
633
+ yA = uA.
634
+ (3.10)
635
+ Also, we have
636
+ α =
637
+ b
638
+
639
+ b2 − s2 ¯α,
640
+ β =
641
+ bs
642
+
643
+ b2 − s2 ¯α.
644
+ (3.11)
645
+ Let us put
646
+ ¯r10 :=
647
+ n
648
+
649
+ A=2
650
+ r1AyA,
651
+ ¯s10 :=
652
+ n
653
+
654
+ A=2
655
+ s1AyA,
656
+ ¯r00 :=
657
+ n
658
+
659
+ A,B=2
660
+ rAByAyB,
661
+ ¯r0 :=
662
+ n
663
+
664
+ A=2
665
+ rAyA,
666
+ ¯s0 :=
667
+ n
668
+
669
+ A=2
670
+ sAyA.
671
+ Then we get the following useful relations
672
+ r1 = br11,
673
+ rA = br1A,
674
+ s1 = 0,
675
+ sA = bs1A,
676
+ (3.12)
677
+ r00 =
678
+ s2
679
+ b2 − s2 ¯α2r11 +
680
+ 2s
681
+
682
+ b2 − s2 ¯α¯r10 + ¯r00,
683
+ (3.13)
684
+ r10 =
685
+ s
686
+
687
+ b2 − s2 ¯αr11 + ¯r10,
688
+ s0 = ¯s0 = b¯s10.
689
+ (3.14)
690
+ Using (3.11)-(3.14), the equation (3.9) can be written as follows
691
+ ¯r00 +
692
+ s2
693
+ b2 − s2 ¯α2 r11 =
694
+ 2
695
+
696
+ b2 − s2
697
+
698
+ b2Q¯s10 − s¯r10
699
+
700
+ ¯α.
701
+ (3.15)
702
+
703
+ On a Class of Generalized Berwald Manifolds
704
+ 10
705
+ By (3.15), we get two following relations
706
+ ¯r00 +
707
+ s2
708
+ b2 − s2 ¯α2r11 = 0,
709
+ (3.16)
710
+ b2Q¯s10 − s¯r10 = 0.
711
+ (3.17)
712
+ On the other hand, (3.11) implies that
713
+ s2
714
+ b2 − s2 ¯α2 − 1
715
+ b2β2 = 0.
716
+ (3.18)
717
+ By (3.16) and (3.18), we get
718
+ b2¯r00 + β2r11 = 0.
719
+ (3.19)
720
+ The following hold
721
+ ∂¯r00
722
+ ∂y1 = 0,
723
+ ∂β
724
+ ∂y1 = b.
725
+ Then by differentiating (3.19) with respect to y1 we have β/br11 = 0. Thus r11 = 0 and by
726
+ putting it in (3.19), we get ¯r00 = 0. Putting these relations in (3.13) and (3.14) imply that
727
+ r00 =
728
+ 2s¯α
729
+
730
+ b2 − s2 ¯r10 = 2β
731
+ b ¯r10,
732
+ (3.20)
733
+ r10 = ¯r10.
734
+ (3.21)
735
+ Now, by considering (3.20) and (3.21), we divide the problem into two cases: (a) ¯r10 = 0 and
736
+ (b) ¯r10 ̸= 0.
737
+ Case (a):
738
+ ¯r10 = 0.
739
+ In this case, by (3.20) we get rij = 0.
740
+ Putting it in (3.9) implies
741
+ that si = 0. In this case, β reduces to a Killing one-form of constant length with respect to α.
742
+ Case (b): ¯r10 ̸= 0. We have ∂¯r10/∂y1 = 0 and ∂¯s10/∂y1 = 0. Thus, differentiating (3.17) with
743
+ respect to y1 yields
744
+ (s)y1¯r10 = b2(Q)y1¯s10.
745
+ (3.22)
746
+ Contracting (3.17) with (s)y1 give us
747
+ s(s)y1¯r10 = b2Q(s)y1¯s10.
748
+ (3.23)
749
+ By (3.22) and (3.23), we get
750
+
751
+ (s)y1Q − s(Q)y1
752
+
753
+ ¯s10 = 0.
754
+ (3.24)
755
+ According to (3.24), we get ¯s10 = 0 or Q(s)y1 = s(Q)y1. Let ¯s10 = 0 holds. Then (3.17) reduces
756
+ to s = 0, which is impossible. Then, we have
757
+ Q(s)y1 = s(Q)y1,
758
+ which is equal to
759
+ (Q)y1
760
+ Q
761
+ = (s)y1
762
+ s
763
+ .
764
+ (3.25)
765
+
766
+ On a Class of Generalized Berwald Manifolds
767
+ 11
768
+ Using sy1 ̸= 0 and (Q)y1 = sy1(Q)s, then (3.25) give us
769
+ (Q)s
770
+ Q
771
+ = 1
772
+ s
773
+ which yields
774
+ ln(Q) − ln(s) = c,
775
+ where c is a real constant. Thus Q = ks, where k is a non-zero real constant. In this case, we
776
+ get φ =
777
+
778
+ 1 + ks2 which shows that F is a Riemannian metric. This completes the proof.
779
+ Here, we solve an ODE which will appear in the proof of Theorem 1.1.
780
+ Lemma 3.3. Let F = αφ(s), s = β/α, be an (α, β)-metric on a manifold M. Suppose that φ
781
+ satisfies following
782
+ αΘ2 + 2Λ1Θ1 + Λ2Q = 0,
783
+ (3.26)
784
+ where
785
+ Λ1 := biαyi,
786
+ Λ2 := bibjαyiyj,
787
+ Θ1 := biQyi,
788
+ Θ2 := bibjQyiyj.
789
+ Then F is a singular Finsler metric given by
790
+ φ = c exp( ∫₀^s [ k1 t + k2 √(b² − t²) ] / [ 1 + t( k1 t + k2 √(b² − t²) ) ] dt ),   (3.27)
804
+ where k1 and k2 are real constants and c > 0 is a constant.
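For readers who want to see what (3.27) looks like numerically, the following is a minimal quadrature sketch (added for illustration; the parameter values k1, k2, b and c are arbitrary choices, not taken from the paper).

```python
import numpy as np
from scipy.integrate import quad

def phi(s, b=1.0, k1=0.1, k2=0.5, c=1.0):
    """Evaluate phi(s) of Eq. (3.27) by numerical quadrature, for |s| < b."""
    def integrand(t):
        num = k1 * t + k2 * np.sqrt(b**2 - t**2)
        return num / (1.0 + t * num)
    val, _ = quad(integrand, 0.0, s)
    return c * np.exp(val)

print(phi(0.5))   # phi at s = 0.5 for the illustrative parameters
```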
805
+ Proof. Let us put
806
+ Ajk := α2ajk − yjyk.
807
+ Then, the followings hold
808
+ αyi = 1
809
+ αyi,
810
+ αyjyk = 1
811
+ α3Ajk,
812
+ αyjykyl = − 1
813
+ α5
814
+
815
+ Ajkyl + Ajlyk + Alkyj
816
+
817
+ .
818
+ Also, one can obtain the following
819
+ Λ1 = s,
820
+ (3.28)
821
+ Λ2 = 1
822
+ α(b2 − s2),
823
+ (3.29)
824
+ Θ1 = 1
825
+ α(b2 − s2)Q′,
826
+ (3.30)
827
+ Θ2 = 1
828
+ α2(b2 − s2)
829
+
830
+ (b2 − s2)Q′′ − 3sQ′�
831
+ .
832
+ (3.31)
833
+ Suppose that (3.26) holds. Putting (3.28)-(3.31) into (3.26) yield
834
+ Q′′ −
835
+ s
836
+ b2 − s2Q′ +
837
+ 1
838
+ b2 − s2Q = 0,
839
+
840
842
+ which implies that
843
+ Q = k1 s + k2 √(b² − s²),   (3.32)
847
+ where k1 and k2 are real constants. Considering (3.3), the equation (3.32) is equal to following
848
+
849
+ ( 1 + k1 s² + k2 s √(b² − s²) ) φ′ = ( k1 s + k2 √(b² − s²) ) φ.   (3.33)
861
+ By (3.33), we get (3.27). It is an almost regular (α, β)-metric, namely, it is singular in two
862
+ directions y = (±b, 0, 0) ∈ TxM at any point x (for more details, see [12]).
863
+ The function φ in (3.27) first appeared in Asanov’s paper [3]
864
+ as follows
865
+ φ = c exp( ∫₀^s k √(b² − t²) / [ 1 + t k √(b² − t²) ] dt ).   (3.34)
877
+ Its more general form (3.27) was then found by Shen in [12], where he looked for unicorn metrics,
878
+ namely the Landsberg metrics which are not Berwaldian.
879
+ Shen realized that the function
880
+ φ = φ(b, s) in (3.27) can be expressed in terms of elementary functions. See (7.2) in [12].
881
+ Here, we show that the Douglas curvature of Finsler surfaces satisfies a special relation.
882
+ More precisely, we prove the following.
883
+ Lemma 3.4. The Douglas curvature of any Finsler surface (M, F) satisfies
884
+ Di
885
+ jkl|sys = Djklyi
886
+ (3.35)
887
+ for some tensor D = Dijkdxi ⊗ dxj ⊗ dxk which are homogeneous of degree -1 in y.
888
+ Proof. By definition, the Douglas curvature of a two-dimensional Finsler metric is given by
889
+ Di
890
+ jkl = Bi
891
+ jkl − 2
892
+ 3
893
+
894
+ Ejkδi
895
+ l + Eklδi
896
+ j + Eljδi
897
+ k + Ejk,lyi�
898
+ .
899
+ (3.36)
900
+ Taking a horizontal derivation of (3.36) along Finslerian geodesics and using yi
901
+ |s = 0 give us
902
+ the following
903
+ Di
904
+ jkl|mym = Bi
905
+ jkl|mym − 2
906
+ 3
907
+
908
+ Hjkδi
909
+ l + Hklδi
910
+ j + Hljδi
911
+ k + Ejk,l|mymyi�
912
+ .
913
+ (3.37)
914
+ Every Finsler surface is of scalar flag curvature K = K(x, y). Then, by (11.24) in [13] we have
915
+ Bi
916
+ jml|kyk = 2KCjlmyi − 1
917
+ 3
918
+
919
+ ylδi
920
+ m + ymδi
921
+ l − 2glmyi�
922
+ Kj − 1
923
+ 3
924
+
925
+ yjδi
926
+ m + ymδi
927
+ j − 2gjmyi�
928
+ Kl
929
+ −1
930
+ 3
931
+
932
+ yjδi
933
+ l + ylδi
934
+ j − 2gjlyi�
935
+ Km − 1
936
+ 3F 2�
937
+ Kjmhi
938
+ l + Kjlhi
939
+ m + Klmhi
940
+ j
941
+
942
+ , (3.38)
943
+ where yi = FFyi, Kj := Kyj and Kjk := Kyjyk. Taking a trace of (3.38) yields
944
+ Hjl = −1
945
+ 2
946
+
947
+ ylKj + yjKl + F 2Kjl
948
+
949
+ .
950
+ (3.39)
951
+ By putting (3.38) and (3.39) in (3.37) we get
952
+ Di
953
+ jkl|mym = 1
954
+ 3
955
+
956
+ 6KCjkl + 2
957
+
958
+ gklKj + gkjKl + gjlKk
959
+
960
+ +
961
+
962
+ yjKkl + ykKjl + ylKkj
963
+
964
+ −2Ejk,l|mym�
965
+ yi.(3.40)
966
+ (3.40) give us (3.35).
967
+
968
970
+ Now, we show that the covariant derivative of Berwald curvature of 2-dimensional gener-
971
+ alized Berwald (α, β)-metric with vanishing S-curvature satisfies an interesting relation which
972
+ will play an important role in the proof of Theorem 1.1.
973
+ Lemma 3.5. The Berwald curvature of any non-Riemannian 2-dimensional generalized Berwald
974
+ (α, β)-metric with vanishing S-curvature satisfies following
975
+ hm
976
+ p Bp
977
+ jkl|sys =
978
+ sm
979
+ 0|0(αjklQ + αjkQl + αlkQj + αljQk + αQjkl + αlQjk + αjQlk + αkQjl)
980
+ +sm
981
+ 0(αjklQ|0 + αjkQl|0 + αlkQj|0 + αljQk|0 + αQjkl|0 + αlQjk|0 + αjQlk|0
982
+ +αkQjl|0) + Am
983
+ lXjk + Am
984
+ jXlk + Am
985
+ kXjl + Bm
986
+ l Yjk + Bm
987
+ jYlk + Bm
988
+ kYjl = 0, (3.41)
989
+ where
990
+ Xjk := Q|0αjk + Qk|0αj + Qj|0αk + Qjk|0α,
991
+ Yjk := Qαjk + Qkαj + Qjαk + αQjk,
992
+ Am
993
+ l := sm
994
+ l − F −2s0
995
+ lym,
996
+ Bm
997
+ l := Am
998
+ l|sys = sm
999
+ l|0 − F −2s0
1000
+ l|0ym.
1001
+ Proof. By Lemma 3.2, we have rij = 0 and sj = 0. In this case, (3.5) reduces to following
1002
+ Gi = Gi
1003
+ α + αQsi
1004
+ 0.
1005
+ (3.42)
1006
+ Taking three vertical derivations of (3.42) with respect to yj, yl and yk give us
1007
+ Bi
1008
+ jkl =
1009
+ si
1010
+ 0
1011
+
1012
+ αQjkl + αlQjk + αjQlk + αkQjl + αjklQ + αjkQl + αlkQj + αljQk
1013
+
1014
+ + si
1015
+ j
1016
+
1017
+ Qαlk + Qkαl + Qlαk + αQlk
1018
+
1019
+ + si
1020
+ k
1021
+
1022
+ Qαjl + Qjαl + Qlαj + αQjl
1023
+
1024
+ +si
1025
+ l
1026
+
1027
+ Qαjk + Qkαj + Qjαk + αQjk
1028
+
1029
+ .
1030
+ (3.43)
1031
+ Contracting (3.43) with hm
1032
+ i implies that
1033
+ hm
1034
+ i Bi
1035
+ jkl =
1036
+
1037
+ αjklQ + αjkQl + αlkQj + αljQk + αQjkl + αlQjk + αjQlk + αkQjl
1038
+
1039
+ sm
1040
+ 0
1041
+ +
1042
+
1043
+ Qαjk + Qkαj + Qjαk + αQjk
1044
+
1045
+ (sm
1046
+ l − F −2s0
1047
+ lym)
1048
+ +
1049
+
1050
+ Qαlk + Qkαl + Qlαk + αQlk
1051
+
1052
+ (sm
1053
+ j − F −2s0
1054
+ jym)
1055
+ +
1056
+
1057
+ Qαjl + Qjαl + Qlαj + αQjl
1058
+
1059
+ (sm
1060
+ k − F −2s0
1061
+ kym).
1062
+ (3.44)
1063
+ On the other hand, by taking a horizontal derivation of Douglas curvature along Finslerian
1064
+ geodesics and contracting the result with hm
1065
+ i , we get the following
1066
+ hm
1067
+ i Di
1068
+ jkl|sys = hm
1069
+ i Bi
1070
+ jkl|sys − 2
1071
+ 3
1072
+
1073
+ Hjkhm
1074
+ l + Hklhm
1075
+ j + Hljhm
1076
+ k
1077
+
1078
+ .
1079
+ (3.45)
1080
+ Contracting (3.35) with hm
1081
+ i yields
1082
+ hm
1083
+ i Di
1084
+ jkl|sys = 0.
1085
+ (3.46)
1086
+ Since S = 0, then by definition we get H = 0. Thus, (3.45) and (3.46) imply that
1087
+ hm
1088
+ i Bi
1089
+ jkl|sys = 0.
1090
+ (3.47)
1091
+ We have hm
1092
+ i|s = 0. Then, by considering (3.47), we have
1093
+
1094
+ hm
1095
+ i Bi
1096
+ jkl
1097
+
1098
+ |sys = hm
1099
+ i Bi
1100
+ jkl|sys = 0.
1101
+ (3.48)
1102
+ Therefore, taking a horizontal derivation of (3.44) along Finslerian geodesic and considering
1103
+ (3.48) give us (3.41).
1104
+
1105
1107
+ Proof of Theorem 1.1: Taking a horizontal derivation of ymsm
1108
+ 0 = 0 with respect to the
1109
+ Berwald connection of F implies that
1110
+ ym|0sm
1111
+ 0 + ymsm
1112
+ 0|0 = 0.
1113
+ (3.49)
1114
+ Since ym|0 = 0, then (3.49) reduces to following
1115
+ ymsm
1116
+ 0|0 = 0.
1117
+ (3.50)
1118
+ By contracting (3.41) with ym and considering (3.50), we get
1119
+ s0
1120
+ lXjk + s0
1121
+ jXlk + s0
1122
+ kXjl + s0
1123
+ l|0Yjk + s0
1124
+ j|0Ylk + s0
1125
+ k|0Yjl = 0.
1126
+ (3.51)
1127
+ Since sj = 0, then one can get
1128
+ 0 = (sj)|0 = (rm0 + sm0)sm
1129
+ j + bmsm
1130
+ j|0
1131
+ which considering rij = 0, it reduces to following
1132
+ bmsm
1133
+ j|0 = −sm0sm
1134
+ j.
1135
+ (3.52)
1136
+ Multiplying (3.52) with yj yields
1137
+ bisi
1138
+ 0|0 + si0si
1139
+ 0 = 0.
1140
+ (3.53)
1141
+ Also, contracting (3.52) with bj implies that
1142
+ bisi
1143
+ j|0bj = 0.
1144
+ (3.54)
1145
+ Multiplying (3.51) with bjbkbl and considering (3.53) and (3.54) give us
1146
+ (αΘ2 + 2Λ1Θ1 + Λ2Q)sm0sm
1147
+ 0 = 0.
1148
+ (3.55)
1149
+ By (3.55), we have two cases: if sm
1150
+ 0sm0 = 0, since α is a positive-definite metric, then we
1151
+ find that β is closed. Therefore, by (3.43) we conclude that F reduces to a Berwald metric.
1152
+ By Szabo’s rigidity result for Finsler surfaces, F reduces to a locally Minkowskian metric
1153
+ or a Riemannian metric. On the other hand, every Finsler surface has scalar flag curvature
1154
+ K = K(x, y). According to Akbar-Zadeh theorem in [1], a Finsler manifold (M, F) of scalar
1155
+ flag curvature K = K(x, y) has isotropic flag curvature K = K(x) if and only if it has vanishing
1156
+ H-curvature H = 0. Thus, the obtained Riemannian metric has isotropic sectional curvature.
1157
+ Now, suppose that F is not a Riemannian metric nor a locally Minkowskian metric. Then,
1158
+ by (3.55) we have αΘ2 + 2Λ1Θ1 + Λ2Q = 0. By Lemma 3.3 we obtain (1.1). In this case, since
1159
+ S = 0, then F can not be a Douglas metric. On the other hand, the Berwald curvature of
1160
+ 2-dimensional Finsler manifold is given by
1161
+ Bi
1162
+ jkl = − 2
1163
+ F 2Ljklyi + 2
1164
+ 3
1165
+
1166
+ Ejkhi
1167
+ l + Eklhi
1168
+ j + Ejlhi
1169
+ k
1170
+
1171
+ .
1172
+ (3.56)
1173
+ See the relation (15) in [21]. If F is a Landsberg metric then by considering S = 0, (3.56) implies
1174
+ that F is a Berwald metric. This is a contradiction. Then, (1.1) is a generalized Berwald metric
1175
+ which is not Berwald, Landsberg nor Douglas metric.
1176
+ Proof of Corollary 1.1: In [23], Xu-Deng proved that every homogeneous Finsler metric
1177
+ of isotropic S-curvature has vanishing S-curvature. Then, by assumption we get S = 0. The
1178
+
1179
1181
+ Akbar-Zadeh theorem in [1] stated that a Finsler manifold (M, F) of scalar flag curvature
1182
+ K = K(x, y) has isotropic flag curvature K = K(x) if and only if it has vanishing H-curvature
1183
+ H = 0. On the other hand, every Finsler surface has scalar flag curvature K = K(x, y). Thus,
1184
+ by Akbar-Zadeh theorem we get K = K(x). Every scalar function on M which is invariant
1185
+ under isometries of (M, F) is a constant function. The homogeneity of (M, F) and invariancy
1186
+ of the flag curvature under isometries of F imply that K = constant. Then, by Theorem 1.1
1187
+ we get the proof.
1188
+ Proof of Corollary 1.2: Let F = αφ(s), s = β/α, be a two-dimensional normal homogeneous
1189
+ generalized Berwald (α, β)-metric. In [24], Xu-Deng proved that every normal homogeneous
1190
+ manifold has vanishing S-curvature and non-negative flag curvature. By Theorem 1.1 and the
1191
+ same method used in the proof of Corollary 1.1, it follows that F is a Riemannian metric of
1192
+ non-negative constant sectional curvature or a locally Minkowskian metric.
1193
+ Proof of Corollary 1.3: Let (G/H, F) be a generalized normal homogeneous Randers man-
1194
+ ifold. In [30], Zhang-Deng proved that F has vanishing S-curvature (Corollary 3.11). Also,
1195
+ they showed that any generalized normal homogeneous Randers metric has non-negative flag
1196
+ curvature (see Proposition 3.13 in [30]). Then, by Theorem 1.1 we get the proof.
1197
+ According to Theorem 1.1, every two-dimensional generalized Berwald (α, β)-metric with
1198
+ vanishing S-curvature is Riemannian or locally Minkowskian. Every left invariant Finsler metric
1199
+ is a generalized Berwald metric [2]. Here, we prove Theorem 1.2 which states that left invariant
1200
+ Finsler metrics with vanishing S-curvature reduce to Riemannian metrics, only. The approach
1201
+ of the proof of Theorem 1.2 is completely different from Theorem 1.1.
1202
+ Proof of Theorem 1.2: To prove a homogeneous surface with S = 0 is Riemannian, we only
1203
+ need to consider the nontrivial case, namely, a 2-dimensional non-Abelian Lie group G with a
1204
+ left invariant Finsler metric F. At each y ∈ g with F(y) = 1, there is a gy orthonormal basis
1205
+ e1 = y and e2 tangent to F = 1. At almost all non-zero y, the spray vector field η is nonzero,
1206
+ i.e., η(y) is a nonzero multiple of e2. By the homogenous S-curvature formula, we have
1207
+ S(y) = −I(η) = −Cy(η, e2, e2) = 0.
1208
+ Then, Cy(e2, e2, e2) = 0 everywhere. The speciality of 2-dimensional spaces implies C = 0
1209
+ everywhere, so F is Riemannian. By the same method used to prove Corollary 1.1, one can
1210
+ conclude that the Riemannian metric is of constant sectional curvature. This completes the
1211
+ proof.
1212
+ Proof of Corollary 1.4: By assumption, F has isotropic Berwald curvature
1213
+ B^i_{jkl} = cF^{-1}( hjk h^i_l + hjl h^i_k + hkl h^i_j + 2Cjkl yi ).   (3.57)
1221
+ where c = c(x) is a scalar function on M. In [22], it is proved that every Finsler surface of
1222
+ isotropic Berwald curvature (3.57) has isotropic S-curvature S = 3cF. By Xu-Deng’s
1223
+ result in [24], F has vanishing S-curvature S = 0. Then, by Theorem 1.2, F reduces to a
1224
+ Riemannian metric.
1225
+
1226
1228
+ 4
1229
+ Some Examples of Generalized Berwald Manifolds
1230
+ In this section, we are going to give some important examples of the class of generalized Berwald
1231
+ manifolds. First, by using trans-Sasakian structure, we construct a family of odd-dimensional
1232
+ generalized Berwald Randers metrics.
1233
+ Example 3.
1234
+
1235
+ Odd-dimensional generalized Berwald Randers metrics
1236
+
1237
+ Let M be a differen-
1238
+ tiable manifold of dimension 2n + 1.
1239
+ Suppose that η = ηi(x)dxi, ξ = ξi∂/∂xi and ϕ =
1240
+ ϕi
1241
+ j∂/∂xi ⊗ dxj are a 1-form, a vector field, and a (1, 1)-tensor on M, respectively. The triple
1242
+ (η, ξ, ϕ) is called an almost contact structure on M if it satisfies
1243
+ ϕ(ξ) = 0,
1244
+ η(ξ) = 1,
1245
+ ϕ2 = −I + η ⊗ ξ.
1246
+ A differentiable manifold of odd dimension 2n+ 1 with an almost contact structure is called an
1247
+ almost contact manifold. Let a manifold M with the (η, ξ, ϕ) structure admits a Riemannian
1248
+ metric g such that
1249
+ g(ϕX, ϕY ) = g(X, Y ) − η(X)η(Y ).
1250
+ Then M is called an almost contact metric structure and g is called a compatible metric. In this
1251
+ case, (η, ξ, ϕ, g) is called almost contact metric structure. An almost contact metric structure
1252
+ (η, ξ, ϕ, g) on M is called a trans-Sasakian structure if it satisfies
1253
+ (∇Xϕ)Y = c1
1254
+
1255
+ g(X, Y )ξ − η(Y )X
1256
+
1257
+ + c2
1258
+
1259
+ g(ϕX, Y )ξ − η(Y )ϕX
1260
+
1261
+ for some scalar functions c1 = c1(x) and c2 = c2(x) on M, where ∇ denotes the Levi-Civita
1262
+ connection of g.
1263
+ Now, let (η, ξ, ϕ, g) be a trans-Sasakian structure on M. Define α, β : TM → [0, ∞) by
1264
+ ∀(x, y) ∈ TM,
1265
+ α(x, y) :=
1266
+
1267
+ gx(y, y),
1268
+ β(x, y) := ǫ ηx(y),
1269
+ (4.1)
1270
+ where 0 < ǫ < 1 is a constant. Since η(X) = g(X, ξ) and g(ξ, ξ) = 1, the one-form η has unit g-length; hence, for the Randers metric F := α + β, we have
1271
+ ||β||α = ǫ.
1272
+ It follows that the class of Randers metrics induced by trans-Sasakian manifolds (M, η, ξ, ϕ, g)
1273
+ are (2n + 1)-dimensional generalized Berwald metrics on M.
1274
+ Here, we give a two-dimensional Randers metric F = α +β with vanishing S-curvature. We
1275
+ show that if F is a generalized Berwald metric then it reduces to a Riemannian metric.
1276
+ Example 4. Let y = u∂/∂x + v∂/∂y ∈ T(x,y)R2. Consider the Randers metric F = α + β,
1277
+ where α = α(y) and β = β(y) are given by
1278
+ α :=
1279
+ ��
1280
+ 1 + (1 − ǫ2)(x2 + y2)
1281
+
1282
+ (u2 + v2) +
1283
+
1284
+ 1 + ǫ2 + x2 + y2
1285
+
1286
+ (xv − yu)2
1287
+
1288
+ 1 + (1 − ǫ2)(x2 + y2)
1289
+ ��
1290
+ 1 + x2 + y2
1291
+ ,
1292
+ β := −
1293
+ ǫ(xv − yu)
1294
+ 1 + (1 − ǫ2)(x2 + y2),
1295
+
1296
1298
+ and ǫ is a real constant (see [15]). F is defined on the whole sphere for |ǫ| < 1. It is remarkable
1299
+ that, one can rewrite F in a polar coordinate system, x = r cos(θ), y = r sin(θ). Express
1300
+ α =
1301
+
1302
+ a11µ2 + a12µν + a21νµ + a22ν2,
1303
+ β = b1µ + b2ν,
1304
+ where
1305
+ a11 =
1306
+ 1
1307
+ (1 + r2)
1308
+
1309
+ 1 + (1 − ǫ2)r2�,
1310
+ a12 = a21 = 0,
1311
+ a22 =
1312
+ r2(1 + r2)
1313
+
1314
+ 1 + (1 − ǫ2)r2�2,
1315
+ b1 = 0,
1316
+ b2 = −
1317
+ ǫr2
1318
+ 1 + (1 − ǫ2)r2.
1319
+ Let us put A := det(aij). Then, we get
1320
+ a11 =
1321
+ r2(1 + r2)
1322
+ A
1323
+
1324
+ 1 + (1 − ǫ2)r2�2,
1325
+ a22 =
1326
+ 1
1327
+ A(1 + r2)
1328
+
1329
+ 1 + (1 − ǫ2)r2�,
1330
+ a12 = a21 = 0,
1331
+ b1 = 0,
1332
+ b2 = −
1333
+ ǫr2
1334
+ A(1 + r2)
1335
+
1336
+ 1 + (1 − ǫ2)r2�2.
1337
+ Therefore, we obtain
1338
+ ∥β∥²_α = a^{ij} bi bj = bi b^i = ǫ² r⁴ / [ A (1 + r²) ( 1 + (1 − ǫ²) r² )³ ] ̸= constant.   (4.2)
1345
+ (4.2) means that F is not a generalized Berwald metric. On the other hand, a direct computa-
1346
+ tion yields
1347
+ r11 = r22 = 0,
1348
+ r12 = r21 =
1349
+ ǫ3r3
1350
+ (1 + r2)
1351
+
1352
+ 1 + (1 − ǫ2)r2�2,
1353
+ s11 = s22 = 0,
1354
+ s12 =
1355
+ ǫr
1356
+
1357
+ 1 + (1 − ǫ2)r2�2 = −s21
1358
+ s1 =
1359
+ ǫ2r
1360
+ (1 + r2)
1361
+
1362
+ 1 + (1 − ǫ2)r2��,
1363
+ s2 = 0,
1364
+ r1 = −
1365
+ ǫ4r5
1366
+ A(1 + r2)2�
1367
+ 1 + (1 − ǫ2)r2�4,
1368
+ r2 = 0.
1369
+ It is easy to find that rij + bisj + bjsi = 0. Then by Lemma 3.2.1 in [5], we get S = 0. Also,
1370
+ one can see that the following holds
1371
+ ri + si = ǫ2r
1372
+
1373
+ A(1 + r2)(1 + (1 − ǫ2)r2)3 − ǫ2r4�
1374
+ A(1 + r2)2�
1375
+ 1 + (1 − ǫ2)r2�4
1376
+ .
1377
+ (4.3)
1378
+ According to (4.3), F is a generalized Berwald metric (equivalently, ri + si = 0) if and only if
1379
+ ǫ = 0 or the following holds
1380
+
1381
+ 1 + (1 − ǫ2)r2�4 = 0.
1382
+ (4.4)
1383
+ (4.4) contradicts with |ǫ| < 1. Therefore, F is a generalized Berwald metric if and only if ǫ = 0
1384
+ or equivalently β = 0. In this case, F reduces to the standard Riemannian metric F = α.
1385
+
1386
1388
+ Example 5. (Xu) It is proved that a Finsler metric F = F(x, y) is of Randers type F = α+β if
1389
+ and only if it is a solution of the navigation problem on a Riemannian manifold (M, h) (see [5]).
1390
+ Zermelo navigation is an efficient method to study of Randers metrics with certain Riemannian
1391
+ and non-Riemannian curvature properties. More precisely, any Randers metric F = α + β on
1392
+ a manifold M is a solution of the following Zermelo navigation problem
1393
+ h
1394
+
1395
+ x, y
1396
+ F − Wx
1397
+
1398
+ = 1,
1399
+ where h =
1400
+
1401
+ hij(x)yiyj is a Riemannian metric and W = Wi(x)∂/∂xi is a vector field such
1402
+ that
1403
+ h(x, −Wx) =
1404
+
1405
+ hij(x)Wi(x)Wj(x) < 1.
1406
+ In fact, α and β are given by
1407
+ α = √(λh² + W0²) / λ,   β = −W0/λ,
+ respectively, and moreover,
+ λ := 1 − ∥W∥²_h,   W0 := hij Wi yj.
1417
+ For more details, see [5]. Then, F can be written as follows
1418
+ F = √(λh² + W0²)/λ − W0/λ.   (4.5)
1426
+ In this case, the pair (h, W) is called the navigation data of F.
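A minimal numerical sketch (added for illustration, not from the original): given navigation data (h, W) at a fixed point, the Randers metric (4.5) can be evaluated directly; the matrix h and vector W below are illustrative choices.

```python
import numpy as np

def randers_from_navigation(h, W, y):
    """Evaluate F(y) of Eq. (4.5) at a fixed point from navigation data (h, W).
    h : (n, n) positive-definite matrix, W : (n,) vector with h-norm < 1, y : (n,) tangent vector."""
    lam = 1.0 - W @ h @ W          # lambda = 1 - ||W||_h^2
    W0  = W @ h @ y                # W_0 = h_ij W^i y^j
    h_y = np.sqrt(y @ h @ y)       # h(x, y)
    return (np.sqrt(lam * h_y**2 + W0**2) - W0) / lam

# illustrative 2D example: Euclidean h and a small constant "wind" W
h = np.eye(2)
W = np.array([0.3, 0.0])
print(randers_from_navigation(h, W, np.array([1.0, 0.0])))   # about 0.769
```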
1427
+ Now, let G/H be any homogeneous manifold and g = h + m is its reductive decomposition.
1428
+ Suppose m = m0 + m1 be an Ad(H)-invariant decomposition, in which m0 is 1-dimensional
1429
+ and the Ad(H)-action on m0 is trivial. Let h be a G-invariant Riemannian metric on G/H,
1430
+ such that m0 and m1 are h-orthogonal to each other. Let W be a G-invairant vector field on
1431
+ G/H, such that W(o) ∈ m0\{0}. Then, the navigation process with the data (h, W) provides
1432
+ a G-invariant generalized Berwald Randers metric with S = 0 (see [7]).
1433
+ Example 6. (Xu) As we mentioned in Introduction, the Bao-Shen’s Randers metrics on S3 are
1434
+ concrete generalized Berwald metrics, namely they are not Berwaldian. Any non-Riemannian
1435
+ homogeneous Randers sphere S3 = SU(3)/SU(2) (including Bao-Shen’s Randers metrics) sat-
1436
+ isfies S = 0 with constant pointwise ∥β∥α-norms. Then, every non-Riemannian homogeneous
1437
+ Randers sphere is a generalized Berwald metric. An S3 × S1, in which S3 = SU(3)/SU(2) and
1438
+ the navigation field is tangent to the S3-factor, is a 4-dimensional generalized Berwald Randers
1439
+ metric (see [29]).
1440
+ Acknowledgments: The authors are so grateful to Ming Xu for his valuable comments on
1441
+ this manuscript. Likewise, we thank him for providing us with examples 5 and 6 which improve
1442
+ the quality of our manuscript. Also, we are thankful to Behzad Najafi, Mansoor Barzegari and
1443
+ Libing Huang for their reading of this manuscript and their comments.
1444
+
1445
1447
+ References
1448
+ [1] H. Akbar-Zadeh, Sur les espaces de Finsler ´a courbures sectionnelles constantes, Bull.
1449
+ Acad. Roy. Bel. Cl, Sci, 5e S´erie - Tome LXXXIV (1988), 281-322.
1450
+ [2] B. Aradi, Left invariant Finsler manifolds are generalized Berwald, Eur. J. Pure Appl.
1451
+ Math. 8(1) (2015), 118–125.
1452
+ [3] G.
1453
+ S.
1454
+ Asanov,
1455
+ Finsleroid-Finsler
1456
+ space
1457
+ with
1458
+ Berwald
1459
+ and
1460
+ Landsberg
1461
+ conditions,
1462
+ arXiv:math0603472.
1463
+ [4] N. Bartelmeß and V. S. Matveev, Monochromatic metrics are generalized Berwald, Differ.
1464
+ Geom. Appl. 58(2018), 264-271.
1465
+ [5] X. Cheng and Z. Shen, Finsler Geometry- An Approach via Randers Spaces, Springer,
1466
+ Heidelberg and Science Press, Beijing, 2012.
1467
+ [6] X. Cheng and Z. Shen, A class of Finsler metrics with isotropic S-curvature, Israel J.
1468
+ Math. 169(2009), 317-340.
1469
+ [7] Z. Hu and S. Deng, Homogeneous Randers spaces with isotropic S-curvature and positive
1470
+ flag curvature, Math. Z. 270(2012), 989-1009.
1471
+ [8] L. Huang and X. Mo, Geodesics of invariant Finsler metrics on a Lie group, preprint.
1472
+ [9] S. Ivanov, Monochromatic Finsler surfaces and a local ellipsoid characterisation, Proc.
1473
+ Am. Math. Soc. 146(2018), 1741-1755.
1474
+ [10] X. Mo and X. Wang, On Finsler metrics of constant S-curvature, Bull. Korean Math. Soc.
1475
+ 50(2) (2013), 639-648.
1476
+ [11] X. Cheng, H. Wang and M. Wang, (α, β)-metrics with relatively isotropic mean Landsberg
1477
+ curvature, Publ. Math. Debrecen. 72(2008), 475-485.
1478
+ [12] Z. Shen, On a class of Landsberg metrics in Finsler geometry, Canadian. J. Math. 61(6)
1479
+ (2009), 1357-1374.
1480
+ [13] Z. Shen, Differential Geometry of Spray and Finsler Spaces, Kluwer Academic Publishers,
1481
+ 2001.
1482
+ [14] Z. Shen, Finsler metrics with K = 0 and S = 0, Canadian J. of Math. 55(2003), 112-132.
1483
+ [15] Z. Shen, Two-dimensional Finsler metrics of constant curvature, Manuscripta Mathemat-
1484
+ ica. 109(3) (2002), 349-366.
1485
+ [16] Z. I. Szab´o, Generalized spaces with many isometries, Geometria Dedicata, 11(1981), 369-
1486
+ 383.
1487
+ [17] Sz. Szak´al and J. Szilasi, A new approach to generalized Berwald manifolds I, SUT J. Math.
1488
+ 37(2001), 19–41.
1489
+ [18] J. Szilasi and Sz. Szak´al,
1490
+ A new approach to generalized Berwald manifolds II,
1491
+ Publ. Math. Debrecen, 60(2002), 429–453.
1492
+
1493
1495
+ [19] J. Szilasi, R. L. Lovas, and D. Cs. Kert´esz, Connections, Sprays and Finsler structures,
1496
+ World Scientific, 2014.
1497
+ [20] A. Tayebi and M. Barzegari, Generalized Berwald spaces with (α, β)-metrics, Indagationes.
1498
+ Math. (N.S.). 27(2016), 670-683.
1499
+ [21] A. Tayebi and E. Peyghan, On Douglas surfaces, Bull. Math. Soc. Science. Math.
1500
+ Roumanie. Tome 55 (103), No 3, (2012), 327-335.
1501
+ [22] A. Tayebi and M. Rafie Rad, S-curvature of isotropic Berwald metrics, Sci. China. Series
1502
+ A: Math. 51(2008), 2198-2204.
1503
+ [23] M. Xu and S. Deng, Killing frames and S-curvature of homogeneous Finsler spaces, Glas-
1504
+ gow. Math. Journal. 57(2015), 457-464.
1505
+ [24] M. Xu and S. Deng, Normal homogeneous Finsler spaces, Transformation Groups.
1506
+ 22(2017), 1143-1183.
1507
+ [25] C. Vincze, On a special type of generalized Berwald manifolds: semi-symmetric linear
1508
+ connections preserving the Finslerian length of tangent vectors, European Journal of Math.
1509
+ 3(2017), 1098-1171.
1510
+ [26] C. Vincze, On Randers manifolds with semi-symmetric compatible linear connections, Inda-
1511
+ gationes. Math. (N.S.). 26(2015), 363-379.
1512
+ [27] C. Vincze, On generalized Berwald manifolds with semi-symmetric compatible linear con-
1513
+ nections, Publ. Math. Debrecen. 83(2013), 741-755.
1514
+ [28] C. Vincze, T. R. Khoshdani, S. M. Z. Gilani, and M. Ol´ah, On compatible linear connections
1515
+ of two-dimensional generalized Berwald manifolds: a classical approach, Commun. Math.
1516
+ 27(2019), 51-68.
1517
+ [29] M. Xu, Geodesic orbit spheres and constant curvature in Finsler geometry, Differ. Geom.
1518
+ Appl. 61(2018), 197-206.
1519
+ [30] L. Zhang and S. Deng, On generalized normal homogeneous Randers spaces, Publ. Math.
1520
+ Debrecen. 90(2017), 507-523.
1521
+ Akbar Tayebi and Faezeh Eslami
1522
+ Department of Mathematics, Faculty of Science
1523
+ University of Qom
1524
+ Qom. Iran
1525
+ Email: akbar.tayebi@gmail.com
1526
+ Email: faezeh.eslami70@gmail.com
1527
+
7tAzT4oBgHgl3EQfE_oc/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
7tE1T4oBgHgl3EQfBwI8/content/2301.02855v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:63f1d74c2d7cbf7129daea6e275e70b483848df463f2ab2fdaf3044bdea45372
3
+ size 725928
7tE1T4oBgHgl3EQfBwI8/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:02ce3adfd32913a18dfb4d6192478d4ba76295788c26358619ed16e1771b8e5b
3
+ size 2555949
89E1T4oBgHgl3EQf7wWX/content/tmp_files/2301.03538v1.pdf.txt ADDED
@@ -0,0 +1,1370 @@
 
 
 
 
 
 
 
 
 
1
+ British Journal of Social Policy (2022), 2, 1–18
2
+ doi:–
3
+ British
4
+ Journal of
5
+ Social
6
+ Policy
7
+ ARTICLE
8
+ Quantifying how space missions influence geopolitical
+ dynamics: a security and social policy approach.
10
+ Andrea Russo*,1,2 and Davide Coco3
11
+ 1University of Catania, Italy
12
+ 2University College Dublin, Ireland
13
+ 3University of Rome, Sapienza University, Italy
14
+ *Corresponding author. Email: Andrea.russo@phd.unict.it
15
+ (Received –; revised –; accepted –; first published online –)
16
+ Abstract
17
+ We present a computational method to quantify the geopolitical impact of a space mission, based on the
18
+ national budget and data logs of previous mission, and evidencing how even if some missions succeed,
19
+ they can bring negative effect to the sponsored country. The objective of this research is to study how the
20
+ success (or failure) of a space mission can bring an economical and political benefit (or loss) to a country.
21
+ By retrieving various data, including sentiment from #hashtags related to the considered space missions,
22
+ national budgets for space exploration, and the reliability of space launch systems, from social networks,
23
+ public institutions, and online repositories, we propose an equation to evaluate the geopolitical importance
24
+ of a space mission for a particular country or space agency. The geopolitical equation can be used by public
25
+ institutions or private companies to estimate the potential impact of a space mission on the public opinion
26
+ and international relationships, which can be either positive or negative, as even successful missions may
27
+ negatively affect the international relationships and negotiation with some countries and their partners.
28
+ We also combine the framework of classic social policy with a security- and space-mission point of view,
+ to highlight cultural, institutional, and political limits in public spending decisions.
30
+ Keywords: Geopolitical dynamics, Space Mission, Social policy, Political dynamics and Computational methods
31
+ 1.
32
+ Introduction
33
+ ———————————-
34
+ Since the end of World War II, the proposal of ambitious space programs by governments and
35
+ national space agencies has always been a means not only to push forward space exploration and
36
+ research but also to alter the prestige and geopolitical influence in the international context. For
37
+ instance, the social policy action by the 35th United States president John Fitzgerald Kennedy to
38
+ invest $25 billion (1961 US dollar value) in the Apollo mission [Bozzo 2018] was not only a social
39
+ investment to increase highly qualified public employment, but also a geopolitical plan of
+ action against the USSR's space expansionism.
41
+ Geopolitical value has been apparent since the very first space missions, for instance after the Russian satellite
42
+ "Sputnik 1" successfully orbited the Earth. The Space Race that characterized the 20th century, was
43
+ actually a geopolitical and propaganda race to determine which country would have finally had
44
+ © Cambridge University Press 2022. This is an Open Access article, distributed under the terms of the Creative Commons Attribu-
45
+ tion licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium,
46
+ provided the original work is properly cited.
47
+ arXiv:2301.03538v1 [physics.soc-ph] 9 Jan 2023
48
+
49
+ 2
50
+ Andrea Russo et al.
51
+ access (and conquered) the “new and endless world above us”. This geopolitical space race has been
52
+ sustained by a huge effort from a social policy prospective. The $25 billion USD for the Apollo
53
+ mission on 12th September 1962 (same day of the “Address at Rice University on the Nation’s Space
54
+ Effort” speech by United States President John F. Kennedy to further inform the public about his plan
55
+ to land a man on the Moon before 1970), are the equivalent of $231 billion USD on 7th February
56
+ 2022 [Appendix 1].
57
+ Nowadays, superpowers like the United States, China, or the Russian federation, have increased
58
+ the frequency of space missions to show their presence and value in a geopolitical and international
59
+ perspective. The surge of space missions’ proposals in the last decades was due also to the cost of
60
+ access to space, which significantly decreased thanks to the development of reusable launch systems,
61
+ performing hardware, and IT and IoT improvement.
62
+ Space missions have attracted huge money investments by public and private actors, with a social
63
+ and business impact, due to the their potential economic return and their socioeconomic impact, as
64
+ the design of a space mission encourages public high quality work and many public services originate
65
+ from space activity (GPS, global mapping, high speed connection, global communication, and many
66
+ others) [ASI 2020].
67
+ Given the aforementioned result, thus the spin-offs of space services to the population, can we
68
+ define space missions (both research and security missions) as social policy, given the socio-economic
69
+ effects and the infrastructure-related services?
70
+ The security policy of the US government in early 2000s launched by the “Pentagon” (i.e., the
71
+ headquarters building of the United States Department of Defense) highlights how the difference
72
+ between social policy and security policy became less evident. Indeed, the financial investment was
73
+ aimed at developing a security program to defend social and public infrastructures and resulted in
74
+ the X37-B program, a small shuttle that could defend other satellites, which provide social services,
75
+ from Russian and Chinese physical and cyber-attacks, thus interlacing the security and social aspects
76
+ [Spagnulo 2020]. Today, the relationship among social policy, security, space missions, and geopoli-
77
+ tics is more intricate and complex than we could have imagined years ago.
78
+ Designing a space mission is an extremely difficult task, with a high probability of failure due
79
+ to the complexity of aerospace systems and the harsh conditions under which they are supposed to
80
+ operate. The launch system plays an essential role in a space mission, as rockets must be “perfect”
81
+ systems that respond seamlessly to all the perturbations that they experience during the atmospheric
82
+ ascent and the exoatmospheric flight [Benedikter et al. 2021, 2022]. Every phase of the ascent
83
+ trajectory must be carefully studied and planned before flight, as the margin of error is extremely
84
+ small, even for the apparently simple scenarios. A first distinction by difficulty in space launches can
85
+ be made by considering suborbital flights and orbital flights. In the former, the greatest difficulty
86
+ is reaching a high altitude and then re-enter the atmosphere before completing a full revolution
87
+ around the Earth. Suborbital flights generally cross what is called the Karman line, an imaginary line
88
+ at an altitude of 100 km (330’000 ft) above sea level that conventionally marks the boundary between
89
+ the Earth’s atmosphere and outer space. Although it is not a necessary requirement, the apogee (i.e.,
90
+ the maximum altitude) is a benchmark for more or less complex suborbital flights, as it indirectly
91
+ determines the speed that the vehicle will reach during re-entry and therefore the thermal and
92
+ mechanical stresses that it will undergo. For orbital flights, instead, the inherent difficulty depends
93
+ on the target orbit that the payload must be released into. The range of possible scenarios increases
94
+ enormously when it comes to orbital flights from one planet (or celestial body) to another. For
95
+ instance, a flyby, that is, the short passage of a high-speed probe near a celestial body, is seen as
96
+ an extremely critical moment compared to the simple time spent cruising. On the other hand, the
97
+ orbit insertion of a probe around a celestial body is a more critical moment than the flyby because it
98
+
99
+ British Journal of Social Policy
100
+ 3
101
+ requires several active maneuvers that involve multiple simultaneously operating systems. In a similar
102
+ way, landing on a planet or asteroid is even a more critical accomplishment, as it involves much more
103
+ complex operations.
104
+ This paper is inspired by the growing amount of usable data from social and space science.
105
+ During recent years, social science has acquired methods and skills to collect data and use it like hard
106
+ science to highlight and identify social patterns or social dynamics. Information technology has also
107
+ increased the quality of social research, thanks to huge advancements in algorithms and data analysis.
108
+ This social-informatics improvement allows social science to connect with others disciplines, such as
109
+ space science, engineering, and complex systems. Social science can finally prove or evidence social
110
+ complex interactions in political science and others social disciplines. For example, in this work, we
111
+ try to evidence a complex interaction between space missions and international cooperation, based
112
+ on risk-cost-benefit analysis and political affinity.
113
+ Due to the great inherent complexity of aerospace projects, international cooperation allows for
114
+ mitigating the risks and costs (both financial and time-related) of space missions. There are several
115
+ examples that show that the cooperation among national space agencies or research institutes has
116
+ brought benefits to all the parties involved, not only relative to the economic return of scientific
117
+ discoveries and patented technologies, but also to a positive outcome in terms of reputation and
118
+ geopolitical prestige associated with these missions.
119
+ Besides the technical aspects, the organization and management of a space mission are also
120
+ quite challenging because every political interaction or action during the mission has a series of
121
+ emerging behaviors in international affairs, and nonlinear interactions affect also reactions on political,
122
+ economical, and security layers, which go beyond the space system [Dittmer 2014]. Complexity
123
+ science tries to explain the dynamics of a system (e.g., physical, biological, social, or economical) that
124
+ bring a different result than the expected one. A simple way to define a complex system is when the
125
+ whole system is bigger than the sum of its parts. Since space missions are related not only to social
126
+ and security policies but also to international relationships and geopolitical dynamics, then they can
127
+ be considered as complex systems.
128
+ Social policy, social network activity, and space exploration are just a slice of the whole part,
129
+ but sufficient to understand how space missions influence the geopolitical strategy. In complexity
130
+ sciences, the online social network activity patterns consist of successive, spike-like perturbations,
131
+ generated by consecutive shocks bearing conceptual similarities to the tremors preceding or following
132
+ an earthquake [Lymperopoulos 2017]. Indeed, the earthquake’s dynamics follow the Self-Organized
133
+ Critically (SOC) model [Bak and Chen 1991], present not only in nature but also in social systems
134
+ [Dmitriev, Dmitriev, and Balybin 2019].
135
+ These dynamics could have been studied only with an hard science methodology and high quality
136
+ data, which, in the pasts years, could have been very hard to collect.
137
+ In this paper, we used computational method to collect high quality data with a rigorous
138
+ methodology to understand geopolitical and public policy effect from space mission. More precisely,
139
+ we acquired data from social networks like Twitter, to evaluate the sentiment of space missions, which
140
+ evidences the social reaction about the related space event. The sentiment analysis has been used by
141
+ many others scientist over different research areas, and, if combined with a high quality computational
142
+ method, it could highlight important patterns related to social events. This methodology has been
143
+ called Computational Social Science (CSS) [Cioffi-Revilla 2014].
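As an illustration of the kind of pipeline used here, the following sketch (not the exact production code of this study) scores the sentiment of a handful of hypothetical mission-related tweets with NLTK's VADER analyzer; in the real workflow the texts are retrieved from the Twitter API by #hashtag.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download('vader_lexicon', quiet=True)   # one-off download of the VADER lexicon
sia = SentimentIntensityAnalyzer()

# hypothetical tweets about a space-mission hashtag (illustrative only)
tweets = [
    "Incredible! The lander touched down safely. Huge day for the agency! #MoonMission",
    "Another delay and another cost overrun... why are we still funding this? #MoonMission",
    "Watching the launch live with the kids tonight #MoonMission",
]

scores = [sia.polarity_scores(t)["compound"] for t in tweets]   # compound score in [-1, 1]
print(scores, sum(scores) / len(scores))
```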
144
+ 1.1
145
+ Related Work
146
+ Computational Social Science, since its introduction, provided a valuable approach to asses and
147
+ explain social dynamical events. In this short section, we present some noteworthy works, relative to
148
+
149
+ 4
150
+ Andrea Russo et al.
151
+ computational social complexity and sentiment analysis, helpful to understand our project.
152
+ Joseph Downing and Richard Dron [Downing and Dron 2020] have contributed to the under-
153
+ standing of constructivist security by analysing social media outputs to understand who is influential
154
+ in the security debate. Jie Yin et al. [Yin et al. 2015] have focused on analyzing Twitter messages
155
+ generated during humanitarian crises and disasters. They presented relevant methods for burst
156
+ detection, tweet filtering and classification, online clustering, and geotagging to classify whether or
157
+ not a tweet is talking about a disastrous event. A disaster type classifier groups tweets according to
158
+ representative disaster types: earth-quake, flooding, fire, storm, other disasters (e.g., traffic accident
159
+ and civil disorders), and non-disaster.
160
+ Sancheng Peng et al. [Peng et al. 2017] have worked on a framework that quantifies social
161
+ influence in mobile social networks (Phones). The social influence of users was measured by analyzing
162
+ the SMS/MMS-based communication behaviors among individuals. They have also revealed and
163
+ characterized the social relations among mobile users through the analysis of the entropy of friend
164
+ nodes and the entropy of interaction frequency. The extensive analytical results demonstrate that
165
+ the influence spread of their proposed method outperforms a random method and a degree-based
166
+ method.
167
+ Kolli et al. [Kolli, Balakrishnan, and Ramakrishnan 2017] have quantified the predictability of
168
+ cascade volumes in online social media communications, and tried to understand to what degree
169
+ are future cascade trajectories predictable and to what degree are they random. The predictability
170
+ analysis in their work reveals that for methods that combine information on frequency with temporal
171
+ correlation of the trajectory, provides the theoretical limit on predictability. Hence, their methods
172
+ such as AR and ARMA models that rely on the time order of data have the potential to achieve
173
+ prediction accuracy as high as 83% in the MemeTracker dataset and 87% in the Twitter Hashtag
174
+ dataset. Correspondingly, these methods have the potential to achieve prediction accuracy as high as
175
+ 94% in the MemeTracker dataset.
176
+ Cihon and Yasseri [Cihon and Yasseri 2016], have written a short review that considers a small
177
+ number of selected papers on computational social science related to sociology and social science; they
178
+ analyze their theoretical approaches, review their methodological innovations, and offer suggestions
179
+ given the relevance of their results to political scientists and sociologists. They evidence how the
180
+ sentiment analysis content using semantic and sentiment analytic algorithms on individual users
181
+ opinions from the 15M movement in Spain ( analyzing up to 200 tweets on the topic per user) is a
182
+ promising technique for future studies of political activity, and, indeed, any activity, on Twitter.
183
+ Vidgen and Yasseri [Vidgen and Yasseri 2020] built a multi-class classifier that distinguishes
184
+ between non-Islamophobic, weak Islamophobic, and strong Islamophobic content. Accuracy is
185
+ 77.6% and balanced accuracy is 83%. They applied the classifier to a dataset of 109 488 tweets
186
+ produced by far right Twitter accounts during 2017. While most tweets resulted not Islamophobic,
187
+ weak Islamophobia was considerably more prevalent (36 963 tweets) than strong (14 895 tweets).
188
+ 1.2
189
+ Geopolitical Effects of Space Missions
190
+ Sentiment analysis has been used in different research areas and with different goals. In this paper, we
191
+ apply it to assess the geopolitical effect of space missions by collecting data from latest and legacy space
192
+ mission that have or have not been successful. Improving the success rate of space missions implies,
193
+ from a geopolitical standpoint, an improvement of the international status. However, on the other
194
+ hand, a failure can damage the relationship among international partners. The success of the Apollo
195
+ 11 mission by NASA made the United States the winner of the space race and raised its geopolitical
196
+ value even though many milestones were reached earlier by the USSR (first orbiting satellite and first
197
+ human in space, to name a couple). Recently, numerous space missions were successfully launched and
198
+ fulfilled their planned goals or even performed beyond expectations, receiving positive reception from
199
+ the public opinion and altering (or consolidating) the international status of the involved countries.
200
203
+ Noteworthy examples of recent successful missions are the Ingenuity helicopter, the first helicopter
204
+ to fly outside the planet Earth, which was sent to Mars together with the Perseverance rover as part
205
+ of NASA’s Mars2020 mission, the Rosetta mission, which carried Philae, the first spacecraft ever to
206
+ accomplish a soft landing on the surface of a comet (67P/Churyumov-Gerasimenko), launched in
207
+ 2014 by ESA (European Space Agency), the latest James Webb Space Telescope, placed in a very
208
+ challenging orbit (Sun–Earth Lagrangian point L2) in 2021 by NASA and ESA, and the Chang’e 4
209
+ mission, featuring the first soft landing rover on the far side of the Moon, launched in 2018 by CNSA
210
+ (China National Space Administration).
211
+ Even when a mission succeeds, there may be criticism from society or even consequences
212
+ and repercussions from other international actors, affecting the geopolitical status and international
213
+ relationships of the sponsoring country. A computational social science methodology [Lazer et
214
+ al. 2020] [Lazer et al. 2009] [Cihon and Yasseri 2016] could assess how much space missions could
215
+ increase or decrease (in case of failures or partially successful missions) the geopolitical status of each
216
+ country. In particular, the reaction of people on social networks generally gives a "feel strong" status
217
+ to each country during international negotiations. Also, a statistical qualitative evaluation of the
218
+ success-to-failure ratio of space missions and rocket launches indicates the reliability of space launch
219
+ systems and mission design and management of every country. The geopolitical value depends on
220
+ other people’s perception of strength. Therefore, if many people make negative comments (or have a
221
+ negative sentiment) toward something that has been accomplished by some entity, implicitly it will
222
+ come to change other people’s perception of it, and thus also its geopolitical value.
223
+ Over the past decades, the goal of numerous space missions was not only to meet the social
224
+ policy agenda of each country or to carry out scientific investigations, but also to reinforce the strength
225
+ of international relationships between countries. For instance, the ExoMars mission helped
226
+ reinforce the relationship between ESA and Roscosmos, the Russian space agency, before the
227
+ Russian-Ukrainian war. Likewise, the DART mission is a cooperation between ASI (Italian Space
228
+ Agency) and NASA, which strengthened the bond between European countries and the United States
229
+ in space operations. The number of cooperative space missions holds the promise to significantly
230
+ increase in the near future, not only to strengthen and seal international relationships, but also to
231
+ reduce the cost and time needed to design and accomplish a space mission.
232
+ The United States was the first country to understand this new political frontier [Bozzo 2018],
233
+ and its actions have paved the way (from the industrial side) for the design of new engines and
234
+ the world’s first reusable rocket, giving it a huge advantage over the other space competitors. The
235
+ objective of this paper is to quantify the geopolitical score of each state, and evaluate how much it
236
+ may fluctuate depending on space missions.
237
+ 1.3
238
+ Paper Outline
239
+ This paper is organized as follows. Section 2 introduces the research hypotheses on space missions
240
+ and geopolitical dynamics. Section 3 describes the methodological approach. In Section 4, we present
241
+ the considered data sets and discuss the results. Section 5 explores the opportunity for a Space Mission
242
+ Proposal by ESA, which can improve its geopolitical score. A conclusion section ends the manuscript.
243
+ 2.
244
+ Research hypotheses
245
+ Our research hypothesis is to establish whether it is possible to create an equation that gives a
246
+ geopolitical score inherent to space missions, since we consider as 'social policy' all
247
+ those services deriving from space missions that also increase the security and geopolitical prestige
248
+ of the state. Treating the outcome of space missions as a public policy is not unreasonable, given the
249
+ increase in public services and highly skilled labour that is initiated at the beginning and end of
250
+ each space mission. Defence missions can also be part of an increase in work and services to citizens.
251
+ Indeed, the US, Russian Federation, China, and EU countries use defense budgets to organize space
252
255
+ missions for security and communication operations. Space missions can provide useful information
256
+ to the state, companies and citizens, such as weather alerts to prevent catastrophes, prediction and
257
+ prevention of potentially dangerous events, increased communication areas (internet and telephones)
258
+ and also precise information on possible antagonists of the state to which they belong. In fact, over
259
+ the past decades, many administrations have financed security space missions to obtain data about
260
+ their competitors (using for example spy satellites) or just to amplify the satellite communication
261
+ network. However, the political tension over the dynamic and unpredictable security situation
262
+ between the US and China has not only touched the military and economic sectors, but has also reached
263
+ the space and intelligence systems [Giannangeli 2022]. As a matter of fact, Admiral Rob Bauer, the
264
+ Dutchman who since June 2021 chairs the NATO Military Committee, the highest military body
265
+ of the Atlantic Alliance, did not mince words when describing the framework of the
266
+ challenges pressing on the Euro-Atlantic area. He said that "Russia and China are increasing their
267
+ respective military capabilities, conventional and nuclear, space and cyber: it’s a fact, not an opinion."
268
+ [Pioppi 2021]
269
+ However, space missions can also be used to improve conditions between states. Indeed, inter-
270
+ national events, such as space missions, are often used as a time to reconcile or strengthen relations
271
+ between states. In fact, projects between the US and ESA are not only tools for providing jobs and
272
+ services to citizens, thus moving the economy, but are also areas of cultural inclusion and knowledge,
273
+ the latter of which is jealously guarded, since it can be copied by foreign competitors.
274
+ There can therefore be real international as well as economic strategies to get a space project off the
275
+ ground.
276
+ In order to calculate how space missions have a geopolitical impact, hence on security and citizen
277
+ services, we considered a large portion of the space missions carried out by the various countries.
278
+ 2.1
279
+ Considered Space Missions
280
+ We searched for recent space missions (i.e., launched after the advent of social networks) that had
281
+ repercussions in the geopolitical and security domains, and we collected data for the missions
282
+ reported in Table 1. The list includes only missions sponsored by countries that develop launch
283
+ vehicles, as launch success and failure data are used as a way to estimate the country’s experience in
284
+ space missions (see Section Methodology or Table 4).
285
+ Table 1. Space mission data collected
+ Mission    | Country | Result
+ Tianwen-1  | China   | Succeeded
+ Tianhe     | China   | Succeeded
+ ASAT       | Russia  | Succeeded
+ Mars 2020  | US      | Succeeded
+ James Webb | US/EU   | Succeeded
+ Rosetta    | EU      | Semi-Succeeded
307
+ The Tianwen-1 is a Chinese interplanetary mission to send a robotic spacecraft to Mars. It was
308
+ launched July 23, 2020 [CNSA.gov 2020], and it landed on Mars on May 14, 2021 [CNSA 2021].
309
+ The mission consisted of an orbiter, a lander, and a rover called "Zhurong". The mission succeeded,
310
+ and NASA’s associate administrator Thomas Zurbuchen and the director general of Roscosmos
311
+ Dmitry Rogozin congratulated the China National Space Administration (CNSA).
312
+ China has also launched the Tianhe core module, which is the first module of the Tiangong space
313
+ station, on April 29, 2021 atop a Long March 5B rocket [S. CNSA 2021]. The core stage of the
314
317
+ LM-5B crashed back to Earth on Saturday, May 8, 2021, after 10 controversial days that captured
318
+ the world’s attention and started a wider conversation about orbital debris and the responsibility for
319
+ the return of spent stages.
320
+ On November 15, 2021, Russia conducted a direct-ascent anti-satellite test (ASAT), destroying
321
+ one of its own space objects, a defunct satellite, in low-earth orbit [BBC 2021]. The test captured
322
+ international attention and was quickly and widely condemned as threatening and irresponsible
323
+ action, not least for the cloud of uncontrollable debris it created, which will endanger both space assets
324
+ and human spaceflight for years to come. Other countries in the past have already organised ASAT
325
+ missions, like the Mission Shakti (India), the ASM-135 ASAT (US) and the 2007 Chinese anti-satellite
326
+ missile test (China). Others to object in the wake of the test included Australia, the European Union,
327
+ Japan, NATO, and South Korea. China and India – the two countries other than Russia and the
328
+ US that have previously conducted destructive ASAT tests – are yet to comment publicly. Also,
329
+ following the Russian ASAT mission, the International Space Station started emergency procedures
330
+ due to the debris, closing its security hatches while the crew sheltered.
331
+ Mars 2020 is a Mars rover mission [NASA.gov 2021] that includes the rover Perseverance and
332
+ a small helicopter called Ingenuity. It was launched from Earth on July 30, 2020 and landed in
333
+ the Martian crater Jezero on the 18th of February of the following year. The Mars 2020 mission
334
+ forms part of NASA’s Mars Exploration Program, which will continue with a sample return from
335
+ Mars. Ingenuity is a robotic helicopter that demonstrated the technology for rotor-craft flight in the
336
+ extremely thin atmosphere of Mars, becoming the first controlled helicopter on another planet. The
337
+ budget for the Perseverance rover was US$2.8 billion in 2020 and was cheaper than its predecessor,
338
+ the Curiosity rover, which cost $3.2 billion [Wikipedia 2022].
339
+ The James Webb Space Telescope [J. NASA 2020] is a space telescope designed primarily to conduct
340
+ infrared astronomy. NASA led the development of the telescope in collaboration with ESA and
341
+ CSA (Canadian Space Agency). The mission duration is expected to be about 20 years, and the overall
342
+ time planning was 20 years (10 for planning and 10 for realization). It was launched
343
+ on December 25, 2021 by the contractor Arianespace from the Centre Spatial Guyanais, with an
344
+ Ariane 5 rocket. The James Webb space telescope had a total budget of USD ∼ 9.70 billion (2002
345
+ to 2021) and has several scientific goals, including the search for light coming from the first stars
346
+ and galaxies that formed in the universe after the Big Bang, the study of the galaxy, star, and planet
347
+ formation and evolution, and studying planetary systems and the origins of life.
348
+ The Rosetta mission [ESA 2014] was a space probe built by the ESA (European Space Agency)
349
+ launched on March 2, 2004. The mission’s goal was to perform a detailed and comprehensive study
350
+ of comet 67P/Churyumov–Gerasimenko (67P). In August 2014, the spacecraft reached the comet
351
+ and performed a series of manoeuvers to eventually orbit the comet at distances of 30 to 10 kilometres.
352
+ The probe also housed a lander called Philae, which unfortunately was unable to last long on the
353
+ comet’s surface after a less-than-perfect landing. This was, indeed, the first mission landing on a
354
+ comet. Yet, despite the problems, the probe’s instruments obtained the first images from a comet’s
355
+ surface, and several instruments made the first direct analysis of a comet, sending back data that
356
+ would be analysed to determine the composition of the surface. The mission cost was about ∼ 1.3
357
+ billion € (US ∼ $ 1.8 billion).
358
+ It is worth noting that even when the mission succeeds, it may happen that the mission goal has
359
+ external or side effects that provoke a social or political reaction. Such reactions spread when there
360
+ is something that does not belong to the ordinary course of events, or that may provoke instability in
361
364
+ some country-system. We collect data from Twitter to see the social reaction to the space missions in
365
+ Table 1. In all those cases, the missions succeeded, but, in some cases, the mission can cause a social
366
+ reaction due to side effects.
367
+ 3.
368
+ Methodology
369
+ Online social networks have evolved into valuable sources of information and pervasive commu-
370
+ nication platforms where people, businesses, and organizations generate and share content, build
371
+ relationships, and join public conversations. In this online ecosystem, social networks are where
372
+ information propagation is affected by external sources of influence, such as mass media, socioe-
373
+ conomic circumstances, advertising, or events, giving rise to collective intelligence or collective
374
+ behaviour patterns.
375
+ We have collected social reactions from Twitter, for every mission in Table 1 and compared them
376
+ with the difficulty and quality of the national space organization and the country that launched the
377
+ space mission to quantify the "Authority and the status of power" status in geopolitical interaction
378
+ dynamics. To simplify this operation, we created an equation, called GSS, to quantify how space
379
+ missions can influence geopolitical feelings during international affairs dynamics.
380
+ The geopolitical space score (GSS) index is the geopolitical score value from each country,
381
+ depending on the result of the spaceflight mission, related to statistical and social events [Equation (1)].
382
+ GSS = S / (G ∗ B ∗ (F + Q))    (1)
386
+ S is the Sentiment value from the Twitter event; B is the amount of money invested (budget); G is a
387
+ difficulty rate associated with the country that launched the space mission ("not because they are
388
+ easy, but because they are hard"), which implies geopolitical effects; F is related to the success or failure
389
+ of the mission; and Q stands for the statistical Quality and difficulty of the country in spaceflight
390
+ launch organization.
391
+ The S and Q parameters are evaluated by data scientists through statistical methods. The G score
392
+ is the only parameter that needs a subjective value because it depends on personal evaluation and
393
+ hypotheses to evaluate the difficulty rate of reaching that goal.
394
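+ As a rough illustration of how Equation (1) could be evaluated in code, the sketch below mirrors the
+ structure of the formula as printed above; the function name and the example factor values are ours and
+ purely hypothetical, not the authors' implementation, and any scaling choices behind the reported GSS
+ values are not spelled out here.
+ def gss(S, G, B, F, Q):
+     # S: Twitter sentiment (VADER compound); G: subjective difficulty rate (0.1-1);
+     # B: annual space budget in billions of USD; F: 1 if the mission succeeded, 0 if it failed;
+     # Q: statistical failure rate of the country's launch systems.
+     return S / (G * B * (F + Q))
+ # placeholder values for illustration only
+ print(gss(S=0.46, G=0.3, B=8.85, F=1, Q=0.052))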
+ 3.1
395
+ S Factor
396
+ To collect data for the selected topic, we use the public API by Twitter and Tweepy. Both services
397
+ require permission from Twitter to obtain and gather data. We collected a total of ∼ 7000 tweets, but
398
+ any downloaded topic needs revisions and a cleaning process to increase the quality of the research.
399
+ This has significantly reduced the volume of the tweets. We used the same methodology for each
400
+ topic to obtain standard and quality data. In addition, to obtain the correct amount of tweets for
401
+ each day we use getdaytrends.com, a specific site where it is possible to monitor every topic in real
402
+ time as well as aged topics.
403
+ To calculate the sentiment for the selected topic, we used the VADER sentiment analysis tools
404
+ provided by MIT. The VADER sentiment quantifies each selected post or sentence’s negative, neutral,
405
+ or positive sentiment, giving, at the end of the analysis, a compound score, i.e. the average over
406
+ all sentences.
407
+ However, to evaluate the sentiment for every mission, it is necessary that each mission be very
408
+ important to the general public or, at least, sufficiently viral among the space community.
409
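+ As a minimal sketch of this step, the snippet below averages VADER compound scores over a list of
+ tweets; the tweet list is a placeholder and the cleaning process described above is not shown.
+ from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
+ def topic_sentiment(tweets):
+     # average VADER compound score over the collected tweets of one topic/hashtag
+     analyzer = SentimentIntensityAnalyzer()
+     scores = [analyzer.polarity_scores(t)["compound"] for t in tweets]
+     return sum(scores) / len(scores) if scores else 0.0
+ # placeholder tweets, not the collected dataset
+ print(topic_sentiment(["Perseverance landed safely!", "Another rover, more tax money..."]))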
412
+ 3.2
413
+ G Factor
414
+ As mentioned above, the G factor is the only parameter without computational or statistical data.
415
+ This parameter evaluates the country that launches the space mission, corresponding to a difficulty
416
+ rate implying geopolitical effects. For example, if a small state succeeds in a mission with the same
417
+ budget and other factors in comparison to a big state such as the US etc., the small state will get a
418
+ bigger bonus.
419
+ Unfortunately, there is no universal value factor that gives a score to quantify the difficulty of space
420
+ missions.
421
+ Geopolitics is a social evaluation, therefore deriving from the perceptual fluctuations of people.
422
+ Since people make up a complex system, they cannot give a univocal value to the G factor, because it
423
+ always depends on the value of the individual and on the oscillations (provoked by news, newspapers,
424
+ friends, etc.) that modify the perception and evaluation of individuals and society on certain topics.
425
+ Due to this problem, we decided to self-evaluate the difficulty of a space mission, even if it is
426
+ hard to make an assessment that takes into account everything. There are many pros and cons, and,
427
+ in each case, we know that people cannot evaluate sufficiently well the difficulty of a space mission.
428
+ Yet, we believe that people can understand whether it is a surprise if a very small country alone (like
429
+ Ireland or Pakistan) manages, for example, to build a Martian base while the US could not do it.
430
+ We estimate a self-score from 0.1 to 1, where 0.1 is the maximum and 1 is the minimum possible
431
+ score. For example, a sub-orbital launch mission could have a 1 score, an orbital mission could be a 0.8
432
+ score. The Tianwen-1 could be evaluated as 0.3, like the Martian Ingenuity helicopter on Mars. A
433
+ full Moon base could be evaluated as 0.2 (or 0.1 if it is on Mars). All other space missions beyond the
434
+ state-of-the-art technology level cannot be evaluated, because, we do not have sufficient information
435
+ to be able to carry out the mission successfully, like a human base on the surface of Mercury, Titan
436
+ or Europa, for example.
437
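+ For illustration, the self-assessed difficulty scores described above could be kept in a simple lookup
+ table such as the hypothetical one below, which merely restates the examples given in this subsection.
+ # hypothetical G-factor lookup restating the examples in the text
+ G_SCORES = {
+     "sub-orbital launch": 1.0,
+     "orbital mission": 0.8,
+     "Mars rover or helicopter (e.g. Tianwen-1, Ingenuity)": 0.3,
+     "full Moon base": 0.2,
+     "full Mars base": 0.1,
+ }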
+ 3.3
438
+ B Factor
439
+ In the equation, we thought it was important to evaluate, then quantify, the level of resources
440
+ invested versus the expected outcome (successful or failed mission). We thought this based on the
441
+ logic "why should a mission cost a lot of money, while it is possible doing the same things while
442
+ spending fewer resources?". To evaluate and quantify this factor, we collected data from the most
443
+ important space agencies of each country. Since the experience gained in the design and management
444
+ of past missions can be exploited in newer missions, we chose not to use a specific budget for each
445
+ mission in the equation, but, rather, the annual funds dedicated to space missions. In this logic, it
446
+ is hard to quantify how much the oldest missions have helped (with knowledge, money,
447
+ competence, technology) the newest space missions. Also, the budgets of huge space missions like the JWST or
448
+ the Rosetta mission grow over the years. The budget for a mission is spread over years,
449
+ and we cannot take only a single space mission budget, because there are also many other funded
450
+ missions besides the JWST in the same year. Therefore, we thought that the Budget factor
451
+ also implies a public opinion logic. Public opinion is usually skeptical about spending money on space
452
+ missions, because people do not (unfortunately) see the huge policy investment in research, security
453
+ and employment; "Rockets don’t run on cash". And nowadays, it is still difficult to quantify
454
+ the return (knowledge, money, competence, technology) on the policy investment
455
+ in space missions. In addition, the budget factor also implies the economic strength of the country
456
+ that invests in space missions. For example, in the equation, if a small country achieves a successful
457
+ space mission with a low budget, the GSS score will be higher than if the same country (or a bigger
458
+ country) achieves the same mission with a higher budget.
459
+ However, to give an insight into space investment: in 2020, the space mission budgets of the major gov-
460
+ ernments amounted to $73.98 billion, corresponding to ∼ 0,927 % as a share of gross
461
464
+ domestic product (GDP), with a mean of ∼0,115875 % for each country. The Organisation for
465
+ Economic Co-operation and Development (OECD) shows the total value of the space budgets of
466
+ the G20 countries [Table 2], and we add, for comparison, the yearly military budget of each country.
467
+ Table 2. G20 government space and military budgets (2020)
+ Country      | 2020 Space Budget in Billions | ∼ National Space Budget % | ∼ National Military Budget %
+ US           | 22.62 | 0.480 % | 3.74 %
+ Russia       | 3.58  | 0.210 % | 4.26 %
+ France       | 4.04  | 0.122 % | 2.07 %
+ Japan        | 3.32  | 0.076 % | 1 %
+ Saudi Arabia | 2.1   | 0.076 % | 8.45 %
+ China        | 8.85  | 0.075 % | 1.75 %
+ Italy        | 2.0   | 0.069 % | 1.56 %
+ Germany      | 2.40  | 0.049 % | 1.4 %
504
+ % in billion U.S. dollars [Appendix 2]
505
+ 3.4
506
+ F Factor
507
+ The factor F was needed in the equation to weigh the space mission, and it shows whether the
508
+ mission succeeded or failed. The F factor is equal to 1 if the mission succeeded, or 0 if it failed.
509
+ 3.5
510
+ Q Factor
511
+ The factor Q captures the statistical risk factor and the reliability of each country in spaceflight
512
+ missions. Usually, every space mission has a risk factor, e.g. the Apollo mission had a 95% risk of failure
513
+ [NASA 1965], but this information (if it is still produced) is not available to the public. Therefore, since
514
+ we cannot obtain data for every single mission, we adopt a different evaluation method: as a risk factor we
515
+ rely on collected data on the success/failure of each country’s space launches. Those
516
+ data give a specific statistical risk and reliability factor to each country since the year 2010. [Appendix 3]
517
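+ A small sketch of how the Q value can be derived from such a launch log, assuming per-year tuples of
+ (launches, failures) like those later reported in Table 5; the numbers below simply reuse the US row.
+ def failure_rate(launch_log):
+     # launch_log: list of (launches, failures) per year since 2010
+     launches = sum(l for l, f in launch_log)
+     failures = sum(f for l, f in launch_log)
+     return 100.0 * failures / launches if launches else 0.0
+ # US row of Table 5: 275 launches, 9 failures -> about 3.3 %
+ print(failure_rate([(43, 2), (35, 3), (19, 0), (29, 0), (28, 0), (21, 0),
+                     (20, 2), (20, 0), (17, 0), (13, 1), (15, 1), (15, 0)]))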
+ It can happen that some space missions fail. In this scenario, we had imagined a failure factor that
518
+ would act as a feature in the GSS equation. The failures factor arises when you make mistakes over and
519
+ over again, and in this case you always lose the trust of others. The "Success/Authority improvement"
520
+ comes from maintaining your own (high) standard for as long as possible. We chose not to put the
521
+ Failures Factor in the equation, because factor Q covers all space missions (satellite missions,
522
+ supply missions, scientific missions, etc.), and not only the scientific space missions, like those we
523
+ have chosen as the subject of this paper (Table 1).
524
+ A well-known example of a Failures Factor could be the Tianhe mission from China, which
525
+ unfortunately had an uncontrolled stage reentry: the launcher crashed back to Earth without any
526
+ possibility of calculating the final crash site, due to the number of variables in the descent stage. Sadly,
527
+ this inconvenience will often arise for China any time they decide to add a core module
528
+ to their space station, because the Long March 5B (Y2) cannot get the core module into orbit
529
+ (hooked to the Tianhe core stage) without losing control of the rocket on re-entry. [Jones 2021]
530
+ The Long March 5B re-entry provoked concern about the safety of some cities, because
531
+ the rocket had an orbital inclination of 41.5 degrees, meaning the rocket body passes a little farther
532
+ north than New York, Madrid and Beijing and as far south as southern Chile and Wellington, New
533
+ Zealand, and could make its reentry at any point within this area, with obvious concerns for those
534
+ countries.
535
538
+ 4.
539
+ Data & Result
540
+ According to our model/equation, the most important and determinant score is the sentiment
541
+ score, which refers to the reaction to and evaluation of the space mission’s result. The resulting
542
+ online activity gives us a valuable social input to quantify and qualify the international social reaction
543
+ to space missions. Through the analysis of online activity topics, we identified the sentiment
544
+ describing the relation between space missions and geopolitical dynamics. In Table 3 we
545
+ show the sentiment results (S) for the most recent and best-known space missions of the last decade.
546
+ Table 3. S factor - Sentiment analysis
+ Mission    | Hashtag        | Sentiment | Result
+ Tianwen-1  | Tianwen-1      | 0,46447   | Succeeded
+ Tianhe*    | ChineseRocket* | -0,05151  | Succeeded (Side effects)
+ ASAT       | ASAT           | -0,16607  | Succeeded (Side effects)
+ Mars 2020* | Perseverance*  | 0,428263  | Succeeded
+ Mars 2020  | Mars2020*      | 0,487525  | Succeeded
+ James Webb | JWST*          | 0,480994  | Succeeded
+ Rosetta    | Rosetta        | 0,429542  | Semi-Succeeded
+ * Some hashtags are derived not from the mission itself, but from the effect achieved (losing control
+ of the Chinese rocket on re-entry) or the main subject of the mission (Perseverance).
582
+ The results clearly show that an increase in side effects during a mission increases the negative
583
+ sentiment scores from the space mission community.
584
+ Regarding the economic resources invested (B), we collected data from different sources and
585
+ arranged them in billions of US dollars. Table 4 was the most difficult one to build, because it is hard to obtain
586
+ data from non-directly-democratic states, like China, and also because of the inclusion of both civilian and
587
+ military space budgets for security missions.
588
+ Table 4. B factor - Space budgets
+ Country | 2010  | 2011  | 2012  | 2013  | 2014  | 2015  | 2016  | 2017  | 2018  | 2019  | 2020   | 2021
+ Japan   | 1.67  | 1.59  | 1.68  | 1.6   | 1.76  | 1.56  | 1.33  | 1.32  | 3.06  | 1.34  | 3.32   | 4.14
+ Russia  | 2.4   | 3.8   | -     | 5.6   | 4.39  | 2.42  | 3.18  | -     | 4.17  | 3.58  | 3.58   | 1.92
+ EU      | 4,19  | 4,52  | 4,56  | 4,85  | 4,85  | 4,65  | 5,95  | 6,52  | 6,35  | 6,49  | 5,52   | 5,16
+ China   | -     | -     | -     | -     | 2.66  | -     | 4.91  | -     | 5.83  | 8.00  | 8.85   | 10.28
+ US      | 18.72 | 18.44 | 17.77 | 16.86 | 17.64 | 18.01 | 19.3  | 19.50 | 20.73 | 21.5  | 22.629 | 23.27
667
+ In billion U.S. dollars [Appendix 2] [Appendix 3]
668
+ The data on statistical failures used in our equation, which mark the quality of the
669
+ space launch system of each country (Q), are shown in Table 5. The data, collected from the year
670
+ 2010 to 2021 (2022 is not finished yet), show the total number of core stages sent
671
+ into space, with the overall launch count outside brackets and the failures inside brackets. At the end, we
672
+ also show the total launches and failures between 2010 and 2021 and the failure percentage. The
673
+ failure percentage is the Q value in our equation, representing the failure probability of each
674
+ country’s launch system for each launch.
675
+ It is possible to see that the Chinese government has had many more launches than the US and Europe;
676
+ this is due to the low number of satellites (for communication and security missions) orbiting
677
+ the Earth operated by the Chinese government. The Chinese government has invested a huge amount of
678
681
+ Table 5. Q factor - Statistical failures
+ Country   | 2010  | 2011  | 2012  | 2013  | 2014  | 2015  | 2016  | 2017  | 2018  | 2019  | 2020  | 2021  | TOT      | Failures %
+ China     | 55(3) | 39(4) | 34(2) | 39(1) | 18(2) | 22(2) | 19(0) | 16(0) | 15(1) | 19(0) | 19(1) | 15(0) | 310 (16) | 5,2
+ Russia    | 25(2) | 17(0) | 25(0) | 20(1) | 20(1) | 19(1) | 27(3) | 35(3) | 32(1) | 26(2) | 32(4) | 31(1) | 309(19)  | 6,1
+ US        | 43(2) | 35(3) | 19(0) | 29(0) | 28(0) | 21(0) | 20(2) | 20(0) | 17(0) | 13(1) | 15(1) | 15(0) | 275(9)   | 3,3
+ Europe    | 6(0)  | 5(1)  | 6(1)  | 8(1)  | 9(0)  | 9(0)  | 8(0)  | 7(0)  | 5(0)  | 8(0)  | 7(0)  | 6(0)  | 84(3)    | 3,6
+ India     | 2(1)  | 2(0)  | 6(0)  | 7(0)  | 5(1)  | 7(0)  | 5(0)  | 4(0)  | 3(0)  | 2(0)  | 3(0)  | 3(2)  | 49(4)    | 8,2
+ Japan     | 3(0)  | 4(0)  | 2(0)  | 6(0)  | 7(1)  | 4(0)  | 4(0)  | 4(0)  | 3(0)  | 2(0)  | 3(0)  | 2(0)  | 42(1)    | 2,4
+ New Zeal. | 6(1)  | 7(1)  | 6(0)  | 3(0)  | 1(1)  | -     | -     | -     | -     | -     | -     | -     | 23(3)    | 13,6
+ Iran      | 1(1)  | 2(1)  | 2(2)  | -     | -     | -     | 1(0)  | -     | -     | 3(2)  | 1(0)  | -     | 10(6)    | 60
817
+ Total launch by year (total launch failures by year) [Appendix 4]
818
+ money to get enough satellites to make public infrastructure work. Also, Russian launch operations
819
+ have increased since 2011, due to the retirement of the Space Shuttle (last mission 21 July
820
+ 2011); as a result, US astronauts had to travel on Russian Soyuz spacecraft to get to the
821
+ International Space Station. We have combined the data through the equation (S, G, B, F and Q parameters),
822
+ and in Figure 1 and Table 6 we show the score after each mission succeeded or failed.
823
+ Figure 1. GSS’s info-graphic of the main space missions
824
+ As is easy to see, there are negative scores (highlighted in red in Figure 1), due
825
+ to the risk that the ASAT test and the rocket’s space debris posed to the population on Earth and to the ISS.
826
+ The Tianwen-1 and Rosetta missions have a high GSS score also because each was a first huge milestone
827
+ for the respective space agency (CNSA and ESA). These data show the difference between a
828
+ successful mission that achieves its goal and a successful mission that still achieves its goal but with side
829
+ effects.
830
+ 4.1
831
+ GSS trend in time
832
+ We have tried to quantify a GSS value for space research missions only, between 2010 and 2021, for
833
+ each country, due to the fact that ASAT is a military space mission. So we have gathered data on all
834
851
+ Table 6. Geopolitical Space Score
+ Mission      | Geopolitical Space Score
+ Tianwen-1    | 9,547438889
+ ChineseRock  | -0,208492857
+ ASAT         | -0,659007937
+ Perseverance | 2,198416733
+ Mars 2020    | 2,502628333
+ JWST         | 2,469102533
+ Rosetta      | 7,588575333
868
+ factors of the equation, but unfortunately we could not get the Sentiment Factor (S) for all of them, because
869
+ lesser-known space missions usually do not generate much reaction on social networks. As said before, to
870
+ evaluate the sentiment of a mission, that mission should be very important for humankind, or at
871
+ least sufficiently viral in the space community, and many missions unfortunately become known to wider
872
+ society only through newspapers and occasionally TV news. To solve this problem, we assign a
873
+ mean S factor of 0,425 to successful missions and a mean of -0,125 to failed missions. For the
874
+ other well-known missions (Table 1) we used the original sentiment. Regarding the economic
875
+ resources invested (B), we did not get the full budget planning over the
876
+ years for China and Russia, so we estimated the missing data in Table 4 by linear interpolation
877
+ between the available values.
878
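+ One possible way to implement this estimation of the missing budgets is a simple linear interpolation
+ between the available yearly values, as sketched below with pandas; the numbers are copied from the
+ Russia row of Table 4, and the exact fitting procedure used in the paper may differ.
+ import pandas as pd
+ years = list(range(2010, 2022))
+ russia = [2.4, 3.8, None, 5.6, 4.39, 2.42, 3.18, None, 4.17, 3.58, 3.58, 1.92]
+ series = pd.Series(russia, index=years, dtype="float64")
+ # fill the missing 2012 and 2017 budgets by interpolating between neighbouring years
+ filled = series.interpolate(method="linear")
+ print(filled.loc[[2012, 2017]])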
+ Appendix 5 - 6
879
+ Figure 2. GSS - Research missions
880
+ As Figure 2 shows, it is possible to see the fluctuations characterised by the missions. In addition,
881
+ it is also possible to see the type of strategy and activity of each country/agency.
882
+ For example, ESA, which is characterised by the limitation of not having a launch station on the
883
+ east coast, has specialised in international collaboration, and it is possible to see how ESA manages
884
+ missions with fluctuations of about one year (given the many collaborations, especially with NASA).
885
+ [Figure 2 plot: GSS of research missions over 2009-2021 for China, Russia, USA, and ESA.]
914
+ Moreover, it should be noted that the Rosetta mission, like the Tianwen-1 mission, could be seen
915
+ as a "baptism of independence", derived from the "not because they are easy, but because they are hard"
916
+ philosophy; they were, indeed, the first to do that, showing their competence and skill to the other
917
+ space superpowers like the US and Russia. China, on the other hand, is concentrating on a few
918
+ space research missions, since it is a new superpower and has yet to stabilise its infrastructure and
919
+ telecommunication network in space. Instead, the US has/had many active missions for a long
920
+ time; in fact, its score is relatively lower than the others precisely because of its resilience. People
921
+ expect the US to fail less than the others, so the astonishing/sensational feeling can only be found in
922
+ very difficult missions, or when they fail badly.
923
+ Appendix 5 - 6
924
+ Figure 3. GSS - Research missions cumulative
925
+ 5.
926
+ Analysis
927
+ The analyses support the expectations: 1) that it is possible to numerically assess geopolitical
928
+ factors; 2) that even in the case of a successful mission, there is the possibility of a negative geopolitical
929
+ feedback; 3) that public spending on space missions can also positively improve the geopolitical level
930
+ of the investing state; 4) that the geopolitical level can vary over time, depending on the success of
931
+ the missions, proportional to the investment.
932
+ An increase in the social policy inherent in space missions, in addition to providing public
933
+ employment with low-high qualification (and therefore removing people from the poverty line),
934
+ would improve the risk factor (Q Factor) decreasing it to each future space mission, thus improving
935
+ the geopolitical sentiment and also the geopolitical weight of the state.
936
+ In fact, as already mentioned, the most resilient state is the US. They are not first on the geopolitical
937
+ level despite having a high number of missions both quantitatively and qualitatively; indeed, this
938
+ score is explained by the fact that they rarely fail, thus showing high-quality work, given the quality
939
+ of the workers.
940
+ 6.
941
+ Conclusions
942
+ Our study shows that it is possible to use computational social methods to quantify geopolitical
943
+ dynamics for every country. In our case, we have shown how space missions influence geopolitical
944
970
+ dynamics, supported by a security and social policy approach. In a more specific way, we have
971
+ demonstrated (1) A socio-physical method for assessing the geopolitical level of any country, based
972
+ on its goal and its system-organization; (2) That space missions, even if successful, can bring negative
973
+ sentiment, which reflects on the geopolitical value of the state itself; (3) That social policy is
974
+ also an investment in the defense and security system, and that increased investment of government
975
+ spending on space missions, also has a spillover effect on geopolitical value. From a sociological and
976
+ socio-physical point of view, through the contribution of a Computational social science model, the
977
+ GSS equation can be used to evaluate the geopolitical level not only in the spatial domain, but also in
978
+ other "competitions", such as sports competitions (Olympics, world sports championships) or music
979
+ competitions or international wars, where there are enough social reactions (S Factor) and enough
980
+ statistical data (Q - B Factor) to evaluate both the event and the organization/country participating
981
+ in the event. Therefore, our work would emphasise political policy in general, and social policy in
982
+ particular, as not only economic and financial aid, but also a more complex
983
+ system, related to complex dynamics, as we demonstrated that social policy also funds the
984
+ defense, security, and geopolitical systems. It has become not only an aid to the population, but much
985
+ more. Moreover, an increase in the spending policy for space missions (which we define as part of
986
+ social policy), also has a geopolitical impact on the state that promotes them and, by reflection, on its
987
+ society.
988
+ 7.
989
+ Declaration of interest statement
990
+ The authors declare no conflict of interest.
991
+ 8.
992
+ Authors details
993
+ 8.1
994
+ Andrea Russo
995
+ Is a PhD candidate in Complex Systems at the University of Catania. He is currently working at the
996
+ Department of Physics and Astronomy. He collaborated with CNR Ibam and has also worked
997
+ on projects involving technology and society. His main research fields and interests are focused on
998
+ the study and development of computational social methods to explain social complexity, in
999
+ particular fields like Politics - Economics - Business and Defense-Security sector applications. ORCID:
1000
+ 0000-0003-3816-0539
1001
+ Corresponding author. Email: Andrea.russo@phd.unict.it
1002
+ 8.2
1003
+ Davide Coco
1004
+ M.Sc. in Space and Astronautical Engineering at University of Rome "Sapienza", with several years
1005
+ of experience in mission design and preliminary studies of space missions.
1006
+ His main goal is to further specialize and be involved in conceptual studies, system requirements
1007
+ definition, mission proposal writing, space system modeling and simulation, launcher or mission
1008
+ design and operations.
1009
+ He is currently leading the design and development of Cubesat’s subsystems in a small Italian startup,
1010
+ Human4Research.
1011
+ ORCID: 0000-0001-8010-9468
1012
+ Email: davide.coco@outlook.it
1013
+ Acknowledgement
1014
+ We thank Taha Yasseri for his crucial help with the equation.
1015
1018
+ References
1019
+ ASI. 2020. Documento di visione strategica per lo spazio (dvss). ASI Archive 1.
1020
+ Bak, Per, and Kan Chen. 1991. Self-organized criticality. Scientific American 264 (1): 46–53.
1021
+ BBC, News. 2021. Russian anti-satellite missile test draws condemnation. bbc.com.
1022
+ Benedikter, Boris, Alessandro Zavoli, Guido Colasurdo, Simone Pizzurro, and Enrico Cavallini. 2021. Convex approach
1023
+ to three-dimensional launch vehicle ascent trajectory optimization. Journal of Guidance, Control, and Dynamics 44 (6):
1024
+ 1116–1131. https://doi.org/10.2514/1.G005376.
1025
+ . 2022. Convex optimization of launch vehicle ascent trajectory with heat-flux and splash-down constraints. Journal of
1026
+ Spacecraft and Rockets 59 (3): 900–915. https://doi.org/10.2514/1.A35194.
1027
+ Bozzo, Brian. 2018. Not because It Is Easy: Exploring National Incentives for Commercial Space Exploration through a
1028
+ Geopolitical Lens Notes [in eng]. Drexel Law Review 11 (2): 597–650. Accessed August 25, 2021. https://heinonline.org/
1029
+ HOL/P?h=hein.journals/drexel11%5C&i=605.
1030
+ Cihon, Peter, and Taha Yasseri. 2016. A biased review of biases in twitter studies on political collective action. Frontiers in
1031
+ Physics 4:34.
1032
+ Cioffi-Revilla, Claudio. 2014. Introduction to computational social science. London and Heidelberg: Springer.
1033
+ CNSA. 2021. China’s tianwen-1 probe sends back mars landing visuals. http://www.cnsa.gov.cn.
1034
+ CNSA, Site. 2021. China launches first section of its massive space station. http://www.cnsa.gov.cn.
1035
+ CNSA.gov. 2020. China successfully launches probe in first mars mission. http://www.cnsa.gov.cn.
1036
+ Dittmer, Jason. 2014. Geopolitical assemblages and complexity. Progress in Human Geography 38 (3): 385–401.
1037
+ Dmitriev, Andrey, Victor Dmitriev, and Stepan Balybin. 2019. Self-organized criticality on twitter: phenomenological theory
1038
+ and empirical investigation based on data analysis results. Complexity 2019.
1039
+ Downing, Joseph, and Richard Dron. 2020. Theorising the ‘security influencer’: speaking security, terror and muslims on
1040
+ social media during the manchester bombings. new media & society, 1461444820971786.
1041
+ ESA. 2014. Rosetta operation. esa.int.
1042
+ Giannangeli, Marco. 2022. Global tensions grow as chinese rocket scientist defects to the west. Express.co.uk 1 (1): 1.
1043
+ Jones, Andrew. 2021. China acknowledges long march 5b situation as rocket heads for weekend reentry. spacenews.com.
1044
+ Kolli, Naimisha, N Balakrishnan, and KR Ramakrishnan. 2017. On quantifying predictability in online social media cascades
1045
+ using entropy. In Proceedings of the 2017 ieee/acm international conference on advances in social networks analysis and mining
1046
+ 2017, 109–114.
1047
+ Lazer, David, Alex Pentland, Lada Adamic, Sinan Aral, Albert-László Barabási, Devon Brewer, Nicholas Christakis, Noshir
1048
+ Contractor, James Fowler, Myron Gutmann, et al. 2009. Computational social science. Science 323 (5915): 721–723.
1049
+ Lazer, David MJ, Alex Pentland, Duncan J Watts, Sinan Aral, Susan Athey, Noshir Contractor, Deen Freelon, Sandra
1050
+ Gonzalez-Bailon, Gary King, Helen Margetts, et al. 2020. Computational social science: obstacles and opportunities.
1051
+ Science 369 (6507): 1060–1062.
1052
+ Lymperopoulos, Ilias N. 2017. Dynamic response and transfer function of social systems: a neuro-inspired model of collective
1053
+ human activity patterns. Neural Networks 94:125–140.
1054
+ NASA. 1965. Apollo reliability and quality assurance program quarterly status report, second quarter 1965. NasaArchive 1:83.
1055
+ NASA, JWST. 2020. James webb space telescope. nasa.gov.
1056
+ NASA.gov. 2021. Mars 2020 perseverance rover. nasa.gov.
1057
+ Peng, Sancheng, Aimin Yang, Lihong Cao, Shui Yu, and Dongqing Xie. 2017. Social influence modeling using information
1058
+ theory in mobile social networks. Information Sciences 379:146–159.
1059
+ Pioppi, Stefano. 2021. Ipersonica e cyber, russia e cina sfidano la nato. parla l’ammiraglio bauer. formiche.net 1 (1): 1.
1060
+ Salas, Erick Burgueño. 2021. Government space program spending of the leading countries in the world 2014-2020. statista.com.
1061
+ Spagnulo, Marcello. 2020. Ecco lo spazioplano del pentagono, tra misteri e tecnologie futuribili. formiche.net 1 (1): 2.
1062
1065
+ Vidgen, Bertie, and Taha Yasseri. 2020. Detecting weak and strong islamophobic hate speech on social media. Journal of
1066
+ Information Technology & Politics 17 (1): 66–78.
1067
+ Wikipedia. 2022. Curiosity (rover). https://en.wikipedia.org/wiki/Curiosity_(rover)#Cost.
1068
+ Yin, Jie, Sarvnaz Karimi, Andrew Lampert, Mark Cameron, Bella Robinson, and Robert Power. 2015. Using social media to
1069
+ enhance emergency situation awareness. In Twenty-fourth international joint conference on artificial intelligence.
1070
+ Appendix 1.
1071
+ Equivalence of the 1962 USD to 2022 USD
1072
+ More specifically: 231 408 697 113.64 USD $ .
1073
+ Inflation over the period: 825.63 %
1074
+ Index used: USCPI31011913 (Bureau of Labor Statistics).
1075
+ Initial Index: 309.12, Final Index: 2 861.33
1076
+ Link=https://fxtop.com/it/conversione-valute-passato.php
1077
+ Appendix 2.
1078
+ Space annual budget (∼ approximately) over years and percentage of the national budget
1080
+ It is very hard to collect data for the most important space agencies; this is due to the different currencies
1081
+ (Yen - EURO - USD), and also to the difficulty of obtaining data from non-democratic states, like
1082
+ China, due to information security and also due to the inclusion of both civilian and military space
1083
+ spending under global security.
1084
+ Source:
1085
+ 1. https://www.statista.com/statistics/745717/global-governmental-spending-on-space-programs-
1086
+ leading-countries/
1087
+ 2. https://www.euroconsult-ec.com/press-release/government-space-budgets-driven-by-space-exploration-
1088
+ and-militarization-hit-record-92-billion-investment-in-2021-despite-covid-with-1-trillion-forecast-
1089
+ over-the-decade/
1090
+ 3. https://spacenews.com/op-ed-global-government-space-budgets-continues-multiyear-rebound/
1091
+ 4. https://stacker.com/stories/2524/countries-spend-most-space-exploration
1092
+ 5. https://www.oecd.org/sti/inno/space-forum/space-economy-for-people-planet-and-prosperity.pdf
1093
+ 6. https://global.jaxa.jp/about/transition/index.html
1094
+ 7. esa.int
1095
+ 8. https://en.wikipedia.org/wiki/Budget_of_NASA
1096
+ 9. https://global.jaxa.jp/
1097
+ [Salas 2021]
1098
+ Military budget:
1099
+ 1. https://databank.worldbank.org/reports.aspx?source=2&type=metadata&series=MS.MIL.XPND.GD.ZS#
1100
+ 2. https://www.defensenews.com/global/2021/04/26/the-world-spent-almost-2-trillion-on-defense-
1101
+ in-2020/
1102
+ Appendix 3.
1103
+ Billion U.S. dollars
1104
+ Exchange rate 1.1269 at 27/02/2022 02:50 28 Feb 2020 - 25 Feb 2022
1105
+ Appendix 4.
1106
+ Statistical failures
1107
+ Annual space reports:
1108
+ From 2010 to 2021 Launch Log
1109
+ Source = https://www.spacelaunchreport.com/index.html
1110
1113
+ Appendix 5.
1114
+ GSS - Research Missions
1115
+ Table 7. GSS related to space research missions
+ Country | 2009 | 2010        | 2011         | 2012         | 2013         | 2014        | 2015        | 2016        | 2017 | 2018        | 2019         | 2020        | 2021
+ China   | -    | -           | -0,541666667 | 4,969230769  | 5,383333333  | -           | -           | -           | -    | -           | 1,615        | 1,490577956 | -
+ Russia  | -    | -           | -0,334429825 | -            | -            | -           | -           | 2,389937107 | -    | -           | 1,75719055   | -           | -
+ USA     | -    | 0,808621288 | 0,464438894  | 1,026104163  | -0,040508492 | 0,492783694 | 0,512710274 | 0,976252159 | -    | 0,776134433 | 0,876356589  | 0,011896792 | 0,942333613
+ ESA     | -    | 4,647391567 | -            | -            | 2,648339061  | -           | 2,762246117 | 1,128012708 | -    | 2,654855643 | -            | 1,903820817 | 2,11289354
+ Japan   | -    | 4,542027057 | -            | -2,976190476 | -            | 4,277597403 | -           | -           | -    | 5,740740741 | -7,462686567 | -           | -
+ TOT     | -    | 3,332679971 | -0,137219199 | 1,006381485  | 2,663721301  | 2,385190548 | 1,637478196 | 1,498067325 | -    | 3,057243606 | -0,803534857 | 1,135431855 | 1,527613577
1214
+ GSS - Research missions
1215
+ Appendix 6.
1216
+ Country Succeeded and Failed Space Research Mission
1217
+ Table 8. Country Succeeded(Failed) Space Research Mission
1218
+ Year
1219
+ China
1220
+ Russia
1221
+ USA
1222
+ ESA
1223
+ Japan
1224
+ India
1225
+ 2010
1226
+ -
1227
+ -
1228
+ Deep Impact
1229
+ Rosetta
1230
+ Akatsuki
1231
+ -
1232
+ -
1233
+ -
1234
+ Stardust
1235
+ -
1236
+ IKAROS (Shin’en)
1237
+ -
1238
+ 2011
1239
+ (Yinghuo-1)
1240
+ (Fobos-Grunt)
1241
+ Dawn
1242
+ -
1243
+ -
1244
+ -
1245
+ 2012
1246
+ Chang’e 2
1247
+ -
1248
+ MSL Curiosity
1249
+ -
1250
+ (PROCYON)
1251
+ -
1252
+ 2013
1253
+ Chang’e 3
1254
+ -
1255
+ (Deep Impact)
1256
+ Gaia
1257
+ -
1258
+ -
1259
+ 2014
1260
+ -
1261
+ -
1262
+ MAVEN
1263
+ -
1264
+ Shin’en 2
1265
+ Mangalyaan
1266
+ 2015
1267
+ -
1268
+ -
1269
+ DSCOVR
1270
+ LISA Pathfinder
1271
+ -
1272
+ -
1273
+ -
1274
+ -
1275
+ New Horizons
1276
+ -
1277
+ -
1278
+ -
1279
+ -
1280
+ -
1281
+ Dawn
1282
+ -
1283
+ -
1284
+ -
1285
+ 2016
1286
+ -
1287
+ ExoMars 2016 (Schiaparelli EDM lander)
1288
+ Juno
1289
+ ExoMars 2016 (Schiaparelli EDM lander)
1290
+ -
1291
+ -
1292
+ 2017
1293
+ -
1294
+ -
1295
+ -
1296
+ -
1297
+ -
1298
+ -
1299
+ 2018
1300
+ -
1301
+ -
1302
+ Parker Solar Probe
1303
+ MASCOT
1304
+ Hayabusa2
1305
+ -
1306
+ -
1307
+ -
1308
+ MarCO A "WALL-E"
1309
+ BepiColombo
1310
+ BepiColombo
1311
+ -
1312
+ -
1313
+ -
1314
+ MarCO B "EVE"
1315
+ -
1316
+ -
1317
+ -
1318
+ -
1319
+ -
1320
+ OSIRIS-REx
1321
+ -
1322
+ -
1323
+ -
1324
+ -
1325
+ -
1326
+ InSight
1327
+ -
1328
+ -
1329
+ -
1330
+ 2019
1331
+ Chang’e 4
1332
+ Spektr-RG
1333
+ New Horizons
1334
+ -
1335
+ (Minerva II-2)
1336
+ -
1337
+ -
1338
+ -
1339
+ Spektr-RG
1340
+ -
1341
+ -
1342
+ -
1343
+ 2020
1344
+ Chang’e 5
1345
+ -
1346
+ Mars 2020
1347
+ Solar Orbiter
1348
+ -
1349
+ -
1350
+ Tianwen-1
1351
+ -
1352
+ -
1353
+ -
1354
+ -
1355
+ -
1356
+ Beidou
1357
+ -
1358
+ -
1359
+ -
1360
+ -
1361
+ -
1362
+ 2021
1363
+ -
1364
+ -
1365
+ James Webb
1366
+ James Webb
1367
+ -
1368
+ -
1369
+ GSS - Research missions
1370
+
89E1T4oBgHgl3EQf7wWX/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
8dAzT4oBgHgl3EQfgfzo/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:932a9a3768e781909011168e896773dddf90160a4177fc10f6c8cca9ba7b294f
3
+ size 2818093
BdAzT4oBgHgl3EQfv_7e/content/tmp_files/2301.01717v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
BdAzT4oBgHgl3EQfv_7e/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
C9AyT4oBgHgl3EQf4fqw/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a58867fb938839e1b5e4e18ff7e7ae09f1772d290fbb55f878abf24dd1c86b3a
3
+ size 93499
CdFAT4oBgHgl3EQftB4M/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dc212d1798906fbbe3ade440655026029f5e12830fe23568081d19c5a109728c
3
+ size 4390957
CtE0T4oBgHgl3EQfyQKJ/content/2301.02657v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c50f45f5ce0eb5b5a2d5aaa275eb95591b4236457ecec12acf5252c8ee41667a
3
+ size 31911079
CtE1T4oBgHgl3EQf9wbT/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8cfa4cd2db444b86d44c8d0e2208ee89efb429b7654e1c97999ab33d81874e34
3
+ size 8716333
D9E4T4oBgHgl3EQfGQye/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cc24caa93f8e32e0f7b8080019729a72233b6ba22a32f4541d8b241d0ed4771b
3
+ size 276578
F9AzT4oBgHgl3EQfHPuf/content/2301.01042v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b9d10a2dffc705545a72389309c4f5cc73e90d229429fb2fe67a45727072c363
3
+ size 683977
F9AzT4oBgHgl3EQfHPuf/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1a88dcb1411b240edd19402e16e292fa762a330c83a50d58d1311d534d69d7bd
3
+ size 185860
FNAzT4oBgHgl3EQfw_7Q/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cf386933d8f09ed642773b4072d1de379f8b2e0a183ca912f24e24a34d6c0804
3
+ size 114110
FNE4T4oBgHgl3EQf7A5n/content/tmp_files/2301.05336v1.pdf.txt ADDED
@@ -0,0 +1,2041 @@
3
+ Multi-task Weakly Supervised Learning for
4
+ Origin–Destination Travel Time Estimation
5
+ Hongjun Wang, Zhiwen Zhang, Zipei Fan, Jiyuan Chen,
6
+ Lingyu Zhang, Ryosuke Shibasaki and Xuan Song
7
+ Abstract—Travel time estimation from GPS trips is of great importance to order duration, ridesharing, taxi dispatching, etc. However,
8
+ the dense trajectory is not always available due to the limitation of data privacy and acquisition, while the origin-destination (OD) type
9
+ of data, such as NYC taxi data, NYC bike data, and Capital Bikeshare data, is more accessible. To address this issue, this paper starts
10
+ to estimate the travel time of OD trips in combination with the road network. Subsequently, a Multi-task Weakly Supervised Learning
+ Framework for Travel Time Estimation (MWSL-TTE) is proposed to infer the transition probability between road segments, and the
12
+ travel time on road segments and intersection simultaneously. Technically, given an OD pair, the transition probability intends to recover
13
+ the most possible route. And then, the output of travel time is equal to the summation of all segments’ and intersections’ travel time in
14
+ this route. A novel route recovery function has been proposed to iteratively maximize the current routes’ co-occurrence probability, and
15
+ minimize the discrepancy between routes’ probability distribution and the inverse distribution of routes’ estimation loss. Moreover, the
16
+ expected log-likelihood function based on a weakly-supervised framework has been deployed in optimizing the travel time from road
17
+ segments and intersections concurrently. We conduct experiments on a wide range of real-world taxi datasets in Xi’an and Chengdu
18
+ and demonstrate our method’s effectiveness on route recovery and travel time estimation.
19
+ Index Terms—Travel Time Estimation, Urban Computing, Weakly Supervised Learning
20
+ !
21
+ 1
22
+ INTRODUCTION
23
+ With the emergence of newly-developed applications, es-
24
+ timating travel time has become one of the hottest top-
25
+ ics, which is of great importance to route planning, taxi
26
+ dispatching, and ride-sharing in recent years. In the early
27
+ phase, the data of real traffic state is mainly collected from
28
+ loop sensors, which can only provide the individual travel
29
+ time in a certain road segment and usually face the sparse
30
+ issue. Recently, an alternative solution is to use floating
31
+ car data. The floating cars equipped with GPS receivers,
32
+ including taxis, buses, private cars, and online ride-hailing,
33
+ record time stamps, longitude, latitude, speed, and other in-
34
+ formation at regular intervals, which can reflect the vehicle’s
35
+ operation status.
36
+ As a result, a good deal of travel time estimation tech-
37
+ niques based on floating car data have been proposed in
38
+ different scenes, such as dense trajectory [1], [2], [3], low-
39
+ 
+ • Hongjun Wang, Jiyuan Chen and Xuan Song are with (1) the SUSTech-UTokyo Joint
+ Research Center on Super Smart City, Department of Computer Science and
+ Engineering, and (2) the Research Institute of Trustworthy Autonomous Systems,
+ Southern University of Science and Technology (SUSTech), Shenzhen, China.
+ E-mail: wanghj2020,11811810@mail.sustech.edu.cn and songx@sustech.edu.cn.
+ • Zipei Fan, Ryosuke Shibasaki and Zhiwen Zhang are with The University of
+ Tokyo, 5-1-5 Kashiwanoha, Kashiwa-shi, Chiba, 277-8561, Japan; emails:
+ zhangzhiwen@csis.u-tokyo.ac.jp, fanzipei@iis.u-tokyo.ac.jp, and
+ shiba@skl.iis.u-tokyo.ac.jp.
+ • Lingyu Zhang is with the Research Institute of Trustworthy Autonomous Systems,
+ Southern University of Science and Technology (SUSTech), Shenzhen, China.
+ Email: zhanglingyu@didiglobal.com.
+ • Corresponding authors: Zipei Fan, Xuan Song.
+ • Hongjun Wang and Zhiwen Zhang contributed equally.
89
+ sampling-rate trajectory [4], [5], [6]. However, due to the
90
+ privacy concern and data acquisition problems, extensive
91
+ works focus on inferring the travel time from OD data,
92
+ which only gives the origin-destination location, such as
93
+ finding nearby neighbors [7], such as distance-based [8] or
94
+ representation-based [9] neighbors. In general, the OD type
95
+ of data is more available than the dense type, and multiple
96
+ sources of OD data have been released, for example, the
97
+ NYC taxi data1, NYC bike data2 and Capital Bikeshare
98
+ Data3. However, as far as we know, previous literature omits
99
+ the factor of the road network, which often leads to a high
100
+ estimating error, since the total travel time of a trajectory
+ is equal to the sum of the travel times of all road segments
+ and intersections (e.g., waiting at traffic signals). Any change
+ in the traffic condition of a road segment will affect the total
+ travel time. With the road network introduced, here we face three
105
+ intractable problems:
106
+ 1) How to recover the route when only OD pairs are given.
107
+ 2) How to effectively estimate the travel time when the route
108
+ has been obtained.
109
+ 3) How to learn features from complex road network.
110
+ At first glance, given a pair of origin-destination, the short-
111
+ est path algorithm (e.g., Dijkstra’s algorithm) is a natural
112
+ choice for problem 1) because people usually choose paths
113
+ that are similar to the shortest path with less number of
114
+ turns. However, the shortest path in the geometry aspect
115
+ may not always match the definition of the ’shortest path’
116
+ in the driver’s route choice. For example, some resident or
117
+ 1. https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page
118
+ 2. https://data.cityofnewyork.us/Transportation/Bike-Data/374u-
119
+ 5ie7
120
+ 3. https://www.capitalbikeshare.com/system-data
121
+ arXiv:2301.05336v1 [cs.AI] 13 Jan 2023
122
+
125
+ tertiary types of road are shorter than primary and trunk
126
+ types of road, but they are more vulnerable to congestion,
127
+ since the complex traffic state (many pedestrians), or nar-
128
+ rowness of road width. How to encode the road features
129
+ into the road search procedure? One way can be done by
130
+ learning the transition probability between road segments
131
+ and inferring the route via the search for the maximum
132
+ route probability, where the superiority of this approach is
133
+ that the character of the road will be considered in every
134
+ search process. Inspired by [10], this article employs the
135
+ graph convolutional network (GCN) to learn the features, such
136
+ as road type, road length, road sign, and road lanes, of
137
+ each road segment. Consequently, the problem of ignoring
138
+ natural road networks in the shortest-path algorithm can be
139
+ alleviated. Fig. 1 shows an example of searching candidate
140
+ path through transition probability. Based on the Markov
141
+ assumption, the route probability can be obtained by
+ multiplying the transition probabilities, or equivalently by summing their
+ log probabilities. The candidate paths r1, r2, r3 are acquired
144
+ by Depth First Searching (DFS) algorithm with pruning
145
+ operation.
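+ As a minimal illustration of this principle (not the paper's implementation), the
+ Python sketch below scores a candidate route by summing log transition
+ probabilities under the Markov assumption; the transition table `trans_prob` is a
+ hypothetical placeholder loosely following the toy values in Fig. 1.
+ import math
+ 
+ def route_log_prob(route, trans_prob):
+     """Return log P(route) = sum_i log P(e_i | e_{i-1})."""
+     log_p = 0.0
+     for prev_seg, next_seg in zip(route[:-1], route[1:]):
+         log_p += math.log(trans_prob[prev_seg][next_seg])
+     return log_p
+ 
+ # Toy transition probabilities between road segments (illustrative only).
+ trans_prob = {"e1": {"e3": 0.6, "e2": 0.4}, "e3": {"e5": 0.8, "e8": 0.2}, "e5": {"e7": 0.9}}
+ print(route_log_prob(["e1", "e3", "e5", "e7"], trans_prob))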
146
+ As we mentioned, this paper infers the overall travel
147
+ time of a given route by summing up the travel time of
148
+ all the road segments and intersections on that route. This
149
+ raises the question of how to estimate the travel time of road
150
+ segments and intersections reasonably. One concern is how
151
+ to model the differences in individual driving behaviors,
152
+ since given a specific OD pair, the travel time in the same
153
+ time interval is varied. To address this problem, we here
154
+ model the travel time with uncertainty, which means that
155
+ each road/intersection travel time follows certain distribu-
156
+ tion (e.g., Lognormal). Those roads/intersections tend to
157
+ be provided a large variance σ, for example, with large
158
+ crowd flow. In conclusion, the uncertain travel time of
159
+ route generated by route searching has been obtained. To
160
+ effectively optimize the distribution, this paper formulates
161
+ it as the inexact supervision learning problem [11]. One of
162
+ the most well-known examples of inexact supervision is the
163
+ drug activity prediction problem [12], which predicts if a
164
+ molecule induces a given effect. Inexact supervision deals
165
+ with training data arranged in sets called bags, and the labels
166
+ are only annotated on bags. For modeling the uncertainty,
167
+ given a pair of origin and destination, we consider the
168
+ travel time label annotated on the unobserved routes (bags)
169
+ and use the normal distribution [13], [14], [15] to model
170
+ the uncertainty for each road segment and the interaction
171
+ travel time in bags. We drive the objective function (Eq.
172
+ (2)) based on the assumption of aggregated observation and
173
+ Markov chain, and solve it with a general inexact learning
174
+ probabilistic framework [16] and an iterative route recovery
175
+ algorithm.
176
+ After solution 1) and 2) has been discussed, we finally
177
+ introduce how to learn the meaningful features from a
178
+ complicated road network, since multiple factors will affect
179
+ the traffic condition, such as road types, road lanes, speed
180
+ limitations, and traffic signals. To learn the intricate relation
181
+ of road networks, many existing works modeling the prob-
182
+ lem of estimating travel time from either road segments [1],
183
+ [3] or intersections [17], but do not assemble those features
184
+ simultaneously. However, we argue that this will cause error
185
+ accumulation with the road segments increasing. To fill this
186
+ [Fig. 1 diagram: a road network with transition probabilities on the links;
+ candidate paths between the source and target, e.g., r1: e1 → e3 → e5 → e7
+ (log 2.5), r2: e1 → e3 → e8 → e5 → e7 (log 2.2), r3: e1 → e2 → e4 → e6 → e7
+ (log 2.3); and the dual road-based and intersection-based graphs.]
226
+ Fig. 1. The motivation in this paper is illustrated above. Given any OD
227
+ pair, we want to recover the path by searching the maximum route
228
+ probability learned by the model, and construct a dual graph from road-
229
+ based and intersection-based aspects to estimate the travel time of road
230
+ segment and intersection respectively.
231
+ gap, in this work, we construct a dual graph comprised of
232
+ the road-based graph and the intersection-based graph, and
233
+ estimate the travel time by summing up all road segments
234
+ and intersections on one route. In the meanwhile, we also
235
+ take the connected relation4 into consideration from one
236
+ road segment to one road segment, such as primary →
237
+ secondary, or trunk → residential, and one intersection to one
238
+ intersection, such as tertiary and residential. We introduce the
239
+ Relational Graph Convolutional Networks (R-GCNs) [18] to
240
+ learn the complex connected relation of the road network.
241
+ Moreover, to solve the problem of losing local patterns
242
+ when expanding the receptive neighborhood in GCN, we
243
+ combine the Relational Graph Convolutional Networks, and
244
+ gated recurrent unit (GRU) [19] together as the stacked
245
+ architecture to capture both global and local features [20].
246
+ Fig. 1 represents the procedure to construct the dual graph
247
+ (gray box) of the road network and to recover the route
248
+ from the candidate set that is searched from the transition
249
+ probability.
250
+ The main contributions of this paper can be summarized
251
+ as follows.
252
+ • We propose a multi-task framework to estimate the
253
+ travel time of road segments and intersection, and
254
+ transition probability simultaneously. To the best of
255
+ our knowledge, it is the first attempt to recover the
256
+ route and jointly model the factor of intersection and
257
+ road segments in the general OD travel time estimation
258
+ problem.
259
+ • For the first time, we consider the estimation of the OD
260
+ travel time as the weakly supervised learning problem,
261
+ since the observation of the OD travel time is annotated
262
+ with a bag of unobserved routes. This paper aims to
263
+ infer each road segment and the intersection travel time
264
+ distribution from the aggregation observation.
265
+ • We validate the effectiveness of travel time estimation
266
+ and route recovery using large-scale datasets from the
267
+ real world in Chengdu and Xi’an, respectively, which
268
+ significantly outperform current methods.
269
+ Here, we list the organization in this paper: Sec. 2 gives
270
+ the related works, including weakly supervised learning,
271
+ travel time estimation, as well as route estimation. Sec. 3
272
+ introduces the preliminary knowledge, such as the road net-
273
+ 4. https://wiki.openstreetmap.org/wiki/Key:highway
274
+
277
+ work, the origin-destination. Sec. 4 provides the definition of
278
+ our formulation, assumptions, and objective function. Sec.
279
+ 5 gives the methodology of our MWSL-TTE. Sec. 6 and
280
+ Sec. 7 conduct the qualitative and quantitative experiments
281
+ respectively to demonstrate the superiority of MWSL-TTE.
282
+ Sec. 8 gives a summary of this paper and future work.
283
+ 2
284
+ RELATED WORK
285
+ In this section, we will discuss several relevant topics about
286
+ weakly supervised learning, travel time estimation, as well
287
+ as route estimation.
288
+ 2.1
289
+ Weakly Supervised Learning
290
+ Since the general supervised learning method requires each
291
+ data in the training set to be labeled, this expensive la-
292
+ beling consumes a lot of manpower and time. Therefore,
293
+ learning under the condition of weakly supervised infor-
294
+ mation has become a hot research topic in the field of
295
+ machine learning in recent years [21], [22], [23]. The weakly
296
+ supervised learning methods focus on addressing the low-
297
+ quality labels scenarios 1) incomplete supervision [24] : only
298
+ part of data can be labeled. 2) inexact supervision [12]:
299
+ only have coarse-grained labels . 3) inaccurate supervision
300
+ [25] : only part of the data owns true labels. The task of
301
+ estimating travel time from OD can be considered as the
302
+ inexact supervision belonging to the category of Weakly
303
+ Supervised Learning, where the only observations are the
304
+ total travel time and OD locations, but the accurate routes
305
+ are unknown. Different from traditional approaches [7],
306
+ [26] searching similar historical trajectories from data but
307
+ ignoring the city road network structure, in this paper, we
308
+ aim to infer the potential route from OD by using transition
309
+ probability between road segments, where a set of potential
310
+ routes can be seen as a bag, and the OD travel time can be
311
+ obtained by summing up the estimated times of the road
312
+ segments in bag. To the best of our knowledge, we are the
313
+ first to introduce the problem of travel time estimation into
314
+ the inexact supervision framework.
315
+ 2.2
316
+ Travel Time Estimation
317
+ Various TTE implementations were classified into three
318
+ groups, traditional approaches, deep learning-based ap-
319
+ proaches and graph neural network-based approaches. Tra-
320
+ ditional approaches for TTE include the road-segment-
321
+ based and path-based methods. The road segment-based
322
+ methods [27], [28] coarsely forecast the route travel time
323
+ by summing up all estimated times of roads by using the
324
+ data collected from sensors like magnetometer detectors or
325
+ highway cameras, which omitted the necessary factor of
326
+ intersection and relationship among road segments. And the
327
+ path-based methods address the above challenges mainly by
328
+ nearest neighbors search [7], [26] and trajectory regression
329
+ [28], [29], [30]. Nearest neighbors search (NNS) finds nearby
330
+ historical trajectories according to the assumption that the
331
+ routes with similar origins and destinations own close travel
332
+ time. Trajectory regression methods predict the whole route
333
+ travel time based on the given historical trajectories.
334
+ Recently, deep learning-based approaches have become
335
+ especially important in the task of TTE. These approaches
336
+ can be divided into two groups, classical deep learning-
337
+ based methods and graph deep learning-based methods.
338
+ Some classical methods such as deep neural networks
339
+ (DNNs) [1] and convolutional neural networks (CNNs)
340
+ [2], [31] have been successfully applied in TTE. For exam-
341
+ ple, Deep-TTE [1] proposed a CNN-based framework to
342
+ integrate various types of attribute information (such as
343
+ weather, time ID and driver ID) for TTE. However, most
344
+ of these methods model the road network as a grid-based
345
+ map, but they ignore the graphical structure of real-world
346
+ road network.
347
+ To fully utilize spatial information, GNN is an emerging
348
+ tool to analyze the topological relations of graph-structured
349
+ traffic data. Especially, Spatial-Temporal Graph Neural Net-
350
+ work (STGNN) [32] is a framework that integrates GNN
351
+ and temporal processing modules, which can handle spa-
352
+ tial relations and temporal trends simultaneously. Due to
353
+ the spatio-temporal characteristics of the real-world road
354
+ network, STGNN are widely adopted in TTE. For example,
355
+ diffusion convolutional recurrent neural network (DCRNN)
356
+ [33] modeled the graph-structured traffic data as a diffu-
357
+ sion process on a directed graph and transformed spatio-
358
+ temporal features into a seq2seq framework. ASTGNN [34]
359
+ proposes a trend-aware multi-head attention mechanism to
360
+ capture multiple potential correlations in traffic forecasting.
361
+ However, these works only consider the spatio-temporal at-
362
+ tributes of road segments but ignore the interactive correla-
363
+ tions between intersections and road segments. Meanwhile,
364
+ both the real route of OD pairs and road condition also have
365
+ an important influence on TTE.
366
+ 2.3
367
+ Route Estimation
368
+ Another bunch of research in travel time estimation is to
369
+ solve the issue of sparse trajectory due to the privacy, busi-
370
+ ness competition [4], [5], [6], and limitation of GPS devices,
371
+ or in the scene of ETC [35], [36] and surveillance cameras
372
+ [37]. A common strategy for solving the sparse trajectory is
373
+ to infer the potential route based on the information of the
374
+ road network. Reference [4] applies the inverse reinforce-
375
+ ment learning to learn the latent cost (reward) of a road
376
+ through historical data, and proposed Exact Route Search
377
+ approach to find the maximum probabilistic route based on
378
+ dynamic programming. However, route search-based algo-
379
+ rithms are only adapted in low-sampling rate trajectories,
380
+ but not OD problem, due to the heavy computational cost.
381
+ Because of the large distance between a pair of toll stations
382
+ or surveillance cameras, a frequently used path inferring
383
+ algorithm is based on the Depth First Searching algorithm
384
+ to find all possible simple paths that the one road segment can
385
+ only appear at most once. However, those methods omitting
386
+ the real traffic condition tend to generate the unreal route
387
+ in the path inferring procedure. To this end, in this paper,
388
+ we combine the transition probability and route search
389
+ approach together to find the optimal route based on the
390
+ real travel time and road network structure.
391
+ 3
392
+ PRELIMINARIES
393
+ We start with giving the definition about the road network,
394
+ Origin-Destination, simple path as well as route.
395
+
398
+ [Fig. 2 diagram: graphical model over the variables OD, A, X, Z, W, and T for
+ the K nodes and links on a route.]
405
+ Fig. 2. The graphical model of the data generating process.
406
+ Definition 3.1. Road Network. A road network is a directed
407
+ graph G = (V, E, π), where V denotes the set of nodes, E ⊆ V×V
408
+ is the set of directed edges, and π is a node’s feature set. This paper
409
+ uses Gv and Ge to denote the node-wise and link-wise graphs
410
+ respectively. For the π in Gv, the features of V can be such as,
411
+ junction types, and traffic signals. For the π in Ge, the features of
412
+ v can be such as, road types, road lanes, and speed limits.
413
+ Definition 3.2. Origin-Destination (OD). In this paper, a OD
414
+ pair represents a tuple with {ei, ej, tth}, where ei and ej denote
415
+ the start and end road segment, respectively, and tth is the start
416
+ time interval of a day (e.g., 8:00am-9:00am). Note that we assume
417
+ the traffic conditions for all road segments and intersections are
418
+ invariable within the same time slot.
419
+ Definition 3.3. Simple Path. A simple path tr can be presented
420
+ as a series of time-ordered links. We have tr : e1 → e2 → · · · →
421
+ e|tr|, where each link satisfied ei ̸= ej.
422
+ Definition 3.4. Route. A route r in this paper is a sequence
423
+ alternating with links and intersections. We have r : e1 → v2 →
424
+ e3 → · · · eK−1 → vK, where vi is the intersection of edges pair
425
+ (ei−1, ei), and K is the length of r.
426
+ 4
427
+ PROBLEM FORMULATION
428
+ To overcome the previous issue with ignoring the road
429
+ network in the OD-TTE problem, we here intend to give
430
+ a formulation under the weakly supervised learning. This
431
+ solution is motivated by the advance in weakly supervised
432
+ learning and GCN path inference.
433
+ Given a pair of origin and destination, estimating the
434
+ travel time is to infer the total time cost. Traditionally,
435
+ the formulation of TTE can be divided into two parts: 1)
436
+ inferring the future traffic through historical state [3], [38],
437
+ 2) online infer traffic through real-time trajectories [39]. The
438
+ previous one is mainly related to the robust traffic state,
439
+ which is calculated on either dense trajectories [3] or loop
440
+ detectors [32]. Another one intends to resolve the sparse
441
+ issue by estimating (imputation) citywide-level traffic state
442
+ from fewer real-time trajectories. Since this paper is under
443
+ the OD scenario and hard to give a valid historical traffic
444
+ state, we here follow the online TTE formulation. Formally,
445
+ this paper follows the assumption that the traffic state at
446
+ one road segment in a specific time interval ∆t = 60mins,
447
+ e.g., 10:00 am-11:00 am, under the same distribution, such
448
+ as, Gaussian distribution N(µ, σ) with µ = 60s and σ = 1.
449
+ Subsequently, suppose the current time is t, we train the
450
+ real-time OD pairs in time slot [t, t + ∆t1), the online traffic
451
+ state will be completed and evaluated with the OD pairs in
452
+ [t + ∆t1, t + ∆t).
453
+ TABLE 1
454
+ Partial Symbols Description.
455
+ Notation      Description
+ A             Links' inner transition probability matrix
+ Q             Aggregate function
+ ΩOD           Set of candidate paths from O to D
+ f             Multi-task function
+ X             Features of links and nodes
+ Z             Unobservable travel time of links and nodes
+ Gv and Ge     Node-wise and link-wise graphs, respectively
+ T and T̂       Ground truth and estimated travel time
+ T             Time embedding vector
+ W             Learnable parameters
477
+ About the training procedure of MWSL-TTE, Fig. 2 il-
478
+ lustrates the data generating process with graphical repre-
479
+ sentation. OD are the features of origin-destination loca-
480
+ tions. X1:K5 stands for the features vectors of unobserved
481
+ K nodes and links in route r : e1 → v2 → e3 →
482
+ · · · eK−1 → vK, and Z1:K is the unobservable travel time
483
+ of nodes and links. We assume that r is conditioned on OD
484
+ and transition matrix A, where Ai,j indicates the transition
485
+ probability from ei to ej. Therefore, we have p(r|A; OD) =
486
+ p(X1:K|A; OD), where A is generated by multi-task func-
487
+ tion f with A = f(WA; Ge), Ge is the link graph, and
488
+ W is the learnable parameters. Meanwhile, Z also under
489
+ a parametric distribution p(Z1:K|θz = f(X1:K; WZ; )) =
490
+ p(Z1:K|(µ, σ) = f(X1:K; WZ; )) on the factors of X1:K and
491
+ WZ, where µ and σ are the mean and variance of Gaussian
492
+ distribution. For the relation between Z1:K and aggregate
493
+ observation of travel time T, we have the following defini-
494
+ tion:
495
+ Definition 4.1. Given a route ri : e1 → v1 → e2 → · · · → v|ri|−1 → e|ri| under a
+ pair of OD, and its unobserved travel time Z1:K = {˜te1, ˜tv2, ˜te3, · · · , ˜tvK−1, ˜teK},
+ the aggregate function Q can be defined as
+ Q(Z) = ˜te1 + ˜tv1 + ˜te2 + · · · + ˜tvK−1 + ˜teK,    (1)
+ where the aggregate function Q : Z → T is a mapping from the unobserved
+ variable Z to the observation T. Since we assume Z follows a Gaussian
+ distribution, we can write T = Q(Z) = Σ_i ˜t_i = Σ_i µ_i according to the
+ additivity of independent Gaussians: X + Y ∼ N(µ1 + µ2, σ1² + σ2²), where
+ X ∼ N(µ1, σ1²) and Y ∼ N(µ2, σ2²).
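+ The following sketch (illustrative only, assuming independent Gaussian
+ segment/intersection times) shows how the aggregate function Q collapses the
+ per-component distributions into one route-level distribution.
+ def aggregate_route_time(mus, sigmas):
+     """Q(Z): mean and variance of the total route travel time."""
+     total_mu = sum(mus)
+     total_var = sum(s ** 2 for s in sigmas)
+     return total_mu, total_var
+ 
+ # Example: three road segments and two intersections on one route (seconds).
+ mu, var = aggregate_route_time(mus=[40.0, 65.0, 30.0, 12.0, 8.0],
+                                sigmas=[5.0, 9.0, 4.0, 3.0, 2.0])
+ print(mu, var)  # 155.0 and the total variance of the route time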
524
+ Subsequently, we summarize our assumptions below.
525
+ Assumption 1 (Aggregate observation assumption). p(T | X1:K, Z1:K) = p(T | Z1:K).
527
+ We here assume that the observation T is conditionally
528
+ independent of X1:K when given Z1:K (Def. 4.1). Intuitively,
529
+ given Z, in fact, the travel time T can be obtained by
530
+ summing up all components in Z.
531
+ Assumption 2 (Markov chain assumption). p(Z1:K | X1:K) = p(Z1 | X1) ∏_{i=2}^{K} p(Z_{i+1} | X_i; Z_i).
539
+ We assume that the road travel times Zi+1 are mutually
540
+ independent except for Zi, which is consistently under the
541
+ 5. The subscript for example X1:K denotes an abbreviation for the
542
+ set {X1, X2, · · · , XK}
543
+
546
+ [Fig. 3 diagram: (a) overview of the model with the route search layer, road
+ attributes embedding layer, spatial GCN module (link-wise and node-wise stacked
+ R-GCNs), and multi-task learning module; (b) detailed structure of the stacked
+ R-GCNs (R-GCN and GRU layers with concatenation); (c) detailed structure of the
+ multi-task learning module (transition probability generative layer, travel time
+ distributions of links and nodes, candidate route posterior probability, and
+ path search/update).]
625
+ Fig. 3. Framework design of MWSL-TTE. The overview of our proposed model is depicted in (a), which includes the route search layer, road
626
+ attributes embedding layer, spatial GCN module, and multi-task learning module. (b) is the architecture of the stacked R-GCNs layer, which attempt
627
+ to capture both global and local relation by modeling the GRU and R-GCNs together. (c) is our multi-task learning layer, which will estimate the
628
+ travel time of links, intersections, and transition probability simultaneously. The transition probability will be updated when the multi-task learning
629
+ layer is output, and then, the path search algorithm will work to find the candidate paths.
630
+ assumption of Markov chain and extensive applications in
631
+ trajectory data mining [40], [41]. Furthermore, since T can
632
+ be determined by function Q, the conditional probability for
633
+ p(T | Z1:K) = δQ(Z1:K)(T), where δ(·) represents the Dirac
634
+ delta function.
635
+ To sum up, the objective function in this paper can be
636
+ defined as
637
+ max_{X1:K} p(T; X1:K | A; OD)
+   = p(T | X1:K) p(X1:K | A; OD)
+   = ∫_{Z^K} p(T; Z1:K | X1:K) dZ1:K · p(X1:K | A; OD)
+   = ∫_{Z^K} δ_{Q(Z1:K)}(T) p(Z1:K | X1:K) dZ1:K · p(X1:K | A; OD)
+   = E_{Zi ∼ p(Zi | Xi−1; Zi−1)} [ δ_{Q(Z1:K)}(T) ] · p(X1:K | A; OD)    (2)
652
+ Therefore, according to Eq. (2), our training procedure
653
+ could be split into two stages: 1) maximizes the posterior
654
+ probability p(X1:K|A; OD). 2) maximizes the conditional
655
+ probability p(Z1:K | X1:K) to estimate each road segments
656
+ and intersection travel time, which can be optimized by
657
+ expected log-likelihood [16]:
658
+ ℓ(W) = E [log p (T | X1:K; W)]
659
+ (3)
660
+ For ease of reference, some important notations are summa-
661
+ rized in Table 1.
662
+ We here give a brief summarization of our problem for-
663
+ mulation. In this paper, we aim to solve the two challenges
664
+ in OD travel time estimation, which are uncertain routes and
665
+ uncertain travel time. We intend to infer the potential route
666
+ r between source and destination by transition probability
667
+ Fig. 4. In the speed distributions of neighbor road segments during
668
+ the 7 days of National Day, we observe that the speed distribution is
669
+ highly consistent with the connection type. The colors indicate the
+ corresponding roads.
671
+ A. Subsequently, we optimize the travel time distribution
672
+ Z1:K = {˜te1, ˜tv2, ˜te3, · · · ˜tvK−1, ˜teK} at r via weakly su-
673
+ pervised learning (Eq. (3)). We assume ˜t under Gaussian
674
+ distribution N(µ, σ). Therefore, transition probability A and
675
+ parameter at Gaussian distribution can be generated by
676
+ (A, µ, σ) = f(W; Gv; Ge), where µ, σ are the mean and
677
+ variance in Gaussian distribution, respectively.
678
+ 5
679
+ METHODOLOGY
680
+ In this section, MWSL-TTE is introduced in detail. Specifically, the
+ overview of MWSL-TTE is depicted in Fig. 3 (a), which includes four
+ components: the road attributes embedding layer, the spatial GCN module,
+ the route search layer, and the multi-task learning module. Fig. 3 (b)
685
+ shows the inner structure of the stacked R-GCNs layer, and
686
+ Fig. 3 (c) represents the multi-task learning module.
687
+
688
+ [Fig. 4 plot: speed (km/h) over the time of day (00:00-20:00) for Xiuyuan East
+ Road, the Second Section of Jianshe North Road, Guoguang Road, and Jianshe Road.]
710
+ 5.1
711
+ Road Attributes Embedding layer
712
+ Let each latent variable ˜ti ∈ Z belong to Gaussian dis-
713
+ tribution N(µi, σi), which is a common assumption and
714
+ widely be used in modeling the travel time distribution
715
+ [13], [14], [15]. Given a pair of OD, we have the conditional
716
+ probability p(Z1:K|θz = f(X1:K; WZ)) = p(Z1:K|(µ, σ) =
717
+ f(X1:K; WZ)). Therefore, one of the tasks for neural net-
718
+ work f is to estimate the distribution parameters µ, σ for
719
+ each road segment and intersection. Since the road features
720
+ X1:K could affect the travel time estimation Z, we consider
721
+ the follow statistical spatial factors as the important matters
722
+ for road segments:
723
+ • Road types: e.g., primary, primary link, secondary, sec-
724
+ ondary link, tertiary, residential, service road, etc.;
725
+ • Number of lanes: how many traffic lines in the road;
726
+ • Otherwise features: e.g., road length, whether it is a one
727
+ way or not, limiting velocity, unique ID;
728
+ and intersections, such as
729
+ • Node tags: e.g., speed camera, traffic signal, crossing sign,
730
+ turn circle, stop sign;
731
+ • Node street count: e.g., T-juction X-junction, and 5-way
732
+ junction:
733
+ • Otherwise features: e.g., unique ID, GPS coordinate.
734
+ To obtain the feature representations of both links and
735
+ nodes, we use the embedding method [42] to transform each
736
+ categorical attribute into a low-dimensional feature vec-
737
+ tor by multiplying the spatial feature embedding matrices
738
+ E ∈ Rn(s)×d(s). Here, n(s) represents the number of possible
739
+ values of the categorical features, and d(s) represents the
740
+ embedding dimension. This allows us to share efficient
741
+ information among different road segments or intersections,
742
+ so that rarely traveled segments could be learned from
743
+ those frequently traveled with similar semantic meaning.
744
+ Besides the categorical road attributes, we concatenate the
745
+ obtained embedded feature vectors together with other road
746
+ attributes (e.g., road length and GPS coordinate). Based on
747
+ the above feature representations of the dual graph (link-
748
+ wise and node-wise), we can obtain the corresponding input
749
+ for the subsequent spatial GCN module.
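+ A hedged PyTorch sketch of this embedding step is given below; the attribute
+ names, cardinalities, and embedding sizes are illustrative assumptions rather
+ than the paper's exact configuration.
+ import torch
+ import torch.nn as nn
+ 
+ class LinkFeatureEmbedding(nn.Module):
+     # Each categorical attribute gets its own embedding table; the results are
+     # concatenated with continuous attributes such as road length.
+     def __init__(self, n_road_types=9, n_lane_buckets=6, d_type=8, d_lane=4):
+         super().__init__()
+         self.type_emb = nn.Embedding(n_road_types, d_type)
+         self.lane_emb = nn.Embedding(n_lane_buckets, d_lane)
+ 
+     def forward(self, road_type, n_lanes, length):
+         # road_type, n_lanes: LongTensor [num_links]; length: FloatTensor [num_links]
+         return torch.cat([self.type_emb(road_type),
+                           self.lane_emb(n_lanes),
+                           length.unsqueeze(-1)], dim=-1)
+ 
+ emb = LinkFeatureEmbedding()
+ x = emb(torch.tensor([0, 3]), torch.tensor([2, 4]), torch.tensor([120.5, 80.0]))
+ print(x.shape)  # torch.Size([2, 13])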
750
+ 5.2
751
+ Spatial GCN Module
752
+ After the embedding characteristics of the road attributes
753
+ have been obtained, we next introduce the spatial GCN
754
+ module serving as modeling the complex spatial relations
755
+ from the dual graph. The motivation for introducing R-
756
+ GCNs in the travel-time estimation problem has been rep-
757
+ resented in Fig. 4. We can observe that the travel speed is
758
+ highly correlated with the connection types. More specifically,
759
+ even though the Second Section of JianShe North road is
760
+ the neighbor of Xiuyuan East and Guoguang road, its speed
761
+ distributions are more related to JianShe road, where their
762
+ road types are the same. Therefore, we are concerned that
763
+ the features of road types, the connected types, for example,
764
+ resident → secondary (link level), and secondary (node level),
765
+ are also important. To this end, here we introduce the Re-
766
+ lational Graph Convolutional Networks [18] in our model,
767
+ which can be defined as
768
+ h_i^{(l+1)} = σ( Σ_{r∈R} Σ_{j∈N_i^r} (1/c_{i,r}) W_r^{(l)} h_j^{(l)} + W_0^{(l)} h_i^{(l)} ),    (4)
+ where h_i^{(l)} ∈ R^{d(l)} is the hidden state of road r_i in the l-th layer of the
+ model with dimensionality d(l), N_i^r denotes the set of neighboring road
+ segment/intersection indices under the relation r, W_r^{(l)} and W_0^{(l)} are the
+ learnable parameters, c_{i,r} is the normalization constant, and σ(·) is the
+ activation function. In this paper, we set c_{i,r} = |N_i^r|.
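+ The snippet below is a minimal, dense-matrix sketch of the relational
+ propagation in Eq. (4); a production implementation would typically rely on
+ sparse operations or an existing R-GCN layer, so treat the names here as
+ illustrative assumptions.
+ import torch
+ import torch.nn as nn
+ 
+ class SimpleRGCNLayer(nn.Module):
+     def __init__(self, in_dim, out_dim, num_relations):
+         super().__init__()
+         # One weight matrix per relation type, plus a self-loop weight (W_0).
+         self.rel_weights = nn.ModuleList(nn.Linear(in_dim, out_dim, bias=False)
+                                          for _ in range(num_relations))
+         self.self_weight = nn.Linear(in_dim, out_dim, bias=False)
+ 
+     def forward(self, h, adj_per_relation):
+         # h: [N, in_dim]; adj_per_relation: list of [N, N] 0/1 matrices, one per relation r.
+         out = self.self_weight(h)  # W_0 h_i
+         for A_r, W_r in zip(adj_per_relation, self.rel_weights):
+             deg = A_r.sum(dim=1, keepdim=True).clamp(min=1.0)  # c_{i,r} = |N_i^r|
+             out = out + (A_r @ W_r(h)) / deg
+         return torch.relu(out)
+ 
+ layer = SimpleRGCNLayer(in_dim=16, out_dim=32, num_relations=3)
+ h = torch.randn(10, 16)
+ adjs = [torch.bernoulli(torch.full((10, 10), 0.2)) for _ in range(3)]
+ print(layer(h, adjs).shape)  # torch.Size([10, 32])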
801
+ Next, we will introduce the stacking operation based on
802
+ R-GCNs. The stacking operation has recently been demon-
803
+ strated to prevent local information loss [20], [43]. Thus, we
804
+ model the temporal trends in the stacked GCN architecture
805
+ combined with the spatial feature representations of both
806
+ nodes and links. In this paper, we use a gated recurrent unit
807
+ (GRU) [19] as a temporal processing module to incremen-
808
+ tally concatenate multi-scale features, which can be written
809
+ as
810
+ c^{(0)} = GRU( h^{(0)}, c^{(−1)} ),
+ h^{(1)} = σ( Σ_{r∈R} Σ_{j∈N_i^r} (1/c_{i,r}) W_r^{(0)} h_j^{(0)} + W_0^{(0)} h^{(0)} ),
+ h^{(2)} = σ( Σ_{r∈R} Σ_{j∈N_i^r} (1/c_{i,r}) W_r^{(1)} [h_j^{(0)}, h_j^{(1)}] + W_0^{(1)} [h^{(0)}, h^{(1)}] ),
+ h^{(l+1)} = σ( Σ_{r∈R} Σ_{j∈N_i^r} (1/c_{i,r}) W_r^{(l)} [c_j^{(l−1)}, h_j^{(l)}] + W_0^{(l)} [c^{(l−1)}, h^{(l)}] ),    (5)
+ l = 2, 3, . . . , n − 1,
880
+ where c(l) is the hidden state of the output of GRU, and
881
+ c(−1) is initialized with 0. h(l) is the latent state of the link
882
+ and node at lth hop. The detailed architecture of the stacked
883
+ R-GCNs has been shown in Fig. 3 (b), and the formula of
884
+ GRU can be expressed as
885
+ o = σ (Wo1h(t) + Oo1c(t − 1) + bo1) ,
886
+ q = σ (Wq1h(t) + Oq1c(t − 1) + bq1) ,
887
+ c′(t) = tanh (Wh1h(t) + Oh1(q ⊙ c(t − 1)) + bh1) ,
888
+ c(t) = o ⊙ h(t − 1) + (1 − o) ⊙ c′(t)
889
+ (6)
890
+ where Wo1, Oo1, Wq1, Oq1, Wh1, Oh1 are the learnable pa-
891
+ rameters, and bo1, bq1, bh1 are biases.
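+ The sketch below conveys the stacking idea of Eqs. (5)-(6) in simplified form:
+ a plain mean over neighbors stands in for the relational convolution so the
+ snippet stays short, and a GRU cell fuses each new hop with the running state.
+ It is an assumption-laden illustration, not the paper's implementation.
+ import torch
+ import torch.nn as nn
+ 
+ class StackedHopEncoder(nn.Module):
+     def __init__(self, dim, num_hops=3):
+         super().__init__()
+         self.gru = nn.GRUCell(dim, dim)
+         self.hop_proj = nn.ModuleList(nn.Linear(2 * dim, dim) for _ in range(num_hops))
+ 
+     def forward(self, h0, adj):
+         deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
+         c = torch.zeros_like(h0)          # c^{(-1)} initialized with 0
+         h = h0
+         for proj in self.hop_proj:
+             c = self.gru(h, c)            # c^{(l)} = GRU(h^{(l)}, c^{(l-1)})
+             msg = (adj @ torch.cat([c, h], dim=-1)) / deg
+             h = torch.relu(proj(msg))     # next hop built from [c^{(l-1)}, h^{(l)}]
+         return h
+ 
+ enc = StackedHopEncoder(dim=32)
+ print(enc(torch.randn(10, 32), torch.bernoulli(torch.full((10, 10), 0.3))).shape)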
892
+ 5.3
893
+ Multi-task Learning Module
894
+ In this section, we will introduce the productions of MWSL-
895
+ TTE and the route recovery algorithm together.
896
+ 5.3.1
897
+ Generating nodes and links travel time
898
+ As we mentioned in Sec. 4, the task of TTE can be formu-
899
+ lated as given the real-time OD pairs and the corresponding
900
+ observation T in [t, t + ∆t1), we aim to complete the travel
901
+ time for all links and nodes, and evaluate them using the OD
902
+ pairs in [t + ∆t1, t + ∆t). To address the data sparse issue,
903
+ this paper formulates it as the problem of tensor completion
904
+ [44] by tensor decomposition technique, a popular method
905
+
908
+ for traffic missing value imputation method. Since urban
909
+ travel time has typical temporal and spatial distribution
910
+ characteristics, it can frequently be divided into two levels:
911
+ one is the modeling of road segments or intersections in
912
+ space, and the other is temporal embedding in time (such
913
+ as Weather ID and Holiday ID). Based on the above spatio-
914
+ temporal embedding, we finally employ the 1st order CP
915
+ decomposition to reconstruct the travel time distribution as
916
+ µ_e = (h_e^{(l+1)} W_{µe} + b_{µe}) ⊗ T_i,   σ_e = (h_e^{(l+1)} W_{σe} + b_{σe}) ⊗ T_i,
+ µ_v = (h_v^{(l+1)} W_{µv} + b_{µv}) ⊗ T_i,   σ_v = (h_v^{(l+1)} W_{σv} + b_{σv}) ⊗ T_i,
926
+ where µe, σe ∈ R|E|×1, and µv, σv ∈ R|V|×1 are the mean
927
+ and variance in Gaussian distribution for link and node
928
+ respectively. Wµe, Wσe, Wµv, Wσv ∈ Rd(l)×d(l+1), bµe, bσe ∈
929
+ R|E|, and bµv, bσv ∈ R|V| are the parameters in the fully
930
+ connected layer (FC). T ∈ Rd(l+1)×I is the embedding tensor
931
+ of time, and Ti ∈ Rd(l+1)×1 denotes the embedding vectors
932
+ in real-time interval. We discretize the day of time into I
933
+ time slots (e.g., ∆t=15 minutes). According to expected log-
934
+ likelihood in Eq. (3), since the normal distribution is closed
935
+ with addition, the loss function can be derived as
936
+ L_{µ,σ} = − (Q(Z1:K) − Q(µ1:K))² / (2 Σ_i (σ²)_i) − (1/2) log( Σ_i (σ²)_i )
+         = − (T − Q(µ1:K))² / (2σ²) − (1/2) log(2πσ²),    (7)
953
+ where Q is the aggregation function defined in Def. 4.1, and
954
+ K denotes the number of samples in bag. In this paper, bag
955
+ is equal to route (Def. 3.4) between origin-destination.
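+ A minimal sketch of this aggregated objective is given below; maximizing Eq. (7)
+ is equivalent to minimizing the Gaussian negative log-likelihood computed here,
+ and the tensor shapes are illustrative assumptions.
+ import math
+ import torch
+ 
+ def aggregated_gaussian_nll(mu, sigma, observed_T):
+     # mu, sigma: [K] tensors for the K links/nodes on the chosen route (the "bag").
+     total_mu = mu.sum()
+     total_var = (sigma ** 2).sum()
+     return 0.5 * torch.log(2 * math.pi * total_var) \
+         + (observed_T - total_mu) ** 2 / (2 * total_var)
+ 
+ loss = aggregated_gaussian_nll(torch.tensor([40., 65., 30.]),
+                                torch.tensor([5., 9., 4.]),
+                                observed_T=torch.tensor(150.))
+ print(loss)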
956
+ 5.3.2
957
+ Transition Probability Generative Layer
958
+ We thereafter introduce the detailed structure of the transi-
959
+ tion probability generative layer to generate the link tran-
960
+ sition probability by using edges features h(l+1) (Eq. (5)).
961
+ Technically, for the last two layers, we use the multi-layer
962
+ perceptron (MLP) to produce the weights of links
963
+ a_{i→j} = MLP( h_i^{(l+1)} || h_j^{(l+1)} ),
+ where || is the operator of feature-wise concatenation, and then apply the
+ softmax layer over the outgoing links
+ p(e_j | e_i) = exp(a_{i→j}) / Σ_{v_n∈V_i} exp(a_{i→n}).    (8)
978
+ After that, the transition probability in Eq. (8) will be em-
979
+ ployed in the Route Search Layer. For simplicity, we use
980
+ the transition matrix A ∈ R|V|×|V| to represent all links’
981
+ possibilities; for example, we have A[i, j] = P(ej | ei).
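+ A possible realization of this layer is sketched below (hypothetical class and
+ argument names): an MLP scores each directed pair of link embeddings, and a
+ softmax over the outgoing neighbors of e_i yields P(e_j | e_i).
+ import torch
+ import torch.nn as nn
+ 
+ class TransitionHead(nn.Module):
+     def __init__(self, dim):
+         super().__init__()
+         self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
+ 
+     def forward(self, h_links, out_neighbors):
+         # h_links: [num_links, dim]; out_neighbors[i] = list of link ids reachable from i.
+         probs = {}
+         for i, nbrs in out_neighbors.items():
+             pairs = torch.cat([h_links[i].expand(len(nbrs), -1), h_links[nbrs]], dim=-1)
+             scores = self.mlp(pairs).squeeze(-1)
+             probs[i] = dict(zip(nbrs, torch.softmax(scores, dim=0).tolist()))
+         return probs  # probs[i][j] plays the role of A[i, j]
+ 
+ head = TransitionHead(dim=16)
+ print(head(torch.randn(5, 16), {0: [1, 2], 1: [3], 2: [3, 4]}))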
982
+ 5.3.3
983
+ Route Search Layer
984
+ As aforementioned, the shortest route algorithms omit the
985
+ condition of the road in practice. To solve this problem, we
986
+ intend to construct the transition probability between road
987
+ segments and combine the transition probability to infer the
988
+ route. However, considering the complex highway graph,
989
+ there are a tremendous number of routes for any OD pair.
990
+ It would be reasonable to prune the routes through some
991
+ thresholds. Therefore, in this paper, we prune the route from
992
+ two aspects: 1) The lengths of the simple route r from origin
993
+ o to destination d should satisfy r.length < rshort + δlens,
994
+ where rshort is the shortest simple route and δlens is the
995
+ distance threshold. 2) the co-occurrence probability of a
996
+ simple route P(r|OD; A) should also meet the criteria
997
+ P(r|OD; A) > δprobs, where δprobs is the probability thresh-
998
+ old. Next, we will introduce the definition of probability
999
+ P(r|OD; A). According to Assumption 2 and Eq. (8), the
1000
+ co-occurrence route probability
1001
+ p(r) = P(e_1, e_2, · · · , e_K) = p(e_0) ∏_{i=1}^{K} p(e_i | e_{i−1}),    (9)
1007
+ where e0 is the origin location, and we have p(e0) = 1.
1008
+ After the strategy of pruning has been introduced, the
1009
+ candidate routes can be obtained by the Depth First Search
1010
+ (DFS) algorithm. Specifically, we generate the candidate
1011
+ route set ΩOD via posterior probability p(r|OD; A) and
1012
+ choose the route with maximum probability as the optimal
1013
+ solution. Formally, given an OD pair and observation T, the
1014
+ optimal route r∗ can be written as
1015
+ r* = argmax_{r_i ∈ ΩOD} p(r_i | OD; A; T; Z)
+    = argmin_{r_j ∈ ΩOD} | T − Σ_{e_i∈r_j} µ_e^{(i)} − Σ_{v_i∈r_j} µ_v^{(i)} |.    (10)
1029
+ Eq. 10 selects the most suitable route r∗ regarding current
1030
+ travel time µ.
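+ The following simplified sketch combines the pruned DFS enumeration with the
+ selection rule of Eq. (10); the thresholds, graph structure, and travel-time
+ values are toy assumptions rather than the paper's settings.
+ import math
+ 
+ def candidate_routes(graph, log_A, origin, dest, log_prob_min, max_len):
+     # graph[e]: outgoing segments of e; log_A[(ei, ej)]: log transition probability.
+     routes = []
+ 
+     def dfs(node, path, log_p):
+         if log_p < log_prob_min or len(path) > max_len:
+             return  # pruning by probability and route length
+         if node == dest:
+             routes.append((list(path), log_p))
+             return
+         for nxt in graph.get(node, []):
+             if nxt not in path:  # simple path: each segment appears at most once
+                 path.append(nxt)
+                 dfs(nxt, path, log_p + log_A[(node, nxt)])
+                 path.pop()
+ 
+     dfs(origin, [origin], 0.0)
+     return routes
+ 
+ def select_route(routes, mu, observed_T):
+     # Eq. (10): pick the candidate whose summed mean travel time best matches T.
+     return min(routes, key=lambda r: abs(observed_T - sum(mu[e] for e in r[0])))
+ 
+ graph = {"e1": ["e2", "e3"], "e2": ["e4"], "e3": ["e4"]}
+ log_A = {k: math.log(v) for k, v in {("e1", "e2"): 0.4, ("e1", "e3"): 0.6,
+                                      ("e2", "e4"): 1.0, ("e3", "e4"): 1.0}.items()}
+ mu = {"e1": 30.0, "e2": 50.0, "e3": 80.0, "e4": 20.0}
+ routes = candidate_routes(graph, log_A, "e1", "e4", log_prob_min=-5.0, max_len=6)
+ print(select_route(routes, mu, observed_T=128.0))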
1031
+ 5.3.4
1032
+ Model Training
1033
+ Next, we will introduce the optimizing procedure of MWSL-
1034
+ TTE. As we discussed in Sec. 5.3.3, the top m maximum
1035
+ routes have been obtained, and we chose the most satisfied
1036
+ one by Eq. (10). However, such a choice may fall into a local
1037
+ solution, and other candidate routes might never be picked.
1038
+ To address this problem, we here introduce the ϵ-greedy
1039
+ algorithm, which means that the route satisfied Eq. (10) will
1040
+ be chosen in 1 − ϵ probability. Otherwise, randomly select
1041
+ the routes with top m maximum probability in ϵ probability.
1042
+ Moreover, we wish that the transition probability could help
1043
+ us infer the most possible route based on the ground truth
1044
+ (observation travel time). So, we here adopt the Kullback-
1045
+ Leibler (KL) divergence to measure the coherence, which
1046
+ can be written as
1047
+ D_KL(P‖Q) = − Σ_i P(i) ln( Q(i) / P(i) ),    (11)
1053
+ where P is the probability distribution of each route in the
1054
+ candidate set, and Q is the inverse estimation loss distri-
1055
+ bution between Q(µ) and ground truth. In other words,
1056
+ the optimization direction is towards both higher route
1057
+ probability and more accurate travel time estimation. For
1058
+ the route picked up through Eq. (10), the Negative Log
1059
+ Likelihood (NLL) loss has been employed to minimize the
1060
+ negative log likelihood function, which can be defined as
1061
+ L_tp = − Σ_{i=2}^{|tr|} log( p(e_i | e_{i−1}, θ) ),    (12)
1067
+ where θ is the model’s trainable parameters to represent
1068
+ the posterior probability. By fusing all objective functions
1069
+
1072
+ together, our model is trained to minimize the weighted
1073
+ combination of three loss terms
1074
+ L = α L_{µ,σ} + β L_tp + (1 − α − β) D_KL(P‖Q),    (13)
1076
+ where α and β are constant parameters that balance the three loss
+ terms Lµ,σ, Ltp and DKL(P∥Q). The training pseudocode
1078
+ of MWSL-TTE has been depicted in Algorithm 1.
1079
+ Algorithm 1: Training Procedure of MWSL-TTE
1080
+ Input: OD datasets D, node-wise graph Gv, link-wise
1081
+ graph Ge, and number of candidates routes N
1082
+ Output: OD TTE estimation function f
1083
+ 1: while not converged do
+ 2:   (µ, σ, A) = f(W; Gv; Ge)
+ 3:   for each OD ∈ D do
+ 4:     generate the top-N candidate route set ΩOD
+ 5:     select a route ri from ΩOD via the ϵ-greedy strategy
+ 6:     compute the loss by Eq. (13) and update the parameters by back-propagation
+ 7:   end for
+ 8: end while
+ 9: return f
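+ A condensed sketch of one training iteration in the spirit of Algorithm 1 and
+ Eqs. (11)-(13) is shown below; `model` and `candidate_routes_fn` are assumed
+ helpers, routes are assumed to be index tensors over links/nodes, and the
+ default α=0.8, β=0.1 follows the best setting reported in Table 5.
+ import math
+ import random
+ import torch
+ 
+ def train_step(model, optimizer, od_pair, observed_T, candidate_routes_fn,
+                alpha=0.8, beta=0.1, epsilon=0.1):
+     mu, sigma, log_A = model(od_pair)                    # (µ, σ, A) = f(W; Gv; Ge)
+     routes, route_log_probs = candidate_routes_fn(od_pair, log_A)  # top-m candidates
+ 
+     # ε-greedy: usually the route that best explains T, otherwise a random candidate.
+     errors = torch.stack([(observed_T - mu[r].sum()).abs() for r in routes])
+     idx = random.randrange(len(routes)) if random.random() < epsilon \
+         else int(errors.argmin())
+     chosen = routes[idx]
+ 
+     # L_{µ,σ}: Gaussian NLL of T under the aggregated route distribution (Eq. (7)).
+     var = (sigma[chosen] ** 2).sum()
+     l_reg = 0.5 * torch.log(2 * math.pi * var) \
+         + (observed_T - mu[chosen].sum()) ** 2 / (2 * var)
+     # L_tp: negative log-likelihood of the chosen route's transitions (Eq. (12)).
+     l_tp = -route_log_probs[idx]
+     # KL(P || Q) between route probabilities and the inverse-error distribution (Eq. (11)).
+     P = torch.softmax(torch.stack(route_log_probs), dim=0)
+     Q = torch.softmax(-errors, dim=0)
+     l_kl = (P * (P / Q).log()).sum()
+ 
+     loss = alpha * l_reg + beta * l_tp + (1 - alpha - beta) * l_kl
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     return loss.item()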
1096
+ 6
1097
+ EXPERIMENTS
1098
+ In this section, various experiments will be conducted based
1099
+ on a wide range of public real-world taxi dataset in Xi’an,
1100
+ and Chengdu to evaluate the superiority of MWSL-TTE in
1101
+ TTE and route recovery aspects.
1102
+ 6.1
1103
+ Datasets
1104
+ Road Networks. We use two road networks: Chengdu
1105
+ Road Network and Xi’an Road Network. Both of them are
1106
+ extracted from OpenStreetMap [45], and include nine road
1107
+ types (trunk, trunk link, freeway link, primary, primary
1108
+ link, secondary, secondary link, tertiary, tertiary link). Here,
1109
+ Chengdu road network contains 8221 edges and 5182 nodes,
1110
+ which ranges from 30.63° to 30.69° in latitude and 104°
1111
+ to 104.07° in longitude. And Xi’an road network contains
1112
+ 4780 edges and 3782 nodes, ranging from 34.20° to 34.29° in
1113
+ latitude and 108.90° to 108.99° in longitude.
1114
+ Taxi OD Orders. We use two public taxi trajectory datasets
1115
+ come from the Didi Express platform to generate the OD
1116
+ orders. Each generated order corresponds to a trip record
1117
+ that consists of the time-stamps and locations of an OD.
1118
+ Here, we implement Xi’an dataset that is from 10/10/2016
1119
+ - 10/22/2016, and Chengdu Dataset is from 08/18/2014 -
1120
+ 08/24/2014 (a whole week from Monday to Sunday). The
1121
+ GPS points of both two datasets have been tied to the road
1122
+ and the interval of sample trajectory points is 2-4s, ensuring
1123
+ that the vehicle trajectory can correspond to the actual road
1124
+ information. Especially, we generate the ground truth route
1125
+ of the original vehicle trajectories for the route recovery task
1126
+ via a map-matching tool FMM [46].
1127
+ 6.2
1128
+ Baseline Methods and Metrics
1129
+ We first compare our models with six baseline methods for
1130
+ the task of OD travel time estimation:
1131
+ • TEMP: Temporally weighted neighbors [7] is a nearest-
1132
+ neighbor-based approach that estimates the OD travel
1133
+ time by averaging the travel time of all historical trajec-
1134
+ tories falling in the same time slot with a similar origin
1135
+ and destination.
1136
+ • GBDT: Gradient boosting decision tree [47] is used for the
1137
+ regression of OD travel time estimation.
1138
+ • STNN: Spatio-temporal deep neural network [8] is a deep
1139
+ neural network-based approach that first predicts the
1140
+ travel distance given an OD pair, and then combines this
1141
+ prediction with the departure time to estimate the travel
1142
+ time.
1143
+ • MURAT: Multi-task representation learning [9] is a deep
1144
+ neural network-based approach that jointly predicts the
1145
+ travel distance and the travel time for taxi orders by
1146
+ learning representations of road segments and the origin-
1147
+ destination information.
1148
+ • DCRNN: It exploits GCN to capture spatial dependency,
1149
+ and then uses recurrent neural networks to model tempo-
1150
+ ral dependency [33]. We implemented this model based
1151
+ on OD estimation prediction of road network. The hidden
1152
+ vector size of GCN and GRU are set as 20 and 128,
1153
+ respectively.
1154
+ • ConSTGAT: This model adopts a graph attention mech-
1155
+ anism to explore the joint relations of spatio-temporal
1156
+ information [48]. The parameter setting is basically same
1157
+ with the original model. In the integration module, we
1158
+ also use two-layer MLP.
1159
+ • ASTGNN: This model consider multiple factors in traffic
1160
+ forecasting, such as, periodicity, spatial heterogeneity by
1161
+ leveraging a trend-aware multi-head attention mechanism
1162
+ [34]. The number of layers for both encoder and decoder
1163
+ is set to 3. And the kernel size of convolution is set to 5.
1164
+ Moreover, for the task of OD route recovery, we compare
1165
+ with two representative baselines of route recovery from
1166
+ sparse trajectories. Both of them are based on inverse rein-
1167
+ forcement learning to capture the spatial transition proba-
1168
+ bilities, and the difference between these two models is that
1169
+ the temporal components:
1170
+ • STRS: Spatio-temporal-based route recovery system [4]
1171
+ seeks to recover the route from sparse trajectories.
1172
+ The temporal components of STRS comprise a matrix
1173
+ factorization-based method.
1174
+ • DeepGTT-STRS: Li et al. [3] proposes a deep genera-
1175
+ tive travel time estimation model named DeepGTT that
1176
+ replaces the temporal component of STRS.
1177
+ Evaluation Metrics. For the OD travel time estimation
1178
+ of our MWSL, we evaluate the performance with RMSE
1179
+ (Root Mean Square Error), MAE (Mean Absolute Error), and
1180
+ MAPE (Mean Absolute Percentage Error). Then we adopt
1181
+ the accuracy of route recovery as the main performance
1182
+ metric for the route recovery task. It is defined as the ratio
1183
+ of the length of a correctly inferred route to the length of the
1184
+ ground truth route RG or the inferred route RI whichever
1185
+ is longer, i.e., accuracy = (RG ∩ RI).len / max{ RG.len, RI.len }.
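+ A small sketch of this accuracy computation (assuming routes are given as lists
+ of segment IDs with known lengths) is shown below.
+ def route_recovery_accuracy(ground_truth, inferred, seg_len):
+     # ground_truth / inferred: lists of segment ids; seg_len[e]: segment length in meters.
+     common = set(ground_truth) & set(inferred)
+     len_common = sum(seg_len[e] for e in common)
+     len_gt = sum(seg_len[e] for e in ground_truth)
+     len_inf = sum(seg_len[e] for e in inferred)
+     return len_common / max(len_gt, len_inf)
+ 
+ seg_len = {"e1": 100, "e2": 200, "e3": 150, "e4": 120}
+ print(route_recovery_accuracy(["e1", "e2", "e4"], ["e1", "e3", "e4"], seg_len))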
1188
+ 6.3
1189
+ Experimental Settings
1190
+ The experiments are implemented with PyTorch 1.6.0 and
1191
+ Python 3.6, and trained with a RTX2080 GPU. The platform
1192
+ ran on Ubuntu 16.04 OS. We trained the models using Adam
1193
+
1196
+ TABLE 2
1197
+ Performance of MWSL and its variants for OD travel time estimation, compared with other baseline methods.
1198
+ Models         | Xi'an: RMSE (sec)  MAE (sec)  MAPE    | Chengdu: RMSE (sec)  MAE (sec)  MAPE
+ TEMP           | 398.95   277.56   34.24%              | 446.98   327.99   32.00%
+ GBDT           | 365.72   250.63   31.27%              | 435.88   303.83   30.35%
+ STNN           | 353.06   241.19   30.43%              | 425.53   293.10   28.25%
+ MURAT          | 538.23   512.65   127.87%             | 519.20   503.36   118.62%
+ DCRNN          | 282.54   191.41   24.94%              | 392.45   263.91   25.95%
+ ConSTGAT       | 283.89   195.31   25.32%              | 403.31   280.90   28.16%
+ ASTGNN         | 259.46   179.08   23.86%              | 362.48   244.03   23.52%
+ N-Node         | 253.62   173.71   23.04%              | 367.04   236.18   23.69%
+ N-GRU          | 259.52   181.37   24.25%              | 371.34   240.45   24.03%
+ N-R-GCN        | 263.46   184.73   24.60%              | 364.58   234.37   23.46%
+ N-PathUpdate   | 257.62   178.25   23.59%              | 358.09   229.56   23.15%
+ MWSL-TTE       | 238.86   162.37   21.33%              | 341.02   215.03   22.27%
1291
+ TABLE 3
1292
+ Inference time cost comparison for GCN-based travel time estimation
1293
+ models. The time unit is second.
1294
+ Models      | Xi'an: Time (sec) | Chengdu: Time (sec)
+ DCRNN       | 0.15              | 0.16
+ ConSTGAT    | 0.41              | 0.45
+ ASTGNN      | 0.34              | 0.37
+ MWSL-TTE    | 0.23              | 0.25
1311
+ optimizer with an initial learning rate of 0.001 on both
1312
+ Chengdu and Xi’an datasets, and early stopping is used on
1313
+ the validation dataset. In particular, we run each experiment
+ three times.
1315
+ The main hyper-parameter settings of our proposed
1316
+ method are described as follows:
1317
+ • In the generation of a candidate route set Ωa,b, m
1318
+ candidate routes are selected between the OD pairs.
1319
+ Here, m = 6 is used for Xi’an, and m = 8 for
1320
+ Chengdu, respectively. Both two hyper-parameters can
1321
+ ensure that over 90 percents of ground truth route can
1322
+ be acquired from ΩOD.
1323
+ • The number of stacked R-GCNs is set to 3.
1324
+ • In the road attributes embedding layer, the embedding
1325
+ sizes of link feature representation (road ID, road types,
1326
+ number of lanes and one way or not) are set to 128, 8,
1327
+ 4 and 2, respectively. And the embedding size of node
1328
+ feature representation (node ID, node type and node
1329
+ street count) are set to 96, 2 and 2, respectively.
1330
+ • In the temporal embedding component, we embed
1331
+ Weather ID and Holiday ID in R8 and R4, respectively.
1332
+ 6.4
1333
+ Experimental Results
1334
+ We compare our MWSL-TTE with other baseline methods
1335
+ under two datasets.
1336
+ Performance on Travel Time Estimation. Table 2 shows
1337
+ the overall performance of estimating the travel times of
1338
+ Taxi OD orders. From the performance comparison, we find
1339
+ that our MWSL-TTE achieves the best performance than
1340
+ TABLE 4
1341
+ Performance of MWSL and STRS-based baseline methods for route
1342
+ recovery. The column T.time refers to the training time of the model and
1343
+ its unit is hour.
1344
+ Models          | Xi'an: Acc   T. time | Chengdu: Acc   T. time
+ STRS            | 82.71%   3.18        | 71.64%   4.74
+ DeepGTT-STRS    | 79.39%   3.37        | 68.72%   5.02
+ MWSL-TTE        | 86.25%   0.52        | 77.03%   0.69
1366
+ [Fig. 5 maps: panels (a) 9:00 AM, Monday and (b) 22:00 PM, Monday, with a
+ congested/unblocked color scale and markers for crossing, traffic signal, and
+ bus stop nodes.]
1373
+ Fig. 5. The time-consuming ratio for some nodes in the Xi’an road
1374
+ network. Here, we select three types of nodes including ”crossing”,
1375
+ ”traffic signal” and ”bus stop”.
1376
+ other methods in terms of all three metrics. The better
+ prediction results can be explained in two aspects. First,
+ our model implements an effective graphical structure to
+ capture the prior information of the road network. Although
+ DCRNN, ConSTGAT and ASTGNN all model the link-wise
+ adjacency of road segments, the node-wise features, especially
+ traffic-signal junctions, are ignored. Thus, without considering
+ the node-wise adjacency, these three models cannot achieve
+ higher accuracy. Second, given an OD pair, our weak
+ supervision-based method can learn the travel time
+ distributions of the links and nodes. Compared with Taxi OD
+ estimation baselines such as STNN and MURAT, the in-process
+ information of OD pairs improves our estimation results.
1390
+ Ablation Study. In Table 2, in addition to the comparison
+ with the six baseline methods, we also con-
1392
+ TABLE 5
1458
+ Performance of MWSL under different combinations of α and β based on Xi’an and Chengdu dataset for both OD travel time estimation and route
1459
+ recovery.
1460
+ Datasets: Xi'an/Chengdu
+ Parameters        TTE: RMSE (sec)   MAE (sec)        MAPE (%)      Route recovery: Acc (%)
+ (α=1,   β=0)      240.62/354.36     161.12/233.27    21.62/23.17   \
+ (α=0.8, β=0.2)    243.63/347.48     164.15/221.19    20.86/22.64   85.94/73.36
+ (α=0.8, β=0.1)    238.86/341.02     162.37/215.03    21.33/22.27   86.25/77.03
+ (α=0.6, β=0.2)    248.41/352.29     167.56/232.34    22.06/22.83   86.92/78.14
+ (α=0.6, β=0.3)    247.46/358.75     166.55/239.49    22.12/23.39   84.17/73.22
+ (α=0.6, β=0.1)    249.00/365.27     166.59/237.92    21.84/23.78   80.83/67.50
+ (b) East part of second ring south road (type: “trunk”)
1500
+ (a) Xiaozhai east road (type: “primary”)
1501
+ Fig. 6. Learned speed distributions of two sample links by MWSL,
1502
+ compared with the speed distributions computed by the original taxi
1503
+ trajectories.
1504
+ duct the ablation study by replacing our MWSL-TTE with
1505
+ four variations, namely N-Node, N-GRU, N-R-GCN and
1506
+ N-PathUpdate, to evaluate the effectiveness of different
1507
+ modules in MWSL-TTE (see Fig. 3). In N-Node, we remove
1508
+ the node-wise estimation. In N-GRU, we remove the stacked
1509
+ GRU and only use the same number of layers of R-GCN.
1510
+ In N-R-GCN, we remove the R-GCN and replace it with
1511
+ the same number of layers of normal GCN [49]. In N-
1512
+ PathUpdate, we only implement the initial path as the in-
1513
+ process trajectories of OD orders and do not update the path
1514
+ based on the learned travel time and transition probability.
1515
+ The comparison of results shows that R-GCN and node-
+ wise estimation are the most critical parts. Once the
+ node-wise aspect is removed, the performance becomes worse,
+ which proves the importance of modeling the complex adjacency
1519
+ for both the links and nodes. Furthermore, stacked R-GCNs
1520
+ with GRU integrate the multiscale information to capture
1521
+ both global and local features, and path update is also
1522
+ important in improving the travel time estimation. To sum
1523
+ up, the key designs of MWSL-TTE are effective.
1524
+ Time Cost Analysis. Since our proposed MWSL-TTE targets
+ online travel time estimation, we further compare its
+ inference time with the main GNN-based baselines. As
+ Table 2 shows, GNN-based travel time estimation models
+ perform significantly better than common deep learning-based
+ methods. Inference is measured with a batch size of 32 on an
+ RTX 2080 GPU card, and Table 3 provides the inference time
+ comparison for the main GNN-based models. Although our
+ proposed model is slower than DCRNN, it achieves a faster
+ inference time compared with the other, more complex models.
+ This result also shows that our model can support online
+ travel time estimation.
1537
+ Performance on Route Recovery. Table 4 shows the per-
1538
+ formance of the route recovery task given OD pairs.
1539
+ We can find that our path recovery of MWSL has better
1540
+ recovery accuracy and shorter model training time. STRS-
1541
+ based methods are very time-consuming due to the long
1542
+ iterations of inverse reinforcement learning to acquire the
1543
+ transition probability among road segments. Observe that
1544
+ the accuracy of DeepGTT-STRS is always worse than that of
+ STRS. The reason is that a larger sampling time interval be-
+ tween OD pairs leads to a more inaccurate grid-based traffic
+ tensor, which is the input DeepGTT uses to model the traffic
+ representation. In particular, the more complex Chengdu road
+ network causes the recovery accuracy of all three methods
+ to drop, as expected.
1551
+ Hyper-parameter Analysis. To further show the effective-
1552
+ ness of multi-task components of our model, we conduct
1553
+ experiments under different combinations of parameters α
1554
+ and β based on both of the two datasets. As observed in
1555
+ Table 5, on one hand, we find that in terms of the TTE task,
1556
+ the overall TTE performance improves as α changes from
1557
+ 0.6 to 0.8 under the datasets of two cities. However, α = 1
1558
+ does not achieve the best TTE performance. This is because
+ β, which controls the loss term of route recovery, also
+ has some impact on TTE prediction: more accurate path
+ updates bring improvements in TTE prediction. On
1562
+ the other hand, the route recovery performance achieves the
1563
+ best accuracy when α = 0.8 and β = 0.1. This indicates
1564
+ that the DKL(P∥Q) also plays its part in obtaining a higher
1565
+ accuracy for route recovery. Furthermore, we compare the
1566
+ route recovery performance when α = 0.6. The optimal
1567
+ hyper-parameter is the combination of Ltp and DKL(P∥Q)
1568
+ as well, but excessive loss weight of DKL(P∥Q) would
1569
+ [Fig. 7 panels: rows GT and Estimation; columns 05:00, 09:00, 13:00, 17:00, 21:00]
+ Fig. 7. Comparison between ground truth traffic state and estimated traffic state from MWSL-TTE. Here we use four kinds of colors to represent the
1608
+ different road states, which can be defined as 1) red - very congested, 2) orange - congested, 3) yellow - slow, and 4) green - unblocked.
1609
+ cause poor prediction performance. To sum up, the
+ experimental results demonstrate the superiority and
+ generality of the multi-task components of our proposed MWSL.
1613
+ 7
1614
+ CASE STUDY
1615
+ Our MWSL-TTE can not only conduct path travel time
+ estimation and route recovery for OD GPS trips, but also
+ learn the travel time distributions of the links and nodes
+ based on weakly supervised learning. Thus, in addition to
+ the quantitative evaluations described in Section 6.4, we also
+ conduct a real-world case study in Xi'an, which visualizes
+ the learned distributions of the links and nodes of the
+ road network. In particular, we compare them with the road
+ conditions computed from the original taxi trajectories.
1624
+ 7.1
1625
+ Learned Distributions for Nodes and Links
1626
+ On one hand, we first provide the estimation results for
1627
+ different types of nodes, which are depicted in Fig. 5. We
1628
+ select three types of nodes in a road network and use the
1629
+ [Fig. 8 plots: curves for Our MWSL, STRS and DeepGTT over 0AM-21PM; panels (a) Xi'an, (b) Chengdu]
+ Fig. 8. The 24-h divergence between generated road conditions and
1656
+ ground truth under datasets of both Xi’an and Chengdu, compared with
1657
+ the temporal components of two baseline methods.
1658
+ time-consuming ratio to represent congestion status. The
1659
+ time-consuming ratio is calculated by dividing the mean value
+ of a node's learned travel time distribution by the maximum
+ mean value among all nodes (note that we filter out the top 1%
+ largest node travel times). From Fig. 5, we can find that nodes
+ of the "traffic signal" type are more time-consuming than the
+ other two types of nodes, and that most nodes easily become
+ congested during the morning peak. These indicate that our
+ node travel time estimation is reasonable in both spatial and
+ temporal aspects.
1668
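+ The ratio computation above can be sketched as follows; this is a minimal illustration in which node_mean_times is a hypothetical array of the per-node means of the learned travel time distributions.
+ import numpy as np
+
+ def time_consuming_ratio(node_mean_times):
+     # Drop the top 1% largest node travel times before taking the maximum,
+     # then divide each node's mean travel time by that maximum.
+     t = np.asarray(node_mean_times, dtype=float)
+     cutoff = np.quantile(t, 0.99)
+     max_mean = t[t <= cutoff].max()
+     return t / max_mean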
+ On the other hand, we transform the travel time dis-
1669
+ tribution into speed distributions in terms of links, and
1670
+ this transformation process is based on [41]. The reason
1671
+ for this transformation is that most users drive cars with
1672
+ a normal speed range (e.g., 10 kmph to 120 kmph), and
1673
+ thus we can easily analyze the rationality of learned speed
1674
+ distributions compared with the link travel time. The speed
+ distributions of two sample links generated by the proposed
+ MWSL are shown in Fig. 6, and we compare them with the
+ speed distributions computed from the original taxi
+ trajectories. To test the generalization of our model, we
1679
+ select two types of links, where the Xiaozhai east road is
1680
+ a busy link (type: ”primary”), and we can find that the
1681
+ morning and evening peaks are obvious in both learned
1682
+ speed distribution and computed distribution. Another link
1683
+ is a highway link, and the commuting pattern only appears
1684
+ in the evening for both distributions. Based on the above
1685
+ analysis, it is concluded that the learned distributions of
1686
+ links can effectively represent different functional types of
1687
+ links. Furthermore, the mean values µ of learned speed
1688
+ distributions are closer to the ground truth.
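+ As a rough sketch of this transformation (the actual procedure follows [41]), one can sample travel times from the learned log-normal distribution of a link and convert them to speeds using the link length; the function name and parameters below are illustrative.
+ import numpy as np
+
+ def speed_distribution(mu, sigma, link_length_m, n_samples=10000):
+     # Sample link travel times (seconds) from a log-normal distribution with
+     # parameters (mu, sigma) and convert to speeds in km/h via speed = length / time.
+     travel_times = np.random.lognormal(mean=mu, sigma=sigma, size=n_samples)
+     return (link_length_m / travel_times) * 3.6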
1689
+ 7.2
1690
+ A Demo of the Generated Road Conditions
1691
+ We generate the road conditions with our MWSL based on taxi
+ OD trips, and we compare them with the ground truth computed
+ from the original taxi trajectories. In particular, we mark
+ road segments without taxi trajectories as unblocked.
+ Since the speed limit of each road varies, primarily determined
+ by the road type or road length,
1697
+
1698
+ we use four kinds of colors to represent the different road
+ states (very congested, congested, slow and unblocked). We
+ divide the speed limit of each road type into equal intervals;
+ for example, the speed limit of a primary road is 60 km/h, so
+ very congested corresponds to [0, 15), congested to [15, 30),
+ slow to [30, 45), and unblocked to [45, 60) km/h. The com-
1706
+ pared result is shown in Fig. 7. From the comparison of
1707
+ the generated road condition and ground truth at several
1708
+ time slots, we can acquire the following insights: 1) similar
1709
+ traffic state. The road condition generated by our model is
1710
+ similar to the ground truth. Most road segments have the
1711
+ same road states, and those road segments with different
1712
+ road states frequently have a consistent tendency; 2) rational
1713
+ adjacency correlation. We find that neighboring road segments
+ often have similar road states in the generated road condition
+ map. This indicates that our model can learn the adjacency
+ correlation of the road network.
1717
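+ The speed-to-state mapping described above can be sketched as follows; this is a minimal illustration in which the function name and the default limit (for a primary road) are assumptions.
+ def road_state(speed_kmh, speed_limit_kmh=60):
+     # Split the speed limit into four equal intervals, e.g. for a 60 km/h
+     # primary road: [0,15) very congested (red), [15,30) congested (orange),
+     # [30,45) slow (yellow), [45,60) unblocked (green).
+     width = speed_limit_kmh / 4
+     if speed_kmh < width:
+         return "very congested"
+     elif speed_kmh < 2 * width:
+         return "congested"
+     elif speed_kmh < 3 * width:
+         return "slow"
+     else:
+         return "unblocked"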
+ To better illustrate the model performance for generated
1718
+ road conditions, we also conduct quantitative analysis un-
1719
+ der Xi'an and Chengdu datasets. As shown in Fig. 8,
+ we compute the 24-hour divergence between the generated road
+ conditions and the ground truth. The prediction accuracy of
+ all three methods is relatively worse from 22:00 to 2:00.
+ This is because the very small number of OD pairs in these
+ time slots cannot provide sufficient model
+ training for better prediction performance. However, the
1726
+ plots show that the generated conditions achieve the accu-
1727
+ racy of around 80% and 70% at ordinary time slots under the
1728
+ datasets of both Xi’an and Chengdu, respectively. Compared
1729
+ with the ground truth, our weakly supervised learning method
+ can provide believable road conditions relying only on the
+ OD pairs.
1732
+ 8
1733
+ CONCLUSION
1734
+ For the first time, we consider the OD travel time estimation
1735
+ as an inexact supervision problem and propose a multi-task
1736
+ framework to infer the optimal route based on transition
1737
+ probability and learn the travel time distribution for each
1738
+ road segment and intersection through expect MLE frame-
1739
+ work [16]. The stacked R-GCN architecture has been em-
1740
+ ployed to learn the complex relations of the road network,
1741
+ and we generate the travel time distribution for both road
1742
+ segments and intersections by 1st-order CP decomposition.
1743
+ Finally, we produce the transition probability between road
1744
+ segments by a multi-layer perceptron. Moreover, an iterative
1745
+ update strategy has been proposed to update the transition
1746
+ probability and candidate paths during the training process.
1747
+ We evaluate our model on two real-world public datasets
1748
+ and verify the effectiveness of our proposed algorithm.
1749
+ Future work can be summarized in four directions. Firstly, more
+ expressive distributions could be explored under the
+ assumptions of weakly supervised learning, since in this
+ paper only the log-normal distribution has been employed.
1753
+ Secondly, more advanced route search algorithms could
1754
+ be designed based on, for example, transition probability,
1755
+ travel time, or route probability. Thirdly, more urban sce-
+ narios, such as buses, subways and pedestrians, could be
+ explored to extend the applications of weakly supervised
+ learning or travel time distributions. Lastly, a federated
+ learning-based method could be designed, since OD-type data
+ saves substantial storage cost on the client's mobile phone.
1761
+ 9
1762
+ ACKNOWLEDGMENT
1763
+ We are grateful to anonymous reviewers for their help-
1764
+ ful comments. This work was partially supported by the
1765
+ grants of National Key Research and Development Program
1766
+ of China (No. 2018AAA0101100), National Key Research
1767
+ and Development Project (2021YFB1714400) of China and
1768
+ Guangdong Provincial Key Laboratory (2020B121201001).
1769
+ REFERENCES
1770
+ [1]
1771
+ Z. Wang, K. Fu, and J. Ye, “Learning to estimate the travel time,”
1772
+ in Proceedings of the 24th ACM SIGKDD International Conference on
1773
+ Knowledge Discovery & Data Mining, 2018, pp. 858–866.
1774
+ [2]
1775
+ D. Wang, J. Zhang, W. Cao, J. Li, and Y. Zheng, “When will you
1776
+ arrive? estimating travel time based on deep neural networks,” in
1777
+ Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
1778
+ [3]
1779
+ X. Li, G. Cong, A. Sun, and Y. Cheng, “Learning travel time
1780
+ distributions with deep generative model,” in The World Wide Web
1781
+ Conference, 2019, pp. 1017–1027.
1782
+ [4]
1783
+ H. Wu, J. Mao, W. Sun, B. Zheng, H. Zhang, Z. Chen, and W. Wang,
1784
+ “Probabilistic robust route recovery with spatio-temporal dynam-
1785
+ ics,” in Proceedings of the 22nd ACM SIGKDD International Confer-
1786
+ ence on Knowledge Discovery and Data Mining, 2016, pp. 1915–1924.
1787
+ [5]
1788
+ Y. Wang, Y. Zheng, and Y. Xue, “Travel time estimation of a path
1789
+ using sparse trajectories,” in Proceedings of the 20th ACM SIGKDD
1790
+ international conference on Knowledge discovery and data mining, 2014,
1791
+ pp. 25–34.
1792
+ [6]
1793
+ I. Sanaullah, M. Quddus, and M. Enoch, “Developing travel time
1794
+ estimation methods using sparse gps data,” Journal of Intelligent
1795
+ Transportation Systems, vol. 20, no. 6, pp. 532–544, 2016.
1796
+ [7]
1797
+ H. Wang, X. Tang, Y.-H. Kuo, D. Kifer, and Z. Li, “A simple base-
1798
+ line for travel time estimation using large-scale trip data,” ACM
1799
+ Transactions on Intelligent Systems and Technology (TIST), vol. 10,
1800
+ no. 2, pp. 1–22, 2019.
1801
+ [8]
1802
+ I. Jindal, X. Chen, M. Nokleby, J. Ye et al., “A unified neural
1803
+ network approach for estimating travel time and distance for a
1804
+ taxi trip,” arXiv preprint arXiv:1710.04350, 2017.
1805
+ [9]
1806
+ Y. Li, K. Fu, Z. Wang, C. Shahabi, J. Ye, and Y. Liu, “Multi-task
1807
+ representation learning for travel time estimation,” in Proceedings
1808
+ of the 24th ACM SIGKDD International Conference on Knowledge
1809
+ Discovery & Data Mining, 2018, pp. 1695–1704.
1810
+ [10] J.-B. Cordonnier and A. Loukas, “Extrapolating paths with graph
1811
+ neural networks,” arXiv preprint arXiv:1903.07518, 2019.
1812
+ [11] Z.-H. Zhou, “A brief introduction to weakly supervised learning,”
1813
+ National science review, vol. 5, no. 1, pp. 44–53, 2018.
1814
+ [12] T. G. Dietterich, R. H. Lathrop, and T. Lozano-P´erez, “Solving the
1815
+ multiple instance problem with axis-parallel rectangles,” Artificial
1816
+ intelligence, vol. 89, no. 1-2, pp. 31–71, 1997.
1817
+ [13] A. Richardson and M. Taylor, “Travel time variability on com-
1818
+ muter journeys,” High Speed Ground Transportation Journal, vol. 12,
1819
+ no. 1, 1978.
1820
+ [14] H. A. Rakha, I. El-Shawarby, M. Arafeh, and F. Dion, “Estimating
1821
+ path travel-time reliability,” in 2006 IEEE Intelligent Transportation
1822
+ Systems Conference.
1823
+ IEEE, 2006, pp. 236–241.
1824
+ [15] M. AREZOUMANDI, “Estimation of travel time reliability for
1825
+ freeways using mean and standard deviation of travel time,” Jour-
1826
+ nal of Transportation Systems Engineering and Information Technology,
1827
+ vol. 11, no. 6, pp. 74–84, 2011.
1828
+ [16] Y. Zhang, N. Charoenphakdee, Z. Wu, and M. Sugiyama, “Learn-
1829
+ ing from aggregate observations,” arXiv preprint arXiv:2004.06316,
1830
+ 2020.
1831
+ [17] M.-x. Wang, W.-C. Lee, T.-y. Fu, and G. Yu, “Learning embeddings
1832
+ of intersections on road networks,” in Proceedings of the 27th ACM
1833
+ SIGSPATIAL International Conference on Advances in Geographic In-
1834
+ formation Systems, 2019, pp. 309–318.
1835
+ [18] M. Schlichtkrull, T. N. Kipf, P. Bloem, R. Van Den Berg, I. Titov, and
1836
+ M. Welling, “Modeling relational data with graph convolutional
1837
+ networks,” in European semantic web conference.
1838
+ Springer, 2018,
1839
+ pp. 593–607.
1840
+
1841
+ [19] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, “Empirical evalua-
1844
+ tion of gated recurrent neural networks on sequence modeling,”
1845
+ arXiv preprint arXiv:1412.3555, 2014.
1846
+ [20] X. Wang, Y. Ma, Y. Wang, W. Jin, X. Wang, J. Tang, C. Jia, and
1847
+ J. Yu, “Traffic flow prediction via spatial temporal graph neural
1848
+ network,” in Proceedings of The Web Conference 2020, 2020, pp. 1082–
1849
+ 1092.
1850
+ [21] D. Zhang, J. Han, G. Cheng, and M.-H. Yang, “Weakly supervised
1851
+ object localization and detection: A survey,” IEEE transactions on
1852
+ pattern analysis and machine intelligence, vol. 44, no. 9, pp. 5866–
1853
+ 5885, 2021.
1854
+ [22] Y.-F. Li, L.-Z. Guo, and Z.-H. Zhou, “Towards safe weakly super-
1855
+ vised learning,” IEEE transactions on pattern analysis and machine
1856
+ intelligence, vol. 43, no. 1, pp. 334–346, 2019.
1857
+ [23] P. Nodet, V. Lemaire, A. Bondu, A. Cornu´ejols, and A. Ouorou,
1858
+ “From weakly supervised learning to biquality learning: an intro-
1859
+ duction,” in 2021 International Joint Conference on Neural Networks
1860
+ (IJCNN).
1861
+ IEEE, 2021, pp. 1–10.
1862
+ [24] B. Settles, “Active learning literature survey,” 2009.
1863
+ [25] B. Fr´enay and M. Verleysen, “Classification in the presence of label
1864
+ noise: a survey,” IEEE transactions on neural networks and learning
1865
+ systems, vol. 25, no. 5, pp. 845–869, 2013.
1866
+ [26] D. Tiesyte and C. S. Jensen, “Similarity-based prediction of travel
1867
+ times for vehicles traveling on known routes,” in Proceedings of
1868
+ the 16th ACM SIGSPATIAL international conference on Advances in
1869
+ geographic information systems, 2008, pp. 1–10.
1870
+ [27] C.-H. Wu, J.-M. Ho, and D.-T. Lee, “Travel-time prediction with
1871
+ support vector regression,” IEEE transactions on intelligent trans-
1872
+ portation systems, vol. 5, no. 4, pp. 276–281, 2004.
1873
+ [28] R. Sevlian and R. Rajagopal, “Travel time estimation using floating
1874
+ car data,” arXiv preprint arXiv:1012.4249, 2010.
1875
+ [29] W. Luo, H. Tan, L. Chen, and L. M. Ni, “Finding time period-based
1876
+ most frequent path in big trajectory data,” in Proceedings of the 2013
1877
+ ACM SIGMOD international conference on management of data, 2013,
1878
+ pp. 713–724.
1879
+ [30] B. Yang, J. Dai, C. Guo, C. S. Jensen, and J. Hu, “Pace: a path-
+ centric paradigm for stochastic path finding,” The VLDB Journal,
1881
+ vol. 27, no. 2, pp. 153–178, 2018.
1882
+ [31] T.-y. Fu and W.-C. Lee, “Deepist: Deep image-based spatio-
1883
+ temporal network for travel time estimation,” in Proceedings of
1884
+ the 28th ACM International Conference on Information and Knowledge
1885
+ Management, 2019, pp. 69–78.
1886
+ [32] B. Yu, H. Yin, and Z. Zhu, “Spatio-temporal graph convolutional
1887
+ networks: A deep learning framework for traffic forecasting,”
1888
+ arXiv preprint arXiv:1709.04875, 2017.
1889
+ [33] Y. Li, R. Yu, C. Shahabi, and Y. Liu, “Diffusion convolutional
1890
+ recurrent neural network: Data-driven traffic forecasting,” arXiv
1891
+ preprint arXiv:1707.01926, 2017.
1892
+ [34] S. Guo, Y. Lin, H. Wan, X. Li, and G. Cong, “Learning dynam-
1893
+ ics and heterogeneity of spatial-temporal graph data for traffic
1894
+ forecasting,” IEEE Transactions on Knowledge and Data Engineering,
1895
+ 2021.
1896
+ [35] Y. Yang, F. Zhang, and D. Zhang, “Sharededge: Gps-free fine-
1897
+ grained travel time estimation in state-level highway systems,”
1898
+ Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiqui-
1899
+ tous Technologies, vol. 2, no. 1, pp. 1–26, 2018.
1900
+ [36] H. Chen, B. An, G. Sharon, J. Hanna, P. Stone, C. Miao, and Y. Soh,
1901
+ “Dyetc: Dynamic electronic toll collection for traffic congestion
1902
+ alleviation,” in Proceedings of the AAAI Conference on Artificial
1903
+ Intelligence, vol. 32, no. 1, 2018.
1904
+ [37] K. Shao, K. Wang, L. Chen, and Z. Zhou, “Estimation of urban
1905
+ travel time with sparse traffic surveillance data,” in Proceedings of
1906
+ the 2020 4th High Performance Computing and Cluster Technologies
1907
+ Conference & 2020 3rd International Conference on Big Data and
1908
+ Artificial Intelligence, 2020, pp. 218–223.
1909
+ [38] G. Jin, H. Yan, F. Li, J. Huang, and Y. Li, “Spatial-temporal dual
1910
+ graph neural networks for travel time estimation,” arXiv preprint
1911
+ arXiv:2105.13591, 2021.
1912
+ [39] J. James, “Citywide estimation of travel time distributions with
1913
+ bayesian deep graph learning,” IEEE Transactions on Knowledge and
1914
+ Data Engineering, 2021.
1915
+ [40] S. Brakatsoulas, D. Pfoser, R. Salas, and C. Wenk, “On map-
1916
+ matching vehicle tracking data,” in Proceedings of the 31st inter-
1917
+ national conference on Very large data bases, 2005, pp. 853–864.
1918
+ [41] M. Li, A. Ahmed, and A. J. Smola, “Inferring movement tra-
1919
+ jectories from gps snippets,” in Proceedings of the Eighth ACM
1920
+ International Conference on Web Search and Data Mining, 2015, pp.
1921
+ 325–334.
1922
+ [42] Y. Gal and Z. Ghahramani, “A theoretically grounded application
1923
+ of dropout in recurrent neural networks,” Advances in neural
1924
+ information processing systems, vol. 29, 2016.
1925
+ [43] S. Luan, M. Zhao, X.-W. Chang, and D. Precup, “Break the ceiling:
1926
+ Stronger multi-scale deep graph convolutional networks,” arXiv
1927
+ preprint arXiv:1906.02174, 2019.
1928
+ [44] Y. Li, Z. Li, and L. Li, “Missing traffic data: comparison of impu-
1929
+ tation methods,” IET Intelligent Transport Systems, vol. 8, no. 1, pp.
1930
+ 51–57, 2014.
1931
+ [45] M. Haklay and P. Weber, “Openstreetmap: User-generated street
1932
+ maps,” IEEE Pervasive computing, vol. 7, no. 4, pp. 12–18, 2008.
1933
+ [46] C. Yang and G. Gidofalvi, “Fast map matching, an algorithm inte-
1934
+ grating hidden markov model with precomputation,” International
1935
+ Journal of Geographical Information Science, vol. 32, no. 3, pp. 547–
1936
+ 570, 2018.
1937
+ [47] J. H. Friedman, “Greedy function approximation: a gradient boost-
1938
+ ing machine,” Annals of statistics, pp. 1189–1232, 2001.
1939
+ [48] X. Fang, J. Huang, F. Wang, L. Zeng, H. Liang, and H. Wang,
1940
+ “Constgat: Contextual spatial-temporal graph attention network
1941
+ for travel time estimation at baidu maps,” in Proceedings of the 26th
1942
+ ACM SIGKDD International Conference on Knowledge Discovery &
1943
+ Data Mining, 2020, pp. 2697–2705.
1944
+ [49] T. N. Kipf and M. Welling, “Semi-supervised classification with
1945
+ graph convolutional networks,” arXiv preprint arXiv:1609.02907,
1946
+ 2016.
1947
+ Hongjun Wang is working toward the M.S. de-
1948
+ gree in computer science and technology from
1949
+ Southern University of Science and Technology,
1950
+ China. He received the B.E. degree from the
1951
+ Nanjing University of Posts and Telecommunica-
1952
+ tions, China, in 2019. His research interests are
1953
+ broadly in machine learning, with urban comput-
1954
+ ing, explainable AI, data mining, data visualiza-
1955
+ tion.
1956
+ Zhiwen Zhang received the B.E. and M.S. de-
1957
+ gree in Artificial Intelligence from Nankai Univer-
1958
+ sity, China, in 2016 and 2019 respectively. From
1959
+ 2019, he is currently pursuing a Ph.D. degree at
1960
+ the Department of Socio-Cultural Environmental
1961
+ Studies, The University of Tokyo. His current
1962
+ research interests include urban computing and
1963
+ data visualization.
1964
+ Zipei Fan received his B.S. degree in Computer
1965
+ Science from Beihang University, China, in 2012,
1966
+ both M.S. and a Ph.D. degree in Civil Engi-
1967
+ neering from The University of Tokyo, Japan, in
1968
+ 2014 and 2017 respectively. He became Project
1969
+ Researcher and Project Assistant Professor in
1970
+ 2017 and 2019, and he was promoted to Project
1971
+ Lecturer at the Center for Spatial Information
1972
+ Science, the University of Tokyo in 2020. His
1973
+ research interests include ubiquitous computing,
1974
+ machine learning, Spatio-temporal data mining,
1975
+ and heterogeneous data fusion.
1976
+
1977
+ Jiyuan Chen is working towards his B.S. de-
1981
+ gree in Computer Science and Technology from
1982
+ Southern University of Science and Technology,
1983
+ China. His major research fields include artifi-
1984
+ cial intelligence, deep learning, urban computing
1985
+ and data mining.
1986
+ Lingyu Zhang joined Baidu in 2012 as a search
1987
+ strategy algorithm research and development
1988
+ engineer. He joined Didi in 2013 and served as
1989
+ senior algorithm engineer, technical director of
1990
+ taxi strategy algorithm direction, and technical
1991
+ expert of strategy model department. Currently
1992
+ a researcher at Didi AI Labs, he has used machine
+ learning and big data technology to design and
+ lead the implementation of multiple company-
+ level intelligent system engines at Didi, such as
+ the order dispatch system based on combinatorial optimization,
+ the capacity scheduling engine based on density clustering and
+ global optimization, the traffic guidance and personalized
+ recommendation engine, and the "Guess where you are going"
+ personalized destination recommendation system. He has
+ participated in the research, development and application of
+ dozens of the company's international and domestic core
+ technology patents, and is skilled at using mathematical
+ modeling, business model abstraction and machine learning
+ to solve practical business problems. He has won honorary titles
2005
+ such as Beijing Invention and Innovation Patent Gold Award and QCon
2006
+ Star Lecturer, and his research results have been included in top in-
2007
+ ternational conferences related to artificial intelligence and data mining
2008
+ such as KDD, SIGIR, AAAI, and CIKM.
2009
+ Ryosuke Shibasaki was born in Fukuoka,
2010
+ Japan. He received his B.S., M.S., and Doctoral
2011
+ degrees in Civil Engineering from The Univer-
2012
+ sity of Tokyo, Japan, in 1980, 1982, and 1987,
2013
+ respectively. From 1982 to 1988, he was with
2014
+ the Public Works Research Institute, Ministry
2015
+ of Construction. From 1988 to 1991, he was
2016
+ an Associate Professor in the Civil Engineering
2017
+ Department, The University of Tokyo. In 1991,
2018
+ he joined the Institute of Industrial Science, The
2019
+ University of Tokyo. In 1998, he was promoted to
2020
+ Professor in the Center for Spatial Information Science, The University
2021
+ of Tokyo. His research interests cover three-dimensional data acquisi-
2022
+ tion for GIS, conceptual modeling for spatial objects, and agent-based
2023
+ microsimulation in a GIS environment.
2024
+ Prof. Xuan Song received the Ph.D. degree in
2025
+ signal and information processing from Peking
2026
+ University in 2010. In 2017, he was selected as
2027
+ an Excellent Young Researcher of Japan MEXT.
2028
+ In the past ten years, he led and participated
2029
+ in many important projects as a principal inves-
2030
+ tigator or primary actor in Japan, such as the
2031
+ DIAS/GRENE Grant of MEXT, Japan; Japan/US
2032
+ Big Data and Disaster Project of JST, Japan;
2033
+ Young Scientists Grant and Scientific Research
2034
+ Grant of MEXT, Japan; Research Grant of MLIT,
2035
+ Japan; CORE Project of Microsoft; Grant of JR EAST Company and
2036
+ Hitachi Company, Japan. He served as Associate Editor, Guest Editor,
2037
+ Area Chair, Program Committee Member or reviewer for many famous
2038
+ journals and top-tier conferences, such as IMWUT, IEEE Transactions
2039
+ on Multimedia, WWW Journal, Big Data Journal, ISTC, MIPR, ACM
2040
+ TIST, IEEE TKDE, UbiComp, ICCV, CVPR, ICRA and etc.
2041
+
FNE4T4oBgHgl3EQf7A5n/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
GNFIT4oBgHgl3EQfWStC/content/2301.11239v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:46df6a7bb03a5870daf6cf95a0b98f02a8f0d9525af9b0a367adfa8b9a35c70a
3
+ size 7311891
H9FAT4oBgHgl3EQfuR4A/content/tmp_files/2301.08668v1.pdf.txt ADDED
@@ -0,0 +1,2795 @@
1
+ arXiv:2301.08668v1 [cs.CR] 20 Jan 2023
2
+ 1
3
+ Key-and-Signature Compact Multi-Signatures
4
+ for Blockchain: A Compiler with Realizations
5
+ Shaoquan Jiang, Dima Alhadidi, Hamid Fazli Khojir
6
+ Abstract—Multi-signature is a protocol where a
7
+ set of signers jointly sign a message so that
8
+ the final signature is significantly shorter than con-
9
+ catenating individual signatures together. Recently,
10
+ it has found applications in blockchain, where several
11
+ users want to jointly authorize a payment through a
12
+ multi-signature. However, in this setting, there is no
13
+ centralized authority and it could suffer from a rogue
14
+ key attack where the attacker can generate his own
15
+ keys arbitrarily. Further, to minimize the storage on
16
+ blockchain, it is desired that the aggregated public-
17
+ key and the aggregated signature are both as short
18
+ as possible. In this paper, we find a compiler that
19
+ converts a kind of identification (ID) scheme (which
20
+ we call a linear ID) to a multi-signature so that
21
+ both the aggregated public-key and the aggregated
22
+ signature have a size independent of the number
23
+ of signers. Our compiler is provably secure. The
24
+ advantage of our results is that we reduce a multi-
25
+ party problem to a weakly secure two-party problem.
26
+ We realize our compiler with two ID schemes. The
27
+ first is Schnorr ID. The second is a new lattice-based
28
+ ID scheme, which via our compiler gives the first
29
+ regular lattice-based multi-signature scheme that is
+ key-and-signature compact, without a restart during
+ the signing process.
32
+ I. INTRODUCTION
33
+ A multi-signature scheme allows a group of
34
+ signers to jointly generate a signature while no
35
+ subset of them can represent all the members to
36
+ generate it. It was first introduced by Itakura and
37
+ Nakamura [26]. A trivial method is to ask each
38
+ signer to generate a signature on the message and
39
+ concatenate their signatures together. However, this
40
+ is not efficient: (1) the signature size is linear in
41
+ All authors are with School of Computer Science, Uni-
42
+ versity of Windsor, Windsor, ON N9B 3P4. Email: jiang-
43
+ shq@uwindsor.ca
44
+ the number of signers n; (2) we need to provide
45
+ n signer public-keys to the verifier; (3) the verifica-
46
+ tion needs to verify n signatures; (4) all the n
47
+ public-keys need to be provided to the verifier; (5)
48
+ the communication and storage complexity for the
49
+ signature are both linear in n. With applications
50
+ in blockchain, these problems are crucial as the
51
+ signature will be transmitted, verified and stored
52
+ in the blockchain network. Hence, it is desired to
53
+ find multi-signature that has a signature with these
54
+ efficiency measure independent of n.
55
+ Early multi-signature schemes [25], [30], [44]
56
+ assumed the signer keys are chosen honestly. In
57
+ Bitcoin [41], every user can choose his own public-
58
+ key. However, this might raise a very serious issue.
59
+ For example, if a user wants to generate a multi-
+ signature with users holding 3 public-keys g^{x_1}, g^{x_2}, g^{x_3},
+ he could choose s randomly and compute his
+ public-key as pk = g^s (g^{x_1+x_2+x_3})^{-1}. If the ag-
+ gregated public-key (which is the only public-key
+ provided to the verifier) is the product of the
+ four public-keys, then the attacker knows its secret key s and
+ hence can forge a multi-signature. This is called
67
+ a rogue key attack. How to construct a key-and-
68
+ signature compact multi-signature scheme secure
69
+ against a rogue key attack is an important question.
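+ The attack can be illustrated with a toy computation over a small prime-order subgroup; the parameters below are illustrative only and are not taken from the paper.
+ # Toy sketch of the rogue-key attack described above.
+ p, q, g = 23, 11, 4                     # g = 4 has order 11 modulo 23
+ x1, x2, x3 = 3, 5, 7                    # honest users' secret keys
+ pk1, pk2, pk3 = pow(g, x1, p), pow(g, x2, p), pow(g, x3, p)
+
+ s = 9                                   # attacker's chosen exponent
+ # pk4 = g^s * (g^(x1+x2+x3))^(-1) mod p
+ pk4 = pow(g, s, p) * pow(pow(g, x1 + x2 + x3, p), -1, p) % p
+
+ # If keys are aggregated by multiplication, the aggregate equals g^s,
+ # whose discrete logarithm s is known to the attacker.
+ agg = pk1 * pk2 * pk3 * pk4 % p
+ assert agg == pow(g, s, p)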
70
+ A. Related Works
71
+ A multi-signature scheme [26] is a special case
72
+ of aggregate signature [12] where each signer of
73
+ the latter can sign a possibly different message.
74
+ In this work, we only discuss a multi-signature
75
+ scheme with a motivation of blockchain application
76
+ where the public-key is arbitrary and the target is to
77
+ minimize the signature and the aggregated public-
78
+ key size. Micali et al. [39] requires an interactive
79
+ key generation among signers and hence is not
80
+
81
+ suitable. Boldyreva [11] and Lu et al. [32] require
82
+ signers to add proof of possession (PoP) to their
83
+ public-keys, which is typically a signature of the
84
+ user’s public-key. The main disadvantage of this
85
+ assumption is the increase of the public-key size.
86
+ In the signing process, it also requires a signer to
87
+ verify the PoP of all the other signers. In addition,
88
+ this assumption is not compatible with an ordinary
89
+ signature where PoP is not required.
90
+ Bellare and Neven [8] converted the Schnorr
91
+ signature [48] into a multi-signature by linearly
92
+ adding the signature together. Their protocol is of 3-
93
+ round but without the aggregated key aggregation.
94
+ Bagherzandi et al. [3], Ma et al. [36], Syta et al.
95
+ [51] and Maxwell et al. [38] attempted to construct
96
+ a 2-round multi-signature scheme which essentially
97
+ tries to remove the preliminary committing message
98
+ which is a hash of the first message in an ID scheme
99
+ (see [8] for example). However, Drijvers et al. [17]
100
+ pointed out that all these schemes have proof flaws.
101
+ They then proved that a slightly modified scheme
102
+ of Bagherzandi et al. [3] is secure under the PoP
103
+ assumption. Other 2-round proposals that support
104
+ the key-and-signature aggregation are due to Alper
105
+ and Burdges [2] and Nick et al. [42], [43], where
106
+ Nick et al. [43] employed a generic NIZK proof
107
+ while the other two proposals [2], [42] are efficient
108
+ in aggregated key and signature and verification
109
+ cost (similar to the original Schnorr signature).
110
+ Boneh et al. [13] proved the security of a modi-
111
+ fied version of Maxwell et al. [38] via an added
112
+ preliminary committing message and hence it is a
113
+ 3-round scheme. Bellare and Dai [4] proposed a 2-
114
+ round multi-signature scheme with a tight reduction
115
+ without the key aggregation.
116
+ The above constructions are all based on variants
117
+ of the discrete logarithm assumption. It is important
118
+ to find out quantum-resistant schemes while this is
119
+ not easy. For instance, the lattice-based scheme [28] is
+ insecure [31]. Also, the proof for a ring-SIS based
+ scheme [27] is invalid: it reduces to finding a short
+ W for the ring-SIS problem AW = 0 with public
+ parameter A, but the obtained W is trivially
+ zero, which does not contradict the ring-SIS as-
125
+ sumption. Some schemes [21], [22], [37], [14] need
126
+ an exponential number of restarts of the signing
127
+            Key Compact   Round Comp   Restart   Assump/Remark
+ [21]       No            3            exp       R-LWE
+ [22]       No            3            exp       non-standard
+ [14]       Yes           2            exp       R-MLWE & R-MSIS
+ [16]       No            2            exp       R-MLWE & R-MSIS
+ [20]       No            1            0         R-SIS; limited-sign
+ ours       Yes           3            0         R-SIS & R-LWE
+ Fig. 1.
170
+ Performance of Lattice-based Secure Multi-
171
+ Signature Schemes: compact means the size indepen-
172
+ dent of ♯ signers; all schemes have compact signatures;
173
+ schemes requiring honest key generation are not in-
174
+ cluded; ♯ restart is ♯ of repeated runs of signing algorithm
175
+ (in case it aborts); exp means exponential in either ♯
176
+ signers or the security parameter; limited-sign restricts
177
+ each user to have a bounded number of signings.
178
+ algorithm, due to a noticeable probability of abort
179
+ event. Some schemes [19], [37] are provably secure
180
+ only when all the user keys are generated honestly
181
+ which is not suitable for blockchain. Damg˚ard et al.
182
+ [16] and Fleischhacker et al. [20] do not support key
183
+ aggregations while the latter can only allow a signer
184
+ to sign a predefined number of signatures. Thus,
185
+ currently no multi-signature scheme can support a
186
+ key-and-signature aggregation without a restart and
187
+ allow an unlimited number of signing. Our work is
188
+ to study this question in details.
189
+ B. Contribution
190
+ In this paper, we consider the key-and-signature
191
+ compact multi-signature. That is, both key and
192
+ signature support aggregation and have a size inde-
193
+ pendent of the number of signers. Toward this, we
194
+ formulate the linear identification scheme (ID) and
195
+ propose a compiler that transforms a linear ID to a
196
+ key-and-signature compact multi-signature scheme,
197
+ where the signature size and the aggregated public-
198
+ key are independent of the number of signers.
199
+ The advantage of our compiler is that we reduce
200
+ the multi-party signature problem to a two-party
201
+ identification problem and hence it is much easier
202
+
203
+ to deal with, and also the security proof for the latter
204
+ should be simpler. We formulate the linearity of ID
205
+ via the R-module from algebra. Our compiler is
206
+ provably secure. We realize our compiler with two
207
+ ID schemes. The first is Schnorr ID scheme. The
208
+ second one is a new ID scheme over ring that is
209
+ secure under ring-LWE and ring-SIS assumptions.
210
+ Our ID scheme via the compiler gives the first
211
+ key-and-signature compact multi-signature without
212
+ a restart during the signing process (see Fig. 1 for
213
+ a comparison with other schemes), where a signer
214
+ can do an unlimited number of signing (unlike
215
+ [20], which can only do a predetermined number of
216
+ signings). The security of ID schemes is formulated
217
+ in terms of unforgeability against an aggregated key
218
+ of multi-users with at least one of them honest.
219
+ Our ID schemes are proven secure through a new
220
+ forking lemma (called nested forking lemma). Our
221
+ forking algorithm has a nested rewinding and is
222
+ more effective than the previous algorithms which
223
+ fork at two or more spots sequentially.
224
+ II. PRELIMINARIES
225
+ Notations. We will use the following notations.
226
+ • x ← S samples x uniformly random from a
227
+ set S.
228
+ • For a randomized algorithm A, u = A(x; r)
229
+ denotes the output of A with input x and
230
+ randomness r, while u ← A(x) denotes the
231
+ random output (with unspecified randomness).
232
+ • We use PR(r) to denote the probability
233
+ Pr(R = r); for Boolean variable G, Pr(G)
234
+ means Pr(G = 1).
235
+ • PPT stands for probabilistic polynomial time.
236
+ • Min-entropy: H∞(X) = −log(max_x P_X(x)).
240
+ • A|B stands for A concatenating with B.
241
+ • negl(λ) is negligible: lim_{λ→∞} poly(λ) · negl(λ) = 0 for any polynomial poly(λ).
250
+ • [ν] denotes set {1, · · · , ν}.
251
+ A. Ring and Module
252
+ In this section, we review a math concept: mod-
253
+ ule (for details, see [29]). We start with the concept
254
+ of ring. A ring A is a set, associated with multipli-
255
+ cation and addition operators, respectively written
256
+ as a product and a sum, satisfying the following
257
+ conditions:
258
+ - R-1. A is a commutative group under addition
259
+ operator + with identity element 0.
260
+ - R-2.
261
+ A is associative under multiplication
262
+ operator: for a, b, c ∈ A, (ab)c=a(bc). Also,
263
+ it has a unit element 1: 1a=a.
264
+ - R-3.
265
+ It satisfies the distributive law: for
266
+ a, b, c ∈ A, a(b + c) = ab + ac and (b + c)a =
267
+ ba + ca.
268
+ In this paper, we only consider a commutative ring:
269
+ if a, b ∈ A, then ab = ba. That is, when we say
270
+ ring, it always means a commutative ring. Note that
271
+ a non-zero element in a ring does not necessarily
272
+ have a (multiplicative) inverse, where b is an inverse
273
+ of a if ab = 1. For instance, in Z10, 3 is an inverse
274
+ of 7 while 5 does not have an inverse. If A is a
275
+ commutative ring with 0 ̸= 1 and every non-zero
276
+ element in A has an inverse, then A is a field.
277
+ Now we introduce the concept module.
278
+ Definition 1: Let R be a ring. An Abelian group
279
+ M (with group operator ⊞) is a R-module, if (1) it
280
+ has defined a multiplication operator • between R
281
+ and M: for any r ∈ R, m ∈ M, r•m ∈ M; (2) the
282
+ following conditions are satisfied: for any r, s ∈ R
283
+ and x, y ∈ M,
284
+ 1. r • (x ⊞ y) = (r • x) ⊞ (r • y);
285
+ 2. (r + s) • x = (r • x) ⊞ (s • x)
286
+ 3. (rs) • x = r • (s • x)
287
+ 4. 1R • x = x, where 1R is the multiplicative
288
+ identity of R.
289
+ We remark that the group operator ⊞ for M is
290
+ not necessarily the regular number addition (e.g., it
291
+ can be the integer multiplication).
292
+ In the following, we give some R-module exam-
293
+ ples.
294
+ Example 1. Let q be a prime and M is a group of
+ order q with generator g (i.e., M = ⟨g⟩). Examples
+ of M are a subgroup of Z*_p or an elliptic curve
+ group. x, y ∈ M, xy denotes its group operation.
+ Then, M is a Z_q-module with • defined as r • m := m^r,
+ for r ∈ Z_q and m ∈ M. It is well-defined:
+ since m^q = 1, any representative r in Z_q such as
+ r, r + q gives the same result r • m. For r, s ∈ Z_q
+ and x, y ∈ M, we check the module conditions:
+ (1) s • (xy) = (xy)^s = x^s y^s = (s • x)(s • y);
+ (2) (r + s) • m = m^{r+s} = m^r m^s = (r • m)(s • m);
+ (3) (rs) • x = x^{rs} = (x^s)^r = r • (s • x);
+ (4) 1 • x = x^1 = x.
311
+ Example 2.
312
+ For any integer n > 0, M = Zn
313
+ (as an additive group) is a Zn-module, where • is
314
+ simply the modular multiplication. The verification
315
+ of module properties is straightforward.
316
+ Example 3.
317
+ Let n be a positive integer. Then,
318
+ the polynomial ring M = Zn[x] (as an additive
319
+ group) is a Zn-module with • being the modular n
320
+ multiplication: for s ∈ Z_n and m = Σ_{i=0}^{t} u_i x^i,
+ s • m = Σ_{i=0}^{t} (u_i s) x^i, where u_i s is the multiplication over
324
+ Zn. All the other verifications of the properties are
325
+ straightforward.
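+ The module axioms in the examples above can be checked numerically; the following is a small illustration of Example 1 with toy parameters (p = 23, q = 11, g = 4), which are not from the paper.
+ # r • m = m^r mod p turns the order-q subgroup <g> of Z_p* into a Z_q-module.
+ p, q, g = 23, 11, 4
+ def dot(r, m):
+     return pow(m, r % q, p)
+
+ x, y = pow(g, 3, p), pow(g, 7, p)
+ r, s = 5, 9
+ assert dot(s, x * y % p) == dot(s, x) * dot(s, y) % p   # s•(xy) = (s•x)(s•y)
+ assert dot(r + s, x) == dot(r, x) * dot(s, x) % p        # (r+s)•x = (r•x)(s•x)
+ assert dot(r * s, x) == dot(r, dot(s, x))                # (rs)•x = r•(s•x)
+ assert dot(1, x) == x                                    # 1•x = x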
326
+ III. NESTED FORKING LEMMA
327
+ The original forking lemma was formulated by
328
+ Pointcheval and Stern [46] to analyze Schnorr sig-
329
+ nature [48]. It basically shows that if the attacker
330
+ can forge a Schnorr signature in the random or-
331
+ acle model [7] with a non-negligible probability,
332
+ then it can generate two forgeries when rewinding
333
+ to the place where the random oracle value was
334
+ revised. Bellare and Neven [8] generalized the
335
+ forking lemma to a general algorithm A, without
336
+ resorting to a signature scheme. This was further
337
+ generalized by Bagherzandi et al. [3] so that A is
338
+ rewound to many places. However, the algorithm
339
+ needs O(n^2 q/ε) rewindings, where q is the number
+ of random values in one run of A (which is
+ the number of random oracle queries in typical
+ cryptographic applications), ε is the success
+ probability of A, and n is the number of rewinding
+ spots. However, this is not efficient and can even
+ be essentially exponential for a non-negligible ε.
+ The main issue comes from the fact that the rewinding
347
+ for each spot is repeated independently until a
348
+ new success is achieved. But it does not relate
349
+ different rewindings. In this section, we give a
350
+ new forking lemma for two rewinding spots (say
351
+ at index i, j with i < j) while it can be generalized
352
+ to n rewinding spots. The new feature here is
353
+ that the rewinding is nested. To see this, suppose
354
+ that the first run of A uses the list of random
355
+ values: h1, · · · , hi−1, hi, · · · , hj−1, hj, · · · , hq and
356
+ the rewinding spots are chosen at index i and
357
+ j. Then, we execute A for another 3 runs with
358
+ rewindings that respectively the following lists of
359
+ random values:
360
+ h_1, ..., h_{i-1}, h_i, ..., h_{j-1}, h'_j, ..., h'_q;                                  (1)
+ h_1, ..., h_{i-1}, \bar{h}_i, ..., \bar{h}_{j-1}, \bar{h}_j, ..., \bar{h}_q;            (2)
+ h_1, ..., h_{i-1}, \bar{h}_i, ..., \bar{h}_{j-1}, \bar{h}'_j, ..., \bar{h}'_q.          (3)
369
+ That is, execution (1) rewinds the initial execu-
370
+ tion to index j; execution (2) rewinds the initial
371
+ execution to index i while execution (3) rewinds
372
+ the (rewound) execution (2) to index j. With these
373
+ related executions, we are able to claim the outputs
374
+ are all successful with probability at least O(ǫ4),
375
+ which is still non-negligible. The advantage of this
376
+ nested forking is that it can be directly used to
377
+ extract a secret hidden in recursive random oracle
378
+ evaluations. Our algorithm will use the following
379
+ notations.
380
+ h[[1, ..., q]] := h_1, ..., h_q (a sequence of elements);
+ h[[1, ..., \hat{i}, ..., q]] := h_1, ..., h_{i-1}, \hat{h}_i, ..., \hat{h}_q;
+ h[[1, ..., \hat{i}, ..., j, j + 1, ..., q]] := h_1, ..., h_{i-1}, \hat{h}_i, ..., \hat{h}_j, \bar{h}_{j+1}, ..., \bar{h}_q.
+ Other variants such as h[[1, ..., i, ..., j, j + 1, ..., q]] can be defined
+ similarly. Our forking algorithm is in Fig. 2.
397
+ Before introducing our lemma, we give two facts.
398
+ Fact 1.
399
+ For any random variable I, R and any
400
+ function F() on I, R, we have
401
+ Pr(I = i∧F(I, R) = f) = Pr(I = i∧F(i, R) = f).
402
+ Proof. For any function G and any random variable
403
+ W, Pr(G(W) = g) = Σ_{w:G(w)=g} P_W(w). Apply-
405
+ ing this to W = (I, R) and G = (I, F), a simple
406
+ calculation gives the result as (I, F) = (i, f) is
407
+ I = i ∧ F = f.
408
+
409
+
410
+ —————————————————————-
411
+ Algorithm FA(x)
412
+ —————————————————————-
413
+ pick coin ρ for A at random
414
+ h1, · · · , hq ← H
415
+ (I0, J0, σ0) ← A(x, h[[1, · · · , q]]; ρ)
416
+ If I0 = 0 or J0 = 0 or I0 ≥ J0, return Fail
417
+ ˆhJ0, · · · , ˆhq ← H
418
+ (I1, J1, σ1) ← A(x, h[[1, · · · ,
419
+
420
+ J0, · · · , q]]; ρ)
421
+ If I1 = 0 or J1 = 0, return Fail
422
+ ¯hI0, · · · , ¯hq ← H
423
+ (I2, J2, σ2) ← A(x, h[[1, · · · , I0, · · · , q]]; ρ)
424
+ If I2 = 0 or J2 = 0, return Fail
425
+ hJ0, · · · , hq ← H
426
+ (I3, J3, σ3) ← A(x, h[[1, · · ·, I0, · · · , J0 − 1, J0, · · · , q]]; ρ)
427
+ If I3 = 0 or J3 = 0, return Fail
428
+ Let Flag1 = (I0 = I1 = I2 = I3) ∧ (J0 = J1 =
429
+ J2 = J3)
430
+ Let Flag2 = (hI0 ̸= ¯hI0)∧(hJ0 ̸= ˆhJ0)∧(¯hJ0 ̸= hJ0)
431
+ If Flag1 ∧ Flag2, return (I0, J0, {σi}3
432
+ i=0)
433
+ else return Fail.
434
+ —————————————————————-
435
+ Fig. 2. Forking Algorithm FA
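+ A minimal Python sketch of the forking algorithm F_A in Fig. 2 is given below; it assumes A is provided as a deterministic function A(x, h_list, rho) returning (I, J, sigma), and the helper names are illustrative.
+ import random
+
+ def fork(A, x, q, H):
+     rho = random.random()                      # coins for A
+     h = [random.choice(H) for _ in range(q)]
+     I0, J0, s0 = A(x, h, rho)
+     if I0 == 0 or J0 == 0 or I0 >= J0:
+         return None                            # Fail
+     # Run 1: rewind the initial run at index J0 with fresh values.
+     h1 = h[:J0 - 1] + [random.choice(H) for _ in range(q - J0 + 1)]
+     I1, J1, s1 = A(x, h1, rho)
+     # Run 2: rewind the initial run at index I0 with fresh values.
+     h2 = h[:I0 - 1] + [random.choice(H) for _ in range(q - I0 + 1)]
+     I2, J2, s2 = A(x, h2, rho)
+     # Run 3: rewind run 2 at index J0 with fresh values (nested rewinding).
+     h3 = h2[:J0 - 1] + [random.choice(H) for _ in range(q - J0 + 1)]
+     I3, J3, s3 = A(x, h3, rho)
+     if 0 in (I1, J1, I2, J2, I3, J3):
+         return None
+     same_spots = (I0 == I1 == I2 == I3) and (J0 == J1 == J2 == J3)
+     fresh = (h[I0-1] != h2[I0-1]) and (h[J0-1] != h1[J0-1]) and (h2[J0-1] != h3[J0-1])
+     if same_spots and fresh:
+         return I0, J0, (s0, s1, s2, s3)
+     return None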
436
Fact 2. Let B′, B, R be independent random variables with B′ and B identically distributed. Let G be a fixed boolean function. Then,

Pr(G(R, B) ∧ G(R, B′)) = Σ_r P_R(r) · Pr²(G(r, B)).

Proof. Notice that Pr(X = x) = Σ_r Pr(R = r, X = x) for variables R, X. Together with Fact 1, we have

Pr(G(R, B) ∧ G(R, B′))
= Σ_r Pr(R = r, {G(R, B) ∧ G(R, B′)} = 1)
= Σ_r Pr(R = r, {G(r, B) ∧ G(r, B′)} = 1)
= Σ_r P_R(r) · Pr(G(r, B)) · Pr(G(r, B′))
= Σ_r P_R(r) · Pr²(G(r, B)),

where the third equality uses the independence of R, B, B′ and the last equality uses the fact that B′ and B are identically distributed. □
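As a quick sanity check of Fact 2, the following snippet estimates the left-hand side by simulation and computes the right-hand side exactly for a small toy predicate G (the particular G, value sets and trial count are arbitrary choices made only for illustration).

```python
import random

def check_fact2(trials=200_000, seed=1):
    """Monte-Carlo check of Fact 2 for a toy boolean predicate G."""
    rng = random.Random(seed)
    R_vals, B_vals = [0, 1, 2], [0, 1, 2, 3]
    G = lambda r, b: (r + b) % 3 == 0
    hits = 0
    for _ in range(trials):
        r = rng.choice(R_vals)
        b, b2 = rng.choice(B_vals), rng.choice(B_vals)   # B, B' i.i.d., independent of R
        hits += G(r, b) and G(r, b2)
    lhs = hits / trials
    # exact right-hand side: sum_r P_R(r) * Pr(G(r, B))^2
    rhs = sum((1 / len(R_vals)) * (sum(G(r, b) for b in B_vals) / len(B_vals)) ** 2
              for r in R_vals)
    return lhs, rhs   # both values should be close to 0.125 for this toy G

# Example: print(check_fact2())
```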
Now we are ready to present our forking lemma.

Lemma 1: Let q ≥ 2 be a fixed integer and H be a set of size N ≥ 2. Let A be a randomized algorithm that on input x, h_1, · · · , h_q returns a triple, the first two elements of which are integers from {0, 1, · · · , q} and the last element of which is a side output. Let IG be a randomized algorithm (called the input generator). The accepting probability of A, denoted by acc, is defined as the probability that I, J ≥ 1 in the experiment

x ← IG; h_1, · · · , h_q ← H; (I, J, σ) ← A(x, h[[1, · · · , q]]).

The forking algorithm FA associated with A is a randomized algorithm that takes x as input and proceeds as in Fig. 2. Let frk = Pr[FA(x) ≠ Fail : x ← IG]. Then,

frk ≥ 8 · acc⁴ / (q³(q − 1)³) − 3/N.   (4)
Proof. With respect to Flag_1, we define Flag*_1 as the event

(I_0 = · · · = I_3 ≥ 1) ∧ (J_0 = · · · = J_3 ≥ 1) ∧ (J_0 > I_0).

Then, it is easy to check that FA(x) ≠ Fail is equivalent to Flag*_1 ∧ Flag_2 = 1. Since each of h_{I_0} = h̄_{I_0} (resp. h_{J_0} = ĥ_{J_0}, or h̄_{J_0} = h̃_{J_0}) in ¬Flag_2 holds with probability 1/N, it follows that

frk = Pr(Flag*_1 ∧ Flag_2 = 1) ≥ Pr(Flag*_1 = 1) − 3/N.   (5)

Notice that

Pr(Flag*_1 = 1) = Σ_{i=1}^{q} Σ_{j=i+1}^{q} Pr(∧_{b=0}^{3} {I_b = i ∧ J_b = j}).   (6)
Let A_1 (resp. A_2, A_{12}) be three variants of algorithm A whose only difference is that the output is the first element (resp. the second element, the first two elements) of A's output. For instance,

J_1 = A_2(x, h[[1, · · · , J_0 − 1, Ĵ_0, · · · , q]]; ρ),   (7)
I_2 = A_1(x, h[[1, · · · , I_0 − 1, Ī_0, · · · , q]]; ρ).   (8)

Assigning I_0 = i and J_0 = j, we denote

J′_1 = A_2(x, h[[1, · · · , j − 1, ĵ, · · · , q]]; ρ),   (9)
I′_2 = A_1(x, h[[1, · · · , i − 1, ī, · · · , q]]; ρ).   (10)

We can similarly define I′_1, J′_2, I′_3, J′_3. So I_b, J_b for b ≥ 1 are functions (of A's inputs and randomness), and when assigning I_0 = i and J_0 = j, they become I′_b, J′_b. Hence, we can apply Fact 1 to evaluate Eq. (6): assigning I_0 = i and J_0 = j and applying Fact 1 to realize I_0 = i and J_0 = j in A_1 (for I_b) and A_2 (for J_b), I_b and J_b respectively become I′_b and J′_b.
Hence, we have

Pr(Flag*_1 = 1)   (11)
= Σ_{i=1}^{q} Σ_{j=i+1}^{q} Pr(∧_{b=0}^{3} {I′_b = i ∧ J′_b = j}),   (12)

where I_0, J_0 are rewritten as I′_0, J′_0 for brevity (so the term {I_0 = i ∧ J_0 = j} becomes {I′_0 = i ∧ J′_0 = j}). Notice that ∧_{b=0}^{1} (I′_b = i ∧ J′_b = j) is a random variable, with randomness R = (x, ρ, h_1, · · · , h_{i−1}) and B = (h_i, · · · , h_q, ĥ_j, · · · , ĥ_q). So we can define

∧_{b=0}^{1} (I′_b = i ∧ J′_b = j) = G(R, B)   (13)

for some boolean function G. Besides, by inspecting the definition of I′_b, J′_b, we can see that

∧_{b=2}^{3} (I′_b = i ∧ J′_b = j) = G(R, B′)   (14)

with B′ = (h̄_i, · · · , h̄_q, h̃_j, · · · , h̃_q). Hence, applying Fact 2 to Eq. (12), we have

Pr(Flag*_1 = 1)
= Σ_{1≤i<j≤q} Σ_r P_R(r) · Pr²(∧_{b=0}^{1} (I′_{br} = i ∧ J′_{br} = j))
= Σ_{1≤i<j≤q} Σ_r P_R(r) · Pr²(∧_{b=0}^{1} (I′_{br}, J′_{br}) = (i, j)),   (15)

where I′_{br} (resp. J′_{br}) is I′_b (resp. J′_b) with R = r. Notice that (I′_{0r}, J′_{0r}) = (i, j) is a boolean random variable (i.e., it is true only if the equality holds), determined by h_i, · · · , h_q. We can define

G′(S, C) := {(I′_{0r}, J′_{0r}) = (i, j)}   (16)

for some function G′, where S = (h_i, · · · , h_{j−1}) and C = (h_j, · · · , h_q). Checking the definition of (I′_{1r}, J′_{1r}), we can see that

{(I′_{1r}, J′_{1r}) = (i, j)} = G′(S, C′)   (17)

with C′ = (ĥ_j, · · · , ĥ_q).
Thus, Eq. (15) becomes

Pr(Flag*_1 = 1) = Σ_{1≤i<j≤q} Σ_r P_R(r) · Pr²(G′(S, C) ∧ G′(S, C′)).   (18)

Hence, we can apply Fact 2 to Eq. (18) and obtain

Pr(Flag*_1 = 1)
= Σ_{1≤i<j≤q} Σ_r P_R(r) [ Σ_s P_S(s) · Pr²((I′_{0rs}, J′_{0rs}) = (i, j)) ]²
≥ Σ_{1≤i<j≤q} [ Σ_{r,s} P_R(r) P_S(s) · Pr²((I′_{0rs}, J′_{0rs}) = (i, j)) ]²
≥ Σ_{1≤i<j≤q} [ Σ_{r,s} P_R(r) P_S(s) · Pr((I′_{0rs}, J′_{0rs}) = (i, j)) ]⁴
= Σ_{1≤i<j≤q} [ Pr((I′_0, J′_0) = (i, j)) ]⁴
≥ [ Σ_{1≤i<j≤q} Pr((I′_0, J′_0) = (i, j)) ]⁴ / (q³(q − 1)³/2³),

where (I′_{0rs}, J′_{0rs}) is (I′_{0r}, J′_{0r}) with S = s; the first two inequalities follow from the Cauchy-Schwarz inequality¹ (the first one over the distribution P_R(·) and the second one over the distribution P_R(·)P_S(·)); the last inequality applies the Cauchy-Schwarz inequality Σ_{i=1}^{n} x_i² ≥ (Σ_i x_i)²/n twice, noticing that y_i⁴ = (y_i²)², so that the first application uses x_i = y_i². Finally, notice that I′_0 = I_0 and J′_0 = J_0 by definition. Also, Σ_{1≤i<j≤q} Pr((I_0, J_0) = (i, j)) is exactly acc by definition. It follows that Pr(Flag*_1 = 1) ≥ acc⁴/(q³(q − 1)³/2³). From Eq. (5), we have frk ≥ 8 · acc⁴/(q³(q − 1)³) − 3/N. □
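For intuition about the loss in the bound (4), the small helper below evaluates its right-hand side for concrete, purely illustrative parameter values.

```python
def frk_lower_bound(acc: float, q: int, N: int) -> float:
    """Lower bound on frk from Eq. (4): 8*acc^4 / (q^3 (q-1)^3) - 3/N."""
    return 8 * acc**4 / (q**3 * (q - 1)**3) - 3 / N

# e.g. acc = 0.5, q = 10 random-oracle answers, N = |H| = 2**128:
# frk_lower_bound(0.5, 10, 2**128) is about 6.9e-7, still non-negligible.
```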
IV. MODEL OF MULTI-SIGNATURE

In this section, we introduce the model of multi-signature. It consists of the multi-signature definition and the security formalization.

¹ Σ_i p_i x_i² ≥ (Σ_i p_i x_i)², if p_i ≥ 0 and Σ_i p_i = 1.
+ A. Syntax
749
A multi-signature is a signature produced by a group of signers, each of whom holds a public key and a private key. They jointly generate a signature, and the interaction between them proceeds in rounds. Signers are pairwise connected, but the channels are not secure. The signing protocol generates a signature whose successful verification indicates that all signers have agreed to sign the message. The target is a compact signature that is shorter than the concatenation of all signers' individual signatures.
760
+ Definition 2: A multi-signature is a tuple of al-
761
+ gorithms (Setup, KeyGen, Sign, Verify), described
762
+ as follows.
763
+ Setup. Given security parameter λ, it generates a
764
+ system parameter param that serves as part of the
765
+ input for KeyGen, Sign, Verify (but for brevity, we
766
+ omit it).
767
+ KeyGen. It takes param as input and outputs for
768
+ a user a private key sk and a public-key pk.
769
+ Sign.
770
+ Assume
771
+ n
772
+ users
773
+ with
774
+ public-keys
775
+ (pk1, · · · , pkn) want to jointly sign a message
776
+ M. Then, each user i takes its private key ski as
777
+ input and interacts with other signers. Finally, each
778
+ of them outputs a signature σ (note: this is for
779
+ simplicity only; in literature, usually a designated
780
+ leader outputs σ). Besides, there is a function F
781
+ that aggregates (pk1, · · · , pkn) into a compact
782
+ public-key pk = F(pk1, · · · , pkn).
783
+ Verify. Upon (σ, M) with the aggregated public-
784
+ key pk = F(pk1, · · · , pkn), verifier takes σ, M and
785
+ pk as input, outputs 1 (for accept) or 0 (for reject).
786
+ Remark.
787
+ The verify algorithm only uses the
788
+ aggregated key pk to verify the signature. This
789
+ is important for blockchain, where the recipient
790
+ only uses pk as the public-key. Also, the redeem
791
+ signature only uses the multi-signature σ. It is
792
+ desired that both pk and σ are independent of
793
+ n while no attacker can forge a valid signature
794
w.r.t. this short pk. Nevertheless, our definition does not place any restriction on pk in general; in particular, pk can simply be (pk_1, · · · , pk_n).
797
+ B. Security Model
798
+ In this section, we introduce the security model
799
+ [13] of a multi-signature. It formulates the exis-
800
+ tential unforgeability. Essentially, it says that no at-
801
+ tacker can forge a valid signature on a new message
802
+ as long as the signing group contains an honest
803
member. Toward this, the attacker is given access to a signing oracle and may create fake public-keys at will.
805
+ The security is defined through a game between a
806
+ challenger CHAL and an attacker A.
807
+ Initially, CHAL runs Setup(1λ) to generate sys-
808
+ tem parameter param and executes KeyGen to
809
+ generate a public-key pk∗ and a private key sk∗. It
810
+ then provides pk∗|param to A who interacts with
811
+ CHAL through signing oracle below.
812
+ Sign Os(PK, M).
813
+ Here PK is a set of distinct
814
+ public-keys with pk∗ ∈ PK. Upon this query,
815
+ CHAL represents pk∗ and A represents PK −
816
+ {pk∗} to run the signing protocol on message M.
817
+ Finally, Os outputs the multi-signature σ (if it
818
+ succeeds) or ⊥ (if it fails).
819
Forgery. Finally, A outputs a signature σ* for a message M* w.r.t. a set of distinct public-keys (pk*_1, · · · , pk*_N) s.t. pk* = pk*_i for some i. A succeeds if two conditions are met: (a) Verify(pk*, σ*, M*) = 1, where pk* = F(pk*_1, · · · , pk*_N); (b) no query ((pk*_1, · · · , pk*_N), M*) was issued to Os. Denote the event of a successful forgery by succ.
862
+ Now we can define the security of a multi-
863
+ signature.
864
Definition 3: A multi-signature scheme (Setup, KeyGen, Sign, Verify) is existentially unforgeable against chosen-message attack (or EU-CMA for short) if it satisfies the correctness and existential unforgeability properties below.
873
+ • Correctness.
874
+ For (sk1, pk1), · · · , (skn, pkn)
875
+ generated by KeyGen, the signature generated
876
+ by signing algorithm on a message M will
877
+ pass the verification, except for a negligible
878
+ probability.
879
+ • Existential Unforgeability. For any PPT ad-
880
+ versary A, Pr(succ(A)) is negligible.
881
The multi-signature scheme is said to be t-EU-CMA if it is EU-CMA w.r.t. any adversary A that always restricts the number of signers in each signing query and in the final forgery to be at most t.
886
+ V. MODEL OF CANONICAL LINEAR
887
+ IDENTIFICATION
888
+ In this section, we introduce a variant model
889
+ of canonical identification (ID) scheme and extend
890
+ it with linearity. We label the ID scheme with a
891
+ parameter τ. This is needed in order to include
892
+ our lattice-based ID scheme as a realization for our
893
+ multi-signature compiler.
894
+ Definition 4: A canonical identification scheme
895
+ with parameter τ ∈ N is a tuple of algorithms
896
+ ID = (Setup, KeyGen, P, Vτ, Θ), where Setup
897
+ takes security parameter λ as input and generates
898
+ a system parameter param; KeyGen is a key
899
+ generation algorithm that takes param as input and
900
+ outputs a public key pk and a private key sk;
901
+ P is an algorithm, executed by prover; Vτ is a
902
+ verification algorithm parameterized by τ, executed
903
+ by Verifier; Θ is a set. ID scheme is a three-
904
+ round protocol depicted in Fig. 3, where Prover
905
+ first generates a committing message CMT with
906
+ H∞(CMT) = ω(log λ), and then Verifier replies
907
+ with a challenge CH ← Θ and finally Prover
908
+ finishes with a response Rsp which will be either
909
+ rejected or accepted by Vτ.
910
Denote the domains of sk, pk, CMT, Rsp respectively by SK, PK, CMT, RSP. In the following, we define linearity and simulability for an ID scheme. Simulability appeared before (e.g., [1]), while the linearity is new.
Linearity. A canonical ID scheme ID = (Setup, KeyGen, P, V_τ, Θ) is linear if it satisfies the following conditions.

i. SK, PK, CMT, RSP are R-modules for some ring R with Θ ⊆ R (as a set);

ii. For any λ_1, · · · , λ_t ∈ Θ and public/private key pairs (sk_i, pk_i) (i = 1, · · · , t), we have that sk = Σ_{i=1}^t λ_i • sk_i is a private key of pk = Σ_{i=1}^t λ_i • pk_i. Note: the operator • between R and SK (resp. PK, CMT, RSP) might be different (as long as it is clear from the context), even though we use the same symbol •.

iii. Let λ_i ← Θ and (pk_i, sk_i) ← KeyGen(1^κ), for i = 1, · · · , t. If CMT_i|CH|Rsp_i is a faithfully generated transcript of the ID scheme w.r.t. pk_i, then

V_τ(pk, CMT|CH|Rsp) = 1,   (19)

where pk = Σ_{i=1}^t λ_i • pk_i, CMT = Σ_{i=1}^t λ_i • CMT_i and Rsp = Σ_{i=1}^t λ_i • Rsp_i.

Simulability. ID is simulatable if there exists a PPT algorithm SIM s.t. for (sk, pk) ← KeyGen(1^λ), CH ← Θ and (CMT, Rsp) ← SIM(CH, pk, param), the transcript CMT|CH|Rsp is indistinguishable from a real transcript, even if the distinguisher is given pk|param and has access to the oracle O_id(sk, pk), where O_id(sk, pk) proceeds as follows: (st, CMT) ← P(param); CH ← Θ; Rsp ← P(st|sk|pk, CH); output CMT|CH|Rsp.
Now we define the security of an ID scheme. Essentially, it is desired that an attacker is unable to impersonate a prover w.r.t. an aggregated public-key, where at least one of the participating public-keys is not generated by the attacker. Later we will use this definition to convert an ID scheme into a secure multi-signature. In our definition, the adversarial prover does not have access to additional information, nor is it given any extra capability. Thus, our definition is rather weak.

Definition 5: A canonical identification scheme ID = (Setup, KeyGen, P, V_τ, Θ) with linearity and τ ∈ N is secure if it satisfies the correctness and security properties below.

Correctness. When no attack is present, Prover convinces Verifier, except for a negligible probability.

Security. For any PPT adversary A, Pr(EXP_{ID,A} = 1) is negligible, where EXP_{ID,A} is defined as follows, with pk_i ∈ PK for i ∈ [t] and pk = Σ_{i=1}^t λ_i • pk_i.

Experiment EXP_{ID,A}(λ)
  param ← Setup(1^λ);
  (pk_1, sk_1) ← KeyGen(param);
  (st_0, pk_2, · · · , pk_t) ← A(param, pk_1);
  λ_1, · · · , λ_t ← Θ;
  st_1|CMT ← A(st_0, λ_1, · · · , λ_t);
  CH ← Θ; Rsp ← A(st_1, CH);
  b ← V_t(pk, CMT|CH|Rsp);
  output b.

Prover (sk, pk|τ)                                Verifier (pk|τ)
  (st, CMT) ← P(param)
                         —— CMT ——→
                                                  CH ← Θ
                         ←—— CH ——
  Rsp ← P(st|sk|pk, CH)
                         —— Rsp ——→
                                                  V_τ(pk, CMT|CH|Rsp) ?= 1
Fig. 3. Canonical Identification Protocol

ID is said to be t*-secure if the security holds for any t ≤ t*.
1020
+ VI. FROM CANONICAL LINEAR ID SCHEME TO
1021
+ KEY-AND-SIGNATURE COMPACT
1022
+ MULTI-SIGNATURE
1023
In this section, we show how to convert a linear ID scheme into a multi-signature so that the aggregated public-key and signature are both compact. The idea is to linearly combine the member signatures (resp. public-keys) with weights, where each weight depends on all public-keys and is different for each user.
1030
+ A. Construction
1031
Let ID = (Setup_id, KeyGen_id, P, V_τ, Θ) be a canonical linear ID scheme with parameter τ ∈ N. H_0, H_1 are two random oracles from {0, 1}* to Θ with Θ ⊆ R, where R is the ring defined for the linearity property of ID. Our multi-signature scheme Π = (Setup, KeyGen, Sign, Verify) is as follows.

Setup. Sample and output param ← Setup_id(1^λ). Note: param should be part of the input to the algorithms below; for brevity, we omit it from now on.

KeyGen. Sample (pk, sk) ← KeyGen_id(param); output the public-key pk and private key sk.

Sign. Suppose that users with public-keys pk_i, i = 1, · · · , t, want to jointly sign a message M. Let λ_i = H_0(pk_i, PK) and pk = Σ_{i=1}^t λ_i • pk_i, where PK = {pk_1, · · · , pk_t}. They run the following procedure (a code sketch of the resulting sign/verify flow is given after the Verify algorithm below).

• R-1. User i takes (st_i, CMT_i) ← P(param) and sends r_i := H_0(CMT_i|pk_i) to the other users.
• R-2. Upon receiving r_j for all j (we do not restrict j ≠ i, for simplicity), user i sends CMT_i to the other users.
• R-3. Upon receiving CMT_j, j = 1, · · · , t, user i verifies that r_j = H_0(CMT_j|pk_j) for every j; if not, it aborts. Otherwise, it computes CMT = Σ_{j=1}^t λ_j • CMT_j and CH = H_1(pk|CMT|M). Finally, it computes Rsp_i = P(st_i|sk_i|pk_i, CH) and sends it to the other signers.
• Output. Upon receiving Rsp_j, j = 1, · · · , t, user i computes Rsp = Σ_{j=1}^t λ_j • Rsp_j and outputs the aggregated public-key pk|t together with the multi-signature CMT|Rsp.

Verify. Upon a signature (CMT, Rsp) on message M with the aggregated public key pk|t, output V_t(pk, CMT|CH|Rsp), where CH = H_1(pk|CMT|M).
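The following Python sketch shows the aggregation and verification logic of the compiler over an abstract linear ID scheme; the LinearID interface, the SHA-256-based stand-in for H_0/H_1 and the encoding helpers are assumptions made only for illustration, not part of the scheme.

```python
import hashlib

def H(tag: bytes, *parts: bytes) -> bytes:
    """Random-oracle stand-in: H0 = H(b'0', ...), H1 = H(b'1', ...)."""
    h = hashlib.sha256(tag)
    for p in parts:
        h.update(p)
    return h.digest()

class LinearID:
    """Placeholder interface for a canonical linear ID scheme (to be instantiated)."""
    def commit(self): raise NotImplementedError                    # -> (st, CMT)
    def respond(self, st, sk, pk, ch): raise NotImplementedError   # -> Rsp
    def verify(self, t, pk, cmt, ch, rsp): raise NotImplementedError
    def weight(self, lam, elem): raise NotImplementedError         # lam • elem
    def add(self, x, y): raise NotImplementedError                 # module addition
    def to_challenge(self, digest): raise NotImplementedError      # digest -> element of Theta
    def encode(self, elem) -> bytes: raise NotImplementedError

def aggregate(ids, elems, lams):
    acc = ids.weight(lams[0], elems[0])
    for lam, e in zip(lams[1:], elems[1:]):
        acc = ids.add(acc, ids.weight(lam, e))
    return acc

def sign(ids: LinearID, keys, msg: bytes):
    """keys: list of (sk_i, pk_i). Runs all signers locally (sketch of R-1..Output)."""
    pks = [pk for _, pk in keys]
    pk_set = b"".join(ids.encode(pk) for pk in pks)
    lams = [ids.to_challenge(H(b"0", ids.encode(pk), pk_set)) for pk in pks]
    apk = aggregate(ids, pks, lams)
    sts, cmts = zip(*(ids.commit() for _ in keys))                # R-1/R-2 (r_i exchange omitted)
    cmt = aggregate(ids, list(cmts), lams)
    ch = ids.to_challenge(H(b"1", ids.encode(apk), ids.encode(cmt), msg))
    rsps = [ids.respond(st, sk, pk, ch) for st, (sk, pk) in zip(sts, keys)]   # R-3
    return apk, (cmt, aggregate(ids, rsps, lams))                 # aggregated key and signature

def verify(ids: LinearID, t, apk, sig, msg: bytes):
    cmt, rsp = sig
    ch = ids.to_challenge(H(b"1", ids.encode(apk), ids.encode(cmt), msg))
    return ids.verify(t, apk, cmt, ch, rsp)
```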
1104
+ Remark. (1) Since pk|t is the aggregated public-
1105
+ key, we assume that it will be correctly computed
1106
+ and available to verifier, which is true for the
1107
+ Bitcoin application.
1108
+ (2) The most damaging attack to a multi-signature
1109
+ is the rogue key attack, where an attacker chooses
1110
+ his public-key after seeing other signers’ public-
1111
+ keys. By doing this, the attacker could man-
1112
+ age to reach an aggregated key for which he
1113
+ knows the private key. In our construction, attacker
1114
+ can not achieve this. Indeed, notice that pk =
1115
+
1116
+ H0(pkn, PK) • pkn + �n−1
1117
+ i=1 H0(pki, PK) • pki,
1118
+ where PK
1119
+ = {pk1, · · · , pkn}. The hash-value
1120
+ weights can be computed only after PK has been
1121
+ determined. Also, if pkn is the honest user’s key,
1122
+ then it is quite random. So, H0(pkn, PK) (hence
1123
+ H0(pkn, PK) • pkn and also pk) will be random,
1124
+ given other variables in pk. So it is unlikely that
1125
+ attacker can predetermined pk and so the rogue key
1126
+ attack can not succeed.
1127
+ B. Security Theorem
1128
+ In this section, we prove the security of our
1129
+ scheme. The idea is as follows. We notice that
1130
+ the multi-signature is (CMT, RSP) that satis-
1131
fies V_t(pk, CMT|CH|RSP) = 1, where CH = H_1(pk|CMT|M). Assume PK = {pk_1, · · · , pk_t},
1133
+ where pk1 is an honest user’s key and other keys are
1134
+ created by attacker. We want to reduce the multi-
1135
+ signature security to the security of ID scheme.
1136
+ In this case, pk will be the aggregated key with
1137
+ weights λi = H0(pki, PK). If an attacker can
1138
+ forge a multi-signature with respect to pk, we want
1139
to convert it into an impersonation attack on the ID scheme w.r.t. pk. There are two difficulties in
1141
+ this task. First, we need to simulate the signing
1142
+ oracle without sk1, where we have to compute the
1143
+ response Rsp for user of pk1 without sk1. Our idea
1144
+ is to use the simulability of the ID scheme to help:
1145
+ take a random CH and simulate an ID transcript
1146
+ CMT′|CH|Rsp′. Then, we send CMT1 = CMT′
1147
+ as the committing message. The simulation will
1148
+ be well done if we can manage to define CH as
1149
+ H1(pk|CMT|M). This will be fine if pk|CMT|M
1150
+ was never queried to H1 oracle. Fortunately, this
1151
+ is true with high probability: due to the initial
1152
+ registration message at round R-1, attacker can
1153
+ not know CMT1 before registering CMTj using
1154
+ rj (hence CMTj is known to us through oracle
1155
+ H0). Hence, CMT will have a min-entropy of
1156
+ H∞(CMT1), which is super-logarithmic and hence
1157
+ can not be guessed. That is, pk|CMT|M was
1158
+ unlikely to be queried to H1 before. Hence, the
1159
+ signing oracle will be simulated without difficulty.
1160
+ The second difficulty is how to convert the forgery
1161
+ into an impersonating attack. In the ID attack, CH
1162
+ is provided by challenger while in the forgery, CH
1163
+ is the hash value from H1. The problem is the
1164
+ attacker could make a query pk|CMT|M to H1
1165
+ oracle (we maintain) while we do not know whether
1166
+ this query is toward his final forgery output or not
1167
+ and so we do not know which CMT should be
1168
+ sent to our challenger and consequently we do not
1169
+ know which of such queries should be answered
1170
+ with our challenger’s CH. Fortunately, this is not a
1171
+ big issue as we can guess which query will be used
1172
+ for the forgery. There are a polynomial number of
1173
+ such queries. Our random guess only degrades the
1174
+ success probability by a polynomial fraction. This
1175
+ completes our idea. Now we give full details below.
1176
+ Theorem 1: Assume that h ← Θ is invertible
1177
+ in R with probability 1 − negl(λ). Let ID =
1178
+ (Setupid, KeyGenid, P, Vτ, Θ) be a secure iden-
1179
+ tification scheme with linearity and simulability.
1180
+ Then, our multi-signature scheme is EU-CMA se-
1181
+ cure.
1182
+ Proof.
1183
+ We show that if the multi-signature is
1184
+ broken by D with non-negligible probability ǫ,
1185
+ then we can construct an attacker B to break ID
1186
+ scheme with a non-negligible probability ǫ′. Given
1187
+ the challenge public-key pk∗
1188
+ 1, B needs to come
1189
+ up with some other public-keys pk∗
1190
+ 2, · · · , pk∗
1191
+ ν for
1192
+ some ν of his choice and receives a list of random
1193
+ numbers λ∗
1194
+ i ← Θ for i = 1, · · · , ν. Then, he needs
1195
+ to play as a prover in the ID protocol for public-
1196
+ key pk∗ = �ν
1197
+ i=1 λ∗
1198
+ i • pk∗
1199
+ i to convince the verifier
1200
+ (his challenger). Toward this, B will simulate an
1201
+ environment for D and use the responses from D to
1202
+ help complete his attack activity. The details follow.
1203
+ Upon receiving the challenge public-key pk∗
1204
+ 1
1205
+ and system parameter param, B samples ℓ∗
1206
+ H0 ←
1207
+ {1, · · · , q∗
1208
+ H0}, where q∗
1209
+ H0 is the upper bound on
1210
+ the number of new queries (i.e., not queried be-
1211
+ fore) of form (pk, PK) to random oracle H0 s.t.
1212
+ pk, pk∗
1213
+ 1 ∈ PK (call it a Type-I irregular query).
1214
+ In addition, a new query of format CMT|pk∗|∗ to
1215
+ oracle H1 after the ℓ∗
1216
+ H0th Type-I irregular query
1217
+ will be called a Type-II irregular query, where
1218
+ CMT ∈ CMT , pk∗ = �ν
1219
+ i=1 H0(pk∗
1220
+ i , PK∗) • pk∗
1221
+ i
1222
+ and PK∗ = {pk∗
1223
+ 1, · · · , pk∗
1224
+ ν} is the public-key set
1225
+ for the ℓ∗
1226
+ H0th Type-I irregular query. Let q∗
1227
+ ch be the
1228
+ upper bound on the number of the Type-II irregu-
1229
+
1230
+ lar queries. It then samples ℓ∗
1231
+ ch ← {1, · · · , q∗
1232
+ ch}.
1233
+ B invokes D with pk∗
1234
+ 1 and param and answers
1235
+ his random oracle queries and signing queries as
1236
+ follows.
1237
+ Random Oracle H(·).
1238
+ For simplicity, we main-
1239
+ tain one random oracle H with H0(x) = H(0, x)
1240
+ and H1(x) = H(1, x). The query x to Hb is
1241
+ automatically interpreted as query b|x to H. With
1242
+ this in mind, it maintains a hash list LH (initially
1243
+ empty), consisting of records of form (u, y), where
1244
+ y = H(u). Upon a query b|x, it first checks if
1245
+ there was a record (b|x, y) in LH for some y. If
1246
+ yes, it returns y; otherwise, there are three cases
1247
+ (all irregular queries will be in these cases as they
1248
+ are unrecorded by definition).
1249
+
1250
+ x is not a (Type-I or Type-II) irregular query
1251
+ to Hb.
1252
+ In this case, it takes y ← Θ and adds
1253
+ (b|x, y) into LH.
1254
+
1255
+ x is a Type-I irregular query to Hb (thus b =
1256
+ 0).
1257
+ In this setting, there are two cases.
1258
+ - x is not the ℓ∗
1259
+ H0th irregular query.
1260
+ In this
1261
+ case, for each pk′ ∈ PK, it takes h ← Θ
1262
+ and adds (0|(pk′, PK), h) into LH. Note for
1263
+ convenience, we treat each new record in LH
1264
+ as created due to a hash query (from either
1265
+ simulator B or D). For the technical reason,
1266
+ for given PK with pk∗
1267
+ 1
1268
+ ∈ PK, we treat
1269
+ (0|(pk∗
1270
+ 1, PK), ∗) as the last record created in
1271
+ LH among all records of (0|(pk′, PK), ∗) with
1272
+ pk′ ∈ PK. Our treatment is well-defined and
1273
+ perfectly consistent with random oracle, as
1274
+ by our convention, all records of (pk′, PK)
1275
+ with pk′, pk∗
1276
+ 1 ∈ PK will be recorded in LH
1277
+ simultaneously whenever it receives a Type-I
1278
+ irregular query (which is 0|x in our case).
1279
+ - x is the ℓ∗
1280
+ H0th irregular query.
1281
+ In this
1282
+ case, let 0|x = 0|(pk, PK∗) with PK∗ =
1283
+ {pk∗
1284
+ 1, · · · , pk∗
1285
+ ν} for some ν ≥ 2. B sends
1286
+ {pk∗
1287
+ 2, · · · , pk∗
1288
+ ν} to his challenger and re-
1289
+ ceives λ∗
1290
+ 1, · · · , λ∗
1291
+ ν
1292
+ (each of which is uni-
1293
+ formly random over Θ). Then, B inserts
1294
+ (0|(pk∗
1295
+ i , PK∗), λ∗
1296
+ i ) into LH for i = 1, · · · , ν.
1297
+ This treatment is perfectly consistent with ran-
1298
+ dom oracles: a Type-I irregular query by defi-
1299
+ nition is an unrecorded query (i.e., not queried
1300
+ before) and 0|(pk′, PK∗) for each pk′ ∈ PK∗
1301
+ will be recorded in LH within one hash query
1302
+ (thus none of them was queried before).
1303
+
1304
+ x is a Type-II irregular query to Hb (thus b =
1305
+ 1).
1306
+ In this setting, there are two cases.
1307
+ - x is not the ℓ∗
1308
+ chth Type-II irregular query. In
1309
+ this case, it takes y ← Θ and adds (1|x, y)
1310
+ into LH.
1311
+ - x is the ℓ∗
1312
+ chth Type-II irregular query.
1313
+ In
1314
+ this case, it parses x = CMT∗|pk∗|M∗ with
1315
+ CMT∗ ∈ CMT . Then, it sends CMT∗ to
1316
+ its challenger and receive CH∗. Then, it adds
1317
+ (1|x, CH∗) to LH.
1318
+ After our treatment above, x now has been recorded
1319
+ in LH. Then, the oracle returns y for (b|x, y) ∈ LH.
1320
+ Sign Os (pk1, · · · , pkn, M).
1321
+ By our security
1322
+ model, it is assumed that pk∗
1323
+ 1 = pkt for some t.
1324
+ Then, B plays the role of user pkt while D plays
1325
+ users of pkj for j ̸= t in the signing algorithm. The
1326
+ action of B is as follows.
1327
+ • R-1.
1328
+ B generates rt ← Θ and sends to other
1329
+ signers (played by D).
1330
+ • R-2.
1331
+ Upon {rj}j̸=t from D, B first is-
1332
+ sues hash queries (pki, PK) for each pki ∈
1333
+ PK to compute λi = H0(pki, PK), where
1334
+ PK = {pk1, · · · , pkn}. Then, it computes pk,
1335
+ takes h ← Θ and runs SIM(h, pk∗, param)
1336
+ to simulate an ID transcript (CMT′, h, Rsp′).
1337
+ Then,
1338
+ he
1339
+ defines
1340
+ CMTt
1341
+ =
1342
+ CMT′.
1343
+ He
1344
+ also adds (0|CMTt|pkt, rt) into LH (in case
1345
+ (0|CMTt|pkt, ∗) not in LH) and otherwise
1346
+ aborts with ⊥ (denoted by event Bad0).
1347
+ Next, for each j ̸= t, it searches a record
1348
+ (0|CMTj|pkj, rj) in LH
1349
+ for some CMTj
1350
+ which results in two cases.
1351
+ (i)
1352
+ If
1353
+ (0|CMTj|pkj, rj)
1354
+ for
1355
+ all
1356
+ j
1357
+ ̸=
1358
+ t are found in LH, it computes
1359
+ CMT = �n
1360
+ i=1 λi • CMTi and checks whether
1361
+ (1|pk|CMT|M, y) ∈ LH for some y. If this y
1362
+ does not exist, it records (1|pk|CMT|M, h)
1363
+ into LH and defines CH = h and sends CMTt
1364
+ to D; otherwise (denote this event by Bad1),
1365
+ B aborts with ⊥ .
1366
+ (ii)
1367
+ If (0|CMTj∗|pkj∗, rj∗) does not ex-
1368
+ ist in LH
1369
+ for some j∗, it sends CMTt
1370
+
1371
+ to D (normally). However, we remark that
1372
+ CMTj∗ later in Step R-3 (from j∗) satis-
1373
+ fies H0(CMTj∗|pkj∗) = rj∗ (which will be
1374
+ checked there) only negligibly (so this case
1375
+ will not raise a simulation difficulty), as the
1376
+ hash value is even undefined yet and hence
1377
+ equals rj with probability 1/|Θ| only, which
1378
+ we ignore it now.
1379
+ • R-3.
1380
+ Upon
1381
+ {CMTj}j̸=t,
1382
+ B
1383
+ checks
1384
+ if
1385
+ H0(CMTj|pkj) = rj for each j. If it does
1386
+ not hold for some j, Os outputs ⊥ (normally);
1387
+ otherwise, it sends Rspt := Rsp′ to D. We
1388
+ clarify two events: (1) some CMTj found in
1389
+ R-2(i) is different from that received in the
1390
+ current step. In this case, the check in the
1391
+ current step is consistent with a negligible
1392
+ probability only as H for two different inputs
1393
+ are independent. (2) R-2(ii) occurs to some j∗
1394
+ (so CMTj∗ is not found there) while CMTj∗
1395
+ received in the current step is consistent with
1396
+ rj. As seen above, this holds with proba-
1397
+ bility 1/|Θ| only. Ignoring these events, CH
1398
+ and {CMTj}j are determined in R-2(i) and
1399
+ {CMTj}j are consistent with those received
1400
+ in the current step.
1401
+ • Output.
1402
+ Upon Rspj for j ̸= t, it computes
1403
+ RSP = �n
1404
+ j=1 λj • Rspj. The final signature is
1405
+ (CMT, RSP) with the aggregated key pk|t.
1406
+ Finally, D outputs a forgery (α, β) for mes-
1407
+ sage M′ and public keys PK′. If α|PK′|M′ ̸=
1408
+ CMT∗|PK∗|M∗ or α|β is invalid (when verified
1409
+ using V (·)), B exits with ⊥; otherwise, he verifies
1410
+ (α, β). If invalid, he outputs ⊥; otherwise, he de-
1411
+ fines Rsp∗ = β and sends it back to his challenger.
1412
+ This completes the description of B.
1413
+ We now analyze the success probability of B.
1414
+ First, the view of D is identical to the real game,
1415
+ except for the following events.
1416
+ a. In step R-2 of Os, (CMT′, h, Rsp′) is sim-
1417
+ ulated by SIM (instead of being computed
1418
+ using sk∗
1419
+ 1). However, by hybrid reduction to
1420
+ simulability of ID, the view of D is statistical
1421
+ close from his view when this transcript is
1422
+ generated using sk∗
1423
+ 1 (with the same h).
1424
+ b. In
1425
+ step
1426
+ R-2
1427
+ of
1428
+ oracle
1429
+ Os,
1430
+ when
1431
+ (0|CMTt|pkt, y)
1432
+
1433
+ LH, Bad0 occurs for
1434
+ some y (hence the view of D is inconsistent
1435
+ if y ̸= rt). However, since CMT′ (i.e., CMTt)
1436
+ is just simulated in this oracle query and
1437
+ H∞(CMT′) = ω(log λ), CMT′ is independent
1438
+ of current records in LH. Hence, Bad0 occurs
1439
+ with
1440
+ probability
1441
+ at
1442
+ most
1443
+ Q/2H∞(CMT
1444
+ ′)
1445
+ (negligible),
1446
+ where
1447
+ Q
1448
+ is
1449
+ the
1450
+ number
1451
+ of
1452
+ records in LH. We ignore this negligible
1453
+ probability from now on.
1454
+ c. In step R-2 (i), if (1|pk|CMT|M, y) ∈ LH
1455
+ for some y, then event Bad1 occurs. In this
1456
+ case, A can not define CH = h and the
1457
+ simulation can not continue. However, since
1458
+ CMT = λt • CMT′ + �
1459
+ j̸=t λj • CMTj and
1460
+ CMT′ is simulated in the current oracle and
1461
+ hence independent of the rest variables in this
1462
+ equation. Hence, as long as λt is invertible
1463
+ (which is violated only negligibly), CMT has
1464
+ a min-entropy at least H∞(CMT) = ω(log λ).
1465
+ Thus, similar to Bad0 event, Bad1 occurs
1466
+ negligibly only.
1467
+ d. Finally, when D outputs (α, β) for mes-
1468
+ sage M′ and public-key set PK′, it has
1469
+ α|PK′|M′ ̸= CMT∗|PK∗|M∗. Since (α, β)
1470
+ has been verified, a Type-I irregular query
1471
+ (pk, PK′)
1472
+ and
1473
+ a
1474
+ Type-II irregular
1475
+ query
1476
+ α|pk′|M must have been issued (the first query
1477
+ (pk, PK′) for some pk ∈ PK′ is the Type-I ir-
1478
+ regular query while the first query of α|pk′|M
1479
+ is the Type-II query; the existence of such
1480
+ queries are guaranteed as the verification of
1481
+ (α, β) by B will certainly issue these queries).
1482
+ Since ℓ∗
1483
+ H and ℓ∗
1484
+ ch are chosen uniformly ran-
1485
+ dom, they happened to the foregoing two
1486
+ queries with probability
1487
+ 1
1488
+ q∗
1489
+ Hq∗
1490
+ ch ≥
1491
+ 1
1492
+ q0q1 , where
1493
+ q0 (resp. q1) is the upper bound on ♯ queries
1494
+ to H0 (resp. H1).
1495
+ From the analysis of (a)(b)(c), their occurrence
1496
+ changes the adversary view negligibly. Ignoring
1497
+ this, from item d, when ℓ∗
1498
+ H and ℓ∗
1499
+ ch is chosen
1500
+ correctly, the view of D is indistinguishable from its
1501
+ view in the real game. On the other hand, it is easy
1502
+ to verify that conditional on this correct choice, a
1503
+ valid forgery indicates a successful attack by B.
1504
+ Hence, B can break the ID security with probability
1505
+ at least ǫ/q0q1, non-negligible. This contradicts the
1506
+
1507
+ security of our ID scheme.
1508
+
1509
+ If the adversary always restricts the number of
1510
+ signers in the signing query and the forgery to be
1511
+ at most T, then Theorem immediately implies the
1512
+ following corollary.
1513
+ Corollary 1: Let T ≥ 2. Assume that h ← Θ
1514
+ is invertible in R with probability 1 − negl(λ).
1515
+ Let ID = (Setupid, KeyGenid, P, Vτ, Θ) be a
1516
+ T-secure identification scheme with linearity and
1517
+ simulability. Then, our multi-signature scheme is
1518
+ T-EU-CMA secure.
1519
+ VII. REALIZATIONS
1520
+ In this section, we will realize our compiler with
1521
+ ID schemes: Schnorr ID scheme and a lattice-based
1522
+ ID scheme. The first scheme is similar to Boneh
1523
+ et al. [13]. But we keep it as it is very simple
1524
+ and efficient and can demonstrate the usage of our
1525
+ compiler. The second one is new and breaks a
1526
+ barrier that the previous schemes can not overcome.
1527
+ A. Realization I: Schnorr Identification
1528
+ In this section, we apply our compiler to the well-
1529
+ known Schnorr ID scheme [48]. Toward this, we
1530
+ only need to show that it is linear with simula-
1531
+ bility and security. For clarity, we first review this
1532
+ scheme.
1533
Let q be a large prime. Consider a group of prime order q with a random generator g (e.g., the group
1535
+ on elliptic curve secp256k1 of y2 = x3 + 7 for
1536
+ Bitcoin). The Schnorr identification is depicted in
1537
+ Fig. 4. This scheme can be regarded as a realization
1538
+ of the parameterized ID scheme with parameter τ
1539
+ never used. In the following, we show that Schnorr
1540
+ ID scheme satisfies the three properties.
1541
Linearity. Notice that SK = RSP = R = Θ = Z_q and CMT = PK = ⟨g⟩. We now verify the linearity property (a small numerical check of the aggregated equation follows item iii).

i. As seen in Section II-A, Z_q and ⟨g⟩ are both Z_q-modules, where the multiplication • between R = Z_q and Z_q is the multiplication of Z_q, while • between R = Z_q and ⟨g⟩ is exponentiation: s • m = m^s. Hence, SK, PK, CMT, RSP are R-modules.

ii. Let pk_i = g^{s_i} with sk_i = s_i, i = 1, · · · , n. Let λ_1, · · · , λ_n ∈ R. Then, sk = Σ_{i=1}^n λ_i • sk_i = Σ_{i=1}^n λ_i s_i, where the addition is the group operation of SK (i.e., addition in Z_q). Note the group operation of PK is the multiplication in ⟨g⟩. Hence, pk = Σ_{i=1}^n λ_i • pk_i = Π_{i=1}^n pk_i^{λ_i} = g^{Σ_{i=1}^n λ_i s_i}. Thus, sk ∈ SK is the private key of pk ∈ PK.

iii. Let X_i|c|z_i be a transcript of ID w.r.t. pk_i = g^{s_i} and sk_i = s_i, i = 1, · · · , n. For λ_i ∈ R, X = Σ_{i=1}^n λ_i • X_i = Π_{i=1}^n X_i^{λ_i} and z = Σ_{i=1}^n λ_i • z_i = Σ_{i=1}^n λ_i z_i. If g^{z_i} = pk_i^c X_i, then Π_{i=1}^n g^{λ_i z_i} = Π_{i=1}^n (pk_i^c X_i)^{λ_i}. Hence, g^z = pk^c X, as desired.
1585
+ Let pk = gs be the public-key
1586
+ and sk = s be the private key. For c ← Zq, we
1587
+ define SIM by taking z ← Zq and X = gzpk−c.
1588
+ The simulated ID transcript is X|c|z. Obviously,
1589
+ this transcript is valid (i.e., it passes the verifica-
1590
+ tion). Now we show that for any (even unbounded)
1591
+ distinguisher D that has oracle access to Oid can
1592
+ not distinguish the output of SIM from the real
1593
+ ID transcript. Notice for both simulated and real
1594
+ transcripts X|c|z, it satisfies gz = pkcX. Hence,
1595
+ X = gx for some x ∈ Zq and z = cs + x. In
1596
+ the real transcript, x ← Zq while the simulated
1597
+ transcript z ← Zq. Hence, given c, (x, z) (hence
1598
+ X, z) in both transcripts has the same distribution.
1599
+ Since c is uniformly random in Zq in the simulation,
1600
+ the simulated and real transcripts have the same
1601
+ distribution (independent of adversary view before
1602
+ the challenge which includes the responses from
1603
+ Oid). Thus, the adversary view, given oracle access
1604
+ to Oid, in both cases has the same distribution. The
1605
+ simulability follows.
1606
+ Security.
1607
+ We now prove the security of Schnorr
1608
+ ID scheme under Definition 5.
1609
+ Lemma 2: Under discrete logarithm assumption,
1610
+ Schnorr ID scheme is secure w.r.t. Definition 5.
1611
+ Proof. If there exists an adversary D that breaks the
1612
+ Schnorr ID scheme with non-negligible probability
1613
+ ǫ, then we construct an adversary A that breaks
1614
+ discrete logarithm in ⟨g⟩ with a non-negligible
1615
+ probability ǫ′. The idea is to make use of D to con-
1616
+ struct an algorithm A for the nested forking lemma
1617
+
1618
+ Prover (s, A = gs)
1619
+ Verifier (A = gs)
1620
+ x ← Zq, X = gx
1621
+ X
1622
+
1623
+ c ← Zq
1624
+ c
1625
+
1626
+ z = sc + x mod q
1627
+ z
1628
+
1629
+ gz ?= AcX
1630
+ Fig. 4. Schnorr Identification Scheme
1631
+ and then use the output of the forking algorithm
1632
+ to derive the discrete logarithm for the challenge.
1633
+ Upon a challenge A1 = gx and parameters q, g,
1634
+ A constructs A((A1, g, q), λ1, c; ρ) as follows (so
1635
+ h1|h2 = λ1|c with q = 2 in the forking algorithm),
1636
+ where A = �t
1637
+ i=1 Aλi
1638
+ i .
1639
+ Algorithm A((A1, g, q), λ1, c; ρ)
1640
+ Parse ρ as two parts: ρ = ρ0|ρ1
1641
+ (st0, A2, · · · , At) ← D(q, g, A1; ρ0)
1642
+ λ2, · · · , λt ← Zq using randomness ρ1
1643
+ st1|X ← D(st0, λ1, · · · , λt);
1644
+ z ← D(st1, c);
1645
+ If gz = A
1646
+ c · X, then b = 1;
1647
+ else b = 0;
1648
+ output (b, 2b, {Ai|λi}t
1649
+ 1|X|z|c|g|q).
1650
+ From the description of A and the forking algorithm
1651
+ FA (for the forking lemma), the rewinding in the
1652
+ forking algorithm FA only changes λ1 and/or c as
1653
+ well as those affected by (λ1, c). In terms of forking
1654
+ lemma terminology, we have (h1, h2) = (λ1, c)
1655
+ and I0 = 1, J0 = 2 (for a successful execution;
1656
+ otherwise, A will abort when I0 ≤ J0). Let us now
1657
+ analyze algorithm forking algorithm FA. When four
1658
+ executions are executed successfully (i.e., b = 1 for
1659
+ all cases), then the output for each execution will be
1660
+ described as follows. Let Ai = gai for i = 1, · · · , t.
1661
+ - Execution
1662
+ 0.
1663
+ It
1664
+ outputs
1665
+ (1, 2, {Ai|λi}t
1666
+ 1|X|z|c|g|q). As the verification
1667
+ passes,
1668
+ z = (
1669
+ t
1670
+
1671
+ i=1
1672
+ λiai)c + x,
1673
+ (20)
1674
+ where X = gx.
1675
+ - Execution 1. Compared with execution 0, the
1676
+ input only changes c to ˆc. From the code of
1677
+ A, the output is (1, 2, {Ai|λi}t
1678
+ 1|X|ˆz|ˆc|g|q). As
1679
+ the verification passes,
1680
+ ˆz = (
1681
+ t
1682
+
1683
+ i=1
1684
+ λiai)ˆc + x.
1685
+ (21)
1686
+ - Execution 2.
1687
+ Compared with execution 0,
1688
+ the
1689
+ input
1690
+ changes
1691
+ λ1
1692
+ to
1693
+ ¯λ1
1694
+ and
1695
+ c
1696
+ to
1697
+ ¯c.
1698
+ From
1699
+ the
1700
+ code
1701
+ of
1702
+ A,
1703
+ the
1704
+ output
1705
+ is
1706
+ (1, 2, {Ai|λi}t
1707
+ 2|A1|¯λ1|X′|¯z|¯c|g|q). As the ver-
1708
+ ification passes,
1709
+ ¯z = (¯λ1a1 +
1710
+ t
1711
+
1712
+ i=2
1713
+ λiai)¯c + x′,
1714
+ (22)
1715
+ where X′ = gx′.
1716
+ - Execution 3.
1717
+ Compared with execution 0,
1718
+ the
1719
+ input
1720
+ changes
1721
+ λ1
1722
+ to
1723
+ ¯λ1
1724
+ and
1725
+ c
1726
+ to
1727
+ c.
1728
+ From
1729
+ the
1730
+ code
1731
+ of
1732
+ A,
1733
+ the
1734
+ output
1735
+ is
1736
+ (1, 2, {Ai|λi}t
1737
+ 2|A1|¯λ1|X′|z|c|g|q). As the ver-
1738
+ ification passes,
1739
+ z = (¯λ1a1 +
1740
+ t
1741
+
1742
+ i=2
1743
+ λiai)c + x′.
1744
+ (23)
1745
+ From Eqs. (23)(22), A can derive ¯λ1a1+�t
1746
+ i=2 λiai,
1747
+ as long as c ̸= c′ in Zq. Similarly, from Eqs.
1748
+ (21)(20), A can derive λ1a1 + �t
1749
+ i=2 λiai, as long
1750
+ as ¯c ̸= c. This can further give a1, as long as
1751
+ λ1 ̸= ¯λ1 in Zq. Finally, if the forking algorithm
1752
+ does not fail, then the four executions succeeds
1753
+ and (c ̸= c′) ∧ (¯c ̸= c) ∧ (λ1 ̸= ¯λ1) =True. By
1754
+ forking lemma, it does not fail with probability at
1755
+ least ǫ4/(1 · 1) − 3/|Θ| = ǫ4 − 3/q. Hence, A can
1756
+ obtain a1 with probability at least ǫ4 − 3/q, non-
1757
+ negligible. This contradicts to the discrete logarithm
1758
+ assumption.
1759
+
1760
Key-and-Signature Compact Multi-Signature from Schnorr ID Scheme. Since the Schnorr ID scheme satisfies linearity, simulability and security, applying our compiler to it yields a multi-signature scheme. For clarity, we give the complete scheme in the following.
Let pk_i = g^{s_i} be the public-key with private key sk_i = s_i for i = 1, · · · , n. When users PK = {pk_1, · · · , pk_n} want to jointly sign a message M, they act as follows.

• R-1. User i generates X_i = g^{x_i} for x_i ← Z_q and sends r_i := H_0(X_i|pk_i) to the other users.
• R-2. Upon {r_j}_{j=1}^n, user i sends X_i to the other users.
• R-3. Upon {X_j}_{j=1}^n, user i checks r_j ?= H_0(X_j|pk_j) for all j. If not, he rejects; otherwise, he computes

  pk = Π_{i=1}^n pk_i^{H_0(pk_i, PK)},   (24)
  X = Π_{i=1}^n X_i^{H_0(pk_i, PK)}.   (25)

  Then, he computes

  c = H_1(pk|X|M), z_i = s_i c + x_i mod q   (26)

  and sends z_i to the leader.
• Output. Upon receiving all z_j's, the leader computes

  z = Σ_{j=1}^n H_0(pk_j, PK) z_j mod q.

  Finally, it outputs (X, z) as the multi-signature of M with the aggregated public-key pk (note: the compiler protocol includes n in the aggregated key; we omit it here as it is not used in the verification).
• Verification. To verify a signature (X, z) for M with the aggregated public-key pk, compute c = H_1(pk|X|M) and accept only if g^z = pk^c · X.
1824
+ MultiSig. Notice that c ← Zq is invertible in R
1825
+ with probability 1 − 1/q. As it satisfies linearity,
1826
+ simulability and security, by Theorem 1, we have
1827
+ the following.
1828
+ Corollary 2: If Discrete logarithm assumption in
1829
+ ⟨g⟩ holds, then Schnorr-MultiSig is EU-CMA.
1830
+ Remark. Boneh et al. [13] proposed a method
1831
+ that transforms Schnorr ID to a key-and-signature
1832
+ compact multi-signature. Their protocol is an im-
1833
+ provement of Maxwell et al. [38] to overcome
1834
+ a simulation flaw. Their protocol is also 3-round
1835
+ but computationally more efficient in the signing
1836
+ process than ours. However, our sizes of aggregated
1837
+ (public-key, signature) as well as the verification
1838
+ cost are all the same as theirs (also identical to
1839
+ the original Schnorr signature case). Aggregated
1840
+ public-key and signature have impacts on the stor-
1841
+ age at a large number of blockchain nodes and
1842
+ the verification cost has the impact on the power
1843
+ consumption on these nodes. The signing cost is
1844
+ relatively not so important as it only has impact on
1845
the involved signers. Boneh et al. [13] use λ_i s_i as the secret for public-key pk_i^{λ_i} to generate a member signature X_i|c|z_i, and the final multi-signature is \widetilde{X} = Π_i X_i and \widetilde{z} = Σ_i z_i. Their main saving (over us) is to avoid the n exponentiations in computing our X. One might be tempted to modify our general compiler so that it uses λ_i • pk_i (whose private key is λ_i • sk_i) to generate a member signature CMT_i|Rsp_i, so that the final multi-signature is \widetilde{CMT}|\widetilde{Rsp} with \widetilde{CMT} = Σ_i CMT_i and \widetilde{Rsp} = Σ_i Rsp_i. However, this seemingly secure scheme has a simulation issue in general when we prove Theorem 1: it is required that {SIM(CH, λ • pk)}_λ be indistinguishable from the list of real transcripts for a fixed but random pk, and it is not clear how this can be proven in general.
1872
+ B. Realization II: a new lattice-based ID scheme
1873
+ In this section, we propose a new ID scheme
1874
+ from lattice and then apply our compiler to obtain
1875
+ a lattice-based multi-signature scheme. This is the
1876
+ first lattice-based multi-signature that has both a
1877
+ compact public-key and a compact signature with-
1878
+ out a restart during the signing process.
1879
Notations. The following notations are specific to this section (see Section II for more).
• As a convention for lattices over rings, this section uses security parameter n (a power of 2), instead of λ;
• q is a prime with q ≡ 3 mod 8;
• R = Z[x]/(x^n + 1); R_q = Z_q[x]/(x^n + 1); R*_q is the set of invertible elements in R_q;
• for a vector w, we implicitly assume it is a column vector and its i-th component is w_i or w[i];
• for a matrix or vector X, X^T is its transpose;
• 1 denotes the all-one vector (1, · · · , 1)^T of dimension clear from the context;
• for u = Σ_{i=0}^{n−1} u_i x^i ∈ R, ||u||_∞ = max_i |u_i|;
• α ∈ Z_q always uses the default representative with −(q−1)/2 ≤ α ≤ (q−1)/2 and, similarly, each coefficient of u ∈ R_q by default belongs to this range;
• e = 2.71828 · · · is Euler's number;
• C = {c ∈ R | ||c||_∞ ≤ log n, deg(c) < n/2};
• Y = {y ∈ R | ||y||_∞ ≤ n^{1.5} σ log³ n};
• Z = {z ∈ R | ||z||_∞ ≤ (n − 1) n^{1/2} σ log³ n}.
1906
+ 1) Ring-LWE and Ring-SIS: In this section, we
1907
introduce the ring-LWE and ring-SIS assumptions
1908
+ (see [35], [45], [33] for details). For σ > 0, distri-
1909
+ bution DZn,σ assigns the probability proportional to
1910
+ e−π||y||2/σ2 for any y ∈ Zn and 0 for other cases.
1911
+ As in [1], y ← DR,σ samples y = �n−1
1912
+ i=0 yixi from
1913
+ R with yi ← DZ,σ.
1914
+ The
1915
+ Ring
1916
+ Learning
1917
+ With
1918
+ Error
1919
+ (Ring-
1920
+ LWEq,σ,2n) problem over R with standard deviation
1921
+ σ is defined as follows. Initially, it takes s ← DR,σ
1922
+ as secret. It then takes a ← Rq, e ← DR,σ and
1923
+ outputs (a, as + e). The problem is to distinguish
1924
+ (a, as + e) from a tuple (a, b) for a, b ← Rq. The
1925
+ Ring-LWEq,σ,2n assumption is to say that no PPT
1926
+ algorithm can solve Ring-LWEq,σ,2n problem with
1927
+ a non-negligible advantage. According to [34],
1928
+ [18], ring-LWE assumption with σ =
1929
+ ˜Ω(n3/4)
1930
+ is provably hard and so it is safe to assume
1931
+ σ = Ω(n).
1932
+ The Small Integer Solution problem with pa-
1933
+ rameters q, m, β over ring R (ring-SISq,m,β) is
1934
+ as follows: given m uniformly random elements
1935
+ a1, · · · , am over Rq, find (t1, · · · , tm) so that
1936
+ ||ti||∞ ≤ β and a1t1 + · · · + amtm = 0 (note:
1937
+ here we use || · ||∞ norm while the literature
1938
+ regularly uses square-root norm || · ||. However, the
1939
+ gap is only a factor n on β and does not affect
1940
+ the validity of the assumption according to the
1941
+ current research status for ring-SIS). We consider
1942
+ the case m = 3. As we use q = 3 mod 8, by
1943
+ [9, Theorem 1], xn + 1 = Φ1(x)Φ2(x) for irre-
1944
+ ducible polynomials Φ1(x), Φ2(x) of degree n/2.
1945
+ So by Chinese remainder theorem, ai is invertible,
1946
+ except for probability 2q−n/2. Hence, ring-SIS is
1947
+ equivalent to the case of invertible a2 which is
1948
+ further equivalent to problem a1t1 + t2 + a3t3 = 0,
1949
+ as we can multiply it by a−1
1950
+ 2 . By [33], [15], the
1951
+ best quantum polynomial algorithm for ring-SIS
1952
+ problem with q, m can only solve β = 2 ˜O(√n) case.
1953
+ Thus, it is safe to assume Ring-SISq,m,β for any
1954
+ polynomial β or even β = 2
1955
+ 4√n.
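Since every operation in the scheme below takes place in R_q = Z_q[x]/(x^n + 1), here is a minimal Python sketch of that ring arithmetic and of forming a public key u = a·s_1 + s_2 (schoolbook multiplication and uniform small coefficients are simplifying assumptions; toy sizes with q ≡ 3 mod 8, whereas a real implementation would use much larger n, an NTT and proper Gaussian sampling).

```python
import random

def poly_add(a, b, q):
    return [(x + y) % q for x, y in zip(a, b)]

def poly_mul(a, b, q):
    """Multiply in Z_q[x]/(x^n + 1): x^n wraps around to -1."""
    n = len(a)
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % q
            else:
                res[k - n] = (res[k - n] - ai * bj) % q   # wrap with sign flip
    return res

def small_poly(n, bound, rng):
    """Uniform polynomial with coefficients in [-bound, bound] (stand-in for D_{R,sigma} / Y)."""
    return [rng.randrange(-bound, bound + 1) for _ in range(n)]

# toy public key u = a*s1 + s2 in R_q
n, q = 8, 11                       # q = 11 satisfies q ≡ 3 (mod 8); sizes are toy values
rng = random.Random(0)
a = [rng.randrange(q) for _ in range(n)]
s1, s2 = small_poly(n, 1, rng), small_poly(n, 1, rng)
u = poly_add(poly_mul(a, s1, q), s2, q)
```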
1956
2) Construction: We now describe our new ID scheme over the ring R. Initially, take s_1, s_2 ← D_{R,σ}, a ← R*_q and compute u = a s_1 + s_2. The system parameter is a; the public key is u and the private key is (s_1, s_2). Our ID scheme is as follows; also see Fig. 5.

1. Prover generates y_1, y_2 ← Y^µ, computes v = a y_1 + y_2 and sends v to Verifier, where µ ≥ log² n.
2. Verifier samples c ← C and sends it to Prover.
3. Upon c, Prover does the following:
   a. Compute z_1 = s_1 c · 1 + y_1 and z_2 = s_2 c · 1 + y_2;
   b. Let A = {j | z_{1j}, z_{2j} ∈ Z}. If A = ∅, abort; otherwise, take j* ← A and compute
      z_1 = z_{1j*} + Σ_{j≠j*} y_{1j},  z_2 = z_{2j*} + Σ_{j≠j*} y_{2j}.
4. Upon z_1, z_2, Verifier checks
      Σ_{i=1}^µ v_i ?= a z_1 + z_2 − u c,  ||z_b||_∞ ?≤ η_t for b = 1, 2,
   where η_t = 5σn²√(tµ) log⁶ n and t is a positive integer (see the remark below). If all checks pass, it accepts; otherwise, it rejects.
1992
+ Remark. We give two clarifications.
1993
+ (1)
1994
+ The correctness does not need ηt to vary with
1995
+ t as it is defined so. Actually, η1 = 3σn1.5√µ log4 n
1996
+ suffices for all t. However, we need the dependency
1997
+ on t for the linearity and later for the multi-
1998
+ signature. Especially, for linearity with t transcripts,
1999
+ ηt is needed to depend on t. Further, for the
2000
+
2001
+ multi-signature scenario, t stands for the number
2002
+ of signers.
2003
+ (2)
2004
+ It should be pointed out that the choice of j∗
2005
+ (if it exists) does not affect z1, z2 at all as zb = sbc+
2006
+ �t
2007
+ j=1 ybj for b = 1, 2. In addition, the probability
2008
+ that j∗ does not exist is exponentially small in n and
2009
+ so defining j∗ is unnecessary. However, we keep it
2010
+ for ease of analysis later.
2011
+ Correctness. We now prove the correctness with ηt
2012
+ replaced by a smaller value η1 = 3σn1.5√µ log4 n.
2013
+ When all signers are honest, the protocol is easily
2014
+ seen to be correct if we can show A = ∅ or
2015
+ ||zb||∞ > η1 has a negligible probability. The
2016
+ former is shown in Lemma 5 below. For the latter,
2017
+ notice that z1 = s1c + y11 + · · · + y1µ. If we
2018
+ use w ∈ R to denote the coefficient vector of the
2019
+ polynomial w. Then,
2020
+ y11 + · · · + y1µ = y11 + · · · + y1µ.
2021
+ (27)
2022
+ Notice each component of y1j is uniformly random
2023
+ in {−σn1.5 log3 n, · · · , σn1.5 log3 n}. By Hoeffd-
2024
+ ing inequality on each of the vector component
2025
+ in Eq. (27), || �
2026
+ i y1i||∞ > 2σn1.5√µ log4 n only
2027
+ has a probability at most 2ne− log2 n. By Lemma
2028
+ 3 below, ||sc||∞ > σn1/2 log3 n with probability
2029
+ at most e−Ω(log2 n). Hence, correctness holds for
2030
+ bound η1, except for probability at most e−Ω(log2 n)
2031
+ (note: for brevity, this quantity should be under-
2032
+ stood as there exists constant C so that the excep-
2033
+ tion probability is at most e−C log2 n; we will later
2034
+ keep this convention without a mention).
2035
+ 3) Analysis: In this section, we analyze our
2036
+ ID scheme. We start with some preparations. The
2037
+ following lemma is adapted from [1, Lemma 4],
2038
+ where our restriction that the element c of C has a
2039
+ degree at most n/2, does not affect the proof.
2040
+ Lemma 3: [1] If s ← DR,σ and c ← C, then
2041
+ Pr(||sc||∞ ≤ σn1/2 log3 n) ≥ 1 − e−Ω(log2 n),
2042
+ where e = 2.71828 · · · is the Euler’s number
2043
+ The lemma below was in the proof of [1, Lemma
2044
+ 3].
2045
+ Lemma 4: [1] Fix γ
2046
+
2047
+ R with ||γ||∞
2048
+
2049
+ σn1/2 log3 n. Then, for y ← Y, we have
2050
+ Pr(γ + y ∈ Z) ≥ 1
2051
+ e − 1
2052
+ en
2053
+ Pr(γ + y = g | γ + y ∈ Z) =
2054
+ 1
2055
+ |Z|, ∀g ∈ Z.
2056
+ Lemma 5: Let A be the index set in our ID
2057
+ scheme. Then, Pr(A
2058
+ =
2059
+ ∅)
2060
+ <
2061
+ e−Ω(log2 n) for
2062
+ µ ≥ Ω(log2 n).
2063
+ Proof.
2064
+ Notice zbj = sc + ybj for b = 1, 2. By
2065
+ Lemma 3, ||sc||∞ ≤ σn1/2 log3 n with probabil-
2066
+ ity 1 − e−Ω(log2 n). Fixing sc (that satisfies this
2067
+ condition), zbj for b = 1, 2, j = 1, · · · , µ are
2068
+ independent and thus by Lemma 4, A = ∅ with
2069
+ probability at most (1− 1
2070
+ e2 (1− 1
2071
+ n)2)µ < (1−
2072
+ 1
2073
+ 4e2 )µ,
2074
+ exponentially small. Together with the probability
2075
+ for ||sc||∞ ≤ σn1/2 log3 n, we conclude the lemma.
2076
+
2077
+ Lemma 6: If u ← C, then u is invertible in Rq
2078
+ with probability 1 − (1 + 2 log n)−n/2.
2079
+ Proof. Recall that q ≡ 3 mod 8 in this section. By
2080
+ Blake et al. [9, Theorem 1], xn + 1 = Φ1(x)Φ2(x)
2081
+ mod q, where Φ1(x), Φ2(x) have degree n/2 and
2082
+ are irreducible over Zq. By Chinese remainder
2083
+ theorem, u is invertible in Rq if and only if it is
2084
+ non-zero mod Φb(x) for both b = 1, 2. Since u has
2085
+ a degree at most n/2 − 1, u remains unchanged
2086
+ after mod Φb(x). Hence, it is invertible in Rq if
2087
+ and only if u is non-zero. This has a probability
2088
+ 1 − (1 + 2 log n)−n/2.
2089
+
2090
+ Simulability. We now show the simulability of our
2091
+ ID scheme. Given the public-key u and c ← C, we
2092
+ define the simulator SIM as follows.
2093
+ - Sample j∗ ← [µ] and z1j∗, z2j∗ ← Z; compute
2094
+ vj∗ = az1j∗ + z2j∗ − uc;
2095
+ - For j ∈ [µ] − {j∗}, sample y1j, y2j ← Y and
2096
+ compute vj = ay1j + y2j.
2097
+ - Compute zb = zbj∗ + �
2098
+ j̸=j∗ ybj, b = 1, 2.
2099
+ - Output v = (v1, · · · , vµ)T and z1, z2.
2100
+ This simulation is valid by the following lemma.
2101
Lemma 7: The output of SIM is statistically close to the real transcript, even if the distinguisher has oracle access to O((s_1, s_2), u), where (s_1, s_2) ← D²_{R,σ} is the private key and u = a s_1 + s_2 is the public-key.

Prover ((s_1, s_2), u|t):  y_1, y_2 ← Y^µ; v = a y_1 + y_2; send v.
Verifier (u|t):  c ← C; send c.
Prover:  for b = 1, 2: z_b = s_b c · 1 + y_b; A = {j | z_{1j}, z_{2j} ∈ Z}; j* ← A; for b = 1, 2: z_b = z_{bj*} + Σ_{j≠j*} y_{bj}; send z_1, z_2.
Verifier:  check, for b = 1, 2, ||z_b||_∞ < η_t, and Σ_{j=1}^µ v_j ?= a z_1 + z_2 − u c.
Fig. 5. Our Lattice-based ID Scheme (Note: the membership check c ∈ C at Prover is important but omitted in the figure; 1 is the all-one vector of length µ.)
2130
+ Proof. First, we can assume A ≠ ∅ for the real transcript, as by Lemma 5 this fails only with negligible probability. Then, by symmetry, j* for the real transcript is uniformly random over {1, · · · , µ}. By the definition of j*, we know that z_{1j*}, z_{2j*} both belong to Z. In this case, by Lemma 4, sc + y_{1j*} and sc + y_{2j*} for the real transcript, with given sc satisfying ||sc||_∞ < σn^{1/2} log^3 n, are independent and uniformly random over Z. By Lemma 3, we conclude that z_{1j*} and z_{2j*} are statistically close to uniform over Z if they belong to Z. On the other hand, when z_{1j*} and z_{2j*} are given, v_{j*} is fixed as v_{j*} = a z_{1j*} + z_{2j*} − uc. Thus, our simulation of z_{1j*}, z_{2j*}, v_{j*} is statistically close to that in the real transcript. Further, our simulation of y_{1j}, y_{2j}, v_j for j ≠ j* follows exactly the real distribution. Thus, our simulation is statistically close to the real transcript. This closeness holds even given the adversary's view, which includes the responses from O_id. Hence, the simulability follows.
2152
+
2153
+ Security. Now we prove the security of our ID scheme, where the attacker needs to generate z_1, z_2 (given challenge c) that pass the verification w.r.t. an aggregated public-key u. We show that this is unlikely under the ring-SIS assumption.
+ Lemma 8: Under the ring-LWE_{q,σ,2n} and ring-SIS_{3,q,β_{t*}} assumptions, our scheme is t*-secure (with respect to Definition 5), where β_{t*} = 16η_{t*}√n log^2 n and σ = Ω(n).
2165
+ Proof. If there exists an adversary D that breaks our ring-based ID scheme with non-negligible probability ε, then we construct an adversary A that breaks the ring-SIS assumption with non-negligible probability ε′. The idea is to use D to construct an algorithm A for the nested forking lemma, and then use the output of the forking algorithm to obtain a solution to the ring-SIS problem. Upon a challenge u_1 and a (both uniform over R_q), A needs to find short α_1, α_2, α_3 ∈ R so that a α_1 + α_2 + u_1 α_3 = 0. Toward this, A constructs an algorithm A((u_1, a), λ_1, c; ρ) as follows (so q = 2 in the forking algorithm), where λ_i, c ← C and u = Σ_{i=1}^t λ_i · u_i with u_i ∈ R_q (in the description of A).
+ Algorithm A((u_1, a), λ_1, c; ρ)
+   Parse ρ as two parts: ρ = ρ_0|ρ_1
+   (st_0, u_2, · · · , u_t) ← D(u_1, a; ρ_0)
+   λ_2, · · · , λ_t ← C using randomness ρ_1
+   st_1|v ← D(st_0, λ_1, · · · , λ_t);
+   (z_1, z_2) ← D(st_1, c);
+   If ||z_b||_∞ < η_t and Σ_{j=1}^µ v_j = a z_1 + z_2 − uc, then b = 1; else b = 0;
+   Output (b, 2b, {u_i|λ_i}_1^t |v|z_1|z_2|c|a).
2196
+ From the description of A and the forking algorithm F_A (for the forking lemma), the rewinding in F_A only updates λ_1 and/or c, as well as the variables affected by (λ_1, c). In terms of the forking lemma terminology, we have (h_1, h_2) = (λ_1, c) and I_0 = 1, J_0 = 2 (for a successful execution; otherwise, A will abort when I_0 ≤ J_0). Let us now analyze the forking algorithm F_A. When all four executions succeed (i.e., b = 1 in all cases), the output of each execution is as follows.
+ - Execution 0. It outputs (1, 2, {u_i|λ_i}_1^t |v|z_1|z_2|c|a). Since it succeeds, ||z_b||_∞ ≤ η_t (b = 1, 2) and
+ Σ_{i=1}^µ v_i = a z_1 + z_2 − uc.   (28)
+ - Execution 1. Compared with execution 0, the input only changes c to ĉ. From the code of A, the output is (1, 2, {u_i|λ_i}_1^t |v|ẑ_1|ẑ_2|ĉ|a). Since it succeeds, ||ẑ_b||_∞ ≤ η_t (b = 1, 2) and
+ Σ_{i=1}^µ v_i = a ẑ_1 + ẑ_2 − uĉ.   (29)
+ - Execution 2. Compared with execution 0, the input changes λ_1 to λ̄_1 and changes c to c̄. From the code of A, the output is (1, 2, {u_i|λ_i}_2^t |u_1|λ̄_1|v′|z̄_1|z̄_2|c̄|a). Since it succeeds, ||z̄_b||_∞ ≤ η_t (b = 1, 2) and
+ Σ_{i=1}^µ v′_i = a z̄_1 + z̄_2 − u′c̄,   (30)
+ where u′ = λ̄_1 u_1 + Σ_{i=2}^t λ_i u_i.
+ - Execution 3. Compared with execution 0, the input changes λ_1 to λ̄_1 and changes c to c̃. From the code of A, the output is (1, 2, {u_i|λ_i}_2^t |u_1|λ̄_1|v′|z̃_1|z̃_2|c̃|a). Since it succeeds, ||z̃_b||_∞ ≤ η_t (b = 1, 2) and
+ Σ_{i=1}^µ v′_i = a z̃_1 + z̃_2 − u′c̃.   (31)
2258
+ From Eqs. (31)(30), A can derive
+ a(z̃_1 − z̄_1) + (z̃_2 − z̄_2) − u′(c̃ − c̄) = 0.   (32)
+ From Eqs. (29)(28),
+ a(ẑ_1 − z_1) + (ẑ_2 − z_2) − u(ĉ − c) = 0.   (33)
+ Notice that Eq. (32)×(ĉ − c) − Eq. (33)×(c̃ − c̄) gives
+ a α_1 + α_2 − u_1 α_3 = 0,   (34)
+ where
+ α_1 = (z̃_1 − z̄_1)(ĉ − c) − (ẑ_1 − z_1)(c̃ − c̄),   (35)
+ α_2 = (z̃_2 − z̄_2)(ĉ − c) − (ẑ_2 − z_2)(c̃ − c̄),   (36)
+ α_3 = (λ_1 − λ̄_1)(ĉ − c)(c̃ − c̄).   (37)
2274
+ Hence, (α_1, α_2, −α_3) forms a solution to the ring-SIS problem with parameter (a, 1, u_1). It suffices to verify that each α_i is short and also that at least one of them is non-zero. For the second condition, it suffices to make sure that the probability of α_3 = 0 is small. Notice that by the Chinese remainder theorem, α_3 = 0 implies λ_1 = λ̄_1 mod Φ_1(x), or c̃ = c̄ mod Φ_1(x), or ĉ = c mod Φ_1(x). Similarly, this must also hold modulo Φ_2(x), but it suffices to consider Φ_1(x) only. Since λ_1, λ̄_1, c, c̄, c̃, ĉ are uniformly random over C, each of these equalities holds with probability only (1 + 2 log n)^{−n/2}, and hence Pr(α_3 = 0) ≤ 3(1 + 2 log n)^{−n/2}, which is negligible. For the first condition, notice that ||ĉ − c||_∞ ≤ 2 log n and ||z̃_1 − z̄_1||_∞ ≤ 2η_t. Further, the constant term of (ĉ − c)(z̃_1 − z̄_1) is
+ (ĉ − c)[0]·(z̃_1 − z̄_1)[0] − Σ_{k=1}^{n/2−1} (ĉ − c)[k]·(z̃_1 − z̄_1)[n − k],
+ which, by Hoeffding's inequality, has absolute value at most √(n/2) log n · 8η_t log n ≤ 8η_t √n log^2 n with probability at least 1 − e^{−Ω(log^2 n)}. The constant term of (ẑ_1 − z_1)(c̃ − c̄) is similar. Hence, |α_1[0]| ≤ 16η_t √n log^2 n with probability at least 1 − e^{−Ω(log^2 n)}. The general case of α_1[i] is similar. Hence, ||α_1||_∞ ≤ 16η_t √n log^2 n with probability 1 − e^{−Ω(log^2 n)}. Similarly, ||α_2||_∞ has the same property. We can use the above proof technique to show that ||(ĉ − c)(c̃ − c̄)||_∞ ≤ 8 log n · √n log^2 n with probability 1 − e^{−Ω(log^2 n)}. Since λ_1, λ̄_1 are uniformly random over C, using the same technique, we have ||α_3||_∞ ≤ √n log n · 32√n log^4 n = 32n log^5 n with probability 1 − e^{−Ω(log^2 n)}. Thus, we find a ring-SIS solution (α_1, α_2, −α_3) of length at most 16η_t √n log^2 n. Assume that the probability that D succeeds in one execution is ε̂. Then, by the forking lemma, it succeeds in all four executions with probability ε̂^4 − 3(1 + 2 log n)^{−n/2}. This implies that A breaks the ring-SIS assumption with probability at least ε̂^4 − 3(1 + 2 log n)^{−n/2} − e^{−Ω(log^2 n)}.
+
+ Finally, notice that the input u_1 is uniformly random over R_q, while in our ID scheme u_1 = a s_1 + s_2 for s_1, s_2 ← D_{R,σ}. However, under the ring-LWE assumption, it is immediate that ε̂ ≥ ε − negl(n). Hence, A succeeds with probability at least ε^4 − negl(n), which contradicts the ring-SIS assumption.
2338
+
2339
+ Linearity. Let SK = RSP = (R_q, R_q), CMT = R_q^µ, PK = R_q, R = R_q. We now verify the linearity.
+ i. Obviously, SK is an R-module under the operation •: for (s_1, s_2) ∈ SK and c ∈ R, c • (s_1, s_2) = (c s_1, c s_2), where c s_1 and c s_2 are multiplications in R_q. The other cases are similar.
+ ii. If (s_{1i}, s_{2i}) ∈ SK and λ_i ∈ C for i = 1, · · · , t, then Σ_{i=1}^t (λ_i s_{1i}, λ_i s_{2i}) = (Σ_{i=1}^t λ_i s_{1i}, Σ_{i=1}^t λ_i s_{2i}) is obviously the private key of Σ_{i=1}^t λ_i · (a s_{1i} + s_{2i}) = a(Σ_{i=1}^t λ_i s_{1i}) + (Σ_{i=1}^t λ_i s_{2i}). However, we emphasize that this key is not necessarily short. But for randomly generated (pk_i, sk_i, λ_i)'s, Lemma 9 implicitly implies that the aggregated private key has length at most 2√(nt) σ log^3 n (except with probability e^{−Ω(log^2 n)}); see max_v |S_v|, with |S_v| given in the proof of Lemma 9.
+ iii. If {(v_i, c, z_{1i}, z_{2i})}_{i=1}^t are honestly generated accepting transcripts w.r.t. the honestly generated public/private key pairs {(u_i, (s_{1i}, s_{2i}))}_i, then
+ Σ_{j=1}^µ v_{ij} = a z_{1i} + z_{2i} − u_i c.   (38)
+ Together with Lemma 9 below, for h_1, · · · , h_t ← C, the tuple (Σ_{i=1}^t h_i v_i, c, Σ_{i=1}^t h_i z_{1i}, Σ_{i=1}^t h_i z_{2i}) satisfies (except with probability e^{−Ω(log^2 n)})
+ ||Σ_{i=1}^t h_i z_{1i}||_∞ ≤ η_t,   ||Σ_{i=1}^t h_i z_{2i}||_∞ ≤ η_t,
+ Σ_{j=1}^µ (Σ_{i=1}^t h_i v_{ij}) = a(Σ_{i=1}^t h_i z_{1i}) + (Σ_{i=1}^t h_i z_{2i}) − (Σ_{i=1}^t h_i u_i) c,
+ where η_t = 5σn^2 √(tµ) log^6 n. The linearity follows.
2447
+ Lemma 9: Fix integer t ≥ 2 and σ ≥ ω(log n). Assume s_i ← D_{R,σ}, h_i ← C, y_{ij} ← Y for i ∈ [t], j ∈ [µ], and c ← C. Let
+ Z = Σ_{i=1}^t h_i (s_i c + Σ_{j=1}^µ y_{ij}).   (39)
+ Then, ||Z||_∞ ≤ η_t with probability 1 − e^{−Ω(log^2 n)}.
+ Proof. Notice that
+ Z[0] = Σ_{v=0}^{n−1} S_v · c[v] − Σ_{i=1}^t Σ_{k=0}^{n−1} h_i[n − k] · Y_{ik},
+ where Y_{ik} = Σ_{j=1}^µ y_{ij}[k], h_i[n] := −h_i[0], and
+ S_v = Σ_{i=1}^t Σ_{k=0}^{n−1} h_i[n − k] s_i[k − v].
+ By [24, Lemma 4.2], ||s_i||_∞ ≤ σ log n, except with probability e^{−Ω(log^2 n)}. When this is satisfied, the terms h_i[n − k] s_i[k − v] in S_v are independent random variables in the range [−σ log^2 n, σ log^2 n]. By Hoeffding's inequality, |S_v| ≤ 2√(nt) σ log^3 n, except with probability e^{−Ω(log^2 n)}. Since y_{ij}[k] is uniformly random over [−σn^{1.5} log^3 n, σn^{1.5} log^3 n], by Hoeffding's inequality, |Y_{ik}| ≤ 2σ√µ n^{1.5} log^4 n, except with probability e^{−log^2 n}. Assuming these inequalities for S_v and Y_{ik}, we know from Hoeffding's inequality again that
+ |Σ_{v=1}^{n−1} S_v · c[v]| ≤ 4σn√t log^5 n,
+ |Σ_{i,k} h_i[n − k] · Y_{ik}| ≤ √(nt) log n · 4σ√µ n^{1.5} log^5 n,
+ except with probability e^{−Ω(log^2 n)}. Hence, we conclude that |Z[0]| ≤ 5σn^2 √(tµ) log^6 n, except with probability e^{−Ω(log^2 n)}. We can similarly bound Z[i] for i ≥ 1, and so ||Z||_∞ ≤ 5σn^2 √(tµ) log^6 n, except with probability e^{−Ω(log^2 n)}.
2517
+
2518
+ 4) Key-and-Signature Compact Multi-signature Scheme from our ID scheme: With the simulability, linearity, and security of our ID scheme, we can use our compiler to convert it into a secure multi-signature. We now describe this scheme as follows. Let (s_{1i}, s_{2i}) be the private key of public-key u_i = a s_{1i} + s_{2i} for i = 1, · · · , t. If the users of u_1, · · · , u_t want to jointly sign M, they compute the aggregated public-key u|t and execute the protocol as follows, where H_0, H_1 : {0, 1}* → C and we define w = Σ_{i=1}^t H_0(u_i, U) w_i for any list of variables w_1, · · · , w_t in the description and U = (u_1, · · · , u_t) (e.g., v = Σ_{i=1}^t H_0(u_i, U) · v_i).
+ • R-1. User i generates y_{1i}, y_{2i} ← Y^µ, computes v_i = a y_{1i} + y_{2i} and sends H_0(v_i|u_i) to the other users.
+ • R-2. Upon receiving all r_j, j = 1, · · · , t, user i sends v_i to the other users.
+ • R-3. Upon receiving all v_j, user i verifies their consistency with r_j. If verification fails, it rejects; otherwise, it computes v and c = H_1(u|v|M), as well as the response (z_{1i}, z_{2i}) for challenge c in the ID scheme with committing message v_i.
+ • Output. After receiving (z_{1j}, z_{2j}) for j ∈ [t], user i computes the multi-signature (z_1, z_2, v). The aggregated public-key is u|t.
+ • Verify. Upon (z_1, z_2, v), the verifier checks the following with u|t and accepts only if it is valid:
+ ||z_1||_∞ ≤ η_t,   ||z_2||_∞ ≤ η_t,   (40)
+ Σ_{j=1}^µ v_j = a z_1 + z_2 − uc,   (41)
+ where η_t = 5σn^2 √(tµ) log^6 n. Denote this multi-signature scheme by RLWE-MultiSig. From our compiler and the properties of our ID scheme, we obtain the following.
+ Corollary 3: Let η_{t*} = 5σn^2 √(t*µ) log^6 n, σ = Ω(n) and β_{t*} = 16η_{t*} √n log^2 n. Under the Ring-LWE_{q,σ,2n} and Ring-SIS_{3,q,β_{t*}} assumptions, RLWE-MultiSig is t*-EU-CMA secure. In particular, if the assumptions hold for t* = 2^{n^{1/4}}, then RLWE-MultiSig is EU-CMA secure.
+ Remark. As the best algorithms [33], [15] can only solve ring-SIS_{q,3,β} with β = 2^{Õ(√n)}, it is safe to assume ring-SIS_{q,3,β} for any polynomial β. If the assumption is sound for β = 2^{n^{1/4}}, our multi-signature scheme is EU-CMA secure, as t* can only be polynomial for a PPT adversary.
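+ The following self-contained Python sketch (again with toy parameters; the coefficients λ_i below merely stand in for the hash values H_0(u_i, U)) aggregates t honestly generated transcripts and checks the multi-signature verification equation (41); the norm checks (40) are omitted.
```python
import numpy as np

n, q, t, mu = 8, 12289, 3, 2
rng = np.random.default_rng(2)

def ring_mul(f, g):
    """Multiplication in R_q = Z_q[x]/(x^n + 1)."""
    full = np.convolve(f, g)
    hi = np.pad(full[n:], (0, n - full[n:].size))
    return (full[:n] - hi) % q

a, c = rng.integers(0, q, n), rng.integers(-1, 2, n)
lam = rng.integers(-1, 2, (t, n))                        # H0(u_i, U) stand-ins

u_agg = np.zeros(n, dtype=np.int64)
z1 = np.zeros(n, dtype=np.int64)
z2 = np.zeros(n, dtype=np.int64)
v_agg = np.zeros((mu, n), dtype=np.int64)
for i in range(t):
    s1, s2 = rng.integers(-2, 3, n), rng.integers(-2, 3, n)
    u_i = (ring_mul(a, s1) + s2) % q                     # u_i = a*s1_i + s2_i
    y1 = rng.integers(-50, 51, (mu, n))                  # per-coordinate masks
    y2 = rng.integers(-50, 51, (mu, n))
    v_i = np.array([(ring_mul(a, y1[j]) + y2[j]) % q for j in range(mu)])
    z1_i = (ring_mul(s1, c) + y1.sum(axis=0)) % q        # z1_i = s1*c + sum_j y1_ij
    z2_i = (ring_mul(s2, c) + y2.sum(axis=0)) % q
    # Aggregate everything with coefficient lambda_i.
    u_agg = (u_agg + ring_mul(lam[i], u_i)) % q
    z1 = (z1 + ring_mul(lam[i], z1_i)) % q
    z2 = (z2 + ring_mul(lam[i], z2_i)) % q
    v_agg = np.array([(v_agg[j] + ring_mul(lam[i], v_i[j])) % q for j in range(mu)])

lhs = v_agg.sum(axis=0) % q
rhs = (ring_mul(a, z1) + z2 - ring_mul(u_agg, c)) % q
print("Eq. (41) holds for the aggregated signature:", np.array_equal(lhs, rhs))
```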
2586
+ VIII. CONCLUSION
+ In this paper, we proposed a compiler that converts a special type of identification scheme into a key-and-signature compact multi-signature. This special type of ID scheme has a linear property. The aggregated public-key and the multi-signature both have size independent of the number of signers. We formulated this compiler through linear ID schemes via the language of R-modules and proved security through a new forking lemma, called the nested forking lemma. Under our compiler, the compact multi-signature problem is reduced from a multi-party problem to a two-party problem. We realized our compiler with the Schnorr ID scheme and a new lattice-based scheme. Our lattice multi-signature is the first of its kind that is key-and-signature compact without a restart in the signing process.
2603
+ REFERENCES
2604
+ [1] Michel Abdalla, Pierre Alain Fouque, Vadim Lyuba-
2605
+ shevsky, Mehdi Tibouchi, Tightly-Secure Signatures from
2606
+ Lossy Identification Schemes. EUROCRYPT 2012, 572-
2607
+ 590.
2608
+ [2] H. K. Alper and J. Burdges. Two-round trip schnorr multi-
2609
+ signatures via delinearized witnesses. In T. Malkin and C.
2610
+ Peikert, editors, CRYPTO 2021, Part I, volume 12825 of
2611
+ LNCS, pages 157-188, Virtual Event, Aug. 2021. Springer,
2612
+ Heidelberg.
2613
+ [3] Ali Bagherzandi, Jung Hee Cheon and Stanislaw Jarecki,
2614
+ Multisignatures secure under the discrete logarithm as-
2615
+ sumption and a generalized forking lemma. CCS 2008,
2616
+ pp. 449-458, 2008.
2617
+ [4] Mihir Bellare, Wei Dai, Chain Reductions for Multi-
2618
+ signatures and the HBMS Scheme. ASIACRYPT 2021, Part
2619
+ IV: 650-678
2620
+ [5] Mihir Bellare, Adriana Palacio, GQ and Schnorr Identifi-
2621
+ cation Schemes: Proofs of Security against Impersonation
2622
+ under Active and Concurrent Attacks. CRYPTO 2002:
2623
+ 162-177.
2624
+ [6] M. Bellare and G. Neven, Identity-Based Multi-signatures
2625
+ from RSA, CT-RSA 2007, M. Abe (Ed.), LNCS 4377, pp.
2626
+ 145-162, 2007.
2627
+ [7] Mihir Bellare, Phillip Rogaway: Random Oracles are Prac-
2628
+ tical: A Paradigm for Designing Efficient Protocols. CCS
2629
+ 1993: 62-73, 1993.
2630
+ [8] Mihir Bellare, Gregory Neven: Multi-signatures in the
2631
+ plain public-Key model and a general forking lemma. CCS
2632
+ 2006: 390-399
2633
+ [9] Ian F. Blake, Shuhong Gao and Ronald C. Mullin, Explicit
2634
+ Factorization of x2k + 1 over Fp with Prime p ≡ 3 mod
2635
+ 4. Appl. Algebra Eng. Commun. Comput. 4:89-94 (1993)
2636
+ [10] Florian B¨ohl, Dennis Hofheinz, Tibor Jager, Jessica
2637
+ Koch, Christoph Striecks: Confined Guessing: New Signa-
2638
+ tures From Standard Assumptions. J. Cryptol. 28(1): 176-
2639
+ 208 (2015)
2640
+
2641
+ [11] Alexandra Boldyreva, Threshold Signatures, Multisig-
2642
+ natures and Blind Signatures Based on the Gap-Diffie-
2643
+ Hellman-Group Signature Scheme. Public Key Cryptog-
2644
+ raphy 2003: 31-46.
2645
+ [12] D. Boneh, C. Gentry, B. Lynn, and H. Shacham. Ag-
2646
+ gregate and verifiably encrypted signatures from bilinear
2647
+ maps. In E. Biham, editor, EUROCRYPT 2003, volume
2648
+ 2656 of LNCS, pages 416-432. Springer-Verlag, 2003.
2649
+ [13] Dan Boneh, Manu Drijvers, Gregory Neven: Compact
2650
+ Multi-signatures for Smaller Blockchains. ASIACRYPT
2651
+ (2) 2018: 435-464
2652
+ [14] Cecilia Boschini, Akira Takahashi, and Mehdi Tibouchi.
2653
+ Musig-L: Lattice-based multi-signature with single-round
2654
+ online phase, CRYPTO’22.
2655
+ [15] Ronald Cramer, L´eo Ducas, and Benjamin Wesolowski.
2656
+ Short stickelberger class relations and application to ideal-
2657
+ svp. Eurocrypt 2017.
2658
+ [16] Ivan Damg˚ard, Claudio Orlandi, Akira Takahashi, and
2659
+ Mehdi Tibouchi. Two-round n-out-of-n and multisigna-
2660
+ tures and trapdoor commitment from lattices. PKC 2021,
2661
+ LNCS 12710, pages 99-130, 2021.
2662
+ [17] Manu Drijvers, Kasra Edalatnejad, Bryan Ford, Eike
2663
+ Kiltz, Julian Loss, Gregory Neven, Igors Stepanovs, On the
2664
+ Security of Two-Round Multi-Signatures. IEEE Sympo-
2665
+ sium on Security and Privacy 2019, pp. 1084-1101, IEEE,
2666
+ 2019.
2667
+ [18] L´eo Ducas and Alain Durmus. Ring-lwe in polynomial
2668
+ rings. In PKC 2012, LNCS 7293, pages 34-51. Springer,
2669
+ 2012.
2670
+ [19] Rachid El Bansarkhani and Jan Sturm. An efficient
2671
+ lattice-based multisignature scheme with applications to
2672
+ bitcoins, CANS’16, pages 140-155.
2673
+ [20] Nils Fleischhacker, Mark Simkin, Zhenfei Zhang: Squir-
2674
+ rel: Efficient Synchronized Multi-Signatures from Lattices.
2675
+ CCS 2022, papges 1109-1123, 2022.
2676
+ [21] Masayuki Fukumitsu and Shingo Hasegawa. A tightly-
2677
+ secure lattice-based multisignature. The 6th Asia Public-
2678
+ Key Cryptography Workshop 2019, page 3-11, 2019.
2679
+ [22] Masayuki Fukumitsu and Shingo Hasegawa. A lattice-
2680
+ based provably secure multisignature scheme in quantum
2681
+ random oracle model, ProvSec 2020.
2682
+ [23] C. Gentry and Z. Ramzan. Identity-based aggregate sig-
2683
+ natures. In M. Yung, editor, PKC 2006, volume 3958 of
2684
+ LNCS, pages 257-273. Springer-Verlag, 2006.
2685
+ [24] C. Gentry, C. Peikert, and V. Vaikuntanathan. Trapdoors
2686
+ for hard lattices and new cryptographic constructions.
2687
+ STOC’08, pp. 197-206, 2008.
2688
+ [25] L. Harn. Group-oriented (t, n) threshold digital signa-
2689
+ ture scheme and digital multisignature. IEE Proceedings-
2690
+ Computers and Digital Techniques, 141(5):307-313, 1994.
2691
+ [26] K. Itakura and K. akamura, A public-key cryptosystem
2692
+ suitable for digital multisignatures. NEC Research & De-
2693
+ velopment, 71:1-8, 1983.
2694
+ [27] Meenakshi Kansal, Amit Kumar Singh, Ratna Dutta,
2695
+ Efficient Multi-Signature Scheme Using Lattice. Comput.
2696
+ J. 65(9): 2421-2429 (2022)
2697
+ [28] Meenakshi Kansal and Ratna Dutta, Round Optimal Se-
2698
+ cure Multisignature Schemes from Lattice with Public Key
2699
+ Aggregation and Signature Compression. AFRICACRYPT
2700
+ 2020, pages 281-300, 2020.
2701
+ [29] Serge Lang, Algebra, GTM 211, Springer-Verlag, 2002.
2702
+ [30] C. M. Li, T. Hwang, and N. Y. Lee. Threshold-
2703
+ multisignature schemes where suspected forgery implies
2704
+ traceability of adversarial shareholders. In A. D. Santis,
2705
+ editor, EUROCRYPT’94, volume 950 of LNCS, pages
2706
+ 194-204. Springer, Heidelberg, May 1995
2707
+ [31] Zi-Yuan Liu, Yi-Fan Tseng, and Raylin Tso. Cryptanaly-
2708
+ sis of a round optimal lattice-based multisignature scheme.
2709
+ Cryptology ePrint Archive, Report 2020/1172, 2020.
2710
+ [32] Steve Lu, Rafail Ostrovsky, Amit Sahai, Hovav Shacham, Brent Waters: Sequential Aggregate Signatures and Multisignatures Without Random Oracles. EUROCRYPT 2006: 465-485
2718
+ [33] Vadim Lyubashevsky and Daniele Micciancio, General-
2719
+ ized Compact Knapsacks Are Collision Resistant. ICALP
2720
+ 2006, part 2, pages 144-155, 2006.
2721
+ [34] Vadim Lyubashevsky, Chris Peikert, and Oded Regev. On
2722
+ ideal lattices and learning with errors over rings. J. ACM,
2723
+ 60(6):43:1-43:35, 2013.
2724
+ [35] V. Lyubashevsky, C. Peikert, and O. Regev. A toolkit
2725
+ for ring-LWE cryptography. EUROCRYPT’13, pages 35-
2726
+ 54, 2013.
2727
+ [36] Changshe Ma, Jian Weng, Yingjiu Li, Robert H. Deng:
2728
+ Efficient discrete logarithm based multi-signature scheme
2729
+ in the plain public key model. Des. Codes Cryptogr. 54(2):
2730
+ 121-133 (2010)
2731
+ [37] Changshe Ma, Mei Jiang, Practical Lattice-Based Mul-
2732
+ tisignature Schemes for Blockchains. IEEE Access 7:
2733
+ 179765-179778 (2019)
2734
+ [38] Maxwell, G., Poelstra, A., Seurin, Y., Wuille, P.: Sim-
2735
+ ple schnorr multi-signatures with applications to bit-
2736
+ coin. Cryptology ePrint Archive, Report 2018/068 (2018),
2737
+ https://eprint.iacr.org/2018/068/20180118:124757
2738
+ [39] Silvio Micali, Kazuo Ohta, Leonid Reyzin: Accountable-
2739
+ subgroup multisignatures: extended abstract. CCS 2001:
2740
+ 245-254.
2741
+ [40] D. Micciancio and O. Regev. Worst-case to average-case
2742
+ reductions based on gaussian measures. SIAM J. Comput.,
2743
+ 37(1): 267-302, 2007.
2744
+ [41] Satoshi Nakamoto. Bitcoin: A Peer-to-Peer Electronic Cash System, 2008. Available at http://bitcoin.org/bitcoin.pdf
2756
+ [42] J. Nick, T. Ruffing, and Y. Seurin. MuSig2: Simple two-
2757
+ round Schnorr multi-signatures. CRYPTO 2021, Part I,
2758
+ LNCS 12825, pp. 189-221, Springer, 2021.
2759
+ [43] J. Nick, T. Ruffing, Y. Seurin, and P. Wuille. MuSig-
2760
+ DN: Schnorr multi-signatures with verifiably deterministic
2761
+ nonces. In J. Ligatti, X. Ou, J. Katz, and G. Vigna, editors,
2762
+ ACM CCS 2020, pages 1717-1731. ACM Press, Nov.
2763
+ 2020.
2764
+ [44] K. Ohta and T. Okamoto. A digital multisignature scheme
2765
+ based on the Fiat-Shamir scheme. In H. Imai, R. L. Rivest,
2766
+ and T. Matsumoto, editors, ASIACRYPT’91, volume 739
2767
+ of LNCS, pages 139-148. Springer, Heidelberg, Nov. 1993.
2768
+ [45] Chris Peikert, Alon Rosen, Efficient Collision-Resistant
2769
+ Hashing from Worst-Case Assumptions on Cyclic Lattices.
2770
+ TCC 2006, pages 145-166, 2006.
2771
+
2772
+ [46] D. Pointcheval and J. Stern. Security arguments for digital
2773
+ signatures and blind signatures. Journal of Cryptology,
2774
+ 13(3):361-396, 2000.
2775
+ [47] Ronald L. Rivest, Adi Shamir, Leonard M. Adleman: A
2776
+ Method for Obtaining Digital Signatures and Public-Key
2777
+ Cryptosystems. Commun. ACM 21(2): 120-126, 1978.
2778
+ [48] C. P. Schnorr, Efficient Signature Generation by Smart
2779
+ Cards, Journal of Cryptology, vol 4, no. 3, pp. 161-174,
2780
+ 1991.
2781
+ [49] Damien Stehl´e and Ron Steinfeld, Making NTRU as
2782
+ secure as worst-case problems over ideal lattices, EURO-
2783
+ CRYPT 2011, K. G. Paterson (ed.), LNCS 6632, pp. 27-47,
2784
+ 2011.
2785
+ [50] Damien Stehl´e and Ron Steinfeld, Making NTRUEncrypt
2786
+ and NTRUSign as secure as standard worst-case problems
2787
+ over ideal lattices, Cryptology ePrint Archive, Report
2788
+ 2013/004, 2013, http://eprint.iacr.org/. Full version of [49].
2789
+ [51] Ewa Syta, Iulia Tamas, Dylan Visher, David Isaac Wolin-
2790
+ sky, Philipp Jovanovic, Linus Gasser, Nicolas Gailly, Is-
2791
+ mail Khoffi, and Bryan Ford. Keeping authorities “honest
2792
+ or bust” with decentralized witness cosigning. IEEE Sym-
2793
+ posium on Security and Privacy 2016, pp. 526-545. IEEE
2794
+ Computer Society Press, May 2016.
2795
+
H9FAT4oBgHgl3EQfuR4A/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
JNFLT4oBgHgl3EQfJy82/content/tmp_files/2301.12005v1.pdf.txt ADDED
@@ -0,0 +1,2861 @@
1
+ EmbedDistill: A Geometric Knowledge Distillation
2
+ for Information Retrieval
3
+ Seungyeon Kim, Ankit Singh Rawat, Manzil Zaheer,
4
+ Sadeep Jayasumana, Veeranjaneyulu Sadhanala, Wittawat Jitkrittum,
5
+ Aditya Krishna Menon, Rob Fergus, Sanjiv Kumar
6
+ Google LLC, USA
7
+ {seungyeonk,ankitsrawat,manzilzaheer}@google.com
8
+ {sadeep,veerus,wittawat,adityakmenon,robfergus,sanjivk}@google.com
9
+ Abstract
10
+ Large neural models (such as Transformers) achieve state-of-the-art performance for information
11
+ retrieval (IR). In this paper, we aim to improve distillation methods that pave the way for the deployment
12
+ of such models in practice. The proposed distillation approach supports both retrieval and re-ranking
13
+ stages and crucially leverages the relative geometry among queries and documents learned by the large
14
+ teacher model. It goes beyond existing distillation methods in the IR literature, which simply rely on
15
+ the teacher’s scalar scores over the training data, on two fronts: providing stronger signals about local
16
+ geometry via embedding matching and attaining better coverage of data manifold globally via query
17
+ generation. Embedding matching provides a stronger signal to align the representations of the teacher and
18
+ student models. At the same time, query generation explores the data manifold to reduce the discrepancies
19
+ between the student and teacher where training data is sparse. Our distillation approach is theoretically
20
+ justified and applies to both dual encoder (DE) and cross-encoder (CE) models. Furthermore, for distilling
21
+ a CE model to a DE model via embedding matching, we propose a novel dual pooling-based scorer for the
22
+ CE model that facilitates a distillation-friendly embedding geometry, especially for DE student models.
23
+ 1
24
+ Introduction
25
+ Neural models for information retrieval (IR) are increasingly used to model the true ranking function in
26
+ various applications, including web search [Mitra and Craswell, 2018], recommendation [Zhang et al., 2019],
27
+ and question-answering (QA; Chen et al. 2017). Notably, the recent success of Transformers [Vaswani et al.,
28
+ 2017]-based pre-trained language models [Devlin et al., 2019, Liu et al., 2019, Raffel et al., 2020] on a
29
+ wide range of natural language understanding tasks has also prompted their utilization in IR to capture
30
+ query-document relevance [see, e.g., Dai and Callan, 2019b, MacAvaney et al., 2019a, Nogueira and Cho,
31
+ 2019, Lee et al., 2019, Karpukhin et al., 2020a].
32
+ A typical IR system comprises two stages: (1) A retriever first selects a small subset of potentially relevant
33
+ candidate documents (out of a large collection) for a given query; and (2) A re-ranker then identifies a
34
+ precise ranking among the candidates provided by the retriever. Dual-encoder (DE) models are the de-facto
35
+ architecture for retrievers [Lee et al., 2019, Karpukhin et al., 2020a]. Such models independently embed
36
+ queries and documents into a common space, and capture their relevance by simple operations on these
37
+ embeddings such as the inner product. This enables offline creation of a document index and supports fast
38
+ retrieval during inference via efficient maximum inner product search (MIPS) implementations [Guo et al.,
39
+ 1
40
+ arXiv:2301.12005v1 [cs.LG] 27 Jan 2023
41
+
42
+ 2020, Johnson et al., 2021], with online query embedding generation primarily dictating the inference latency.
43
+ Cross-encoder (CE) models, on the other hand, are preferred as re-rankers, owing to their excellent perfor-
44
+ mance [Nogueira and Cho, 2019, Dai and Callan, 2019a, Yilmaz et al., 2019]. A CE model jointly encodes a
45
+ query-document pair while enabling early interaction among query and document features. Employing a CE
46
+ model for retrieval is often infeasible, as it would require processing a given query with every document in
47
+ the collection at inference time. In fact, even in the re-ranking stage, the inference cost of CE models is high
48
+ enough [Khattab and Zaharia, 2020] to warrant exploration of efficient alternatives [Hofst¨atter et al., 2020,
49
+ Khattab and Zaharia, 2020, Menon et al., 2022].
50
+ Knowledge distillation [Bucilˇa et al., 2006, Hinton et al., 2015] provides a general strategy to address the
51
+ prohibitive inference cost associated with high-quality large neural models. In the IR literature, most existing
52
+ distillation methods only rely on the teacher’s query-document relevance scores [see, e.g., Lu et al., 2020,
53
+ Hofst¨atter et al., 2020, Chen et al., 2021, Ren et al., 2021, Santhanam et al., 2021] or their proxies [Izacard
54
+ and Grave, 2021]. However, given that neural IR models are inherently embedding-based, it is natural to ask:
55
+ Is it useful to go beyond matching of the teacher and student models’ scores, and directly aim to align their
56
+ embedding spaces?
57
+ With this in mind, we propose a novel distillation method for IR models that utilizes an embedding matching
58
+ task to train the student. The proposed method supports cross-architecture distillation and improves upon
59
+ existing distillation methods for both retriever and re-ranker models. When distilling a large DE model
60
+ into a smaller DE model, for a given query (document), one can minimize the distance between the query
61
+ (document) embeddings of the teacher and student after compatible projection layers to account for any
62
+ dimensionality mismatch. In contrast, defining an embedding matching task for distilling a CE model into
63
+ a DE model is not as straightforward. For Transformers-based CE models, it is common to use the final
64
+ embedding of a special token, e.g., [CLS] in BERT [Devlin et al., 2019], to compute query-document
65
+ relevance [Nogueira and Cho, 2019]. However, as we note in Sec. 4.2, this token embedding does not capture
66
+ semantic similarity between the query and document. To make CE models more amenable to distillation via
67
+ embedding matching, we propose a modified CE scoring approach by utilizing a novel dual-pooling strategy:
68
+ this separately pools the final query and document token embeddings, and then computes the inner product
69
+ between the pooled embeddings as the relevance score. Our key contributions toward improving IR models
70
+ via distillation are:
71
+ • We propose a novel distillation approach for neural IR models, namely EmbedDistill, that goes beyond
72
+ score matching and aligns the embedding spaces of the teacher and student models.
73
+ • We consider a novel DE to DE distillation setup to showcase the effectiveness of EmbedDistill (Sec. 4.1).
74
+ Specifically, we consider a student DE model with an asymmetric configuration, consisting of a small
75
+ query encoder and a frozen document encoder inherited from the teacher. This configuration significantly
76
+ reduces inference latency of query embedding generation, while leveraging the teachers’ high-quality
77
+ document index.
78
+ • We show that EmbedDistill can leverage synthetic data to improve a student by further aligning the
79
+ embedding spaces of the teacher and student (Sec. 4.3).
80
+ • We theoretically justify both embedding matching and query generation components of our proposed
81
+ method (Sec. 5). Further, we provide a comprehensive empirical evaluation of the method (Sec. 6) on two
82
+ standard IR benchmarks – Natural Questions [Kwiatkowski et al., 2019a] and MSMARCO [Nguyen et al.,
83
+ 2016]. Additionally, we also evaluate EmbedDistill on BEIR benchmark [Thakur et al., 2021] which is
84
+ used to measure the zero-shot performance of an IR model.
85
+ • Finally, we demonstrate the utility of embedding matching for CE to DE distillation on MSMARCO
86
+ by employing a novel pooling strategy, namely dual pooling (Sec. 4.2), which may be of independent
87
+ interest.
88
+ 2
89
+
90
+ 2
91
+ Related work
92
+ Here, we review some existing Transformers-based IR models, and discuss prior work on distillation and data
93
+ augmentation for such models. We also cover prior efforts on aligning representations during distillation for
94
+ non-IR settings. Unlike our problem setting where the DE student is factorized, these works mainly consider
95
+ distilling a single large Transformer into a smaller one.
96
+ Transformers-based architectures for IR. Besides DE and CE models described in Sec. 1, intermediate
97
+ configurations [MacAvaney et al., 2020, Khattab and Zaharia, 2020, Nie et al., 2020, Luan et al., 2021] have
98
+ been proposed. Such models independently encode query and document before applying a more complex late
99
+ interaction between the two. Interestingly, Nogueira et al. [2020] explore generative encoder-decoder style
100
+ model for re-ranking, where a T5 [Raffel et al., 2020] model takes a query-document pair as input and its
101
+ score for certain target tokens (e.g., True/False) defines the relevance score for the pair. In this paper, we
102
+ focus on basic DE and CE models to showcase the benefits of our proposed geometric distillation approach.
103
+ Exploring embedding matching for other aforementioned architectures is an interesting avenue for future
104
+ exploration.
105
+ Distillation for IR. Traditional distillation techniques have been widely applied in the IR literature, often to
106
+ distill a teacher CE model to a student DE model [Li et al., 2020, Chen et al., 2021]. Recently, distillation from
107
+ a DE model (with complex late interaction) to another DE model (with inner-product scoring) has also been
108
+ considered [Lin et al., 2021, Hofst¨atter et al., 2021]. As for distilling across different model architectures, Lu
109
+ et al. [2020], Izacard and Grave [2021] consider distillation from a teacher CE model to a student DE model.
110
+ Hofst¨atter et al. [2020] conduct an extensive study of knowledge distillation across a wide-range of model
111
+ architectures. Most existing distillation schemes for IR rely on only teacher scores; by contrast, we propose a
112
+ geometric approach that also utilizes the teacher embeddings. Many recent efforts [Qu et al., 2021, Ren et al.,
113
+ 2021, Santhanam et al., 2021] show that iterative multi-stage (self-)distillation improves upon single-stage
114
+ distillation [Qu et al., 2021, Ren et al., 2021, Santhanam et al., 2021]. These approaches use a model from
115
+ the previous stage to obtain labels [Santhanam et al., 2021] as well as mine harder-negatives [Xiong et al.,
116
+ 2021]. We only focus on the single-stage distillation in this paper. Multi-stage procedures are complementary
117
+ to our work, as one can employ our proposed embedding-matching approach in various stages of such a
118
+ procedure. Interestingly, we demonstrate in Sec. 6 that our proposed EmbedDistill can successfully benefit
119
+ from high quality models trained with such complex procedures [Reimers et al., 2019, Zhang et al., 2022].
120
+ In particular, our single-stage distillation method can transfer almost all of their performance gains to even
121
+ smaller models. Also to showcase that our method brings gain orthogonal to how teacher was trained, we
122
+ conduct experiments with single-stage trained teacher in Appendix F.5.
123
+ Distillation with representation alignments. Outside of the IR context, a few prior works proposed to utilize
124
+ alignment between hidden layers during distillation [Romero et al., 2014, Sanh et al., 2019, Jiao et al., 2020,
125
+ Aguilar et al., 2020, Zhang and Ma, 2020]. Chen et al. [2022] utilize the representation alignment to re-use
126
+ teacher’s classification layer. Our work differs from these as it needs to address multiple challenges presented
127
+ by an IR setting: 1) cross-architecture distillation such as CE to DE distillation; 2) partial representation
128
+ alignment of query or document representations as opposed to aligning for the entire input, i.e., a query-
129
+ documents pair; 3) catering representation alignment approach to novel IR setups such as asymmetric DE
130
+ configuration; and 4) dealing with negative sampling due to a large number of classes (documents). To the
131
+ best of our knowledge, our work is first in the IR literature that goes beyond simply matching scores (or its
132
+ proxies).
133
+ Semi-supervised learning for IR. Data augmentation or semi-supervised learning has been previously
134
+ used to ensure data efficiency in IR [see, e.g., MacAvaney et al., 2019b, Zhao et al., 2021]. More inter-
135
+ estingly, data augmentation via large pre-trained models have enabled performance improvements as well.
136
+ 3
137
+
138
+ Teacher
139
+ Cross Encoder
140
+ Student Query
141
+ Encoder
142
+ Student Doc
143
+ Encoder
144
+
145
+
146
+ Score
147
+ Score
148
+ Special tokens
149
+ Query & doc tokens
150
+ Score-based
151
+ distillation
152
+
153
+
154
+ Doc tokens
155
+ Query tokens
156
+ Doc tokens
157
+ Query tokens
158
+ Pooling
159
+ Pooling
160
+ (a) Score-based CE to DE distillation
161
+ Special tokens
162
+ Query & doc tokens
163
+ Student Query
164
+ Encoder
165
+
166
+
167
+ Score
168
+ Teacher Doc
169
+ Encoder
170
+ Score
171
+ Teacher Query
172
+ Encoder
173
+ Student Doc
174
+ Encoder
175
+ Pooling
176
+
177
+
178
+ Doc tokens
179
+ Query tokens
180
+ Doc tokens
181
+ Query tokens
182
+ Score-based distillation
183
+ Pooling
184
+ Pooling
185
+ Pooling
186
+ (b) Score-based DE to DE distillation
187
+ Figure 1: Illustration of score-based distillation for IR (cf. Section 3.2). Fig. 1a describes distillation from
188
+ a teacher [CLS]-pooled CE model to a student DE model. Fig. 1b depicts distillation from a teacher DE
189
+ model to a student DE model. Here, both distillation setup employ symmetric DE configurations where query
190
+ and document encoders of the student model are of the same size.
191
+ Doc2query [Nogueira et al., 2019b,a] performs document expansion by generating queries that are relevant to
192
+ the document and appending those queries to the document. Query expansion has also been considered, e.g.,
193
+ for document re-ranking [Zheng et al., 2020]. Notably, generating synthetic (query, passage, answer) triples
194
+ from a text corpus to augment existing training data for QA systems also leads to significant gains [Alberti
195
+ et al., 2019, O˘guz et al., 2021]. Furthermore, even zero-shot approaches, where no labeled query-document
196
+ pairs are used, can also perform competitively to supervised methods. Such methods train a DE model by
197
+ relying on inverse cloze task [Lee et al., 2019, Izacard et al., 2021], synthetic query-document pairs given a
198
+ target text corpus [Ma et al., 2021], or relevance scores from large pretrained models [Sachan et al., 2022].
199
+ Unlike these works, we utilize query-generation capability to ensure tighter alignment between the embedding
200
+ spaces of the teacher and student.
201
+ 3
202
+ Background
203
+ Let Q and D denote the query and document spaces, respectively. An IR model is equivalent to a scorer
204
+ s : Q × D → R, i.e., it assigns a (relevance) score s(q, d) for a query-document pair (q, d) ∈ Q × D.
205
+ Ideally, we want to learn an IR model or scorer that is faithful to the true query-document relevance, i.e.,
206
+ s(q, d) > s(q, d′) iff the document d is more relevant to the query q than document d′. We assume access to
207
+ n labeled training examples Sn = {(qi, di, yi)}i∈[n]. Here, di = (di,1, . . . , di,L) ∈ DL, ∀i ∈ [n], denotes a
208
+ list of L documents and yi = (yi,1, . . . , yi,L) ∈ {0, 1}L denotes the corresponding labels such that yi,j = 1
209
+ iff the document di,j is relevant to the query qi. Given Sn, we learn an IR model by minimizing
210
+ R(s; Sn) := 1
211
+ n
212
+
213
+ i∈[n] ℓ
214
+
215
+ sqi,di, yi
216
+
217
+ ,
218
+ (1)
219
+ where sqi,di := (s(qi, d1,i), . . . , s(qi, d1,L)) and, accordingly, ℓ
220
+
221
+ sqi,di, yi
222
+
223
+ denotes the loss s incurs on
224
+ (qi, di, yi). We defer concrete choices for the loss function ℓ to Appendix A.
225
+ While this learning framework is general enough to work with any IR models as the scorers, next, we formally
226
+ introduce two families of Transformer-based IR models that are prevalent in the recent literature.
227
+ 4
228
+
229
+ 3.1
230
+ Transformer-based IR models: Cross-encoders and Dual-encoders
231
+ Let query q = (q1, . . . , qm1) and document d = (d1, . . . , dm2) consist of m1 and m2 tokens, respectively. We
232
+ now discuss how Transformers-based CE and DE models process a the (q, d) pair.
233
+ Cross-encoder model. Let p = [q; d] be the sequence obtained by concatenating q and d. Further, let ˜p
234
+ be the sequence obtained by adding special tokens such [CLS] and [SEP] to p. Given an encoder-only
235
+ Transformer model Enc, the relevance score for the (q, d) pair is
236
+ s(q, d) = ⟨w, pool
237
+
238
+ Enc(˜p)
239
+
240
+ ⟩ = ⟨w, embq,d⟩,
241
+ (2)
242
+ where w is a d-dimensional classification vector, and pool(·) denotes a pooling operation that transforms the
243
+ contextualized token embeddings Enc(˜p) to a joint embedding vector embt
244
+ q,d. [CLS]-pooling is a common
245
+ operation that simply outputs the embedding of the [CLS] token as embt
246
+ q,d.
247
+ Dual-encoder model. Let ˜q and ˜d be the sequences obtained by adding appropriate special tokens to q and
248
+ d, respectively. A DE model comprises two (encoder-only) Transformers EncQ and EncD, which we call
249
+ query and document encoders, respectively.1 Let embq = pool
250
+
251
+ EncQ(˜q)
252
+
253
+ and embd = pool
254
+
255
+ EncD( ˜d)
256
+
257
+ denote
258
+ the query and document embeddings, respectively. Now, one can define s(q, d) = ⟨embq, embd⟩ to be the
259
+ relevance score assigned to the (q, d) pair by the DE model.
260
+ 3.2
261
+ Score-based distillation for IR models
262
+ Most distillation schemes for IR [e.g., Lu et al., 2020, Hofst¨atter et al., 2020, Chen et al., 2021] rely on
263
+ teacher relevance scores (cf. Fig. 1). In particular, given a training set Sn and a teacher with scorer st, one
264
+ learns a student with scorer ss by minimizing
265
+ R(ss, st; Sn) = 1
266
+ n
267
+
268
+ i∈[n] ℓd
269
+
270
+ ss
271
+ q,di, st
272
+ q,di
273
+
274
+ ,
275
+ (3)
276
+ where ℓd captures the discrepancy between ss and st. See Appendix A for common choices for ℓd.
277
+ 4
278
+ Embedding-matching based distillation
279
+ Since modern neural IR models are inherently embedding-based, we propose to explicitly align the embedding
280
+ spaces of the teacher and student via a novel distillation method, namely EmbedDistill. Our proposal goes
281
+ beyond existing distillation methods in the IR literature that only use the teacher scores. Next, we introduce
282
+ EmbedDistill for two prevalent settings: (1) distilling a large DE model to a smaller DE model;2 and (2)
283
+ distilling a CE model to a DE model.
284
+ 4.1
285
+ DE to DE distillation
286
+ Given a (q, d) pair, let embt
287
+ q and embt
288
+ d be the query and document embeddings produced by the query encoder
289
+ Enct
290
+ Q and document encoder Enct
291
+ D of the teacher DE model, respectively. Similarly, let embs
292
+ q and embs
293
+ d
294
+ 1It is common to employ dual-encoder models where query and document encoders are shared.
295
+ 2We focus on DE to DE distillation setup as the CE to CE distillation is special case of the former with the classification vector w
296
+ (cf. Eq. 2) being the trivial second encoder.
297
+ 5
298
+
299
+ Special tokens
300
+ Query & doc tokens
301
+ Student Query
302
+ Encoder
303
+ Teacher Doc
304
+ Encoder
305
+
306
+
307
+ Doc tokens
308
+ Query tokens
309
+ Score
310
+ Teacher Doc
311
+ Encoder
312
+
313
+
314
+ Doc tokens
315
+ Query tokens
316
+ Score
317
+ Teacher Query
318
+ Encoder
319
+
320
+ Embedding matching
321
+ Pooling
322
+ Pooling
323
+ Pooling
324
+ Pooling
325
+ Score-based distillation
326
+ Figure 2: Proposed DE to DE distillation with query embedding matching. The figure describes a setting
327
+ where student employs an asymmetric DE configuration with a small query encoder and a large (non-trainable)
328
+ document encoder inherited from the teacher. The smaller query encoder ensures small latency for encoding
329
+ query during inference, and large document encoder leads to a good quality document index.
330
+ denote the query and document embeddings produced by a student DE model with (Encs
331
+ Q, Encs
332
+ D) as its
333
+ query and document encoders. Now, EmbedDistill optimizes the following embedding alignment loss in
334
+ addition to the score-matching loss from Sec. 3.2:
335
+ REmb(t, s; Sn)= 1
336
+ n
337
+
338
+ (q,d)∈Sn
339
+
340
+ ∥embt
341
+ q−proj
342
+
343
+ embs
344
+ q
345
+
346
+ ∥ + ∥embt
347
+ d−proj
348
+
349
+ embs
350
+ d)∥
351
+
352
+ ,
353
+ (4)
354
+ where proj is an optional trainable layer that is required if the teacher and student produce different sized
355
+ embeddings. Alternatively, one can employ other variants of EmbedDistill, e.g., focusing on only aligning the
356
+ query embeddings takes the following form (cf. Fig. 2).
357
+ REmb,Q(t, s; Sn) = 1
358
+ n
359
+
360
+ q∈Sn
361
+ ∥embt
362
+ q − proj
363
+
364
+ embs
365
+ q
366
+
367
+ ∥.
368
+ (5)
369
+ Asymmetric DE. We also propose a novel student DE configuration where the student employs the teacher’s
370
+ document encoder (i.e., Encs
371
+ D = Enct
372
+ D) and only train its query encoder, which is much smaller compared
373
+ to the teacher’s query encoder. For such a setting, it is natural to only employ the embedding matching loss in
374
+ Eq. 5 as the document embeddings are aligned by design (cf. Fig. 2).
375
+ Note that this asymmetric student DE does not incur an increase in latency despite the use of a large teacher
376
+ document encoder. This is because the large document encoder is only needed to create a good quality
377
+ document index offline, and only the query encoder is evaluated at inference time. Thus, for DE to DE
378
+ distillation, we prescribe the asymmetric DE configuration universally. Our theoretical analysis (cf. Sec. 5)
379
+ and experimental results (cf. Sec. 6) suggest that the ability to inherit the document tower from the teacher
380
+ DE model can drastically improve the final performance, especially when combined with EmbedDistill.
381
+ 4.2
382
+ CE to DE distillation
383
+ Let Enct be the (single) teacher CE model encoder, and (Encs
384
+ Q, Encs
385
+ D) denote the student DE model’s query
386
+ and document encoders. When distilling from a CE to DE model, defining an effective embedding matching
387
+ task is not as straightforward as in Sec. 4.1: since CE models jointly encode query-document pairs, it is not
388
+ obvious how to extract individual query and document embeddings for matching.
389
+ 6
390
+
391
+ Teacher
392
+ Cross Encoder
393
+ Student Query
394
+ Encoder
395
+ Student Doc
396
+ Encoder
397
+
398
+
399
+ Doc tokens
400
+ Query tokens
401
+ Score
402
+ Score
403
+ Special tokens
404
+ Query & doc tokens
405
+
406
+
407
+ Doc tokens
408
+ Query tokens
409
+ Pooling
410
+ Pooling
411
+ Score-based
412
+ distillation
413
+
414
+ Embedding matching
415
+ Figure 3: Illustration of CE to DE distillation using EmbedDistill, with CE model employing dual pooling.
416
+ As a na¨ıve solution, for a (q, d) pair, one can simply match a joint transformation of the student’s query
417
+ embedding embs
418
+ q and document embedding embs
419
+ d to the teacher’s joint embedding embt
420
+ q,d. However, we
421
+ observed that including such an embedding matching task often leads to severe over-fitting, and results in
422
+ a poor student. Since st(q, d) = ⟨w, embt
423
+ q,d⟩, during CE model training, the joint embeddings embt
424
+ q,d for
425
+ relevant and irrelevant (q, d) pairs are encouraged to be aligned with w and −w, respectively. This produces
426
+ degenerate embeddings that do not capture semantic query-to-document relationships. We notice that even
427
+ the final query and document token embeddings lose such semantic structure (cf. Appendix G.2). Thus, a
428
+ teacher CE model with st(q, d) = ⟨w, embt
429
+ q,d⟩ does not add value for distillation beyond score-matching; in
430
+ fact, it hurts to include na¨ıve embedding matching. Next, we propose a modified CE model training strategy
431
+ that facilitates EmbedDistill.
432
+ CE models with dual pooling. We propose dual pooling to produce two embeddings embt
433
+ q←(q,d) and
434
+ embt
435
+ d←(q,d) from a CE model that serve as the proxy query and document embeddings, respectively. Ac-
436
+ cordingly, we define the relevance score as st(q, d) = ⟨embt
437
+ q←(q,d), embt
438
+ d←(q,d)⟩. We explore two variants of
439
+ dual pooling: (1) special token-based pooling that pools from [CLS] and [SEP]; and (2) segment-based
440
+ weighted mean pooling that separately performs weighted averaging on the query and document segments of
441
+ the final token embeddings. See Appendix B for details.
442
+ In addition to employing the dual pooling, we also utilize a reconstruction loss during the CE training, which
443
+ measures the likelihood of predicting each token of the original input from the final token embeddings. This
444
+ loss encourages reconstruction of query and document tokens based on the final token embeddings and
445
+ prevents the degeneration of the token embeddings during training. Given proxy embeddings from the teacher
446
+ CE, we can perform EmbedDistill with the embedding matching loss defined in Eq. 4 or Eq. 5 (cf. Fig. 3).
447
4.3   Task-specific online data generation

Data augmentation as a general technique has been previously considered in the IR literature [see, e.g., Nogueira et al., 2019b, Oğuz et al., 2021, Izacard et al., 2021], especially in data-limited, out-of-domain, or zero-shot settings. As EmbedDistill aims to align the embedding spaces of the teacher and student, the ability to generate similar queries or documents can naturally help enforce such an alignment globally on the task-specific manifold. Given a set of unlabeled task-specific query and document pairs U_m, we can further add the embedding-alignment loss R_Emb(t, s; U_m) to our training objective (cf. Eq. 4). Interestingly, for the DE to DE distillation setting, our approach can even benefit from a large collection of task-specific queries Q′ or documents D′. Here, we can independently employ the embedding matching losses R_Emb,Q(t, s; Q′) (cf. Eq. 5) and R_Emb,D(t, s; D′) that focus on queries and documents, respectively, as sketched below. Please refer to Appendix E for additional details.
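The query-only alignment term R_Emb,Q(t, s; Q′) on a batch of unlabeled (e.g., generated) queries could look like the following sketch. This is a hedged illustration of one plausible implementation: the encoder call signatures and the exact distance (per-query ℓ2 distance, matching how Eq. 5 appears in Theorem 5.1) are assumptions, not the authors' code.

```python
import torch

def query_alignment_loss(student_q_enc, teacher_q_enc, query_batch):
    """Average l2 distance between student and teacher query embeddings on
    unlabeled queries Q'; no relevance labels or documents are required."""
    with torch.no_grad():
        t_emb = teacher_q_enc(query_batch)   # [B, dim], teacher stays frozen
    s_emb = student_q_enc(query_batch)       # [B, dim]
    return (s_emb - t_emb).norm(dim=-1).mean()
```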
5   Improvements in the student generalization

Note that we motivate EmbedDistill, as well as the asymmetric DE configuration where the student DE model inherits the teacher's document encoder, from their potential to ensure a better alignment between the teacher and student embedding spaces. In this section, we provide a theoretical justification for both of these proposals by showing that they indeed result in better generalization (test-time) performance for the student and reduce the gap between the teacher and the student.
Let R(s) = E[ ℓ(s_{q,d}, y) ] be the population version of the empirical risk in Eq. 1. Note that R(s) is a measure of the test-time performance of the IR model defined by the scorer s. Since we want the student model to (at least) closely approximate the teacher model's performance, we are interested in bounding R(s^s) − R(s^t), i.e., the gap between the test-time performance of the student and teacher models. The following result provides a bound on this quantity (see Appendix C.1 for a formal statement and proof). For simplicity, we focus on L = 1 (cf. Sec. 3). The result can be extended to L > 1 with more complex notation.
Theorem 5.1 (Teacher-student performance gap (informal)). Let F and G denote the function classes for the query and document encoders for the student model, respectively. Suppose that the score-based distillation loss ℓ_d in Eq. 3 is based on the binary cross-entropy loss (Eq. 11 in Appendix A). Let the one-hot (label-dependent) loss ℓ in Eq. 1 be the binary cross-entropy loss (Eq. 9 in Appendix A). Further, assume that all encoders have the same output dimension and embeddings have their ℓ2-norm bounded by K. Then, we have

    R(s^s) − R(s^t) ≤ E_n(F, G) + 2K · R_Emb,Q(t, s; S_n) + 2K · R_Emb,D(t, s; S_n)
                      + ∆(s^t; S_n) + K^2 · ( E[ |σ(s^t_{q,d}) − y| ] + (1/n) Σ_{i∈[n]} |σ(s^t_{q_i,d_i}) − y_i| ),

where σ denotes the sigmoid function,

    E_n(F, G) := sup_{s^s ∈ F×G} | R(s^s, s^t; S_n) − E[ ℓ_d(s^s_{q,d}, s^t_{q,d}) ] |,

and ∆(s^t; S_n) denotes the deviation between the empirical risk (on S_n) and the population risk of the teacher model s^t.
The last three quantities in our bound on the teacher-student performance gap, namely ∆(s^t; S_n), E[ |σ(s^t_{q,d}) − y| ], and (1/n) Σ_{i∈[n]} |σ(s^t_{q_i,d_i}) − y_i|, are independent of the underlying student model. These terms solely depend on the quality of the underlying teacher model s^t.
More importantly, the teacher-student gap can be made small by reducing the following three terms: 1) the uniform deviation of the student's empirical distillation risk from its population version, E_n(F, G); 2) the query embedding matching loss for the student, R_Emb,Q(t, s; S_n); and 3) the document embedding matching loss for the student, R_Emb,D(t, s; S_n). The last two terms suggest that the teacher-student gap naturally goes down as the embeddings of the student and teacher models become aligned. In particular, when the student inherits the document encoder from the teacher, the third term R_Emb,D(t, s; S_n) vanishes. In general, these last two terms justify EmbedDistill, which employs either Eq. 4 or Eq. 5 (when the student inherits the teacher's document encoder).
Table 1: Full recall performance of various student DE models on the NQ dev set, including symmetric DE student models (67.5M or 11.3M transformer for both encoders) and asymmetric DE student models (67.5M or 11.3M transformer as query encoder and document embeddings inherited from the teacher). All distilled students used the same teacher (110.1M parameter BERT-base models as both encoders), with the full Recall@5 = 72.3, Recall@20 = 86.1, and Recall@100 = 93.6.

Method                                                                   |   6-Layer (67.5M)     |   4-Layer (11.3M)
                                                                         | R@5    R@20   R@100   | R@5    R@20   R@100
Train student directly                                                   | 36.2   59.7   80.0    | 24.8   44.7   67.5
  + Distill from teacher                                                 | 65.3   81.6   91.2    | 44.3   64.9   81.0
  + Inherit doc embeddings                                               | 69.9   83.9   92.3    | 56.3   70.9   82.5
  + Query embedding matching                                             | 72.7   86.5   93.9    | 61.2   75.2   85.1
  + Query generation                                                     | 73.4   86.3   93.8    | 64.3   77.8   87.9
Train student using only embedding matching and inherit doc embeddings  | 71.4   84.9   92.6    | 64.6   50.2   76.8
  + Query generation                                                     | 71.8   85.0   93.0    | 54.2   68.9   80.8
Next, we focus on the first term E_n(F, G), which captures the uniform deviation between the training and test-time performance of the student, as measured by the distillation loss. Again, we restrict ourselves to the setting of Theorem 5.1, which assumes a binary cross-entropy loss with L = 1 for simplicity. (See Appendix C.2 for a more precise statement and proof.)
Proposition 5.2. Let ℓ_d be a distillation loss which is L_ℓd-Lipschitz in its first argument. Let F and G denote the function classes for the query and document encoders, respectively. Further assume that, for each query and document encoder in our function class, the query and document embeddings have their ℓ2-norm bounded by K. Then,

    E_n(F, G) ≤ E_{S_n} [ (48 K L_ℓd / √n) ∫_0^∞ √( log( N(u, F) · N(u, G) ) ) du ].   (6)

Furthermore, with a fixed document encoder, i.e., G = {g*},

    E_n(F, {g*}) ≤ E_{S_n} [ (48 K L_ℓd / √n) ∫_0^∞ √( log N(u, F) ) du ].   (7)

Here, N(u, ·) is the u-covering number of a function class.
Note that Eq. 6 and Eq. 7 correspond to the uniform deviation when we train without and with a frozen document encoder, respectively. It is clear that the bound in Eq. 7 is less than or equal to that in Eq. 6 (because N(u, G) ≥ 1 for any u), which alludes to the desirable impact of employing a frozen document encoder (in terms of the deviation between train and test performance). When we further employ EmbedDistill (e.g., with the loss in Eq. 5), it regularizes the function class of query encoders, effectively reducing it to F′ with |F′| ≤ |F|. This has a further desirable implication for reducing the uniform deviation bounds, as N(u, F′) ≤ N(u, F).
6   Experiments

We now conduct a comprehensive evaluation of the proposed distillation approach. Specifically, we highlight the utility of the approach for both DE to DE and CE to DE distillation. We also showcase the benefits of combining our distillation approach with query generation methods.
Table 2: Performance of EmbedDistill for DE to DE distillation on the NQ test set. Note that the prior work mentioned in the table relies on techniques such as negative mining and multi-stage training. In contrast, we explore the orthogonal direction of embedding matching, which improves single-stage distillation and can be combined with the aforementioned techniques.

Method                              | #Layers       | R@20  | R@100
DPR [Karpukhin et al., 2020a]       | 12            | 78.4  | 85.4
DPR + PAQ [Oğuz et al., 2021]       | 12            | 84.0  | 89.2
DPR + PAQ [Oğuz et al., 2021]       | 24            | 84.7  | 89.2
ANCE [Xiong et al., 2021]           | 12            | 81.9  | 87.5
RocketQA [Qu et al., 2021]          | 12            | 82.7  | 88.5
MSS-DPR [Sachan et al., 2021]       | 12            | 84.0  | 89.2
MSS-DPR [Sachan et al., 2021]       | 24            | 84.8  | 89.8
Our teacher [Zhang et al., 2022]    | 12 (220.2M)   | 85.4  | 90.0
Student w/ proposed method          | 6 (67.5M)     | 85.1  | 89.8
Student w/ proposed method          | 4 (11.3M)     | 81.2  | 87.4
6.1   Setup

Benchmarks and evaluation metrics. We consider two popular IR benchmarks – Natural Questions (NQ; Kwiatkowski et al. 2019b) and MSMARCO [Nguyen et al., 2016] – which focus on finding the most relevant passage/document given a question and a search query, respectively. NQ provides both standard test and dev sets, whereas MSMARCO has only a standard dev set publicly available. In what follows, we use the terms query (document) and question (passage) interchangeably. For NQ, we use the standard full recall (strict) as well as the relaxed recall metric [Karpukhin et al., 2020a] to evaluate retrieval performance. For MSMARCO, we focus on the standard metrics Mean Reciprocal Rank (MRR)@10 and normalized Discounted Cumulative Gain (nDCG)@10 to evaluate both re-ranking and retrieval performance. See Appendix D for a detailed discussion of these evaluation metrics. Finally, we also evaluate EmbedDistill on the BEIR benchmark [Thakur et al., 2021] – a zero-shot retrieval benchmark – in terms of the nDCG@10 and recall@100 metrics.
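For concreteness, a minimal sketch of the recall@k and MRR@10 computations used above is given below. Function and variable names are ours; actual evaluations would typically rely on the official benchmark tooling rather than a hand-rolled script like this.

```python
def recall_at_k(ranked_doc_ids, relevant_ids, k):
    """Fraction of queries for which at least one relevant document
    appears in the top-k retrieved list."""
    hits = sum(
        1 for ranked, rel in zip(ranked_doc_ids, relevant_ids)
        if set(ranked[:k]) & set(rel)
    )
    return hits / len(ranked_doc_ids)

def mrr_at_k(ranked_doc_ids, relevant_ids, k=10):
    """Mean reciprocal rank of the first relevant document within the top k."""
    total = 0.0
    for ranked, rel in zip(ranked_doc_ids, relevant_ids):
        for rank, doc_id in enumerate(ranked[:k], start=1):
            if doc_id in rel:
                total += 1.0 / rank
                break
    return total / len(ranked_doc_ids)

# Toy example: two queries with their ranked retrievals and gold documents.
ranked = [["d3", "d7", "d1"], ["d5", "d2", "d9"]]
gold = [["d1"], ["d4"]]
print(recall_at_k(ranked, gold, k=3), mrr_at_k(ranked, gold, k=10))
```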
Model architectures. We follow standard Transformer-based IR model architectures, similar to Karpukhin et al. [2020a], Qu et al. [2021], Oğuz et al. [2021]. Our CE model is based on a RoBERTa-base model (Liu et al. [2019]; 12-layer, 768 dim, 124M parameters). We utilize various sizes of DE models based on BERT-base (Devlin et al. [2019]; 12-layer, 768 dim, 110M parameters), DistilBERT (Sanh et al. [2019]; 6-layer, 768 dim, 67.5M parameters, roughly 2/3 of base), or BERT-mini (Turc et al. [2019]; 4-layer, 256 dim, 11.3M parameters, roughly 1/10 of base). For query generation (cf. Sec. 4.3), we employ BART-base [Lewis et al., 2020], an encoder-decoder model, to generate similar questions from each training example's input question (query). We randomly mask 10% of the tokens and inject zero-mean Gaussian noise with σ ∈ {0.1, 0.2} between the encoder and decoder. See Appendix E for more details on query generation and Appendix F.1 for hyperparameters.
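As an illustration of this augmentation step, the sketch below masks 10% of a question's tokens and asks BART-base to rewrite it. It is a simplified, hypothetical rendition: the paper additionally perturbs the encoder outputs with Gaussian noise before decoding, which is omitted here, and the sampling settings are our own choices rather than the authors'.

```python
import random
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def generate_similar_queries(question, n=2, mask_prob=0.1):
    # Mask ~10% of the (non-special) input tokens so the model must rewrite them.
    ids = tok(question, return_tensors="pt").input_ids[0].tolist()
    ids = [tok.mask_token_id
           if random.random() < mask_prob and t not in tok.all_special_ids else t
           for t in ids]
    input_ids = torch.tensor([ids])
    with torch.no_grad():
        out = model.generate(input_ids, do_sample=True, top_k=50,
                             num_return_sequences=n, max_length=64)
    return tok.batch_decode(out, skip_special_tokens=True)

print(generate_similar_queries("who wrote the declaration of independence"))
```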
Table 3: Performance of various DE models on the MSMARCO dev set for the re-ranking task. A teacher model (110.1M parameter BERT-base models as both encoders) achieving MRR@10 of 36.8 and nDCG@10 of 42.7 is used. The table shows the performance of symmetric DE student models (67.5M or 11.3M transformer as both encoders) and asymmetric DE student models (67.5M or 11.3M transformer as query encoder and document embeddings inherited from the teacher).

Method                                                                   |    MRR@10       |    nDCG@10
                                                                         | 67.5M   11.3M   | 67.5M   11.3M
Train student directly                                                   | 27.0    23.0    | 32.2    29.7
  + Distill from teacher                                                 | 34.6    30.4    | 40.2    35.8
  + Inherit doc embeddings                                               | 35.2    32.1    | 41.0    37.7
  + Query embedding matching                                             | 36.2    35.0    | 42.0    40.8
  + Query generation                                                     | 36.2    34.4    | 42.0    40.1
Train student using only embedding matching and inherit doc embeddings  | 36.5    33.5    | 42.3    39.3
  + Query generation                                                     | 36.4    34.1    | 42.3    39.9
6.2   DE to DE distillation

We employ AR2 [Zhang et al., 2022]3 and SentenceBERT-v5 [Reimers et al., 2019]4 as the teacher DE models for NQ and MSMARCO, respectively. Note that both models are based on BERT-base. For DE to DE distillation, we consider two kinds of configurations for the student DE model: (1) Symmetric: we use identical question and document encoders, and evaluate DistilBERT and BERT-mini on both datasets. (2) Asymmetric: the student DE model inherits its document embeddings from the teacher DE model, and these embeddings are not trained during distillation; for the query encoder, we use DistilBERT or BERT-mini, which are smaller than the document encoder.
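In the asymmetric configuration, only the query tower runs at query time; documents are represented by the teacher's precomputed embeddings. A minimal sketch of scoring in that setup is shown below. It uses a brute-force top-k over an in-memory embedding matrix for simplicity (real deployments would typically use an ANN index), and all names and the file path are illustrative.

```python
import torch

# Precomputed (frozen) teacher document embeddings: [num_docs, dim].
doc_embeddings = torch.load("teacher_doc_embeddings.pt")  # hypothetical artifact

def retrieve_top_k(student_query_encoder, query_batch, k=100):
    """Score queries from the small student encoder against the inherited
    teacher document embeddings and return the top-k document indices."""
    q_emb = student_query_encoder(query_batch)   # [B, dim]
    scores = q_emb @ doc_embeddings.T            # dot-product relevance scores
    return scores.topk(k, dim=-1).indices        # [B, k]
```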
Student DE model training. We train student DE models using a combination of (i) the one-hot loss (cf. Eq. 8 in Appendix A) on training data; (ii) the distillation loss (cf. Eq. 10 in Appendix A); and (iii) the embedding matching loss in Eq. 5; a sketch of the combined objective follows below. We use [CLS]-pooling for all student encoders. Unlike DPR [Karpukhin et al., 2020a] or AR2, we do not use hard negatives from BM25 or other models, which greatly simplifies our distillation procedure.
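The sketch below shows one plausible way to combine the three terms on a labeled batch. The in-batch-negatives construction, the encoder interfaces, and the default weights (1.0 on each term, matching the description in the results below) are assumptions for illustration, not the authors' training code.

```python
import torch
import torch.nn.functional as F

def student_loss(student_q, student_d, teacher_q, teacher_d,
                 queries, docs, labels, w_distill=1.0, w_embed=1.0):
    """One-hot softmax CE + score distillation + query embedding matching.

    `queries`/`docs` are aligned batches; in-batch negatives give each query a
    score against every document in the batch. `labels[i]` is the index of the
    positive document for query i. In the asymmetric setting, student_d would
    be replaced by the teacher's frozen document embeddings.
    """
    sq, sd = student_q(queries), student_d(docs)           # [B, dim] each
    with torch.no_grad():
        tq, td = teacher_q(queries), teacher_d(docs)       # frozen teacher

    s_scores = sq @ sd.T                                   # [B, B] student logits
    t_scores = tq @ td.T                                   # [B, B] teacher logits

    one_hot = F.cross_entropy(s_scores, labels)            # Eq. 8 with in-batch negatives
    distill = F.kl_div(F.log_softmax(s_scores, dim=-1),    # softmax CE distillation (Eq. 10,
                       F.softmax(t_scores, dim=-1),        # up to a teacher-entropy constant)
                       reduction="batchmean")
    embed_match = (sq - tq).norm(dim=-1).mean()            # query embedding matching (Eq. 5)

    return one_hot + w_distill * distill + w_embed * embed_match
```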
Results and discussion. To understand the impact of the various proposed configurations and losses, we train models by sequentially adding components and evaluate their retrieval performance on the NQ and MSMARCO dev sets, as shown in Tables 1 and 4, respectively. (See Table 7 in Appendix F.2 for performance on NQ in terms of the relaxed recall.) We also evaluate the re-ranking task on the MSMARCO dev set (Table 3).

We begin by training a symmetric DE without distillation. As expected, moving to distillation brings considerable gains. Next, we swap the student document encoder with the (non-trainable) document embeddings from the teacher, which leads to a good jump in performance. We can then introduce EmbedDistill with Eq. 5 to align query representations between student and teacher. The two losses are combined with a weight of 1.0 (except for BERT-mini models in the presence of query generation, where we use 5.0). This improves performance significantly, e.g., it provides ~3 and ~5 point increases in recall@5 on NQ with students based on DistilBERT and BERT-mini, respectively (Table 1). We further explore the utility of EmbedDistill in aligning the teacher and student embedding spaces in Appendix G.1.

3 https://github.com/microsoft/AR2/tree/main/AR2
4 https://huggingface.co/sentence-transformers/msmarco-bert-base-dot-v5
Table 4: Performance of various DE models on the MSMARCO dev set for the retrieval task. A teacher model (110.1M parameter RoBERTa-base models as both encoders) achieving MRR@10 of 37.2 and nDCG@10 of 44.2 is used. The table shows the performance of symmetric DE student models (67.5M or 11.3M transformer as both encoders) and asymmetric DE student models (67.5M or 11.3M transformer as query encoder and document embeddings inherited from the teacher).

Method                                                                   |    MRR@10       |    nDCG@10
                                                                         | 67.5M   11.3M   | 67.5M   11.3M
Train student directly                                                   | 22.6    18.6    | 27.2    22.5
  + Distill from teacher                                                 | 35.0    28.6    | 41.3    34.1
  + Inherit doc embeddings                                               | 35.7    30.3    | 42.2    36.2
  + Query embedding matching                                             | 37.1    35.4    | 43.8    41.9
  + Query generation                                                     | 37.2    34.8    | 43.8    41.2
Train student using only embedding matching and inherit doc embeddings  | 36.6    31.4    | 43.3    37.6
  + Query generation                                                     | 36.7    32.8    | 43.4    39.2
Table 5: Average BEIR performance of our DE teacher and the EmbedDistill student model, along with their numbers of trainable parameters. Both models are trained on MSMARCO and evaluated on 14 other datasets (the average does not include MSMARCO). The full table is in Appendix F.4. With EmbedDistill, the student retains most of the teacher's performance on the unforeseen datasets.

Method                               | #Layers       | nDCG@10 | R@100
DPR [Karpukhin et al., 2020b]        | 12            | 22.5    | 47.7
ANCE [Xiong et al., 2021]            | 12            | 40.5    | 60.0
TAS-B [Hofstätter et al., 2021]      | 6             | 42.8    | 64.8
GenQ [Thakur et al., 2021]           | 6             | 42.5    | 64.2
Our teacher [Reimers et al., 2019]   | 12 (220.2M)   | 45.7    | 65.1
Student w/ EmbedDistill              | 6 (67.5M)     | 44.0    | 63.5
On top of the two losses (standard distillation and embedding matching), we also use R_Emb,Q(t, s; Q′) from Sec. 4.3 on 2 additional questions (per input question) generated by BART. We also try a variant where we eliminate the standard distillation loss and only employ the embedding matching loss in Eq. 5 along with inheriting the teacher's document embeddings. This configuration without the standard distillation loss leads to excellent performance, with query generation again providing additional gains in most cases.

It is worth highlighting that DE models trained with the proposed methods (e.g., asymmetric DE with embedding matching and generation) achieve 99% of the teacher's performance on both the NQ and MSMARCO tasks with a query encoder that is 2/3 the size of the teacher's. Furthermore, even with a query encoder 1/10 the size, our proposal can achieve 95-97% of the performance. This is particularly useful for latency-critical applications, with minimal impact on the final performance.
Finally, we take our best student models, i.e., those trained with the additional embedding matching loss and with data augmentation from query generation, and evaluate them on the test sets. We compare with various prior works and note that most of them used considerably bigger models in terms of parameters, depth (12 or 24 layers), or width (up to 1024 dims). Results for the NQ test set are reported in Table 2; as MSMARCO does not have any public test set, we instead present results for the BEIR benchmark in Table 5.
+ MARCO re-ranking task. While both teacher models perform similarly, embedding matching-based distilla-
886
+ tion only works with the Dual-pooled teacher. See Appendix F for nDCG@10 metric.
887
+ Method
888
+ MRR@10
889
+ [CLS]-pooled teacher
890
+ 37.1
891
+ Dual-pooled teacher
892
+ 37.0
893
+ Standard distillation from [CLS]-pooled teacher
894
+ 33.0
895
+ +Joint matching
896
+ 32.4
897
+ Standard distillation from Dual-pooled teacher
898
+ 33.3
899
+ +Query matching
900
+ 33.7
901
Note that we also provide an evaluation of our SentenceBERT teacher, which achieves very high performance on the benchmark and can be of independent interest (please refer to Appendix F.4 for details). For both NQ and BEIR, our approach obtains competitive student models with fewer than 50% of the parameters: even with 6 layers, our student model is very close (98-99%) to its teacher.
6.3   CE to DE distillation

We consider two CE teachers for the MSMARCO re-ranking task5: a standard [CLS]-pooled CE teacher and the Dual-pooled CE teacher (cf. Sec. 4.2). Both teachers are based on RoBERTa-base and trained on the triples in the training set for 300K steps with the cross-entropy loss.
Student DE model training. We consider the following distillation variants: standard score-based distillation from the [CLS]-pooled teacher, and distillation from our novel Dual-pooled CE teacher (with and without the embedding matching loss). For each variant, we initialize the encoders of the student DE model with two RoBERTa-base models and train for 500K steps. We perform the naïve joint embedding matching for the [CLS]-pooled teacher (cf. Sec. 4.2) and employ the query embedding matching (cf. Eq. 5) for the Dual-pooled CE teacher. In either case, the embedding matching loss is added on top of the standard cross-entropy loss with a weight of 1.0 (when used).
Results and discussion. Table 6 evaluates the effectiveness of dual pooling and embedding matching for CE to DE distillation. As described in Sec. 4.2, the traditional [CLS]-pooled teacher does not provide any useful embedding for embedding matching (see Appendix G.2 for further analysis of the resulting embedding space). However, with the Dual-pooled teacher, embedding matching does boost the student's performance.
7   Conclusion

We propose EmbedDistill — a novel distillation method for IR that goes beyond simple score matching. We specialize it to distill a DE model into another DE model by (a) reusing the teacher's document encoder in the student and (b) aligning the query embeddings of the teacher and student. This simple approach delivers immediate quality and computational gains in practical deployments, which we demonstrate on the MSMARCO, NQ, and BEIR benchmarks. We show that the query generation technique further improves the performance of the distilled student in most cases. We generalize the proposed approach to distill a CE model into a DE model and show the benefits on MSMARCO. Finally, our theoretical analysis alludes to the favorable implications of both embedding matching and inheriting the document encoder in the DE to DE distillation setting. A more comprehensive and systematic analysis of embedding matching-based distillation for IR is an exciting avenue for future research.

5 Note: Full retrieval is prohibitively expensive with CE models.
+ References
938
+ Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, and Chenlei Guo. Knowledge distillation
939
+ from internal representations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34,
940
+ pages 7350–7357, 2020.
941
+ Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. Synthetic QA corpora
942
+ generation with roundtrip consistency. In Proceedings of the 57th Annual Meeting of the Association for
943
+ Computational Linguistics, pages 6168–6173, Florence, Italy, July 2019. Association for Computational
944
+ Linguistics. doi: 10.18653/v1/P19-1620. URL https://aclanthology.org/P19-1620.
945
+ Olivier Bousquet, St´ephane Boucheron, and G´abor Lugosi. Introduction to Statistical Learning Theory,
946
+ pages 169–207. Springer Berlin Heidelberg, Berlin, Heidelberg, 2004. ISBN 978-3-540-28650-9. doi:
947
+ 10.1007/978-3-540-28650-9 8. URL https://doi.org/10.1007/978-3-540-28650-9_8.
948
+ Cristian Bucilˇa, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Proceedings of the
949
+ 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’06, pages
950
+ 535–541, New York, NY, USA, 2006. ACM.
951
+ Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading Wikipedia to answer open-domain
952
+ questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics
953
+ (Volume 1: Long Papers), pages 1870–1879, Vancouver, Canada, July 2017. Association for Computational
954
+ Linguistics. doi: 10.18653/v1/P17-1171. URL https://aclanthology.org/P17-1171.
955
+ Defang Chen, Jian-Ping Mei, Hailin Zhang, Can Wang, Yan Feng, and Chun Chen. Knowledge distillation
956
+ with the reused teacher classifier. In Proceedings of the IEEE/CVF Conference on Computer Vision and
957
+ Pattern Recognition, pages 11933–11942, 2022.
958
+ Xuanang Chen, Ben He, Kai Hui, Le Sun, and Yingfei Sun. Simplified tinybert: Knowledge distillation for
959
+ document retrieval. In Djoerd Hiemstra, Marie-Francine Moens, Josiane Mothe, Raffaele Perego, Martin
960
+ Potthast, and Fabrizio Sebastiani, editors, Advances in Information Retrieval, pages 241–248, Cham, 2021.
961
+ Springer International Publishing. ISBN 978-3-030-72240-1.
962
+ Zhuyun Dai and Jamie Callan. Deeper text understanding for IR with contextual neural language modeling.
963
+ In Benjamin Piwowarski, Max Chevalier, ´Eric Gaussier, Yoelle Maarek, Jian-Yun Nie, and Falk Scholer,
964
+ editors, Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in
965
+ Information Retrieval, SIGIR 2019, Paris, France, July 21-25, 2019, pages 985–988. ACM, 2019a.
966
+ Zhuyun Dai and Jamie Callan. Context-aware sentence/passage term importance estimation for first stage
967
+ retrieval. arXiv preprint arXiv:1910.10687, 2019b.
968
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional
969
+ transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio, editors,
970
+ Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational
971
+ Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019,
972
+ Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics, 2019.
973
+ 14
974
+
975
+ Ruiqi Guo, Philip Sun, Erik Lindgren, Quan Geng, David Simcha, Felix Chern, and Sanjiv Kumar. Accelerat-
976
+ ing large-scale inference with anisotropic vector quantization. In International Conference on Machine
977
+ Learning, 2020. URL https://arxiv.org/abs/1908.10396.
978
+ Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network, 2015.
979
+ Sebastian Hofst¨atter, Sophia Althammer, Michael Schr¨oder, Mete Sertkan, and Allan Hanbury. Improving
980
+ efficient neural ranking models with cross-architecture knowledge distillation. CoRR, abs/2010.02666,
981
+ 2020. URL https://arxiv.org/abs/2010.02666.
982
+ Sebastian Hofst¨atter, Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin, and Allan Hanbury. Efficiently
983
+ teaching an effective dense retriever with balanced topic aware sampling. In Proceedings of the 44th
984
+ International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’21,
985
+ page 113–122, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450380379.
986
+ doi: 10.1145/3404835.3462891. URL https://doi.org/10.1145/3404835.3462891.
987
+ Gautier Izacard and Edouard Grave. Distilling knowledge from reader to retriever for question answering.
988
+ In International Conference on Learning Representations, 2021. URL https://openreview.net/
989
+ forum?id=NTEz-6wysdb.
990
+ Gautier Izacard, Mathild Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and
991
+ Edouard Grave. Unsupervised dense information retrieval with contrastive learning. arXiv preprint
992
+ arXiv:2112.09118, 2021.
993
+ Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. TinyBERT:
994
+ Distilling BERT for natural language understanding. In Findings of the Association for Computational
995
+ Linguistics: EMNLP 2020, pages 4163–4174, Online, November 2020. Association for Computational
996
+ Linguistics. doi: 10.18653/v1/2020.findings-emnlp.372. URL https://aclanthology.org/2020.
997
+ findings-emnlp.372.
998
+ Jeff Johnson, Matthijs Douze, and Herv´e J´egou. Billion-scale similarity search with gpus. IEEE Transactions
999
+ on Big Data, 7(3):535–547, 2021. doi: 10.1109/TBDATA.2019.2921572.
1000
+ Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and
1001
+ Wen-tau Yih. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020
1002
+ Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online,
1003
+ November 2020a. Association for Computational Linguistics.
1004
+ Vladimir Karpukhin, Barlas O˘guz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen,
1005
+ and Wen-tau Yih.
1006
+ Dense passage retrieval for open-domain question answering.
1007
+ arXiv preprint
1008
+ arXiv:2004.04906, 2020b.
1009
+ Omar Khattab and Matei Zaharia. ColBERT: Efficient and Effective Passage Search via Contextualized Late
1010
+ Interaction over BERT, page 39–48. Association for Computing Machinery, New York, NY, USA, 2020.
1011
+ ISBN 9781450380164.
1012
+ Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti,
1013
+ Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew
1014
+ Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural questions: A
1015
+ benchmark for question answering research. Transactions of the Association for Computational Linguistics,
1016
+ 7:452–466, 2019a. doi: 10.1162/tacl a 00276. URL https://aclanthology.org/Q19-1026.
1017
+ 15
1018
+
1019
+ Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti,
1020
+ Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for
1021
+ question answering research. Transactions of the Association for Computational Linguistics, 7:453–466,
1022
+ 2019b.
1023
+ Michel Ledoux and Michel Talagrand. Probability in Banach spaces. Springer-Verlag, 1991.
1024
+ Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. Latent retrieval for weakly supervised open domain
1025
+ question answering. In Anna Korhonen, David R. Traum, and Llu´ıs M`arquez, editors, Proceedings of the
1026
+ 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-
1027
+ August 2, 2019, Volume 1: Long Papers, pages 6086–6096. Association for Computational Linguistics,
1028
+ 2019.
1029
+ Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy,
1030
+ Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural
1031
+ language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of
1032
+ the Association for Computational Linguistics, pages 7871–7880, Online, July 2020. Association for
1033
+ Computational Linguistics. doi: 10.18653/v1/2020.acl-main.703. URL https://aclanthology.
1034
+ org/2020.acl-main.703.
1035
+ Canjia Li, Andrew Yates, Sean MacAvaney, Ben He, and Yingfei Sun. Parade: Passage representation
1036
+ aggregation for document reranking. arXiv preprint arXiv:2008.09093, 2020.
1037
+ Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. In-batch negatives for knowledge distillation with
1038
+ tightly-coupled teachers for dense retrieval. In Proceedings of the 6th Workshop on Representation
1039
+ Learning for NLP (RepL4NLP-2021), pages 163–173, Online, August 2021. Association for Computational
1040
+ Linguistics. doi: 10.18653/v1/2021.repl4nlp-1.17. URL https://aclanthology.org/2021.
1041
+ repl4nlp-1.17.
1042
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
1043
+ Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv
1044
+ preprint arXiv:1907.11692, 2019.
1045
+ Wenhao Lu, Jian Jiao, and Ruofei Zhang. Twinbert: Distilling knowledge to twin-structured compressed bert
1046
+ models for large-scale retrieval. In Proceedings of the 29th ACM International Conference on Information
1047
+ & Knowledge Management, CIKM ’20, page 2645–2652, New York, NY, USA, 2020. Association for
1048
+ Computing Machinery. ISBN 9781450368599. doi: 10.1145/3340531.3412747. URL https://doi.
1049
+ org/10.1145/3340531.3412747.
1050
+ Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. Sparse, dense, and attentional represen-
1051
+ tations for text retrieval. Transactions of the Association for Computational Linguistics, 9:329–345, 2021.
1052
+ doi: 10.1162/tacl a 00369. URL https://aclanthology.org/2021.tacl-1.20.
1053
+ Ji Ma, Ivan Korotkov, Yinfei Yang, Keith Hall, and Ryan McDonald. Zero-shot neural passage retrieval via
1054
+ domain-targeted synthetic question generation. In Proceedings of the 16th Conference of the European
1055
+ Chapter of the Association for Computational Linguistics: Main Volume, pages 1075–1088, Online,
1056
+ April 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.eacl-main.92. URL
1057
+ https://aclanthology.org/2021.eacl-main.92.
1058
+ Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. CEDR: Contextualized embeddings
1059
+ for document ranking. In Proceedings of the 42nd International ACM SIGIR Conference on Research
1060
+ 16
1061
+
1062
+ and Development in Information Retrieval, SIGIR’19, page 1101–1104, New York, NY, USA, 2019a.
1063
+ Association for Computing Machinery. ISBN 9781450361729. doi: 10.1145/3331184.3331317. URL
1064
+ https://doi.org/10.1145/3331184.3331317.
1065
+ Sean MacAvaney, Andrew Yates, Kai Hui, and Ophir Frieder. Content-based weak supervision for ad-
1066
+ hoc re-ranking. In Proceedings of the 42nd International ACM SIGIR Conference on Research and
1067
+ Development in Information Retrieval, SIGIR’19, page 993–996, New York, NY, USA, 2019b. Association
1068
+ for Computing Machinery. ISBN 9781450361729. doi: 10.1145/3331184.3331316. URL https:
1069
+ //doi.org/10.1145/3331184.3331316.
1070
+ Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, and Ophir
1071
+ Frieder. Efficient Document Re-Ranking for Transformers by Precomputing Term Representations, page
1072
+ 49–58. Association for Computing Machinery, New York, NY, USA, 2020. ISBN 9781450380164.
1073
+ Aditya Menon, Sadeep Jayasumana, Ankit Singh Rawat, Seungyeon Kim, Sashank Reddi, and Sanjiv Kumar.
1074
+ In defense of dual-encoders for neural ranking. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba
1075
+ Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on
1076
+ Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 15376–15400.
1077
+ PMLR, 17–23 Jul 2022. URL https://proceedings.mlr.press/v162/menon22a.html.
1078
+ Bhaskar Mitra and Nick Craswell. An introduction to neural information retrieval. Foundations and
1079
+ Trends® in Information Retrieval, 13(1):1–126, 2018. ISSN 1554-0669. doi: 10.1561/1500000061. URL
1080
+ http://dx.doi.org/10.1561/1500000061.
1081
+ Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. MS
1082
+ MARCO: A human generated machine reading comprehension dataset. In Tarek Richard Besold, Antoine
1083
+ Bordes, Artur S. d’Avila Garcez, and Greg Wayne, editors, Proceedings of the Workshop on Cognitive
1084
+ Computation: Integrating neural and symbolic approaches 2016, volume 1773 of CEUR Workshop
1085
+ Proceedings. CEUR-WS.org, 2016.
1086
+ Ping Nie, Yuyu Zhang, Xiubo Geng, Arun Ramamurthy, Le Song, and Daxin Jiang. DC-BERT: decoupling
1087
+ question and document for efficient contextual encoding. In Jimmy Huang, Yi Chang, Xueqi Cheng, Jaap
1088
+ Kamps, Vanessa Murdock, Ji-Rong Wen, and Yiqun Liu, editors, Proceedings of the 43rd International
1089
+ ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual
1090
+ Event, China, July 25-30, 2020, pages 1829–1832. ACM, 2020. doi: 10.1145/3397271.3401271. URL
1091
+ https://doi.org/10.1145/3397271.3401271.
1092
+ Rodrigo Nogueira and Kyunghyun Cho. Passage re-ranking with BERT. CoRR, abs/1901.04085, 2019. URL
1093
+ http://arxiv.org/abs/1901.04085.
1094
+ Rodrigo Nogueira, Jimmy Lin, and AI Epistemic. From doc2query to doctttttquery. Online preprint, 6,
1095
+ 2019a.
1096
+ Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. Document expansion by query prediction.
1097
+ arXiv preprint arXiv:1904.08375, 2019b.
1098
+ Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. Document ranking with a pretrained
1099
+ sequence-to-sequence model. In Findings of the Association for Computational Linguistics: EMNLP 2020,
1100
+ pages 708–718, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/
1101
+ 2020.findings-emnlp.63. URL https://aclanthology.org/2020.findings-emnlp.63.
1102
+ 17
1103
+
1104
+ Barlas O˘guz, Kushal Lakhotia, Anchit Gupta, Patrick Lewis, Vladimir Karpukhin, Aleksandra Piktus, Xilun
1105
+ Chen, Sebastian Riedel, Wen-tau Yih, Sonal Gupta, et al. Domain-matched pre-training tasks for dense
1106
+ retrieval. arXiv preprint arXiv:2107.13602, 2021.
1107
+ Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and
1108
+ Haifeng Wang. RocketQA: An optimized training approach to dense passage retrieval for open-domain
1109
+ question answering. In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-T¨ur,
1110
+ Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou, editors, Proceedings of
1111
+ the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Hu-
1112
+ man Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5835–5847. Association
1113
+ for Computational Linguistics, 2021.
1114
+ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
1115
+ Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer.
1116
+ Journal of Machine Learning Research, 21(140):1–67, 2020. URL http://jmlr.org/papers/
1117
+ v21/20-074.html.
1118
+ Nils Reimers, Iryna Gurevych, and Iryna Gurevych. Sentence-BERT: Sentence embeddings using siamese
1119
+ bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language
1120
+ Processing. Association for Computational Linguistics, 11 2019. URL http://arxiv.org/abs/
1121
+ 1908.10084.
1122
+ Ruiyang Ren, Yingqi Qu, Jing Liu, Wayne Xin Zhao, Qiaoqiao She, Hua Wu, Haifeng Wang, and Ji-Rong Wen.
1123
+ Rocketqav2: A joint training method for dense passage retrieval and passage re-ranking. In Proceedings of
1124
+ EMNLP, 2021.
1125
+ Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua
1126
+ Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.
1127
+ Devendra Sachan, Mostofa Patwary, Mohammad Shoeybi, Neel Kant, Wei Ping, William L. Hamilton,
1128
+ and Bryan Catanzaro. End-to-end training of neural retrievers for open-domain question answering.
1129
+ In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the
1130
+ 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages
1131
+ 6648–6662, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.
1132
+ acl-long.519. URL https://aclanthology.org/2021.acl-long.519.
1133
+ Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau,
1134
+ and Luke Zettlemoyer. Improving passage retrieval with zero-shot question generation. arXiv preprint
1135
+ arXiv:2204.07496, 2022.
1136
+ Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. Distilbert, a distilled version of bert:
1137
+ smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.
1138
+ Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. Colbertv2:
1139
+ Effective and efficient retrieval via lightweight late interaction. CoRR, abs/2112.01488, 2021.
1140
+ Nandan Thakur, Nils Reimers, Andreas R¨uckl´e, Abhishek Srivastava, and Iryna Gurevych. BEIR: A hetero-
1141
+ geneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Conference
1142
+ on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. URL
1143
+ https://openreview.net/forum?id=wCu6T5xFjeJ.
1144
+ 18
1145
+
1146
+ Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Well-read students learn better: On the
1147
+ importance of pre-training compact models. arXiv preprint arXiv:1908.08962, 2019.
1148
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser,
1149
+ and Illia Polosukhin. Attention is all you need. In Proceedings of the 31st International Conference on
1150
+ Neural Information Processing Systems, NIPS’17, page 6000–6010, Red Hook, NY, USA, 2017. Curran
1151
+ Associates Inc. ISBN 9781510860964.
1152
+ Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and
1153
+ Arnold Overwijk. Approximate nearest neighbor negative contrastive learning for dense text retrieval.
1154
+ In International Conference on Learning Representations, 2021. URL https://openreview.net/
1155
+ forum?id=zeFrfgyZln.
1156
+ Zeynep Akkalyoncu Yilmaz, Wei Yang, Haotian Zhang, and Jimmy Lin. Cross-domain modeling of sentence-
1157
+ level evidence for document retrieval. In Proceedings of the 2019 Conference on Empirical Methods in
1158
+ Natural Language Processing and the 9th International Joint Conference on Natural Language Processing
1159
+ (EMNLP-IJCNLP), pages 3490–3496, Hong Kong, China, November 2019. Association for Computational
1160
+ Linguistics.
1161
+ Hang Zhang, Yeyun Gong, Yelong Shen, Jiancheng Lv, Nan Duan, and Weizhu Chen. Adversarial retriever-
1162
+ ranker for dense text retrieval. In International Conference on Learning Representations, 2022. URL
1163
+ https://openreview.net/forum?id=MR7XubKUFB.
1164
+ Linfeng Zhang and Kaisheng Ma. Improve object detection with feature-based knowledge distillation:
1165
+ Towards accurate and efficient detectors. In International Conference on Learning Representations, 2020.
1166
+ Shuai Zhang, Lina Yao, Aixin Sun, and Yi Tay. Deep learning based recommender system: A survey and
1167
+ new perspectives. ACM Comput. Surv., 52(1), feb 2019. ISSN 0360-0300. doi: 10.1145/3285029. URL
1168
+ https://doi.org/10.1145/3285029.
1169
+ Chen Zhao, Chenyan Xiong, Jordan Boyd-Graber, and Hal Daum´e III. Distantly-supervised dense retrieval
1170
+ enables open-domain question answering without evidence annotation. In Proceedings of the 2021
1171
+ Conference on Empirical Methods in Natural Language Processing, pages 9612–9622, Online and Punta
1172
+ Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/
1173
+ v1/2021.emnlp-main.756. URL https://aclanthology.org/2021.emnlp-main.756.
1174
+ Zhi Zheng, Kai Hui, Ben He, Xianpei Han, Le Sun, and Andrew Yates.
1175
+ BERT-QE: Contextualized
1176
+ Query Expansion for Document Re-ranking. In Findings of the Association for Computational Lin-
1177
+ guistics: EMNLP 2020, pages 4718–4728, Online, November 2020. Association for Computational Lin-
1178
+ guistics. doi: 10.18653/v1/2020.findings-emnlp.424. URL https://aclanthology.org/2020.
1179
+ findings-emnlp.424.
1180
A   Loss functions

Here, we state various (per-example) loss functions that most commonly define training objectives for IR models. Typically, one-hot training with the original labels is performed using the softmax-based cross-entropy loss:
    ℓ( s_{q,d_i}, y_i ) = − Σ_{j∈[L]} y_{i,j} · log[ exp(s(q_i, d_{i,j})) / Σ_{j′∈[L]} exp(s(q_i, d_{i,j′})) ].   (8)

In our experiments, we use in-batch negatives for obtaining the irrelevant documents d_{i,j′}. Alternatively, it is also common to employ a one-vs-all loss function based on the binary cross-entropy loss as follows:

    ℓ( s_{q,d_i}, y_i ) = − Σ_{j∈[L]} [ y_{i,j} · log( 1 / (1 + exp(−s(q_i, d_{i,j}))) ) + (1 − y_{i,j}) · log( 1 / (1 + exp(s(q_i, d_{i,j}))) ) ].   (9)

As for distillation, one can define a distillation objective based on the softmax-based cross-entropy loss as6:

    ℓ_d( s^s_{q,d_i}, s^t_{q,d_i} ) = − Σ_{j∈[L]} [ exp(s^t_{i,j}) / Σ_{j′∈[L]} exp(s^t_{i,j′}) ] · log[ exp(s^s_{i,j}) / Σ_{j′∈[L]} exp(s^s_{i,j′}) ],   (10)

where s^t_{i,j} := s^t(q_i, d_{i,j}) and s^s_{i,j} := s^s(q_i, d_{i,j}) denote the teacher and student scores, respectively. On the other hand, the distillation objective with the binary cross-entropy takes the form:

    ℓ_d( s^s_{q,d_i}, s^t_{q,d_i} ) = − Σ_{j∈[L]} [ ( 1 / (1 + exp(−s^t_{i,j})) ) · log( 1 / (1 + exp(−s^s_{i,j})) ) + ( 1 / (1 + exp(s^t_{i,j})) ) · log( 1 / (1 + exp(s^s_{i,j})) ) ].   (11)

Finally, distillation based on the mean squared error (MSE) loss (a.k.a. logit matching) employs the following loss function:

    ℓ_d( s^s_{q,d_i}, s^t_{q,d_i} ) = Σ_{j∈[L]} ( s^t(q_i, d_{i,j}) − s^s(q_i, d_{i,j}) )².   (12)
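For reference, a minimal PyTorch sketch of the three distillation objectives above (Eq. 10-12) is given below. It assumes teacher and student score matrices of shape [batch, L] and is meant as an illustration rather than the exact training code.

```python
import torch
import torch.nn.functional as F

def softmax_ce_distill(s_student, s_teacher):
    """Eq. 10: softmax cross-entropy between teacher and student score distributions."""
    p_teacher = F.softmax(s_teacher, dim=-1)
    log_p_student = F.log_softmax(s_student, dim=-1)
    return -(p_teacher * log_p_student).sum(-1).mean()

def binary_ce_distill(s_student, s_teacher):
    """Eq. 11: per-document binary cross-entropy against teacher probabilities."""
    p_teacher = torch.sigmoid(s_teacher)
    per_doc = F.binary_cross_entropy_with_logits(s_student, p_teacher, reduction="none")
    return per_doc.sum(-1).mean()

def mse_distill(s_student, s_teacher):
    """Eq. 12: logit matching."""
    return ((s_teacher - s_student) ** 2).sum(-1).mean()

# Toy check: scores for a batch of 2 queries with L = 4 documents each.
s_t, s_s = torch.randn(2, 4), torch.randn(2, 4, requires_grad=True)
loss = softmax_ce_distill(s_s, s_t) + binary_ce_distill(s_s, s_t) + mse_distill(s_s, s_t)
loss.backward()
```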
B   Dual pooling details

In this work, we focus on two kinds of dual pooling strategies:

• Special tokens-based dual pooling. Let pool_CLS and pool_SEP denote the pooling operations that return the embeddings of the [CLS] and [SEP] tokens, respectively. We define

    emb^t_{q←(q,d)} = pool_CLS( Enc^t(õ) ),    emb^t_{d←(q,d)} = pool_SEP( Enc^t(õ) ),   (13)

where õ denotes the input token sequence to the Transformers-based encoder, which consists of { query, document, special } tokens.

• Segment-based weighted-mean dual pooling. Let Enc^t(õ)|_Q and Enc^t(õ)|_D denote the final query token embeddings and document token embeddings produced by the encoder, respectively. We define the proxy query and document embeddings

    emb^t_{q←(q,d)} = mean_wt( Enc^t(õ)|_Q ),    emb^t_{d←(q,d)} = mean_wt( Enc^t(õ)|_D ),   (14)

where mean_wt(·) denotes the weighted mean operation. We employ the specific weighting scheme where each token receives a weight equal to the inverse of the square root of the token-sequence length.

6 It is common to employ temperature scaling with the softmax operation. We do not explicitly show the temperature parameter for ease of exposition.
C   Deferred details and proofs from Section 5

In this section, we present more precise statements and proofs of Theorem 5.1 and Proposition 5.2 (stated informally in Section 5 of the main text), along with the necessary background. First, for ease of exposition, we define new notation that will facilitate the theoretical analysis in this section.

Notation. Denote the query and document encoders as f : Q → R^k and g : D → R^k for the student, and F : Q → R^k, G : D → R^k for the teacher (in the dual-encoder setting). With q denoting a query and d denoting a document, f(q) and g(d) then denote the query and document embeddings, respectively, generated by the student. We define F(q) and G(d) similarly for the embeddings produced by the teacher.7
Theorem C.1 (Formal statement of Theorem 5.1). Let F and G denote the function classes for the query and document encoders of the student model, respectively. Given n examples S_n = {(q_i, d_i, y_i)}_{i∈[n]} ⊂ Q × D × {0, 1}, let s^s(q_i, d_i) := s_{f,g}(q_i, d_i) = f(q_i)^⊤ g(d_i) be the score assigned to the (q_i, d_i) pair by a dual-encoder model with f ∈ F and g ∈ G as query and document encoders, respectively. Let ℓ and ℓ_d be the binary cross-entropy loss (cf. Eq. 9 with L = 1) and the distillation-specific loss based on it (cf. Eq. 11 with L = 1), respectively. In particular,

    ℓ( s_{F,G}(q_i, d_i), y_i ) := −y_i log σ( F(q_i)^⊤ G(d_i) ) − (1 − y_i) log( 1 − σ( F(q_i)^⊤ G(d_i) ) ),
    ℓ_d( s_{f,g}(q_i, d_i), s_{F,G}(q_i, d_i) ) := −σ( F(q_i)^⊤ G(d_i) ) · log σ( f(q_i)^⊤ g(d_i) ) − [ 1 − σ( F(q_i)^⊤ G(d_i) ) ] · log( 1 − σ( f(q_i)^⊤ g(d_i) ) ),

where σ is the sigmoid function and s^t := s_{F,G} denotes the teacher dual-encoder model with F and G as its query and document encoders, respectively. Assume that

1. All encoders f, g, F, and G have the same output dimension.
2. ∃ K ∈ (0, ∞) such that sup_{q∈Q} max{ ∥f(q)∥_2, ∥F(q)∥_2 } ≤ K and sup_{d∈D} max{ ∥g(d)∥_2, ∥G(d)∥_2 } ≤ K.

Then, we have

    E[ ℓ( s_{f,g}(q, d), y ) ] − E[ ℓ( s_{F,G}(q, d), y ) ]                                   [ = R(s^s) − R(s^t) ]
      ≤ sup_{(f,g)∈F×G} | R( s_{f,g}, s_{F,G}; S_n ) − E[ ℓ_d( s_{f,g}(q, d), s_{F,G}(q, d) ) ] |    [ =: E_n(F, G) ]
        + 2K · (1/n) Σ_{i∈[n]} ∥ g(d_i) − G(d_i) ∥_2                                          [ =: R_Emb,D(t, s; S_n) ]
        + 2K · (1/n) Σ_{i∈[n]} ∥ f(q_i) − F(q_i) ∥_2                                          [ =: R_Emb,Q(t, s; S_n) ]
        + R( s_{F,G}; S_n ) − R( s_{F,G} )                                                    [ =: ∆(s^t; S_n) ]
        + K^2 · ( E[ |σ( F(q)^⊤ G(d) ) − y| ] + (1/n) Σ_{i∈[n]} |σ( F(q_i)^⊤ G(d_i) ) − y_i| ).   (15)

7 Note that, as per the notation in the main text, we have (f, g) = (Enc^s_Q, Enc^s_D) and (F, G) = (Enc^t_Q, Enc^t_D). Similarly, we have (emb^s_q, emb^s_d) = (f(q), g(d)) and (emb^t_q, emb^t_d) = (F(q), G(d)).
+ Proof. Note that
1483
+ R(sf,g) − R(sF,G) = R(sf,g) − R(sf,g, sF,G) + R(sf,g, sF,G) − R(sF,G)
1484
+ (a)
1485
+ ≤ K2E
1486
+ ����σ(F(q)⊤G(d)) − y
1487
+ ���
1488
+
1489
+ + R(sf,g, sF,G) − R(sF,G)
1490
+ = K2E
1491
+ ����σ(F(q)⊤G(d)) − y
1492
+ ���
1493
+
1494
+ + R(sf,g, sF,G) − R(sf,g, sF,G; Sn) + R(sf,g, sF,G; Sn) − R(sF,G)
1495
+ (b)
1496
+ ≤ K2E
1497
+ ����σ(F(q)⊤G(d)) − y
1498
+ ���
1499
+
1500
+ + En(F, G ) + R(sf,g, sF,G; Sn) − R(sF,G)
1501
+ = K2E
1502
+ ����σ(F(q)⊤G(d)) − y
1503
+ ���
1504
+
1505
+ + En(F, G ) + R(sf,g, sF,G; Sn) − R(sF,G; Sn)+
1506
+ R(sF,G; Sn) − R(sF,G)
1507
+ (c)
1508
+ ≤ K2E
1509
+ ����σ(F(q)⊤G(d)) − y
1510
+ ���
1511
+
1512
+ + En(F, G ) + R(sF,G; Sn) − R(sF,G)
1513
+
1514
+ ��
1515
+
1516
+ :=∆(st;Sn)
1517
+ +
1518
+ 2K
1519
+ n
1520
+
1521
+ i∈[n]
1522
+ ∥g(di) − G(di)∥2 + 2K
1523
+ n
1524
+
1525
+ i∈[n]
1526
+ ∥f(qi) − F(qi)∥2 + K2
1527
+ n
1528
+
1529
+ i∈[n]
1530
+ ���σ
1531
+
1532
+ F(qi)⊤G(di)
1533
+
1534
+ − yi
1535
+ ���
1536
+ (16)
1537
+ where (a) follows from Lemma C.3, (b) follows from the definition of En(F, G ), and (c) follows from
1538
+ Proposition C.2.
1539
+ C.1
1540
+ Bounding the difference between student’s empirical distillation risk and teacher’s em-
1541
+ pirical risk
1542
+ Lemma C.2. Given n examples Sn = {(qi, di, yi)}i∈[n] ⊂ Q×D×{0, 1}, let sf,g(qi, di) = f(qi)T g(di) be
1543
+ the scores assigned to the (qi, di) pair by a dual-encoder model with f and g as query and document encoders,
1544
+ respectively. Let ℓ and ℓd be the binary cross-entropy loss (cf. Eq. 9 with L = 1) and the distillation-specific
1545
+ loss based on it (cf. Eq. 11 with L = 1), respectively. In particular,
1546
+ ℓ(sF,G(qi, di), yi) := −yi log σ
1547
+
1548
+ F(qi)⊤G(di)
1549
+
1550
+ − (1 − yi) log
1551
+
1552
+ 1 − σ
1553
+
1554
+ F(qi)⊤G(di)
1555
+ ��
1556
+ ℓd(sf,g(qi, di), sF,G(qi, di)) := −σ
1557
+
1558
+ F(qi)⊤G(di)
1559
+
1560
+ · log σ
1561
+
1562
+ f(qi)⊤g(di)
1563
+
1564
+
1565
+ [1 − σ
1566
+
1567
+ F(qi)⊤G(di)
1568
+
1569
+ ] · log
1570
+
1571
+ 1 − σ
1572
+
1573
+ f(qi)⊤g(di)
1574
+ ��
1575
+ ,
1576
+ 22
1577
+
1578
+ where σ is the sigmoid function and sF,G denotes the teacher dual-encoder model with F and Q as its query
1579
+ and document encoders, respectively. Assume that
1580
+ 1. All encoders f, g, F, and G have the same output dimension k ≥ 1.
1581
+ 2. ∃ K ∈ (0, ∞) such that
1582
+ sup
1583
+ q∈Q
1584
+ max {∥f(q)∥2, ∥F(q)∥2} ≤ K
1585
+ and
1586
+ sup
1587
+ d∈D
1588
+ max {∥g(d)∥2, ∥G(d)∥2} ≤ K.
1589
+ Then, we have
1590
+ 1
1591
+ n
1592
+
1593
+ i∈[n]
1594
+ ℓd
1595
+
1596
+ sf,g(qi, di), sF,G(qi, di)
1597
+
1598
+ − 1
1599
+ n
1600
+
1601
+ i∈[n]
1602
+
1603
+
1604
+ sF,G(qi, di), yi
1605
+
1606
+
1607
+ 2K
1608
+ n
1609
+
1610
+ i∈[n]
1611
+ ∥g(di) − G(di)∥2 + 2K
1612
+ n
1613
+
1614
+ i∈[n]
1615
+ ∥f(qi) − F(qi)∥2 + K2
1616
+ n
1617
+
1618
+ i∈[n]
1619
+ ���σ
1620
+
1621
+ F(qi)⊤G(di)
1622
+
1623
+ − yi
1624
+ ��� .
1625
+ (17)
1626
+ Proof. We first note that the distillation loss can be rewritten as
1627
+ ℓd
1628
+
1629
+ sf,g(q, d), sF,G(q, d)
1630
+
1631
+ =
1632
+
1633
+ 1 − σ(F(q)⊤G(d)
1634
+
1635
+ f(q)⊤g(d) + γ(−f(q)⊤g(d)),
1636
+ where γ(v) := log[1 + ev] is the softplus function. Similarly, the one-hot (label-dependent) loss can be
1637
+ rewritten as
1638
+
1639
+
1640
+ sF,G(q, d), y
1641
+
1642
+ = (1 − y)F(q)⊤G(d) + γ(−F(q)⊤G(d)).
1643
+ Recall from our notation in Section 3 that
1644
+ R(sf,g, sF,G; Sn) := 1
1645
+ n
1646
+
1647
+ i∈[n]
1648
+ ℓd
1649
+
1650
+ sf,g(qi, di), sF,G(qi, di)
1651
+
1652
+ ,
1653
+ (18)
1654
+ R(sF,G; Sn) := 1
1655
+ n
1656
+
1657
+ i∈[n]
1658
+
1659
+
1660
+ sF,G(qi, di), yi
1661
+
1662
+ ,
1663
+ (19)
1664
+ as the empirical risk based on the distillation loss, and the empirical risk based on the label-dependent loss,
1665
+ respectively. With this notation, the quantity to upper bound can be rewritten as
1666
+ R(sf,g, sF,G; Sn) − R(sF,G; Sn) = R(sf,g, , sF,G; Sn) − R(sf,G, sF,G; Sn)
1667
+
1668
+ ��
1669
+
1670
+ :=□1
1671
+ +
1672
+ R(sf,G, sF,G; Sn) − R(sF,G, sF,G; Sn)
1673
+
1674
+ ��
1675
+
1676
+ :=□2
1677
+ +
1678
+ R(sF,G, sF,G; Sn) − R(sF,G; Sn)
1679
+
1680
+ ��
1681
+
1682
+ :=□3
1683
+ .
1684
+ (20)
1685
+ We start by bounding □1 as
1686
+ □1 = 1
1687
+ n
1688
+
1689
+ i∈[n]
1690
+
1691
+ ℓd
1692
+
1693
+ sf,g(qi, di), sF,G(qi, di)
1694
+
1695
+ − ℓd
1696
+
1697
+ sf,G(qi, di), sF,G(qi, di)
1698
+ ��
1699
+ 23
1700
+
1701
+ = 1
1702
+ n
1703
+
1704
+ i∈[n]
1705
+ � �
1706
+ 1 − σ(F(qi)⊤G(di))
1707
+
1708
+ f(qi)⊤g(di) + γ(−f(qi)⊤g(di))
1709
+
1710
+
1711
+ 1 − σ(F(qi)⊤G(di))
1712
+
1713
+ f(qi)⊤G(di) − γ(−f(qi)⊤G(di))
1714
+
1715
+ = 1
1716
+ n
1717
+
1718
+ i∈[n]
1719
+
1720
+ f(qi)⊤�
1721
+ g(di) − G(di)
1722
+ � �
1723
+ 1 − σ(F(qi)⊤G(di))
1724
+
1725
+ + γ(−f(qi)⊤g(di)) − γ(−f(qi)⊤G(di))
1726
+
1727
+ (a)
1728
+ ≤ 1
1729
+ n
1730
+
1731
+ i∈[n]
1732
+
1733
+ f(qi)⊤�
1734
+ g(di) − G(di)
1735
+ � �
1736
+ 1 − σ(F(qi)⊤G(di))
1737
+
1738
+ +
1739
+ ���f(qi)⊤g(di) − f(qi)⊤G(di)
1740
+ ���
1741
+
1742
+ (b)
1743
+ ≤ 1
1744
+ n
1745
+
1746
+ i∈[n]
1747
+
1748
+ ∥f(qi)∥∥g(di) − G(di)∥
1749
+
1750
+ 1 − σ(F(qi)⊤G(di))
1751
+
1752
+ + ∥f(qi)∥∥g(di) − G(di)∥
1753
+
1754
+ ≤ K
1755
+ n
1756
+
1757
+ i∈[n]
1758
+ ∥g(di) − G(di)∥2
1759
+
1760
+ 2 − σ(F(qi)⊤G(di))
1761
+ � �
1762
+ ≤ 2K
1763
+ n
1764
+
1765
+ i∈[n]
1766
+ ∥g(di) − G(di)∥2,
1767
+ (21)
1768
+ where at (a) we use the fact that γ is a Lipschitz continuous function with Lipschitz constant 1, and at (b) we
1769
+ use Cauchy-Schwarz inequality.
1770
+ Similarly for □2, we proceed as
1771
+ □2 = 1
1772
+ n
1773
+
1774
+ i∈[n]
1775
+
1776
+ ℓd
1777
+
1778
+ sf,G(qi, di), sF,G(qi, di)
1779
+
1780
+ − ℓd
1781
+
1782
+ sF,G(qi, di), sF,G(qi, di)
1783
+ ��
1784
+ = 1
1785
+ n
1786
+
1787
+ i∈[n]
1788
+ � �
1789
+ 1 − σ(F(qi)⊤G(di))
1790
+
1791
+ f(qi)⊤G(di) + γ(−f(qi)⊤G(di))
1792
+
1793
+
1794
+ 1 − σ(F(qi)⊤G(di))
1795
+
1796
+ F(qi)⊤G(di) − γ(−F(qi)⊤G(di))
1797
+
1798
+ = 1
1799
+ n
1800
+
1801
+ i∈[n]
1802
+
1803
+ G(di)⊤(f(qi) − F(qi))
1804
+
1805
+ 1 − σ(F(qi)⊤G(di))
1806
+
1807
+ + γ(−f(qi)⊤G(di)) − γ(−F(qi)⊤G(di))
1808
+
1809
+ ≤ 1
1810
+ n
1811
+
1812
+ i∈[n]
1813
+
1814
+ ∥G(di)∥∥f(qi) − F(qi)∥ +
1815
+ ���f(qi)⊤G(di) − F(qi)⊤G(di)
1816
+ ���
1817
+
1818
+ ≤ 2K
1819
+ n
1820
+
1821
+ i∈[n]
1822
+ ∥f(qi) − F(qi)∥2.
1823
+ (22)
1824
+ □3 can be bounded as
1825
+ □3 = R(sF,G, sF,G; Sn) − R(sF,G; Sn)
1826
+ = 1
1827
+ n
1828
+
1829
+ i∈[n]
1830
+
1831
+ ℓd
1832
+
1833
+ sF,G(qi, di), sF,G(qi, di)
1834
+
1835
+ − ℓ
1836
+
1837
+ sF,G(qi, di), yi
1838
+ ��
1839
+ = 1
1840
+ n
1841
+
1842
+ i∈[n]
1843
+ � �
1844
+ 1 − σ(F(qi)⊤G(di))
1845
+
1846
+ F(qi)⊤G(di) + γ(−F(qi)⊤G(di))
1847
+ − (1 − yi)F(qi)⊤G(di) − γ(−F(qi)⊤G(di))
1848
+
1849
+ 24
1850
+
1851
+ = 1
1852
+ n
1853
+
1854
+ i∈[n]
1855
+ ��
1856
+ 1 − σ(F(qi)⊤G(di)) − (1 − yi)
1857
+
1858
+ F(qi)⊤G(di)
1859
+
1860
+ ≤ K2
1861
+ n
1862
+
1863
+ i∈[n]
1864
+ ���σ(F(qi)⊤G(di)) − yi
1865
+ ��� .
1866
+ (23)
1867
+ Combining Eq. 20, 21, 22, and 23 establishes the bound in Eq. 17.
1868
Lemma C.3. Given an example (q, d, y) ∈ Q × D × {0, 1}, let s_{f,g}(q, d) = f(q)⊤g(d) be the score assigned to the (q, d) pair by a dual-encoder model with f and g as query and document encoders, respectively. Let ℓ and ℓd be the binary cross-entropy loss (cf. Eq. 9 with L = 1) and the distillation-specific loss based on it (cf. Eq. 11 with L = 1), respectively. In particular,

  ℓ(s_{f,g}(q, d), y) := −y log σ(f(q)⊤g(d)) − (1 − y) log(1 − σ(f(q)⊤g(d))),
  ℓd(s_{f,g}(q, d), s_{F,G}(q, d)) := −σ(F(q)⊤G(d)) · log σ(f(q)⊤g(d)) − [1 − σ(F(q)⊤G(d))] · log(1 − σ(f(q)⊤g(d))),

where σ is the sigmoid function and s_{F,G} denotes the teacher dual-encoder model with F and G as its query and document encoders, respectively. Assume that

  1. All encoders f, g, F, and G have the same output dimension k ≥ 1.
  2. ∃ K ∈ (0, ∞) such that sup_{q∈Q} max{∥f(q)∥2, ∥F(q)∥2} ≤ K and sup_{d∈D} max{∥g(d)∥2, ∥G(d)∥2} ≤ K.

Then, we have

  E[ ℓ(s_{f,g}(q, d), y) ] − E[ ℓd(s_{f,g}(q, d), s_{F,G}(q, d)) ] ≤ K² E[ |σ(F(q)⊤G(d)) − y| ],      (24)

where the first expectation is R(s_{f,g}), the second is R(s_{f,g}, s_{F,G}), and all expectations are taken with respect to a joint distribution P(q, d, y) over Q × D × {0, 1}.
Proof. Similar to the proof of Proposition C.2, we utilize the fact that

  ℓ(s_{f,g}(q, d), y) = (1 − y) f(q)⊤g(d) + γ(−f(q)⊤g(d)),
  ℓd(s_{f,g}(q, d), s_{F,G}(q, d)) = (1 − σ(F(q)⊤G(d))) f(q)⊤g(d) + γ(−f(q)⊤g(d)),

where γ(v) := log[1 + e^v] is the softplus function. Now,

  E[ ℓ(s_{f,g}(q, d), y) − ℓd(s_{f,g}(q, d), s_{F,G}(q, d)) ]      (25)
    = E[ (1 − y) f(q)⊤g(d) + γ(−f(q)⊤g(d)) ] − E[ (1 − σ(F(q)⊤G(d))) f(q)⊤g(d) + γ(−f(q)⊤g(d)) ]
    = E[ (1 − y − (1 − σ(F(q)⊤G(d)))) f(q)⊤g(d) ]
    ≤ K² E[ |σ(F(q)⊤G(d)) − y| ],      (26)

which completes the proof.
C.2  Uniform deviation bound

Let F denote the class of functions that map queries in Q to their embeddings in R^k via the query encoder. Define G analogously for the document encoder; it consists of functions that map documents in D to their embeddings in R^k. To simplify exposition, we assume that each training example consists of a single relevant or irrelevant document for each query, i.e., L = 1 in Section 3. Let

  FG = { (q, d) ↦ f(q)⊤g(d) | f ∈ F, g ∈ G }.

Given S_n = {(qi, di, yi) : i ∈ [n]}, let N(ϵ, H) denote the ϵ-covering number of a function class H with respect to the L2(P_n) norm, where ∥h∥²_{L2(P_n)} := ∥h∥²_n := (1/n) Σ_{i=1}^n ∥h(qi, di)∥²_2. Depending on the context, the functions in H may map to R or R^d.

Proposition C.4. Let s^t be the scorer of a teacher model and ℓd be a distillation loss function which is L_{ℓd}-Lipschitz in its first argument. Let the embedding functions in F and G output vectors with ℓ2 norms at most K. Define the uniform deviation

  E_n(F, G) = sup_{f∈F, g∈G} | (1/n) Σ_{i∈[n]} ℓd(f(qi)⊤g(di), s^t_{qi,di}) − E_{q,d} ℓd(f(q)⊤g(d), s^t_{q,d}) |.

For any g* ∈ G, we have

  E_{S_n} E_n(F, G) ≤ E_{S_n} (48 K L_{ℓd} / √n) ∫_0^∞ √( log N(u, F) + log N(u, G) ) du,
  E_{S_n} E_n(F, {g*}) ≤ E_{S_n} (48 K L_{ℓd} / √n) ∫_0^∞ √( log N(u, F) ) du.

Proof of Proposition C.4. We first symmetrize the excess risk to obtain a Rademacher complexity, and then bound the Rademacher complexity with Dudley's entropy integral.

For a training set S_n, the empirical Rademacher complexity of a class of functions H that maps Q × D to R is defined by

  Rad_n(H) = E_ε sup_{h∈H} (1/n) Σ_{i=1}^n εi h(qi, di),

where {εi} denote i.i.d. Rademacher random variables taking values in {+1, −1} with equal probability. By symmetrization [Bousquet et al., 2004] and the fact that ℓd is L_{ℓd}-Lipschitz in its first argument, we get

  E_{S_n} E_n(F, G) ≤ 2 L_{ℓd} E_{S_n} Rad_n(FG).

Then, Dudley's entropy integral [see, e.g., Ledoux and Talagrand, 1991] gives

  Rad_n(FG) ≤ (12/√n) ∫_0^∞ √( log N(u, FG) ) du.

From Lemma C.5 with K_Q = K_D = K, for any u > 0,

  N(u, FG) ≤ N(u/2K, F) · N(u/2K, G).

Putting these together,

  E_{S_n} E_n(F, G) ≤ (24 L_{ℓd} / √n) ∫_0^∞ √( log N(u/2K, F) + log N(u/2K, G) ) du.      (27)

Following the same steps with G replaced by {g*}, we get

  E_{S_n} E_n(F, {g*}) ≤ (24 L_{ℓd} / √n) ∫_0^∞ √( log N(u/2K, F) ) du.      (28)

By changing variables in Eq. 27 and Eq. 28, we get the stated bounds.
For f : Q → R^k, g : D → R^k, define fg : Q × D → R by fg(q, d) = f(q)⊤g(d).

Lemma C.5. Let f1, . . . , fN be an ϵ-cover of F and g1, . . . , gM be an ϵ-cover of G in the L2(P_n) norm. Let sup_{f∈F} sup_{q∈Q} ∥f(q)∥2 ≤ K_Q and sup_{g∈G} sup_{d∈D} ∥g(d)∥2 ≤ K_D. Then

  { fi gj | i ∈ [N], j ∈ [M] }

is a (K_Q + K_D)ϵ-cover of FG.

Proof of Lemma C.5. For arbitrary f ∈ F, g ∈ G, there exist f̃ ∈ {f1, . . . , fN} and g̃ ∈ {g1, . . . , gM} such that ∥f − f̃∥_n ≤ ϵ and ∥g − g̃∥_n ≤ ϵ. It is sufficient to show that ∥fg − f̃g̃∥_n ≤ (K_Q + K_D)ϵ. Decomposing using the triangle inequality,

  ∥fg − f̃g̃∥_n = ∥fg − fg̃ + fg̃ − f̃g̃∥_n ≤ ∥fg − fg̃∥_n + ∥fg̃ − f̃g̃∥_n.      (29)

To bound the first term, using the Cauchy–Schwarz inequality, we can write

  (1/n) Σ_{i=1}^n ( f(qi)⊤g(di) − f(qi)⊤g̃(di) )² ≤ sup_{q∈Q} ∥f(q)∥²_2 · (1/n) Σ_{i=1}^n ∥(g − g̃)(di)∥²_2.

Therefore ∥fg − fg̃∥_n ≤ K_Q ∥g − g̃∥_n ≤ K_Q ϵ. Similarly, ∥fg̃ − f̃g̃∥_n ≤ K_D ∥f − f̃∥_n ≤ K_D ϵ. Plugging these in Eq. 29, we get ∥fg − f̃g̃∥_n ≤ (K_Q + K_D)ϵ. This completes the proof.
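As a quick numerical sanity check of Lemma C.5 (our own sketch, not part of the original proof), the following code draws random norm-bounded embeddings and verifies that ∥fg − f̃g̃∥_n never exceeds K_Q ∥g − g̃∥_n + K_D ∥f − f̃∥_n; the array shapes, perturbation size, and helper names are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 256, 32          # number of (q, d) pairs and embedding dimension (assumed)

    def l2_pn_norm(mat):
        """Empirical L2(Pn) norm: sqrt of the average squared row norm."""
        return np.sqrt(np.mean(np.sum(mat ** 2, axis=-1)))

    def clip_rows(mat, radius):
        """Project rows onto the ball of the given radius (enforces the norm bounds)."""
        norms = np.linalg.norm(mat, axis=-1, keepdims=True)
        return mat * np.minimum(1.0, radius / np.maximum(norms, 1e-12))

    K_Q, K_D = 1.0, 2.0
    f  = clip_rows(rng.normal(size=(n, k)), K_Q)                     # f(q_i)
    ft = clip_rows(f + 0.05 * rng.normal(size=(n, k)), K_Q)          # cover element f~
    g  = clip_rows(rng.normal(size=(n, k)), K_D)                     # g(d_i)
    gt = clip_rows(g + 0.05 * rng.normal(size=(n, k)), K_D)          # cover element g~

    lhs = np.sqrt(np.mean((np.sum(f * g, axis=-1) - np.sum(ft * gt, axis=-1)) ** 2))
    rhs = K_Q * l2_pn_norm(g - gt) + K_D * l2_pn_norm(f - ft)
    assert lhs <= rhs + 1e-9, (lhs, rhs)
    print(f"||fg - f~g~||_n = {lhs:.4f} <= {rhs:.4f}")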
D  Evaluation metric details

For NQ, we evaluate models with the full strict recall metric, meaning that the model is required to find a golden passage from the whole set of candidates (21M). Specifically, for k ≥ 1, recall@k or R@k denotes the percentage of questions for which the associated golden passage is among the k passages that receive the highest relevance scores from the model. In addition, we also present results for the relaxed recall metric considered by Karpukhin et al. [2020a], where R@k denotes the percentage of questions for which the corresponding answer string is present in at least one of the k passages with the highest model (relevance) scores.

For both the MSMARCO retrieval and re-ranking tasks, we follow the standard evaluation metrics Mean Reciprocal Rank (MRR)@10 and normalized Discounted Cumulative Gain (nDCG)@10. For the retrieval task, these metrics are computed with respect to the whole set of candidate passages (8.8M). On the other hand, for the re-ranking task, the metrics are computed with respect to the 1000 candidate passages generated by BM25 for each query. We report 100 × MRR@10 and 100 × nDCG@10, as per the convention followed in prior works.
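For concreteness, here is a minimal sketch of how the recall@k and MRR@10 numbers reported in this appendix can be computed from ranked candidate lists; the toy inputs and function names are our own illustration, not code released with the paper.

    def recall_at_k(ranked_ids, golden_ids, k):
        """Percentage of queries whose golden passage appears in the top-k ranked list."""
        hits = sum(1 for ranked, gold in zip(ranked_ids, golden_ids)
                   if gold in ranked[:k])
        return 100.0 * hits / len(ranked_ids)

    def mrr_at_10(ranked_ids, golden_ids):
        """Mean reciprocal rank truncated at position 10, scaled by 100."""
        total = 0.0
        for ranked, gold in zip(ranked_ids, golden_ids):
            for rank, pid in enumerate(ranked[:10], start=1):
                if pid == gold:
                    total += 1.0 / rank
                    break
        return 100.0 * total / len(ranked_ids)

    # Toy example: two queries, candidates already sorted by model score.
    ranked = [["p3", "p7", "p1"], ["p5", "p2", "p9"]]
    golden = ["p7", "p9"]
    print(recall_at_k(ranked, golden, k=1))   # 0.0
    print(recall_at_k(ranked, golden, k=3))   # 100.0
    print(mrr_at_10(ranked, golden))          # 100 * (1/2 + 1/3) / 2 ≈ 41.7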
E  Query generation details

We introduced query generation to encourage geometric matching in local regions, which can aid in transferring more knowledge in confusing neighborhoods. As expected, this further improves the distillation effectiveness on top of the embedding matching in most cases. To focus on the local regions, we generate queries from the observed examples by adding a local perturbation in the data manifold (embedding space). Specifically, we employ an off-the-shelf encoder-decoder model, BART-base [Lewis et al., 2020]. First, we embed an observed query from the corresponding dataset. Second, we add a small perturbation to the query embedding. Finally, we decode the perturbed embedding to generate a new query in the input space. Formally, the generated query x′ given an original query x takes the form x′ = Dec(Enc(x) + ϵ), where Enc(·) and Dec(·) correspond to the encoder and the decoder of the off-the-shelf model, respectively, and ϵ is isotropic Gaussian noise. Furthermore, we also randomly mask the original query tokens with a small probability. We generate two new queries from each observed query and use them as additional data points during our distillation procedure; a sketch of this perturb-and-decode step is given below.

As a comparison, we tried adding the same number of randomly sampled queries instead of the ones generated via the method described above. That did not show any benefit, which justifies the use of our query/question generation method.
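The following is a minimal sketch of the x′ = Dec(Enc(x) + ϵ) step using the public facebook/bart-base checkpoint with a plain greedy decoding loop. It is our own illustration of the idea, not the authors' code: the noise scale, maximum length, and the choice to perturb all encoder hidden states (rather than a single pooled query embedding) are assumptions.

    import torch
    from transformers import BartForConditionalGeneration, BartTokenizer
    from transformers.modeling_outputs import BaseModelOutput

    tok = BartTokenizer.from_pretrained("facebook/bart-base")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base").eval()

    def generate_perturbed_query(query: str, noise_std: float = 0.1, max_len: int = 32) -> str:
        """Encode a query, add isotropic Gaussian noise to the encoder states,
        and greedily decode a new query from the perturbed representation."""
        enc_in = tok(query, return_tensors="pt")
        with torch.no_grad():
            enc_out = model.get_encoder()(
                input_ids=enc_in["input_ids"], attention_mask=enc_in["attention_mask"]
            )
            hidden = enc_out.last_hidden_state
            hidden = hidden + noise_std * torch.randn_like(hidden)   # Enc(x) + eps
            perturbed = BaseModelOutput(last_hidden_state=hidden)
            dec_ids = torch.tensor([[model.config.decoder_start_token_id]])
            for _ in range(max_len):
                logits = model(
                    encoder_outputs=perturbed,
                    attention_mask=enc_in["attention_mask"],
                    decoder_input_ids=dec_ids,
                ).logits
                next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
                dec_ids = torch.cat([dec_ids, next_id], dim=-1)
                if next_id.item() == model.config.eos_token_id:
                    break
        return tok.decode(dec_ids[0], skip_special_tokens=True)

    # Two perturbed variants per observed query, as in the procedure described above.
    print([generate_perturbed_query("what is a cancer doctor called") for _ in range(2)])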
F  Experimental details and additional results

F.1  Additional training details

Optimization. For all of our experiments, we use the Adam weight-decay optimizer with a short warm-up period and a linear decay schedule. We use an initial learning rate of 10⁻⁵ and 2.8 × 10⁻⁵ for the experiments on NQ and MSMARCO, respectively. The batch size is 128.
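A minimal sketch of this optimizer setup in PyTorch with the Hugging Face scheduler helper; the number of warm-up and training steps, the weight-decay value, and the stand-in model are illustrative assumptions, not values reported in the paper.

    import torch
    from transformers import get_linear_schedule_with_warmup

    model = torch.nn.Linear(768, 768)            # stand-in for the student encoder
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.01)

    num_training_steps = 100_000                 # assumed
    num_warmup_steps = 1_000                     # assumed short warm-up
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=num_warmup_steps,
        num_training_steps=num_training_steps,
    )

    for step in range(num_training_steps):
        loss = model(torch.randn(128, 768)).pow(2).mean()   # batch size 128, dummy loss
        loss.backward()
        optimizer.step()
        scheduler.step()                         # linear warm-up, then linear decay
        optimizer.zero_grad()
        if step == 2:                            # keep the sketch short
            break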
F.2  Additional results on NQ

See Table 7 for the performance of various DE models on NQ, as measured by the relaxed recall metric.

Table 7: Relaxed recall performance of various student DE models on NQ dev set, including symmetric DE student model (67.5M or 11.3M transformer for both encoders), and asymmetric DE student model (67.5M or 11.3M transformer as query encoder and document embeddings inherited from the teacher). All distilled students used the same teacher (110M parameter BERT-base models as both encoders), with the performance (in terms of relaxed recall) of Recall@5 = 87.2, Recall@20 = 94.7, Recall@100 = 98.1. Note: the proposed method can achieve 100% of teacher's performance even with 2/3rd size of the query encoder, and 92-97% with even 1/8th size.

Method | Recall@5 (67.5M / 11.3M) | Recall@20 (67.5M / 11.3M) | Recall@100 (67.5M / 11.3M)
Train student directly | 62.5 / 49.7 | 82.5 / 73.0 | 93.7 / 88.2
+ Distill from teacher | 82.7 / 66.1 | 92.9 / 84.0 | 97.3 / 93.1
+ Inherit document embeddings | 84.7 / 73.0 | 93.7 / 85.4 | 97.6 / 93.3
+ Query embedding matching | 87.2 / 77.6 | 95.0 / 88.0 | 97.9 / 94.3
+ Query generation | 87.8 / 80.3 | 94.8 / 89.9 | 98.0 / 95.6
Train student only using embedding matching and inherit doc embeddings | 86.4 / 69.1 | 94.2 / 81.6 | 97.7 / 89.9
+ Query generation | 86.7 / 72.9 | 94.4 / 84.9 | 97.8 / 92.2
F.3  Additional results on MSMARCO

See Table 8 for CE to DE distillation results on the MSMARCO re-ranking task, as measured by the nDCG@10 metric.

Table 8: Performance of CE to DE distillation on the MSMARCO re-ranking task, as measured by the nDCG@10 metric. As for the teacher CE models, we consider two kinds of CE models based on two different pooling mechanisms.

Method | nDCG@10
[CLS]-pooled teacher | 43.0
Dual-pooled teacher | 42.8
Standard distillation from [CLS]-teacher | 38.8
+ Joint matching | 38.0
Standard distillation from Dual-pooling teacher | 39.2
+ Query matching | 39.4
F.4  Additional results on BEIR benchmark

See Table 9 (nDCG@10) and Table 10 (Recall@100) for BEIR benchmark results. All numbers are from the BEIR benchmark paper [Thakur et al., 2021]. As common practice, non-public benchmark sets⁸, {BioASQ, Signal-1M(RT), TREC-NEWS, Robust04}, are removed from the table. Following the original BEIR paper [Thakur et al., 2021] (Table 9 and Appendix G from the original paper), we utilized Capped Recall@100 for the TREC-COVID dataset.

Table 9: In-domain and zero-shot retrieval performance on BEIR benchmark [Thakur et al., 2021], as measured by nDCG@10. All the baseline numbers in the table are taken from Thakur et al. [2021]. We exclude (in-domain) MSMARCO from the average computation as common practice. Columns, left to right: BM25 (lexical); DeepCT, SPARTA, docT5query (sparse); DPR, ANCE, TAS-B, GenQ, SentenceBERT (our teacher), EmbedDistill (ours) (dense).

MS MARCO: 22.8 | 29.6‡ | 35.1‡ | 33.8‡ | 17.7 | 38.8‡ | 40.8‡ | 40.8‡ | 47.1‡ | 46.6‡
TREC-COVID: 65.6 | 40.6 | 53.8 | 71.3 | 33.2 | 65.4 | 48.1 | 61.9 | 75.4 | 72.3
NFCorpus: 32.5 | 28.3 | 30.1 | 32.8 | 18.9 | 23.7 | 31.9 | 31.9 | 31.0 | 30.7
NQ: 32.9 | 18.8 | 39.8 | 39.9 | 47.4‡ | 44.6 | 46.3 | 35.8 | 51.5 | 50.8
HotpotQA: 60.3 | 50.3 | 49.2 | 58.0 | 39.1 | 45.6 | 58.4 | 53.4 | 58.0 | 56.0
FiQA-2018: 23.6 | 19.1 | 19.8 | 29.1 | 11.2 | 29.5 | 30.0 | 30.8 | 31.8 | 29.5
ArguAna: 31.5 | 30.9 | 27.9 | 34.9 | 17.5 | 41.5 | 42.9 | 49.3 | 38.5 | 34.9
Touché-2020: 36.7 | 15.6 | 17.5 | 34.7 | 13.1 | 24.0 | 16.2 | 18.2 | 22.9 | 24.7
CQADupStack: 29.9 | 26.8 | 25.7 | 32.5 | 15.3 | 29.6 | 31.4 | 34.7 | 33.5 | 30.6
Quora: 78.9 | 69.1 | 63.0 | 80.2 | 24.8 | 85.2 | 83.5 | 83.0 | 84.2 | 81.4
DBPedia: 31.3 | 17.7 | 31.4 | 33.1 | 26.3 | 28.1 | 38.4 | 32.8 | 37.7 | 35.9
SCIDOCS: 15.8 | 12.4 | 12.6 | 16.2 | 07.7 | 12.2 | 14.9 | 14.3 | 14.8 | 14.4
FEVER: 75.3 | 35.3 | 59.6 | 71.4 | 56.2 | 66.9 | 70.0 | 66.9 | 76.7 | 76.9
Climate-FEVER: 21.3 | 06.6 | 08.2 | 20.1 | 14.8 | 19.8 | 22.8 | 17.5 | 23.5 | 22.5
SciFact: 66.5 | 63.0 | 58.2 | 67.5 | 31.8 | 50.7 | 64.3 | 64.4 | 59.8 | 55.5
AVG (w/o MSMARCO): 43.0 | 31.0 | 35.5 | 44.4 | 25.5 | 40.5 | 42.8 | 42.5 | 45.7 | 44.0

Table 10: In-domain and zero-shot retrieval performance on BEIR benchmark [Thakur et al., 2021], as measured by Recall@100. All the baseline numbers in the table are taken from Thakur et al. [2021]. ‡ indicates in-domain retrieval performance. ∗ indicates capped recall following the original benchmark setup. We exclude (in-domain) MSMARCO from the average computation as common practice. Columns as in Table 9.

MS MARCO: 65.8 | 75.2‡ | 79.3‡ | 81.9‡ | 55.2 | 85.2‡ | 88.4‡ | 88.4‡ | 91.7‡ | 90.6‡
TREC-COVID: 49.8∗ | 34.7∗ | 40.9∗ | 54.1∗ | 21.2∗ | 45.7∗ | 38.7∗ | 45.6∗ | 54.1∗ | 48.8∗
NFCorpus: 25.0 | 23.5 | 24.3 | 25.3 | 20.8 | 23.2 | 28.0 | 28.0 | 27.7 | 26.7
NQ: 76.0 | 63.6 | 78.7 | 83.2 | 88.0‡ | 83.6 | 90.3 | 86.2 | 91.1 | 89.9
HotpotQA: 74.0 | 73.1 | 65.1 | 70.9 | 59.1 | 57.8 | 72.8 | 67.3 | 69.7 | 68.3
FiQA-2018: 53.9 | 48.9 | 44.6 | 59.8 | 34.2 | 58.1 | 59.3 | 61.8 | 62.0 | 60.1
ArguAna: 94.2 | 93.2 | 89.3 | 97.2 | 75.1 | 93.7 | 94.2 | 97.8 | 89.2 | 87.8
Touché-2020: 53.8 | 40.6 | 38.1 | 55.7 | 30.1 | 45.8 | 43.1 | 45.1 | 45.3 | 45.5
CQADupStack: 60.6 | 54.5 | 52.1 | 63.8 | 40.3 | 57.9 | 62.2 | 65.4 | 63.9 | 61.3
Quora: 97.3 | 95.4 | 89.6 | 98.2 | 47.0 | 98.7 | 98.6 | 98.8 | 98.5 | 98.1
DBPedia: 39.8 | 37.2 | 41.1 | 36.5 | 34.9 | 31.9 | 49.9 | 43.1 | 46.0 | 42.6
SCIDOCS: 35.6 | 31.4 | 29.7 | 36.0 | 21.9 | 26.9 | 33.5 | 33.2 | 32.5 | 31.5
FEVER: 93.1 | 73.5 | 84.3 | 91.6 | 84.0 | 90.0 | 93.7 | 92.8 | 93.9 | 93.8
Climate-FEVER: 43.6 | 23.2 | 22.7 | 42.7 | 39.0 | 44.5 | 53.4 | 45.0 | 49.3 | 47.6
SciFact: 90.8 | 89.3 | 86.3 | 91.4 | 72.7 | 81.6 | 89.1 | 89.3 | 88.9 | 87.2
AVG (w/o MSMARCO): 63.4 | 55.9 | 56.2 | 64.7 | 47.7 | 60.0 | 64.8 | 64.2 | 65.1 | 63.5

⁸ https://github.com/beir-cellar/beir
F.5  Additional results with single-stage trained teachers

Here we evaluate EmbedDistill with simple single-stage trained teachers instead of teachers trained in complex multi-stage frameworks, in order to test the generalizability of the method.

Similar to Table 1, we conducted an experiment on top of a single-stage trained teacher based on RoBERTa-base instead of AR2 [Zhang et al., 2022] in the main text. We also changed the student to be based on DistilRoBERTa or RoBERTa-mini accordingly, for simplicity of using the same tokenizer.

Table 11 demonstrates that EmbedDistill provides a significant boost in performance on top of standard distillation techniques, similar to what we observed in Table 1.

Table 11: Full recall performance of various student DE models on the NQ dev set, including the symmetric DE student model and asymmetric DE student models. All students used the same in-house teacher (124M parameter RoBERTa-base models as both encoders), with full Recall@5 = 64.6, Recall@20 = 81.7, and Recall@100 = 91.5.

Method | 6-Layer (82M): R@5 / R@20 / R@100 | 4-Layer (16M): R@5 / R@20 / R@100
Train student directly | 41.9 / 64.5 / 82.0 | 39.5 / 59.9 / 76.3
+ Distill from teacher | 48.3 / 67.2 / 80.9 | 44.9 / 61.1 / 74.8
+ Inherit doc embeddings | 56.9 / 74.3 / 85.4 | 47.2 / 64.0 / 77.0
+ Query embedding matching | 61.8 / 78.7 / 89.0 | 56.7 / 74.6 / 85.9
+ Query generation | 61.7 / 79.4 / 89.6 | 57.1 / 75.2 / 86.7
Train student using only embedding matching and inherit doc embeddings | 63.7 / 80.3 / 90.3 | 57.9 / 74.6 / 85.7
+ Query generation | 64.1 / 80.5 / 90.4 | 58.9 / 76.0 / 86.6

Furthermore, we also consider an in-house trained teacher (RoBERTa-base) for the MSMARCO re-ranking task. Table 12 demonstrates a pattern similar to Table 3, providing evidence of the generalizability of EmbedDistill.

Table 12: Re-ranking performance of various DE models on the MSMARCO dev set. We utilize a RoBERTa-base in-house trained teacher achieving MRR@10 of 33.1 and nDCG@10 of 38.8. The table shows the performance of the symmetric DE student model and asymmetric DE student models.

Method | MRR@10 (82M / 16M) | nDCG@10 (82M / 16M)
Train student directly | 29.7 / 26.3 | 35.2 / 31.4
+ Distill from teacher | 31.6 / 28.4 | 37.2 / 33.5
+ Inherit doc embeddings | 32.4 / 30.2 | 38.0 / 35.8
+ Query embedding matching | 32.8 / 31.9 | 38.6 / 37.6
+ Query generation | 33.0 / 32.0 | 38.8 / 37.7
Train student only using embedding matching and inherit doc embeddings | 32.7 / 31.8 | 38.5 / 37.5
+ Query generation | 33.0 / 31.8 | 38.9 / 37.5

These results showcase that our method brings a performance boost orthogonal to how the teacher was trained, whether single-stage or multi-stage.
G  Embedding analysis

G.1  DE to DE distillation

Traditional score-matching-based distillation might not result in transfer of the relative geometry from teacher to student. To assess this, we look at the discrepancy between the teacher and student query embeddings for all (q, q′) pairs: ∥emb^t_q − emb^t_{q′}∥ − ∥emb^s_q − emb^s_{q′}∥. Note that the analysis is based on NQ, and we focus on the teacher and student DE models based on BERT-base and DistilBERT, respectively. As evident from Fig. 4, the embedding matching loss significantly reduces this discrepancy.

[Figure 4 shows two overlaid density histograms of this discrepancy (x-axis: discrepancy in distance from teacher, roughly −1.0 to 1.0; y-axis: density, 0 to 15), one for standard distillation and one for embedding matching.]

Figure 4: Histogram of teacher-student distance discrepancy in queries.
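A small sketch of how such a discrepancy histogram can be computed from precomputed teacher and student query embeddings; the random inputs below stand in for actual NQ query embeddings and are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    num_queries, dim = 500, 64
    teacher = rng.normal(size=(num_queries, dim))                      # emb^t_q (stand-in)
    student = teacher + 0.1 * rng.normal(size=(num_queries, dim))      # emb^s_q (stand-in)

    def pairwise_distances(emb):
        """Euclidean distance matrix between all query embeddings."""
        sq = np.sum(emb ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * emb @ emb.T
        return np.sqrt(np.maximum(d2, 0.0))

    # Discrepancy ||emb^t_q - emb^t_q'|| - ||emb^s_q - emb^s_q'|| over all pairs q != q'.
    disc = pairwise_distances(teacher) - pairwise_distances(student)
    iu = np.triu_indices(num_queries, k=1)
    counts, edges = np.histogram(disc[iu], bins=50, range=(-1.0, 1.0), density=True)
    print(counts.max(), edges[:3])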
G.2  CE to DE distillation

We qualitatively look at embeddings from the CE model in Fig. 5. The embedding emb^t_{q,d} from the [CLS]-pooled CE model does not capture semantic similarity between query and document, as it is solely trained to classify whether the query-document pair is relevant or not. In contrast, the (proxy) query embeddings emb^t_{q←(q,d)} from our Dual-pooled CE model with reconstruction loss do not degenerate, and they group the same query together whether conditioned on a positive or a negative document. Furthermore, other related queries are closer than unrelated queries. Such an informative embedding space aids distillation to a DE model via embedding matching.

[Figure 5 shows pairwise distance matrices for the [CLS]-pooled CE model and the Dual-pooled CE model over 72 query-document pairs built from 6 MSMARCO queries (q1: "macy credit card phone number", q2: "phone number to experian credit bureau", q3: "colloids chemistry definition", q4: "is phosphorus diatomic", q5: "what is a cancer doctor called", q6: "physiological disease examples") with their positive and negative passages.]

Figure 5: Illustration of the geometry expressed by the [CLS]-pooled CE and our Dual-pooled CE model on 6 queries from MSMARCO and 12 passages, based on the pairwise distance matrix across these 72 pairs. [CLS]-pooled CE embeddings degenerate, as all positive and negative query-document pairs almost collapse to two points and fail to capture semantic information. In contrast, our Dual-pooled CE model leads to a much richer representation that can express semantic information.
JNFLT4oBgHgl3EQfJy82/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
OdFAT4oBgHgl3EQfyx46/content/tmp_files/2301.08694v1.pdf.txt ADDED
@@ -0,0 +1,1636 @@
arXiv:2301.08694v1 [math.PR] 20 Jan 2023

Almost everywhere continuity of conditional expectations

Alberto Alonso∗,a and Fernando Brambila-Paz†,a
a Department of Mathematics, Faculty of Science - UNAM, Mexico
∗ Email address: alonsoalber@gmail.com
† Email address: fernandobrambila@gmail.com

Abstract. A necessary and sufficient condition on a sequence {An}n∈N of σ-subalgebras that assures convergence almost everywhere of conditional expectations is given.

Keywords:

1.  Introduction

Let A be a σ-algebra of a set X with a probability measure µ. Given a sequence {An}n∈N of σ-subalgebras of A, it is a natural problem to establish conditions under which the conditional expectations converge almost everywhere.

In [] we analyzed the case of a sequence converging in Lp and provided necessary and sufficient conditions for that to happen. We defined two σ-subalgebras Aµ and A⊥ and proved that we have convergence in Lp if and only if Aµ = A⊥. We also established a chain of σ-subalgebras

  A ⊂ Aµ ⊂ A⊥ ⊂ Ā,

where A and Ā are the inferior and the superior limit of the σ-subalgebras {An}n∈N, that is,

  A = ⋁_{m=1}^∞ ⋂_{n=m}^∞ An   and   Ā = ⋂_{m=1}^∞ ⋁_{n=m}^∞ An.

As a corollary we had convergence in Lp in the special case A = Ā, which was a result previously obtained by Fetter []. In that paper she pondered whether, as in the case of monotone convergence of σ-algebras, we also have convergence a.e. However, in [] a negative answer was provided by establishing a counterexample.

In this paper we will analyze a chain of families of sets

  A ⊂ Aw.a.e. ⊂ Aµ ⊂ A⊥ ⊂ A⊥a.e. ⊂ Ā.

We will show that we have a.e. convergence if and only if Aw.a.e. = A⊥a.e., provided that these two sets are σ-subalgebras.
2.  Previous works

As usual, we will use the notation E(f|A) for the conditional expectation of f given the σ-algebra A. Some well-known results for a.e.-convergence and Lp-convergence (1 ≤ p < ∞) are:

Theorem 2.1 (Martingales). If {An}n∈N is a monotone increasing sequence of σ-subalgebras of A, that is, An ⊂ An+1 for any n ∈ N, then

  E(f|An) → E( f | ⋁_{n=1}^∞ An )   a.e. and in Lp,

for every f ∈ Lp(A), where ⋁_{n=1}^∞ An stands for the minimum σ-algebra that contains ⋃_{n=1}^∞ An. If {An}n∈N is monotone decreasing, that is, An ⊃ An+1, then E(f|An) → E( f | ⋂_{n=1}^∞ An ) a.e. and in Lp.

In [1], Boylan introduced a Hausdorff metric on the space of σ-algebras. It gives us a relationship between Cauchy sequences of σ-subalgebras and Lp-convergence of conditional expectations.

Theorem 2.2 (Boylan, Equiconvergence). Let {An}n∈N be a Cauchy sequence in the space of σ-algebras with the Hausdorff metric, that is,

  d(An, Am) = sup_{A∈An} ( inf_{B∈Am} µ(A △ B) ) + sup_{B∈Am} ( inf_{A∈An} µ(A △ B) ),

where A △ B = (A \ B) ∪ (B \ A). Then there is a subalgebra D such that

  lim_{n→∞} d(An, D) = 0   and   E(f|An) → E(f|D) in Lp,

for every f ∈ Lp(A), 1 ≤ p < ∞ (see [2,3]).

Another approach was given by Fetter []. She proved that if the lim sup of a sequence of σ-algebras coincides with the lim inf, then we have convergence in Lp. Indeed:

Theorem 2.3 (Fetter). If {An}n∈N is such that A = Ā, where

  A = ⋁_{m=1}^∞ ⋂_{n=m}^∞ An   and   Ā = ⋂_{m=1}^∞ ⋁_{n=m}^∞ An,

then

  E(f|An) → E(f|Ā) in Lp,

for every f ∈ Lp(A), 1 ≤ p < ∞ (see [1,4]).

Since the condition of the above theorem is fulfilled when we have monotone sequences of σ-algebras, Fetter's result implies that of the martingale theorem in the case of Lp convergence. However, we point out that in [4] it was proved that the condition A = Ā does not imply convergence almost everywhere.

Given a sequence {An}n∈N of σ-algebras, two relevant σ-algebras were defined in []:

  Aµ = { A ∈ A : ∃ An ∈ An, lim_{n→∞} µ(An △ A) = 0 },

and, with

  W = { g ∈ L2(A) : ∃ Ank ∈ Ank with χAnk → g weakly },

A⊥ was defined as the minimum complete σ-algebra such that g is A⊥-measurable for all g ∈ W. The importance of these σ-algebras is clear due to the following theorem:

Theorem 2.4 (Alonso-Brambila). Let {An}n∈N be such that there is a σ-subalgebra A∞ with the property that An →µ A∞ and An →⊥ A∞. Then, and only then,

  E(f|An) → E(f|A∞) in Lp(A),

for every f ∈ Lp(A), 1 ≤ p < ∞.

These σ-algebras satisfy the relationship

  A ⊆ Aµ ⊆ A⊥ ⊆ Ā,

so that if A = Ā, Fetter's theorem becomes an immediate corollary.

Finally, we point out that none of the theorems but Theorem 2.1 deal with the problem of convergence a.e.
3.  A sigma-subalgebra

In [5] the concept was introduced of a sequence of σ-subalgebras {An} µ-approaching a σ-subalgebra D: for each D ∈ D there are An ∈ An such that µ(An △ D) → 0. It was established that in such a case, and only in that case, we have for f ∈ Lp(D) (1 ≤ p < ∞) that

  E(f|An) → E(f|D) = f in Lp.

It is clear that in order to have convergence a.e. we need to introduce a stronger sense in which sets in {An} approach those of D.

As usual, in the following Ac will stand for X \ A, χA for the characteristic function of a set A, A △ B for the symmetric difference of the sets A and B, and A = B a.e. for µ(A △ B) = 0.

We begin by first establishing the following lemma.

Lemma 3.1. Let (X, A, µ) be a probability space with σ-algebra A and measure µ. If {An}n∈N is a sequence of elements of A and A ∈ A, the following statements are equivalent:

  i) χAn → χA a.e.

  ii) A = ⋃_{N=1}^∞ ⋂_{n≥N} An = ⋂_{N=1}^∞ ⋃_{n≥N} An a.e.

  iii) lim_{N→∞} µ( ⋃_{n>N} (An △ A) ) = 0.

Proof. Notice that χAn → χA a.e. implies that, for all x ∈ X \ M, where M is a set of measure zero, there is an Nx ∈ N such that if n > Nx,

  |χAn(x) − χA(x)| < 1/2.      (1)

To prove i) ⇒ ii), we first look at the elements of A \ M. Since

  |χAn(x) − χA(x)| = |χ_{Acn}(x) − χ_{Ac}(x)|,      (2)

(1) and (2) tell us that for n > Nx and x ∈ A \ M, χ_{Acn}(x) < 1/2, and so x ∈ An. Therefore

  A \ M ⊂ ⋃_{N=1}^∞ ⋂_{n>N} An.

Using the same argument for Ac we get

  Ac \ M ⊂ ⋃_{N=1}^∞ ⋂_{n>N} Acn,

and thus

  A ∪ M ⊃ ⋂_{N=1}^∞ ⋃_{n>N} An.

Since ⋃_{N=1}^∞ ⋂_{n>N} An ⊂ ⋂_{N=1}^∞ ⋃_{n>N} An, we have

  ( ⋂_{N=1}^∞ ⋃_{n>N} An ) ∩ Mc ⊂ A ∩ Mc ⊂ ( ⋃_{N=1}^∞ ⋂_{n>N} An ) ∩ Mc ⊂ ( ⋂_{N=1}^∞ ⋃_{n>N} An ) ∩ Mc,

so

  ( ⋃_{N=1}^∞ ⋂_{n>N} An ) ∩ Mc = ( ⋂_{N=1}^∞ ⋃_{n>N} An ) ∩ Mc = A ∩ Mc.

Therefore i) ⇒ ii).

We now prove that iii) ⇒ i). Let

  M = ⋂_{N=1}^∞ ⋃_{n>N} (An △ A).

Then, by hypothesis, µ(M) = 0. Now, as

  Mc = X \ M = ⋃_{N=1}^∞ ⋂_{n>N} (An △ A)c,

if x ∈ X \ M there is an N ∈ N such that x ∈ ⋂_{n>N} (An △ A)c, and hence x ∈ (An △ A)c for all n > N. That is,

  0 = χ_{An△A}(x) = |χA(x) − χAn(x)| < ε   for n > N.

Finally, to prove that ii) implies iii), we notice that

  µ( ⋃_{n>N} (An △ A) ) = µ( ( ⋃_{n>N} An ) ∩ Ac ) + µ( ( ⋃_{n>N} Acn ) ∩ A ).

Since by hypothesis

  0 = µ( A △ ⋃_{N=1}^∞ ⋂_{n>N} An ) ≥ µ( A \ ⋃_{N=1}^∞ ⋂_{n>N} An ) = µ( A ∩ ⋂_{N=1}^∞ ⋃_{n>N} Acn ) = lim_{N→∞} µ( A ∩ ⋃_{n>N} Acn ),

and

  0 = µ( A △ ⋂_{N=1}^∞ ⋃_{n>N} An ) ≥ µ( ( ⋂_{N=1}^∞ ⋃_{n>N} An ) \ A ) = lim_{N→∞} µ( ( ⋃_{n>N} An ) ∩ Ac ),

we get

  lim_{N→∞} µ( ⋃_{n>N} (An △ A) ) = 0.  ∎

Given a sequence {An}n∈N of σ-subalgebras of A we define the family of sets F as

  F = { A ∈ A : A = ⋃_{N=1}^∞ ⋂_{n>N} An = ⋂_{N=1}^∞ ⋃_{n>N} An a.e. for some sequence {An} with An ∈ An }.

We will prove that F is a σ-algebra. To do this we will need the following lemma.

Lemma 3.2. If for any r > 0 there is a sequence {Arn : Arn ∈ An} and Nr ∈ N such that

  µ( ⋃_{n>Nr} (Arn △ B) ) < r,

then B ∈ F.

Proof. Let Sm = 1/2^m for m ∈ N and let {Qm}m∈N be a strictly increasing sequence such that Qm ≥ N_{Sm}. As Qm ≥ N_{Sm} we have

  µ( ⋃_{n>Qm} (A^{Sm}_n △ B) ) ≤ µ( ⋃_{n>N_{Sm}} (A^{Sm}_n △ B) ) < Sm.

For n > Q1 define the sequence {An} by

  An = A^{Sk}_n   if Qk < n ≤ Qk+1;

since A^{Sk}_n ∈ An, so is An. We have

  µ( ⋃_{n>Qm} (An △ B) ) = µ( ⋃_{k=m}^∞ ⋃_{Qk<n≤Qk+1} (A^{Sk}_n △ B) ) ≤ Σ_{k=m}^∞ µ( ⋃_{Qk<n≤Qk+1} (A^{Sk}_n △ B) )
    ≤ Σ_{k=m}^∞ µ( ⋃_{n>Qk} (A^{Sk}_n △ B) ) < Σ_{k=m}^∞ Sk = 1/2^{m−1},

and therefore

  lim_{N→∞} µ( ⋃_{n>N} (An △ B) ) = 0.

The proof follows from Lemma 3.1.  ∎

We can now prove that F is a σ-algebra.

Proposition 3.3. F is a σ-subalgebra of A.

Proof. It is clear that ∅ and X are elements of F, since we can take An = ∅ for all n ∈ N or An = X for all n ∈ N.

If A ∈ F, by Lemma 3.1 there are {An ∈ An} such that χAn → χA a.e. Since Acn ∈ An and

  χ_{Acn} = 1 − χAn → 1 − χA = χ_{Ac} a.e.,

we have that Ac ∈ F.

Let A, B ∈ F. By Lemma 3.1 there are two sequences {An}, {Bn}, with An, Bn ∈ An for all n ∈ N, such that χAn → χA a.e. and χBn → χB a.e. Since An ∩ Bn ∈ An and χ_{An∩Bn} = χAn χBn → χA χB = χ_{A∩B} a.e., we have that A ∩ B ∈ F. Thus F is an algebra.

To show that it is a σ-algebra, let B = ⋃_{k=1}^∞ Dk with Dk ∈ F. Since F is an algebra, we can take the sets {Dk} to be disjoint.

Define EM = ⋃_{k=1}^M Dk and let r > 0. Since the sets Dk are disjoint, there is an M1 ∈ N such that

  µ( ⋃_{k>M1} Dk ) < r/2.

On the other hand, since E_{M1} ∈ F, there is a sequence {An ∈ An} such that

  lim_{N→∞} µ( ⋃_{n>N} (An △ E_{M1}) ) = 0.

Therefore, there is an N1 ∈ N such that µ( ⋃_{n>N1} (An △ E_{M1}) ) < r/2.

As E_{M1} ⊂ B, we have An \ B ⊂ An \ E_{M1}. So

  µ( ⋃_{n>N1} (An \ B) ) ≤ µ( ⋃_{n>N1} (An \ E_{M1}) ) ≤ µ( ⋃_{n>N1} (An △ E_{M1}) ) < r/2.

Also, as B is the union of the disjoint sets E_{M1} and ⋃_{k>M1} Dk, we have

  µ( ⋃_{n>N1} (B \ An) ) = µ( ⋃_{n>N1} (E_{M1} \ An) ) + µ( ⋃_{n>N1} ( ( ⋃_{k>M1} Dk ) \ An ) )
    ≤ µ( ⋃_{n>N1} (E_{M1} △ An) ) + µ( ⋃_{k>M1} Dk ) < r/2 + r/2.

Therefore µ( ⋃_{n>N1} (An △ B) ) < r. So, by Lemma 3.2, B is in F.  ∎

Given a sequence of σ-subalgebras of A, in [5] we found for the σ-algebra Aµ defined by

  Aµ = { A ∈ A : ∃ An ∈ An, lim_{n→∞} µ(An △ A) = 0 }

the relationships

  ⋁_{m=1}^∞ ⋂_{n=m}^∞ An ≡ A ⊂ Aµ ⊂ Ā ≡ ⋂_{m=1}^∞ ⋁_{n=m}^∞ An.

It is clear from its definition that F ⊂ Aµ. Now, let A ∈ ⋂_{n=N}^∞ An; take An = A for n > N, then trivially χAn → χA a.e. and thus A ∈ F. Since F is a σ-algebra, we have

  F ⊃ ⋁_{N=1}^∞ ⋂_{n=N}^∞ An.

Therefore

  A ⊂ F ⊂ Aµ ⊂ Ā.

What we have proven is that, given a sequence of σ-algebras {An}, if A is an F-measurable set, there are An-measurable functions fn such that fn → χA a.e. However, that does not imply that E(χA|An) → χA a.e., even when the conditional expectations do so in Lp (1 ≤ p < ∞).

In the example shown in [4] we have the property that A = Ā and hence A = F, and a Borel set for which the conditional expectations do not converge a.e. to its characteristic function. Let us show an easier example for which the conditional expectations of a characteristic function in F do not converge a.e. but do so in Lp (1 ≤ p < ∞).

Let X = [0,1] and A the Lebesgue measurable sets. We are going to define a sequence {An} of sets that lie in [1/2, 1) and such that χAn → χ_{[1/2,1)} a.e.; {Jn} will be a sequence of sets that lie in [1/2, 1) \ An but with measure half its size. Specifically,

  An = [1/2, 1 − 1/2^n),   Jn = [1 − 1/2^{n+1}, 1),   n ∈ N.

On [0, 1/2) define a sequence of intervals that go back and forth over the interval, in each sweep diminishing their size:

  I_{n,k} = [ k/(4·2^n), (k+1)/(4·2^n) )   with 0 ≤ k < 2·2^n, n ∈ N.

We have that for each n ∈ N,

  ⋃_{k=0}^{2·2^n − 1} I_{n,k} = [0, 1/2).

Let B_{n,k} = I_{n,k} ∪ Jn and C_{n,k} = (An ∪ B_{n,k})^c = Acn ∩ Bc_{n,k}. Consider the sequence of σ-subalgebras

  A_{n,k} = σ{ ∅, X, An, B_{n,k}, C_{n,k} },

the σ-algebra generated by the partition {An, B_{n,k}, C_{n,k}}. Since

  E( χ_{[1/2,1)} | A_{n,k} ) = ( ⟨χ_{[1/2,1)}, χAn⟩ / µ(An) ) χAn + ( ⟨χ_{[1/2,1)}, χ_{B_{n,k}}⟩ / µ(B_{n,k}) ) χ_{B_{n,k}} + ( ⟨χ_{[1/2,1)}, χ_{C_{n,k}}⟩ / µ(C_{n,k}) ) χ_{C_{n,k}},

we have

  E( χ_{[1/2,1)} | A_{n,k} ) = χAn + ( (1/2^{n+1}) / ( (3/2)(1/2^{n+1}) ) ) χ_{B_{n,k}} + ( (1/2^{n+1}) / ( 1/2 + 1/2^{n+2} ) ) χ_{C_{n,k}}
    = χAn + (2/3) χ_{B_{n,k}} + ( 2 / (1 + 2^{n+1}) ) χ_{C_{n,k}}
    = χAn + (2/3) χ_{I_{n,k}} + (2/3) χ_{Jn} + ( 2 / (1 + 2^{n+1}) ) χ_{C_{n,k}}.

Since χAn → χ_{[1/2,1)} a.e., (2/3) χ_{Jn} → 0 a.e. and ( 2 / (1 + 2^{n+1}) ) χ_{C_{n,k}} → 0 a.e., but (2/3) χ_{I_{n,k}} does not converge a.e., we have that [1/2, 1) ∈ F but E( χ_{[1/2,1)} | A_{n,k} ) ↛ χ_{[1/2,1)} a.e.
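The following sketch (ours, not part of the paper) evaluates the conditional expectation above on a grid and confirms numerically that its L1 error with respect to χ_{[1/2,1)} tends to 0 while the pointwise error keeps returning to 2/3 on the moving intervals I_{n,k}; the grid resolution and the enumeration order of (n, k) are implementation choices.

    import numpy as np

    def cond_exp_error(n, k, x):
        """E(chi_[1/2,1) | A_{n,k})(x) - chi_[1/2,1)(x) evaluated on a grid x of [0,1)."""
        target = (x >= 0.5).astype(float)
        A = (x >= 0.5) & (x < 1 - 0.5 ** n)
        I = (x >= k / (4 * 2 ** n)) & (x < (k + 1) / (4 * 2 ** n))
        J = x >= 1 - 0.5 ** (n + 1)
        B = I | J
        C = ~(A | B)
        est = A.astype(float) + (2.0 / 3.0) * B + (2.0 / (1 + 2 ** (n + 1))) * C
        return est - target

    x = np.linspace(0.0, 1.0, 200_001)[:-1]   # grid on [0, 1)
    for n in range(1, 6):
        errs = [cond_exp_error(n, k, x) for k in range(2 * 2 ** n)]
        l1 = max(np.mean(np.abs(e)) for e in errs)                       # tends to 0
        sup_on_half = max(np.max(np.abs(e[x < 0.5])) for e in errs)      # stays near 2/3
        print(n, round(l1, 4), round(sup_on_half, 4))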
4.  Necessary and sufficient conditions for almost everywhere convergence

4.1  On almost everywhere convergence of characteristic functions

Let B be a σ-subalgebra of A.

Definition 4.1. Define the seminorm ∥ · ∥B for f ∈ L∞(dµ) as

  ∥f∥B = ∥E(f|B)∥∞.

In the following we are going to use the notation ∥A∥B = ∥χA∥B for any set A ∈ A. We have the following property.

Lemma 4.2. Let A be a measurable set in A and χA its characteristic function. Then

  ∥A∥B = ∥χA∥B = sup{ µ(A ∩ B) / µ(B) : B ∈ B, µ(B) > 0 }.

Proof. Let B ∈ B. We have

  µ(A ∩ B) = ∫ χ_{A∩B} dµ = ∫ χA χB dµ = ∫ E(χA|B) χB dµ ≤ ∥E(χA|B)∥∞ µ(B).

Therefore, for any B ∈ B with µ(B) > 0,

  µ(A ∩ B) / µ(B) ≤ ∥E(χA|B)∥∞.

Since the case ∥E(χA|B)∥∞ = 0 is trivial, take any ε > 0 such that ∥E(χA|B)∥∞ − ε > 0. By definition the set

  C = { x : E(χA|B)(x) > ∥E(χA|B)∥∞ − ε }

is B-measurable, has positive measure by the definition of the essential supremum, and satisfies

  ∫_C E(χA|B) dµ ≥ ( ∥E(χA|B)∥∞ − ε ) µ(C).

As

  ∫_C E(χA|B) dµ = ∫ χC E(χA|B) dµ = ∫ χC χA dµ = µ(C ∩ A),

we then have

  µ(C ∩ A) / µ(C) ≥ ∥E(χA|B)∥∞ − ε.  ∎

Definition 4.3. We say that A ∈ A is uniformly covered by the sequence of σ-subalgebras {An}n∈N if there is a sequence {An ∈ An}n∈N such that

  i) A = ⋃_{N=1}^∞ ⋂_{n≥N} An = ⋂_{N=1}^∞ ⋃_{n≥N} An.

  ii) ∥χ_{A\An}∥_{An} → 0 as n → ∞.

The next lemma shows that we can actually relax condition ii) a little.

Lemma 4.4. A is uniformly covered by {An}n∈N if and only if for any r > 0 there are a sequence {Arn}n∈N, Arn ∈ An, and Mr ∈ N such that

  i) A = ⋃_{N=1}^∞ ⋂_{n>N} Arn = ⋂_{N=1}^∞ ⋃_{n>N} Arn.

  ii) ∥A \ Arn∥_{An} < r if n > Mr.

Proof. By Lemma 3.1, condition i) means that

  lim_{N→∞} µ( ⋃_{n>N} (A △ Arn) ) = 0.

Thus, for r = 1, let N1 > M1 be such that

  µ( ⋃_{n>N1} (A △ A¹n) ) < 1.

In general, let rk = 2^{−k}, let Ñk be such that

  µ( ⋃_{n>Ñk} (A △ A^{rk}_n) ) < rk,

and let N′k be such that

  ∥A \ A^{rk}_n∥_{An} < rk   for n > N′k.

It is clear that we can define a strictly increasing sequence Nk such that Nk > max(Ñk, N′k).

Define An ∈ An as A^{rk}_n if Nk ≤ n < Nk+1. Then

  ∥A \ An∥_{An} = ∥A \ A^{rk}_n∥_{An} < 1/2^k,

and we also have

  µ( ⋃_{n≥Nk′} (A △ An) ) = µ( ⋃_{k=k′}^∞ ⋃_{Nk≤n<Nk+1} (A △ An) ) ≤ Σ_{k=k′}^∞ µ( ⋃_{Nk≤n<Nk+1} (A △ A^{rk}_n) ) < Σ_{k=k′}^∞ 1/2^k = 1/2^{k′−1}.

Therefore, since µ( ⋃_{n>N} (A △ An) ) is monotone in N,

  lim_{N→∞} µ( ⋃_{n≥N} (A △ An) ) = 0,      (3)

and thus finally lim_{n→∞} ∥A \ An∥_{An} = 0.  ∎

We are going to say that a sequence of sets {An ∈ An} uniformly covers a set A ∈ A if conditions i) and ii) of Definition 4.3 are satisfied. A couple of results regarding uniform covering are:

Lemma 4.5. If {An ∈ An} uniformly covers a set A ∈ A and {A′n ∈ An} is such that An ⊂ A′n for all n ∈ N and χ_{A′n} → χA a.e., then {A′n} uniformly covers A.

Proof. The proof is immediate, since for 0 ≤ f ≤ g and a σ-subalgebra B we have E(f|B) ≤ E(g|B); so, as A \ A′n ⊂ A \ An,

  E(χ_{A\A′n}|An) ≤ E(χ_{A\An}|An).  ∎

Lemma 4.6. If A and B are sets in A uniformly covered by {An}n∈N, then so are A ∩ B and A ∪ B.

Proof. Let An ∈ An and Bn ∈ An be sequences of sets that uniformly cover A and B, respectively. Consider first the sequence Cn = An ∩ Bn ∈ An. Since by property i) of the definition of uniform covering we have χAn → χA a.e. and χBn → χB a.e.,

  χ_{An∩Bn} = χAn χBn → χA χB = χ_{A∩B} a.e.

We also have

  (A ∩ B) \ (An ∩ Bn) = A ∩ B ∩ (Acn ∪ Bcn) ⊂ (A ∩ Acn) ∪ (B ∩ Bcn),

and therefore

  ∥(A ∩ B) \ (An ∩ Bn)∥_{An} ≤ ∥(A ∩ Acn) ∪ (B ∩ Bcn)∥_{An} ≤ ∥A ∩ Acn∥_{An} + ∥B ∩ Bcn∥_{An} → 0.

For the case of the union of two sets, we have

  χ_{An∪Bn} = χAn + χBn − χAn χBn → χA + χB − χA χB = χ_{A∪B} a.e.,

and

  (A ∪ B) \ (An ∪ Bn) = (A ∪ B) ∩ (Acn ∩ Bcn) ⊂ (A ∩ Acn) ∪ (B ∩ Bcn).  ∎

Lemma 4.7. If A is uniformly covered by {An}n∈N, then

  lim sup_{n→∞} (E(χA|An))(x) ≤ χA(x) a.e.

Proof. Let {An} be a sequence with the properties of Definition 4.3. Then

  E(χA|An) = E(χA χAn + χA χ_{Acn} | An) = χAn E(χA|An) + E(χ_{A\An}|An) ≤ χAn + ∥χ_{A\An}∥_{An}.

As A is uniformly covered by the sequence {An}, condition i) of Definition 4.3 implies that χAn → χA a.e. So

  lim sup_{n→∞} E(χA|An) ≤ χA a.e.  ∎

In view of the proof, we can actually relax the uniform covering condition somewhat and get a similar result.

Lemma 4.8. Let {An}n∈N be a sequence of σ-subalgebras. If A ∈ A is such that there is a sequence {An ∈ An}n∈N satisfying

  i) A = ⋃_{N=1}^∞ ⋂_{n≥N} An = ⋂_{N=1}^∞ ⋃_{n≥N} An,

  ii) E(χ_{A\An}|An) → 0 a.e. as n → ∞,

then

  lim sup_{n→∞} (E(χA|An))(x) ≤ χA(x) a.e.

Proof. The proof is exactly the same as that of the above lemma:

  E(χA|An) = E(χA χAn + χA χ_{Acn}|An) = χAn E(χA|An) + E(χ_{A\An}|An) ≤ χAn + E(χ_{A\An}|An).  ∎

Notice that if A is uniformly covered by {An}, the conditions of Lemma 4.8 are satisfied.

The interesting case occurs when both A and Ac are uniformly covered by {An}, or satisfy the conditions of the above lemma.

Lemma 4.9. If A ∈ A is such that A and Ac are uniformly covered by {An} (or satisfy the conditions of Lemma 4.8), then

  E(χA|An) → χA a.e.

Proof. We have

  lim sup_{n→∞} E(χ_{Ac}|An) = lim sup_{n→∞} E(1 − χA|An) = lim sup_{n→∞} (1 − E(χA|An)) = 1 − lim inf_{n→∞} E(χA|An).

Since Ac is uniformly covered,

  χ_{Ac} = 1 − χA ≥ 1 − lim inf_{n→∞} E(χA|An),

and so

  lim inf_{n→∞} E(χA|An) ≥ χA a.e.

Finally, as A is uniformly covered,

  lim sup_{n→∞} E(χA|An) ≤ χA ≤ lim inf_{n→∞} E(χA|An) a.e.,

and then E(χA|An) → χA a.e.  ∎

Now the necessary lemma for a.e. convergence of characteristic functions.

Lemma 4.10. Let A ∈ A and let {An}n∈N be σ-subalgebras such that

  E(χA|An) → χA a.e.

Then A and Ac are uniformly covered by {An}n∈N.

Proof. Let 0 < r < 1. Define An ∈ An as

  An = { x ∈ X : E(χA|An)(x) ≥ r }.

Since E(χA|An) → χA a.e., we have that χAn → χA a.e., and therefore

  A = ⋃_{N=1}^∞ ⋂_{n>N} An = ⋂_{N=1}^∞ ⋃_{n>N} An.

It is also clear that

  0 ≤ E(χ_{A\An}|An) = E(χA χ_{Acn}|An) = χ_{Acn} E(χA|An) < χ_{Acn} r ≤ r.

Thus ∥E(χ_{A\An}|An)∥∞ ≤ r and, by Lemma 4.4, A is uniformly covered.

Since E(χA|An) → χA a.e. implies that E(χ_{Ac}|An) → χ_{Ac} a.e., we have that Ac is also uniformly covered.  ∎

From Lemma 4.9 and Lemma 4.10 we have the following theorem.

Theorem 4.11. Let {An}n∈N be a sequence of σ-algebras and let A ∈ A. Then

  E(χA|An) → χA a.e.

if and only if A and Ac are uniformly covered by {An}n∈N.

In view of the above theorem it is convenient to establish the following definition.

Definition 4.12. Define Aµ.a.e. as

  Aµ.a.e. = { A ∈ A : A and Ac are uniformly covered by {An}n∈N }.

Before we proceed, we observe that if {An ∈ An} is a sequence of sets that uniformly covers A and {Cn ∈ An} is a sequence of sets that uniformly covers Ac, then a sequence built from An and Cn uniformly covers A and Ac. We then have that Aµ.a.e. is an algebra of sets. XXXX No, in fact we need the complement to be covered as well. If that is the case then yes, but why?

Notice that if Aµ.a.e. is a σ-subalgebra then

  A ⊂ Aµ.a.e. ⊂ F ⊂ Aµ ⊂ A⊥ ⊂ Ā.

Lemma 4.13. If Aµ.a.e. is a σ-subalgebra then:

  i) For f ∈ L∞(Aµ.a.e.) we have that

    E(f|An) → E(f|Aµ.a.e.) = f a.e.

  ii) If there is a σ-subalgebra B such that E(f|An) → E(f|B) a.e. for all f ∈ L∞(Aµ.a.e.), then B ⊂ Aµ.a.e..

Proof. Let ε > 0. As f ∈ L∞(Aµ.a.e.), there is a simple, Aµ.a.e.-measurable function g such that ∥f − g∥∞ < ε/2. Lemma 4.9 implies that E(g|An) → g a.e. Thus

  |E(f|An) − f|(x) ≤ |E(f|An) − E(g|An)|(x) + |E(g|An) − g|(x) + |g − f|(x)
    ≤ |E(f − g|An)|(x) + |E(g|An) − g|(x) + ∥g − f∥∞
    ≤ ε + |E(g|An) − g|(x).

So lim sup_{n→∞} |E(f|An) − f|(x) ≤ ε a.e. for every ε, and therefore lim_{n→∞} E(f|An) = f a.e.

To prove ii), let A ∈ B. By hypothesis E(χA|An) → χA a.e.

For 0 < ε < 1 define An = { x : E(χA|An)(x) > ε } ∈ An.

Since for almost all x ∈ A we have E(χA|An)(x) → 1, x ∈ An for n big enough, so χA(x) χAn(x) → χA(x) a.e. The case for Ac is similar: for almost all x ∉ A, E(χA|An)(x) → 0, hence x ∉ An for n big enough, and thus χ_{Ac}(x) χAn(x) → 0 a.e. Therefore χAn(x) → χA(x) a.e., and

  A = ⋃_{N=1}^∞ ⋂_{n>N} An = ⋂_{N=1}^∞ ⋃_{n>N} An.

We also have

  ∥A \ An∥_{An} = ∥E(χA χ_{Acn}|An)∥∞ = ∥χ_{Acn} E(χA|An)∥∞ ≤ ε ∥χ_{Acn}∥∞ ≤ ε.

And so, by Lemma 4.4, A is uniformly covered by {An}. The same argument applied to Ac shows that Ac is uniformly covered, and hence A ∈ Aµ.a.e..  ∎

We can improve the above lemma by considering functions in Lp.

Lemma 4.14. If Aµ.a.e. is a σ-subalgebra and 1 ≤ p < ∞, then for all f ∈ Lp(Aµ.a.e.) we have that

  E(f|An) → E(f|Aµ.a.e.) = f a.e.

How do the above results look in the case where the An are monotone? In the case of a monotone decreasing sequence of σ-subalgebras it is clear that ⋁_{n=N}^∞ An = AN and that, for any N, ⋂_{n=N}^∞ An = ⋂_{n=1}^∞ An. Therefore

  ⋂_{n=1}^∞ An = A ⊂ F ⊂ Ā = ⋂_{n=1}^∞ An,

so all of these coincide, and if A ∈ F the sequence An = A ∈ An trivially uniformly covers A. XXXXXX The proof is missing.
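To make the seminorm of Definition 4.1 and the formula of Lemma 4.2 concrete, here is a small worked example of our own (not taken from the original text), written in LaTeX.

    % Worked example (ours): X=[0,1] with Lebesgue measure, B generated by the
    % partition B_1=[0,1/2), B_2=[1/2,1], and A=[1/4,3/4].
    \[
      E(\chi_A \mid \mathcal{B})
      = \frac{\mu(A\cap B_1)}{\mu(B_1)}\,\chi_{B_1}
      + \frac{\mu(A\cap B_2)}{\mu(B_2)}\,\chi_{B_2}
      = \tfrac{1/4}{1/2}\,\chi_{B_1} + \tfrac{1/4}{1/2}\,\chi_{B_2}
      = \tfrac12 .
    \]
    \[
      \|A\|_{\mathcal{B}} = \|E(\chi_A\mid\mathcal{B})\|_\infty = \tfrac12
      = \sup_{B\in\mathcal{B},\,\mu(B)>0}\frac{\mu(A\cap B)}{\mu(B)},
    \]
    % the supremum being attained at B=B_1 (or B_2); taking B=X also gives 1/2.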
+ 4.2.
1306
+ Necessary and sufficient conditions to have
1307
+ In [4.x] we defined a σ-subalgebra A⊥ by first considering the set W
1308
+ W =
1309
+
1310
+ g ∈ L2(A) : ∃Ank ∈ Ank with χAnk
1311
+ L2−weakly
1312
+ −→
1313
+ g
1314
+
1315
+ ,
1316
+ A⊥ was defined as the minimum σ-algebra generated by W. We are going to do something similar. First we notice
1317
+ the obvious fact χAn = E(χAn|An). Define then, given a sequence {An}n∈N σ-subalgebras, the set
1318
+ CN =
1319
+ h ∈ L1(µ) : h =
1320
+
1321
+ k≥N
1322
+ E(χBk|Ak), Bk ∈ A, disjoint, and such that there are only finite many of such B′
1323
+ ks
1324
+ .
1325
+ Notice that if h ∈ CN
1326
+ ∥h∥1 =
1327
+ � �
1328
+ n≥N
1329
+ E(χBn|An)dµ =
1330
+
1331
+ n≥N
1332
+ µ(Bn) = µ
1333
+
1334
+ 
1335
+
1336
+ n≥N
1337
+ Bn
1338
+
1339
+  ≤ 1.
1340
+ XXX”””XXX”””
1341
+ We will define
1342
+ Definition 4.15.
1343
+ W ⊥a.e. =
1344
+
1345
+ f ∈ L∞(µ) : for every subsequence
1346
+
1347
+ hNk
1348
+
1349
+ , hNk ∈ CNk,
1350
+
1351
+ f ,hNk
1352
+
1353
+ −→
1354
+ k→∞ 0
1355
+
1356
+ .
1357
+ Theorem 4.16. If f ∈ L∞(µ) then
1358
+ E(f |An)
1359
+ a.e.
1360
+ −→ 0,
1361
+ (4)
1362
+ if and only if f ∈ W ⊥a.e..
1363
+ Proof. ⇒) Let f ∈ L∞(µ). Without loss of generality we can assume that ∥f∥∞ ≤ 1. First, notice that if h ∈ C_N we have
+ ⟨h, f⟩ = Σ_{n≥N} ⟨E(χ_{B_n} | A_n), f⟩ = Σ_{n≥N} ⟨χ_{B_n}, E(f | A_n)⟩.
+ Let ǫ > 0. Since E(f | A_n) −→ 0 a.e., Egoroff’s theorem implies that there is a set M_ǫ and an N_1 ∈ N such that
+ µ(M_ǫ^c) < ǫ/2 and, for n ≥ N_1, ∥χ_{M_ǫ} E(f | A_n)∥∞ < ǫ/2.
+ Thus if N > N_1,
+ |⟨h_N, f⟩| = | Σ_{n≥N} ⟨χ_{B_n}, E(f | A_n)⟩ |
+ ≤ Σ_{n≥N} ( |⟨χ_{B_n}, χ_{M_ǫ} E(f | A_n)⟩| + |⟨χ_{B_n} χ_{M_ǫ^c}, E(f | A_n)⟩| )
+ ≤ (ǫ/2) Σ_{n≥N} ⟨χ_{B_n}, χ_{M_ǫ}⟩ + Σ_{n≥N} ∥E(f | A_n)∥∞ ∥χ_{B_n} χ_{M_ǫ^c}∥_1
+ ≤ (ǫ/2) µ(M_ǫ) + ∥f∥∞ µ(M_ǫ^c) ≤ ǫ,
+ thus ⟨h_N, f⟩ −→ 0 as N → ∞.
1429
+ ⇐) To prove the theorem in the other direction, first notice that, since we can take f or −f, without loss of
+ generality we can assume that there is an ǫ such that
+ µ( ⋂_{N=1}^∞ ⋃_{n≥N} {x ∈ X : E(f | A_n)(x) ≥ ǫ} ) > 0.
+ That is, there is an r > 0 such that for any N there is an M with
+ µ( ⋃_{N≤n≤M} {x ∈ X : E(f | A_n)(x) ≥ ǫ} ) > r.
+ Let A_n = {x ∈ X : |E(f | A_n)(x)| > ǫ} and, as usual,
+ B_N = A_N,  B_k = A_k \ ⋃_{j=N}^{k−1} B_j for N < k < M.
+ (5)
+ Then {B_k} is a disjoint family and ⋃_{N≤n<M} B_n = ⋃_{N≤n<M} A_n. Let
+ h_N = Σ_{N≤n<M} E(χ_{B_n} | A_n);
+ we have that
+ ⟨h_N, f⟩ = Σ_{N≤n<M} ⟨χ_{B_n}, E(f | A_n)⟩ > ǫ Σ_{N≤n<M} µ(B_n) = ǫ µ( ⋃_{N≤n<M} B_n ) > ǫr.
+ We have then constructed a sequence {h_N}_{N∈N} such that for all N, ⟨h_N, f⟩ > ǫr. Therefore f ∉ W^{⊥a.e.}.
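+ As a quick sanity check (ours, not the paper’s): for a constant function f ≡ c with c > 0 we have E(f | A_n) = c for every n, so (4) fails; and indeed, fixing any B ∈ A with µ(B) > 0 and taking h_N = E(χ_B | A_N) ∈ C_N,
+ \[
+ \langle f, h_N\rangle = c\int E(\chi_B \mid \mathcal{A}_N)\,d\mu = c\,\mu(B) \not\longrightarrow 0,
+ \]
+ so f ∉ W^{⊥a.e.}, in agreement with Theorem 4.16.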
1488
+
1489
+ In XXX we defined the orthogonal conditional expectation induced by B as the operator E⊥_B = I − E_B, which in
+ L2(A) is the orthogonal projection E⊥_B : L2(A) → L2(B)⊥. Let D be the following family of σ-subalgebras:
+ D = { B ⊂ A : E⊥_B f ∈ W^{⊥a.e.} for all f ∈ L∞(A) }.
+ It is clear that D is not empty since it trivially contains A.
+ An immediate property is the following.
+ Proposition 4.17. Let C, B be two σ-subalgebras. If C ∈ D and B ⊃ C, then B ∈ D.
+ Proof. Notice that in this case E⊥_B = E⊥_C E⊥_B. So, as f ∈ L∞ implies that E⊥_B f is also in L∞,
+ E⊥_B f = E⊥_C (E⊥_B f) ∈ W^{⊥a.e.}.
1510
+ Definition 4.18. A_min will be the minimum complete σ-subalgebra that contains the set
+ W̃ = { g ∈ L2(A) : there is {h_{N_k}}, h_{N_k} ∈ C_{N_k}, such that h_{N_k} −→ g weakly in L2 }.
+ Lemma 4.19. If B ∈ D then A_min ⊂ B.
+ Proof. Let g ∈ W̃ and h_N ∈ C_N be such that h_N −→ g weakly. Since B is in D we have that, for all f in Lp ∩ L∞,
+ ⟨f, E(g | B)⟩ = ⟨E(f | B), g⟩ = lim_{N→∞} ⟨E(f | B), h_N⟩ = lim_{N→∞} ⟨(I − E⊥_B)f, h_N⟩ = lim_{N→∞} ⟨f, (I − E⊥_B)h_N⟩ = lim_{N→∞} ⟨f, h_N⟩ = ⟨f, g⟩.
+ Therefore g is equal almost everywhere to E(g | B) and so is B-measurable. Since by definition A_min is the minimum
+ σ-subalgebra that makes all such g measurable, A_min ⊂ B.
1543
+ Definition 4.20. We will call the sequence {A_n}_{n∈N} 2-bounded if for all h_N in C_N, sup_N ∥h_N∥_2 < ∞.
+ Lemma 4.21. If {A_n} is 2-bounded and B is a σ-subalgebra, then B ∈ D if and only if A_min ⊂ B.
+ Proof. In view of Lemma 4.19 we only need to prove the implication (⇐). Assume A_min ⊂ B and suppose
+ that A_min is not in D. That means that there is an f ∈ L∞ such that its orthogonal projection with respect to A_min
+ is not in W^{⊥a.e.}. That is, there is a sequence {h_N ∈ C_N} such that ⟨E⊥_{A_min} f, h_N⟩ does not converge to zero. Hence, there is
+ an ǫ > 0 and a subsequence {h_{N_k}} such that |⟨E⊥_{A_min} f, h_{N_k}⟩| > ǫ. By hypothesis the sequence of σ-subalgebras is 2-
+ bounded, so there is a further subsequence, which we still denote by h_{N_k}, that weakly converges to an h in L2. By definition
+ h is in W̃ and therefore is A_min-measurable. But that leads us to a contradiction, since 0 < ǫ ≤ |⟨E⊥_{A_min} f, h⟩| = |⟨f, E⊥_{A_min} h⟩| = 0.
+ Of course the 2-boundedness condition is a very strong restriction. In the following we will show that the main
+ property is the fact that D has a minimum σ-subalgebra. In those cases we will denote that minimum by A^{⊥a.e.}.
+ Of course, in the case the sequence is 2-bounded, A^{⊥a.e.} = A_min.
1574
+ HOOOOOOOOOOOOOOOLA XXXXXXXXXXXXXXXXXXXXX
1575
+ 4.3.
+ Necessary and sufficient conditions to have convergence a.e.
+ Theorem 4.22. If {A_n} is a sequence of σ-subalgebras such that
+ i)
+ ii) for all f ∈ L∞, E(f | A_n) converges a.e.,
+ then A_µa.e. is a σ-subalgebra, D has a minimum σ-subalgebra, and A_µa.e. = A^{⊥a.e.}.
+
1583
+ Proof. By definition A_µa.e. = {A ∈ A : A and A^c are uniformly covered}. Let us recall that by xxx, A_µa.e. is an algebra.
+ Thus, in order to prove that it is a σ-subalgebra, we only need to check that if we have a sequence of pairwise disjoint
+ sets {B_k}, with B_k ∈ A_µa.e., then ⋃_{k=1}^∞ B_k ∈ A_µa.e..
+ In view of Theorem 4.11, we need to prove that E(χ_{⋃_{k=1}^∞ B_k} | A_n) −→ χ_{⋃_{k=1}^∞ B_k} a.e. Now, since the B_k are in A_µa.e., for any
+ natural M we have
+ E(χ_{⋃_{k=1}^∞ B_k} | A_n) ≥ E(χ_{⋃_{k=1}^M B_k} | A_n) −→ χ_{⋃_{k=1}^M B_k} a.e.
+ By hypothesis, the conditional expectations converge almost everywhere for any f ∈ L∞. Therefore there is a
+ g ∈ L∞ such that
+ E(χ_{⋃_{k=1}^∞ B_k} | A_n) −→ g a.e.,
+ and thus g ≥ χ_{⋃_{k=1}^M B_k}. Since this is true for any M, we conclude that g ≥ χ_{⋃_{k=1}^∞ B_k}.
+ Using the dominated convergence theorem we have that
+ 0 ≤ ∫ ( g − χ_{⋃_{k=1}^∞ B_k} ) dµ = lim_n ∫ ( E(χ_{⋃_{k=1}^∞ B_k} | A_n) − χ_{⋃_{k=1}^∞ B_k} ) dµ = µ(⋃_{k=1}^∞ B_k) − µ(⋃_{k=1}^∞ B_k) = 0.
+ And hence g = χ_{⋃_{k=1}^∞ B_k} a.e.
1636
+
OdFAT4oBgHgl3EQfyx46/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
OtAyT4oBgHgl3EQfUfen/content/2301.00127v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:825885e610f9e8dcc2b37adc488be0ed6a3a706a51e1a4adf026dcbe315b891a
3
+ size 10895619
PdE0T4oBgHgl3EQfkAEs/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:854a808ad7570e13eff25fcf955d9fd8d904a038c36da5efc43cb37e6816a451
3
+ size 3538989
PdFQT4oBgHgl3EQfYjZ5/content/2301.13312v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:507873fb8163e2c0587e46e31f0b722c673949d44e87b361498dd6bde3a57083
3
+ size 571932
PdFQT4oBgHgl3EQfYjZ5/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8a193c67ed97a181980ff8556c8b291a1e2e0cdcd6c81ea88e548cb36360ad28
3
+ size 6029357
PdFQT4oBgHgl3EQfYjZ5/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a2362316c6974f6db9584ea7c181f8c5bb38d60b311d002b711c08366ba66ff5
3
+ size 207235
PtA0T4oBgHgl3EQfDP9d/content/2301.02000v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:827470b7a96af34eed66c86fb55d4f08f3f8b19f1cb5a6fc5c3f9ceccd12c484
3
+ size 343541
PtA0T4oBgHgl3EQfDP9d/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b42daefe6c94131e848cda1e0471fa6bca6c65e11e18cc9897d80d130d46a24e
3
+ size 3997741
PtA0T4oBgHgl3EQfDP9d/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7c7cebe3f200862dbd7ea8beb62d10af89353cd32cee9b92e71e919535433b0c
3
+ size 153418
PtE0T4oBgHgl3EQfTwCq/content/tmp_files/2301.02241v1.pdf.txt ADDED
@@ -0,0 +1,2158 @@
1
+ CiT: Curation in Training for Effective Vision-Language Data
2
+ Hu Xu
3
+ Saining Xie
4
+ Po-Yao Huang
5
+ Licheng Yu
6
+ Russell Howes
7
+ Gargi Ghosh
8
+ Luke Zettlemoyer
9
+ Christoph Feichtenhofer
10
+ FAIR, Meta AI
11
+ https://github.com/facebookresearch/CiT
12
+ Abstract
13
+ Large vision-language models are generally applicable
14
+ to many downstream tasks, but come at an exorbitant train-
15
+ ing cost that only large institutions can afford. This pa-
16
+ per trades generality for efficiency and presents Curation
17
+ in Training (CiT), a simple and efficient vision-text learning
18
+ algorithm that couples a data objective into training. CiT
19
+ automatically yields quality data to speed-up contrastive
20
+ image-text training and alleviates the need for an offline
21
+ data filtering pipeline, allowing broad data sources (includ-
22
+ ing raw image-text pairs from the web). CiT contains two
23
+ loops: an outer loop curating the training data and an inner
24
+ loop consuming the curated training data. The text encoder
25
+ connects the two loops. Given metadata for tasks of interest,
26
+ e.g., class names, and a large pool of image-text pairs, CiT
27
+ alternatively selects relevant training data from the pool by
28
+ measuring the similarity of their text embeddings and em-
29
+ beddings of the metadata. In our experiments, we observe
30
+ that CiT can speed up training by over an order of magni-
31
+ tude, especially if the raw data size is large.
32
+ 1. Introduction
33
+ Vision-language models have demonstrated success for
34
+ fine-tuning and zero-shot transfer to downstream tasks [12,
35
+ 21,26] by training on a general-purpose large-scale dataset
36
+ instead of a small task-level dataset. While general, large-
37
+ scale pre-training is computationally expensive (e.g. CoCa
38
+ [36] trains on 2048 TPUs for 5 days) and typically per-
39
+ formed on a pre-filtered dataset (e.g. WIT400M [21] used
40
+ by CLIP [21] is created by searching for image-text pairs
41
+ with text containing a set of 500,000 queries and [24] uses
42
+ this model to create another dataset).
43
+ Such filtering pipelines usually involve manual labor-
44
+ intensive efforts to remove data that is unlikely useful for
45
+ downstream tasks [12, 21]. Recent effort has been made to
46
+ curate data for high-quality image-text pairs (such as CC3M
47
+ [25], CC12M [3], YFCC15M [21,29], WIT400M [21] and
48
+ LAION [23, 24]). Nevertheless, research is typically tied
49
+ to the static datasets or model weights (if the data is not re-
50
+ Text
51
+ Model
52
+ Vision
53
+ Model
54
+ CLIP Pre-training
55
+ Filtering Pipeline
56
+ RawData
57
+ StaticTrainingData
58
+ DynamicTrainingData
59
+ Curation in Training
60
+ Text
61
+ Model
62
+ Vision
63
+ Model
64
+ Training Loop
65
+ Training Loop
66
+ Data Curation Loop
67
+ Text
68
+ Model
69
+ Metadata
70
+ RawData
71
+ Figure 1.
72
+ A conceptual illustration of CLIP training vs. CiT.
73
+ Vanilla CLIP training uses static data from offline human filter-
74
+ ing (e.g. cleaned YFCC15M or WIT400M [21]) and optimizes the
75
+ model. Instead, our CiT incorporates dynamic data curation into
76
+ training in two loops: (i) an outer curation loop improving data (for
77
+ downstream tasks) given the current model; (ii) an inner loop op-
78
+ timizing the model given the curated data. The trained text model
79
+ connects the loops by providing embeddings for curation.
80
+ leased) and is not able to access or change the data pipelines
81
+ or model architectures. Further, work is limited by the pro-
82
+ hibitive cost of training on these large image-text datasets
83
+ (e.g. the CLIP model is trained on WIT400M for 12 days
84
+ using 256 GPUs).
85
+ In this work, our goal is to empower training with the ca-
86
+ pability of adjusting the data distribution. Our intention is to
87
+ dynamically curate the data during training and our key idea
88
+ is to use the learned text representation of vision-language
89
+ models to measure relevance of the data w.r.t. the task of in-
90
+ terest. Given metadata (from downstream tasks e.g. a class
91
+ name such as “chicken”), we measure its embedding simi-
92
+ larity to the training data. This similarity can guide us for
93
+ the decision of including this data into our training process.
94
+ For example a caption containing the word “giraffe” will
95
+ have higher embedding similarity to “chicken” than a cap-
96
+ tion such as “throwback Thursday”.
97
+ arXiv:2301.02241v1 [cs.CV] 5 Jan 2023
98
+
99
+ Driven by this idea, we present a simple algorithm that
100
+ incorporates data Curation in Training (CiT), aiming at im-
101
+ proving both data efficiency and model performance. CiT
102
+ works as follows. Given a large source of image-text pairs
103
+ and metadata (e.g. a list of class names used in this paper),
104
+ CiT alternatively performs curation of the data and training
105
+ on that curated data. As shown in Figure 1, CiT contains
106
+ two loops: an outer loop to curate data given the current
107
+ model and an inner loop trains the model given the curated
108
+ data. Similar as Locked image Tuning (LiT [38]), CiT uses
109
+ pre-trained image and text encoders and freezes the image
110
+ one. The text model connects the two loops by serving cu-
111
+ rated data to inner loop for training which in turn learns
112
+ good representations for the outer loop for curation.
113
+ CiT can speed up training by multiple orders of mag-
114
+ nitude, especially if the raw data size is large; e.g. when
115
+ trained on LAION-400M data, CiT reaches similar Ima-
116
+ geNet zero-shot1 accuracy as OpenCLIP [31], while being
117
+ 37.7× faster in training. Since CiT changes the training
118
+ data distribution that focuses on one or more tasks of in-
119
+ terest, it can even handle image-text pairs from any (noisy)
120
+ source with unknown distribution. Our experiments reveal
121
+ that vanilla CLIP/LiT training fails on raw random image-
122
+ text pairs crawled from the web, while CiT trains easily.
123
+ 2. Related Work
124
+ Vision-Language Learning. Contrastive learning was ini-
125
+ tially popular in vision self-supervision [4,11,32] and later
126
+ adopted for cross-modal learning [16, 18, 19,21, 33]. CLIP
127
+ [21] populates the idea of contrastive learning from image-
128
+ text pairs (used before e.g. in ConVIRT [40]) at scale and
129
+ shows a strong performance of zero-shot transfer to image
130
+ classification and retrieval tasks. SLIP [20] combines image
131
+ self-supervision and language supervision. LiT [38] shows
132
+ that when a good pre-trained vision encoder is adopted, it
133
+ is better to lock (freeze) the well pre-trained vision encoder
134
+ to protect vision representations from being corrupted by
135
+ noisy language supervision. Flamingo also use pre-trained
136
+ models for various tasks [1].
137
+ Vision-Language Data.
138
+ Large-scale vision-language
139
+ learning is typically coupled to a data pipeline to yield high-
140
+ quality data for efficient training [12, 26, 37]. For exam-
141
+ ple, CC3M [25] heavily filters web crawled pairs and only
142
+ keeps 0.1% of the raw data.
143
+ Both CC3M and CC12M
144
+ [3] leverage Google Cloud APIs with models predicting a
145
+ large number of classes (on the order of 105) [25] to filter
146
+ out mismatched image-text pairs. YFCC100M is curated
147
+ from Yahoo Flicker using text fields (such as title, descrip-
148
+ 1Zero-shot refers to not seeing any training examples of the target
149
+ dataset. We note that our approach uses extra information of the down-
150
+ stream task, such as class names; however, this metadata is easy to acquire
151
+ and can be of various forms as shown in experiments.
152
+ tion, etc.). This ensures certain data quality but limits the
153
+ scale. Later YFCC100M is further cleaned as YFCC15M
154
+ to contain English-only image-text pairs by [21]. Due to
155
+ the limited scale, CLIP further curates a WebImageText
156
+ dataset (WIT400M) by formulating queries from Wikipedia
157
+ and performing searching them online to obtain image-text
158
+ pairs. Florence [37] curates a dataset with the extra multi-
159
+ label signals to improve supervision. ALIGN [12] relaxes
160
+ CC12M filtering to show that training on 1.8B noisy pairs
161
+ can achieve CLIP-level performance. FLAVA [26] com-
162
+ bines existing human annotated datasets of smaller scale for
163
+ high-quality image-text pairs. Different to related research,
164
+ CiT improves data within the training algorithm, and not as
165
+ a pre-filtering. We demonstrate that such approach allows
166
+ us to effectively learn from raw image-text pairs.
167
+ Related Areas. Our work is related to research in other do-
168
+ mains. In NLP, there are existing works on domain-adaptive
169
+ finetuning and retrieval [9, 14, 15, 34, 35, 39]. In machine
170
+ learning research, subset selection [13, 30] cast data selec-
171
+ tion as a discrete bi-level optimization problem.
172
+ 3. Method
173
+ In CLIP pre-training, the training objective (e.g. con-
174
+ trastive image-text correspondence) operates as a training
175
+ proxy that approximates downstream tasks (e.g. classifica-
176
+ tion accuracy). Our CiT introduces a data proxy to fit the
177
+ data distribution to downstream tasks. In this section, we
178
+ first go through the details of the CiT algorithm in §3.1,
179
+ training loop in §3.2 and the data proxy for the curation
180
+ loop in §3.3.
181
+ 3.1. CiT Algorithm
182
+ CiT contains two loops: the curation loop curates data
183
+ given the current weights of the model and the training loop
184
+ optimizing the weights given the curated data.
185
+ Let D = {(xi
186
+ img, xi
187
+ txt)}N
188
+ i=1, be the set of source of image-
189
+ text pairs. Then DC ⊆ D is the actual training data we
190
+ aim to curate from the source. We define two functions:
191
+ (i) Curation(D; Θ), and (ii) Training(Θ; DT ), for curation
192
+ and training loops, respectively. Importantly, the weights
193
+ of the learned model Θ connects the two loops and serves
194
+ the curation loop with the updated representations from the
195
+ training loop. There is no notion of a fixed dataset or train-
196
+ ing epochs over D; instead, we view the data source as an
197
+ online data stream. CiT uses a sequential setup that alter-
198
+ natively performs curation for every s pairs of training.
199
+ CiT is shown in Algorithm 1. It takes 3 inputs: a data
200
+ source D, the pre-trained weights Θ and a training budget
201
+ b, which can be training time, resources consumed, etc. We
202
+ simply use steps of weight updates as the training cost in
203
+ this paper. Line 1 initializes the training budget. Line 2
204
+ determines if current training exceeds that training budget.
205
+
206
+ Algorithm 1: CiT (see §A.1.1 for pseudo code)
207
+ Input: D: data source
208
+ Θ: model’s pre-trained weights
209
+ b: training budget
210
+ 1 c ← 0
211
+ 2 while c < b do
212
+ 3
213
+ DT ← Curation
214
+
215
+ D; Θ
216
+
217
+ 4
218
+ Θ, n ← Training(Θ; DT )
219
+ 5
220
+ c ← c + n
221
+ 6 end
222
+ Algorithm 2: Curation
223
+ Input
224
+ : Θ: model’s current weights
225
+ D: data source
226
+ Constant: m(·; ·): model architecture
227
+ Tmeta: metadata for tasks of interests
228
+ s: number of expected pairs
229
+ 1 xmeta ← m(Tmeta; Θ)
230
+ 2 DC ← ∅
231
+ 3 while |DC| < s and Draw ⊂ D do
232
+ 4
233
+ Draw,txt ← {xi
234
+ txt|(xi
235
+ img, xi
236
+ txt) ∈ Draw}
237
+ 5
238
+ xtxt ← m(Draw,txt; Θ)
239
+ 6
240
+ f ← DataProxy(Draw, xtxt, xmeta)
241
+ 7
242
+ DC ← f(Draw; Θ, Draw, Tmeta) ∪ DC
243
+ 8 end
244
+ 9 return DC
245
+ The main framework of CiT is to alternatively perform cu-
246
+ ration and training in line 2-4. To recap CLIP pre-training,
247
+ we first detail the training function next.
248
+ 3.2. Training
249
+ The core of CLIP [21] training is the contrastive cross-
250
+ modal objective serving as the proxy to approximates down-
251
+ stream tasks (e.g. higher classification accuracy). This ob-
252
+ jective pulls embeddings of positive image-text pairs closer
253
+ and pushes negative pairs from other examples in a training
254
+ batch apart; thus it creates a proxy for classification, which
255
+ has one example per class and the rest of the batch are other
256
+ classes described by natural language.
257
+ The training loop is shown in Algorithm 3, with the
258
+ training data DC, delivered from curation. We let m(·; ·)
259
+ denote the image-text model.
260
+ We use sim(ximg, xtxt) =
261
+ ximgx⊤
262
+ txt/(∥ximg∥∥xtxt∥) in line 3 to compute the image-to-
263
+ text cosine similarity, divided by a trainable temperature τ.
264
+ Our CiT training objective has almost the same structure as
265
+ in CLIP, except that we only use an image-to-text (and no
266
+ text-to-image) contrastive loss (Limg2txt) in line 4. We ab-
267
+ late this loss versus the averaged bidirectional contrastive
268
+ loss (used by CLIP) in our experiments. Line 5 updates the
269
+ model parameters and line 6 counts training cost.
270
+ 3.3. Curation
271
+ CiT also has a data objective that curates data using the
272
+ (previously updated) model. Encoding the data with an up-
273
+ Algorithm 3: Training
274
+ Input
275
+ : DC: curated training data
276
+ Θ: model’s weights
277
+ Constant: m(·; ·): model architecture
278
+ 1 foreach Dbatch ⊂ DC do
279
+ 2
280
+ ximg, xtxt ← m(Dbatch; Θ)
281
+ 3
282
+ l ← sim(ximg, xtxt)/τ
283
+ 4
284
+ Limg2txt ← CrossEntropy(l, arange(|Dbatch|))
285
+ 5
286
+ Θ ← Limg2txt(Θ; Dbatch)
287
+ 6
288
+ n ← n + 1
289
+ 7 end
290
+ 8 return Θ, n
291
+ dated model allows for better representation of the data.
292
+ Akin to the contrastive objective for training, the core func-
293
+ tion in curation is a data proxy (or objective) that selects
294
+ data based on the metadata (e.g. a list of class names).
295
+ We detail the curation loop in Algorithm 2. It takes the
296
+ following inputs: model weights Θ, a data source D, the
297
+ model architecture, the metadata for downstream tasks Tmeta
298
+ and an expected size of curated data s. Tmeta is a list con-
299
+ taining a pre-defined taxonomy; (e.g. ImageNet WordNet
300
+ lemmas or a combination from a group of tasks in our ex-
301
+ periments), but could be generalized to other forms of text.
302
+ Algorithm 2 first obtains the embeddings for the meta-
303
+ data in line 1. Then it sets up the curated set DC for the next
304
+ round of training and keeps curating data in line 3-7. Line 3
305
+ gets the next batch of raw image-text pairs. Line 4 obtains
306
+ its text part and line 5 computes the text embedding from
307
+ the current model. Line 6 is the data proxy, which approx-
308
+ imates the data distribution for the downstream tasks (de-
309
+ tailed in the next subsection). Lastly, we merge the newly
310
+ curated subset into the curated set DC.
311
+ Data Proxy.
312
+ We use language-based metadata and the
313
+ text encoder to measure the relevance of training data. This
314
+ favors efficiency because the text encoders are typically sig-
315
+ nificantly cheaper to evaluate (e.g. the text encoder only
316
+ uses ∼4.6% of the ViT-L image-encoders’ compute).
317
+ In DataProxy(Draw, xtxt, xmeta) of Algorithm 2, we first
318
+ compute the similarities of text embeddings (xtxt) over em-
319
+ beddings of the metadata (xmeta):
320
+ v^i_max = max_j sim(x^i_txt, x^j_meta),
+ (1)
+ where sim(x^i_txt, x^j_meta) = x^i_txt x^{j,⊤}_meta / (∥x^i_txt∥ ∥x^j_meta∥) is the cosine similarity between embeddings of sample i and
+ metadata j. Here the highest similarity over all metadata, v^i_max, is used to measure the sample quality.
337
+ Let D_t = {(x^i_img, x^i_txt) | (x^i_img, x^i_txt) ∈ D_raw and v^i_max > t} denote a subset, where all samples have a maximum
+ similarity above a curation threshold t. Given the best possible match to metadata, we use a mixed strategy to determine
+ if a sample shall be used:
+ { D_t,                                     if |D_t| / |D_raw| > γ,
+ { arg topk_i(v^i_max, k = γ|D_raw|),       otherwise,
+ (2)
+ where |D_t| / |D_raw| is the ratio of curation, with γ being a pre-defined minimal ratio of curation. If enough samples meet
+ the threshold t, D_t is used. Otherwise, we use a minimal ratio γ of samples, that represent the top-k matching ones
+ (with k = γ|D_raw|) in terms of similarity across metadata.
364
+ The threshold t is crucial for CiT to balance the tradeoff
365
+ between data quality and quantity. A higher t leads to high
366
+ data quality, but can lead to a lower ratio of curation. We adopt
367
+ this mixed strategy because line 3 in Algorithm 2 could be-
368
+ come a near infinite loop if the ratio of curation is low and
369
+ not enough data that meets t can be found. This could hap-
370
+ pen because the threshold is set too high, or the data source
371
+ has low metadata correspondence. The otherwise part in
372
+ equation 2 resolves this by selecting the γ (typically set to
373
+ around 1% - 5%) best possible matches for training. See
374
+ §A.1.1 for PyTorch pseudo code of CiT.
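+ For illustration, a minimal PyTorch sketch of this data proxy (Eq. 1-2) is given below; the function name and the defaults (t=0.55, γ=0.05) are ours and only indicative — the paper's reference pseudo code is Algorithm 4 in §A.1.1.
+ import torch
+ import torch.nn.functional as F
+
+ def data_proxy(v_txt, v_meta, t=0.55, gamma=0.05):
+     """Select raw pairs whose text matches the metadata (Eq. 1-2).
+     v_txt:  [n, dim] text embeddings of a raw batch.
+     v_meta: [m, dim] embeddings of the metadata (e.g. class names).
+     Returns the indices of the curated samples."""
+     sim = F.normalize(v_txt, dim=-1) @ F.normalize(v_meta, dim=-1).t()  # [n, m] cosine sims
+     v_max = sim.max(dim=1).values                   # best metadata match per sample (Eq. 1)
+     keep = (v_max > t).nonzero(as_tuple=True)[0]    # samples passing the threshold
+     if keep.numel() < gamma * v_txt.size(0):        # too few pass t: fall back to top-k (Eq. 2)
+         keep = v_max.topk(k=max(1, int(gamma * v_txt.size(0)))).indices
+     return keep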
375
+ 4. Experiments
376
+ We use training data from two categories shown below;
377
+ clean data that involves human-based offline filter pipelines
378
+ and raw data that has not undergone cleaning.
379
+ 4.1. Cleaned Training Data
380
+ YFCC15M. We use the 15M subset of YFCC100M [29]
381
+ (filtered by [21]) as the main evaluation dataset as it is
382
+ widely adopted in existing literatures [20–22, 38]. It con-
383
+ sists of English-only titles, descriptions, and tags. We sim-
384
+ ply refer to this as YFCC15M in this paper. Except for ap-
385
+ plying the script from [20] to remove HTML formatting,
386
+ we do not perform any extra filtering or preprocessing. In
387
+ contrast, LiT [38] performs extra filtering such as remov-
388
+ ing titles that start with “DSC”, “IMG” and “Picture”, or
389
+ removing them if more than half of them contain digits.
390
+ CC12M. Since YFCC15M may lack enough training data,
391
+ LiT [38] also combines YFCC15M with Conceptual Cap-
392
+ tions 12M (CC12M) [3], which is filtered and transformed
393
+ from image & alt-text pairs from web pages. CC12M in-
394
+ volves cleaning by supervised models from Google Cloud
395
+ APIs to match the image’s prediction over classes with text.
396
+ LAION400M [24] contains 400M English only image-text
397
+ pairs. It is crawled from Common Crawl2 and later filtered by a CLIP [21]
398
+ model. Thus, LAION400M implicitly carries the data filter
399
+ pipeline of WIT400M on which CLIP has been trained.
400
+ 2https://commoncrawl.org
401
+ 4.2. Raw Training Data
402
+ YFCC100M. We use the raw YFCC100M (the source
403
+ of YFCC15M) to compare with YFCC15M. Note that
404
+ YFCC100M is multilingual, whereas YFCC15M is English.
405
+ Raw Image-Text Crawl. To challenge CiT with real-world
406
+ data, we further collect raw (unfiltered) image-text pairs
407
+ from Common Crawl.
408
+ We only perform de-duplication
409
+ and NSFW filtering, but no filtering on image-text associ-
410
+ ation. This ended with 1.2B multilingual image-text pairs
411
+ and 28.56% pairs are English (identified by our language
412
+ identification system but this information is not used for CiT
413
+ training). As such, about 343M image-text pairs are in En-
414
+ glish, which are slightly smaller than the scale of WIT400M
415
+ or LAION400M, but much more noisy.
416
+ 4.3. Implementation and Training
417
+ Our training recipe uses a global batch size of 16,384,
418
+ which is trained in 16 Nvidia V100 32GB GPUs. Our vi-
419
+ sion encoder corresponds to ViT [7] of various sizes and
420
+ the text encoder defaults to BERTbase-SimCSE [6,8] with a
421
+ maximum token length of 32, similar to LiT [38]. Unless
422
+ specified, we set a budget of training to be within b = 5000
423
+ steps (81M image-text pairs). We report hyper-parameters
424
+ and an extra low-cost single-GPU setting in §A.1.
425
+ We use pre-trained vision and text encoders and join
426
+ them via two randomly initialized projection layers. Fol-
427
+ lowing LiT, we freeze the vision encoder and make the text
428
+ encoder and two projection layers trainable. One can either
429
+ use the text representation before, or after the projection
430
+ layer for computing cosine similarity during curation. We
431
+ ablate these two choices in §4.6.
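+ As a rough sketch of this setup (ours, with placeholder encoder classes and dimensions), the locked-image-tower assembly could look like:
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class CiTModel(nn.Module):
+     """Frozen, pre-trained vision encoder plus trainable text encoder, joined by two
+     randomly initialized projection layers; encoder modules and dims are placeholders."""
+     def __init__(self, vision_encoder, text_encoder, vis_dim, txt_dim, embed_dim=512):
+         super().__init__()
+         self.vision_encoder = vision_encoder.eval()
+         for p in self.vision_encoder.parameters():   # lock the image tower (LiT-style)
+             p.requires_grad = False
+         self.text_encoder = text_encoder             # trainable
+         self.vis_proj = nn.Linear(vis_dim, embed_dim)
+         self.txt_proj = nn.Linear(txt_dim, embed_dim)
+         self.log_scale = nn.Parameter(torch.zeros(()))  # learnable log logit scale
+
+     def encode_text(self, tokens):
+         return F.normalize(self.txt_proj(self.text_encoder(tokens)), dim=-1)
+
+     def encode_image(self, images):
+         with torch.no_grad():
+             feats = self.vision_encoder(images)
+         return F.normalize(self.vis_proj(feats), dim=-1)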
432
+ 4.4. Evaluation
433
+ We evaluate zero-shot (0-shot) transfer accuracy of CiT
434
+ on 26 benchmarks, following [20,21]. In our ablation stud-
435
+ ies, we use YFCC15M as the main data source for training
436
+ and ImageNet-1K (IN-1K) as the downstream task. We use
437
+ prompts from CLIP for all 26 tasks and additionally use the
438
+ extra 2 prompts from LiT [38] for ImageNet for a fair com-
439
+ parison with LiT. Following CLIP, we perform prompt en-
440
+ sembling by averaging the class embeddings for each class
441
+ across the prompt templates. For classification, cosine sim-
442
+ ilarity is computed between an image embedding and the
443
+ averaged class embeddings and the class with the highest
444
+ cosine similarity is CiT’s prediction. We perform validation
445
+ every 500 training steps and stop training if the accuracy
446
+ does not increase over the previous validation. The corre-
447
+ sponding total training time (including curation and train-
448
+ ing) is reported along with the validation accuracy. We esti-
449
+ mate the training time of baselines by re-running them un-
450
+ der the same setup as CiT (i.e. 16 GPUs) and maximize the
451
+ GPU usage for best throughput. More results are in §A.2.
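+ A hedged sketch of this evaluation protocol (assuming the model exposes encode_text/encode_image helpers that accept prompt strings and images and return L2-normalized embeddings):
+ import torch
+ import torch.nn.functional as F
+
+ @torch.no_grad()
+ def zero_shot_classify(model, images, class_names, templates):
+     """Prompt-ensembled zero-shot classification: average each class's embedding over
+     the prompt templates, then pick the class with the highest cosine similarity."""
+     class_embs = []
+     for name in class_names:
+         prompts = [t.format(name) for t in templates]      # e.g. "a photo of a {}."
+         emb = model.encode_text(prompts).mean(dim=0)       # average over templates
+         class_embs.append(F.normalize(emb, dim=-1))
+     class_embs = torch.stack(class_embs)                   # [num_classes, dim]
+     img_embs = model.encode_image(images)                  # [batch, dim]
+     return (img_embs @ class_embs.t()).argmax(dim=1)       # predicted class indices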
452
+
453
+ (a) Curation effect: online 61.4 | offline 57.5 | no curation 53.8
+ (b) Curation frequency (# of steps): 50 → 61.0 | 100 → 61.4 | 200 → 61.5 | 300 → 61.1
+ (c) Curation feature: pooled encoder 61.4 | projection output 60.7 | w/ prompts 61.4
+ (d) Threshold t: 0.5 → 60.9 | 0.55 → 61.4 | 0.6 → 61.1 | 0.7 → 59.7
+ (e) Text variants: BERT max len. 32 → 61.4 | BERT max len. 77 → 61.2 | w/o YFCC tag → 59.1 | w/o YFCC tag aug. → 60.8 | BERT first 6 layers → 60.2
+ Table 1. Ablation experiments. We use MoCo-v3 / BERTbase-SimCSE, YFCC15M as data source and report IN-1K Accuracy.
507
+ 4.5. Choice of Pre-trained Models
508
+ We first study the effects of pre-trained encoders.
509
+ As vision encoder, we consider (1) ViT-B/16 [7] (patch
510
+ size of 16×16 pixels) with pre-trained weights from
511
+ self-supervised MoCo-v3 [5], DINO [2] and MAE [10],
512
+ all trained on IN-1K but without any labels.
513
+ To be
514
+ consistent with LiT [38], we also consider (2) super-
515
+ vised ViT(AugReg) [28] B/32, B/16, and L/16 trained
516
+ on ImageNet-21K3. Finally, we also explore weakly-
517
+ supervised ViT-B/16 and ViT-H/14 SWAG [27].
518
+ Vision Model
519
+ Pre-train Obj.
520
+ Pre-train Data
521
+ IN-1K Acc.
522
+ MoCo-v3 [5]
523
+ Contrastive
524
+ IN-1K
525
+ 61.4
526
+ DINO [2]
527
+ Contrastive
528
+ IN-1K
529
+ 60.3
530
+ MAE [10]
531
+ Masking
532
+ IN-1K
533
+ 42.4
534
+ AugReg [28]
535
+ Supervised
536
+ IN-21K
537
+ 69.4
538
+ SWAG [27]
539
+ Weakly-Supervised
540
+ IG 3.6B
541
+ 67.5
542
+ Table 2.
543
+ Ablation study of different vision encoders on ViT-
544
+ B/16 with text encoder as BERTbase-SimCSE on YFCC15M. Pre-
545
+ training objective matters for CiT training.
546
+ Results for different vision encoder weights under the
547
+ same ViT-B/16 architecture are in Table 2. We notice that
548
+ the accuracy of MoCo-v3 (61.4%) and DINO (60.3%) pre-
549
+ training are close, while MAE is worse (42.4%), presum-
550
+ ably because the representations learned by instance dis-
551
+ crimination (MoCo-v3 and DINO), which learns different
552
+ embeddings for different images, is closer to zero-shot clas-
553
+ sification than MAE’s training objective. AugReg performs
554
+ best with 69.4% accuracy, presumably because the super-
555
+ vised pre-training on IN-21K is superior to unsupervised
556
+ IN-1K pre-training. Finally, SWAG is worse than AugReg,
557
+ but better than MoCo-v3. In the following experiments of
558
+ this section, we will show larger variants.
559
+ For text encoder, we consider self-supervised base mod-
560
+ els from (1) language models BERT [6]; and contrastive
561
+ tuned (2) BERT-SimCSE and RoBERTa-SimCSE [8], as
562
+ shown in Table 3.
563
+ Text Model | Pre-training obj. | IN-1K Acc.
+ BERTbase (uncased) [6] | from scratch | 57.7
+ BERTbase (uncased) [6] | SimCSE [8] | 61.4
+ BERTbase (uncased) [6] | BERT NSP [6] | 59.9
+ RoBERTabase [17] | SimCSE [8] | 59.7
+ Table 3. Ablation study of different text encoders with MoCo-v3 on YFCC15M: contrastive pre-training yields better accuracy.
580
+ 3We follow LiT here, but note that using IN-21K is not strictly a zero-
581
+ shot setting, because 999 of the 1000 classes in IN-1K are in IN-21K.
582
+ We observe similar trends as for vision: SimCSE trained
583
+ BERT is better than vanilla BERT or RoBERTa, proba-
584
+ bly because contrastively trained [CLS] token by Sim-
585
+ CSE can perform better text similarity than BERT’s pair-
586
+ wise (a.k.a, next sentence prediction) trained [CLS] token
587
+ or RoBERTa’s no training on [CLS] token.
588
+ 4.6. Ablations
589
+ We adopt the combination of MoCo-v3 ViT B/16 and
590
+ BERT-SimCSE as our default setting. We summarize abla-
591
+ tion experiments of CiT in Table 1.
592
+ Stage of Curation. We first ablate the effects of curation
593
+ in Table 1a. We see that CiT has a 7.6% boost compared
594
+ to no curation. We further ablate a offline curation before
595
+ training. This is sub-optimal as the SimCSE purely pre-
596
+ trained from the text may not learn good representations for
597
+ semantic-level similarity (discussion in §3.1).
598
+ Frequency of Curation. Next, we are interested in how
599
+ frequently curation needs to be performed. Table 1b varies
600
+ the number of steps (and therefore pairs s when multiplied
601
+ with the batch-size) for curation (in Alg. 2). We found that
602
+ curating too frequent or infrequent yields sub-optimal re-
603
+ sults, but the change is marginal so we chose 100 steps as
604
+ default.
605
+ Feature for Curation. In Table 1c, we find that using the
606
+ feature before the projection layer (e.g. the direct output
607
+ of SimCSE) is better than the features from the projection
608
+ layer. This is probably because the projection layer tends
609
+ to be more unstable during training (e.g. randomly initial-
610
+ ized and needs longer training to align with the visual repre-
611
+ sentation), whereas the SimCSE embedding is already pre-
612
+ trained for text similarity.
613
+ Threshold. In Table 1d we ablate the threshold t, which
614
+ controls the trade-off for data quality and quantity. A lower
615
+ threshold adds more low-quality data and a higher threshold
616
+ reduces data quantity, so t = 0.55 is a good balance.
617
+ Text Variants. We ablate the length of text encoders in
618
+ Table 1e to understand the memory/text sequence length
619
+ tradeoff. We find that longer text sequences (77) (we re-
620
+ duce batch size per GPU to half and double the number of
621
+ GPUs) are slightly worse. We also ablate the effectiveness
622
+ of YFCC15M tag augmentation, adopted from LiT. Lastly,
623
+ we are wondering if a shallow (6 layers) BERT-SimCSE is
624
+ also a good text encoder. We obtain 1.2% worse results.
625
+
626
+ Pre-train Data | Method | Vision Encoder | Vision Initialization | w/ Labels | Total Time | IN-1K Acc
+ YFCC15M | CLIP [21] | ResNet-50 | scratch | ✗ | 25 hrs | 31.3
+ YFCC15M | OpenCLIP [31] | ResNet-50 | scratch | ✗ | 25 hrs | 32.7
+ YFCC15M | LiT [38] | ViT-B/16 | DINO [2] | ✗ | n/a | 55.4
+ YFCC15M | LiT [38] | ViT-B/16 | MoCo-v3 [5] | ✗ | n/a | 55.5
+ YFCC15M | LiT [38] | ViT-B/16 | AugReg [28] | IN-21K | n/a | 55.9†
+ YFCC15M | LiT [38] | ViT-B/32 | AugReg [28] | IN-21K | 64 hrs | 59.9*
+ YFCC15M | CiT | ViT-B/16 | DINO [2] | ✗ | n/a | 60.3
+ YFCC15M | CiT | ViT-B/16 | MoCo-v3 [5] | ✗ | 5 hrs | 61.4
+ YFCC15M | CiT | ViT-B/32 | AugReg [28] | IN-21K | 11 hrs | 63.3
+ YFCC15M | CiT | ViT-B/16 | AugReg [28] | IN-21K | 8 hrs | 69.4
+ YFCC15M | CiT | ViT-L/16 | AugReg [28] | IN-21K | 8 hrs | 72.0
+ YFCC15M | CiT | ViT-H/14 | SWAG [27] | IG hashtags | 11 hrs | 73.7
+ YFCC15M+CC12M | LiT [38] | ViT-L/16 | AugReg [28] | IN-21K | 112 hrs | 72.2*
+ YFCC15M+CC12M | CiT | ViT-L/16 | AugReg [28] | IN-21K | 32 hrs | 75.6
+ YFCC100M | LiT [38] | ViT-B/32 | AugReg [28] | IN-21K | 153 hrs | 58.9*
+ YFCC100M | CiT | ViT-B/16 | MoCo-v3 [5] | ✗ | 48 hrs | 64.6
+ YFCC100M | CiT | ViT-B/32 | AugReg [28] | IN-21K | 64 hrs | 65.6
+ YFCC100M | CiT | ViT-B/16 | AugReg [28] | IN-21K | 66 hrs | 72.2
+ YFCC100M | CiT | ViT-L/16 | AugReg [28] | IN-21K | 66 hrs | 74.8
+ YFCC100M | CiT | ViT-H/14 | SWAG [27] | IG hashtags | 62 hrs | 75.5
+ Table 4. Comparison to existing methods on YFCC and CC12M. Under identical vision encoders, CiT achieves +3.2% higher accuracy
+ with YFCC100M than using the human-cleaned YFCC15M subset and +5.9% accuracy over LiT on YFCC15M. * indicates reproduced
+ results with BERTbase (uncased) for fair comparison; see appendix for the implementation differences to original LiT [38]. Total time for
+ training and curation is reported for 16 V100 GPUs and varies depending on quality of embeddings from the vision encoder.
760
+ Method | Vision Encoder | Vision Initialization | w/ Labeled Data | Total Time | IN-1K Acc
+ OpenCLIP | ViT-B/32 | scratch | ✗ | 458 hrs | 62.9
+ OpenCLIP | ViT-B/16 | scratch | ✗ | 981 hrs | 67.1
+ OpenCLIP | ViT-L/14 | scratch | ✗ | 6803 hrs | 72.8
+ LiT [38] | ViT-B/32 | AugReg [28] | IN-21K | 31 hrs | 62.8
+ CiT | ViT-B/16 | MoCo-v3 [5] | ✗ | 26 hrs | 67.1
+ CiT | ViT-B/32 | AugReg [28] | IN-21K | 62 hrs | 67.5
+ CiT | ViT-B/16 | AugReg [28] | IN-21K | 63 hrs | 73.1
+ CiT | ViT-L/16 | AugReg [28] | IN-21K | 27 hrs | 75.8
+ CiT | ViT-H/14 | SWAG [27] | IG hashtags | 26 hrs | 76.4
+ Table 5. CiT on LAION400M: CiT reaches OpenCLIP-level accuracy with 37× total training time improvement.
821
+ 4.7. Comparison to prior work on ImageNet
822
+ We compare CiT with existing contrastive cross-modal
823
+ models in Tables 4 (YFCC and CC12M), 5 (LAION400M)
824
+ and 6 (raw image-text crawl). We report the pre-training
825
+ method (CLIP/LiT/CiT), vision encoder and initialization,
826
+ usage of human-annotated labels, total training time in our
827
+ setup (16 GPUs), as well as the ImageNet 0-shot accuracy.
828
+ YFCC. In Table 4 we report several data points for LiT
829
+ and CiT training with various vision encoders and initial-
830
+ ization.
831
+ On YFCC15M, CiT outperforms LiT on self-
832
+ supervised MoCo-v3 vision encoders by +5.9% accuracy.
833
+ With ViT-B/32 trained with supervised AugReg on IN-21K,
834
+ CiT yields a +3.4% gain over LiT. On YFCC15M+CC12M
835
+ data with ViT-L/16 models, CiT outperforms LiT by +3.4%.
836
+ On YFCC100M we observe that LiT underperforms
837
+ compared to YFCC15M (58.9 vs 59.9), due to cleaning [21]
838
+ of the 15M subset. CiT however can reverse the trend. CiT
839
+ outperforms its counterpart from YFCC15M by 3%+ when
840
+ using the less curated YFCC100M. This indicates human
841
+ cleaning of YFCC100M by CLIP [21] is sub-optimal. The
842
+ performance of CiT on YFCC100M is even +2.6% bet-
843
+ ter than LiT on YFCC15M+CC12M. This trend holds for
844
+ larger image model sizes (ViT-L/H) and stronger initializa-
845
+ tion (AugReg/SWAG), which lead to better accuracy.
846
+ LAION400M. In Table 5 we see that CiT performs better
847
+ than OpenCLIP on LAION400M, while being substantially
848
+ faster. For example, CiT with ViT-B/16 MoCo-v3 vision
849
+ encoder performs as good as OpenCLIP but is 37.7×faster
850
+ in training. With more advanced initialization and larger
851
+ ViT-L models, CiT is 283× faster and 3% more accurate,
852
+ producing 75.8% in 1.1 days with a 16 GPU setup, while
853
+ OpenCLIP would take ∼283 days for an accuracy of 72.8%.
854
+ We note that this extreme speedup comes with the caveat
855
+ that our approach curates data with respect to downstream
856
+ tasks; therefore, CiT only uses 26 hours for training, com-
857
+ pared to 981 hours for OpenCLIP pre-training.
858
+ Raw Image-Text Crawl.
859
+ We further test CiT on our
860
+ raw image-text crawl containing 1.2B unfiltered image-text
861
+ pairs from the web (about 343M pairs have English text).
862
+
863
+ Method | Vision Encoder | Vision Initialization | w/ Labeled Data | Total Time | IN-1K Acc.
+ OpenCLIP | ViT-B/16 | from scratch | ✗ | n/a | NaN loss
+ LiT | ViT-B/16 | MoCo-v3 [5] | ✗ | n/a | NaN loss
+ LiT (English filter) | ViT-B/16 | MoCo-v3 [5] | ✗ | 65 hrs | 56.7
+ CiT | ViT-B/16 | MoCo-v3 [5] | ✗ | 39 hrs | 68.7
+ CiT | ViT-B/32 | AugReg [28] | IN-21K | 69 hrs | 68.4
+ CiT | ViT-B/16 | AugReg [28] | IN-21K | 72 hrs | 75.2
+ CiT | ViT-L/16 | AugReg [28] | IN-21K | 105 hrs | 77.9
+ CiT | ViT-H/14 | SWAG [27] | IG hashtags | 43 hrs | 77.4
+ Table 6. CiT on Raw Image-Text Crawl: CiT is able to produce strong results when learning from raw image-text data. The raw data
+ contains 1.2B image-text pairs. An English language filter, which reduces the data to 343M pairs, is required to stabilize LiT training.
919
+ Columns: Time (hrs), then accuracy on Food-101, CIFAR10, CIFAR100, CUB, SUN397, Cars, Aircraft, DTD, Pets, Caltech-101, Flowers, MNIST, FER-2013, STL-10, EuroSAT, RESISC45, GTSRB, KITTI, Country211, PCAM, UCF101, Kinetics700, CLEVR, HatefulMemes, SST2, ImageNet, and the average (Avg).
+ CLIP [20,21] | 27 | 50.6 66.0 34.5 38.8 51.1 4.0 5.4 21.2 28.5 60.9 53.3 8.4 17.3 90.5 30.2 21.5 6.1 35.1 10.5 53.5 28.5 22.1 10.8 52.4 50.7 37.6 | Avg 34.2
+ SLIP [20] | 41 | 59.5 78.6 45.2 38.7 53.4 5.4 5.7 26.1 31.1 71.0 56.6 9.8 19.6 94.4 20.3 28.9 14.5 34.0 11.6 55.4 37.7 26.9 17.5 52.8 51.1 42.8 | Avg 38.0
+ CiT-1K-meta | 5 | 45.6 81.0 49.9 30.4 44.9 6.3 8.3 26.8 80.0 71.2 25.1 7.3 26.0 95.2 19.1 14.3 6.9 22.2 6.2 54.1 34.7 24.7 13.4 50.7 50.1 61.2 | Avg 38.5
+ CiT-21K-meta | 15 | 51.2 84.4 53.5 45.7 52.3 7.6 9.0 31.6 69.2 73.8 56.1 10.6 24.5 95.7 30.1 23.4 7.9 28.5 9.2 51.0 39.5 28.7 15.0 49.3 49.1 57.4 | Avg 40.6
+ CiT-multi-meta | 11 | 51.3 81.8 50.5 50.7 51.6 9.5 14.6 30.8 75.6 73.3 58.7 10.3 26.2 95.6 23.2 19.1 7.8 14.6 9.4 50.8 39.7 28.0 14.7 52.8 50.0 58.8 | Avg 40.4
+ CiT-sep.-meta | 7 | 59.1 82.2 55.2 56.6 50.7 13.0 13.1 32.8 74.8 77.6 65.9 16.9 13.8 96.3 17.1 21.6 7.6 40.6 9.4 53.5 42.7 27.8 14.2 52.2 50.9 50.7 | Avg 42.2
+ Table 7. CiT on 26 zero-shot benchmarks when trained on YFCC15M. We vary metadata from IN-1K, IN-21K, combined (multi) as well
+ as separate (sep.) across 26 tasks. All methods use ViT-B/16 and we use MoCo-v3 vision initialization. Larger encoders are in Table 12.
971
+ The data contains a large degree of noise. Results are shown
972
+ in Table 6. To understand the challenge of training on raw
973
+ image-text pairs, we run CLIP and LiT training on the raw
974
+ image-text pairs. This yields unstable training that quickly
975
+ reaches NaN loss for both a CLIP and LiT training. We be-
976
+ lieve some noisy pairs are unhealthy for training. By using
977
+ our English filter to clean the text, we can train LiT and it
978
+ reaches 56.7% IN-1K zero-shot accuracy. Training our CiT
979
+ (without even using an English filter) achieves 68.7% which
980
+ is +12.0% higher. This indicates raw and very noisy image-
981
+ text pairs lead to poor accuracy, but CiT can overcome this
982
+ and curate high-quality data for vision-language learning.
983
+ Surprisingly,
984
+ as shown in Table 5,
985
+ CiT achieves
986
+ much better performance than OpenCLIP trained on
987
+ LAION400M. CiT on raw image-text reaches 77.9%, which
988
+ is +5.1% better than OpenCLIP ViT-L/14 (cf. Table 5).
989
+ Note that our source is raw, with multilingual texts, whereas
990
+ LAION400M is a curated English-only dataset filtered by
991
+ the CLIP model. The training data used by CiT (e.g. 131M
992
+ for 77.9%) is just around 1/5 of the scale of LAION400M
993
+ dataset (one epoch), showing the effectiveness of curating
994
+ training data.
995
+ 4.8. Comparison across 26 benchmarks
996
+ We extend CiT to 26 common 0-shot evaluation tasks for
997
+ CLIP/SLIP models [20] on the public dataset YFCC15M.
998
+ We provide more comparisons with further encoders as well
999
+ as pre-training on LAION400M in the appendix. We evalu-
1000
+ ate with prompts from CLIP/SLIP. For ImageNet, we drop
1001
+ the extra prompts used by LiT for a fair comparison with the
1002
+ baselines. We use three setups of metadata: (i) IN-1K, (ii)
1003
+ IN-21K, and (iii) multi-task CiT that combines class names
1004
+ from all 26 tasks (iv) we run every task separately on a sin-
1005
+ gle GPU as a low-compute setup (this trains a model for
1006
+ each task with separate metadata). Results are in Table 7
1007
+ and discussed next.
1008
+ We first evaluate CiT trained with IN-1K metadata on all
1009
+ 26 tasks. As expected accuracy on ImageNet and Pets is
1010
+ highest among the metadata variants (i-iv). Overall, we ob-
1011
+ serve that CiT 1K meta already exhibits certain generality
1012
+ to all tasks and can outperform CLIP (34.2 vs. 38.5%) and
1013
+ is similar to SLIP, but 8.2× faster (5 vs. 41 hours), demon-
1014
+ strating its efficiency.
1015
+ Next, we explore the WordNet lemma from ImageNet-
1016
+ 21K as a relatively general metadata for training CiT. In
1017
+ Table 7, CiT-21K-meta improves broadly over IN-1K lead-
1018
+ ing to 40.6% average accuracy, showing that a more general
1019
+ taxonomy works well across tasks.
1020
+ We combine the taxonomies from all 26 tasks in CiT-
1021
+ multi-meta. This allows us to curate training data for all 26
1022
+ tasks at again almost no extra training cost. We notice that
1023
+ multi-task CiT is on average similarly accurate as IN-21K
1024
+ metadata (40.4% vs. 40.6%) and converges faster because
1025
+ CiT is more targeted towards tasks of interest.
1026
+ Finally, we compare a setup that trains a model for
1027
+ each task with separate metadata.
1028
+ CiT-sep.-meta in Ta-
1029
+ ble 7 achieves overall the best average accuracy of 42.2%
1030
+ across tasks. This setup uses a restricted 1-GPU setting
1031
+ to save compute and could be boosted further with longer
1032
+ training. We think that this scenario might be quite practi-
1033
+ cal, where some domain data exists (e.g. on bird images in
1034
+ CUB) and one wants to build a classification system given
1035
+ a large amount of noisy image-text data from the web.
1036
+
1037
+ Step (c) | Text | ImageNet Class | Cosine Sim.
+ 0 | title: “Wollaston Beach” | beach | 0.739
+ 100 | title: “tn_kif_3128” | Vizsla | 0.779
+ 1000 | tag: “beach plumisland parker river national wildlife refuge newburyport massachusetts ocean” | beach | 0.716
+ 2000 | desc: “These guys were nice, told me all about this and other planes of the show, but unfortunately...” | military aircraft | 0.725
+ 3000 | title: “Turtle” | terrapin | 0.725
+ 4000 | desc: “One of the fountains close by the south west entrance to the park” | fountain | 0.734
+ 5000 | title: “butterfly” | Papillon | 0.735
+ 5000 | tag: “ash;explosion;sakurajima;kagoshima;桜島;鹿児島県;volcano;tarumizu;垂水市;japan;eruption;日本” | volcano | 0.645
+ Table 8. Samples of curated text over training steps (c) from YFCC100M. CiT uses MoCo-v3 initialized vision encoder.
1074
+ 4.9. Further Analysis
1075
+ Samples of Curated Data. We further investigate samples
1076
+ curated by CiT on YFCC100M dataset in Table 8. We show
1077
+ training steps, a sample text, the related ImageNet meta-
1078
+ data, as well as the cosine similarity in CiT’s data proxy.
1079
+ At step c = 0 CiT’s data proxy tends to select text with
1080
+ similar length as class names and string-matching behavior;
1081
+ the short-term run of CiT (e.g. c = 100) has some match-
1082
+ ing issues with many false positives. Later on, CiT starts to
1083
+ select texts of various lengths with similar semantics as the
1084
+ metadata. We do not observe any clearly less useful sam-
1085
+ ples such as file names after c = 2000. Interestingly, CiT
1086
+ can even use the English part of mixed language texts from
1087
+ YFCC100M (as in the last example).
1088
+ Speed/accuracy trade-off.
1089
+ In Figure 2, we show the
1090
+ speed/accuracy tradeoff of CiT vs. LiT [38], corresponding
1091
+ to results in Table 4. We see that CiT achieves a win-win
1092
+ scenario compared to LiT on identical AugReg ViT-B/32 vi-
1093
+ sion encoders: a +3.4% higher accuracy on ImageNet, and
1094
+ a 5× faster total training time (including the curation time) on data YFCC15M [21].
1096
+ [Figure 2: ImageNet Acc. vs. Total Training Time (hours), comparing CiT and LiT.]
+ Figure 2. CiT provides >5× speedup and +3.4% accuracy gain over LiT [38] on AugReg ViT-B/32 vision encoders. Training data
+ is YFCC15M. Models are evaluated at 6 evenly sampled iterations.
1119
+ [Figure 3: Ratio of Curation for Training vs. training step, for thresholds t=0.5, t=0.55, t=0.6.]
+ Figure 3. Ratio of curation under different thresholds t. CiT broadly uses data first and curates more towards the end of training.
1141
+ Ratio of Curation. We are interested in the training dy-
1142
+ namics of CiT. We use different curation thresholds t and
1143
+ inspect the amount of curated training data. In Figure 3, we
1144
+ see that the ratio of curation which corresponds to the frac-
1145
+ tion of used training samples from the raw data source, see
1146
+ §3.3, keeps changing over steps for curation/training. Ini-
1147
+ tially, CiT uses more data, e.g. for a threshold of t = 0.5,
1148
+ it peaks at about 75%. In this phase, the latent space of the
1149
+ text encoder is less aligned with the vision latents. Later on
1150
+ during training, CiT starts to produce embeddings that bet-
1151
+ ter represent the downstream task, producing a lower ratio.
1152
+ 5. Conclusion
1153
+ This paper contributes CiT, a novel learning algorithm
1154
+ for efficient pre-training from noisy image-text data. CiT
1155
+ incorporates a curation process into learning to pull the
1156
+ training data distribution closer to downstream tasks. Our
1157
+ experiments demonstrate both significant accuracy and
1158
+ training time improvements when learning from either pub-
1159
+ lic or our own uncurated data from the web. We observe
1160
+ that training on the raw image-text pairs in YFCC can
1161
+ achieve better accuracy over the cleaned version from a
1162
+ hand-crafted filter pipeline. Further, we show that CiT can
1163
+ train with raw image-text pairs crawled from the web, which
1164
+ would lead to instability for vanilla pre-training objectives.
1165
+ Acknowledgement. We thank Norman Mu, Shang-Wen Li,
1166
+ Vasu Sharma, Wojciech Galuba and Max Bain for help.
1167
+
1168
+ A. Appendix
1169
+ In this appendix, §A.1 contains implementation details
1170
+ and §A.2 contains further results as well as ablations.
1171
+ A.1. Implementation Details
1172
+ A.1.1
1173
+ PyTorch Pseudo Code
1174
+ To facilitate implementation of CiT, we provide the PyTorch
1175
+ pseudo-code in Algorithm 4 below.
1176
+ Algorithm 4: CiT: PyTorch Pseudo Code
+ # b: maximum training steps as budget.
+ # d: raw data loader.
+ # t_meta: textual metadata.
+ # bsz: batch_size.
+ # t: threshold.
+ # gamma: target ratio for curation.
+ # s: number of expected pairs.
+ 
+ c = 0
+ while c < b:
+     # re-curate every s//bsz training steps
+     if c % int(s // bsz) == 0:
+         x_meta = model(t_meta)            # embed metadata with the current text encoder
+         x_meta = normalize(x_meta)
+         d_c = []                          # curated pool of image-text pairs
+         while len(d_c) < s:
+             x_imgs, x_txts = next(d)
+             x_txts = model(x_txts)
+             x_txts = normalize(x_txts)
+             v = x_txts @ x_meta.t()       # text-to-metadata similarities
+             sel = max(v) > t              # per-text max over metadata vs. threshold
+             b_ratio = sum(sel) / len(sel)
+             if b_ratio < gamma:
+                 # guarantee the minimal curation ratio via top-k selection
+                 sel = max(v).topk(k=int(bsz * gamma), dim=0)
+             d_c.extend((x_imgs[sel], x_txts[sel]))
+ 
+     # training loop over the curated pairs
+     for (x_imgs, x_txts) in batchify(d_c):
+         x_imgs, x_txts = model(x_imgs, x_txts)
+         x_imgs, x_txts = normalize(x_imgs, x_txts)
+         # scale: learnable log logit scale
+         l = exp(scale) * x_imgs @ x_txts.t()
+         labels = arange(bsz)
+         loss = cross_entropy(l, labels)
+         loss.backward()
+         c += 1
1245
+ A.1.2
1246
+ Dataloader Implementation
1247
+ For efficiency, we only load text during the curation loop
1248
+ and the training loop uses the curated indices to reload the
1249
+ full image-text pairs. Our implementation also supports in-
1250
+ memory storage of curated image-text pairs in case the data
1251
+ source is not randomly accessible for (re-)loading curated
1252
+ data, where all s pairs of training data can be stored in
1253
+ the CPU memory with image tensors represented as uint8
1254
+ data. We use a larger batch size for curation (compared to
1255
+ training) to speed up CiT.
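+ As a rough sketch of this index-based reloading scheme (our own illustration, not the actual CiT dataloader; the dataset classes and the load_pair callable are hypothetical names), a text-only view can be scanned during curation, and the selected global indices can then drive a second view that loads full image-text pairs for training:
+ from torch.utils.data import Dataset
+ 
+ class TextOnlyView(Dataset):
+     """Curation view: yields (global_index, caption) so no image files are read."""
+     def __init__(self, captions):
+         self.captions = captions              # list of raw caption strings
+     def __len__(self):
+         return len(self.captions)
+     def __getitem__(self, i):
+         return i, self.captions[i]
+ 
+ class CuratedPairView(Dataset):
+     """Training view: reloads full image-text pairs only for the curated indices."""
+     def __init__(self, load_pair, curated_indices):
+         self.load_pair = load_pair            # callable: global index -> (image_tensor, caption)
+         self.idx = list(curated_indices)
+     def __len__(self):
+         return len(self.idx)
+     def __getitem__(self, i):
+         return self.load_pair(self.idx[i])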
1256
+ Hyperparameter | Value
+ Optimizer | AdamW
+ Optimizer momentum | β1 = 0.9, β2 = 0.999
+ Optimizer ϵ | 1e-8
+ Weight Decay (proj.) | 1.0
+ Weight Decay (other) | 0.2
+ Base Learning Rate | 5e-4
+ Learning Rate Schedule | cosine decay
+ Minimum Learning Rate | 1e-5
+ Gradient Clipping | None
+ Warm-up % of Train Steps | 4%
+ Batch size | 16,384
+ GPUs | 16 Nvidia V100 32GB GPUs
+ Precision | float16
+ Max BERT len. | 32
+ Train Aug. | RandomResizedCrop(224, scale=(0.5, 1.0))
+ YFCC15M/YFCC100M Aug. | shuffle/join tags [38]
+ Eval Aug. | Resize(256), CenterCrop(224)
+ AugReg rgb Mean | (0.5, 0.5, 0.5)
+ AugReg rgb Std. | (0.5, 0.5, 0.5)
+ Other encoder rgb Mean | (0.485, 0.456, 0.406)
+ Other encoder rgb Std. | (0.229, 0.224, 0.225)
+ Table 9. Hyperparameters of CiT Training.
1301
+ Data Source | Metadata | b | t | γ
+ YFCC15M | IN-1K | 5K | 0.55 | 0.003
+ YFCC15M | IN-21K | 8K | 0.55 | 0.003
+ YFCC15M | multi. | 8K | 0.55 | 0.003
+ YFCC100M | IN-1K | 5K | 0.7 | 0.01
+ LAION400M | IN-1K | 5K | 0.6 | 0.01
+ LAION400M | IN-21K | 30K | 0.65 | 0.01
+ LAION400M | multi. | 16K | 0.6 | 0.01
+ RAW IMG-TXT | IN-1K | 8K | 0.7 | 0.003
+ RAW IMG-TXT | IN-21K | 60K | 0.75 | 0.003
+ RAW IMG-TXT | multi. | 30K | 0.7 | 0.003
+ Table 10. Hyperparameters of CiT Curation.
1348
+ A.1.3
1349
+ Detailed Implementation Settings
1350
+ The hyper-parameters of CiT training are shown in Table 9.
1351
+ We mostly follow [20, 21, 38]. CiT is trained on 16 GPUs
1352
+ with a global batch size of 16,384 (1024 per GPU).
1353
+ Hyperparameters for CiT curation outlined in §3 of the
1354
+ main paper are shown in Table 10. We use different thresh-
1355
+ olds t and minimal ratios γ for each dataset/metadata com-
1356
+ bination to fit the training into a budget b shown in the table
1357
+ as well. We use the same values for all variants of vision en-
1358
+ coders. Due to smaller size, we use a lower t for YFCC15M
1359
+ and CC12M, whereas for YFCC100M and Raw Img-Text
1360
+ Crawl we use a higher t to focus on high-quality data from
1361
+ the raw data source, in order to roughly meet the budget b.
1362
+ Single GPU Setting.
1363
+ We provide more details on the im-
1364
+ plementation of the extremely efficient single GPU setup
1365
+ used for zero-shot evaluation on multiple tasks in Table 12.
1366
+ We can fit a batch size of 1,536 into a single 32GB V100
1367
+ GPU and train for b = 5000 steps. To ensure the training
1368
+ can be finished quickly, we set γ = 0.05. Further to re-
1369
+ duce the chance of using the minimal ratio during curation,
1370
+
1371
+ we perform a pre-curation on YFCC15M for each task us-
1372
+ ing BERT-SimCSE with a threshold of 0.45 to remove pairs
1373
+ with low relevance.
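+ A minimal sketch of such a pre-curation filter (our own illustration; encode_text is a hypothetical helper standing in for the BERT-SimCSE encoder): pairs whose caption matches no task class name above the 0.45 similarity threshold are dropped before the single-GPU run.
+ import torch
+ import torch.nn.functional as F
+ 
+ def precurate(captions, class_names, encode_text, threshold=0.45):
+     # encode_text: callable mapping a list of strings to an [N, d] tensor of embeddings
+     cap_emb = F.normalize(encode_text(captions), dim=-1)
+     cls_emb = F.normalize(encode_text(class_names), dim=-1)
+     best_sim = (cap_emb @ cls_emb.t()).max(dim=1).values
+     keep = (best_sim > threshold).nonzero(as_tuple=True)[0]
+     return keep.tolist()    # indices of image-text pairs kept for this task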
1374
+ A.1.4
1375
+ Implementation Differences from LiT
1376
+ While we aim for a close reproduction of LiT [38], there are
1377
+ a few tricks that our implementation does not incorporate
1378
+ and we suspect the differences on our LiT reproduction on
1379
+ YFCC stem from those. Below we list some tricks known
1380
+ to us, but there could be more differences we are not aware
1381
+ of since we have no access to LiT’s full preprocessing and
1382
+ training code.
1383
+ Preprocessing. For the captions, LiT performs extra filter-
1384
+ ing and removes titles that start with “DSC”, “IMG”, “Pic-
1385
+ ture”. Also, LiT removes text consisting of only the word
1386
+ “image” or text that contains a large fraction of digits.
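+ A small sketch of this kind of caption filter (our own approximation; the digit-fraction threshold is an illustrative choice, not LiT’s exact rule):
+ def keep_caption(text, max_digit_fraction=0.5):
+     stripped = text.strip()
+     if stripped.lower() == "image":                       # text that is only the word "image"
+         return False
+     if stripped.startswith(("DSC", "IMG", "Picture")):    # camera-style file titles
+         return False
+     digits = sum(ch.isdigit() for ch in stripped)
+     if stripped and digits / len(stripped) > max_digit_fraction:
+         return False
+     return True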
1387
+ Joint Contrastive Loss. LiT adopts a joint contrastive loss
1388
+ over 3 text fields in YFCC15M and shows the gain in Figure
1389
+ 8 of the LiT paper [38]. Since this technique is specific to
1390
+ the type of captions in the specific YFCC data, we remove
1391
+ it from our implementation and randomly sample one of the
1392
+ three text fields to pair with a training image.
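+ In other words, instead of a joint loss over the three fields, each image is paired with one randomly chosen field per step, e.g. (a trivial sketch; the argument names are placeholders for the YFCC title/description/tags):
+ import random
+ 
+ def sample_caption(title, description, joined_tags):
+     # pick one of the three YFCC text fields to pair with the image this step
+     return random.choice([title, description, joined_tags])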
1393
+ Text encoder.
1394
+ LiT adopts various text encoders such
1395
+ as BERTbase and BERTlarge. This work consistently uses
1396
+ BERTbase for all main results to have a fair comparison.
1397
+ A.2. Additional Results
1398
+ This section extends the results of CiT in the main pa-
1399
+ per to full results across 26 CLIP/SLIP benchmarks on
1400
+ YFCC15M and LAION400M and an extra ablation study.
1401
+ A.2.1
1402
+ Full Results on YFCC15M
1403
+ We show the full results of Table 7 above in Table 12
1404
+ below.
1405
+ On average, CiT-multi-meta (52.6) is slightly
1406
+ better than CiT-21K-meta (51.7), which is better than
1407
+ CiT-sep-meta and CiT-1K-meta (47.2).
1408
+ It appears that
1409
+ the broader ImageNet-21K wordnet taxonomy works well
1410
+ across datasets, and combining metadata from all down-
1411
+ stream tasks is only slightly better than that. We note that
1412
+ training on the larger metadata does not introduce much
1413
+ extra curation compute since forwarding the raw exam-
1414
+ ples takes the majority of computation. Nevertheless, we
1415
+ observe that larger metadata takes longer to converge and
1416
+ therefore increase the training budget to b = 8000 for CiT-
1417
+ 21K-meta and CiT-multi-meta. We expect larger budgets
1418
+ will lead to even better results.
1419
+ Besides what was already discussed in the main paper,
1420
+ we observe that CiT performs even better on larger mod-
1421
+ els or models trained with supervised (AugReg IN-21K) or
1422
+ weakly supervised (SWAG) data than the unsupervisedly
1423
+ (a) Evaluation Prompts: CLIP+LiT prompts 61.4 | CLIP prompts only 61.2
+ (b) Training Objective: img2txt obj. 61.4 | CLIP obj. 61.2
+ Table 11. Additional ablation experiments. We use the default setup (MoCo-v3 / BERTbase-SimCSE) and YFCC15M as data source and report IN-1K Accuracy.
1442
+ pre-trained MoCo-v3 on IN-1K. Out-of-domain issues (e.g.
1443
+ MNIST) are present even for larger vision encoders.
1444
+ A.2.2
1445
+ Full Results on LAION400M
1446
+ In Table 13, we show the result of CiT trained on
1447
+ LAION400M and evaluated on 26 CLIP/SLIP benchmarks.
1448
+ With a larger data source, we realize CiT takes more time
1449
+ to converge especially with more metadata, which can be
1450
+ attributed to more data meeting the curation criteria. We set
1451
+ b = 16000 for CiT-multi-meta and b = 30000 for CiT-21K-
1452
+ meta. The trend is similar to YFCC15M but with better
1453
+ performance across the benchmarks. Similarly to Table 12,
1454
+ CiT-multi-meta is better than CiT-21K-meta, but this time
1455
+ the gap is larger. In addition to the longer training, we be-
1456
+ lieve that the combined metadata from 26 benchmarks are
1457
+ more effective on larger pre-training data.
1458
+ A.2.3
1459
+ Full Results on Raw Image-Text Crawl
1460
+ In Table 14, we show the result of CiT trained on our raw
1461
+ image-text crawl and evaluated on 26 benchmarks. With a
1462
+ larger raw data source, we realize CiT takes more time to
1463
+ converge. We set b = 30000 for CiT-multi-meta and b =
1464
+ 60000 for CiT-21K-meta. The trend is similar to LAION-
1465
+ 400M but raw Image-Text Crawl is not cleaned for vision-
1466
+ language association. Similarly to Table 13, CiT-multi-
1467
+ meta is better than CiT-21K-meta, but the gap is larger. We
1468
+ expect better accuracy for longer training.
1469
+ A.2.4
1470
+ Additional Ablations
1471
+ This section extends ablations in Table 1 of the main paper
1472
+ to (i) evaluation prompts and (ii) training objectives.
1473
+ Evaluation Prompts. We first verify the effects of LiT’s
1474
+ extra prompts on CiT in Table 11a. We obtain a +0.2% gain
1475
+ by adding them to the CLIP prompts.
1476
+ Training Objective. We ablate the Limg2txt training objec-
1477
+ tive which our approach uses (see §3.2 of the main paper).
1478
+ In Table 11b we see that this variant provides a +0.2% gain
1479
+ over CLIP’s objective that also incorporates a text2img loss.
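+ For concreteness, a sketch of the two objectives being compared (standard InfoNCE forms written by us, assuming L2-normalized embeddings; not code taken from the CiT release):
+ import torch
+ import torch.nn.functional as F
+ 
+ def img2txt_loss(img_emb, txt_emb, logit_scale):
+     # one-directional L_img2txt: each image must retrieve its paired text
+     logits = logit_scale * img_emb @ txt_emb.t()
+     labels = torch.arange(img_emb.size(0), device=img_emb.device)
+     return F.cross_entropy(logits, labels)
+ 
+ def clip_loss(img_emb, txt_emb, logit_scale):
+     # symmetric CLIP objective: average of image-to-text and text-to-image losses
+     logits = logit_scale * img_emb @ txt_emb.t()
+     labels = torch.arange(img_emb.size(0), device=img_emb.device)
+     return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))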
1480
+
1481
+ Columns: Vis. Encoder | Init. | Hrs | Food-101 | CIFAR10 | CIFAR100 | CUB | SUN397 | Cars | Aircraft | DTD | Pets | Caltech-101 | Flowers | MNIST | FER-2013 | STL-10 | EuroSAT | RESISC45 | GTSRB | KITTI | Country211 | PCAM | UCF101 | Kinetics700 | CLEVR | HatefulMemes | SST2 | ImageNet | Avg
1510
+ CLIP [20,21]
1511
+ ViT-B/16
1512
+ scratch
1513
+ 27
1514
+ 50.6 66.0 34.5 38.8 51.1 4.0
1515
+ 5.4 21.2 28.5 60.9 53.3 8.4 17.3 90.5 30.2 21.5 6.1 35.1 10.5 53.5 28.5 22.1 10.8 52.4 50.7 37.6 34.2
1516
+ ViT-L/16
1517
+ scratch
1518
+ 189 59.5 72.9 41.5 40.3 53.6 6.9
1519
+ 6.4 20.6 27.9 65.4 55.0 10.3 34.5 94.2 22.7 28.8 5.8 41.4 12.6 54.9 34.3 24.0 12.9 54.3 50.1 40.4 37.4
1520
+ SLIP [20]
1521
+ ViT-B/16
1522
+ scratch
1523
+ 41
1524
+ 59.5 78.6 45.2 38.7 53.4 5.4
1525
+ 5.7 26.1 31.1 71.0 56.6 9.8 19.6 94.4 20.3 28.9 14.5 34.0 11.6 55.4 37.7 26.9 17.5 52.8 51.1 42.8 38.0
1526
+ ViT-L/16
1527
+ scratch
1528
+ 284 64.4 87.8 56.4 39.8 58.9 8.6
1529
+ 7.8 26.8 32.0 76.6 59.4 13.2 36.0 96.6 27.7 36.5 7.2 28.8 15.6 54.4 42.6 30.0 14.1 53.4 50.1 46.2 41.2
1530
+ CiT-1K-meta
1531
+ ViT-B/16
1532
+ MoCo-v3
1533
+ 5
1534
+ 45.6 81.0 49.9 30.4 44.9 6.3
1535
+ 8.3 26.8 80.0 71.2 25.1 7.3 26.0 95.2 19.1 14.3 6.9 22.2 6.2 54.1 34.7 24.7 13.4 50.7 50.1 61.2 38.5
1536
+ ViT-B/16
1537
+ AugReg
1538
+ 8
1539
+ 57.9 92.3 74.2 36.9 52.5 7.7
1540
+ 5.6 25.2 77.9 84.5 38.8 8.3 31.2 94.4 16.6 24.3 6.5 17.2 6.4 59.1 47.8 32.2 13.3 52.0 50.1 68.9 41.6
1541
+ ViT-L/16
1542
+ AugReg
1543
+ 8
1544
+ 60.0 93.6 77.8 36.3 54.0 9.0
1545
+ 5.7 25.6 79.8 87.3 45.2 9.7 29.2 96.1 20.9 32.8 7.0 36.0 7.6 52.8 51.5 35.2 12.6 53.0 49.7 71.6 43.8
1546
+ ViT-H/14
1547
+ SWAG
1548
+ 11
1549
+ 79.0 91.6 68.1 35.3 56.9 26.2 12.5 30.0 88.8 86.4 47.6 8.1 31.3 97.8 27.6 46.4 7.3 34.2 14.5 50.3 54.7 43.8 12.3 51.8 51.0 73.3 47.2
1550
+ CiT-21K-meta
1551
+ ViT-B/16
1552
+ MoCo-v3 15
1553
+ 51.2 84.4 53.5 45.7 52.3 7.6
1554
+ 9.0 31.6 69.2 73.8 56.1 10.6 24.5 95.7 30.1 23.4 7.9 28.5 9.2 51.0 39.5 28.7 15.0 49.3 49.1 57.4 40.6
1555
+ ViT-B/16
1556
+ AugReg
1557
+ 23
1558
+ 75.3 93.8 75.7 57.8 59.8 9.7 10.1 35.4 68.3 87.9 74.3 12.1 27.4 97.1 30.8 30.6 7.3 24.3 9.9 50.5 54.7 37.4 13.6 53.8 50.1 63.7 46.6
1559
+ ViT-L/16
1560
+ AugReg
1561
+ 29
1562
+ 78.9 95.1 78.6 60.5 61.9 11.6 10.9 35.1 74.2 90.5 75.4 14.8 34.8 98.0 24.7 35.5 7.5 25.7 10.9 50.8 57.4 40.7 14.8 49.9 48.7 67.7 48.3
1563
+ ViT-H/14
1564
+ SWAG
1565
+ 39
1566
+ 92.2 92.9 70.9 59.0 64.7 36.9 14.9 40.3 87.7 90.9 77.4 10.1 32.7 99.1 38.8 53.2 9.3 15.9 20.5 50.7 62.2 49.4 12.9 46.8 44.2 71.4 51.7
1567
+ CiT-multi-meta
1568
+ ViT-B/16
1569
+ MoCo-v3 11
1570
+ 51.3 81.8 50.5 50.7 51.6 9.5 14.6 30.8 75.6 73.3 58.7 10.3 26.2 95.6 23.2 19.1 7.8 14.6 9.4 50.8 39.7 28.0 14.7 52.8 50.0 58.8 40.4
1571
+ ViT-B/16
1572
+ AugReg
1573
+ 11
1574
+ 77.8 94.0 76.5 63.9 60.1 10.3 13.1 35.2 79.0 88.9 79.4 12.2 33.0 96.2 31.6 29.3 10.2 17.4 9.6 50.8 56.0 38.0 12.5 55.8 47.8 67.0 47.9
1575
+ ViT-L/16
1576
+ AugReg
1577
+ 16
1578
+ 80.4 95.3 79.4 65.6 61.9 13.3 11.3 35.1 79.9 90.6 80.1 10.7 37.8 97.4 29.3 35.0 7.8 13.8 10.7 49.7 59.5 41.3 13.0 54.5 47.9 70.5 48.9
1579
+ ViT-H/14
1580
+ SWAG
1581
+ 31
1582
+ 91.8 90.7 71.3 65.6 62.4 47.9 19.7 40.8 91.7 91.3 81.2 10.7 37.5 98.0 23.9 46.4 11.0 12.4 20.2 51.3 64.3 50.2 13.5 54.6 47.1 73.4 52.6
1583
+ CiT-sep.-meta (single GPU)
1584
+ ViT-B/16
1585
+ MoCo-v3
1586
+ 4
1587
+ 59.1 82.2 55.2 56.6 50.7 13.0 13.1 32.8 74.8 77.6 65.9 16.9 13.8 96.3 17.1 21.6 7.6 40.6 9.4 53.5 42.7 27.8 14.2 52.2 50.9 50.7 42.2
1588
+ ViT-B/16
1589
+ AugReg
1590
+ 5
1591
+ 79.1 94.4 75.2 73.8 60.6 19.4 17.4 36.6 78.1 88.0 79.8 12.4 39.2 97.0 31.1 29.1 11.1 30.1 9.9 51.9 54.9 37.1 19.2 52.5 50.0 56.8 49.4
1592
+ ViT-L/16
1593
+ AugReg
1594
+ 7
1595
+ 83.8 94.8 79.6 76.9 60.4 19.6 17.2 36.0 77.8 89.6 82.2 12.1 39.0 96.7 24.8 31.2 9.7 26.9 10.7 57.6 59.1 39.9 14.9 46.8 51.2 60.1 49.9
1596
+ ViT-H/14
1597
+ SWAG
1598
+ 11
1599
+ 92.1 89.9 71.8 71.3 65.4 52.0 20.9 38.7 90.6 90.4 84.8 15.1 30.6 92.8 26.8 47.1 13.4 34.8 20.8 59.4 65.8 50.1 14.0 48.5 51.7 67.0 54.1
1600
+ Table 12. CiT trained on YFCC15M and evaluated on 26 CLIP/SLIP benchmarks: we vary metadata on IN-1K, IN-21K and combined class
1601
+ names on 26 tasks (CiT-multi-meta) with a single training and run 26 separate training on each task with a single GPU (CiT-sep.-meta).
1602
+ Columns: Vis. Encoder | Init. | Hrs | Food-101 | CIFAR10 | CIFAR100 | CUB | SUN397 | Cars | Aircraft | DTD | Pets | Caltech-101 | Flowers | MNIST | FER-2013 | STL-10 | EuroSAT | RESISC45 | GTSRB | KITTI | Country211 | PCAM | UCF101 | Kinetics700 | CLEVR | HatefulMemes | SST2 | ImageNet | Avg
1631
+ CLIP (WIT400M) [21]
1632
+ ViT-B/32
1633
+ scratch
1634
+ 458
1635
+ 84.4 91.3 65.1 37.8 63.2 59.4 21.2 44.5 87.0 87.9 66.7 51.9 47.3 97.2 49.4 60.3 32.2 39.4 17.8 58.4 64.5 47.8 24.8 57.6 59.6 63.2 56.9
1636
+ ViT-B/16
1637
+ scratch
1638
+ 981
1639
+ 89.2 91.6 68.7 39.1 65.2 65.6 27.1 46.0 88.9 89.3 70.4 56.0 52.7 98.2 54.1 65.5 43.3 44.0 23.3 48.1 69.8 52.4 23.4 61.7 59.8 68.6 60.1
1640
+ ViT-L/14
1641
+ scratch
1642
+ 6803 92.9 96.2 77.9 48.3 67.7 77.3 36.1 55.3 93.5 92.6 78.7 87.2 57.5 99.3 59.9 71.6 50.3 23.1 32.7 58.8 76.2 60.3 24.3 63.3 64.0 75.3 66.2
1643
+ OpenCLIP *
1644
+ ViT-B-32
1645
+ scratch
1646
+ 458
1647
+ n/a
1648
+ 90.8 70.2
1649
+ n/a
1650
+ 67.0 79.2 16.8 54.3 86.8 83.3 68.3 37.4 42.7 95.5 51.6
1651
+ n/a
1652
+ 42.0 28.8 14.7 54.6
1653
+ n/a
1654
+ n/a
1655
+ 16.3
1656
+ n/a
1657
+ 52.6 62.9
1658
+ n/a
1659
+ ViT-B-16
1660
+ scratch
1661
+ 981
1662
+ n/a
1663
+ 91.7 71.0
1664
+ n/a
1665
+ 69.6 83.7 17.5 51.3 89.2 83.5 69.3 66.6 42.9 97.0 50.3
1666
+ n/a
1667
+ 43.5 19.0 18.1 60.5
1668
+ n/a
1669
+ n/a
1670
+ 28.8
1671
+ n/a
1672
+ 54.7 67.0
1673
+ n/a
1674
+ ViT-L-14
1675
+ scratch
1676
+ 6803
1677
+ n/a
1678
+ 94.7 77.4
1679
+ n/a
1680
+ 72.6 89.6 25.1 60.3 91.9 84.2 75.4 76.4 50.1 98.0 61.8
1681
+ n/a
1682
+ 50.0 20.8 23.1 48.6
1683
+ n/a
1684
+ n/a
1685
+ 24.2
1686
+ n/a
1687
+ 56.3 72.7
1688
+ n/a
1689
+ CiT-1K-meta
1690
+ ViT-B/16
1691
+ MoCo-v3
1692
+ 26
1693
+ 31.2 80.7 56.7 29.5 41.7 12.6 3.9 35.2 85.9 82.3 19.1 16.3 25.0 89.7 20.0 19.7 14.5 42.2 3.7 55.3 34.8 23.0 14.4 49.5 49.3 67.0 38.6
1694
+ ViT-B/32
1695
+ AugReg
1696
+ 62
1697
+ 45.0 86.6 68.8 34.5 48.1 12.1 3.8 35.3 87.0 87.6 34.5 10.2 29.2 89.8 19.7 23.0 10.5 33.1 4.4 50.6 45.5 27.7 15.2 48.5 50.4 67.5 41.1
1698
+ ViT-B/16
1699
+ AugReg
1700
+ 63
1701
+ 45.4 87.8 70.9 33.7 50.8 12.4 3.3 38.0 86.2 89.0 31.5 9.7 26.4 90.0 25.3 25.3 13.2 34.9 5.2 54.7 50.0 31.5 14.7 50.4 49.3 73.0 42.4
1702
+ ViT-L/16
1703
+ AugReg
1704
+ 27
1705
+ 45.3 90.6 76.3 36.3 54.7 13.6 5.0 35.9 87.2 92.1 32.0 10.2 20.0 91.3 28.2 31.2 10.6 21.4 5.5 51.7 50.9 33.6 16.1 48.9 50.1 75.7 42.9
1706
+ ViT-H/14
1707
+ SWAG
1708
+ 26
1709
+ 65.4 89.8 68.7 36.4 56.5 38.0 7.9 41.7 89.4 88.5 41.4 10.2 30.5 94.3 34.6 41.5 12.0 19.1 12.3 49.5 57.0 42.6 13.2 51.5 46.5 76.2 46.7
1710
+ CiT-21K-meta
1711
+ ViT-B/16
1712
+ MoCo-v3
1713
+ 70
1714
+ 64.8 85.0 63.1 59.5 56.3 26.2 8.1 40.2 87.6 87.1 60.6 17.8 34.5 95.9 29.4 30.3 10.9 33.0 6.4 54.5 48.8 31.2 15.1 47.9 50.1 64.1 46.5
1715
+ ViT-B/32
1716
+ AugReg
1717
+ 57
1718
+ 71.7 91.1 72.8 62.4 59.0 18.8 5.9 42.6 81.8 89.8 67.5 16.3 38.8 96.3 27.1 32.8 12.4 33.9 6.4 52.8 56.8 35.9 16.4 51.0 50.1 65.0 48.3
1719
+ ViT-B/16
1720
+ AugReg
1721
+ 72
1722
+ 77.1 92.8 74.7 68.9 61.9 20.6 8.3 41.5 85.7 91.2 73.8 21.7 38.3 97.0 26.2 36.4 15.1 41.8 7.1 52.4 56.8 38.3 12.1 51.0 50.5 71.2 50.5
1723
+ ViT-L/16
1724
+ AugReg
1725
+ 97
1726
+ 77.5 93.5 79.1 67.6 62.9 19.5 8.3 44.8 84.4 93.1 71.5 18.9 34.2 98.0 29.6 38.9 11.7 22.9 7.7 50.9 60.3 41.6 14.8 51.5 48.2 73.9 50.2
1727
+ ViT-H/14
1728
+ SWAG
1729
+ 135
1730
+ 89.2 91.5 72.1 68.2 64.0 36.9 10.4 43.9 88.2 92.1 75.8 7.1 41.7 97.4 29.2 49.6 10.7 34.6 15.0 50.9 62.6 46.4 13.2 52.3 49.7 76.1 52.6
1731
+ CiT-multi-meta
1732
+ ViT-B/16
1733
+ MoCo-v3
1734
+ 31
1735
+ 68.1 84.3 62.0 63.7 56.9 65.7 16.0 40.3 90.0 87.8 61.1 6.8 26.6 92.1 27.6 35.9 18.0 38.6 7.2 50.9 56.0 35.2 17.2 46.0 49.7 65.8 48.8
1736
+ ViT-B/32
1737
+ AugReg
1738
+ 32
1739
+ 75.2 90.0 72.2 70.9 60.2 43.9 11.8 42.8 86.6 90.2 74.6 29.2 21.6 93.0 31.7 33.3 13.5 44.7 6.9 51.1 61.7 38.7 14.9 49.9 50.1 66.2 51.0
1740
+ ViT-B/16
1741
+ AugReg
1742
+ 51
1743
+ 80.2 91.5 74.4 75.1 62.3 53.7 15.5 40.1 87.2 90.8 76.3 12.3 31.2 92.4 28.1 38.3 13.2 18.6 7.8 60.5 66.0 42.5 14.0 50.3 50.0 71.7 51.7
1744
+ ViT-L/16
1745
+ AugReg
1746
+ 61
1747
+ 81.6 92.7 79.2 72.3 63.8 56.9 15.7 42.6 88.5 92.9 73.9 22.6 33.3 94.1 30.9 38.4 16.9 27.7 8.7 56.7 68.4 45.5 16.4 50.0 48.3 74.8 53.6
1748
+ ViT-H/14
1749
+ SWAG
1750
+ 54
1751
+ 92.1 91.0 71.8 71.7 66.3 77.4 18.7 51.3 93.8 92.2 81.5 14.9 39.6 97.5 39.4 50.0 15.0 19.1 17.8 50.9 71.8 52.4 14.7 51.7 51.1 76.5 56.5
1752
+ Table 13. CiT trained on LAION400M and evaluated on 26 CLIP benchmarks: We vary metadata from IN-1K (CiT-1K-meta), IN-21K
1753
+ (CiT-21K-meta) and combined class names from 26 benchmarks (CiT-multi.-meta). We also list results from CLIP on WIT400M and
1754
+ OpenCLIP trained on LAION400M. *: from https://github.com/LAION-AI/CLIP_benchmark, with some results using
1755
+ VTAB benchmark evaluation/prompts.
1756
+
1757
+ Columns: Vis. Encoder | Init. | Hrs | Food-101 | CIFAR10 | CIFAR100 | CUB | SUN397 | Cars | Aircraft | DTD | Pets | Caltech-101 | Flowers | MNIST | FER-2013 | STL-10 | EuroSAT | RESISC45 | GTSRB | KITTI | Country211 | PCAM | UCF101 | Kinetics700 | CLEVR | HatefulMemes | SST2 | ImageNet | Avg
1786
+ CLIP (WIT400M) [21]
1787
+ ViT-B/32
1788
+ scratch
1789
+ 458
1790
+ 84.4 91.3 65.1 37.8 63.2 59.4 21.2 44.5 87.0 87.9 66.7 51.9 47.3 97.2 49.4 60.3 32.2 39.4 17.8 58.4 64.5 47.8 24.8 57.6 59.6 63.2 56.9
1791
+ ViT-B/16
1792
+ scratch
1793
+ 981
1794
+ 89.2 91.6 68.7 39.1 65.2 65.6 27.1 46.0 88.9 89.3 70.4 56.0 52.7 98.2 54.1 65.5 43.3 44.0 23.3 48.1 69.8 52.4 23.4 61.7 59.8 68.6 60.1
1795
+ ViT-L/14
1796
+ scratch
1797
+ 6803 92.9 96.2 77.9 48.3 67.7 77.3 36.1 55.3 93.5 92.6 78.7 87.2 57.5 99.3 59.9 71.6 50.3 23.1 32.7 58.8 76.2 60.3 24.3 63.3 64.0 75.3 66.2
1798
+ CiT-1K-meta
1799
+ ViT-B/16
1800
+ MoCo-v3
1801
+ 39
1802
+ 29.0 86.0 56.5 17.6 41.3 12.4 5.8 25.7 83.8 77.0 10.6 10.8 24.9 95.1 22.3 20.8 6.8 35.6 4.2 50.8 27.7 20.5 17.2 48.9 50.1 68.4 36.5
1803
+ ViT-B/32
1804
+ AugReg
1805
+ 69
1806
+ 42.8 92.2 70.5 22.1 49.0 11.4 5.5 27.0 83.8 81.1 16.5 8.2 32.5 94.3 29.4 22.2 8.5 39.1 4.9 51.3 37.6 26.7 16.4 48.0 50.1 67.8 40.0
1807
+ ViT-B/16
1808
+ AugReg
1809
+ 72
1810
+ 43.9 92.1 73.4 20.4 50.0 10.9 4.5 31.3 84.6 83.0 18.8 7.1 21.5 96.2 23.3 22.4 11.2 29.4 5.2 52.3 41.9 29.4 17.0 50.6 50.1 74.9 40.2
1811
+ ViT-L/16
1812
+ AugReg
1813
+ 105
1814
+ 47.8 95.4 76.0 18.5 49.4 11.4 5.6 30.9 84.7 83.7 22.4 6.4 25.6 96.8 24.7 29.7 8.9 36.3 5.3 50.9 45.9 31.0 16.3 46.5 50.1 77.5 41.4
1815
+ ViT-H/14
1816
+ SWAG
1817
+ 43
1818
+ 57.2 93.2 68.5 19.8 47.2 25.6 5.9 32.4 81.3 82.5 25.3 8.2 28.8 97.4 17.6 42.2 8.1 29.2 10.3 50.9 53.7 38.8 14.5 48.0 53.2 77.1 43.0
1819
+ CiT-21K-meta
1820
+ ViT-B/16
1821
+ MoCo-v3 134
1822
+ 57.1 87.1 60.3 57.1 54.0 10.5 6.0 37.0 84.6 82.8 59.9 9.8 26.8 96.8 31.8 30.8 8.3 41.2 7.4 59.9 37.9 25.9 20.8 48.2 50.1 62.8 44.4
1823
+ ViT-B/32
1824
+ AugReg
1825
+ 148
1826
+ 64.4 93.2 71.7 49.5 56.8 10.8 5.7 35.4 76.2 85.8 60.9 9.5 29.1 95.4 27.1 25.2 9.3 39.8 7.7 51.3 45.8 32.1 14.1 51.3 50.1 62.2 44.6
1827
+ ViT-B/16
1828
+ AugReg
1829
+ 161
1830
+ 70.0 93.6 75.9 58.2 59.9 11.7 5.2 37.7 74.9 89.3 61.7 9.8 32.6 97.9 29.5 29.4 11.2 40.9 9.0 51.1 49.6 36.1 13.6 48.9 50.1 69.4 46.8
1831
+ ViT-L/16
1832
+ AugReg
1833
+ 228
1834
+ 71.7 96.0 78.7 56.7 62.4 12.2 5.9 37.4 77.0 90.6 65.3 14.6 37.6 98.3 27.8 34.0 8.5 34.0 9.4 44.2 54.7 39.0 15.5 47.9 50.1 72.6 47.8
1835
+ ViT-H/14
1836
+ SWAG
1837
+ 310
1838
+ 80.4 93.2 72.0 58.4 60.8 25.6 5.5 36.0 78.4 89.1 70.6 7.8 34.7 98.9 28.4 41.7 10.8 29.9 14.0 50.8 57.5 41.9 12.4 45.8 52.6 75.5 49.0
1839
+ CiT-multi-meta
1840
+ ViT-B/16
1841
+ MoCo-v3
1842
+ 91
1843
+ 70.4 88.8 61.1 60.1 59.0 63.2 24.5 38.4 90.2 85.5 66.5 9.8 32.0 96.6 35.4 39.0 9.5 35.8 10.2 50.3 48.7 33.4 17.1 43.8 50.1 66.1 49.4
1844
+ ViT-B/32
1845
+ AugReg
1846
+ 62
1847
+ 72.7 92.9 71.0 51.0 58.9 30.9 10.9 36.3 86.6 87.4 67.5 9.8 36.3 94.5 29.1 29.4 8.5 33.4 8.6 54.9 51.6 36.3 14.8 49.2 50.0 64.4 47.6
1848
+ ViT-B/16
1849
+ AugReg
1850
+ 62
1851
+ 81.3 94.0 76.6 65.2 62.2 44.1 17.9 41.3 90.0 90.6 74.9 9.8 35.3 97.5 34.6 36.5 13.1 34.4 10.4 56.8 57.6 41.3 13.4 50.6 50.1 71.9 52.0
1852
+ ViT-L/16
1853
+ AugReg
1854
+ 62
1855
+ 82.4 96.1 79.2 62.4 64.1 44.5 15.8 41.2 89.3 91.3 74.9 9.8 34.7 98.2 27.9 38.7 8.9 33.4 11.1 55.9 61.3 44.0 11.9 48.9 50.1 74.4 51.9
1856
+ ViT-H/14
1857
+ SWAG
1858
+ 203
1859
+ 93.7 93.5 73.2 75.7 65.1 79.5 25.2 40.3 95.8 92.1 85.0 11.6 38.9 98.3 30.5 51.9 10.1 28.7 21.8 52.5 68.9 52.9 15.9 45.7 50.1 77.6 56.7
1860
+ Table 14. CiT trained on Raw Image-Text Crawl and evaluated on 26 CLIP benchmarks: We vary metadata from IN-1K (CiT-1K-meta),
1861
+ IN-21K (CiT-21K-meta) and combined class names from 26 benchmarks (CiT-multi.-meta). The budget b = 60000 for IN-21K and
1862
+ b = 30000 for combined class names. We also list results from CLIP on WIT400M.
1863
+ Tasks (26): Food-101, CIFAR10, CIFAR100, CUB, SUN397, Cars, Aircraft, DTD, Pets, Caltech-101, Flowers, MNIST, FER-2013, STL-10, EuroSAT, RESISC45, GTSRB, KITTI, Country211, PCAM, UCF101, Kinetics700, CLEVR, HatefulMemes, SST2, ImageNet
+ # of classes: 101, 10, 100, 200, 397, 196, 100, 47, 37, 102, 102, 10, 7, 10, 10, 45, 43, 4, 211, 2, 101, 700, 8, 2, 2, 1000
+ YFCC15M, t > 0.55:
+ # pairs per class (k): 5.32, 16.64, 9.27, 3.28, 5.63, 0.81, 4.35, 3.12, 3.66, 6.71, 6.51, 11.9, 5.43, 18.21, 8.59, 10.57, 2.22, 23.63, 2.33, 0.53, 4.28, 3.9, 20.96, 5.48, 0.66, 3.69
+ total keep rates (%): 3.66, 1.13, 6.31, 4.46, 15.2, 1.08, 2.96, 0.999, 0.922, 4.66, 4.52, 0.81, 0.259, 1.24, 0.585, 0.324, 0.65, 0.644, 3.34, 0.007, 2.95, 18.6, 1.14, 0.075, 0.009, 25.1
+ Table 15. Statistics of YFCC15M (title and description) coverage on 26 tasks of CLIP evaluation: Low coverage could explain the root cause of the poor performance of zero-shot transfer (e.g. Cars, PCAM, etc.).
1919
+ A.2.5
1920
+ Early Detection of Task Coverage
1921
+ One extra benefit of curation is being able to detect zero-
1922
+ shot transferability. Although existing scaled pre-trainings
1923
+ have huge success, the coverage of pre-training data distri-
1924
+ bution for downstream tasks is largely unknown. We dis-
1925
+ cuss this coverage issue below.
1926
+ Task Coverage. We obtain the statistics of curated data (of-
1927
+ fline in Table 1a) for the 26 tasks and show it in Table 15.
1928
+ We consider a sample with a maximum cosine similarity
1929
+ for one class as one sample belonging to that class/task. We
1930
+ note that this is a hard-matching which does not necessar-
1931
+ ily cover the full class to sample correlation. Breaking down
1932
+ YFCC15M for different tasks partially explains the low per-
1933
+ formance on some. For example, SST2 (a binary classifica-
1934
+ tion task) has low image-text pair matches, explaining the
1935
+ low performance (close to random) for all models.
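+ A rough sketch of this hard-matching statistic (our own illustration; embeddings are assumed L2-normalized, and class_to_task is an assumed class-index-to-task lookup):
+ import torch
+ 
+ def coverage_counts(text_emb, class_emb, class_to_task, threshold=0.55):
+     sims = text_emb @ class_emb.t()              # cosine similarity, texts x classes
+     best_sim, best_class = sims.max(dim=1)       # hard match: argmax class per sample
+     kept = best_sim > threshold
+     counts = torch.bincount(best_class[kept], minlength=class_emb.size(0))
+     per_task = {}
+     for c, n in enumerate(counts.tolist()):      # aggregate matched samples per task
+         task = class_to_task[c]
+         per_task[task] = per_task.get(task, 0) + n
+     return per_task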
1936
+ References
1937
+ [1] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine
1938
+ Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch,
1939
+ Katie Millican, Malcolm Reynolds, et al. Flamingo: a vi-
1940
+ sual language model for few-shot learning. arXiv preprint
1941
+ arXiv:2204.14198, 2022.
1942
+ [2] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou,
1943
+ Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerg-
1944
+ ing properties in self-supervised vision transformers.
1945
+ In
1946
+ Proceedings of the IEEE/CVF International Conference on
1947
+ Computer Vision, pages 9650–9660, 2021.
1948
+ [3] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu
1949
+ Soricut. Conceptual 12m: Pushing web-scale image-text pre-
1950
+ training to recognize long-tail visual concepts. In Proceed-
1951
+ ings of the IEEE/CVF Conference on Computer Vision and
1952
+ Pattern Recognition, pages 3558–3568, 2021.
1953
+ [4] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Ge-
1954
+ offrey Hinton. A simple framework for contrastive learning
1955
+
1956
+ of visual representations. arXiv preprint arXiv:2002.05709,
1957
+ 2020.
1958
+ [5] Xinlei Chen, Saining Xie, and Kaiming He.
1959
+ An empiri-
1960
+ cal study of training self-supervised vision transformers. In
1961
+ Proceedings of the IEEE/CVF International Conference on
1962
+ Computer Vision, pages 9640–9649, 2021.
1963
+ [6] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina
1964
+ Toutanova. BERT: Pre-training of deep bidirectional trans-
1965
+ formers for language understanding. In Proceedings of the
1966
+ 2019 Conference of the North American Chapter of the As-
1967
+ sociation for Computational Linguistics: Human Language
1968
+ Technologies, Volume 1 (Long and Short Papers), pages
1969
+ 4171–4186, Minneapolis, Minnesota, June 2019. Associa-
1970
+ tion for Computational Linguistics.
1971
+ [7] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov,
1972
+ Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner,
1973
+ Mostafa Dehghani, Matthias Minderer, Georg Heigold, Syl-
1974
+ vain Gelly, et al. An image is worth 16x16 words: Trans-
1975
+ formers for image recognition at scale. In International Con-
1976
+ ference on Learning Representations, 2020.
1977
+ [8] Tianyu Gao, Xingcheng Yao, and Danqi Chen. Simcse: Sim-
1978
+ ple contrastive learning of sentence embeddings. In Proceed-
1979
+ ings of the 2021 Conference on Empirical Methods in Natu-
1980
+ ral Language Processing, pages 6894–6910, 2021.
1981
+ [9] Suchin Gururangan, Ana Marasovi´c, Swabha Swayamdipta,
1982
+ Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith.
1983
+ Don’t stop pretraining: adapt language models to domains
1984
+ and tasks. arXiv preprint arXiv:2004.10964, 2020.
1985
+ [10] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr
1986
+ Dollár, and Ross Girshick. Masked autoencoders are scalable
1987
+ vision learners. arXiv preprint arXiv:2111.06377, 2021.
1988
+ [11] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross
1989
+ Girshick. Momentum contrast for unsupervised visual rep-
1990
+ resentation learning. In Proceedings of the IEEE/CVF Con-
1991
+ ference on Computer Vision and Pattern Recognition, pages
1992
+ 9729–9738, 2020.
1993
+ [12] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh,
1994
+ Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom
1995
+ Duerig. Scaling up visual and vision-language representa-
1996
+ tion learning with noisy text supervision. In International
1997
+ Conference on Machine Learning, pages 4904–4916. PMLR,
1998
+ 2021.
1999
+ [13] Krishnateja Killamsetty, Durga Sivasubramanian, Ganesh
2000
+ Ramakrishnan, and Rishabh Iyer.
2001
+ Glister: Generalization
2002
+ based data subset selection for efficient and robust learning.
2003
+ In Proceedings of the AAAI Conference on Artificial Intelli-
2004
+ gence, volume 35, pages 8110–8118, 2021.
2005
+ [14] Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon
2006
+ Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. Biobert:
2007
+ a pre-trained biomedical language representation model for
2008
+ biomedical text mining. Bioinformatics, 36(4):1234–1240,
2009
+ 2020.
2010
+ [15] Junlong Li, Zhuosheng Zhang, Hai Zhao, Xi Zhou, and
2011
+ Xiang Zhou.
2012
+ Task-specific objectives of pre-trained lan-
2013
+ guage models for dialogue adaptation.
2014
+ arXiv preprint
2015
+ arXiv:2009.04984, 2020.
2016
+ [16] Yangguang Li, Feng Liang, Lichen Zhao, Yufeng Cui, Wanli
2017
+ Ouyang, Jing Shao, Fengwei Yu, and Junjie Yan.
2018
+ Su-
2019
+ pervision exists everywhere: A data efficient contrastive
2020
+ language-image pre-training paradigm.
2021
+ arXiv preprint
2022
+ arXiv:2110.05208, 2021.
2023
+ [17] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar
2024
+ Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettle-
2025
+ moyer, and Veselin Stoyanov. Roberta: A robustly optimized
2026
+ bert pretraining approach. arXiv preprint arXiv:1907.11692,
2027
+ 2019.
2028
+ [18] Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan
2029
+ Laptev, Josef Sivic, and Andrew Zisserman.
2030
+ End-to-end
2031
+ learning of visual representations from uncurated instruc-
2032
+ tional videos. In Proceedings of the IEEE/CVF Conference
2033
+ on Computer Vision and Pattern Recognition, pages 9879–
2034
+ 9889, 2020.
2035
+ [19] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac,
2036
+ Makarand Tapaswi, Ivan Laptev, and Josef Sivic.
2043
+ Howto100m: Learning a text-video embedding by watching
2044
+ hundred million narrated video clips. In Proceedings of the
2045
+ IEEE international conference on computer vision, pages
2046
+ 2630–2640, 2019.
2047
+ [20] Norman Mu, Alexander Kirillov, David Wagner, and Sain-
2048
+ ing Xie. Slip: Self-supervision meets language-image pre-
2049
+ training. arXiv preprint arXiv:2112.12750, 2021.
2050
+ [21] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya
2051
+ Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry,
2052
+ Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learn-
2053
+ ing transferable visual models from natural language super-
2054
+ vision. arXiv preprint arXiv:2103.00020, 2021.
2055
+ [22] Shibani Santurkar, Yann Dubois, Rohan Taori, Percy Liang,
2056
+ and Tatsunori Hashimoto. Is a caption worth a thousand im-
2057
+ ages? a controlled study for representation learning. arXiv
2058
+ preprint arXiv:2207.07635, 2022.
2059
+ [23] Christoph Schuhmann, Romain Beaumont, Cade W Gor-
2060
+ don, Ross Wightman, Theo Coombes, Aarush Katta, Clayton
2061
+ Mullis, Patrick Schramowski, Srivatsa R Kundurthy, Kather-
2062
+ ine Crowson, et al. Laion-5b: An open large-scale dataset
2063
+ for training next generation image-text models.
2064
+ [24] Christoph Schuhmann, Richard Vencu, Romain Beaumont,
2065
+ Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo
2066
+ Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m:
2067
+ Open dataset of clip-filtered 400 million image-text pairs.
2068
+ arXiv preprint arXiv:2111.02114, 2021.
2069
+ [25] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu
2070
+ Soricut. Conceptual captions: A cleaned, hypernymed, im-
2071
+ age alt-text dataset for automatic image captioning. In Pro-
2072
+ ceedings of the 56th Annual Meeting of the Association for
2073
+ Computational Linguistics (Volume 1: Long Papers), pages
2074
+ 2556–2565, Melbourne, Australia, July 2018. Association
2075
+ for Computational Linguistics.
2076
+ [26] Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guil-
2077
+ laume Couairon, Wojciech Galuba, Marcus Rohrbach, and
2078
+ Douwe Kiela. Flava: A foundational language and vision
2079
+ alignment model. arXiv preprint arXiv:2112.04482, 2021.
2080
+ [27] Mannat Singh, Laura Gustafson, Aaron Adcock, Vinicius
2081
+ de Freitas Reis, Bugra Gedik, Raj Prateek Kosaraju, Dhruv
2082
+ Mahajan, Ross Girshick, Piotr Dollár, and Laurens van der
2083
+ Maaten. Revisiting weakly supervised pre-training of visual
2084
+
2085
+ perception models. In Proceedings of the IEEE/CVF Con-
2086
+ ference on Computer Vision and Pattern Recognition, pages
2087
+ 804–814, 2022.
2088
+ [28] Andreas Steiner, Alexander Kolesnikov, Xiaohua Zhai, Ross
2089
+ Wightman, Jakob Uszkoreit, and Lucas Beyer. How to train
2090
+ your vit? data, augmentation, and regularization in vision
2091
+ transformers. arXiv preprint arXiv:2106.10270, 2021.
2092
+ [29] Bart Thomee, David A Shamma, Gerald Friedland, Ben-
2093
+ jamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and
2094
+ Li-Jia Li. Yfcc100m: The new data in multimedia research.
2095
+ Communications of the ACM, 59(2):64–73, 2016.
2096
+ [30] Kai Wei, Rishabh Iyer, and Jeff Bilmes. Submodularity in
2097
+ data subset selection and active learning. In International
2098
+ conference on machine learning, pages 1954–1963. PMLR,
2099
+ 2015.
2100
+ [31] Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim,
2101
+ Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gon-
2102
+ tijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok
2103
+ Namkoong, et al. Robust fine-tuning of zero-shot models.
2104
+ In Proceedings of the IEEE/CVF Conference on Computer
2105
+ Vision and Pattern Recognition, pages 7959–7971, 2022.
2106
+ [32] Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin.
2107
+ Unsupervised feature learning via non-parametric instance
2108
+ discrimination. In Proceedings of the IEEE conference on
2109
+ computer vision and pattern recognition, pages 3733–3742,
2110
+ 2018.
2111
+ [33] Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko,
2112
+ Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, and
2113
+ Christoph Feichtenhofer. Videoclip: Contrastive pre-training
2114
+ for zero-shot video-text understanding. In Proceedings of
2115
+ the 2021 Conference on Empirical Methods in Natural Lan-
2116
+ guage Processing, pages 6787–6800, 2021.
2117
+ [34] Hu Xu, Bing Liu, Lei Shu, and S Yu Philip.
2118
+ Bert post-
2119
+ training for review reading comprehension and aspect-based
2120
+ sentiment analysis. In Proceedings of the 2019 Conference of
2121
+ the North American Chapter of the Association for Computa-
2122
+ tional Linguistics: Human Language Technologies, Volume
2123
+ 1 (Long and Short Papers), pages 2324–2335, 2019.
2124
+ [35] Andrew Yates, Rodrigo Nogueira, and Jimmy Lin.
2125
+ Pre-
2126
+ trained transformers for text ranking: Bert and beyond. In
2127
+ Proceedings of the 14th ACM International Conference on
2128
+ Web Search and Data Mining, pages 1154–1156, 2021.
2129
+ [36] Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mo-
2130
+ jtaba Seyedhosseini, and Yonghui Wu. Coca: Contrastive
2131
+ captioners are image-text foundation models. arXiv preprint
2132
+ arXiv:2205.01917, 2022.
2133
+ [37] Lu Yuan, Dongdong Chen, Yi-Ling Chen, Noel Codella,
2134
+ Xiyang Dai, Jianfeng Gao, Houdong Hu, Xuedong Huang,
2135
+ Boxin Li, Chunyuan Li, et al. Florence: A new foundation model for computer vision.
2141
+ arXiv preprint
2142
+ arXiv:2111.11432, 2021.
2143
+ [38] Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner,
2144
+ Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer.
2145
+ Lit: Zero-shot transfer with locked-image text tuning.
2146
+ In
2147
+ Proceedings of the IEEE/CVF Conference on Computer Vi-
2148
+ sion and Pattern Recognition, pages 18123–18133, 2022.
2149
+ [39] Xuan Zhang, Pamela Shapiro, Gaurav Kumar, Paul Mc-
2150
+ Namee, Marine Carpuat, and Kevin Duh. Curriculum learn-
2151
+ ing for domain adaptation in neural machine translation.
2152
+ arXiv preprint arXiv:1905.05816, 2019.
2153
+ [40] Yuhao Zhang, Hang Jiang, Yasuhide Miura, Christopher D
2154
+ Manning, and Curtis P Langlotz.
2155
+ Contrastive learning of
2156
+ medical visual representations from paired images and text.
2157
+ arXiv preprint arXiv:2010.00747, 2020.
2158
+
PtE0T4oBgHgl3EQfTwCq/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
PtFPT4oBgHgl3EQfnzX3/content/2301.13132v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cbe1d2b4b5ff50785bdd6e7351ecbf6712662fa1dff2caf4198a4f9bbb83809a
3
+ size 5767841
PtFPT4oBgHgl3EQfnzX3/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e707fafad7296459861a6d8ebad376f6f2098d0c988ded0166a5927154bad333
3
+ size 131292
QdE4T4oBgHgl3EQf-w6Q/content/tmp_files/2301.05366v1.pdf.txt ADDED
@@ -0,0 +1,468 @@
 
 
 
 
 
 
1
+ de Haas van Alphen oscillations in hybridization-gap insulators
2
+ as a sudden change in the diamagnetic moment of Landau levels
3
+ S. R. Julian
4
+ Department of Physics, University of Toronto, 60 St.
5
+ George Street, Toronto, Ontario, Canada M5S 1A7
6
+ (Dated: January 16, 2023)
7
+ Abstract
8
+ This note revisits the semi-classical theory of quantum oscillations in hybridization-gap insu-
9
+ lators, and shows that the physical origin of the oscillations, at T = 0 K, is a sudden change in
10
+ the diamagnetic moment of each Landau level as it crosses the hybridized region of the valence
11
+ band.
12
+ 1
13
+ arXiv:2301.05366v1 [cond-mat.str-el] 13 Jan 2023
14
+
15
+ I.
16
+ INTRODUCTION
17
+ For more than 80 years after the first observation of a quantum oscillatory magneti-
18
+ zation, the de Haas-van Alphen (dHvA) effect was regarded as a signature of a metal,
19
+ because the theory was formulated in terms of the emptying of quantized Landau levels
20
+ as they sequentially cross the Fermi surface. In this picture, there should be no oscilla-
21
+ tory magnetization for a filled or empty band, because there is no Fermi surface for the
22
+ Landau levels to cross. Thus, the observation of dHvA oscillations in the Kondo insulator
23
+ SmB6 [1, 2] was greeted with surprise. Almost equally surprising, a possible, very simple,
24
+ theory of these quantum-oscillations-without-a-Fermi-surface was soon found: Knolle and
25
+ Cooper [3] showed that quantum oscillations emerge in a straightforward calculation of the
26
+ thermodynamic potential of a narrow-gap insulator with Landau-quantized states.
27
+ The physical origin of these oscillations has, however, been unclear, and a source of
28
+ some confusion. As Ref. [4] states, in the absence of a Fermi surface “it is not clear what
29
+ surface is being measured.” The original Knolle and Cooper paper [3] does not suggest a
30
+ physical origin for the oscillations: they emerge from the mathematics. Subsequent papers
31
+ by these and other authors [5–8] simply state that the thermodynamic potential oscillates
32
+ as Landau levels sequentially cross the region where the bands cross, although Pal [7] makes
33
+ the enigmatic remark that, as B changes, the Landau levels “feel” the abrupt change in
34
+ the slope of E(k) and “manifest as quantum oscillations”. Moreoever, some papers that
35
+ offer competing theories, e.g. [9–11], state that the Knolle and Cooper dHvA oscillations
36
+ are due to magnetic breakdown, in which the condition ℏωc ≳ E2
37
+ g/EF, where ωc is the
38
+ quasiparticle cyclotron frequency, Eg is the energy gap and EF is the Fermi energy, allows
39
+ the quasiparticles to tunnel across the hybridization gap, which would mean that these
40
+ are, essentially, conventional quantum oscillations.
41
+ The purpose of this brief, somewhat pedagogical, note is to present a simple semi-
42
+ classical picture that shows that the anomalous Knolle-Cooper dHvA oscillations in hy-
43
+ bridization gap insulators are produced by the sudden change in the diamagnetic moment
44
+ of the Landau levels as they pass between regions of E(k) having different quasiparticle
45
+ velocities. This is unrelated to magnetic breakdown, and indeed it shows that this is a
46
+ novel mechanism for the dHvA effect. This paper does not address the issue of whether
47
+ 2
48
+
49
+ this, as opposed to one of several competing theories (see Ref. 8 for a recent list), is actually
50
+ correct for the case of SmB6.
51
+ II.
52
+ RESULTS AND DISCUSSION: SEMICLASSICAL CALCULATION OF THE
53
+ DHVA EFFECT
54
+ The situation considered by Knolle and Cooper, somewhat generalized in subsequent
55
+ treatments [3, 5–8], is as pictured in Fig. 1a.
56
+ The essential feature is that two bands
57
+ with very different dispersions (in this case one electron-like and one hole-like) weakly
58
+ hybridize where they cross, and the Fermi energy EF lies in the gap. An applied magnetic
59
+ field B results in the formation of Landau levels. As B is increased, the Landau levels,
60
+ illustrated in Fig. 1b, sequentially pass from the electron-like to the hole-like part of the
61
+ valence band, and vice-versa for the conduction band. Working, for simplicity, at T = 0
62
+ in two-dimensions with spinless fermions and no impurity scattering, the grand canonical
63
+ potential can be expressed as a sum over all of the Landau levels of the valence band,
64
+ Ω(µ, T = 0, B) = D
65
+
66
+
67
+ (Eℓ,v − EF) ,
68
+ (1)
69
+ where D = (eB/2πℏ) is the degeneracy of each Landau level per unit area of sample, and
70
+ Eℓ,v is the energy of the ℓth Landau level in the valence band.
71
+ In the conventional treatment of the dHvA effect in a metal (see e.g. Ref. 12), the sum
72
+ in Eq. 1 is approximated in such a way that the oscillatory part, ˜Ω, which involves only
73
+ states in the immediate vicinity of EF, can be extracted. Then it is straightforward to
74
+ calculate the oscillations in any thermodynamic quantity. For example, the dHvA effect
75
+ measures the oscillatory magnetization
76
+ ˜M = −∂ ˜Ω/∂B .    (2)
80
+ One could approach the problem differently, however, obtaining the total magnetization
81
+ by taking the derivative of Ω, rather than ˜Ω. Consider first a simple parabolic band of
82
+ non-interacting electrons, so that the dispersion relation is E(k) = ℏ2k2/2m. Then, as in
83
+ the case of electrons in free space, the Landau levels are quantized in energy as
84
+ Eℓ = ℏωc(ℓ + 1/2),    (3)
86
+ 3
87
+
88
+ [Figure 1 plots: (a) E (meV) vs k (m⁻¹) showing the bands Ec(k) and Ev(k); (b) zoom of E (meV) near the band crossing; (c) diamagnetic moment per electron, in units of 2µB, vs k near the crossing; axis data omitted in this text extraction.]
118
+ FIG. 1. (a) Band structure for a hybridization-gap insulator, with the conduction band in red,
119
+ and valence band in blue (see §IV, Methods, for details.) (b) A zoomed-in view of the circled
120
+ region from (a), where the bands cross and hybridize, with Landau levels shown as blue and red
121
+ points for the valence and conduction band respectively. The green and pink lines respectively
122
+ are the unhybridized electron and hole bands at zero field. (c) Shows the diamagnetic moment
123
+ per electron in the Landau levels near the band crossing in the valence (blue) and conduction
124
+ (red) bands, in units of double Bohr magnetons ℏe/me. The hybridization strength is moderate,
125
+ so that the band dispersion changes rather gradually compared to the Landau level spacing. At
126
+ T = 0 K the conduction-band Landau levels are unoccupied.
127
+ where the cyclotron frequency is ωc = eB/m.
128
+ Substituting this in Eq. 1, and taking
129
+ the derivative with respect to B using the non-obvious step of applying the chain rule to
130
+ 4
131
+
132
+ separate the derivative inside the sum from the derivative of the prefactor D, gives
133
+ M = − D Σ(ℓ=0 to ℓmax) (eℏ/m) (ℓ + 1/2)    (4a)
+     − (Dℏωc/B) Σ(ℓ=0 to ℓmax) [(ℓ + 1/2) − X].    (4b)
147
+ The new variables are X ≡ EF/ℏωc, and ℓmax, which is the quantum number of the
148
+ highest occupied Landau level at a given field B, which satisfies (ℓmax + 1/2) ≤ EF/ℏωc ≤
149
+ (ℓmax + 3/2). ℓmax changes by one every time a Landau level crosses EF.
150
+ In Fig. 2a the two terms of Eq. 4 are plotted separately (top), and summed (bottom). It
151
+ can be seen that, to a good approximation, the first term is comprised of an oscillatory part
152
+ superposed on a large background, while the second term cancels (or very nearly cancels)
153
+ the large background of the first term. (The second term also has an oscillatory part, but
154
+ it is normally ignored, being typically less than 1% of the oscillatory part of the first term.)
155
+ That is, the oscillations are almost entirely contained in the first term, which has a simple
156
+ physical interpretation: the Landau diamagnetic dipole moment of an electron in the ℓth
157
+ Landau level is µ = −(eℏ/m)(ℓ + 1/2) (see below). Thus the oscillatory magnetization
158
+ comes from the sum over the diamagnetic dipole moments of the electrons in their Landau
159
+ levels, and in particular the sharp step in the ‘sawtooth’ pattern of M is due to the loss of
160
+ the Landau diamagnetic moment of the electrons in the ℓmax Landau level, when the level
161
+ suddenly empties as it passes EF with increasing B.
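+ As an illustration, Eq. 4 can be evaluated numerically for a single parabolic band; the short Python sketch below is our own (EF = 10 meV is chosen only for illustration, not taken from the paper) and reproduces a sawtooth of the kind shown in Fig. 2a.
+ import numpy as np
+ 
+ hbar, e, me = 1.054571817e-34, 1.602176634e-19, 9.1093837015e-31
+ m, EF = 1.0 * me, 10e-3 * e          # illustrative band mass and Fermi energy
+ 
+ def magnetization(B):
+     wc = e * B / m                                   # cyclotron frequency
+     D = e * B / (2.0 * np.pi * hbar)                 # Landau-level degeneracy per unit area
+     X = EF / (hbar * wc)
+     lmax = int(np.floor(X - 0.5))                    # highest occupied Landau level
+     l = np.arange(lmax + 1)
+     term_a = -D * np.sum((e * hbar / m) * (l + 0.5))         # Eq. 4a: sum of diamagnetic moments
+     term_b = -D * (hbar * wc / B) * np.sum((l + 0.5) - X)    # Eq. 4b: near-cancelling background
+     return term_a + term_b
+ 
+ B = np.linspace(10.0, 11.0, 2001)
+ M = np.array([magnetization(b) for b in B])          # sawtooth in M vs B, cf. Fig. 2a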
162
+ Shoenberg, in his book, [12] credits Brian Pippard with this insight, but it must have
163
+ been known to Landau and others. Pippard’s treatment, contained in the proceedings of
164
+ a summer school that was held in Vancouver, Canada, in 1967 [13], is nevertheless useful.
165
+ In a departure from the usual Landau gauge approach, he uses the cylindrical gauge to
166
+ construct wave-functions, for the Landau quantized electrons, that in real space are sharply
167
+ peaked (provided ℓ is not too small) at the classical cyclotron radius, which encloses a real-
168
+ space area Ar. Since the electrons circulate with period T = 2π/ωc, their diamagnetic
169
+ dipole moment is
170
+ µ = −(eωc/2π) Ar = −(e²B/2πm) Ar.    (5)
174
+ 5
175
+
176
+ [Figure 2 plots: top panels show Eq. 4a and −(Eq. 4b) for (a) the conventional case, and Eq. 10a and −(Eq. 10b) for (b) the anomalous case, as M (mJ/T m²) vs B (T); bottom panels show the summed M, in units of 2µB per electron, vs B (T); axis data omitted in this text extraction.]
207
+ FIG. 2.
208
+ (a), top panel, plots Eq. 4a and the negative of 4b for the conventional dHvA effect
209
+ in an isolated metallic band of mass m = 1me, showing that Eq. 4a contains the quantum
210
+ oscillations, while 4b, to a very good approximation, cancels the quasi-linear background, leaving
211
+ a total magnetization (lower panel) that is dominated by the oscillations. (See §IV, Methods, for
212
+ details.) (b) shows the corresponding plot for Eq. 10, the hybridization-gap insulator of Fig. 1,
213
+ with a very weak hybridization so that the band dispersion changes very suddenly, compared to
214
+ the Landau level spacing.
215
+ Pippard merely remarks that one can reformulate the dHvA effect directly in terms of the
216
+ diamagnetic magnetization.
217
+ For an arbitrary band structure, applying the semi-classical relation between the real-
218
+ space, Ar,ℓ, and the k-space, Ak,ℓ, areas of a Landau orbit [12, 14],
219
+ Ar,ℓ(B) = (ℏ²/e²B²) Ak,ℓ(B) = (2πℏ/eB) (ℓ + γ),    (6)
224
+ where γ = 1/2 for circularly-symmetric two-dimensional bands such as we use here [15],
225
+ gives
226
+ µ = −(ℏe/m∗ℓ(B)) (ℓ + γ),    (7)
231
+ where
232
+ m∗ℓ(B) ≡ (ℏ²/2π) (∂Ak/∂E) |ℓ,B .    (8)
241
+ It is the factor 1/m∗
242
+ ℓ in Eq. 7 that is of interest. The sign of m∗
243
+ ℓ determines the direction
244
+ in which an electron circulates in its Landau orbit, and thus determines the sign of the
245
+ 6
246
+
247
+ dipole moment; meanwhile the magnitude of m∗, which in this case is really a proxy for
248
+ the speed of the electrons in the Landau orbit, determines the magnitude of the moment:
249
+ fast electrons (low m∗
250
+ ℓ) produce a strong diamagnetic moment, slow electrons (high m∗
251
+ ℓ)
252
+ produce a weak diamagnetic moment. The diamagnetic moments of the Landau levels near
253
+ the hybridization gap are illustrated in Fig. 1c. On the light electron-like part of the bands
254
+ the circulating electrons have a large negative diamagnetic moment, but on the heavy
255
+ hole-like part (where m∗
256
+ ℓ is negative) they have a weak positive moment. The diamagnetic
257
+ moment of the electrons in a Landau level in the valence band thus undergoes a sudden
258
+ change when they cross the hybridized region, which is located at the Fermi wave-vector
259
+ of the unhybridized bands.
260
+ Returning to Eq. 1 and using
261
+ ∂Eℓ/∂B = (∂E/∂Ak)(∂Ak/∂B) = (eℏ/m∗ℓ(B)) (ℓ + γ),    (9)
270
+ gives the generalized version of Eq. 4:
271
+ M = − D Σ(ℓ, occ.) (ℏe/m∗ℓ(B)) (ℓ + γ)    (10a)
+     − (D/B) Σ(ℓ, occ.) (Eℓ − EF).    (10b)
285
+ In Fig. 2b the sum in Eq. 10 is evaluated for the band structure of Fig. 1a. Despite
286
+ the fact that the Landau levels do not cross the Fermi energy, but rather remain in the
287
+ valence band for all values of ℓ, it can be seen that the results for the ‘anomalous’ and
288
+ ‘conventional’ dHvA effects are remarkably similar: the pattern is the same, the period
289
+ of the oscillations is the same, and again the oscillations are contained in the sum over
290
+ the diamagnetic moments of the electrons in the occupied Landau levels. There are slight
291
+ differences: the magnitude of the quantum oscillations is slightly larger in the anomalous
292
+ case, and the cancellation of background is not quite as good.
293
+ Eq. 10a, and Fig. 1c, demonstrate the physical mechanism of the Knolle-Cooper dHvA
294
+ oscillations: as each subsequent Landau level in the valence band crosses the hybridized
295
+ region at kF, its diamagnetic moment undergoes a sudden change in sign and magnitude
296
+ as the electrons suddenly start circulating in the opposite sense at a different speed.
297
+ The actual size of the step in M depends on the details of the band structure. In the
298
+ case chosen here, the velocity of the electrons in a Landau level reverses, and slows by a
299
+ 7
300
+
301
+ [Figure 3 plots: (a) Fourier amplitude vs frequency F (kT), with an inset of M vs B (T), for λ = 1 µeV, 0.1 meV and 0.3 meV; (b) fundamental amplitude Amax vs λ (meV) at T = 0 K; (c) Amax vs T (K) for λ = 1 µeV, 0.1 meV, 0.2 meV and 0.4 meV; axis data omitted in this text extraction.]
339
+ FIG. 3.
340
+ (a) shows the effect of changing hybridization strength. The inset compares oscillations
341
+ for λ = 1 µeV (blue line) with λ = 0.1 meV (red line).
342
+ The main figure shows the Fourier
343
+ transform of M vs 1/B. The frequency of the fundamental peak, F ∼ 780 T, corresponds via the
344
+ Onsager relation, F = ℏAkF /2πe, to the Fermi wave-vector of the unhybridized bands. (See §IV,
345
+ Methods, for details.) (b) shows the amplitude of the fundamental peak vs. λ. The black line is a
346
+ decaying exponential fit to the points with λ > 0.15 meV. (c) shows the temperature dependence
347
+ of the oscillations for selected values of λ.
348
+ factor of 10, as the Landau level passes from the electron-like to the hole-like part of the
349
+ valence band, so the Landau diamagnetism goes from a negative value to a 10× smaller,
350
+ positive value, so the step-change in M is slightly larger than in the conventional case,
351
+ where the diamagnetic moment merely disappears when the Landau level empties as it
352
+ crosses EF. In the original Knolle and Cooper paper [3] the hole band had infinite mass,
353
+ so the diamagnetic moment suddenly drops to zero at kF – exactly as if the Landau level
354
+ had emptied.
355
+ In Figs. 3a and 3b the effect of increasing the hybridization strength λ is shown. There is an exponential fall in dHvA amplitude with increasing λ as long as λ is not too small. This was noted by Knolle and Cooper, but some authors [9–11] mistakenly attributed this to magnetic breakdown, because it has a similar dependence on the energy gap Eg. But in fact, as noted in Ref. 7, the physical effect (recall that the conduction band is unoccupied in this calculation) has to do with the width of the hybridization-crossover region, compared with the Landau level spacing. If one Landau level crossing the hybridized region changes its diamagnetic moment before the next level enters, there is a large oscillatory effect. If, on the other hand, several Landau levels are crossing this region at the same time (the large λ case), as pictured in Fig. 1c, then the oscillations are small. It is of course a feature of degenerate perturbation theory that the larger the gap, the wider the region in k-space over which the bands change their slope.
+ Finally, in Fig. 3c the temperature dependence of the peak in the amplitude spectrum for several values of λ is shown. Similar results were found by Knolle and Cooper, and discussed by them. The key point is that when kBT ≫ Eg the hybridization gap no longer matters, and conventional Lifshitz-Kosevich temperature dependence is found.
+ III. CONCLUSIONS
+ By showing that the quantum oscillatory magnetization is contained in the sum over the diamagnetic moments of the electrons in occupied Landau levels, it follows that at T = 0 K the oscillations in the anomalous dHvA effect arise, not because the highest Landau level empties as in the conventional dHvA effect, but rather because its diamagnetic moment changes suddenly when the slope of the valence band changes in the hybridization region. This gives a simple interpretation of the quantum oscillatory magnetism found by Knolle and Cooper, and reinforces that this is a novel mechanism for quantum oscillations.
+ IV. METHODS
+ In Fig. 1a two bands, with E1(k) = ℏ²k²/2m1 and E2(k) = W + ℏ²k²/2m2, where m1 = 1me, m2 = −10me and W = 0.1 eV, are hybridized with a coupling of strength λ. (The negative mass for m2 is needed for consistency of notation, particularly with Eq. 8.) The resulting bands are
+ 
+     Ei(k) = Eav(k) ± √[∆E(k)² + λ²],                      (11)
+ 
+ where Eav(k) = 0.5(E1(k) + E2(k)), ∆E = 0.5(E1(k) − E2(k)), and Ei(k) is Ec(k) (Ev(k)) for the +(−) solution. Since the band structure is circularly symmetric, the wave-vector corresponding to a given Landau level ℓ at a field B can be obtained from πkℓ(B)² = Ak,ℓ(B) = 2πeB(ℓ + γ)/ℏ, and then m∗ℓ(B) for i = v, c is obtained from Eq. 8. In Figs. 1b and 1c the Landau levels are evaluated at B = 10 T.
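As a concrete illustration of this construction (an editorial sketch under the stated parameters, not the author's code; λ = 0.3 meV is just an example value), the hybridized valence band of Eq. 11, the Landau-level wave-vectors kℓ(B), and m∗ℓ(B) from Eq. 8 can be evaluated as follows. At B = 10 T the crossover near kF falls at roughly ℓ ≈ 78, and the sign and magnitude of m∗ℓ change across it, which is the behaviour plotted in Fig. 1c.

    import numpy as np

    hbar, me, e = 1.054571817e-34, 9.1093837015e-31, 1.602176634e-19

    # Parameters from this section; lam (the hybridization strength) is an example value
    m1, m2, W = 1.0 * me, -10.0 * me, 0.1 * e
    lam, gamma = 0.3e-3 * e, 0.5

    def E_valence(k):
        """Lower hybridized band Ev(k) from Eq. 11."""
        E1 = hbar**2 * k**2 / (2.0 * m1)
        E2 = W + hbar**2 * k**2 / (2.0 * m2)
        Eav, dE = 0.5 * (E1 + E2), 0.5 * (E1 - E2)
        return Eav - np.sqrt(dE**2 + lam**2)

    def k_landau(ell, B):
        """k_ell(B) from pi k^2 = A_k,ell(B) = 2 pi e B (ell + gamma) / hbar."""
        return np.sqrt(2.0 * e * B * (ell + gamma) / hbar)

    def m_star(ell, B, dk=1.0e3):
        """Eq. 8: m* = (hbar^2/2pi) dA_k/dE = hbar^2 k / (dE/dk), by finite difference."""
        k = k_landau(ell, B)
        dEdk = (E_valence(k + dk) - E_valence(k - dk)) / (2.0 * dk)
        return hbar**2 * k / dEdk

    B = 10.0
    for ell in (60, 78, 95):   # below, near, and above the crossover at kF
        print(ell, E_valence(k_landau(ell, B)) / e, m_star(ell, B) / me)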
+ For Figs. 2 and 3, Eqs. 8 and 10 were numerically evaluated. A spherical Brillouin zone boundary was used, with a gradual logic-function cutoff to avoid spurious zone-boundary dHvA oscillations. Care must be taken with this spherical zone boundary when taking derivatives.
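A minimal, self-contained sketch of such a T = 0 K evaluation is given below. It is an editorial illustration only: it uses a hard wave-vector cutoff rather than the gradual logic-function cutoff described above, it places EF at the crossing energy of the unhybridized bands (an assumption), and it sums over all valence-band Landau levels, so it will not reproduce Fig. 2b quantitatively.

    import numpy as np

    hbar, me, e = 1.054571817e-34, 9.1093837015e-31, 1.602176634e-19
    m1, m2, W = 1.0 * me, -10.0 * me, 0.1 * e
    lam, gamma = 0.1e-3 * e, 0.5                      # lambda = 0.1 meV as an example

    def E_valence(k):
        """Lower hybridized band Ev(k) from Eq. 11."""
        E1 = hbar**2 * k**2 / (2.0 * m1)
        E2 = W + hbar**2 * k**2 / (2.0 * m2)
        return 0.5 * (E1 + E2) - np.sqrt(0.25 * (E1 - E2)**2 + lam**2)

    kF = np.sqrt(2.0 * W / (hbar**2 * (1.0 / m1 - 1.0 / m2)))
    EF = hbar**2 * kF**2 / (2.0 * m1)                 # EF placed in the gap (assumption)

    def magnetization(B, k_cut=3.0e9, dk=1.0e3):
        """Eq. 10 per unit area: summed Landau diamagnetic moments plus the D/B term."""
        D = e * B / (2.0 * np.pi * hbar)              # Landau-level degeneracy per area
        n_max = int(hbar * k_cut**2 / (2.0 * e * B) - gamma)   # crude zone-boundary cutoff
        ell = np.arange(n_max)
        k = np.sqrt(2.0 * e * B * (ell + gamma) / hbar)
        dEdk = (E_valence(k + dk) - E_valence(k - dk)) / (2.0 * dk)
        inv_mstar = dEdk / (hbar**2 * k)              # 1 / m*_ell(B) from Eq. 8
        term_10a = -D * np.sum(hbar * e * inv_mstar * (ell + gamma))
        term_10b = -(D / B) * np.sum(E_valence(k) - EF)
        return term_10a + term_10b

    B_vals = np.linspace(10.0, 11.0, 400)
    M_vals = np.array([magnetization(B) for B in B_vals])   # oscillates at F ~ 780 T in 1/B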
+ For Fig. 2a, the hole-like band is absent, leaving only a metallic band, with m = 1me. EF is the same as for the anomalous dHvA case, so kF occurs at the crossing-point of the unhybridized bands in Fig. 1b. In Fig. 2b, lower panel, the same normalization was used as in (a). That is, only electrons in the electron-like part of the valence band are counted, leaving out the electrons in the Landau levels of the hole-like part, which are of course also occupied. This makes comparison with Fig. 2a meaningful.
+ To get the temperature dependence for Fig. 3c the usual expression for Ω was used:
+ 
+     Ω = −D kBT Σ_ℓ ln[1 + e^((EF − Eℓ)/kBT)],             (12)
+ 
+ which leads to the generalization of Eq. 10:
+ 
+     M = − D Σ_ℓ f(Eℓ, T) [ℏe/m∗ℓ(B)] (ℓ + γ)              (13a)
+ 
+         + (kBT D/B) Σ_ℓ ln[1 + e^((EF − Eℓ)/kBT)],        (13b)
+ 
+ where f(Eℓ, T) is the Fermi-Dirac distribution function, and the sum now includes both the conduction and the valence bands.
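A sketch of how Eq. 13 modifies the T = 0 K sum is shown below (editorial illustration only: E_levels, inv_mstar and ells stand for arrays produced by a band construction such as the one sketched earlier in this section, and the toy values at the end are placeholders, not results from the paper).

    import numpy as np

    kB, hbar, e, me = 1.380649e-23, 1.054571817e-34, 1.602176634e-19, 9.1093837015e-31

    def magnetization_T(B, E_levels, inv_mstar, ells, EF, T, gamma=0.5):
        """Eq. 13 per unit area: Fermi-Dirac-weighted moments plus the kB T ln(...) term."""
        D = e * B / (2.0 * np.pi * hbar)
        x = (E_levels - EF) / (kB * T)
        f = 0.5 * (1.0 - np.tanh(0.5 * x))            # Fermi-Dirac function 1/(e^x + 1)
        term_13a = -D * np.sum(f * hbar * e * inv_mstar * (ells + gamma))
        term_13b = (kB * T * D / B) * np.sum(np.logaddexp(0.0, -x))  # ln(1 + e^((EF-E)/kBT))
        return term_13a + term_13b

    # Toy usage with placeholder level data (three levels straddling EF):
    ells = np.arange(3)
    E_levels = np.array([0.085, 0.0905, 0.095]) * e
    inv_mstar = np.array([1.0, 2.0, -0.1]) / me
    print(magnetization_T(B=10.0, E_levels=E_levels, inv_mstar=inv_mstar,
                          ells=ells, EF=0.091 * e, T=4.0))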
+ V. ACKNOWLEDGEMENTS
+ This research was funded by NSERC (RGPIN-2019-06446).
+ [1] G. Li, Z. Xiang, F. Yu, T. Asaba, B. Lawson, P. Cai, C. Tinsman, A. Berkley, S. Wolgast, Y. S. Eo, D.-J. Kim, C. Kurdak, J. W. Allen, K. Sun, X. H. Chen, Y. Y. Wang, Z. Fisk, and L. Li, Science 346, 1208 (2014).
+ [2] B. S. Tan, Y.-T. Hsu, B. Zeng, M. Ciomaga Hatnean, N. Harrison, Z. Zhu, M. Hartstein, M. Kiourlappou, A. Srivastava, M. D. Johannes, T. P. Murphy, J.-H. Park, L. Balicas, G. G. Lonzarich, G. Balakrishnan, and S. E. Sebastian, Science 349, 287 (2015).
+ [3] J. Knolle and N. R. Cooper, Phys. Rev. Lett. 115, 146401 (2015).
+ [4] P. Ram and B. Kumar, Phys. Rev. B 96, 075115 (2017).
+ [5] J. Knolle and N. R. Cooper, Phys. Rev. Lett. 118, 176801 (2017).
+ [6] H. K. Pal, F. Piéchon, J.-N. Fuchs, M. Goerbig, and G. Montambaux, Phys. Rev. B 94, 125140 (2016).
+ [7] H. K. Pal, Phys. Rev. B 95, 085111 (2017).
+ [8] A. Panda, S. Banerjee, and M. Randeria, Proc. Natl. Acad. Sci. USA 119, e2208373119 (2022).
+ [9] O. Erten, P. Ghaemi, and P. Coleman, Phys. Rev. Lett. 116, 046403 (2016).
+ [10] I. Sodemann, D. Chowdhury, and T. Senthil, Phys. Rev. B 97, 045152 (2018).
+ [11] A. Ghazaryan, M. N. Emilian, O. Erten, and P. Ghaemi, New J. Phys. 23, 123042 (2021).
+ [12] D. Shoenberg, Magnetic oscillations in metals (Cambridge University Press, 1984).
+ [13] A. Pippard, Solid State Physics, vol. 1, Electrons in Metals, edited by J. F. Cochran and R. R. Haering (Gordon and Breach, 1968).
+ [14] N. W. Ashcroft and N. D. Mermin, Solid state physics (Holt, Rinehart and Winston, 1976), Chap. 12 and 14.
+ [15] L. Roth, Phys. Rev. 145, 434 (1966).
QdE4T4oBgHgl3EQf-w6Q/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,335 @@
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf,len=334
2
+ page_content='de Haas van Alphen oscillations in hybridization-gap insulators as a sudden change in the diamagnetic moment of Landau levels S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
3
+ page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
4
+ page_content=' Julian Department of Physics, University of Toronto, 60 St.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
5
+ page_content=' George Street, Toronto, Ontario, Canada M5S 1A7 (Dated: January 16, 2023) Abstract This note revisits the semi-classical theory of quantum oscillations in hybridization-gap insu- lators, and shows that the physical origin of the oscillations, at T = 0 K, is a sudden change in the diamagnetic moment of each Landau level as it crosses the hybridized region of the valence band.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
6
+ page_content=' 1 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
7
+ page_content='05366v1 [cond-mat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
8
+ page_content='str-el] 13 Jan 2023 I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
9
+ page_content=' INTRODUCTION For more than 80 years after the first observation of a quantum oscillatory magneti- zation, the de Haas-van Alphen (dHvA) effect was regarded as a signature of a metal, because the theory was formulated in terms of the emptying of quantized Landau levels as they sequentially cross the Fermi surface.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
10
+ page_content=' In this picture, there should be no oscilla- tory magnetization for a filled or empty band, because there is no Fermi surface for the Landau levels to cross.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
11
+ page_content=' Thus, the observation of dHvA oscillations in the Kondo insulator SmB6 [1, 2] was greeted with suprise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
12
+ page_content=' Almost equally surprising, a possible, very simple, theory of these quantum-oscillations-without-a-Fermi-surface was soon found: Knolle and Cooper [3] showed that quantum oscillations emerge in a straightforward calculation of the thermodynamic potential of a narrow-gap insulator with Landau-quantized states.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
13
+ page_content=' The physical origin of these oscillations has, however, been unclear, and a source of some confusion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
14
+ page_content=' As Ref.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
15
+ page_content=' [4] states, in the absence of a Fermi surface “it is not clear what surface is being measured.” The original Knolle and Cooper paper [3] does not suggest a physical origin for the oscillations: they emerge from the mathematics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
16
+ page_content=' Subsequent papers by these and other authors [5–8] simply state that the thermodynamic potential oscillates as Landau levels sequentially cross the region where the bands cross, although Pal [7] makes the enigmatic remark that, as B changes, the Landau levels “feel” the abrupt change in the slope of E(k) and “manifest as quantum oscillations”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
17
+ page_content=' Moreoever, some papers that offer competing theories, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
18
+ page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
19
+ page_content=' [9–11], state that the Knolle and Cooper dHvA oscillations are due to magnetic breakdown, in which the condition ℏωc ≳ E2 g/EF, where ωc is the quasiparticle cyclotron frequency, Eg is the energy gap and EF is the Fermi energy, allows the quasiparticles to tunnel across the hybridization gap, which would mean that these are, essentially, conventional quantum oscillations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
20
+ page_content=' The purpose of this brief, somewhat pedagogical, note is to present a simple semi- classical picture that shows that the anomalous Knolle-Cooper dHvA oscillations in hy- bridzation gap insulators are produced by the sudden change in the diamagnetic moment of the Landau levels as they pass between regions of E(k) having different quasiparticle velocities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
21
+ page_content=' This is unrelated to magnetic breakdown, and indeed it shows that this is a novel mechanism for the dHvA effect.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
22
+ page_content=' This paper does not address the issue of whether 2 this, as opposed to one of several competing theories (see Ref.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
23
+ page_content=' 8 for a recent list), is actually correct for the case of SmB6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
24
+ page_content=' II.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
25
+ page_content=' RESULTS AND DISCUSSION: SEMICLASSICAL CALCULATION OF THE DHVA EFFECT The situation considered by Knolle and Cooper, somewhat generalized in subsequent treatments [3, 5–8], is as pictured in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
26
+ page_content=' 1a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
27
+ page_content=' The essential feature is that two bands with very different dispersions (in this case one electron-like and one hole-like) weakly hybridize where they cross, and the Fermi energy EF lies in the gap.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
28
+ page_content=' An applied magnetic field B results in the formation of Landau levels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
29
+ page_content=' As B is increased, the Landau levels, illustrated in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
30
+ page_content=' 1b, sequentially pass from the electron-like to the hole-like part of the valence band, and vice-versa for the conduction band.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
31
+ page_content=' Working, for simplicity, at T = 0 in two-dimensions with spinless fermions and no impurity scattering, the grand canonical potential can be expressed as a sum over all of the Landau levels of the valence band, Ω(µ, T = 0, B) = D � ℓ (Eℓ,v − EF) , (1) where D = (eB/2πℏ) is the degeneracy of each Landau level per unit area of sample, and Eℓ,v is the energy of the ℓth Landau level in the valence band.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
32
+ page_content=' In the conventional treatment of the dHvA effect in a metal (see e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
33
+ page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
34
+ page_content=' Ref.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
35
+ page_content=' 12), the sum in Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
36
+ page_content=' 1 is approximated in such a way that the oscillatory part, ˜Ω, which involves only states in the immediate vicinity of EF, can be extracted.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
37
+ page_content=' Then it is straightforward to calculate the oscillations in any thermodynamic quantity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
38
+ page_content=' For example, the dHvA effect measures the oscillatory magnetization ˜ M = −∂ ˜Ω ∂B .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
39
+ page_content=' (2) One could approach the problem differently, however, obtaining the total magnetization by taking the derivative of Ω, rather than ˜Ω.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
40
+ page_content=' Consider first a simple parabolic band of non-interacting electrons, so that the dispersion relation is E(k) = ℏ2k2/2m.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
41
+ page_content=' Then, as in the case of electrons in free space, the Landau levels are quantized in energy as Eℓ = ℏωc(ℓ + 1/2), (3) 3 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
42
+ page_content='0 × 109 0 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
43
+ page_content='0 × 109 k (m 1) 50 0 50 E (meV) (a) Ec(k) Ev(k) 5 0 5 E (meV) (b) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
44
+ page_content='50 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
45
+ page_content='52 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
46
+ page_content='54 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
47
+ page_content='56 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
48
+ page_content='58 k (m 1) 1e9 50 0 (2 B/(elect.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
49
+ page_content=') (c) FIG.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
50
+ page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
51
+ page_content=' (a) Band structure for a hybridization-gap insulator, with the conduction band in red, and valence band in blue (see §IV, Methods, for details.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
52
+ page_content=') (b) A zoomed-in view of the circled region from (a), where the bands cross and hybridize, with Landau levels shown as blue and red points for the valence and conduction band respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
53
+ page_content=' The green and pink lines respectively are the unhybridized electron and hole bands at zero field.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
54
+ page_content=' (c) Shows the diamagnetic moment per electron in the Landau levels near the band crossing in the valence (blue) and conduction (red) bands, in units of double Bohr magnetons ℏe/me.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
55
+ page_content=' The hybridization strength is moderate, so that the band dispersion changes rather gradually compared to the Landau level spacing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
56
+ page_content=' At T = 0 K the conduction-band Landau levels are unoccupied.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
57
+ page_content=' where the cyclotron frequency is ωc = eB/m.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
58
+ page_content=' Substituting this in Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
59
+ page_content=' 1, and taking the derivative with respect to B using the non-obvious step of applying the chain rule to 4 separate the derivative inside the sum from the derivative of the prefactor D, gives M = − D ℓmax � ℓ=0 eℏ m (ℓ + 1/2) (4a) − Dℏωc B ℓmax � ℓ=0 [(ℓ + 1/2) − X].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
60
+ page_content=' (4b) The new variables are X ≡ EF/ℏωc, and ℓmax, which is the quantum number of the highest occupied Landau level at a given field B, which satisfies (ℓmax + 1/2) ≤ EF/ℏωc ≤ (ℓmax + 3/2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
61
+ page_content=' ℓmax changes by one every time a Landau level crosses EF.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
62
+ page_content=' In Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
63
+ page_content=' 2a the two terms of Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
64
+ page_content=' 4 are plotted separately (top), and summed (bottom).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
65
+ page_content=' It can be seen that, to a good approximation, the first term is comprised of an oscillatory part superposed on a large background, while the second term cancels (or very nearly cancels) the large background of the first term.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
66
+ page_content=' (The second term also has an oscillatory part, but it is normally ignored, being typically less than 1% of the oscillatory part of the first term.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
67
+ page_content=') That is, the oscillations are almost entirely contained in the first term, which has a simple physical interpretation: the Landau diamagnetic dipole moment of an electron in the ℓth Landau level is µ = −(eℏ/m)(ℓ + 1/2) (see below).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
68
+ page_content=' Thus the oscillatory magnetization comes from the sum over the diamagnetic dipole moments of the electrons in their Landau levels, and in particular the sharp step in the ‘sawtooth’ pattern of M is due to the loss of the Landau diamagnetic moment of the electrons in the ℓmax Landau level, when the level suddenly empties as it passes EF with increasing B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
69
+ page_content=' Shoenberg, in his book, [12] credits Brian Pippard with this insight, but it must have been known to Landau and others.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
70
+ page_content=' Pippard’s treatment, contained in the proceedings of a summer school that was held in Vancouver, Canada, in 1967 [13], is nevertheless useful.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
71
+ page_content=' In a departure from the usual Landau gauge approach, he uses the cylindrical gauge to construct wave-functions, for the Landau quantized electrons, that in real space are sharply peaked (provided ℓ is not too small) at the classical cyclotron radius, which encloses a real- space area Ar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
72
+ page_content=' Since the electrons circulate with period T = 2π/ωc, their diamagnetic dipole moment is µ = −eωc 2π Ar = − e2B 2πmAr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
73
+ page_content=' (5) 5 B T 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
74
+ page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
75
+ page_content='13 M (mJ/T m2) (a) conventional Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
76
+ page_content=' 4a (Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
77
+ page_content=' 4b) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
78
+ page_content='11 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
79
+ page_content='10 (b) anomalous Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
80
+ page_content=' 10a (Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
81
+ page_content=' 10b) 10 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
82
+ page_content='5 11 B (T) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
83
+ page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
84
+ page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
85
+ page_content='5 M (2 B/e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
86
+ page_content=' ) (Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
87
+ page_content=' 4a)+(Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
88
+ page_content=' 4b) 10 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
89
+ page_content='5 11 B (T) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
90
+ page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
91
+ page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
92
+ page_content='5 (Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
93
+ page_content=' 10a)+(Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
94
+ page_content=' 10b) FIG.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
95
+ page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
96
+ page_content=' (a), top panel, plots Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
97
+ page_content=' 4a and the negative of 4b for the conventional dHvA effect in an isolated metallic band of mass m = 1me, showing that Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
98
+ page_content=' 4a contains the quantum oscillations, while 4b, to a very good approximation, cancels the quasi-linear background, leaving a total magnetization (lower panel) that is dominated by the oscillations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
99
+ page_content=' (See §IV, Methods, for details.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
100
+ page_content=') (b) shows the corresponding plot for Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
101
+ page_content=' 10, the hybridization-gap insulator of Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
102
+ page_content=' 1, with a very weak hybridization so that the band dispersion changes very suddenly, compared to the Landau level spacing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
103
+ page_content=' Pippard merely remarks that one can reformulate the dHvA effect directly in terms of the diamagnetic magnetization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
104
+ page_content=' For an arbitrary band structure, applying the semi-classical relation between the real- space, Ar,ℓ, and the k-space, Ak,ℓ, areas of a Landau orbit [12, 14], Ar,ℓ(B) = ℏ2 e2B2Ak,ℓ(B) = 2πℏ eB (ℓ + γ), (6) where γ = 1/2 for circularly-symmetric two-dimensional bands such as we use here [15], gives µ = − ℏe m∗ ℓ(B)(ℓ + γ), (7) where m∗ ℓ(B) ≡ ℏ2 2π ∂Ak ∂E ���� ℓ,B .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
105
+ page_content=' (8) It is the factor 1/m∗ ℓ in Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
106
+ page_content=' 7 that is of interest.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
107
+ page_content=' The sign of m∗ ℓ determines the direction in which an electron circulates in its Landau orbit, and thus determines the sign of the 6 dipole moment;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
108
+ page_content=' meanwhile the magnitude of m∗, which in this case is really a proxy for the speed of the electrons in the Landau orbit, determines the magnitude of the moment: fast electrons (low m∗ ℓ) produce a strong diamagnetic moment, slow electrons (high m∗ ℓ) produce a weak diamagnetic moment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
109
+ page_content=' The diamagnetic moments of the Landau levels near the hybridization gap are illustrated in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
110
+ page_content=' 1c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
111
+ page_content=' On the light electron-like part of the bands the circulating electrons have a large negative diamagnetic moment, but on the heavy hole-like part (where m∗ ℓ is negative) they have a weak positive moment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
112
+ page_content=' The diamagnetic moment of the electrons in a Landau level in the valence band thus undergoes a sudden change when they cross the hybridized region, which is located at the Fermi wave-vector of the unhybridized bands.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
113
+ page_content=' Returning to Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
114
+ page_content=' 1 and using ∂Eℓ ∂B = ∂E ∂Ak ∂Ak ∂B = eℏ m∗ ℓ(B)(ℓ + γ), (9) gives the generalized version of Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
115
+ page_content=' 4: M = − D occ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
116
+ page_content=' � ℓ ℏe m∗ ℓ(B)(ℓ + γ) (10a) − D/B occ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
117
+ page_content=' � ℓ (Eℓ − EF).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
118
+ page_content=' (10b) In Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
119
+ page_content=' 2b the sum in Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
120
+ page_content=' 10 is evaluated for the band structure of Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
121
+ page_content=' 1a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
122
+ page_content=' Despite the fact that the Landau levels do not cross the Fermi energy, but rather remain in the valence band for all values of ℓ, it can be seen that the results for the ‘anomalous’ and ‘conventional’ dHvA effects are remarkably similar: the pattern is the same, the period of the oscillations is the same, and again the oscillations are contained in the sum over the diamagnetic moments of the electrons in the occupied Landau levels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
123
+ page_content=' There are slight differences: the magnitude of the quantum oscillations is slightly larger in the anomalous case, and the cancellation of background is not quite as good.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
124
+ page_content=' Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
125
+ page_content=' 10a, and Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
126
+ page_content=' 1c, demonstrate the physical mechanism of the Knolle-Cooper dHvA oscillations: as each subsequent Landau level in the valence band crosses the hybridized region at kF, its diamagnetic moment undergoes a sudden change in sign and magnitude as the electrons suddenly start circulating in the opposite sense at a different speed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
127
+ page_content=' The actual size of the step in M depends on the details of the band structure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
128
+ page_content=' In the case chosen here, the velocity of the electrons in a Landau level reverses, and slows by a 7 0 1 2 3 F (kT) 0 A = 1 eV 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
129
+ page_content='1 meV 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
130
+ page_content='3 meV (a) 10 11 B (T) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
131
+ page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
132
+ page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
133
+ page_content='5 M 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
134
+ page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
135
+ page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
136
+ page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
137
+ page_content='3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
138
+ page_content='4 (meV) Amax (b) T = 0 K 0 2 4 6 T (K) Amax (c) = 1 eV 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
139
+ page_content='1 meV 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
140
+ page_content='2 meV 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
141
+ page_content='4 meV FIG.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
142
+ page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
143
+ page_content=' (a) shows the effect of changing hybridization strength.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
144
+ page_content=' The inset compares oscillations for λ = 1 µeV (blue line) with λ = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
145
+ page_content='1 meV (red line).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
146
+ page_content=' The main figure shows the Fourier transform of M vs 1/B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
147
+ page_content=' The frequency of the fundamental peak, F ∼ 780 T, corrsponds via the Onsager relation, F = ℏAkF /2πe, to the Fermi wave-vector of the unhybridized bands.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
148
+ page_content=' (See §IV, Methods, for details.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
149
+ page_content=') (b) shows the amplitude of the fundamental peak vs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
150
+ page_content=' λ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
151
+ page_content=' The black line is a decaying exponential fit to the points with λ > 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
152
+ page_content='15 meV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
153
+ page_content=' (c) shows the temperature dependence of the oscillations for selected values of λ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
154
+ page_content=' factor of 10, as the Landau level passes from the electron-like to the hole-like part of the valence band, so the Landau diamagnetism goes from a negative value to a 10× smaller, positive value, so the step-change in M is slightly larger than in the conventional case, where the diamagnetic moment merely disappears when the Landau level empties as it crosses EF.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
155
+ page_content=' In the original Knolle and Cooper paper [3] the hole band had infinite mass, so the diamagnetic moment suddenly drops to zero at kF – exactly as if the Landau level had emptied.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
156
+ page_content=' In Figs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
157
+ page_content=' 3a and 3b the effect of increasing the hybridization strength λ is shown.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
158
+ page_content=' There is an exponential fall in dHvA amplitude with increasing λ as long as λ is not too small.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
159
+ page_content=' This was noted by Knolle and Cooper, but some authors [9–11] mistakenly attributed this to magnetic breakdown, because it has a similar dependence on the energy gap Eg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
160
+ page_content=' But in 8 fact, as noted in Ref.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
161
+ page_content=' 7, the physical effect (recall that the conduction band is unoccupied in this calculation) has to do with the width of the hybridization-crossover region, compared with the Landau level spacing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
162
+ page_content=' If one Landau level crossing the hybridized region changes its diamagnetic moment before the next level enters, there is a large oscillatory effect.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
163
+ page_content=' If, on the other hand, several Landau levels are crossing this region at the same time (the large λ case), as pictured in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
164
+ page_content=' 1c, then the oscillations are small.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
165
+ page_content=' It is of course a feature of degenerate perturbation theory that the larger the gap, the wider the region in k-space over which the bands change their slope.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
166
+ page_content=' Finally, in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
167
+ page_content=' 2c the temperature dependence of the peak in the amplitude spectrum for several values of λ is shown.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
168
+ page_content=' Similar results were found by Knolle and Cooper, and discussed by them.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
169
+ page_content=' The key point is that when kBT ≫ Eg the hybridization gap no longer matters, and conventional Lifshitz-Kosevich temperature dependence is found.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
170
+ page_content=' III.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
171
+ page_content=' CONCLUSIONS By showing that the quantum oscillatory magnetization is contained in the sum over the diamagnetic moments of the electrons in occupied Landau levels, it follows that at T = 0 K the oscillations in the anomalous dHvA effect arise, not because the highest Landau level empties as in the conventional dHvA effect, but rather because its diamagnetic moment changes suddenly when the slope of the valence band changes in the hybridization region.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
172
+ page_content=' This gives a simple interpretation of the quantum oscillatory magnetism found by Knolle and Cooper, and reinforces that this is a novel mechanism for quantum oscillations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
173
+ page_content=' IV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
174
+ page_content=' METHODS In Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
175
+ page_content=' 1a two bands, with E1(k) = ℏ2k2/2m1 and E2(k) = W + ℏ2k2/2m2, where m1 = 1me, m2 = −10me and W = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
176
+ page_content='1 eV, are hybridized with a coupling of strength λ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
177
+ page_content=' (The negative mass for m2 is needed for consistency of notation, particularly with Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
178
+ page_content=' 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
179
+ page_content=') The resulting bands are Ei(k) = Eav(k) ± � ∆E(k)2 + λ2, (11) 9 where Eav(k) = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
180
+ page_content='5(E1(k) + E2(k)), ∆E = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
181
+ page_content='5(E1(k) − E2(k)), and Ei(k) is Ec(k)(Ev(k)) for the +(−) solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
182
+ page_content=' Since the band structure is circularly symmetric, the wave-vector corresponding to a given Landau level ℓ at a field B can be obtained from πkℓ(B)2 = Ak,ℓ(B) = 2πeB(ℓ + γ)/ℏ, and then m∗ ℓ(B) for i = v, c is obtained from Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
183
+ page_content=' 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
184
+ page_content=' In Figs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
185
+ page_content=' 1b and 1c the Landau levels are evaluated at B = 10 T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
186
+ page_content=' For Figs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
187
+ page_content=' 2 and 3, Eqs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
188
+ page_content=' 8 and 10 were numerically evaluated.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
189
+ page_content=' A spherical Brillouin zone boundary was used, with a gradual logic-function cutoff to avoid spurious zone-boundary dHvA oscillations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
190
+ page_content=' Care must be taken with this spherical zone boundary when taking derivatives.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
191
+ page_content=' For Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
192
+ page_content=' 2a, the hole-like band is absent, leaving only a metallic band, with m = 1me.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
193
+ page_content=' EF is the same as for the anomalous dHvA case, so kF occurs at the crossing-point of the unhybridized bands in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
194
+ page_content=' 1b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
195
+ page_content=' In Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
196
+ page_content=' 2b, lower panel, the same normalization was used as in (a).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
197
+ page_content=' That is, only electrons in the electron-like part of the valence band are counted, leaving out the electrons in the Landau levels of the hole-like part, which are of course also occupied.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
198
+ page_content=' This makes comparison with Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
199
+ page_content=' 2a meaningful.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
200
+ page_content=' To get the temperature dependence for Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
201
+ page_content=' 3c the usual expression for Ω was used: Ω = −DkBT � ℓ ln � 1 + e(EF −Eℓ)/kBT� , (12) which leads to the generalization of Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
202
+ page_content=' 10: M = − D � ℓ f(Eℓ, T) ℏe m∗ ℓ(B)(ℓ + γ) (13a) + kBTD B � ℓ ln � 1 + e(EF −Eℓ)/kBT� , (13b) where f(Eℓ, T) is the Fermi-Dirac distribution function, and the sum now includes both the conduction and the valence bands.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
203
+ page_content=' 10 V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
204
+ page_content=' ACKNOWLEDGEMENTS This research was funded by NSERC (RGPIN-2019-06446).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
205
+ page_content=' [1] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
206
+ page_content=' Li, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
207
+ page_content=' Xiang, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
208
+ page_content=' Yu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
209
+ page_content=' Asaba, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
210
+ page_content=' Lawson, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
211
+ page_content=' Cai, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
212
+ page_content='Tinsman, A. Berkley, S. Wolgast, Y. S. Eo, D.-J. Kim, C. Kurdak, J. W. Allen, K. Sun, X. H. Chen, Y. Y. Wang, Z. Fisk, and L. Li, Science 346, 1208 (2014).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
+ page_content='[2] B. S. Tan, Y.-T. Hsu, B. Zeng, M. Ciomaga Hatnean, N. Harrison, Z. Zhu, M. Hartstein, M. Kiourlappou, A. Srivastava, M. D. Johannes, T. P. Murphy, J.-H. Park, L. Balicas, G. G. Lonzarich, G. Balakrishnan, and S. E. Sebastian, Science 349, 287 (2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
+ page_content='[3] J. Knolle and N. R. Cooper, Phys. Rev. Lett. 115, 146401 (2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
+ page_content='[4] P. Ram and B. Kumar, Phys. Rev. B 96, 075115 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
+ page_content='[5] J. Knolle and N. R. Cooper, Phys. Rev. Lett. 118, 176801 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
+ page_content='[6] H. K. Pal, F. Piéchon, J.-N. Fuchs, M. Goerbig, and G. Montambaux, Phys. Rev. B 94, 125140 (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
+ page_content='[7] H. K. Pal, Phys. Rev. B 95, 085111 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
+ page_content='[8] A. Panda, S. Banerjee, and M. Randeria, Proc. Natl. Acad. Sci. USA 119, e2208373119 (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
+ page_content='[9] O. Erten, P. Ghaemi, and P. Coleman, Phys. Rev. Lett. 116, 046403 (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
+ page_content='[10] I. Sodemann, D. Chowdhury, and T. Senthil, Phys. Rev. B 97, 045152 (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
+ page_content='[11] A. Ghazaryan, M. N. Emilian, O. Erten, and P. Ghaemi, New J. Phys. 23, 123042 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
+ page_content='[12] D. Shoenberg, Magnetic oscillations in metals (Cambridge University Press, 1984).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
+ page_content='[13] A. Pippard, Solid State Physics, vol. 1, Electrons in Metals, edited by J. F. Cochran and R. R. Haering (Gordon and Breach, 1968).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
+ page_content='[14] N. W. Ashcroft and N. D. Mermin, Solid state physics (Holt, Rinehart and Winston, 1976) Chap. 12 and 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}
+ page_content='[15] L. Roth, Phys. Rev. B 145, 434 (1966).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/QdE4T4oBgHgl3EQf-w6Q/content/2301.05366v1.pdf'}