jackkuo committed on
Commit 1bd9209 · verified · 1 Parent(s): bab24dc

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50)
  1. -9AyT4oBgHgl3EQfdffn/content/2301.00305v1.pdf +3 -0
  2. -9AyT4oBgHgl3EQfdffn/vector_store/index.pkl +3 -0
  3. -NFST4oBgHgl3EQfcDge/content/tmp_files/2301.13801v1.pdf.txt +1842 -0
  4. -NFST4oBgHgl3EQfcDge/content/tmp_files/load_file.txt +0 -0
  5. -dAyT4oBgHgl3EQfRPaF/content/2301.00062v1.pdf +3 -0
  6. -dAyT4oBgHgl3EQfRPaF/vector_store/index.pkl +3 -0
  7. -dAzT4oBgHgl3EQfg_zf/content/tmp_files/2301.01479v1.pdf.txt +1197 -0
  8. -dAzT4oBgHgl3EQfg_zf/content/tmp_files/load_file.txt +0 -0
  9. -tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf +3 -0
  10. -tAyT4oBgHgl3EQfdfed/vector_store/index.faiss +3 -0
  11. -tAyT4oBgHgl3EQfdfed/vector_store/index.pkl +3 -0
  12. -tE1T4oBgHgl3EQfUwM8/content/tmp_files/2301.03093v1.pdf.txt +712 -0
  13. -tE1T4oBgHgl3EQfUwM8/content/tmp_files/load_file.txt +390 -0
  14. .gitattributes +57 -0
  15. 19FIT4oBgHgl3EQf4Su_/content/2301.11385v1.pdf +3 -0
  16. 19FIT4oBgHgl3EQf4Su_/vector_store/index.faiss +3 -0
  17. 1NE4T4oBgHgl3EQfaAwh/vector_store/index.faiss +3 -0
  18. 1tAzT4oBgHgl3EQfe_wu/content/tmp_files/2301.01444v1.pdf.txt +1409 -0
  19. 1tAzT4oBgHgl3EQfe_wu/content/tmp_files/load_file.txt +0 -0
  20. 2tE2T4oBgHgl3EQf5gjq/content/tmp_files/2301.04192v1.pdf.txt +1706 -0
  21. 2tE2T4oBgHgl3EQf5gjq/content/tmp_files/load_file.txt +0 -0
  22. 4dE4T4oBgHgl3EQfbgxF/content/tmp_files/2301.05073v1.pdf.txt +0 -0
  23. 4dE4T4oBgHgl3EQfbgxF/content/tmp_files/load_file.txt +0 -0
  24. 4dFST4oBgHgl3EQfZzjU/content/tmp_files/2301.13793v1.pdf.txt +881 -0
  25. 4dFST4oBgHgl3EQfZzjU/content/tmp_files/load_file.txt +0 -0
  26. 5tE4T4oBgHgl3EQfbwzK/vector_store/index.faiss +3 -0
  27. 89FAT4oBgHgl3EQfpR2f/vector_store/index.faiss +3 -0
  28. 89FAT4oBgHgl3EQfpR2f/vector_store/index.pkl +3 -0
  29. 8dAyT4oBgHgl3EQfQvbW/content/2301.00054v1.pdf +3 -0
  30. 8dAyT4oBgHgl3EQfQvbW/vector_store/index.faiss +3 -0
  31. 8dAyT4oBgHgl3EQfQvbW/vector_store/index.pkl +3 -0
  32. 9dE2T4oBgHgl3EQflwec/content/tmp_files/2301.03992v1.pdf.txt +2044 -0
  33. 9dE2T4oBgHgl3EQflwec/content/tmp_files/load_file.txt +0 -0
  34. ANE5T4oBgHgl3EQfSg9u/content/tmp_files/2301.05529v1.pdf.txt +2921 -0
  35. ANE5T4oBgHgl3EQfSg9u/content/tmp_files/load_file.txt +0 -0
  36. AtFQT4oBgHgl3EQf9DeI/content/2301.13449v1.pdf +3 -0
  37. AtFQT4oBgHgl3EQf9DeI/vector_store/index.faiss +3 -0
  38. AtFQT4oBgHgl3EQf9DeI/vector_store/index.pkl +3 -0
  39. CNAzT4oBgHgl3EQfwP6m/content/tmp_files/2301.01720v1.pdf.txt +657 -0
  40. CNAzT4oBgHgl3EQfwP6m/content/tmp_files/load_file.txt +501 -0
  41. CdFJT4oBgHgl3EQfASzG/content/2301.11420v1.pdf +3 -0
  42. CdFJT4oBgHgl3EQfASzG/vector_store/index.faiss +3 -0
  43. CtE2T4oBgHgl3EQfSAdG/vector_store/index.faiss +3 -0
  44. CtE2T4oBgHgl3EQfSAdG/vector_store/index.pkl +3 -0
  45. D9FQT4oBgHgl3EQfQDZ4/vector_store/index.faiss +3 -0
  46. DtFKT4oBgHgl3EQfZS4v/content/tmp_files/2301.11802v1.pdf.txt +1239 -0
  47. DtFKT4oBgHgl3EQfZS4v/content/tmp_files/load_file.txt +0 -0
  48. G9AzT4oBgHgl3EQfjP2K/content/2301.01513v1.pdf +3 -0
  49. G9AzT4oBgHgl3EQfjP2K/vector_store/index.faiss +3 -0
  50. G9AzT4oBgHgl3EQfjP2K/vector_store/index.pkl +3 -0
-9AyT4oBgHgl3EQfdffn/content/2301.00305v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df1b0c7743efd05e01857316f5a85a10828f63f5e32c5d8eaf6b6e10feea85bf
+ size 1436631
-9AyT4oBgHgl3EQfdffn/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ed795260b9ebc97ff85522d84c9699f877fa2574e548052243ebbacaa3318a9
+ size 1108341
-NFST4oBgHgl3EQfcDge/content/tmp_files/2301.13801v1.pdf.txt ADDED
@@ -0,0 +1,1842 @@
arXiv:2301.13801v1 [cs.SI] 29 Jan 2023

Cultural Differences in Friendship Network Behaviors: A Snapchat Case Study

Agrima Seth (agrima@umich.edu), School of Information, University of Michigan, Ann Arbor, Michigan, USA
Jiyin Cao (jiyincao@gmail.com), Stony Brook University, Stony Brook, New York, USA
Xiaolin Shi (Xiaolin@snap.com), Snap Inc., Santa Monica, California, USA
Ron Dotsch (rdotsch@snap.com), Snap Inc., Santa Monica, California, USA
Yozen Liu (yliu2@snap.com), Snap Inc., Santa Monica, California, USA
Maarten W. Bos (maarten@snap.com), Snap Inc., Santa Monica, California, USA

ABSTRACT
Culture shapes people's behavior, both online and offline. Surprisingly, there is sparse research on how cultural context affects network formation and content consumption on social media. We analyzed the friendship networks and dyadic relations between content producers and consumers across 73 countries through a cultural lens in a closed-network setting. Closed networks allow for intimate bonds and self-expression, providing a natural setting to study cultural differences in behavior. We studied three theoretical frameworks of culture: individualism, relational mobility, and tightness. We found that friendship networks formed across different cultures differ in egocentricity, meaning the connectedness between a user's friends. Individualism, mobility, and looseness also significantly and negatively moderate how tie strength affects content consumption. Our findings show how culture affects social media behavior, and we outline how researchers can incorporate this in their work. Our work has implications for content recommendations and can improve content engagement.

CCS CONCEPTS
• Human-centered computing → Social networks; Social media; Social network analysis.

KEYWORDS
Social media platforms, Cross-cultural analysis, Social ties, User Behavior Modeling, relationship modeling, tie strength

ACM Reference Format:
Agrima Seth, Jiyin Cao, Xiaolin Shi, Ron Dotsch, Yozen Liu, and Maarten W. Bos. 2023. Cultural Differences in Friendship Network Behaviors: A Snapchat Case Study. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23), April 23–28, 2023, Hamburg, Germany. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3544548.3581074

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
CHI '23, April 23–28, 2023, Hamburg, Germany
© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-9421-5/23/04...$15.00
https://doi.org/10.1145/3544548.3581074
1 INTRODUCTION
In the past two decades, social media platforms have transformed how individuals build and maintain their relationships. These platforms are increasingly becoming the preferred method for initiating intimate relationships [53], seeking advice [32], and community building [33]. With social media platforms becoming an integral part of social life for many of us (there are 4.26 billion social media users as of 2021 [42]), understanding the drivers of user behaviors is imperative.

Directly engaging with others (e.g., sending messages) and consuming their content (e.g., viewing, replying, and reacting to Stories and posts) are often studied to understand behavioral patterns on social media platforms. User behavior on online social media platforms can be said to be broadly driven by a complex combination of (a) user identity (personality, demographics), (b) the norms (descriptive and prescriptive) that the users in a network collectively subscribe to, (c) the relationship between users (friends, acquaintances, strangers), (d) usage intent, for example, professional (LinkedIn) vs. curated self-presentation (Instagram), and (e) platform affordances. While any particular platform usually provides the same affordances to all users on that platform, users bring their different backgrounds, experiences, expectations, beliefs, and values to the platform. As a result, different behaviors on the same platform are culturally influenced [2, 7, 11, 36].

Most studies on social media user behavior are based on data that is west-centric [39, 54], and thus, their results have an implied context of western cultural norms. These findings fail to account for the heterogeneity in user behavior that arises from different cultural contexts [4, 32]. Hence, to further understand how cultural values affect behavior on these platforms, our work focuses on how users from different cultural backgrounds interact differently on a platform. Specifically, we use theoretical frameworks of cultural values to study the differences in the formation of friendship networks and the moderation of differential content consumption behavior within these friendship networks. This paper uses three theoretical frameworks of cultural values: Hofstede's concept of Individualism [20], Thomson and colleagues' concept of Relational Mobility [46], and Gelfand and colleagues' concept of Tightness [15]. The data we used for our analyses is from the camera and messaging platform Snapchat. Snapchat is used in almost 150 countries and has 347 million daily active users worldwide [41]. Snapchat is a closed network, meaning that a lot of the content shared by individuals on Snapchat is only available to a limited set of trusted users. Past work on eliciting the motivations for Snapchat usage has shown that Snapchat is used to communicate with close relationships and is viewed as a platform with a relatively lower emphasis on self-presentation and impression management compared to platforms like Instagram [3, 6, 35, 50]. Because closed networks have fewer formal pressures and allow for intimate bonds and self-expression, they provide us with a cleaner setting to study differences in human behavior.

Specifically, we focus on 1) how culture influences network creation and 2) how culture influences content consumption behaviors embedded in the network. In particular, for the second question, we are interested in how culture moderates the effect of tie strength (i.e., the closeness between individuals) on content consumption. Past work has shown that tie strength strongly predicts a variety of user behaviors on platforms, including what information will be exchanged [31, 51, 54], the likelihood to change one's actions [8], the attention given to content [52], and the preferred behavior to signal engagement [5]. We explore how culture moderates tie strength's effect on content consumption. To study content consumption behavior, we use the metric of dwell time, i.e., the time a user spends consuming content that another user creates.

In sum, we ask the following research questions:
(1) How do friendship networks differ in countries with different cultural values?
(2) How do cultural values change the effect of tie strength on dwell time?

To answer our first research question, we studied the network properties of friendship networks across 73 countries, which have been surveyed by either Hofstede [20], Thomson et al. [46], or Gelfand et al. [15], and have different cultural values that lie on a continuum of the three cultural values of individualism, mobility, and tightness. We analyzed how friendship network size and egocentricity — the extent to which a person's friends are connected with each other — vary across cultures in the closed-network setting of Snapchat. We find that users from more individualistic, mobile, and loose cultures have a more extensive friendship network and are less egocentric. Next, we analyzed within these networks how tie strength between users impacts engagement with content (dwell time) and the role of cultural values as a moderator. We found that individualism, mobility, and looseness negatively moderate the effect of tie strength on content consumption.

Where previous work on culture and social media platforms has primarily been limited to a small sample size [2, 39], this paper contributes by studying cultural differences in user behavior on a large scale, analyzing hundreds of thousands of users across many countries. Further, where other quantitative works are usually limited to open or broadcast networks, this study explores relatively under-studied closed-network settings [23].

From an HCI and design perspective, our work can advance our understanding of behavior patterns across cultures. We discuss the implications of understanding users' engagement with content to design better experiences for the user. When applied to platform design, our work would help user retention on platforms without compromising the user experience, in turn creating better outcomes for both users and platforms. Our work furthers the research that helps answer the question: What does it mean to understand and support users from diverse cultures on online platforms? [14]. Most of the designs and practices of online platforms have a 'one-size-fits-all' approach and do not actively account for different user preferences across geographies. Our results provide evidence of differential behavioral patterns in online friendship networks across cultures and suggest how algorithm design can be culturally inclusive.
1.1 Privacy and Ethics
The data for this study was taken from Snapchat, and the study was conducted within Snapchat in accordance with Snapchat's policies and procedures with respect to Snapchat data. This analysis only uses the metadata of the user behavior. It does not analyze the actual content of the communication between the users.
2 RELATED WORK

2.1 Ties and user behavior
Interpersonal relationships make social media platforms social. As in offline social networks, an individual's online network consists of individuals, with each of whom one shares a different type of relationship. Each dyadic relationship is different based on the closeness and the purpose it serves for the individual. The social network analysis literature uses the term tie strength to differentiate between relations of different closeness. This term was coined by Granovetter [17], who analyzed the role of different ties in different situations. The two types of ties characterized were strong and weak. The four dimensions determining a tie's strength were: the amount of time spent on a tie, the intimacy, the intensity, and reciprocal services [17]. Although researchers have used different operationalizations to conceptualize tie strength depending on the purpose of the study, many works on social media platforms operationalize tie strength as proportional to the total number of exchanges in the dyad. This operationalization of tie strength has been used to study various phenomena, like promoting mental well-being [25], increased diffusion of information, and access to novel information [17, 49]. While these works analyze the role of tie strength in reaping social benefits, studies have also focused on justifying Granovetter's hypothesis that the two ties elicit different interaction patterns, for which they analyze how information from different ties is received [19, 23, 24, 52]. These studies find evidence that individuals spend more time on content received from stronger ties.

2.2 Cultural values
One primary aspect of culture is that it is the normative value system that dictates acceptable practices and helps differentiate one group from another. Culture is both a result of accepted past actions and a determinant of acceptable future actions. One of the ways to reason about attitudes and actions is to understand the culture people are in. Prior studies have shown that an individual's behavior in the online space is influenced by their culture in the same way as offline behaviors. With cultural values shaping actions, we must first understand how culture can be measured and then how culture affects behavior. While prior work usually focuses on groups and their specialized culture, we introduce literature from cultural psychology in our work. Culture is often operationalized through dimensions, where a dimension is defined as "an aspect of a culture that can be measured relative to other cultures." In this paper, we bring in concepts from three dominant cultural psychology theories, namely individualism-collectivism [20], relational mobility [46], and tightness-looseness [15], to explore how culture impacts network creation and content consumption behaviors within a network. Below, we briefly introduce each of the cultural dimensions.
2.2.1 Hofstede's Individualism-collectivism. Hofstede [20] analyzed data from over 50 countries and identified six critical dimensions of national culture. Individualism-collectivism is one dimension that has drawn the most research attention. Typically, individualism leads to loose ties among the individuals of a society. Individualists focus on "I" as opposed to "we." Because groups are less important to them, individualists also tend to show no difference in their behaviors and attitudes toward ingroups versus outgroups. In contrast, collectivism leads to a collective identity, and the welfare of an individual is implicitly assumed to be linked to the interests of the larger group. Hence, collectivists focus on "we." Because of their particular focus on "we," collectivists are known to have different norms and behaviors towards ingroups versus outgroups and place greater emphasis on harmony.

Because of the "I" nature, individualists need to constantly reach out to build networks and also tend to see relationships as fluid. In contrast, collectivists see relationships as given, and thus, they are less active in building networks. As a result, we predict that individualism will be positively correlated with friendship network size. Individualists are less likely to treat other people based on relationship strength and group membership, whereas collectivists tend to have a strong tendency to favor ingroup members and people they are close to. This should also be manifested in how tie strength drives content engagement behavior in different cultures. As a result, we predict that individualism will negatively moderate the positive effect of tie strength on content engagement, such that the effect of tie strength on content engagement will be weaker for individualists than for collectivists.

Hence, we hypothesize that:
H1a: The friendship network for individualistic cultures is larger than the friendship network for collectivistic cultures.
H1b: Individualism negatively moderates the effect of tie strength on content engagement.
2.2.2 Relational Mobility. Thomson et al. [46] conducted a survey across 39 countries using a set of 12 questions to construct their dimension of culture. Relational mobility indicates the degree of freedom and opportunities the members of a culture have to form and terminate relationships. The two opposing poles on this index are high and low relational mobility. For example, relational mobility is high in North America and low in Japan. Because relationships in high-mobility cultures are less stable and easier to change than those in low-mobility cultures, they are more fragile, and it requires more effort to maintain committed relationships. Prior work has shown that cultures with higher relational mobility tend to share more about themselves (self-disclosure), are more active in giving support, and tend to have more trust in the members of the society [46, 55]. Because cultures high in mobility offer more opportunities to form relationships, individuals in them can have a larger network. In a similar vein, because individuals in high-mobility cultures see relationships as more fragile and fluid, they are less likely to adjust their interpersonal behaviors based on tie strength. As such, we predict that relational mobility will negatively moderate the effect of tie strength on content engagement, such that the effect of tie strength on content engagement will be weaker in high-mobility cultures than in low-mobility cultures.

Hence, we hypothesize that:
H2a: The friendship network for high-mobility cultures is larger than the friendship network of low-mobility cultures.
H2b: Relational mobility negatively moderates the effect of tie strength on content engagement.
2.2.3 Tightness. Gelfand et al. [15] conducted a survey across 33 countries using 12 behaviors across 15 situations to construct their dimension. Tightness-looseness is about the extent to which a society tolerates norm-deviant behaviors. The two opposing poles on this index are tight and loose. For example, looseness is high in North America and low in Japan. Tight cultures have stronger norms and are less tolerant of behavior that deviates from the norm. In contrast, loose cultures have relatively weaker norms and are more tolerant of behavior that deviates from the norm. As such, we predict that tightness should be negatively correlated with network size because a tight culture makes it hard for people to bring new members into a social network. Cultural tightness is often considered a selection criterion to test whether a new member can fit in. In contrast, the level of scrutiny will be much lower in a loose culture, making it easier for an individual to expand their network. Similarly, we predict that tie strength's effect on content engagement will be weaker in loose cultures than in tight cultures. In a loose culture, tie strength is less likely to be seen as a criterion that individuals rely upon to decide how they approach a person. In contrast, in a tight culture, tie strength is a monitoring mechanism that powerfully regulates people. As a result, people draw more influence from tie strength, including in content engagement behavior.

Hence, we hypothesize that:
H3a: The friendship networks for tighter cultures are smaller than the friendship networks of looser cultures.
H3b: Tightness positively moderates the effect of tie strength on content engagement.

Although the three cultural dimensions originated from different theories, they are often conceptually related. Prior work has shown that individualism, relational mobility, and looseness are often moderately correlated (Thomson et al., 2018, Appendix Table S8, p. 51 [46]). For example, the U.S. is a culture that is individualistic, high mobility, and loose at the same time, whereas Japan is a culture that is collectivistic, low mobility, and tight. However, while Germany ranks higher in individualism and mobility, it ranks lower in looseness, whereas Brazil, though less individualistic, is more mobile and loose. Thus, while the three theories are conceptually related and can serve as a robustness check for one another, they each touch upon a unique cultural aspect. When researchers study the effect of one of the cultural values on individuals, they also tend to include the other two as a robustness check [44, 46]. As a result, although the three dimensions are from different theories, we see them as a whole package.

In sum, culture provides important context about the shared common knowledge available to its members on how to behave in a given situation and how others will interpret their behavior. Comparative work on interpersonal relationships across cultures has shown that the same relationships elicit different behaviors in different cultures, implying that the same relationships across cultures are not similarly perceived [13, 16, 18, 30, 37, 47]. Our work aims to analyze whether user behavior on the same online platform provides empirical evidence that the impact of tie strength on behavior varies across cultures.
3 DATA
We conduct our study on the Snapchat platform. Snapchat is an online messaging platform where content shared between users is ephemeral. Like most platforms, Snapchat allows users to exchange content in the form of text, images, and videos. The interactions between users can be one-to-one, one-to-group, or one-to-all-friends (a broadcast interaction). Interactions are identified by different names and are introduced below:

• Snaps: A direct or personal interaction of image or video content type between users, which may be one-to-one or one-to-group. Depending on the receiver's chosen settings, Snaps disappear immediately after viewing or 24 hours later. In our analysis, we only consider Snaps that are exchanged between dyads (just two users), which are termed 'direct Snaps.' We do not analyze Snaps sent to groups.
• Chats: A text message between users. Akin to Snaps, depending on the receiver's chosen settings, chats disappear immediately after viewing or 24 hours later. In our analysis, we only consider chats that are exchanged between dyads (just two users), which are termed 'direct chats.' We do not analyze group chats.
• Stories: A broadcast interaction (with all of one's friends) having an image or video as the content type. Users on Snapchat (posters) can create Stories for their friends (viewers) to consume. Stories constitute a pull communication wherein friends decide to either engage with a Story in part or whole or ignore it. Unlike Snaps and chats, which disappear after watching, Stories are available for 24 hours after posting and can be viewed multiple times.

We analyze users on Snapchat who share a friend connection. Friendships on Snapchat are bidirectional, unlike the 'follow model' that platforms like Instagram and Twitter allow (i.e., both individuals need to add each other as friends on Snapchat). For each of the 73 countries (refer to Appendix D), we randomly sampled 10,000 unique users (egos), their associated Story viewing activity for one month, and their complete one-hop friend network. Though users may have friends across geographies, we filtered the data to include only those friend pairs where both friends resided in the same country. Aggregated over all 73 countries, cross-country friendships accounted for 21.8% of the data. The filtering resulted in a total dataset of approximately 600,000 users per country. Each user can view Stories from multiple friends, with each of whom they share a different level of closeness. This results in a dataset of unique dyadic relations between a Story viewer and a Story poster. For each dyadic interaction, we calculate aggregated statistics of the total time spent by a viewer on each of the poster's Stories, the total number of Stories shared by a poster, and the total number of Snaps and chats exchanged between the two in the dyadic communication. To avoid noise from users who rarely engage with each other, we only keep those dyadic pairs where at least one direct chat or Snap has been exchanged by both the Story poster and the viewer during the one month we analyzed. To control for effects unrelated to cultural values but caused by the economic development and platform reach in a country, we include each country's GDP [22], which is a measure of a country's economic standing; GINI [45], which is a measure of economic inequality within a nation; and Snap's market penetration (internal Snap Inc. marketing data), which measures the user base of Snapchat in a country. Section 4 details the process used to answer each research question. The three cross-cultural theories that inform our study did not survey all the same countries. Thus, while the three theories do not have a perfect overlap with each other (refer to Appendix D), using all three allows us to cover 73 unique countries.
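As a concrete illustration of this dyad construction, the following minimal pandas sketch applies the two filters described above (same-country friend pairs, and at least one direct chat or Snap sent by each side during the month). The column names and toy values are hypothetical and are not Snap's actual schema.

```python
import pandas as pd

# Toy dyad table: one month of aggregates per (viewer, poster) pair.
# Column names are illustrative, not Snap's actual schema.
dyads = pd.DataFrame({
    "viewer_country":   ["US", "US", "US", "JP"],
    "poster_country":   ["US", "FR", "US", "JP"],
    "msgs_from_viewer": [3, 2, 0, 1],   # direct chats + Snaps sent by the viewer
    "msgs_from_poster": [1, 4, 2, 2],   # direct chats + Snaps sent by the poster
})

# Keep only same-country friend pairs (cross-country dyads are dropped).
same_country = dyads[dyads["viewer_country"] == dyads["poster_country"]]

# Keep only dyads where both sides sent at least one direct chat or Snap.
active = same_country[
    (same_country["msgs_from_viewer"] > 0) & (same_country["msgs_from_poster"] > 0)
]
print(active)
```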
4 METHOD
We use the observational data from Section 3 and create statistical models to understand the role of culture in users' network formation and content engagement (dwell time). Building on and aligning with prior cross-cultural work, we consider a country a representative unit of one culture [15, 20, 46] and analyze the users at the group level of a country.

4.1 RQ1: How do friendship networks differ in countries with different cultural values?
We first measured each country's average friendship network size to determine whether people from different cultures have different friendship networks. For this, we calculated the total number of friends per user in each country and averaged it over the total number of users in the country.

Next, for each country under study, we reconstruct the ego network (egonet) for that country's randomly sampled 10,000 users. An ego network consists of the user (the ego), the user's friends (the alters), and the friendship relations between the alters. The egonets formed were independent, i.e., the users' egonets did not overlap. We filter out networks that consist of only two nodes (users who are only connected to the default Snapbot and do not have other friends on the platform) or star graphs (a pattern where a user is connected to other users, but none of those other users are connected, which is a pattern mainly shown by bots [38, 48]). Since all friendships on Snapchat are bidirectional, we convert the graph to a simple graph by removing multiple edges (edges that are incident on the same pair of nodes). For each of the egonets, we calculate measures of egocentricity — the density, the transitivity, and the betweenness centrality of the ego — using the igraph package in R [12].
Ego betweenness measures the percentage of shortest paths between two alters that pass through the ego. In a social network setting, it allows us to measure the importance of the ego node: the higher the betweenness centrality, the more the ego node is the binding factor between its friends. Since centrality is sensitive to network size, we normalized it by the maximum possible betweenness of the ego node. This approach is in line with prior work on measuring betweenness in egonets by Na et al. [28].

$\text{Betweenness centrality of node } i = \sum_{i \neq j \neq k} \frac{g_{jk}(i)}{g_{jk}}$

where $g_{jk}$ is the number of shortest paths that connect node j and node k, and $g_{jk}(i)$ is the number of these shortest paths that include node i.

Network density is the ratio of the edges in the user's network to the edges of the same user's hypothetical network where every node is connected to every other node. Likewise, transitivity is the number of triads relative to the number of possible triads. In our setting, density and transitivity measure the tendency of the users to cluster or connect: the higher the density and transitivity, the greater the tendency of the group to cluster.

$\text{Density for an undirected graph} = \frac{\sum_{j \neq k} z_{jk}}{n(n-1)/2}$

where n is the number of nodes in the network, and $z_{jk}$ is equal to 1 if the alters j and k are connected.

$\text{Transitivity for an undirected graph} = \frac{3 \times \text{number of triangles in the network}}{\text{number of connected triples of nodes in the network}}$

High density and transitivity are indicative of people connecting with friends of friends; high ego betweenness, on the other hand, indicates that the alters are linked mainly through the ego rather than clustering together. Prior work by Na et al. [28] on self-reported Facebook networks in East Asia and the USA found that users from the USA were more egocentric than users from East Asia (they had higher ego betweenness and lower density and transitivity). We use the same methodology — applied to data across more countries — to explore whether these findings generalize across platforms and to data that is not self-reported but is an individual's actual network data from a social media platform. To maintain consistency with Na et al. [28], we log-transform density and transitivity and then invert the transformation by multiplying by minus one; we transform betweenness using $\log(1 + \max(x) - x)$ and then invert the transformation by multiplying by minus one.
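The paper computes these measures with the igraph package in R; purely as an illustration, the sketch below reproduces the same quantities and transformations in Python with networkx (a stand-in, not the authors' code). The toy graph, and the assumption that density and transitivity are strictly positive before the log transform, are for demonstration only.

```python
import networkx as nx
import numpy as np

def ego_measures(g: nx.Graph, ego) -> dict:
    """Egocentricity measures for one ego network."""
    alters = list(g.neighbors(ego))
    egonet = g.subgraph([ego] + alters)
    return {
        # normalized=True divides by the maximum possible betweenness of the ego,
        # in the spirit of the paper's normalization.
        "betweenness": nx.betweenness_centrality(egonet, normalized=True)[ego],
        "density": nx.density(egonet),
        "transitivity": nx.transitivity(egonet),
    }

# Toy graph: ego "e" with three alters, two of whom are connected to each other.
g = nx.Graph([("e", "a"), ("e", "b"), ("e", "c"), ("a", "b")])
print(ego_measures(g, "e"))

# Transformations from Section 4.1, applied over all egos of a country
# (values assumed strictly positive for the log).
dens = np.array([0.4, 0.7, 0.2])
betw = np.array([0.1, 0.5, 0.9])
dens_t = -np.log(dens)                   # -log(density); same form for transitivity
betw_t = -np.log(1 + betw.max() - betw)  # -log(1 + max(x) - x)
```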
4.2 RQ2: How do cultural values change the effect of tie strength on dwell time?
Online social media platforms continually aim to remove obstacles to content creation and consumption; this has allowed a myriad of content to be available for consumption by users on all platforms. With the multitude of content available, attention from one's social network has become a valuable and competitive resource. Here, we analyze how users allocate their attention to social connections with varying degrees of closeness and how this allocation is moderated by culture. We study attention in the context of Stories posted by friends in one's network. We examine whether tie strength predicts one's dwell time on a Story and whether culture moderates the relationship.

4.2.1 Measuring interest. Attention to a poster's Story is a proxy for interest in the information shared by that user. Attention towards a friend who posts Stories (p) is measured by the total time spent viewing their Stories; longer attention (dwell time) on a Story indicates a stronger interest towards that friend. To measure the total time spent on content consumption (TC), we refer to the formulation proposed in prior work on measuring content dwell time [23]:

$TC(v, p) = \sum_{s \in S_{p \to v}} \delta(s)$

where $S_{p \to v}$ denotes the set of Stories posted by p and consumed by v, s denotes (without loss of generality) one such Story sample, and $\delta(s)$ indicates the time spent by v in viewing the Story. This measures the relative difference in the viewer's interest across different posters. However, as pointed out in prior literature, a viewer's total view time on a poster's Stories can be skewed by the frequency of the posting activity of the Story creator, i.e., given an equal likelihood of consuming Stories from different posters, $TC(v, p_1) > TC(v, p_2)$ if $|S_{p_1 \to v}| > |S_{p_2 \to v}|$. Hence, we model dwell time towards a poster p as the average time spent by a viewer on the poster's Stories:

$DT(v, p) = \frac{\sum_{s \in S_{p \to v}} \delta(s)}{|S_{p \to v}|}$

Dwell time is measured in seconds. While Stories vary in duration and can, in turn, influence dwell times, our initial analysis of the viewing time distribution showed that most viewing activities were short and independent of content duration. This finding is in line with prior work on dwell time in closed-network settings [23]; thus, we do not control for this variable.
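As an illustration, TC and DT can be computed from a per-view log with a single grouped aggregation. The sketch below assumes a data layout of one row per Story view with hypothetical column names; it is not the authors' pipeline.

```python
import pandas as pd

# Toy Story-view log: one row per Story view (hypothetical column names).
views = pd.DataFrame({
    "viewer_id": ["v1", "v1", "v1", "v2"],
    "poster_id": ["p1", "p1", "p2", "p1"],
    "view_secs": [4.0, 6.0, 2.5, 3.0],   # delta(s): seconds spent on one Story
})

# TC(v, p): total time v spent on p's Stories; DT(v, p): average time per Story.
agg = (
    views.groupby(["viewer_id", "poster_id"])["view_secs"]
    .agg(TC="sum", DT="mean")
    .reset_index()
)
print(agg)
```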
4.2.2 Measuring social tie strength between two users. Tie strength between two users is a complex concept, subject to user perceptions and emotions; hence, a direct quantitative measure of tie strength between users is challenging. However, measuring the activity of direct conversations between two users on social media platforms has proven to be an effective proxy for estimating tie strength: the higher the number of dyadic message exchanges, the closer the two users are. Some users send burst messages while others send fewer but longer messages; thus, we model tie strength (TS) as the total number of direct Snaps and chats exchanged between a pair of users:

$TS(v, p) = |DC_{p \to v}| + |DC_{v \to p}| + |DS_{p \to v}| + |DS_{v \to p}|$

where $DC_{p \to v}$ denotes the set of direct chats sent by the Story poster to the Story viewer, $DC_{v \to p}$ the set of direct chats sent by the Story viewer to the Story poster, $DS_{p \to v}$ the set of direct Snaps sent by the Story poster to the viewer, and $DS_{v \to p}$ the set of direct Snaps sent by the Story viewer to the poster. Preliminary analysis of tie strength in each country showed variation; hence, we standardize tie strengths within each country and use the standardized version for analysis.
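A minimal sketch of this tie-strength computation and the within-country standardization (interpreted here as z-scoring) follows; the column names are hypothetical.

```python
import pandas as pd

# Toy dyad table with directional message counts (hypothetical column names).
dyads = pd.DataFrame({
    "country":      ["US", "US", "JP", "JP"],
    "chats_p_to_v": [2, 0, 1, 4],
    "chats_v_to_p": [3, 1, 0, 2],
    "snaps_p_to_v": [1, 0, 2, 0],
    "snaps_v_to_p": [0, 1, 1, 3],
})

# TS(v, p): total direct chats and Snaps exchanged in the dyad, both directions.
cols = ["chats_p_to_v", "chats_v_to_p", "snaps_p_to_v", "snaps_v_to_p"]
dyads["TS"] = dyads[cols].sum(axis=1)

# Standardize tie strength within each country, as described in the paper.
dyads["TS_z"] = dyads.groupby("country")["TS"].transform(
    lambda x: (x - x.mean()) / x.std()
)
print(dyads)
```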
4.2.3 Measuring culture of each user. We use the results from Hofstede's Individualism [20], Thomson et al.'s Relational Mobility [46], and Gelfand et al.'s Tightness [15] dimensions, discussed in Section 2, as the measure of cultural values (CV) for the country that an individual belongs to. These measures have been widely used in the literature: Hofstede's work has attracted over 45,000 citations, Thomson et al.'s (more recent) work has already been cited 178 times, and Gelfand et al.'s work has more than 2,000 citations. Since each value system is on a different scale (Appendix D) — Individualism ranges from 6 to 91, Relational Mobility ranges from 3.886 to 4.607, and Tightness ranges from 1.6 to 12.3 — we independently standardize each value system across countries and use the standardized version for analysis.
4.2.4 Mixed-effects model to analyze dwell time as a function of tie strength and cultural values. We used a linear mixed-effects model to address the research question of how cultural values moderate the impact of tie strength on the time spent consuming content (dwell time) in closed-network settings. Since the sets of countries surveyed by Hofstede [20], Thomson et al. [46], and Gelfand et al. [15] do not overlap perfectly, we created three multilevel models to understand how cultural values moderate the effect of tie strength on Story dwell time. The models included terms for tie strength (dyad level), cultural value (country level), and their interaction as fixed effects, with random intercepts for country and viewer, and the number of friends, the GDP, the GINI, and Snap's market penetration (MP) for a country as control variables. We standardized each value system across countries and used the standardized version for analysis. Since we have multiple observations per country and a viewer views multiple posters, we include random effects for the country and the viewer:

$DT(v, p) = TS(v, p) \times CV(v) + |v_f| + GDP + GINI + MP + (1|\text{country}) + (1|\text{viewer})$

where $|v_f|$ refers to the number of friends a viewer has, $TS(v, p)$ is the tie strength between a viewer and a poster, and $CV(v)$ is the cultural value of the viewer, which is the same as the cultural value of the poster.

Since each dyad contains the dwell time of multiple Stories, we model random effects for the dyad. However, users in a dyad can have two roles: sometimes a user is a viewer, and sometimes a poster. A user who is a viewer (v) for a poster p can be a poster (p') for some other node (v'). This directionality complicates modeling. To simplify, we randomly regard one person as the viewer and the other as the poster, disregarding the Stories of that dyad where the viewer posted and the poster viewed. To ensure that the results are robust against role assignment, we bootstrapped the analysis; on each run, for each dyad, viewer and poster roles were randomly assigned before fitting the model. The bootstrapped results are in Appendix B.
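As a rough illustration of this specification, the sketch below fits one such model (the individualism variant) in Python with statsmodels' MixedLM on synthetic data; the paper does not name its software, and in R's lme4 the random part would simply be (1|country) + (1|viewer). Column names and data are made up, and the viewer intercepts are folded in as a variance component within country, which only approximates the crossed random effects described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the dyad-level table (hypothetical column names).
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "country":    rng.choice(["US", "JP", "BR", "DE"], size=n),
    "viewer_id":  rng.integers(0, 60, size=n),
    "ts_z":       rng.normal(size=n),   # standardized tie strength
    "indiv_z":    rng.normal(size=n),   # standardized cultural value
    "n_friends":  rng.normal(size=n),
    "gdp":        rng.normal(size=n),
    "gini":       rng.normal(size=n),
    "market_pen": rng.normal(size=n),
})
df["dwell_time"] = (
    3.7 + 0.09 * df["ts_z"] - 0.01 * df["ts_z"] * df["indiv_z"]
    + rng.normal(scale=2, size=n)
)

# Fixed effects: tie strength, cultural value, their interaction, and controls.
# Random intercept per country; viewer intercepts as a variance component.
model = smf.mixedlm(
    "dwell_time ~ ts_z * indiv_z + n_friends + gdp + gini + market_pen",
    data=df,
    groups="country",
    re_formula="1",
    vc_formula={"viewer": "0 + C(viewer_id)"},
)
print(model.fit().summary())  # ts_z:indiv_z is the moderation term of interest
```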
5 RESULTS

5.1 RQ1: How do friendship networks differ in countries with different cultural values?
We report zero-order Pearson correlations between cultural values and friendship network size in Table 1. We find that countries that rank higher in individualism, mobility, and looseness tend to have a bigger friendship network than collectivistic, less mobile, and tighter countries. This means that people in the higher-ranking countries are connected to more friends on Snapchat, supporting H1a, H2a, and H3a. To check for robustness, we ran the same analyses with GDP, GINI, and Snapchat's market penetration as control variables. The addition of control variables reduced the sample size of countries, but the results corroborate those reported here (Appendix A).

Next, the structural analysis of the ego networks of users from different cultures (Table 3) shows that the egocentrality of user networks on Snapchat varies with cultural values. Akin to Na et al. [28], we find that the individual structural measures, namely density, transitivity, and betweenness, are highly correlated (Table 2), and thus we average the standardized values and report the results for this averaged index of ego-centrality. The results show that mobility and individualism are negatively correlated with egocentricity, and tightness is positively correlated with egocentrality. This means that in countries that rank higher on mobility and individualism, people's friends on Snapchat are more likely to be connected to each other, and in countries that rank higher on tightness, people's friends on Snapchat are less likely to be connected to each other.
Table 1: Pearson correlation between cultural values and friendship network size (*p < 0.05, **p < 0.01, ***p < 0.001)

Cultural Value      | Correlation | Number of countries
Individualism       | 0.68**      | 65
Relational Mobility | 0.31*       | 37
Tightness           | -0.37*      | 30

Table 2: Pearson correlation between network structural measures for data across different cultural values after controlling for GDP, GINI, and market penetration (*p < 0.05, **p < 0.01, ***p < 0.001)

Cultural Value | Betweenness and Transitivity | Betweenness and Density | Density and Transitivity
Individualism  | 0.74***                      | 0.504***                | 0.92***
Mobility       | 0.76***                      | 0.49**                  | 0.85***
Tightness      | 0.82***                      | 0.45*                   | 0.83***

Table 3: Pearson correlation between cultural values and egocentrality (*p < 0.05, **p < 0.01, ***p < 0.001)

Cultural Value      | Averaged index of ego-centrality
Individualism       | -0.07***
Relational Mobility | -0.04***
Tightness           | 0.06***
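To make the "averaged index of ego-centrality" concrete, here is a minimal pandas sketch (not the authors' code) that standardizes the three transformed structural measures across egos and averages them; correlating the resulting country-level means with the standardized cultural scores would yield Table 3-style coefficients. All values and column names are made up, and standardizing across all egos (rather than within country) is an assumption.

```python
import pandas as pd

# Toy per-ego structural measures after the transformations from Section 4.1,
# so higher values of every column point in the "more egocentric" direction.
egos = pd.DataFrame({
    "country":      ["US", "US", "JP", "JP"],
    "betweenness":  [0.8, 0.6, 0.3, 0.4],
    "density":      [0.9, 1.2, 2.0, 1.8],
    "transitivity": [0.7, 1.0, 1.9, 1.6],
})

# Averaged index of ego-centrality: standardize each measure, then average.
measures = ["betweenness", "density", "transitivity"]
z = (egos[measures] - egos[measures].mean()) / egos[measures].std()
egos["ego_centrality"] = z.mean(axis=1)

# Country-level index, ready to correlate with the cultural value scores.
print(egos.groupby("country")["ego_centrality"].mean())
```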
5.2 RQ2: How do cultural values change the effect of tie strength on dwell time?
Given that the friendship network structures are different across cultures, we used multilevel modeling to analyze how cultural values moderate the effect of tie strength on the viewer's dwell time (Tables 4, 5, 6). We see that an increase in the strength of ties increases the dwell time, a result in line with prior work [23, 52]. Having more friends reduces a viewer's dwell time on content, likely because an increase in the number of friends leads to more potential Story content to consume. Though the cultural values do not have a significant main effect, they significantly moderate the effect of tie strength on dwell time across all three cultural values. We find that individualism, mobility, and looseness negatively moderate the effect of tie strength, confirming H1b, H2b, and H3b. The bootstrap results from 100 runs, reported in Appendix B, corroborate the findings presented here.

Our work focuses on understanding (and not predicting) within-dyad-level dwell time from theories of country-level cultural values, which may not fully account for a lot of individual-level variation. However, a significant moderation effect allows us to argue for a substantive effect of cultural values on individual-level behavior [27]. Using only the intersection of countries present across all three measures of culture, we check the robustness of these results (Appendix C), and the results corroborate those reported in Tables 4, 5, and 6. Because the effects we found are on the smaller side, there is still a lot of unexplained variance, and we cannot fully account for all individual-level and item (Story)-level variation.
Table 4: Coefficients from multilevel modeling for the effect of Individualism as a moderator on dwell time (*p < 0.05, **p < 0.01, ***p < 0.001). Sample size: countries = 47, dyads = 460000; RMSE = 4.9, AIC = 2793115, BIC = 279226, R2 conditional = 0.04, R2 marginal = 0.01

Fixed Effects                    | Estimate  | Standard Error
Intercept                        | 3.741***  | 0.078
Strength of Ties                 | 0.092***  | 0.007
Individualism                    | 0.035     | 0.074
Strength of Ties : Individualism | -0.014*** | 0.007
Control variables                |           |
Number of Friends                | -0.338*** | 0.008
GDP                              | -0.036    | 0.068
GINI                             | -0.040    | 0.060
Market Penetration               | 0.065*    | 0.059

Table 5: Coefficients from multilevel modeling for the effect of Mobility as a moderator on dwell time (*p < 0.05, **p < 0.01, ***p < 0.001). Sample size: countries = 26, dyads = 128800; RMSE = 3.12, AIC = 1438399, BIC = 1438504, R2 conditional = 0.27, R2 marginal = 0.01

Fixed Effects                    | Estimate | Standard Error
Intercept                        | 3.835*** | 0.097
Strength of Ties                 | 0.116*** | 0.008
High Mobility                    | 0.092    | 0.071
Strength of Ties : High Mobility | -0.012*  | 0.006
Control variables                |          |
Number of Friends                | -0.35*** | 0.011
GDP                              | -0.051   | 0.108
GINI                             | -0.02    | 0.102
Market Penetration               | 0.108    | 0.091

Table 6: Coefficients from multilevel modeling for the effect of Tightness as a moderator on dwell time (*p < 0.05, **p < 0.01, ***p < 0.001). Sample size: countries = 25, dyads = 100000; RMSE = 2.19, AIC = 731754.3, BIC = 731850.8, R2 conditional = 0.80, R2 marginal = 0.01

Fixed Effects                | Estimate  | Standard Error
Intercept                    | 3.725***  | 0.1
Strength of Ties             | 0.129***  | 0.010
Tightness                    | -0.060    | -0.082
Strength of Ties : Tightness | 0.058***  | 0.010
Control variables            |           |
Number of Friends            | -0.283*** | 0.011
GDP                          | -0.154*   | 0.077
GINI                         | -0.171    | 0.069
Market Penetration           | 0.179*    | 0.077
6 DISCUSSION
Most social media platforms were introduced in the Global North before they started gaining a user base in other countries. As a result, studies on understanding users on social media platforms primarily draw from west-centric populations, which leads to unintended biases. Using data from 10,000 users per country across 73 countries, our work studied how individuals across cultures differ in their behavior on the same platform. We control for confounders like the platform's market penetration and countries' GDP and GINI scores, which may have influenced the platform's user base size and composition. Our main findings are:

Structure of friendship network. The analysis of the egocentrality of the friendship networks showed that individualistic, more mobile, and looser cultures are negatively correlated with egocentrality. This result is unlike the prior survey-based network analysis by Na et al. [28], which found that individualism is positively correlated with egocentrality. Na et al. [28] recruited individuals through a call for survey participants on the Facebook platform, which resulted in a substantially varied number of respondents from each country and thus could be sensitive to selection and conformity bias. In our study, we randomly sampled users and analyzed the metadata of user behavior, which provides a relatively cleaner signal of a user's choices. Apart from a more balanced number of users from different countries, we also analyzed data from a substantially higher number of countries. Apart from data collection and sample size differences, another potential source of the differences in findings could be who is befriended on these platforms.

Adams and Plaut posited that friendship's meaning varies substantially across cultures [1]. Markus and Kitayama [26] argued that familial ties form an important part of a user's social network in collectivist cultures compared to individualistic cultures. With the demographics on Snapchat skewing towards a younger population [9, 10] and motivations differing from Facebook [3, 34, 50], it is plausible that (a) the younger users do not 'friend' familial ties due to differences in how they make sense of 'friendship' and whom they 'friend,' and (b) the elder familial members are absent from the platform. Since family ties form an important part of collectivist cultures, not including them in one's Snapchat friendship network could be the reason for the differences in our findings compared to Na et al. [28]. While our results differ from Na et al. [28], they agree with the findings from Igarashi et al. [21] that users from collectivist cultures had more egocentric networks. Given that very few studies have explored how culture affects network structures, future work in this domain will help establish a stronger understanding of how culture influences the network structures formed on social media platforms.

Our findings bear important implications for future work that aims to study user interaction patterns on a platform. First, studies should elicit and validate the network structure formed for their population of interest, because network structures vary across subpopulations on the same platform and across platforms, and relying on metrics from prior work with a mismatched population might lead to incorrect inferences. Next, the differences in friendship networks bear importance for context-aware friendship recommendation engines, which we discuss under design implications.
903
+ Cultural Values and user behavior. Culture is a complex societal-
904
+ level phenomenon that guides individual behavior. Various studies
905
+ have tried to study culture through a system of ’cultural values.’ In
906
+ this project, we chose three dominant theories in cultural psychol-
907
+ ogy, ranging from Hofstede’s dimensions published in 2001 [20] to
908
+ more recent theories on Tightness and Mobility published in 2011
909
+ and 2018 [15, 46], respectively. Consistent with our hypothesis, we
910
+ found that each cultural value (i.e., individualism, looseness, mobil-
911
+ ity) significantly moderates the effect of tie strength on dwell time,
912
+ highlighting the significance of considering culture in understand-
913
+ ing behavior patterns on social media. In addition, we found that
914
+ individualism, looseness, and mobility moderate the relationship
915
+ between tie strength and dwell time in the same direction. Theo-
916
+ retically, it is logical because in societies where people have more
917
+ freedom to make friends and move between different circles (i.e.,
918
+ high relational mobility), a looser norm (i.e., looseness) is likely to
919
+ develop, and a comparatively more self-focused mindset (i.e., indi-
920
+ vidualism) is likely to rise. Indeed, prior work has also predicted
921
+ that these three variables would have a similar impact on individ-
922
+ ual cognition and behavior [46]. Thus, we extend the prior work in
923
+ cultural psychology by adopting a cultural lens in understanding
924
+ user behaviors on social media.
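For concreteness, the moderation analysis summarized above corresponds to a mixed-effects specification of roughly the following form (a sketch in our own notation, not the authors' exact model; Culture stands for one of individualism, relational mobility, or tightness, and u_c is a random intercept per country):

\[
\mathrm{DwellTime}_{ij} = \beta_0 + \beta_1\,\mathrm{TieStrength}_{ij} + \beta_2\,\mathrm{Culture}_{c(i)} + \beta_3\bigl(\mathrm{TieStrength}_{ij}\times\mathrm{Culture}_{c(i)}\bigr) + \gamma^{\top}\mathrm{Controls}_{i} + u_{c(i)} + \varepsilon_{ij}.
\]

The interaction coefficient beta_3 is what the coefficient tables report as "Strength of Ties : <cultural value>"; its sign indicates whether the cultural value strengthens or weakens the association between tie strength and dwell time.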
925
+ 6.1
926
+ Design Implications
927
+ The diversity of content on platforms has made good recommen-
928
+ dation systems a necessity. While these recommendation systems
929
+ are becoming increasingly personalized, they fail to distinguish the
930
+ varied meanings that different types of social ties have for users
931
+ from different cultures. For example, if we consider the dyadic pair
932
+ of user A, their strongest tie, and user B, their strongest tie, such
933
+ that user A and B belong to different cultures, the influence of the
934
+ respective strongest tie may be different. Our study, through evi-
935
+ dence, argues for treating users and their friendship relations from
936
+ different cultures differently when designing recommendation sys-
937
+ tems. Analyzing users at a cultural level may reduce the complexity
938
+ of recommendation systems and make the recommendation sys-
939
+ tem more culturally sensitive. By doing so, they may be able to
940
+ better rank the content the user is more likely to engage with at
941
+ a reduced cost. For example, our result suggests that when design-
942
+ ing recommendation systems, tie strength should be given greater
943
+ weight for users in less mobile, tighter, and collectivistic countries
944
+ because our results show that tie strength is more strongly corre-
945
+ lated to content dwell time in these countries.
946
+ Friendship recommendation engines that are unaware of ’how’
947
+ and ’why’ network structures differ across cultures run the risk of
948
+ treating friending activities across different cultures as the same,
949
+ resulting in a suboptimal platform experience. For instance, the
950
+ motivations of individuals from tight cultures could differ from
951
+ those from loose cultures, i.e., in contrast to individuals from loose
952
+ cultures, individuals in tight cultures might feel forced to friend
953
+ not only those whom they want to but also those whom they have
954
+ to - say befriending familial ties. A recommendation engine that
955
+ captures behavior from loose cultures might not be able to recom-
956
+ mend users with whom one shares common friends. Similarly, a
957
+ recommendation engine that focuses on tight cultures would ex-
958
+ plore less and over-recommend users with whom one shares com-
959
+ mon friends. Hence, using the behavioral understanding from only
960
+ either of the cultures risks the failure of the algorithms ( and, in
961
+ turn, platform experience) in the other cultures. Thus, while our
962
+ work takes a step in highlighting ’how’ the network structures
963
+ differ, future work that provides insights into ’why’ the network
964
+ structures differ can further enrich the understanding of design-
965
+ ing friendship recommendation algorithms.
966
+ 7
967
+ LIMITATIONS
968
+ Our study is subject to a few important limitations. First, our work
969
+ uses data from Snapchat, which encompasses a significant but lim-
970
+ ited amount of people’s online communications. We could only
971
+ use available data for our study, and some of Snapchat’s user data
972
+ is only available for a limited time. Additionally, the actual con-
973
+ tent of Snapchat communications is not available for analysis. The
974
+ Snapchat user group skews young [10], and studies have found
975
+ that younger people have shifted away from traditional values [29,
976
+ 43]. Second, recommendation algorithms play an important role
977
+
978
+ Cultural Differences in Friendship Network Behaviors: A Snapchat Case Study
979
+ CHI ’23, April 23–28, 2023, Hamburg, Germany
980
+ in network formation on the platform. We did not have access to
981
+ the friend recommendation algorithm for this study, and we could,
982
+ therefore, not control for any potential confounding effects. Get-
983
+ ting an insight into the algorithm and its impact on users across
984
+ geographies could further enrich future work. Further, the focus of
985
+ this study was to understand the friendship network and behavior
986
+ on the online social network, which may differ from an individ-
987
+ ual’s offline friendship networks and their interactions on these
988
+ networks. Next, not every country has been equally surveyed in
989
+ prior research on cultural values. There is a non-perfect overlap be-
990
+ tween countries that have been studied for mobility and countries
991
+ that have been studied for tightness. Once data from more coun-
992
+ tries becomes available, our analyses could be extended to include
993
+ those countries. Future work can further build on ours by analyz-
994
+ ing how content type interacts with cultural values and impacts
995
+ dwell time. By using a large random sample of users across coun-
996
+ tries, country-level measures of economic growth, and inequity,
997
+ we tried to limit selection bias and account for variations across
998
+ countries. GDP and GINI measures help us control for country-
999
+ level socioeconomic status. However, it is plausible that a given
1000
+ stratum of society is overrepresented on the platform, and country-
1001
+ level socioeconomic measures might not fully control for the plat-
1002
+ form user’s socioeconomic status. The lack of finer-grained mea-
1003
+ sures could be a limitation of the study.
1004
+ Human behavior is complex and subject to factors that have
1005
+ individual-level variation. Hence, it is difficult to fully predict hu-
1006
+ man behavior in the social sciences. The focus of our work was to
1007
+ test the theory of the effect of culture, as measured at the country
1008
+ level, on individual behavior. Like prior works, we can not fully
1009
+ account for all individual-level and item (Story) level variation. As
1010
+ brought out in the Introduction 1, individual behavior is affected
1011
+ by a host of other variables, and content engagement is no differ-
1012
+ ent. For example, the Story’s content might be an important fac-
1013
+ tor; however, we could not study this due to Snap Inc.’s policies
1014
+ on not retaining information about the content. Future studies can
1015
+ help make the model more complete by operationalizing the type
1016
+ of content and other variables that might affect the dwell time on
1017
+ content. While the cultural theories used in this study span a large
1018
+ geographic region, the identities of the researchers who created
1019
+ these measures could be a source of bias for these measures. As
1020
+ argued by Shweder [40] (p. 409), these studies can largely benefit
1021
+ from a more emic expansion approach, which would help remove
1022
+ biases from future empirical studies.
1023
+ 8
1024
+ CONCLUSION
1025
+ We examined the friendship network and the dwell time behavior
1026
+ of users across 73 cultures on the online platform Snapchat. We
1027
+ studied one month’s data from 10K users from each culture. First,
1028
+ we found that the friendship networks curated by individuals from
1029
+ different cultures vary in size and egocentricity. We found evidence
1030
+ that individuals from individualistic, high mobility, and loose cul-
1031
+ tures tend to form larger friendship networks. We analyzed how
1032
+ cultural values moderate the relation between tie strength and users’
1033
+ content engagement behavior. We found that individualism, high
1034
+ mobility, and looseness negatively moderate this effect. This pro-
1035
+ vides evidence for psychological theories which posit that relation-
1036
+ ships are not perceived similarly across different cultures, and thus
1037
+ their effect on user behavior is not uniform across cultures. Our
1038
+ work could advance the understanding of engagement with con-
1039
+ tent on online platforms and how using this insight can improve
1040
+ recommendation systems. Incorporating cultural values in the ex-
1041
+ perience design can improve the user experience and does better
1042
+ justice to the diverse backgrounds of platform users.
1043
+ ACKNOWLEDGMENTS
1044
+ We thank Dr. Neil Shah (Snap Inc.) for providing valuable advice
1045
+ on friendship network creation and analysis.
1046
+ REFERENCES
1047
+ [1] Glenn Adams and Victoria C Plaut. 2003. The cultural grounding of personal
1048
+ relationship: Friendship in North American and West African worlds. Personal
1049
+ Relationships 10, 3 (2003), 333–347.
1050
+ [2] Khaled Saleh Al Omoush, Saad Ghaleb Yaseen, and Mohammad Atwah
1051
+ Alma’Aitah. 2012. The impact of Arab cultural values on online social network-
1052
+ ing: The case of Facebook. Computers in Human Behavior 28, 6 (2012), 2387–
1053
+ 2399.
1054
+ [3] Saleem Alhabash and Mengyan Ma. 2017. A tale of four platforms: Motivations
1055
+ and uses of Facebook, Twitter, Instagram, and Snapchat among college students?
1056
+ Social media+ society 3, 1 (2017), 2056305117691544.
1057
+ [4] Dhoha A Alsaleh, Michael T Elliott, Frank Q Fu, and Ramendra Thakur. 2019.
1058
+ Cross-cultural differences in the adoption of social media. Journal of Research
1059
+ in Interactive Marketing (2019).
1060
+ [5] Valerio Arnaboldi, Andrea Guazzini, and Andrea Passarella. 2013. Egocentric
1061
+ online social networks: Analysis of key features and prediction of tie strength
1062
+ in Facebook. Computer Communications 36, 10-11 (2013), 1130–1144.
1063
+ [6] Joseph B Bayer, Nicole B Ellison, Sarita Y Schoenebeck, and Emily B Falk. 2016.
1064
+ Sharing the small moments: ephemeral social interaction on Snapchat. Informa-
1065
+ tion, Communication & Society 19, 7 (2016), 956–977.
1066
+ [7] Agata Błachnio, Aneta Przepiorka, Martina Benvenuti, Davide Cannata, Adela M
1067
+ Ciobanu, Emre Senol-Durak, Mithat Durak, Michail N Giannakos, Elvis Mazzoni,
1068
+ Ilias O Pappas, et al. 2016. Cultural and personality predictors of Facebook in-
1069
+ trusion: a cross-cultural study. Frontiers in Psychology 7 (2016), 1895.
1070
+ [8] Robert M Bond, Christopher J Fariss, Jason J Jones, Adam DI Kramer, Cameron
1071
+ Marlow, Jaime E Settle, and James H Fowler. 2012. A 61-million-person exper-
1072
+ iment in social influence and political mobilization. Nature 489, 7415 (2012),
1073
+ 295–298.
1074
+ [9] Brent Barnhart. 2022. Social media demographics to inform your brand's strategy in 2022. https://sproutsocial.com/insights/new-social-media-demographics/#facebook-demographics, Last accessed on 2022-09-15.
1090
+ [10] Brent Barnhart. 2022. Social media demographics to inform your brand's strategy in 2022. https://sproutsocial.com/insights/new-social-media-demographics/#snapchat-demographics, Last accessed on 2022-09-15.
1106
+ [11] Mehmet Civelek, Krzysztof Gajdka, Jaroslav Světlík, and Vladimír Vavrečka.
1107
+ 2020. Differences in the usage of online marketing and social media tools: evi-
1108
+ dence from Czech, Slovakian and Hungarian SMEs. Equilibrium. Quarterly Jour-
1109
+ nal of Economics and Economic Policy 15, 3 (2020), 537–563.
1110
+ [12] Gabor Csardi and Tamas Nepusz. 2006.
1111
+ The igraph software package for
1112
+ complex network research.
1113
+ InterJournal Complex Systems (2006), 1695.
1114
+ https://igraph.org
1115
+ [13] José Manuel Errasti Pérez, Isaac Amigo Vázquez, José Manuel Villadangos Fer-
1116
+ nández, Joaquín Morís Fernández, et al. 2018. Differences between individualist
1117
+ and collectivist cultures in emotional Facebook usage: Relationship with empa-
1118
+ thy, self-esteem, and narcissism. Psicothema (2018).
1119
+ [14] Silvia Elena Gallagher and Timothy Savage. 2013. Cross-cultural analysis in
1120
+ online community research: A literature review. Computers in Human Behavior
1121
+ 29, 3 (2013), 1028–1038.
1122
+ [15] Michele J Gelfand, Jana L Raver, Lisa Nishii, Lisa M Leslie, Janetta Lun,
1123
+ Beng Chong Lim, Lili Duan, Assaf Almaliach, Soon Ang, Jakobina Arnadottir,
1124
+ et al. 2011. Differences between tight and loose cultures: A 33-nation study. sci-
1125
+ ence 332, 6033 (2011), 1100–1104.
1126
+ [16] Robin Goodwin. 2013. Personal relationships across cultures. Routledge.
1127
+ [17] Mark S Granovetter. 1973. The strength of weak ties. American journal of soci-
1128
+ ology 78, 6 (1973), 1360–1380.
1129
+
1130
+ CHI ’23, April 23–28, 2023, Hamburg, Germany
1131
+ Seth, et al.
1132
+ [18] Manjul Gupta, Irem Uz, Pouyan Esmaeilzadeh, Fabrizio Noboa, Abeer A
1133
+ Mahrous, Eojina Kim, Graca Miranda, Vanesa M Tennant, Sean Chung, Akbar
1134
+ Azam, et al. 2018. Do cultural norms affect social network behavior inappropri-
1135
+ ateness? A global study. Journal of Business Research 85 (2018), 10–22.
1136
+ [19] Arnon Hershkovitz and Zack Hayat. 2020. The role of tie strength in assessing
1137
+ credibility of scientific content on Facebook. Technology in Society 61 (2020),
1138
+ 101261.
1139
+ [20] Geert Hofstede. 2001. Culture’s consequences: Comparing values, behaviors, insti-
1140
+ tutions and organizations across nations. Sage publications.
1141
+ [21] Tasuku Igarashi, Yoshihisa Kashima, Emiko S Kashima, Tomas Farsides, Uichol
1142
+ Kim, Fritz Strack, Lioba Werth, and Masaki Yuki. 2008. Culture, trust, and social
1143
+ networks. Asian Journal of Social Psychology 11, 1 (2008), 88–101.
1144
+ [22] International Monetary Fund. 2022. Global economy on firmer ground, but with divergent recoveries amid high uncertainty. https://www.imf.org/en/Publications/WEO/Issues/2021/03/23/world-economic-outlook-april-2021, Last accessed on 2022-09-12.
1162
+ [23] Parisa Kaghazgaran, Maarten Bos, Leonardo Neves, and Neil Shah. 2020. Social
1163
+ factors in closed-network content consumption. In Proceedings of the 29th ACM
1164
+ International Conference on Information & Knowledge Management. 595–604.
1165
+ [24] Nicole C Krämer, Vera Sauer, and Nicole Ellison. 2021. The strength of weak ties
1166
+ revisited: further evidence of the role of strong ties in the provision of online
1167
+ social support. Social Media+ Society 7, 2 (2021), 20563051211024958.
1168
+ [25] Piper Liping Liu and Tien Ee Dominic Yeo. 2022. Weak ties matter: Social net-
1169
+ work dynamics of mobile media multiplexity and their impact on the social sup-
1170
+ port and psychological well-being experienced by migrant workers. Mobile Me-
1171
+ dia & Communication 10, 1 (2022), 76–96.
1172
+ [26] Hazel R Markus and Shinobu Kitayama. 1991. Culture and the self: Implications
1173
+ for cognition, emotion, and motivation. Psychological review 98, 2 (1991), 224.
1174
+ [27] Ferenc Moksony and Rita Heged. 1990. Small is beautiful. The use and interpre-
1175
+ tation of R2 in social research. Szociológiai Szemle, Special issue (1990), 130–138.
1176
+ [28] Jinkyung Na, Michal Kosinski, and David J Stillwell. 2015. When a new tool is
1177
+ introduced in different cultural contexts: Individualism–collectivism and social
1178
+ network on Facebook. Journal of Cross-Cultural Psychology 46, 3 (2015), 355–
1179
+ 370.
1180
+ [29] Thuan Si Nguyen. 2015. Using Geert Hofstede’s cultural dimensions to describe
1181
+ and to analyze cultural differences between first generation and second generation
1182
+ Vietnamese in the Vietnamese Church in America. Nyack College, Alliance Theo-
1183
+ logical Seminary.
1184
+ [30] Michael Obal and Werner Kunz. 2016. Cross-cultural differences in uses of online
1185
+ experts. Journal of Business Research 69, 3 (2016), 1148–1156.
1186
+ [31] Katrina Panovich, Rob Miller, and David Karger. 2012. Tie strength in question
1187
+ & answer on social network sites. In Proceedings of the ACM 2012 conference on
1188
+ computer supported cooperative work. 1057–1066.
1189
+ [32] Sachin R Pendse, Kate Niederhoffer, and Amit Sharma. 2019. Cross-Cultural
1190
+ Differences in the Use of Online Mental Health Support Forums. Proceedings of
1191
+ the ACM on Human-Computer Interaction 3, CSCW (2019), 1–29.
1192
+ [33] PEW RESEARCH CENTER. 2006. The Strength of Internet Ties. https://www.pewresearch.org/internet/2006/01/25/the-strength-of-internet-ties/, Last accessed on 2022-09-12.
1203
+ [34] Joe Phua, Seunga Venus Jin, and Jihoon Jay Kim. 2017. Uses and gratifications
1204
+ of social networking sites for bridging and bonding social capital: A comparison
1205
+ of Facebook, Twitter, Instagram, and Snapchat. Computers in human behavior
1206
+ 72 (2017), 115–122.
1207
+ [35] Lukasz Piwek and Adam Joinson. 2016. “What do they snapchat about?” Patterns
1208
+ of use in time-limited instant messaging service. Computers in human behavior
1209
+ 54 (2016), 358–367.
1210
+ [36] Annu Sible Prabhakar, Elena Maris, and Indrani Medhi Thies. 2021. Toward Un-
1211
+ derstanding the Cultural Influences on Social Media Use of Middle Class Mothers
1212
+ in India. In Extended Abstracts of the 2021 CHI Conference on Human Factors in
1213
+ Computing Systems. 1–7.
1214
+ [37] Catherine Raeff, Patricia Marks Greenfield, and Blanca Quiroz. 2000. Conceptu-
1215
+ alizing interpersonal relationships in the cultural contexts of individualism and
1216
+ collectivism. New directions for child and adolescent development 2000, 87 (2000),
1217
+ 59–74.
1218
+ [38] Ross Schuchard, Andrew Crooks, Anthony Stefanidis, and Arie Croitoru. 2018.
1219
+ Bots in nets: empirical comparative analysis of bot evidence in social networks.
1220
+ In International Conference on Complex Networks and their Applications. Springer,
1221
+ 424–436.
1222
+ [39] Pavica Sheldon, Erna Herzfeldt, and Philipp A Rauschnabel. 2020. Culture and
1223
+ social media: the relationship between cultural values and hashtagging styles.
1224
+ Behaviour & Information Technology 39, 7 (2020), 758–770.
1225
+ [40] Richard A Shweder, Jonathan Haidt, Randall Horton, and Craig Joseph. 1993. The
1226
+ cultural psychology of the emotions. Handbook of emotions (1993), 417–431.
1227
+ [41] Statista. 2022. Number of daily active Snapchat users from 1st quarter 2014 to 2nd quarter 2022. https://www.statista.com/statistics/545967/snapchat-app-dau/, Last accessed on 2022-09-15.
1245
+ [42] Statista. 2022. Number of social media users worldwide from 2018 to 2022, with forecasts from 2023 to 2027. https://www.statista.com/statistics/278414/number-of-worldwide-social-network-users/, Last accessed on 2022-09-12.
1266
+ [43] Jiaming Sun and Xun Wang. 2010. Value differences between generations in
1267
+ China: A study in Shanghai. Journal of Youth Studies 13, 1 (2010), 65–81.
1268
+ [44] Thomas Talhelm, Xiao Zhang, Shige Oishi, Chen Shimin, Dechao Duan, Xiaoli
1269
+ Lan, and Shinobu Kitayama. 2014. Large-scale psychological differences within
1270
+ China explained by rice versus wheat agriculture. Science 344, 6184 (2014), 603–
1271
+ 608.
1272
+ [45] The World Bank. 2022. Gini Index. https://data.worldbank.org/indicator/SI.POV.GINI,
1273
+ Last accessed on 2022-09-12.
1274
+ [46] Robert Thomson, Masaki Yuki, Thomas Talhelm, Joanna Schug, Mie Kito, Arin H
1275
+ Ayanian, Julia C Becker, Maja Becker, Chi-yue Chiu, Hoon-Seok Choi, et al. 2018.
1276
+ Relational mobility predicts social behaviors in 39 countries and is tied to his-
1277
+ torical farming and threat. Proceedings of the National Academy of Sciences 115,
1278
+ 29 (2018), 7521–7526.
1279
+ [47] William B Gudykunst, Stella Ting-Toomey, and Tsukasa Nishida. 1996. Commu-
1280
+ nication in personal relationships across cultures. Sage.
1281
+ [48] Joshua Uyheng and Kathleen M Carley. 2020. Bots and online hate during the
1282
+ COVID-19 pandemic: case studies in the United States and the Philippines. Jour-
1283
+ nal of computational social science 3, 2 (2020), 445–468.
1284
+ [49] Brian Uzzi. 1999. Embeddedness in the making of financial capital: How social
1285
+ relations and networks benefit firms seeking financing. American sociological
1286
+ review (1999), 481–505.
1287
+ [50] J Mitchell Vaterlaus, Kathryn Barnett, Cesia Roche, and Jimmy A Young. 2016.
1288
+ “Snapchat is more personal”: An exploratory study on Snapchat behaviors and
1289
+ young adult interpersonal relationships.
1290
+ Computers in Human Behavior 62
1291
+ (2016), 594–601.
1292
+ [51] Alan Warde and Gindo Tampubolon. 2002. Social capital, networks and leisure
1293
+ consumption. The Sociological Review 50, 2 (2002), 155–180.
1294
+ [52] Lilian Weng, Márton Karsai, Nicola Perra, Filippo Menczer, and Alessandro Flam-
1295
+ mini. 2018. Attention on weak ties in social and communication networks. In
1296
+ Complex spreading phenomena in social systems. Springer, 213–228.
1297
+ [53] Naomi Whiteside, Torgeir Aleti, Jason Pallant, John Zeleznikow, et al. 2018. Help-
1298
+ ful or harmful? Exploring the impact of social media usage on intimate relation-
1299
+ ships. Australasian Journal of Information Systems 22 (2018).
1300
+ [54] Rebecca P. Yu, Ryan J. McCammon, Nicole B. Ellison, and Ken-
1301
+ neth M. Langa. 2016. The relationships that matter: social network site use
1302
+ and social wellbeing among older adults in the United States of America. Ageing
1303
+ and Society 36, 9 (2016), 1826–1852. https://doi.org/10.1017/S0144686X15000677
1304
+ [55] Masaki Yuki and Joanna Schug. 2020. Psychological consequences of relational
1305
+ mobility. Current opinion in psychology 32 (2020), 129–132.
1306
+ A
1307
+ CULTURAL VALUE AND FRIEND
1308
+ NETWORK SIZE WITH CONTROL
1309
+ VARIABLES
1310
+ Table 7: Pearson correlation between cultural values and friendship network size with GDP, GINI, and Market Penetration as control variables (*p < 0.05, **p < 0.01, ***p < 0.001)
+ Cultural Value  Correlation  Number of countries
+ Individualism  0.6**  47
+ Relational Mobility  0.27  26
+ Tightness  -0.51*  24
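The controlled correlations in Table 7 are country-level correlations computed while holding the listed controls fixed. A minimal sketch of how such a value can be computed by residualizing both variables on the controls (ours, not the authors' code; the column names are hypothetical and the data frame is assumed to hold one row per country) is:

import numpy as np
import pandas as pd

def partial_corr(df, x, y, controls):
    """Pearson correlation of x and y after regressing both on the controls (OLS residuals)."""
    Z = np.column_stack([np.ones(len(df)), df[controls].to_numpy(dtype=float)])
    def residuals(col):
        v = df[col].to_numpy(dtype=float)
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ beta
    return float(np.corrcoef(residuals(x), residuals(y))[0, 1])

# Hypothetical usage, one row per country:
# df = pd.read_csv("country_level.csv")
# r = partial_corr(df, "individualism", "mean_friend_network_size",
#                  controls=["gdp", "gini", "market_penetration"])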
1326
+
1327
+ Cultural Differences in Friendship Network Behaviors: A Snapchat Case Study
1328
+ CHI ’23, April 23–28, 2023, Hamburg, Germany
1329
+ B
1330
+ BOOTSTRAPPED RESULTS FOR MIXED
1331
+ EFFECTS MODEL (ACROSS 100 RUNS)
1332
+ Table 8: Bootstrapped Coefficients From Multilevel Modeling for the effect of Individualism as a moderator on Dwell Time
+ Fixed Effects  Estimate  CI (95%)
+ Intercept  3.740  [3.718, 3.762]
+ Strength of Ties  0.103  [0.100, 0.106]
+ Individualism  0.033  [0.028, 0.038]
+ Strength of Ties : Individualism  -0.008  [-0.011, -0.005]
+ Control variables
+ Number of Friends  -0.329  [-0.341, -0.317]
+ GDP  -0.031  [-0.038, -0.024]
+ GINI  -0.040  [-0.042, -0.039]
+ Market Penetration  0.56  [0.038, 0.074]
1363
+ Table 9: Bootstrapped Coefficients From Multilevel Modeling for the effect of Mobility as a moderator on Dwell Time
+ Fixed Effects  Estimate  CI (95%)
+ Intercept  3.820  [3.801, 3.839]
+ Strength of Ties  0.114  [0.111, 0.117]
+ High Mobility  0.092  [0.089, 0.095]
+ Strength of Ties : High Mobility  -0.010  [-0.014, -0.007]
+ Control variables
+ Number of Friends  -0.347  [-0.356, -0.338]
+ GDP  -0.058  [-0.062, 0.054]
+ GINI  -0.020  [-0.020, -0.015]
+ Market Penetration  0.109  [0.104, 0.114]
1393
+ Table 10: Bootstrapped Coefficients From Multilevel Modeling for the effect of Tightness as a moderator on Dwell Time
+ Fixed Effects  Estimate  CI (95%)
+ Intercept  3.740  [3.737, 3.743]
+ Strength of Ties  0.116  [0.111, 0.121]
+ Tightness  -0.061  [-0.061, -0.060]
+ Strength of Ties : Tightness  0.008  [0.006, 0.012]
+ Control variables
+ Number of Friends  -0.291  [-0.295, -0.285]
+ GDP  -0.156  [-0.161, -0.150]
+ GINI  -0.17  [-0.171, -0.162]
+ Market Penetration  0.170  [0.169, 0.170]
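The percentile intervals in Tables 8-10 can be reproduced in spirit by resampling dyads and refitting the model; a minimal sketch (ours; fit_model is a stand-in for whatever estimation routine returns the named coefficients, not a routine from the paper) is:

import numpy as np
import pandas as pd

def bootstrap_coefficients(df, fit_model, n_runs=100, seed=0):
    """Resample rows with replacement, refit, and return 95% percentile intervals per coefficient."""
    rng = np.random.default_rng(seed)
    runs = []
    for _ in range(n_runs):
        sample = df.sample(n=len(df), replace=True, random_state=int(rng.integers(1 << 31)))
        runs.append(fit_model(sample))          # dict: coefficient name -> point estimate
    runs = pd.DataFrame(runs)
    return runs.quantile([0.025, 0.975]).T      # rows: coefficients, columns: [2.5%, 97.5%]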
1423
+ C
1424
+ MIXED EFFECTS MODEL FOR THE
1425
+ INTERSECTION OF COUNTRIES PRESENT
1426
+ ACROSS ALL THREE MEASURES (FOR 1
1427
+ RUN)
1428
+ Table 11: Coefficients from Multilevel Modeling for the effect of Individualism as a moderator on Dwell Time (*p < 0.05, **p < 0.01, ***p < 0.001), Sample size: country = 18, dyads = 82800, RMSE = 2.947, AIC = 45373.1, BIC = 453824.6, R2 conditional = 0.27, R2 marginal = 0.01
+ Fixed Effects  Estimate  Standard Error
+ Intercept  3.699***  0.126
+ Strength of Ties  0.147***  0.017
+ Individualism  0.085  0.116
+ Strength of Ties : Individualism  -0.042***  0.011
+ Control variables
+ Number of Friends  -0.322***  0.018
+ GDP  -0.168*  0.074
+ GINI  -0.171  0.063
+ Market Penetration  0.238  0.128
1461
+ Table 12: Coefficients From Multilevel Modeling for the effect of Mobility as a moderator on Dwell Time (*p < 0.05, **p < 0.01, ***p < 0.001), Sample size: country = 18, dyads = 82800, RMSE = 3.07, AIC = 476936.1, BIC = 477029.3, R2 conditional = 0.09, R2 marginal = 0.01
+ Fixed Effects  Estimate  Standard Error
+ Intercept  3.969***  0.123
+ Strength of Ties  0.126***  0.014
+ High Mobility  0.008  0.083
+ Strength of Ties : High Mobility  -0.037***  0.009
+ Control variables
+ Number of Friends  -0.30***  0.017
+ GDP  -0.152  0.077
+ GINI  -0.141  0.060
+ Market Penetration  0.099***  0.010
1494
+
1495
+ CHI ’23, April 23–28, 2023, Hamburg, Germany
1496
+ Seth, et al.
1497
+ Table 13: Coefficients From Multilevel Modeling for the effect of Tightness as a moderator on Dwell Time (*p < 0.05, **p < 0.01, ***p < 0.001), Sample size: country = 18, dyads = 82800, RMSE = 2.36, AIC = 482736.7, BIC = 482830, R2 conditional = 0.48, R2 marginal = 0.01
+ Fixed Effects  Estimate  Standard Error
+ Intercept  3.767***  0.135
+ Strength of Ties  0.164***  0.014
+ Tightness  -0.021  0.084
+ Strength of Ties : Tightness  0.094***  0.012
+ Control variables
+ Number of Friends  -0.383***  0.024
+ GDP  -0.183  0.081
+ GINI  -0.180  0.070
+ Market Penetration  0.191  0.105
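Models of the kind reported in Tables 11-13 (fixed effects plus a per-country random intercept) can be fit with standard mixed-model software. The sketch below uses statsmodels on synthetic stand-in data, since the dyad-level schema and the dwell-time transformation are not public; the column names and coefficients in the data generation are our assumptions, used only so the example runs.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "country": rng.integers(0, 18, n),            # 18 countries, as in Tables 11-13
    "tie_strength": rng.normal(size=n),
    "individualism": rng.normal(size=n),
    "num_friends": rng.normal(size=n),
    "gdp": rng.normal(size=n),
    "gini": rng.normal(size=n),
    "market_penetration": rng.normal(size=n),
})
df["dwell_time"] = (3.7 + 0.15 * df.tie_strength
                    - 0.04 * df.tie_strength * df.individualism
                    - 0.3 * df.num_friends
                    + rng.normal(scale=1.0, size=n))

# Random intercept per country; the interaction term is the moderation effect of interest.
model = smf.mixedlm(
    "dwell_time ~ tie_strength * individualism + num_friends + gdp + gini + market_penetration",
    data=df, groups=df["country"])
print(model.fit().summary())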
1530
+
1531
+ Cultural Differences in Friendship Network Behaviors: A Snapchat Case Study
1532
+ CHI ’23, April 23–28, 2023, Hamburg, Germany
1533
+ D
1534
+ LIST OF COUNTRIES ANALYZED
1535
+ Country
1536
+ Individualism
1537
+ Mobility
1538
+ Tightness
1539
+ Argentina
1540
+ 46
1541
+ ×
1542
+ ×
1543
+ Australia
1544
+ 90
1545
+ 4.308
1546
+ 4.4
1547
+ Austria
1548
+ 55
1549
+ ×
1550
+ 6.8
1551
+ Belgium
1552
+ 75
1553
+ ×
1554
+ 5.6
1555
+ Brazil
1556
+ 38
1557
+ 4.419
1558
+ 3.5
1559
+ Canada
1560
+ ×
1561
+ 4.404
1562
+ ×
1563
+ Chile
1564
+ 23
1565
+ 4.3
1566
+ ×
1567
+ China
1568
+ excluded from analysis since Snapchat is banned
1569
+ Colombia
1570
+ 13
1571
+ 4.483
1572
+ ×
1573
+ Costa Rica
1574
+ 15
1575
+ ×
1576
+ ×
1577
+ Czech Republic
1578
+ 58
1579
+ ×
1580
+ ×
1581
+ Denmark
1582
+ 74
1583
+ ×
1584
+ ×
1585
+ Ecuador
1586
+ 8
1587
+ ×
1588
+ ×
1589
+ Egypt
1590
+ 38
1591
+ 3.971
1592
+ ×
1593
+ El Salvador
1594
+ 19
1595
+ ×
1596
+ ×
1597
+ Estonia
1598
+ ×
1599
+ 4.233
1600
+ 2.6
1601
+ Ethiopia
1602
+ 27
1603
+ ×
1604
+ ×
1605
+ Finland
1606
+ 63
1607
+ ×
1608
+ ×
1609
+ France
1610
+ 71
1611
+ 4.451
1612
+ 6.3
1613
+ Germany
1614
+ 67
1615
+ 4.194
1616
+ 7
1617
+ Ghana
1618
+ 20
1619
+ ×
1620
+ ×
1621
+ Greece
1622
+ 35
1623
+ ×
1624
+ 3.9
1625
+ Guatemala
1626
+ 6
1627
+ ×
1628
+ ×
1629
+ Hong Kong
1630
+ 25
1631
+ 4.043
1632
+ 6.3
1633
+ Hungary
1634
+ 55
1635
+ 3.893
1636
+ 2.9
1637
+ Iceland
1638
+ ×
1639
+ ×
1640
+ 6.4
1641
+ India
1642
+ 48
1643
+ ×
1644
+ 11
1645
+ Indonesia
1646
+ 14
1647
+ ×
1648
+ ×
1649
+ Iran
1650
+ excluded from analysis since Snapchat is banned
1651
+ Iraq
1652
+ 38
1653
+ ×
1654
+ ×
1655
+ Ireland
1656
+ 70
1657
+ ×
1658
+ ×
1659
+ Israel
1660
+ 54
1661
+ 4.336
1662
+ 3.1
1663
+ Italy
1664
+ 76
1665
+ ×
1666
+ 6.8
1667
+ Jamaica
1668
+ 39
1669
+ ×
1670
+ ×
1671
+ Japan
1672
+ 46
1673
+ 3.934
1674
+ 8.6
1675
+ Jordan
1676
+ ×
1677
+ 3.96
1678
+ ×
1679
+ Kenya
1680
+ 27
1681
+ ×
1682
+ ×
1683
+ Kuwait
1684
+ 38
1685
+ ×
1686
+ ×
1687
+ Lebanon
1688
+ 38
1689
+ 4.079
1690
+ ×
1691
+ Libya
1692
+ 38
1693
+ 4.015
1694
+ ×
1695
+ Malaysia
1696
+ 26
1697
+ 3.886
1698
+ 11.8
1699
+ Mauritius
1700
+ ×
1701
+ 4.385
1702
+ ×
1703
+ Mexico
1704
+ 30
1705
+ 4.607
1706
+ 7.2
1707
+ Morocco
1708
+ ×
1709
+ 4.062
1710
+ ×
1711
+ Netherlands
1712
+ 80
1713
+ 4.448
1714
+ 3.3
1715
+ New Zealand
1716
+ 79
1717
+ 4.287
1718
+ 3.9
1719
+ Nigeria
1720
+ 20
1721
+ ×
1722
+ ×
1723
+ Norway
1724
+ 69
1725
+ ×
1726
+ 9.5
1727
+ Pakistan
1728
+ 14
1729
+ ×
1730
+ 12.3
1731
+ Panama
1732
+ 11
1733
+ ×
1734
+ ×
1735
+ Peru
1736
+ 16
1737
+ ×
1738
+ ×
1739
+ Philippines
1740
+ 32
1741
+ 4.158
1742
+ ×
1743
+ Poland
1744
+ 60
1745
+ 4.415
1746
+ 6.0
1747
+
1748
+ CHI ’23, April 23–28, 2023, Hamburg, Germany
1749
+ Seth, et al.
1750
+ Portugal
1751
+ 27
1752
+ 4.236
1753
+ 7.8
1754
+ Puerto Rico
1755
+ ×
1756
+ 4.603
1757
+ ×
1758
+ Saudi Arabia
1759
+ 38
1760
+ ×
1761
+ ×
1762
+ Sierra Leone
1763
+ 20
1764
+ ×
1765
+ ×
1766
+ Singapore
1767
+ 20
1768
+ 4.133
1769
+ 10.4
1770
+ South Africa
1771
+ 65
1772
+ ×
1773
+ ×
1774
+ South Korea
1775
+ 18
1776
+ 4.089
1777
+ 10.0
1778
+ Spain
1779
+ 51
1780
+ 4.415
1781
+ 5.4
1782
+ Sweden
1783
+ 71
1784
+ 4.364
1785
+ ×
1786
+ Switzerland
1787
+ 68
1788
+ ×
1789
+ ×
1790
+ Taiwan
1791
+ 17
1792
+ 4.118
1793
+ ×
1794
+ Tanzania
1795
+ 27
1796
+ ×
1797
+ ×
1798
+ Thailand
1799
+ 20
1800
+ ×
1801
+ ×
1802
+ Trinidad and Tobago
1803
+ ×
1804
+ 4.421
1805
+ ×
1806
+ Tunisia
1807
+ ×
1808
+ 3.954
1809
+ ×
1810
+ Turkey
1811
+ 37
1812
+ 4.122
1813
+ 9.2
1814
+ Ukraine
1815
+ excluded from analysis due to geo-political instability
1816
+ United Arab Emirates
1817
+ 38
1818
+ ×
1819
+ ×
1820
+ United Kingdom
1821
+ 89
1822
+ 4.315
1823
+ 6.9
1824
+ United States
1825
+ 91
1826
+ 4.382
1827
+ 5.1
1828
+ Uruguay
1829
+ 36
1830
+ ×
1831
+ ×
1832
+ Venezuela
1833
+ 12
1834
+ 4.508
1835
+ 3.7
1836
+ Zambia
1837
+ 27
1838
+ ×
1839
+ ×
1840
+ Table 14: List of Countries and the cultural values that they were surveyed for; × signifies country not surveyed for that cultural
1841
+ value
1842
+
-NFST4oBgHgl3EQfcDge/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
-dAyT4oBgHgl3EQfRPaF/content/2301.00062v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2030679606ffe9d886cd2b2eb6b9324d5baf96af03b42d9be25d232d0165ccdb
3
+ size 796957
-dAyT4oBgHgl3EQfRPaF/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:06740851568214dd70fa9fe375d76c1abdffdd776de84af861e291cb2502a0a1
3
+ size 83618
-dAzT4oBgHgl3EQfg_zf/content/tmp_files/2301.01479v1.pdf.txt ADDED
@@ -0,0 +1,1197 @@
 
 
1
+ arXiv:2301.01479v1 [math.OC] 4 Jan 2023
2
+ Generalizations of R0 and SSM properties; Extended Horizontal Linear
3
+ Complementarity Problem
4
+ Punit Kumar Yadav
5
+ Department of Mathematics
6
+ Malaviya National Institute of Technology, Jaipur, 302017, India
7
+ E-mail address: punitjrf@gmail.com
8
+ K. Palpandi
9
+ Department of Mathematics
10
+ Malaviya National Institute of Technology, Jaipur, 302017, India
11
+ E-mail address: kpalpandi.maths@mnit.ac.in
12
+ Abstract
13
+ In this paper, we first introduce R0-W and SSM-W property for the set of matrices which
14
+ is a generalization of R0 and the strictly semimonotone matrix. We then prove some existence
15
+ results for the extended horizontal linear complementarity problem when the involved matrices
16
+ have these properties. With an additional condition on the set of matrices, we prove that the
17
+ SSM-W property is equivalent to the unique solution for the corresponding extended horizontal
18
+ linear complementarity problems. Finally, we give a necessary and sufficient condition for the
19
+ connectedness of the solution set of the extended horizontal linear complementarity problems.
20
+ 1
21
+ Introduction
22
+ The standard linear complementarity problem (for short LCP), LCP(C, q), is to find vectors x, y
23
+ such that
24
+ x ∈ Rn, y = Cx + q ∈ Rn and x ∧ y = 0,
25
+ (1)
26
+ where C ∈ Rn×n, q ∈ Rn and '∧' denotes the componentwise minimum. The LCP has numerous applications in
27
+ domains, such as optimization, economics, and game theory.
28
+ Cottle and Pang’s monograph [1]
29
+ is the primary reference for standard LCP. Various generalisations of the linear complementarity
30
+ problem have been developed and discussed in the literature during the past three decades (see,
31
+ [7, 10, 11, 13, 14, 16]). The extended horizontal linear complementarity problem is one of the most
32
+ important extensions of LCP, which various authors have studied; see [4, 6, 7] and references therein.
33
+ For a given ordered set of matrices C := {C0, C1, ..., Ck} ⊆ Rn×n, vector q ∈ Rn and ordered set of
34
+ positive vectors d := {d1, d2, ..., dk} ⊆ Rn, the extended horizontal linear complementarity problem
35
+ (for short EHLCP), denoted by EHLCP(C, d, q), is to find vectors x0, x1, ..., xk ∈ Rn such that
+ C0x0 = q + Σ_{i=1}^{k} Cixi,  x0 ∧ x1 = 0 and (dj − xj) ∧ xj+1 = 0, 1 ≤ j ≤ k − 1.    (2)
43
+ If k = 1, then EHLCP becomes the horizontal linear complementarity problem (for short HLCP),
44
+ that is,
45
+ C0x0 − C1x1 = q and x0 ∧ x1 = 0.
46
+ Further, HLCP reduces to the standard LCP by taking C0 = I. Due to its widespread applications in
47
+ numerous domains, the horizontal linear complementarity problem has received substantial research
48
+ attention from many academics; see [13, 14, 16, 18] and references therein.
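As a concrete illustration of these complementarity conditions, the following sketch (ours, in Python/NumPy; not part of the paper) checks whether a candidate tuple (x0, x1, ..., xk) satisfies EHLCP(C, d, q) up to a numerical tolerance:

import numpy as np

def is_ehlcp_solution(C, d, q, x, tol=1e-9):
    """C = [C0, ..., Ck], d = [d1, ..., d_{k-1}], x = [x0, ..., xk].
    Checks C0 x0 = q + sum_i Ci xi, x0 ^ x1 = 0 and (d_j - x_j) ^ x_{j+1} = 0,
    where ^ is the componentwise minimum."""
    C0, Cs = C[0], C[1:]
    x0, xs = x[0], x[1:]
    residual = C0 @ x0 - q - sum(Ci @ xi for Ci, xi in zip(Cs, xs))
    comps = [np.minimum(x0, xs[0])]                                      # x0 ^ x1
    comps += [np.minimum(d[j] - xs[j], xs[j + 1]) for j in range(len(xs) - 1)]
    return np.linalg.norm(residual) <= tol and all(np.abs(c).max() <= tol for c in comps)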
49
+ Various writers have presented new classes of matrices for analysing the structure of LCP solution
50
+ sets in recent years; see for example, [1, 2, 4]. The classes of R0, P0, P, and strictly semimonotone
51
+ 1
52
+
53
+ (SSM) matrices play a crucial role in the existence and uniqueness of the solution to LCP. For
54
+ instance, P matrix (if [x ∈ Rn, x ∗ Ax ≤ 0 =⇒ x = 0]) gives a necessary and sufficient condition
55
+ for the uniqueness of the solution for the LCP (see, Theorem 3.3.7 in [1]). To get a similar type of
56
+ existence and uniqueness results for the generalized LCPs, the notion of P matrix was extended for
57
+ the set of matrices as the column W-property by Gowda et al. [4]. They proved that column W-
58
+ property gives the solvability and the uniqueness for the extended horizontal linear complementarity
59
+ problem (EHLCP). Also, they have generalized the concept of the P0-matrix as the column W0-
60
+ property.
61
+ Another class of matrix, the so-called SSM matrix, has importance in LCP theory. This class of
62
+ matrices provides a unique solution to LCP on Rn
63
+ + and also gives the existence of the solution for the
64
+ LCP (see, [1]). For a Z matrix (if all the off-diagonal entries of a matrix are non-positive), P matrix
65
+ is equivalent to the SSM matrix (see, Theorem 3.11.10 in [1]). A natural question arises whether
66
+ the SSM matrix can be generalized for the set of matrices in the view of EHLCP and whether we
67
+ have a similar equivalence relation for the set of Z matrices. In this paper, we would like to answer
68
+ this question.
69
+ The connectedness of the solution set of LCP has a prominent role in the study of the LCP. We
70
+ say a matrix is connected if the solution set of the corresponding LCP is connected. In [19], Jones
71
+ and Gowda addressed the connectedness of the solution set of the LCP. They proved that the matrix
72
+ is connected whenever the given matrix is a P0 matrix and the solution set has a bounded connected
73
+ component. Also, they have shown that if the solution set of LCP is connected, then there is at most
74
+ one solution of LCP for all q > 0. Due to the specially structured matrices involved in the study of
75
+ the connectedness of the solution to LCP, various authors studied the connectedness of LCP, see for
76
+ example [19, 20, 21]. The main objectives of this paper are to answer the following questions:
77
+ (Q1) In LCP theory, it is a well-known result that the R0 matrix gives boundedness to the LCP
78
+ solution set. The same holds true for HLCP [17]. This motivates the question of whether or
79
+ not the notion of R0 matrix can be generalized to the set of matrices. If so, then can we expect
80
+ the same kind of outcome in the EHLCP?
81
+ (Q2) Given that a strictly semimonotone matrix guarantees the existence of the LCP solution and
82
+ its uniqueness for q ≥ 0, it is natural to wonder whether the concept of SSM matrix can be
83
+ extended to the set of matrices. If so, then whether the same result holds true for EHLCP.
84
+ (Q3) Motivated by the results of Gowda and Jones [19] regarding the connectedness of the solution
85
+ set of LCP, one can ask whether the solution set of EHLCP is connected if the set of matrices
86
+ has the column W0 property and the solution set of the corresponding EHLCP has a bounded
87
+ connected component.
88
+ The paper’s outline is as follows: We present some basic definitions and results in section 2. We
89
+ generalize the concept of R0 matrix and prove the existence result for EHLCP in section 3. In
90
+ section 4, we introduce the SSM-W property, and we then study an existence and uniqueness result
91
+ for the EHLCP when the underlying set of matrices have this property. In the last section, we give
92
+ a necessary and sufficient condition for the connectedness of the solution set of the EHLCP.
93
+ 2
94
+ Notations and Preliminaries
95
+ 2.1
96
+ Notations
97
+ Throughout this paper, we use the following notations:
98
+ (i) The n dimensional Euclidean space with the usual inner product will be denoted by Rn. The
99
+ set of all non-negative vectors (respectively, positive vectors) in Rn will be denoted by Rn
100
+ +
101
+ 2
102
+
103
+ (respectively, Rn
104
+ ++ ). We say x ≥ 0 (respectively, > 0) if and only if x ∈ Rn
105
+ + (respectively,
106
+ Rn
107
+ ++).
108
+ (ii) The k-ary Cartesian power of Rn will be denoted by Λ(k)
109
+ n
110
+ and the k-ary Cartesian power of
111
+ Rn
112
+ ++ will be denoted by Λ(k)
113
+ n,++. The bold zero ’0’ will be used for denoting the zero vector
114
+ (0, 0, ..., 0) ∈ Λ(k)
115
+ n .
116
+ (iii) The set of all n×n real matrices will be denoted by Rn×n. We use the symbol Λ(k)
117
+ n×n to denote
118
+ the k-ary Cartesian product of Rn×n.
119
+ (iv) We use [n] to denote the set {1, 2, ..., n}.
120
+ (v) Let M ∈ Rn×n. We use diag(M) to denote the vector (M11, M22, ..., Mnn) ∈ Rn, where Mii is
121
+ the ith diagonal entry of matrix M and det(M) is used to denote the determinant of matrix
122
+ M.
123
+ (vi) SOL(C, d, q) will be used for denoting the set of all solution to EHLCP(C, d, q).
124
+ We now recall some definitions and results from the LCP theory, which will be used frequently in
125
+ our paper.
126
+ Proposition 2.1 ([8]). Let V = Rn. Then, the following statements are equivalent.
127
+ (i) x ∧ y = 0.
128
+ (ii) x, y ≥ 0 and x ∗ y = 0, where ∗ is the Hadamard product.
129
+ (iii) x, y ≥ 0 and ⟨x, y⟩ = 0.
130
+ Definition 1 ([4]). Let C = (C0, C1, ..., Ck) ∈ Λ(k+1)
131
+ n×n .
132
+ Then a matrix R ∈ Rn×n is column
133
+ representative of C if
134
+ R.j ∈ { (C0).j, (C1).j, ..., (Ck).j }, ∀j ∈ [n],
139
+ where R.j is the jth column of matrix R.
140
+ Next, we define the column W-property.
141
+ Definition 2 ([4]). Let C := (C0, C1, ..., Ck) ∈ Λ(k+1)
142
+ n×n . Then we say that C has the
143
+ (i) column W-property if the determinants of all the column representative matrices of C are all
144
+ positive or all negative.
145
+ (ii) column W0-property if there exists N := (N0, N1, ..., Nk) ∈ Λ(k+1)
146
+ n×n
147
+ such that C + ǫN := (C0 +
148
+ ǫN0, C1 + ǫN1, ..., Ck + ǫNk) has the column W-property for all ǫ > 0.
149
+ Due to Gowda and Sznajder [4], we have the following result.
150
+ Theorem 2.2 ([4]). For C = (C0, C1, ..., Ck) ∈ Λ(k+1)
151
+ n×n , the following are equivalent:
152
+ (i) C has the column W-property.
153
+ (ii) For arbitrary non-negative diagonal matrices D0, D1, ..., Dk ∈ Rn×n with diag(D0 +D1 +D2 +
154
+ ... + Dk) > 0,
155
+ det
156
+
157
+ C0D0 + C1D1 + ... + CkDk
158
+
159
+ ̸= 0.
160
+ (iii) C0 is invertible and (I, C−1
161
+ 0 C1, ..., C−1
162
+ 0 Ck) has the column W-property.
163
+ 3
164
+
165
+ (iv) For all q ∈ Rn and d ∈ Λ(k−1)
166
+ n,++ , EHLCP(C, d, q) has a unique solution.
167
+ If k = 1 and C−1
168
+ 0
169
+ exists, then HLCP(C0, C1, q) is equivalent to LCP(C−1
170
+ 0 C1, C−1
171
+ 0 (q)). In this
172
+ case, C−1
173
+ 0 C1 is a P matrix if and only if for all q ∈ Rn, LCP(C−1
174
+ 0 C1, C−1
175
+ 0 (q)) has a unique solution
176
+ (see, Theorem 3.3.7 in [1]). Hence we have the following theorem given the previous theorem.
177
+ Theorem 2.3 ([4]). Let (C0, C1) ∈ Λ(2)
178
+ n×n. Then the following are equivalent.
179
+ (i) (C0, C1) has the column W-property.
180
+ (ii) C0 is invertible and C−1
181
+ 0 C1 is a P matrix.
182
+ (iii) For all q ∈ Rn, HLCP(C0, C1, q) has a unique solution.
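For small n and k, the column W-property in Definition 2 can be tested directly by enumerating all (k+1)^n column representative matrices and checking that their determinants share a strict sign; the following brute-force sketch (ours, not from [4]) does exactly that:

import itertools
import numpy as np

def has_column_w_property(C, tol=1e-12):
    """C = [C0, C1, ..., Ck], each an n x n array.  True if every column representative
    matrix (column j taken from one of the Ci) has a determinant of the same strict sign."""
    n = C[0].shape[0]
    dets = []
    for choice in itertools.product(range(len(C)), repeat=n):
        R = np.column_stack([C[choice[j]][:, j] for j in range(n)])
        dets.append(np.linalg.det(R))
    dets = np.asarray(dets)
    return bool(np.all(dets > tol) or np.all(dets < -tol))

# Consistent with Theorem 2.3: for C = [I, M], this holds exactly when M is a P matrix.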
183
+ 2.2
184
+ Degree theory
185
+ We now recall the definition and some properties of a degree from [2, 3] for our discussion.
186
+ Let Ω be an open bounded set in Rn. Suppose h : ¯Ω → Rn is a continuous function and a vector
187
+ p /∈ h(∂Ω), where ∂Ω and ¯Ω denote the boundary and closure of Ω, respectively. Then the degree of
188
+ h is defined with respect to p over Ω denoted by deg(h, Ω, p). The equation h(x) = p has a solution
189
+ whenever deg(h, Ω, p) is non-zero. If h(x) = p has only one solution, say y in Rn, then the degree is
190
+ the same over all bounded open sets containing y. This common degree is denoted by deg(h, p).
191
+ 2.2.1
192
+ Properties of the degree
193
+ The following properties are used frequently here.
194
+ (D1) deg(I, Ω, ·) = 1, where I is the identity function.
195
+ (D2) Homotopy invariance: Let a homotopy Φ(x, s) : Rn ×[0, 1] → Rn be continuous. If the zero
196
+ set of Φ(x, s), X = {x : Φ(x, s) = 0 for some s ∈ [0, 1]} is bounded, then for any bounded
197
+ open set Ω in Rn containing the zero set X, we have
198
+ deg(Φ(x, 1), Ω, 0) = deg(Φ(x, 0), Ω, 0).
199
+ (D3) Nearness property: Assume deg(h1(x), Ω, p) is defined and h2 : ¯Ω → Rn is a continuous
200
+ function. If supx∈Ω∥h2(x) − h1(x)∥ < dist(p, ∂Ω), then deg(h2(x), Ω, p) is defined and equals
201
+ to deg(h1(x), Ω, p).
202
+ The following result from Facchinei and Pang [2] will be used later.
203
+ Proposition 2.4 ([2]). Let Ω be a non-empty, bounded open subset of Rn and let Φ : ¯Ω → Rn be a
204
+ continuous injective mapping. Then deg(Φ, Ω, p) ̸= 0 for all p ∈ Φ(Ω).
205
+ Note: All the degree theoretic results and concepts are also applicable over any finite dimensional
206
+ Hilbert space (like Rn or Rn × Rn × Rn etc).
207
+ 3
208
+ R0-W property
209
+ In this section, we first define the R0-W property for the set of matrices which is a natural generaliza-
210
+ tion of R0 matrix in the LCP theory. We then show that the R0-W property gives the boundedness
211
+ of the solution set of the corresponding EHLCP.
212
+ 4
213
+
214
+ Definition 3. Let C = (C0, C1, ..., Ck) ∈ Λ(k+1)_{n×n}. We say that C has the R0-W property if the system
+ C0x0 = Σ_{i=1}^{k} Cixi and x0 ∧ xj = 0 ∀ j ∈ [k]
+ has only the zero solution.
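Deciding the R0-W property amounts to a disjunctive linear feasibility question: in any nonzero solution, x0 ∧ xj = 0 forces every xj (j ∈ [k]) to vanish on the support of x0, so it suffices to check, for each α ⊆ [n], whether a nonzero nonnegative solution exists with x0 supported on α and the xj supported on its complement. A brute-force sketch for small n (ours; it relies on scipy's linear programming solver and is not part of the paper) is:

import itertools
import numpy as np
from scipy.optimize import linprog

def has_r0w_property(C):
    """C = [C0, C1, ..., Ck].  True if C0 x0 = sum_i Ci xi with x0 ^ xj = 0 for all
    j in [k] admits only the zero solution (brute force over supports of x0)."""
    C0, Cs = C[0], C[1:]
    n = C0.shape[0]
    for r in range(n + 1):
        for alpha in map(list, itertools.combinations(range(n), r)):
            comp = [i for i in range(n) if i not in alpha]
            # Columns: x0 restricted to alpha, then each xj restricted to the complement.
            A = np.hstack([C0[:, alpha]] + [-Ci[:, comp] for Ci in Cs])
            m = A.shape[1]
            if m == 0:
                continue
            # A nonzero solution exists  <=>  {A z = 0, z >= 0, sum(z) = 1} is feasible.
            A_eq = np.vstack([A, np.ones((1, m))])
            b_eq = np.append(np.zeros(n), 1.0)
            res = linprog(np.zeros(m), A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0, None)] * m, method="highs")
            if res.status == 0:
                return False
    return True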
223
+ It can be seen easily that the R0-W property coincides with R0 matrix when k = 1 and C0 = I.
224
+ Also, it is noted (see [8]) that if k = 1, then the R0-W property is referred to as an R0 pair. To proceed
225
+ further, we prove the following result.
226
+ Lemma 3.1. Let C = (C0, C1, ..., Ck) ∈ Λ(k+1)
227
+ n×n
228
+ and x = (x0, x1, ..., xk) ∈ SOL(C, d, q). Then x
229
+ satisfies the following system
230
+ C0x0 = q +
231
+ k
232
+
233
+ i=1
234
+ Cixi and x0 ∧ xj = 0 ∀ j ∈ [k].
235
+ Proof. As x0 ≥ 0, there exists an index set α ⊆ [n] such that (x0)i =
236
+
237
+ > 0
238
+ i ∈ α
239
+ 0
240
+ i ∈ [n] \ α . Since
241
+ x0 ∧ x1 = 0, we have (x1)i = 0 for all i ∈ α. From (d1 − x1) ∧ x2 = 0, we get (d1)i(x2)i = 0 ∀i ∈ α.
242
+ This gives that (x2)i = 0 ∀i ∈ α. By substituting (x2)i = 0 ∀i ∈ α in (d2 − x2) ∧ x3 = 0, we obtain
243
+ (x3)i = 0 ∀i ∈ α. Continuing the process in a similar way, one can get (x4)i = (x5)i = ... = (xk)i =
244
+ 0 ∀i ∈ α. So, x0 ∧ xj = 0 ∀ j ∈ [k]. This completes the proof.
245
+ We now prove the boundedness of the solution set of EHLCP when the involved set of matrices
246
+ has the R0-W property.
247
+ Theorem 3.2. Let C = (C0, C1, ..., Ck) ∈ Λ(k+1)
248
+ n×n . If C has the R0-W property then SOL(C, d, q)
249
+ is bounded for every q ∈ Rn and d ∈ Λ(k−1)
250
+ n,++ .
251
+ Proof. Suppose there exist q ∈ Rn and d = (d1, d2, ..., dk−1) ∈ Λ(k−1)
252
+ n,++ such that SOL(C, d, q) is
253
+ unbounded. Then there exists a sequence x(m) = (x(m)
254
+ 0
255
+ , x(m)
256
+ 1
257
+ , ..., x(m)
258
+ k
259
+ ) in Λ(k+1)
260
+ n
261
+ such that ||x(m)|| →
262
+ ∞ as m → ∞ and it satisfies
263
+ C0x(m)
264
+ 0
265
+ = q +
266
+ k
267
+
268
+ i=1
269
+ Cix(m)
270
+ i
271
+ x(m)
272
+ 0
273
+ ∧ x(m)
274
+ 1
275
+ = 0 and (dj − x(m)
276
+ j
277
+ ) ∧ x(m)
278
+ j+1 = 0 ∀j ∈ [k − 1].
279
+ (3)
280
+ From the Lemma 3.1, equation 3 gives that
281
+ C0x(m)
282
+ 0
283
+ =q +
284
+ k
285
+
286
+ i=1
287
+ Cix(m)
288
+ i
289
+ and x(m)
290
+ 0
291
+ ∧ x(m)
292
+ j
293
+ =
294
+ 0 ∀j ∈ [k].
295
+ (4)
296
+ As x(m)/∥x(m)∥ is a unit vector for all m, by passing to a subsequence if necessary, x(m)/∥x(m)∥ converges to some vector y = (y0, y1, ..., yk) ∈ Λ(k+1)_n with ∥y∥ = 1. Now divide equation (4) by ∥x(m)∥ and take the limit m → ∞ to get
303
+ C0y0 =
304
+ k
305
+
306
+ i=1
307
+ Ciyi and y0 ∧ yj = 0 ∀j ∈ [k].
308
+ This implies that y must be a zero vector as C has the R0-W property, which contradicts the fact
309
+ that ||y|| = 1. Therefore SOL(C, d, q) is bounded.
310
+ 5
311
+
312
+ 3.1
313
+ Degree of EHLCP
314
+ Let C = (C0, C1, ..., Ck) ∈ Λ(k+1)
315
+ n×n
316
+ and d = (d1, d2, ...., dk−1) ∈ Λ(k−1)
317
+ n,++ .
318
+ We define a function
319
+ F : Λ(k+1)
320
+ n
321
+ → Λ(k+1)
322
+ n
323
+ as
324
+ F(x) = ( C0x0 − Σ_{i=1}^{k} Cixi,  x0 ∧ x1,  (d1 − x1) ∧ x2,  (d2 − x2) ∧ x3,  ...,  (dk−1 − xk−1) ∧ xk ).    (5)
340
+ We denote the degree of F with respect to 0 over bounded open set Ω ⊆ Λ(k+1)
341
+ n
342
+ as deg(C, Ω, 0).
343
+ It is noted that if C has the R0-W property, in view of the Lemma 3.1, F(x) = 0 ⇔ x = 0 which
344
+ implies that deg(C, Ω, 0) = deg(C, 0) for any bounded open set Ω contains the origin in Λ(k+1)
345
+ n
346
+ . We
347
+ call this degree as EHLCP-degree of C.
348
+ We now prove an existence result for EHLCP.
349
+ Theorem 3.3. Let C = (C0, C1, ..., Ck) ∈ Λ(k+1)
350
+ n×n . Suppose the following hold:
351
+ (i) C has the R0-W property.
352
+ (ii) deg(C, 0) ̸= 0.
353
+ Then EHLCP(C, d, q) has non-empty compact solution for all q ∈ Rn and d ∈ Λ(k−1)
354
+ n,++ .
355
+ Proof. As the solution set of EHLCP is closed, it is enough to prove that the solution set is non-empty
356
+ and bounded. We first define a homotopy Φ : Λ(k+1)
357
+ n
358
+ × [0, 1] → Λ(k+1)
359
+ n
360
+ as
361
+ Φ(x, s) = ( C0x0 − Σ_{i=1}^{k} Cixi − sq,  x0 ∧ x1,  (d1 − x1) ∧ x2,  (d2 − x2) ∧ x3,  ...,  (dk−1 − xk−1) ∧ xk ).
376
+ Then,
377
+ Φ(x, 0) = F(x) and Φ(x, 1) = F(x) − ˆq, where ˆq = (q, 0, 0, ...0) ∈ Λ(k+1)
378
+ n
379
+ .
380
+ By using the similar argument as in above Theorem 3.2, we can easily show that the zero set of
381
+ homotopy, X = {x : Φ(x, s) = 0 for some s ∈ [0, 1]} is bounded. From the property of degree (D2),
382
+ we get deg(F, Ω, 0) = deg(F − ˆq, Ω, 0) for any open bounded set Ω containing X. As deg(F, Ω, 0) =
383
+ deg(C, 0) ̸= 0, we obtain deg(F − ˆq, Ω, 0) ̸= 0 which implies SOL(C, d, q) is non-empty. As C has
384
+ the R0-W property, by Theorem 3, SOL(C, d, q) is bounded. This completes the proof.
385
+ 4
386
+ SSM-W property
387
+ In this section, we first define the SSM-W property for the set of matrices which is a generalization
388
+ of the SSM matrix in the LCP theory, and we then prove that the existence and uniqueness result
389
+ for the EHLCP when the involved set of matrices have the SSM-W property.
390
+ We now recall that an n × n real matrix M is called strictly semimonotone (SSM) matrix if
391
+ [x ∈ Rn
392
+ +, x ∗ Mx ≤ 0 ⇒ x = 0]. We generalize this concept to the set of matrices.
393
+ 6
394
+
395
+ Definition 4. We say that C = (C0, C1, ..., Ck) ∈ Λ(k+1)_{n×n} has the SSM-W property if
+ {C0x0 = Σ_{i=1}^{k} Cixi, xi ≥ 0 and x0 ∗ xi ≤ 0 ∀i ∈ [k]} ⇒ x = (x0, x1, ..., xk) = 0.
403
+ We prove the following result.
404
+ Proposition 4.1. Let C = (C0, C1, ..., Ck) ∈ Λ(k+1)
405
+ n×n . If C has the SSM-W property, then the
406
+ followings hold:
407
+ (i) C−1
408
+ 0
409
+ exists and C−1
410
+ 0 Ci is a strict semimonotone matrix for all i ∈ [k].
411
+ (ii) (I, C−1
412
+ 0 C1, ..., C−1
413
+ 0 Ck) has the SSM-W property.
414
+ (iii) (P T C0P, P T C1P, ..., P T CkP) has the SSM-W property for any permutation matrix P of order
415
+ n.
416
+ Proof. (i): Suppose there exists a vector x0 ∈ Rn such that C0x0 = 0. Then we have
417
+ C0x0 = C10 + C20 + ... + Ck0.
418
+ This gives that x0 = 0 as C has the SSM-W property. Thus C0 is invertible.
419
+ Now we prove the second part of (i).
420
+ Without loss of generality, it is enough to prove that
421
+ C−1
422
+ 0 C1 is a strictly semimonotone matrix. Suppose there exists a vector y ∈ Rn such that y ≥ 0
423
+ and y ∗ (C−1
424
+ 0 C1)y ≤ 0. Let y0 := (C−1
425
+ 0 C1)y, y1 := y and yi := 0 for all 2 ≤ i ≤ k. Then we get
426
+ C0y0 = C1y1 + C2y2 + ... + Ciyi + .. + Ckyk,
427
+ yj ≥ 0 and y0 ∗ yj ≤ 0 ∀j ∈ [k].
428
+ Since C has the SSM-W property, yj = 0 ∀j ∈ [k]. Thus C−1
429
+ 0 C1 is a strict semimonotone matrix.
430
+ This completes the proof.
431
+ (ii): It follows from the definition of the SSM-W property.
432
+ (iii): Let x = (x0, x1, ..., xk) ∈ Λ(k+1)
433
+ n
434
+ such that
435
+ (P T C0P)x0 =
436
+ k
437
+
438
+ i=1
439
+ (P T CiP)xi, xj ≥ 0 and x0 ∗ xj ≤ 0 ∀j ∈ [k].
440
+ As P is a non-negative matrix and PP T = P T P, we can rewrite the above equation as
441
+ C0Px0 =
442
+ k
443
+
444
+ i=1
445
+ CiPxi, Pxj ≥ 0 and Px0 ∗ Pxj ≤ 0 ∀j ∈ [k].
446
+ By the SSM-W property of C, Pxj = 0 for all 0 ≤ j ≤ k which implies x = 0. This completes the
447
+ proof.
448
+ In the above Proposition 4.1, it can be seen easily that the converse of the item (ii) and (iii) are
449
+ valid. But the converse of item (i) need not be true. The following example illustrates this.
450
+ Example 4.2. Let C = (C0, C1, C2) ∈ Λ(3)
451
+ 2×2, where
452
+ C0 = [1 0; 0 1],  C1 = [1 −2; 0 1],  C2 = [1 0; −2 1]  (rows separated by semicolons).
474
+ It is easy to check that C−1
475
+ 0 C1 = C1 and C−1
476
+ 0 C2 = C2 are P matrix. So, C−1
477
+ 0 C1 and C−1
478
+ 0 C2 are SSM
479
+ matrix. Let x = (x0, x1, x2) = ((0, 0)T , (1, 1)T , (1, 1)T ) ∈ Λ(3)
480
+ 2 . Then we can see that the non-zero x
481
+ satisfies
482
+ C0x0 = C1x1 + C2x2, x1 ≥ 0, x2 ≥ 0 and x0 ∗ x1 = 0 = x0 ∗ x2.
483
+ So C can not have the SSM-W property.
484
+ 7
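+ The arithmetic behind Example 4.2 can also be checked numerically. A minimal NumPy sketch
+ (illustrative only; the matrices and the witness vector x are taken from the example above):
+ import numpy as np
+ C0 = np.eye(2)
+ C1 = np.array([[1.0, -2.0], [0.0, 1.0]])
+ C2 = np.array([[1.0, 0.0], [-2.0, 1.0]])
+ x0 = np.array([0.0, 0.0]); x1 = np.array([1.0, 1.0]); x2 = np.array([1.0, 1.0])
+ # C0 x0 = C1 x1 + C2 x2 holds, the xi are non-negative, and the Hadamard
+ # products x0 * xi are non-positive, yet x is non-zero: SSM-W fails for C.
+ assert np.allclose(C0 @ x0, C1 @ x1 + C2 @ x2)
+ assert (x1 >= 0).all() and (x2 >= 0).all()
+ assert (x0 * x1 <= 0).all() and (x0 * x2 <= 0).all()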
485
+
486
+ The following result is a generalization of the well-known result in matrix theory that every P
+ matrix is an SSM matrix.
+ Theorem 4.3. Let C = (C0, C1, ..., Ck) ∈ Λ(k+1)_{n×n}. If C has the column W-property, then C has the
+ SSM-W property.
491
+ Proof. Suppose, on the contrary, that there exists a non-zero vector x = (x0, x1, ..., xk) ∈ Λ(k+1)_n such that
+ C0x0 = C1x1 + C2x2 + ... + Ckxk, xj ≥ 0, x0 ∗ xj ≤ 0 ∀j ∈ [k].
+ Consider a vector y ∈ Rn whose jth component is given by
+ yj = −1 if (x0)j > 0;  yj = 1 if (x0)j < 0;  yj = 1 if (x0)j = 0 and (xi)j ̸= 0 for some i ∈ [k];
+ yj = 0 if (x0)j = 0 and (xi)j = 0 for all i ∈ [k].
+ As x is a non-zero vector, y must be a non-zero vector. Consider the diagonal matrices D0, D1, ..., Dk
+ defined by
+ (D0)jj = (x0)j if (x0)j > 0;  (D0)jj = −(x0)j if (x0)j < 0;  (D0)jj = 0 if (x0)j = 0 and
+ (xi)j ̸= 0 for some i ∈ [k];  (D0)jj = 1 if (x0)j = 0 and (xi)j = 0 for all i ∈ [k],
+ and, for all i ∈ [k],
+ (Di)jj = 0 if (x0)j > 0, and (Di)jj = (xi)j otherwise.
+ It is easy to verify that D0, D1, ..., Dk are non-negative diagonal matrices with diag(D0 + D1 + ... +
+ Dk) > 0, and also that
+ x0 = −D0y and xi = Diy ∀i ∈ [k].                                        (6)
+ By substituting Equation (6) into C0x0 = C1x1 + ... + Ckxk, we get
+ C0(−D0y) = (C1D1 + ... + CkDk)y, that is, (C0D0 + C1D1 + ... + CkDk)y = 0.
+ Since y ̸= 0, this implies that det(C0D0 + C1D1 + ... + CkDk) = 0. So, C does not have the column
+ W-property by Theorem 2.2, a contradiction. Therefore, C has the SSM-W property.
566
+ The following example illustrates that the converse of the above theorem is invalid.
+ Example 4.4. Let C = (C0, C1, C2) ∈ Λ(3)_{2×2} such that
+ C0 = [ 1 0 ; 0 1 ], C1 = [ 1 1 ; 1 1 ], C2 = [ 1 1 ; 1 1 ].
+ Suppose w = (x, y, z) ∈ Λ(3)_2 is such that
+ C0x = C1y + C2z and y, z ≥ 0, x ∗ y ≤ 0, x ∗ z ≤ 0.
+ From C0x = C1y + C2z, we get
+ x1 = x2 = y1 + y2 + z1 + z2.
+ As x ∗ y ≤ 0 and x ∗ z ≤ 0, the above equation gives
+ y1(y1 + y2 + z1 + z2) ≤ 0 and y2(y1 + y2 + z1 + z2) ≤ 0,
+ z1(y1 + y2 + z1 + z2) ≤ 0 and z2(y1 + y2 + z1 + z2) ≤ 0.                   (7)
+ Since y, z ≥ 0, equation (7) forces y1 + y2 + z1 + z2 = 0, and hence x = y = z = 0. So C has the
+ SSM-W property. As det(C1) = 0, by the definition of the column W-property, C does not have the
+ column W-property.
608
+ We now give a characterization of the SSM-W property.
+ Theorem 4.5. C = (C0, C1, ..., Ck) ∈ Λ(k+1)_{n×n} has the SSM-W property if and only if (C0, C1D1 +
+ C2D2 + ... + CkDk) ∈ Λ(2)_{n×n} has the SSM-W property for every set of non-negative diagonal
+ matrices (D1, D2, ..., Dk) ∈ Λ(k)_{n×n} with diag(D1 + D2 + ... + Dk) > 0.
615
+ Proof. Necessity: Let (D1, D2, ..., Dk) ∈ Λ(k)_{n×n} be a set of non-negative diagonal matrices with
+ diag(D1 + D2 + ... + Dk) > 0. Suppose there exist vectors x0 ∈ Rn and y ∈ Rn+ such that
+ C0x0 = (C1D1 + C2D2 + ... + CkDk)y and x0 ∗ y ≤ 0.
+ For each i ∈ [k], we set xi := Diy. As each Di is a non-negative diagonal matrix, from x0 ∗ y ≤ 0
+ we get x0 ∗ xi ≤ 0 ∀i ∈ [k]. Then we have
+ C0x0 = C1x1 + C2x2 + ... + Ckxk, xi ≥ 0, x0 ∗ xi ≤ 0 ∀i ∈ [k].
+ As C has the SSM-W property, we must have x0 = x1 = ... = xk = 0. This implies
+ x1 + x2 + ... + xk = (D1 + D2 + ... + Dk)y = 0.
+ As diag(D1 + D2 + ... + Dk) > 0, we have y = 0. This completes the necessity part.
+ Sufficiency: Let x = (x0, x1, ..., xk) ∈ Λ(k+1)_n be such that
+ C0x0 = C1x1 + C2x2 + ... + Ckxk and xj ≥ 0, x0 ∗ xj ≤ 0 ∀j ∈ [k].            (8)
+ We now consider the n × k matrix X whose jth column is xj for j ∈ [k], so X = [x1 x2 ... xk].
+ Let S := {i ∈ [n] : the ith row sum of X is zero}. From this, we define a vector y ∈ Rn and diagonal
+ matrices D1, D2, ..., Dk by
+ yi = 1 if i /∈ S, yi = 0 if i ∈ S, and (Dj)ii = (xj)i if i /∈ S, (Dj)ii = 1 if i ∈ S,
+ where (Dj)ii is the ith diagonal entry of Dj for all j ∈ [k]. It can be seen easily that Djy = xj for all
+ j ∈ [k] and each Dj is a non-negative diagonal matrix with diag(D1 + D2 + ... + Dk) > 0. Therefore,
+ from equation (8), we get
+ C0x0 = (C1D1 + C2D2 + ... + CkDk)y, x0 ∗ y ≤ 0.
+ From the hypothesis, we get x0 = 0 = y, which implies x = 0. This completes the sufficiency part.
665
+
666
+ We now give a characterization of the column W-property.
+ Theorem 4.6. C = (C0, C1, ..., Ck) ∈ Λ(k+1)_{n×n} has the column W-property if and only if
+ (C0, C1D1 + C2D2 + ... + CkDk) ∈ Λ(2)_{n×n} has the column W-property for every set of non-negative
+ diagonal matrices D1, D2, ..., Dk of order n with diag(D1 + D2 + ... + Dk) > 0.
+ Proof. Necessity: It is obvious.
+ Sufficiency: Let {E0, E1, ..., Ek} be a set of non-negative diagonal matrices of order n such that
+ diag(E0 + E1 + ... + Ek) > 0. We claim that det(C0E0 + C1E1 + ... + CkEk) ̸= 0.
+ To prove this, we first construct non-negative diagonal matrices D1, D2, ..., Dk and E as follows:
+ (Dj)ii = (Ej)ii and Eii = 1 if (E1)ii + (E2)ii + ... + (Ek)ii ̸= 0;
+ (Dj)ii = 1 and Eii = 0 if (E1)ii + (E2)ii + ... + (Ek)ii = 0,
+ where (Dj)ii is the ith diagonal entry of Dj for j ∈ [k] and Eii is the ith diagonal entry of E.
+ By an easy computation, we have DjE = Ej ∀j ∈ [k] and diag(D1 + D2 + ... + Dk) > 0. From
+ diag(E0 + E1 + ... + Ek) > 0, we get diag(E0 + E) > 0. As DjE = Ej ∀j ∈ [k] and (C0, C1D1 +
+ C2D2 + ... + CkDk) has the column W-property, by Theorem 2.2, we have
+ det(C0E0 + C1E1 + ... + CkEk) = det(C0E0 + C1D1E + ... + CkDkE)
+ = det(C0E0 + (C1D1 + ... + CkDk)E) ̸= 0.
+ Hence C has the column W-property. This completes the proof.
705
+ A well-known result in the standard LCP is that the strictly semimonotone matrices and the P
+ matrices coincide within the class of Z matrices (see Theorem 3.11.10 in [1]). Analogous to this
+ result, we prove the following theorem.
+ Theorem 4.7. Let C = (C0, C1, ..., Ck) ∈ Λ(k+1)_{n×n} be such that C0^{-1}Ci is a Z matrix for all i ∈ [k].
+ Then the following statements are equivalent.
+ (i) C has the column W-property.
+ (ii) C has the SSM-W property.
+ Proof. (i) =⇒ (ii): It follows from Theorem 4.3.
+ (ii) =⇒ (i): Let {D1, D2, ..., Dk} be a set of non-negative diagonal matrices of order n such
+ that diag(D1 + D2 + ... + Dk) > 0. In view of Theorem 4.6, it is enough to prove that (C0, C1D1 +
+ C2D2 + ... + CkDk) has the column W-property.
+ As C has the SSM-W property, by Theorem 4.5, (C0, C1D1 + ... + CkDk) has the SSM-W property.
+ So, by Proposition 4.1, (I, C0^{-1}(C1D1 + ... + CkDk)) has the SSM-W property and
+ C0^{-1}(C1D1 + C2D2 + ... + CkDk) is a strictly semimonotone matrix. As each C0^{-1}Ci is a Z matrix,
+ C0^{-1}(C1D1 + C2D2 + ... + CkDk) is also a Z matrix. Hence C0^{-1}(C1D1 + C2D2 + ... + CkDk)
+ is a P matrix. So, by Theorem 2.3, (C0, C1D1 + C2D2 + ... + CkDk) has the column W-property.
+ Hence we have our claim.
747
+ Corollary 4.8. Let C = (C0, C1, ..., Ck) ∈ Λ(k+1)_{n×n} be such that C0^{-1}Ci is a Z matrix for all i ∈ [k].
+ Then the following statements are equivalent.
+ (i) C has the SSM-W property.
+ (ii) For all q ∈ Rn and d ∈ Λ(k−1)_{n,++}, EHLCP(C, d, q) has a unique solution.
+ Proof. (i) =⇒ (ii): It follows from Theorem 4.7 and Theorem 2.2. (ii) =⇒ (i): It follows from
+ Theorem 2.2 and Theorem 4.3.
758
+
759
+ In the standard LCP [3], strict semimonotonicity guarantees the existence of a solution of the
+ LCP. We now prove that the same result holds for the EHLCP.
+ Theorem 4.9. If C = (C0, C1, ..., Ck) ∈ Λ(k+1)_{n×n} has the SSM-W property, then SOL(C, d, q) ̸= ∅
+ for all q ∈ Rn and d ∈ Λ(k−1)_{n,++}.
766
+ Proof. As C has the SSM-W property, C has the R0-W property. From Theorem 4.1, it is enough
+ to prove that deg(C, 0) ̸= 0. To prove this, we consider a homotopy Φ : Λ(k+1)_n × [0, 1] → Λ(k+1)_n given by
+ Φ(x, t) = t ( C0x0, x1, x2, x3, ..., xk )
+ + (1 − t) ( C0x0 − (C1x1 + ... + Ckxk), x0 ∧ x1, (d1 − x1) ∧ x2, (d2 − x2) ∧ x3, ..., (dk−1 − xk−1) ∧ xk ).
800
+ Let F(x) := Φ(x, 0) and G(x) := Φ(x, 1). We first prove that the zero set X = {x : Φ(x, t) =
+ 0 for some t ∈ [0, 1]} of the homotopy Φ contains only zero. We consider the following cases.
+ Case 1: Suppose t = 0 or t = 1. If t = 0, then Φ(x, 0) = 0 =⇒ F(x) = 0. As C has the SSM-W
+ property, by Lemma 3.1 we have F(x) = 0 ⇒ x = 0. If t = 1, then Φ(x, 1) = 0 =⇒ G(x) = 0.
+ Again, since C has the SSM-W property, C0^{-1} exists, which implies that G is a one-one map. So,
+ G(x) = 0 ⇒ x = 0.
808
+ Case 2: Suppose t ∈ (0, 1). Then Φ(x, t) = 0 gives
+ ( C0x0 − (C1x1 + ... + Ckxk), x0 ∧ x1, (d1 − x1) ∧ x2, (d2 − x2) ∧ x3, ..., (dk−1 − xk−1) ∧ xk )
+ = −α ( C0x0, x1, x2, x3, ..., xk ),  where α = t/(1 − t) > 0.                    (9)
+ From the second row of the above equation, we have
+ x0 ∧ x1 = −αx1 =⇒ min{x0 + αx1, (1 + α)x1} = 0.
+ By Proposition 2.1, we get x1 ≥ 0 and (x0 + αx1) ∗ (1 + α)x1 = 0, which implies that x0 ∗ x1 ≤ 0.
+ Set ∆ := {i ∈ [n] : (x1)i > 0}. So, we have
+ (x0)i ≤ 0 if i ∈ ∆, (x0)i ≥ 0 if i /∈ ∆, and (x1)i > 0 if i ∈ ∆, (x1)i = 0 if i /∈ ∆.        (10)
+ From the third row of equation (9), we have (d1 − x1) ∧ x2 = −αx2, which is equivalent to
+ min{d1 − x1 + αx2, (1 + α)x2} = 0.
+ This gives that x2 ≥ 0 and (d1 − x1 + αx2) ∗ (1 + α)x2 = 0. As d1 > 0 and by the last part of
+ equation (10), we have (x2)i ≥ 0 if i ∈ ∆ and (x2)i = 0 if i /∈ ∆.
+ This leads to x0 ∗ x2 ≤ 0. By continuing the same argument for the remaining rows, we get
+ xj ≥ 0 and x0 ∗ xj ≤ 0 ∀j ∈ [k].
+ From the first row of equation (9), the vector x = (x0, x1, ..., xk) satisfies
+ C0(1 + α)x0 = C1x1 + ... + Ckxk and xj ≥ 0, x0 ∗ xj ≤ 0, j ∈ [k].
+ So, x = 0 as C has the SSM-W property.
876
+ From both cases, we get that X contains only zero. By the homotopy invariance property of degree
+ (D2), we have deg(Φ(·, 0), Ω, 0) = deg(Φ(·, 1), Ω, 0) for any bounded open set Ω containing 0. As G
+ is a continuous one-one function, by Proposition 2.4, we have
+ deg(C, 0) = deg(Φ(·, 0), Ω, 0) = deg(F, Ω, 0) = deg(G, Ω, 0) ̸= 0.
+ This completes the proof.
901
+ We now recall that a matrix A ∈ Rn×n is said to be an M matrix if it is a Z matrix and
+ A^{-1}(Rn+) ⊆ Rn+. We prove a uniqueness result for the EHLCP when q ≥ 0 and d ∈ Λ(k−1)_{n,++}.
+ Theorem 4.10. Let C = (C0, C1, ..., Ck) ∈ Λ(k+1)_{n×n} have the SSM-W property and let C0 be an M
+ matrix. Then for every q ∈ Rn+ and for every d ∈ Λ(k−1)_{n,++}, EHLCP(C, d, q) has a unique solution.
+ Proof. Let q ∈ Rn+ and d = (d1, d2, ..., dk−1) ∈ Λ(k−1)_{n,++}. We first show that (C0^{-1}q, 0, ..., 0) ∈ SOL(C, d, q).
+ As C0 is an M matrix and q ∈ Rn+, we have C0^{-1}q ≥ 0. If we set y = (y0, y1, ..., yk) := (C0^{-1}q, 0, ..., 0) ∈
+ Λ(k+1)_n, then we can easily see that (y0, y1, ..., yk) satisfies
+ C0y0 = q + C1y1 + ... + Ckyk, y0 ∧ y1 = 0 and (dj − yj) ∧ yj+1 = 0 ∀j ∈ [k − 1].
+ Hence (C0^{-1}q, 0, ..., 0) ∈ SOL(C, d, q).
+ Suppose x = (x0, x1, ..., xk) ∈ Λ(k+1)_n is another solution to EHLCP(C, d, q). Then,
+ C0x0 = q + C1x1 + ... + Ckxk, x0 ∧ x1 = 0, (dj − xj) ∧ xj+1 = 0 ∀j ∈ [k − 1].        (11)
+ From Lemma 3.1, we have
+ C0x0 = q + C1x1 + ... + Ckxk and x0 ∧ xj = 0 ∀j ∈ [k].                               (12)
+ We let z := x − y, so that z = (x0 − C0^{-1}q, x1, x2, ..., xk). By an easy computation, from equation (12)
+ we get
+ C0(x0 − C0^{-1}q) = C1x1 + ... + Ckxk and, for all j ∈ [k],
+ xj ≥ 0, (x0 − C0^{-1}q) ∗ xj = x0 ∗ xj − C0^{-1}q ∗ xj = −C0^{-1}q ∗ xj ≤ 0.
+ Since C has the SSM-W property, z = 0, which implies that (x0, x1, ..., xk) = (C0^{-1}q, 0, ..., 0). This
+ completes the proof.
966
+ 5
967
+ Connected solution set and Column W0 property
968
+ In this section, we give a necessary and sufficient condition for the connected solution set of the
969
+ EHLCP.
970
+ Definition 5. Let C = (C0, C1, ..., Ck) ∈ Λ(k+1)
971
+ n×n . We say that C is connected if SOL(C, d, q) is
972
+ connected for all q ∈ Rn and for all d ∈ Λ(k−1)
973
+ n,++ .
974
+ We now recall some definitions and results to proceed further.
975
+ Definition 6. [22] A subset of Rn is said to be a semi-algebraic set it can be represented as,
976
+ S =
977
+ s�
978
+ u=1
979
+ ru
980
+
981
+ v=1
982
+ {x ∈ Rn; fu,v(x) ∗uv 0},
983
+ where for all u ∈ [s] and for all v ∈ [ru], ∗uv ∈ { >, =} and fu,v is in the space of all real polynomials.
984
+ Theorem 5.1 ([22]). Let S be a semi-algebraic set. Then S is connected iff S is path-connected.
985
+ Lemma 5.2. The SOL(C, d, q) is a semi-algebraic set.
986
+ Proof. It is clear from the definition of SOL(C, d, q).
987
+ The following result gives a necessary condition for connectedness whenever C0 is an M matrix.
+ Theorem 5.3. Let C0 ∈ Rn×n be an M matrix. If C = (C0, C1, ..., Ck) ∈ Λ(k+1)_{n×n} is connected, then
+ SOL(C, d, q) = {(C0^{-1}q, 0, ..., 0)} for all q ∈ Rn++ and for all d ∈ Λ(k−1)_{n,++}.
996
+ Proof. Let q ∈ Rn
997
+ ++ and d = (d1, d2, ..., dk−1) ∈ Λ(k−1)
998
+ n,++ . It can be seen from the proof of The-
999
+ orem 4.10 that x = (C−1
1000
+ 0 q, 0, ..., 0) ∈ SOL(C, d, q). We now show that x is the only solution to
1001
+ EHLCP(C, d, q).
1002
+ Assume the contrary: suppose y is another solution to EHLCP(C, d, q). As SOL(C, d, q) is
+ connected, by Lemma 5.2 and Theorem 5.1 it is path-connected. So, there exists a path γ = (γ0, γ1, ..., γk) :
+ [0, 1] → SOL(C, d, q) such that
+ γ(0) = x, γ(1) = y and γ(t) ̸= x ∀t > 0.
1006
+ Let {tm} ⊆ (0, 1) be a sequence such that tm → 0 as m → ∞. Then, by the continuity of γ,
1007
+ γ(tm) → γ(0) = x as m → ∞. Since (γ0(tm), γ1(tm), ..., γk(tm)) ∈ SOL(C, d, q),
+ C0γ0(tm) = q + C1γ1(tm) + ... + Ckγk(tm),
+ γ0(tm) ∧ γ1(tm) = 0 and (dj − γj(tm)) ∧ γj+1(tm) = 0 ∀j ∈ [k − 1].
+ Now we claim that there exists a subsequence {tml} of {tm} such that
+ (γj(tml))i ̸= 0 for some j ∈ [k] and for some i ∈ [n].
+ Suppose the claim is not true. This means that for any given subsequence {tml} of {tm}, there exists
+ m0 ∈ N such that for all ml ≥ m0 we have
+ (γj(tml))i = 0 ∀i ∈ [n], ∀j ∈ [k].
1034
+
1035
+ So, γj(tm) is an eventually zero sequence for all j ∈ [k]. This implies that there exists a natural
+ number m0 such that
+ γ1(tm) = γ2(tm) = ... = γk(tm) = 0 ∀m ≥ m0.
+ As (γ0(tm), γ1(tm), ..., γk(tm)) ∈ SOL(C, d, q), we get γ0(tm) = C0^{-1}q ∀m ≥ m0. This gives us
+ that γ(tm) = x for all m ≥ m0, which contradicts the fact that γ(tm) ̸= x for all m. Therefore, our
+ claim is true. Without loss of generality, we may assume that the sequence {tm} itself satisfies the
+ condition (γj(tm))i ̸= 0 for some j ∈ [k] and for some i ∈ [n].
1050
+ i ̸= 0, for some j ∈ [k] and for some i ∈ [n].
1051
+ We now consider the following cases for possibilities of j.
1052
+ Case 1 : If j = 1, then (γ0(tm))i(γ1(tm))i = 0 which leads to (γ0(tm))i = 0. This implies that
1053
+ 0 = lim
1054
+ m→∞ γ0(tm)i = (C−1
1055
+ 0 q)i.
1056
+ But (C−1
1057
+ 0 q) > 0 as C0 is a M matrix. This is not possible. So, j ̸= 1.
1058
+ Case 2 : If 2 ≤ j ≤ k, then we have (dj−1 − γj−1(tm))i(γj(tm))i = 0 which gives that (dj−1 −
1059
+ γj−1(tm))i = 0. By taking limit m → ∞,
1060
+ 0 = lim
1061
+ m→∞(dj−1 − γj−1(tm))i = (dj−1)i − (γj−1(0))i = (dj−1)i > 0.
1062
+ This is not possible.
1063
+ From both cases, there is no such a j exists. This contradicts the fact. Hence x = (C−1
1064
+ 0 q, 0, ..., 0)
1065
+ is the only solution to EHLCP(C, d, q).
1066
+ The following result gives a sufficient condition for a connected solution to EHLCP.
1067
+ Theorem 5.4. Let C := (C0, C1, ..., Ck) ∈ Λ(k+1)
1068
+ n×n
1069
+ has the column W0-property. If SOL(C, d, q) has
1070
+ a bounded connected component, then SOL(C, d, q) is connected.
1071
+ Proof. If SOL(C, d, q) = ∅, then we have nothing to prove. Let SOL(C, d, q) ̸= ∅ and A be a con-
1072
+ nected component of SOL(C, d, q). If SOL(C, d, q) = A, then we are done. Suppose SOL(C, d, q) ̸=
1073
+ A. Then there exists y = (y0, y1, .., yk) ∈ SOL(C, d, q)\ A. As A is a bounded connected component
1074
+ of SOL(C, d, q), we can find an open bounded set Ω ⊆ Λ(k+1)
1075
+ n
1076
+ which contains A and it does not
1077
+ intersect with other component of SOL(C, d, q). Therefore y /∈ �� and ∂(Ω) ∩ SOL(C, d, q) = ∅.
1078
+ Since C has the column W0-property, there exists N := (N0, N1, ..., Nk) ∈ Λ(k+1)
1079
+ n×n
1080
+ such that
1081
+ C + ǫN := (C0 + ǫN0, C1 + ǫN1, ..., Ck + ǫNk) has the column W-property for every ǫ > 0.
1082
+ Let z = (z0, z1, ..., zk) ∈ A and ǫ > 0. We define functions H1, H2 and H3 as follows:
+ H1(x) = ( C0x0 − (C1x1 + ... + Ckxk) − q, x0 ∧ x1, (d1 − x1) ∧ x2, ..., (dk−1 − xk−1) ∧ xk ),
+ H2(x) = ( (C0 + ǫN0)x0 − Σi (Ci + ǫNi)xi + (Σi ǫNiyi − ǫN0y0 − q), x0 ∧ x1, (d1 − x1) ∧ x2, ...,
+ (dk−1 − xk−1) ∧ xk ),
+ H3(x) = ( (C0 + ǫN0)x0 − Σi (Ci + ǫNi)xi + (Σi ǫNizi − ǫN0z0 − q), x0 ∧ x1, (d1 − x1) ∧ x2, ...,
+ (dk−1 − xk−1) ∧ xk ),
+ where the sums run over i ∈ [k]. By putting x = y in H2(x), and x = z in H1(x) and H3(x), we get
+ H1(z) = H2(y) = H3(z) = 0.
+ For ǫ near zero, deg(H1, Ω, 0) = deg(H2, Ω, 0) = deg(H3, Ω, 0) by the nearness property
+ of degree (D3). As z ∈ Ω is a solution to H3(x) = 0 and C + ǫN has the column W-property,
+ we get deg(H3, Ω, 0) ̸= 0 by Theorems 4.3 and 4.9. Since deg(H2, Ω, 0) = deg(H3, Ω, 0), we have
+ deg(H2, Ω, 0) ̸= 0. This implies that if we set q2 := q + ǫN0y0 − Σi ǫNiyi, then EHLCP(C + ǫN, d, q2)
+ must have a solution in Ω. As C + ǫN has the column W-property, by Theorem 2.2, EHLCP(C + ǫN, d, q2)
+ has a unique solution, which must be equal to y. So, y ∈ Ω, a contradiction. Hence
+ SOL(C, d, q) = A. Thus SOL(C, d, q) is connected.
1139
+ 6
+ Conclusion
+ In this paper, we introduced the R0-W and SSM-W properties and then studied existence and
+ uniqueness results for the EHLCP when the underlying set of matrices has these properties.
+ Finally, we gave a necessary condition and a sufficient condition for the connectedness of the
+ solution set of the EHLCP.
1145
+ Declaration of Competing Interest
1146
+ The authors have no competing interests.
1147
+ Acknowledgements
1148
+ The first author is a CSIR-SRF fellow, and he wants to thank the Council of Scientific & Industrial
1149
+ Research(CSIR) for the financial support.
1150
+ References
+ [1] R.W. Cottle, J.-S. Pang, R.E. Stone: The Linear Complementarity Problem, Classics in Applied
+ Mathematics, SIAM, Philadelphia (2009).
+ [2] F. Facchinei, J.-S. Pang: Finite-Dimensional Variational Inequalities and Complementarity
+ Problems, Springer, New York (2003).
+ [3] M.S. Gowda: Applications of degree theory to linear complementarity problems, Math. Oper.
+ Res. 18, 868-879 (1993).
+ [4] R. Sznajder, M.S. Gowda: Generalizations of P0- and P-properties; extended vertical and
+ horizontal linear complementarity problems, Linear Algebra Appl., 695-715 (1995).
+ [5] A.N. Willson: A useful generalization of the P0-matrix concept, Numer. Math., 62-70 (1971).
+ [6] M.K. Camlibel, J.M. Schumacher: Existence and uniqueness of solutions for a class of piecewise
+ linear dynamical systems, Linear Algebra Appl. 351-352, 147-184 (2004).
+ [7] I. Kaneko: A linear complementarity problem with an n by 2n P-matrix, Math. Program. Stud.,
+ 120-141 (1978).
+ [8] X. Chi, M.S. Gowda, J. Tao: The weighted horizontal linear complementarity problem on a
+ Euclidean Jordan algebra, J. Global Optim. 73, 153-169 (2019).
+ [9] R.W. Cottle, G.B. Dantzig: A generalization of the linear complementarity problem, J. Comb.
+ Theory 8, 79-90 (1970).
+ [10] O.L. Mangasarian, J.S. Pang: The extended linear complementarity problem, SIAM J. Matrix
+ Anal. Appl. 16(2), 359-368 (1995).
+ [11] B. De Schutter, B. De Moor: The extended linear complementarity problem, Math. Program. 71,
+ 289-325 (1995).
+ [12] R.A. Horn, C.R. Johnson: Matrix Analysis, Cambridge University Press, Cambridge (1985).
+ [13] F. Mezzadri, E. Galligani: Splitting methods for a class of horizontal linear complementarity
+ problems, J. Optim. Theory Appl. 180, 500-517 (2019).
+ [14] Y. Zhang: On the convergence of a class of infeasible interior-point methods for the horizontal
+ linear complementarity problem, SIAM J. Optim. 4(1), 208-227 (1994).
+ [15] M. Gowda: Reducing a monotone horizontal LCP to an LCP, Appl. Math. Lett. 8(1), 97-100 (1995).
+ [16] R.H. Tütüncü, M.J. Todd: Reducing horizontal linear complementarity problems, Linear Algebra
+ Appl. 223-224, 717-729 (1995).
+ [17] R. Sznajder: Degree-theoretic analysis of the vertical and horizontal linear complementarity
+ problems, Ph.D. Thesis, University of Maryland Baltimore County (1994).
+ [18] D. Ralph: A stable homotopy approach to horizontal linear complementarity problems, Control
+ Cybern. 31, 575-600 (2002).
+ [19] C. Jones, M.S. Gowda: On the connectedness of solution sets in linear complementarity
+ problems, Linear Algebra Appl. 272, 33-44 (1998).
+ [20] G.S.R. Murthy, T. Parthasarathy, B. Sriparna: On the solution sets of linear complementarity
+ problems, SIAM J. Matrix Anal. Appl. 21(4), 1229-1235 (2000).
+ [21] T. Rapcsak: On the connectedness of the solution set to linear complementarity systems, J.
+ Optim. Theory Appl. 80(3), 501-512 (1994).
+ [22] S. Basu, R. Pollack, M.-F. Roy: Algorithms in Real Algebraic Geometry, Vol. 10, Springer-Verlag,
+ Berlin (2006).
1197
+
-dAzT4oBgHgl3EQfg_zf/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:27b190503767010097f6588d73c57903f91ea8c0ab2602f252810e9458f5b62d
3
+ size 1803028
-tAyT4oBgHgl3EQfdfed/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ed7d25706f2caeecdb88e976fa14d05f94acc47b262b77051533b469b3713b87
3
+ size 3866669
-tAyT4oBgHgl3EQfdfed/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d8f9349fc2ea3e188bdcf643423e684ad7989e4aaf64cd047170ec13e01169b4
3
+ size 158819
-tE1T4oBgHgl3EQfUwM8/content/tmp_files/2301.03093v1.pdf.txt ADDED
@@ -0,0 +1,712 @@
1
+ 2019 22nd International Conference on Computer and Information Technology (ICCIT), 18-20 December 2019
2
+ 978-1-7281-5842-6/19/$31.00 ©2019 IEEE
3
+
4
+ Prognosis and Treatment Prediction of Type-2
5
+ Diabetes Using Deep Neural Network and Machine
6
+ Learning Classifiers
7
+ Md. Kowsher
8
+ Dept. of Applied Mathematics
9
+ Noakhali Science and Technology
10
+ University, Noakhali-3814,Bangladesh.
11
+ ga.kowsher@gmail.com
12
+
13
+ Mahbuba Yesmin Turaba
14
+ Dept. of Information and
15
+ Communication Technology
16
+ Comilla University
17
+ Comilla, Bangladesh
18
+ mahbuba.yesmin11@gmail.com
19
+
20
+ M M Mahabubur Rahman
21
+ Dept. of CSTE
22
+ Noakhali Science and Technology
23
+ University Noakhali-3814, Bangladesh
24
+ toufikrahman098@gmail.com
25
+ Tanvir Sajed
26
+ Dept. of Computing Science
27
+ University of Alberta
28
+ Edmonton, Canada
29
+ tsajed@ualbarta.ca
30
+
31
+
32
+ Abstract—Type 2 Diabetes is a fast-growing, chronic
33
+ metabolic disorder due to imbalanced insulin activity. As lots
34
+ of people are suffering from it, access to proper treatment is
35
+ necessary to control the problem. Most patients are unaware of
36
+ health complexity, symptoms and risk factors before diabetes.
37
+ The aim of this research is a comparative study of seven
+ machine learning classifiers and an artificial neural network
+ method to predict the detection and treatment of diabetes
+ with high accuracy, in order to identify and treat diabetes
+ patients at an early age. Our training and test dataset
42
+ is an accumulation of 9483 diabetes patients’ information. The
43
+ training dataset is large enough to negate overfitting and
44
+ provide for highly accurate test performance. We use
45
+ performance measures such as accuracy and precision to find
46
+ out the best algorithm deep ANN which outperforms with
47
+ 95.14% accuracy among all other tested machine learning
48
+ classifiers. We hope our high performing model can be used by
49
+ hospitals to predict diabetes and drive research into more
50
+ accurate prediction models.
51
+ Keywords—Artificial Neural Network, Type 2 diabetes, Support
52
+ Vector Machine, Decision Tree, Naive Bayes, LDA, Random
53
+ forest classifier
54
+ I.
55
+ INTRODUCTION
56
+ Diabetes Mellitus (DM) is a very common metabolic
57
+ disorder that affects millions of people worldwide. It occurs
58
+ when the concentration of blood glucose reaches excessive
59
+ level due to lack of production of insulin by the pancreas
60
+ organ (Type 1 Diabetes) or due to insulin resistance (Type 2
61
+ Diabetes) [1]. It has been published that 422 million people
62
+ are suffering from diabetes approximately in 2014 and it is
63
+ expected to rise to 438 million in 2030[2, 3]. Among them,
64
+ 90% of cases are Type 2 diabetes (T2DM) [4]. It may arise
65
+ at an early childhood because of the failure of cells to
66
+ respond to insulin appropriately [5]. So, patients have to
67
+ face excessive tiredness, visual disorders, excessive thirst,
68
+ skin infection recurrence, delayed wound healing and
69
+ frequent discharge of urine [6]. It has been pointed out by
70
+ Diabetes Research Center that 80 percent of cases of
71
+ diabetes can be prevented or delayed if it is detected early
72
+ [7]. Also, by controlling blood sugar, it is possible to lessen
73
+ the T2DM effect. A healthy diet, physical exercise,
74
+ sufficient nutrition for pregnant women, proper medication,
75
+ weight at a necessary level are crucial to maintaining a safer
76
+ sugar level.
77
+ When the diabetes is diagnosed with medical tests, it
78
+ shows significantly dangerous symptoms but these methods
79
+ do not perform well because of clinical complexity, time-
80
+ consuming process and very high expense. However, using
81
+ automated machine learning algorithms, a researcher can
82
+ predict a disease like diabetes with reduced cost and time. In
83
+ the field of Artificial Intelligence, classification is
84
+ considered a supervised technique that analyses patient data
85
+ and classifies whether or not the patient is suffering from a
86
+ disease. Researchers have created different AI and machine
87
+ learning techniques to automate prognosis of various
88
+ diseases. Machine learning techniques studies algorithm and
89
+ statistical model that has the capability for accurate
90
+ prediction by using implicit programming. In medical
91
+ science, they take the concept of the human brain as it
92
+ contains millions of neurons to complete tasks of the human
93
+ body. It is called nonlinear modelling and they are
94
+ interconnected like brain cells although the neuron creation
95
+ is done by program [8].
96
+ In this paper, first we have discussed various procedures
97
+ and existing works about the prognosis of T2D , though we
98
+ emphasized various classification algorithms known as
99
+ Logistic Regression, KNN, Decision Tree, Naive Bayes,
100
+ SVM, Linear Discriminant Analysis and Random forest
101
+ classifier and Artificial Neural Network (ANN) for T2DM
102
+ prediction. Our selected model is an Artificial Neural
103
+ Network is found to be superior among all of them.
104
+ Feedforward neural network contains the signal in one
105
106
+
107
+ direction from the input to the output. It is used in different
108
+ medical diagnostic applications such as nephritis disease,
109
+ heart disease, myeloid leukemia etc. [ref].
110
+ We have taken a medical dataset from Noakhali Medical
111
+ College, Bangladesh, consisting of 9483 samples and 14
112
+ symptoms per sample. The 80% data and 20% data are
113
+ chosen to be training dataset and testing dataset
114
+ respectively. Machine learning classification algorithms are
115
+ applied to dataset and some elements may be missed. Then,
116
+ the mean and median method is applied in order to detect it.
117
+ The contributions of this paper are summarized as
118
+
119
+
120
+ We have proposed a prediction model for T2D
121
+ using Artificial Neural Network machine learning
122
+ classifier
123
+
124
+ We have exerted seven classifier techniques and
125
+ ANN on T2D data and provided comparison of
126
+ accuracy among them.
127
+
128
+ The improvement systems of the model, as well as
129
+ accuracy, are mentioned in this work.
130
+
131
+ The remaining of the discussion is organized as follows:
132
+ Section-II explains related work of various classification
133
+ techniques for prediction of diabetes, Section-III describes
134
+ the methodology and materials used, Section-IV discusses
135
+ evaluated Results and Section-V delineates the conclusion
136
+ of the research work.
137
+
138
+ II.
139
+ RELATED WORK
140
+
141
+ In recent years, several studies have been published using
142
+ multiple machine learning classifiers, ANN techniques and
143
+ various feature extraction methods. These have a drastic
144
+ change in potential research and some works are discussed
145
+ related to T2DM. Ebenezer et al. used the backpropagation
146
+ feature of ANN in order to diagnose diabetes. It finds out
147
+ the error by juxtaposing input and output number. Here, the
148
+ preceding round error is greater than the present error each
149
+ time by means of changing weight to minimize gradient of
150
+ errors using a technique known as gradient descent [9].
151
+ Nongyao et al. delineated risk prediction by using various
152
+ machine learning classification algorithms such as Decision
153
+ Tree, Neural Network, Random Forest algorithms, Naïve
154
+ Bayes, Logistic Regression. All of them followed Bagging
155
+ and Boosting approaches to improve robustness except RFA
156
+ [10]. Deepti et al. proposed a model to identify diabetes at a
157
+ premature age by applying Decision Tree, SVM and Naïve
158
+ Bayes on Pima Indians Diabetes Database (PIDD) datasets.
159
+ They chose sufficient measures for accuracy including
160
+ precision, ROC, F measure, Recall but Naïve Bayes beat
161
+ them by acquiring the highest accuracy [11]. Su et al.
162
+ applied decision tree, logistic regression, neural network,and
163
+ rough sets to assess accuracy through various features like
164
+ age, right thigh circumference, left thigh circumference,
165
+ trunk volume and illustrates thigh circumference as a better
166
+ feature than BMI in anthropometrical data [12]. Al-Rubeaan
167
+ et al. has presented T2DM based on diabetic nephropathy
168
+ (DP), then defined high impact risk factors; age and diabetes
169
+ duration for microalbuminuria, macroalbuminuria and end-
170
+ stage renal disease(ESRD) classifications[13]. Vijayan V.
171
+ examines various types of preprocessing techniques which
172
+ includes PCA and discretization. It increases the accuracy of
173
+ Naïve Bayes classifier and Decision Tree algorithm but
174
+ reduces SVM accuracy [14].
175
+ Micheal et al. proposed Multi-Layer Feed Forward
176
+ Neural Networks (MLFNN) in order to diagnose diabetes by
177
+ considering activation units, learning techniques on Pima
178
+ Indian Diabetes (PID) data set and achieved 82.5%
179
+ accuracy. It performs better than Naïve Bayes, Logistic
180
+ Regression (LR) and Random Forest (RF) classifier [15].
181
+ Sadri et al. chose data mining algorithms like Naive Bayes,
182
+ RBF Network, and J48 to diagnose T2DM for Pima Indians
183
+ Diabetes Dataset that has 768 samples. Each sample has
184
+ nine features as the total number of Pregnancy, Plasma
185
+ Glucose Concentration, Diastolic Blood Pressure and 2-
186
+ Hour Serum Insulin. Among them, the Naive Bayes
187
+ algorithm is unbeatable and has 76.95% accuracy [16].
188
+ Pradhan et al. devised a classifier for diabetes detection
189
+ using Genetic programming (GP) at low cost. Simplified
190
+ function pool consists of arithmetic operations that are used
191
+ in lower validation [17]. Yang Guo et al. applied Naïve
192
+ Bayes classifier by using WEKA tool in order to predict
193
+ Type2 diabetes and obtained remarkable accuracy [18].
194
+
195
+ Unlike these works, we have introduced diabetes‟s
196
+ medication detection system using machining learning and
197
+ deep ANN that will act like a doctor to choose the right
198
+ medication of a patient suffering from diabetes.
199
+
200
+ III.MATERIALS AND METHODS
201
+
202
+ In order to categorize diabetes therapy and drugs system for
203
+ patients, the whole workflow is separated into four parts
204
+ such as data collection, data preprocessing, training data via
205
+ the proposed algorithms, and predictions. We have exerted
206
+ seven machine learning classifiers and deep neural networks
207
+ into the pre-processed data set.
208
+
209
+
210
+ Fig.1. System Diagram of T2D analysis
211
212
+
213
+ [Fig. 1 shows the workflow: the T2D patient data are processed (missing value check, handling of
+ categorical values, feature selection, feature scaling, dimension reduction), split into an 80% training
+ set and a 20% test set, trained with the ML classifiers (Logistic Regression, KNN, Decision Tree,
+ Naive Bayes, SVM, Linear Discriminant Analysis, Random Forest) and the deep ANN, validated with
+ 10-fold cross validation, and used to predict the medication classes D = Diet & Lifestyle, I = Insulin,
+ M = Biguanides and S = Secretagogues.]
+ The source of our data came from Noakhali Medical
240
+ College, Bangladesh and the data set is separated into two
241
+ parts such as training and test set. The training data are
242
+ manipulated to the diagnostic system and 13 factors have
243
+ been taken to determine therapy in order to apply machine
244
+ learning and multilayer ANN. The dataset is tested from the
245
+ trained machine learning classifiers and artificial neural
246
+ network.
247
+ A. Dataset
248
+ As discussed before our data set contains information about
249
+ 9483 diabetes patients and formatted in comma-separated-
250
+ file (CSV). The dimension of the data set is 9483*14. It
251
+ preserves 14 different kind of information of a diabetes
252
+ patient such as „Name of patient‟, “Fasting”, “2 h after
253
+ Pressure” “BMI”, “Duration”, “Age”, “Sex”, “Blood
254
+ pressure”, “High Cholesterols”, “Heart Diseases”, “Kidney
255
+ Diseases”, and , “Medications”. The first 13 columns are
256
+ considered independent variables and the last one is the
257
+ dependent variable. It contains kinds of basic medicine
258
+ name of diabetes such as Diet and Lifestyle Modification,
259
+ Secretagogues, Biguanides, and Insulin.
260
+ When datasets consist of enough variables, it increases the
261
+ accuracy of prediction. Here, “Fasting” measures blood test
262
+ just before taking food, “2 h after glucose load” provides a
263
+ blood test after two hours of eating. “BMI” refers to the
264
+ weight and height of patients in kg/m2. “Medication”
265
+ indicates proper drugs and therapy. People who recovered
266
+ T2DM at early stage follow some features: age group 30-75
267
+ years, diabetes of diagnosis duration is more than half years,
268
+ glucose level at fasting plasma is higher than 125 mg/dl,
269
+ creation of plasma indicates equal or greater than 1.7 mg/dl,
270
+ plasma glucose after two hours is 11.
271
+ When a patient suffers from kidney problems, it may be a
272
+ symptom of T2DM as higher sugar level may damage
273
+ nephron. Even bleary eyesight is considered as a side effect
274
+ for patients as eye‟s retina and the macula is affected. Bad
275
+ cholesterol may lead to Diabetic dyslipidemia which can
276
+ increase heart diseases and atherosclerosis. Here, we suggest
277
+ treatment for kidney and vision problems. In order to
278
+ categorize diabetes therapy and drugs system for patients,
279
+ we applied seven machine learning classifiers and eight
280
+ deep neural into a data system of Noakhali Medical College,
281
+ Bangladesh. Training data are manipulated to diagnostic
282
+ system and twelve factors have been taken to determine
283
+ therapy in order to apply machine learning and multilayer
284
+ ANN. For the training and testing of the systems, we
285
+ divided the data set into 80% training and 20% test set. The
286
+ training dataset is used to find out the appropriate model and
287
+ best hyper-parameters and testing data set contains unseen
288
+ data to predict the performance.
289
+
290
+ B. Data preprocessing
291
+ Data preprocessing involves raw data converting into a
292
+ recognizable format from various sources. The well-
293
+ preprocessed data aids for the best training of algorithms.
294
+ Multi pre-processing training is held in our presented
295
+ systems.
296
+ 1. Missing Value Check
297
+ Usually,
298
+ missing
299
+ values
300
+ may
301
+ occur
302
+ due
303
+ to
304
+ data
305
+ incompleteness, missing field, programming error, manual
306
+ data transfer from a database and so on. We may ignore
307
+ missing values but it causes problems in parameter
308
+ calculation and data accuracy for features such as age,
309
+ wages and fare. We need to inspect whether a dataset has
310
+ any missing value or not. There are many ways to handle
311
+ missing values such as delete rows, missing values
312
+ prediction, mean, median, mode and so on. But the most
313
+ prominent policy for missing value replacement is the mean
314
+ method and also it is used to exchange the approximate
315
+ results in the dataset [19]. Mean is written in this way in
316
+ mathematics,
317
+ x̄ = (x1 + x2 + ... + xn) / n                                   (1)
+ where x̄ denotes the mean, i.e. the average of the n observed
+ values x1, x2, ..., xn of a feature.
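+ As a concrete illustration, mean imputation of the kind described above can be done with pandas
+ (a minimal sketch; the file name is an assumption and the column names follow Section III-A):
+ import pandas as pd
+ df = pd.read_csv("diabetes_patients.csv")          # assumed file name
+ numeric_cols = ["Fasting", "2 h after glucose load", "BMI", "Duration", "Age"]
+ # replace missing entries in each numeric column by that column's mean
+ df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].mean())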
321
+
322
+ 2. Handling Categorical value
323
+ Categorical encoding identifies data type and transfers
324
+ categorical features into numerical numbers as the majority
325
+ of machine learning algorithms could not cope up with label
326
+ data directly. Then numerical values are fed into the
327
+ specific model. In our data set, there are five categorical
328
+ variable names as „Name of patients‟, “Heart Diseases”,
329
+ “Kidney Diseases”, “Sex”, and “Medications”. There are
330
+ two popular ways of transforming categorical data into
331
+ numerical data such as Integer encoding and one-hot
332
+ encoding. In the label encoder, categorical features are an
333
+ integer value and contain a natural order relationship, but
334
+ the multiclass relationship will provide different values for
335
+ various classes. One hot encoding maps categorical value
336
+ into binary vectors. First, integer encoding assigns, for
+ example, the values 0 and 1 to the categories female and
+ male. One-hot encoding then converts such an integer into
+ a binary vector whose length equals the number of possible
+ categories. Here, a female is encoded as 0 and represented
+ as [1, 0], in which index 0 has value 1, and vice versa. These
+ binary indicator values are used as features during model
+ training [20].
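+ A minimal sketch of this encoding step with pandas (illustrative only; the file name is an
+ assumption and the categorical column names follow the dataset description above):
+ import pandas as pd
+ df = pd.read_csv("diabetes_patients.csv")          # assumed file name
+ categorical_cols = ["Sex", "Heart Diseases", "Kidney Diseases"]
+ # one-hot encode the categorical features into 0/1 indicator columns
+ df = pd.get_dummies(df, columns=categorical_cols)
+ # the target column "Medications" can be integer-encoded separately,
+ # e.g. with sklearn.preprocessing.LabelEncoder, since it is the class to predict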
343
+
344
+ 3. Features Selection
345
+ Feature selection incorporates the identification and
346
+ reduction of unnecessary features that have no impact on the
347
+ objective function and high impact features are kept. Our
348
+ dataset contains 14 types of elements and we have checked
349
+ p-value which is a statistical process for finding out the
350
+ probability for the null hypothesis. The features are taken
351
+ out whose p-value indicates less than 0.05.
352
+ Moreover, multicollinearity refers to determine the high
353
+ correlation which exists between two or more independent
354
+ features and features that are influential to each other. It is
355
+ called redundancy when two features are highly correlated.
356
+ As we have to handle redundancy, it is essential to choose
357
+ some methods such as χ2Test and Correlation Coefficient.
358
+ The Correlation Coefficient can be calculated by numerical
362
+ data. Assume that A and B are two features and it can be
363
+ defined as
+ r(A, B) = Σi (ai − Ā)(bi − B̄) / sqrt( Σi (ai − Ā)^2 · Σi (bi − B̄)^2 )        (2)
+ where Ā and B̄ are the means of the observed values ai and bi of A and B.
369
+
370
+ After performing both p-value and multicollinearity test, we
371
+ could come forward with seven features among thirteen
372
+ independent features. Those are “Fasting”, “2 Hours after
373
+ Glucose Load” “Duration”, “BMI”, “High Cholesterols”,
374
+ “Heart Diseases”, and “Kidney Diseases”.
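+ A small sketch of this redundancy-filtering idea (illustrative only; the file name and the 0.8
+ correlation threshold are assumptions, and the p-value test itself could be done with, e.g., statsmodels):
+ import pandas as pd
+ df = pd.read_csv("diabetes_patients.csv")            # assumed file name
+ corr = df.select_dtypes("number").drop(columns=["Medications"], errors="ignore").corr()
+ selected = []
+ for col in corr.columns:
+     # keep a feature only if it is not highly correlated with an already kept one
+     if all(abs(corr.loc[col, kept]) <= 0.8 for kept in selected):
+         selected.append(col)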
375
+
376
+ 4. Feature scaling
377
+ Most of the time, the dataset does not remain on the same
378
+ scale or even not normalized. So, feature scaling is a
379
+ fundamental data transformation method for coping the
380
+ dataset to algorithms. We need to scale value of features and
381
+ provide equal weight to all features in order to obtain the
382
+ same scale for all data. Moreover, it is possible for scaling
383
+ to change in different values for different features. There are
384
+ lots of techniques for feature scaling for example
385
+ Standardization, Mean Normalization, Min-Max Scaling,
386
+ Unit Vector and so on.
387
+ In our research work, we have taken Min-Max Scaling or
388
+ normalization process as the features are confined within a
389
+ bounded area. Min-max normalization linearly transforms x
+ to x′, where maxX and minX are the maximum and minimum
+ values of X respectively:
+ x′ = (x − minX) / (maxX − minX)                                 (3)
+ When x = maxX then x′ = 1, and when x = minX then x′ = 0.
397
+
398
+ The scaling range lies between 0 and 1 (for positive values)
+ or between -1 and 1 (for negative values), and we have taken
+ values between 0 and 1.
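+ A minimal scikit-learn sketch of this scaling step (illustrative; X is assumed to be the numeric
+ feature matrix produced by the preceding preprocessing):
+ from sklearn.preprocessing import MinMaxScaler
+ scaler = MinMaxScaler(feature_range=(0, 1))
+ X_scaled = scaler.fit_transform(X)        # every feature now lies in [0, 1]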
401
+
402
+ 5. Dimension Reducing
403
+ Dimensionality reduction refers to minimizing random
404
+ variables by considering the principal set of variables that
405
+ avoids overfitting. For a large number of dataset, we need to
406
+ use dimension reduction technique. In our study, we prefer
407
+ dimension reduction for dimensional graphical visualization.
408
+ There are a lot of methods for reducing dimension, for
409
+ instance, LDA, PCA, SVD, NMF, etc. In our system, we
410
+ have applied Principal Component Analysis (PCA). It is a
411
+ linear transformation based on the correlation between
412
+ features in order to identify patterns. High dimensional data
413
+ are estimated into equal or lower dimensions through
414
+ maximum variance. We have taken two components of PCA
415
+ according to their high variance so that we can graphically
416
+ visualize in Cartesian coordinate system.
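+ A minimal sketch of the two-component projection used for visualization (illustrative; X_scaled
+ is the scaled feature matrix from the previous step):
+ from sklearn.decomposition import PCA
+ import matplotlib.pyplot as plt
+ pca = PCA(n_components=2)
+ X_2d = pca.fit_transform(X_scaled)        # keep the two directions of highest variance
+ plt.scatter(X_2d[:, 0], X_2d[:, 1], s=5)
+ plt.xlabel("PC1"); plt.ylabel("PC2"); plt.show()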
417
+
418
+ C. Training Algorithms
419
+ The training dataset for T2DM is applied to each algorithm
420
+ to find out medications and model performance is assessed
421
+ by obtaining accuracy.
422
+
423
+ a. Machine Learning Classifier
424
+ Since we focus on the performance of treatment predictions,
425
+ we have implemented seven machine learning classifiers
426
+ such as logistic regression, KNN, SVM, Naive Bayes,
427
+ decision tree, LDA, random forest tree.
428
+ Logistic regression is based on the probability model; it is
429
+ derived from linear regression that mapped the dataset into
430
+ two categories by considering existing data. At first, features
431
+ are mapped linearly that are transferred to a sigmoid
432
+ function layer for prediction. It shows the relationship
433
+ between the dependent and independent values but output
434
+ limits the prediction range on [0, 1]. As we need to predict
435
+ the right treatment of a diabetes person, it is beneficial to
436
+ use a binary classification problem.
437
+ Linear Discriminant Analysis (LDA) belongs to a linear
438
+ classifier to find out the linear correlation between elements
439
+ in order to support binary and multiclass classification. The
440
+ chance of inserting a new dataset into every class is detected
441
+ by LDA. Then, the class that contains the dataset is detected
442
+ as output. It can calculate the mean function for each class
443
+ and it is estimated by vectors for finding group variance.
444
+ Support Vector Machine (SVM) is the most recognized
445
+ classifier to make decision boundary as hyperplane to keep
446
+ the widest distance from both sides of points. This
447
+ hyperplane refers to separating data into two groups in two-
448
+ dimensional space. It performs better with non-linear
449
+ classification by the kernel function. It is capable of
450
+ separating and classifying unsupported data.
451
+ K-nearest neighbours (KNN) works instant learning
452
+ algorithm and input labeled data that act as training instance.
453
+ Then, the output produces a group of data. When k=1, 2, 5
454
+ then it means the class has 1, 2 or 5 neighbours of every data
455
+ point. For this system, we choose k=5 that means 5
456
+ neighbours for every data point. We have taken Minkowski
457
+ distance to provide distance between two points in N-
458
+ dimensional vector space to run data. Suppose, points p1(x1,
459
+ y1) and p2(x2, y2) illustrates Minkowski distance as,
460
+
461
+ d(p1, p2) = ( |x1 − x2|^p + |y1 − y2|^p )^(1/p)                 (4)
462
+ Here, d denotes Minkowski distance between p1 and p2
463
+ point.
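+ A minimal scikit-learn sketch of the classifier described above (illustrative; X_train, y_train and
+ X_test are assumed to come from the 80%/20% split):
+ from sklearn.neighbors import KNeighborsClassifier
+ knn = KNeighborsClassifier(n_neighbors=5, metric="minkowski", p=2)
+ knn.fit(X_train, y_train)
+ y_pred = knn.predict(X_test)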
464
+
465
+ Naive Bayes Classifier is constructed from Bayes theorem,
466
+ in which features are independent of each other in present
467
+ class and classification that counts the total number of
468
+ observations by calculating the probability to create a
469
+ predictive model in the fastest time. It outperformed with a
470
+ huge dataset of categorical variables. The main benefits of
471
+ that it involves limited training data to estimate better
472
+ results. Naive Bayes theorem probability can be derived
473
+ from P (T), P(X) and P (X|T). Therefore,
474
+
475
+ P(T | X) = P(X | T) P(T) / P(X)                                 (5)
476
+ The decision tree is a decision-supporting predictive
485
+ model based on tree structure by putting logic to interpret
486
+ features. It provides a conditional control system and marks
487
+ red or green for died or alive leaves. It has three types of
488
+ nodes: root node, decision nodes and leaf nodes. The root
489
+ node is the topmost node among them and data are split into
490
+ choices to find out the decision‟s result. Decision nodes
491
+ basically comprise of decision rules to produce the output
492
+ by considering all information gain and oval shape is used to
493
+ denote it. The terminal node represents the action that needs
494
+ to be taken after getting the outcome of all decisions.
495
+
496
+ Multiple random trees lead to the random forest to calculate
497
+ elements of molecular structure. A decision tree looks like a
498
+ tree that is the storehouse of results from the random forest
499
+ algorithm and bagging is applied to it in order to reduce
500
+ bias-variance trade-off. It can perform feature selection
501
+ directly and output represents the mode of all classes. In
502
+ Random Forest Tree, we took the total number of trees in
503
+ the forest: 10.
504
+
505
+ b. Artificial Neural Network
506
+ An ANN is considered as a human brain due to consisting
507
+ millions of neurons to communicate with each other. It has
508
+ three layers; the input layer fed raw data to network, hidden
509
+ layer is the middle layer based on input, weight and the
510
+ relationship denoted by activity function. Output layers
511
+ value is determined by activity, weight and relationship
512
+ from the second layer.
513
+ Since we need to find out the probability of each treatment
514
+ and the objective function is not binary, so we used softmax
515
+ activation function instead of sigmoid between the hidden
516
+ layer and output layer. There is no rule of thumb to choose
517
+ hidden layer in ANN. If our data is linearly separable then
518
+ we don‟t need any hidden layer. Then the average node
519
+ between the input and output node is preferable.
520
+ In our system, we prefer six hidden layers between the input
521
+ node and the hidden layer and 25 epochs to train a neural
522
+ network. It has no gradient vanishing problem and uses
523
+ ReLU activation function to train dataset without
524
+ pretraining.
525
+
526
+ c. Validation
527
+
528
+ The validation is a technique of evaluating the performance
529
+ of algorithms. It cooperates to evaluate the model and
530
+ reduce overfitting. Different types of validation method
531
+ includes
532
+ Holdout
533
+ method,
534
+ K-Fold
535
+ Cross-Validation,
536
+ Stratified K-Fold Cross-Validation and Leave-P-Out Cross-
537
+ Validation. We have picked out k-fold validation dataset is
538
+ divided into k subsets in k times. One k subset act as test set
539
+ and error is estimated by average k trails. Therefore, k-1
540
+ subsets produce training set. We prefer k=10 generally
541
+ which contains 10 folds, repeat one time and stratified
542
+ sampling as each fold has a similar amount of samples.
543
+
544
+ IV.EXPERIMENTAL RESULT ANALYSIS
545
+
546
+ A. Experimental tool
547
+ The whole task has been implemented in python 3.6
548
+ programming language in Anaconda distribution. Python
549
+ library offers various facilities to implement machine
550
+ learning and deep learning. The unbeatable library for data
551
+ representation is pandas that provide huge commands and
552
+ large data management. We have used it to read and
553
+ analyze data in less writing. Afterward, scikit-learn has
554
+ features for various classification, clustering algorithms to
555
+ build models. Also, Keras combines the advantages of
556
+ theano and TensorFlow to train a neural network model. We
557
+ use to fit and evaluate function to train and assess neural
558
+ network model respectively bypassing the same input and
559
+ output, then we apply matplotlib for graphical visualization.
560
+ B. Model performance
561
+ For boosting performance, it is always a better idea to
562
+ increase data size instead of depending on prediction and
563
+ weak correlations. Also, adding a hidden layer may increase
564
+ accuracy and speed due to its tendency to make a training
565
+ dataset overfit. But partially it is dependent on the
566
+ complexity of the model. Contrarily, increasing the epochs
567
+ number ameliorate performance though it sometimes
568
+ overfits training data. It works well for the deep network
569
+ than shallow network when considering regulation factor.
570
+ Hereafter, we have added another hidden layer; choose
571
+ epoch 100 then the Deep ANN accuracy risen up to 95.14%
572
+ which is superior among all of them.
573
+
574
+
575
+ Fig.2. Models Performance Comparison.
576
+ C. Improving Model performance
577
+ For boosting performance, it is always a better idea to
578
+ increase data size instead of depending on prediction and
579
+ weak correlations. Also, adding a hidden layer may increase
580
+ training accuracy and speed due to its tendency to make
581
+ training dataset overfit. But partially it is dependent on the
582
+ complexity of the model. Contrarily, increasing the epochs
583
+ number ameliorate performance though it sometimes
584
+ overfits training data. It works well for the deep network
585
+ than shallow network when considering regulation factor.
586
+ Hereafter, we added another hidden layer; choose epoch 100
587
+ then the Deep ANN accuracies on the training and test sets
+ rose to 96.42% and 95.14% respectively, which is superior
+ among all of them.
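+ A minimal Keras sketch of the kind of network and 10-fold evaluation described above (illustrative
+ only; layer widths, batch size and random seed are assumptions, X is the scaled feature matrix and
+ y holds the integer-encoded medication labels):
+ import numpy as np
+ from sklearn.model_selection import StratifiedKFold
+ from keras.models import Sequential
+ from keras.layers import Dense
+ def build_model(n_features, n_classes):
+     model = Sequential()
+     model.add(Dense(32, activation="relu", input_dim=n_features))
+     for _ in range(5):                              # six hidden ReLU layers in total
+         model.add(Dense(32, activation="relu"))
+     model.add(Dense(n_classes, activation="softmax"))
+     model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
+     return model
+ scores = []
+ skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
+ for train_idx, test_idx in skf.split(X, y):
+     model = build_model(X.shape[1], len(np.unique(y)))
+     model.fit(X[train_idx], y[train_idx], epochs=100, batch_size=32, verbose=0)
+     _, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
+     scores.append(acc)
+ print("mean 10-fold accuracy:", np.mean(scores))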
590
591
+
592
+
593
+ Fig.3. 2-D Graphical Visualization of Test set
594
+
595
+ D. Final Result
+ After applying feature extraction to the dataset and implementing several
+ types of classification algorithms and a deep neural network, we found the
+ artificial neural network to be the best performer with the best validity,
+ while the Random Forest classifier is preferable among the other machine
+ learning classifiers.
601
+
602
+
603
+ Fig.4. Final Result Comparison
604
+
605
+ V. CONCLUSIONS
+ Type-2 diabetes can lead to many complications such as heart attack, kidney
+ damage, blurred vision, hearing problems and Alzheimer's disease. The main
+ problems with existing prediction models are their lower accuracy, small
+ datasets and poor adaptability to different datasets. In this paper, the
+ medication and treatment are predicted through a comparative study of seven
+ machine learning algorithms and a deep neural network. Artificial neural
+ networks play a vital role in medical science by minimizing classification
+ error, which leads to greater accuracy. The experimental results show that the
+ designed ANN system achieved a higher accuracy of 94.7%. It can cooperate with
+ experts to detect T2DM patients at a very early age and provide the best
+ treatment option.
+ In the future, we aim to enhance the accuracy of early treatment to lessen the
+ suffering of patients. We can also implement more classifiers to identify the
+ leading one for record-breaking performance and extend the system to automated
+ analysis. There is a plan to apply this designed system to diabetes or to
+ other diseases, which may improve the prediction of various conditions. A
+ larger dataset yields a larger training set and therefore contributes to
+ higher accuracy. It would also be convenient for people to have a smartphone
+ application related to T2DM that covers its symptoms, treatment, risk factors
+ and health management.
631
+ REFERENCES
632
+
633
+ [1] https://www.niddk.nih.gov/health-information/diabetes/overview/what-is-diabetes?fbclid=IwAR36jKI7GXUE4D0PhZ1Wk4zAa49kKXtn3hB7OqrYSoAqA925MzkXa_1u_Sk [Accessed: 24 June, 2019]
+ [2] https://www.who.int/health-topics/diabetes [Accessed: 24 June, 2019]
+ [3] Rawal LB, Tapp RJ, Williams ED, Chan C, Yasin S, Oldenburg B. Prevention of type 2 diabetes and its complications in developing countries: a review. Int J Behav Med. 2012; 19:121–133.
+ [4] https://www.diabetes.org.uk/diabetes-the-basics/what-is-type-2-diabetes [Accessed: 24 June, 2019]
+ [5] https://en.m.wikipedia.org/wiki/Diabetes?fbclid=IwAR3c20p4V8NpMvAwkTZmEK-rXxnBCZ61jhV87-ZnfPMNUJDpm9Easq9dDzA [Accessed: 24 June, 2019]
+ [6] https://idf.org/52-about-diabetes.html [Accessed: 24 June, 2019]
+ [7] E. I. Mohamed, R. Linde, G. Perriello, N. Di Daniele, S. J. Pöppl and A. De Lorenzo, "Predicting type 2 diabetes using an electronic nose-based artificial neural network analysis," Diabetes Nutrition & Metabolism, vol. 15, no. 4, pp. 222-215, 2002.
+ [8] R. A. Dunne, "A Statistical Approach to Neural Networks for Pattern Recognition," New Jersey: John Wiley & Sons Inc, 2007.
+ [9] Ebenezer Obaloluwa Olaniyi and Khashman Adnan, "Onset diabetes diagnosis using artificial neural network," International Journal of Scientific and Engineering Research, 5.10, 2014.
+ [10] Nai-Arun, N. and Moungmai, R., "Comparison of Classifiers for the Risk of Diabetes Prediction," Procedia Computer Science, vol. 69, pp. 132–142, 2015.
+ [11] Deepti Sisodia and Dilip Singh Sisodia, "Prediction of Diabetes using Classification Algorithms," International Conference on Computational Intelligence and Data Science, 2018.
+ [12] Kowsher, M., Tithi, F. S., Rabeya, T., Afrin, F., and Huda, M. N., "Type 2 Diabetics Treatment and Medication Detection with Machine Learning Classifier Algorithm," in Proceedings of International Joint Conference on Computational Intelligence, pp. 519-531, Springer, Singapore, 2020.
+ [13] https://www.ncbi.nlm.nih.gov/pubmed/24586457 [Accessed: 24 June, 2019]
+ [14] Veena Vijayan V. and Anjali C., "Decision support systems for predicting diabetes mellitus – a review," Proceedings of the 2015 Global Conference on Communication Technologies (GCCT 2015).
+ [15] https://www.researchgate.net/publication/331352518_A_Multi-layer_Feed_Forward_Neural_Network_Approach_for_Diagnosing_Diabetes
+ [16] https://pdfs.semanticscholar.org/ab93/6e4630720cb7f7ead833222b945dc3801438.pdf
+ [17] Pradhan, M. A., Rahman, A., Acharya, P., Gawade, R., and Pateria, A., "Design of Classifier for Detection of Diabetes using Genetic Programming," in International Conference on Computer Science and Information Technology, Pattaya, Thailand, pp. 125–130, 2011.
+ [18] Yang Guo, Guohua Bai and Yan Hu, "Using Bayes Network for Prediction of Type-2 Diabetes," IEEE International Conference on Internet Technology and Secured Transactions, pp. 471-472, Dec. 2012.
+ [19] https://www.analyticsindiamag.com/5-ways-handle-missing-values-machine-learning-datasets/ [Accessed: 24 June, 2019]
+ [20] https://medium.com/@contactsunny/label-encoder-vs-one-hot-encoder- [Accessed: 5 August, 2019]
708
+
709
-tE1T4oBgHgl3EQfUwM8/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,390 @@
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf,len=389
2
+ page_content='2019 22nd International Conference on Computer and Information Technology (ICCIT), 18-20 December 2019 978-1-7281-5842-6/19/$31.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
3
+ page_content='00 ©2019 IEEE Prognosis and Treatment Prediction of Type-2 Diabetes Using Deep Neural Network and Machine Learning Classifiers Md.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
4
+ page_content=' Kowsher Dept.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
5
+ page_content=' of Applied Mathematics Noakhali Science and Technology University, Noakhali-3814,Bangladesh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
6
+ page_content=' ga.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
7
+ page_content='kowsher@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
8
+ page_content='com Mahbuba Yesmin Turaba Dept.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
9
+ page_content=' of Information and Communication Technology Comilla University Comilla, Bangladesh mahbuba.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
10
+ page_content='yesmin11@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
11
+ page_content='com M M Mahabubur Rahman Dept.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
12
+ page_content=' of CSTE Noakhali Science and Technology University Noakhali-3814, Bangladesh toufikrahman098@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
13
+ page_content='com Tanvir Sajed Dept.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
14
+ page_content=' of Computing Science University of Alberta Edmonton, Canada tsajed@ualbarta.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
15
+ page_content='ca Abstract—Type 2 Diabetes is a fast-growing, chronic metabolic disorder due to imbalanced insulin activity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
16
+ page_content=' As lots of people are suffering from it, access to proper treatment is necessary to control the problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
17
+ page_content=' Most patients are unaware of health complexity, symptoms and risk factors before diabetes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
18
+ page_content=' The motion of this research is a comparative study of seven machine learning classifiers and an artificial neural network method to prognosticate the detection and treatment of diabetes with a high accuracy, in order to identify and treat diabetes patients at an early age.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
19
+ page_content=' Our training and test dataset is an accumulation of 9483 diabetes patients’ information.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
20
+ page_content=' The training dataset is large enough to negate overfitting and provide for highly accurate test performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
21
+ page_content=' We use performance measures such as accuracy and precision to find out the best algorithm deep ANN which outperforms with 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
22
+ page_content='14% accuracy among all other tested machine learning classifiers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
23
+ page_content=' We hope our high performing model can be used by hospitals to predict diabetes and drive research into more accurate prediction models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
24
+ page_content=' Keywords—Artificial Neural Network, Type 2 diabetes, Support Vector Machine, Decision Tree, Naive Bayes, LDA, Random forest classifier I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
25
+ page_content=' INTRODUCTION Diabetes Mellitus (DM) is a very common metabolic disorder that affects millions of people worldwide.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
26
+ page_content=' It occurs when the concentration of blood glucose reaches excessive level due to lack of production of insulin by the pancreas organ (Type 1 Diabetes) or due to insulin resistance (Type 2 Diabetes) [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
27
+ page_content=' It has been published that 422 million people are suffering from diabetes approximately in 2014 and it is expected to rise to 438 million in 2030[2, 3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
28
+ page_content=' Among them, 90% of cases are Type 2 diabetes (T2DM) [4].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
29
+ page_content=' It may arise at an early childhood because of the failure of cells to respond to insulin appropriately [5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
30
+ page_content=' So, patients have to face excessive tiredness, visual disorders, excessive thirst, skin infection recurrence, delayed wound healing and frequent discharge of urine [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
31
+ page_content=' It has been pointed out by Diabetes Research Center that 80 percent of cases of diabetes can be prevented or delayed if it is detected early [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
32
+ page_content=' Also, by controlling blood sugar, it is possible to lessen the T2DM effect.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
33
+ page_content=' A healthy diet, physical exercise, sufficient nutrition for pregnant women, proper medication, weight at a necessary level are crucial to maintaining a safer sugar level.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
34
+ page_content=' When the diabetes is diagnosed with medical tests, it shows significantly dangerous symptoms but these methods do not perform well because of clinical complexity, time- consuming process and very high expense.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
35
+ page_content=' However, using automated machine learning algorithms, a researcher can predict a disease like diabetes with reduced cost and time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
36
+ page_content=' In the field of Artificial Intelligence, classification is considered a supervised technique that analyses patient data and classifies whether or not the patient is suffering from a disease.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
37
+ page_content=' Researchers have created different AI and machine learning techniques to automate prognosis of various diseases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
38
+ page_content=' Machine learning techniques studies algorithm and statistical model that has the capability for accurate prediction by using implicit programming.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
39
+ page_content=' In medical science, they take the concept of the human brain as it contains millions of neurons to complete tasks of the human body.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
40
+ page_content=' It is called nonlinear modelling and they are interconnected like brain cells although the neuron creation is done by program [8].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
41
+ page_content=' In this paper, first we have discussed various procedures and existing works about the prognosis of T2D , though we emphasized various classification algorithms known as Logistic Regression, KNN, Decision Tree, Naive Bayes, SVM, Linear Discriminant Analysis and Random forest classifier and Artificial Neural Network (ANN) for T2DM prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
42
+ page_content=' Our selected model is an Artificial Neural Network is found to be superior among all of them.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
43
+ page_content=' Feedforward neural network contains the signal in one Authorized licensed use limited to: Newcastle University.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
44
+ page_content=' Downloaded on May 18,2020 at 05:34:25 UTC from IEEE Xplore.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
45
+ page_content=' Restrictions apply.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
46
+ page_content=' direction from the input to the output.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
47
+ page_content=' It is used in different medical diagnostic applications such as nephritis disease, heart disease, myeloid leukemia etc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
48
+ page_content=' [ref].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
49
+ page_content=' We have taken a medical dataset from Noakhali Medical College, Bangladesh, consisting of 9483 samples and 14 symptoms per sample.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
50
+ page_content=' The 80% data and 20% data are chosen to be training dataset and testing dataset respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
51
+ page_content=' Machine learning classification algorithms are applied to dataset and some elements may be missed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
52
+ page_content=' Then, the mean and median method is applied in order to detect it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
53
+ page_content=' The contributions of this paper are summarized as We have proposed a prediction model for T2D using Artificial Neural Network machine learning classifier We have exerted seven classifier techniques and ANN on T2D data and provided comparison of accuracy among them.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
54
+ page_content=' The improvement systems of the model, as well as accuracy, are mentioned in this work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
55
+ page_content=' The remaining of the discussion is organized as follows: Section-II explains related work of various classification techniques for prediction of diabetes, Section-III describes the methodology and materials used, Section-IV discusses evaluated Results and Section-V delineates the conclusion of the research work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
56
+ page_content=' II.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
57
+ page_content=' RELATED WORK In recent years, several studies have been published using multiple machine learning classifiers, ANN techniques and various feature extraction methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
58
+ page_content=' These have a drastic change in potential research and some works are discussed related to T2DM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
59
+ page_content=' Ebenezer et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
60
+ page_content=' used the backpropagation feature of ANN in order to diagnose diabetes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
61
+ page_content=' It finds out the error by juxtaposing input and output number.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
62
+ page_content=' Here, the preceding round error is greater than the present error each time by means of changing weight to minimize gradient of errors using a technique known as gradient descent [9].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
63
+ page_content=' Nongyao et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
64
+ page_content=' delineated risk prediction by using various machine learning classification algorithms such as Decision Tree, Neural Network, Random Forest algorithms, Naïve Bayes, Logistic Regression.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
65
+ page_content=' All of them followed Bagging and Boosting approaches to improve robustness except RFA [10].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
66
+ page_content=' Deepti et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
67
+ page_content=' proposed a model to identify diabetes at a premature age by applying Decision Tree, SVM and Naïve Bayes on Pima Indians Diabetes Database (PIDD) datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
68
+ page_content=' They chose sufficient measures for accuracy including precision, ROC, F measure, Recall but Naïve Bayes beat them by acquiring the highest accuracy [11].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
69
+ page_content=' Su et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
70
+ page_content=' applied decision tree, logistic regression, neural network,and rough sets to assess accuracy through various features like age, right thigh circumference, left thigh circumference, trunk volume and illustrates thigh circumference as a better feature than BMI in anthropometrical data [12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
71
+ page_content=' Al-Rubeaan et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
72
+ page_content=' has presented T2DM based on diabetic nephropathy (DP), then defined high impact risk factors;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
73
+ page_content=' age and diabetes duration for microalbuminuria, macroalbuminuria and end- stage renal disease(ESRD) classifications[13].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
74
+ page_content=' Vijayan V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
75
+ page_content=' examines various types of preprocessing techniques which includes PCA and discretization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
76
+ page_content=' It increases the accuracy of Naïve Bayes classifier and Decision Tree algorithm but reduces SVM accuracy [14].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
77
+ page_content=' Micheal et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
78
+ page_content=' proposed Multi-Layer Feed Forward Neural Networks (MLFNN) in order to diagnose diabetes by considering activation units, learning techniques on Pima Indian Diabetes (PID) data set and achieved 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
79
+ page_content='5% accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
80
+ page_content=' It performs better than Naïve Bayes, Logistic Regression (LR) and Random Forest (RF) classifier [15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
81
+ page_content=' Sadri et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
82
+ page_content=' chose data mining algorithms like Naive Bayes, RBF Network, and J48 to diagnose T2DM for Pima Indians Diabetes Dataset that has 768 samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
83
+ page_content=' Each sample has nine features as the total number of Pregnancy, Plasma Glucose Concentration, Diastolic Blood Pressure and 2- Hour Serum Insulin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
84
+ page_content=' Among them, the Naive Bayes algorithm is unbeatable and has 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
85
+ page_content='95% accuracy [16].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
86
+ page_content=' Pradhan et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
87
+ page_content=' devised a classifier for diabetes detection using Genetic programming (GP) at low cost.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
88
+ page_content=' Simplified function pool consists of arithmetic operations that are used in lower validation [17].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
89
+ page_content=' Yang Guo et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
90
+ page_content=' applied Naïve Bayes classifier by using WEKA tool in order to predict Type2 diabetes and obtained remarkable accuracy [18].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
91
+ page_content=' Unlike these works, we have introduced diabetes‟s medication detection system using machining learning and deep ANN that will act like a doctor to choose the right medication of a patient suffering from diabetes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
92
+ page_content=' III.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
93
+ page_content='MATERIALS AND METHODS In order to categorize diabetes therapy and drugs system for patients, the whole workflow is separated into four parts such as data collection, data preprocessing, training data via the proposed algorithms, and predictions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
94
+ page_content=' We have exerted seven machine learning classifiers and deep neural networks into the pre-processed data set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
95
+ page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
96
+ page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
97
+ page_content=' System Diagram of T2D analysis Authorized licensed use limited to: Newcastle University.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
98
+ page_content=' Downloaded on May 18,2020 at 05:34:25 UTC from IEEE Xplore.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
99
+ page_content=' Restrictions apply.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
100
+ page_content=' Data Processing Data Training Model T2Dpatients ML Classifiers:Logistic 80% Training 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
101
+ page_content='Missing Value Check Regression, KNN 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
102
+ page_content='Handling Categorical Decision Tree, Naive Bayes, SVM, Linear 20%Testingset 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
103
+ page_content='Value 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
104
+ page_content='Features selection Discriminant Analysis 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
105
+ page_content='Feature Scaling and Random Forest Classifier 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
106
+ page_content='Dimension Reduction DeepANN D=Diet & Lifestyle I=Insulin 10 Fold Cross M=Bigreanides Validation S=Secretagogues PredictionThe source of our data came from Noakhali Medical College, Bangladesh and the data set is separated into two parts such as training and test set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
107
+ page_content=' The training data are manipulated to the diagnostic system and 13 factors have been taken to determine therapy in order to apply machine learning and multilayer ANN.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
108
+ page_content=' The dataset is tested from the trained machine learning classifiers and artificial neural network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
109
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
110
+ page_content=' Dataset As discussed before our data set contains information about 9483 diabetes patients and formatted in comma-separated- file (CSV).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
111
+ page_content=' The dimension of the data set is 9483*14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
112
+ page_content=' It preserves 14 different kind of information of a diabetes patient such as „Name of patient‟, “Fasting”, “2 h after Pressure” “BMI”, “Duration”, “Age”, “Sex”, “Blood pressure”, “High Cholesterols”, “Heart Diseases”, “Kidney Diseases”, and , “Medications”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
113
+ page_content=' The first 13 columns are considered independent variables and the last one is the dependent variable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
114
+ page_content=' It contains kinds of basic medicine name of diabetes such as Diet and Lifestyle Modification, Secretagogues, Biguanides, and Insulin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
115
+ page_content=' When datasets consist of enough variables, it increases the accuracy of prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
116
+ page_content=' Here, “Fasting” measures blood test just before taking food, “2 h after glucose load” provides a blood test after two hours of eating.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
117
+ page_content=' “BMI” refers to the weight and height of patients in kg/m2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
118
+ page_content=' “Medication” indicates proper drugs and therapy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
119
+ page_content=' People who recovered T2DM at early stage follow some features: age group 30-75 years, diabetes of diagnosis duration is more than half years, glucose level at fasting plasma is higher than 125 mg/dl, creation of plasma indicates equal or greater than 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
120
+ page_content='7 mg/dl, plasma glucose after two hours is 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
121
+ page_content=' When a patient suffers from kidney problems, it may be a symptom of T2DM as higher sugar level may damage nephron.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
122
+ page_content=' Even bleary eyesight is considered as a side effect for patients as eye‟s retina and the macula is affected.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
123
+ page_content=' Bad cholesterol may lead to Diabetic dyslipidemia which can increase heart diseases and atherosclerosis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
124
+ page_content=' Here, we suggest treatment for kidney and v---ision problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
125
+ page_content=' In order to categorize diabetes therapy and drugs system for patients, we applied seven machine learning classifiers and eight deep neural into a data system of Noakhali Medical College, Bangladesh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
126
+ page_content=' Training data are manipulated to diagnostic system and twelve factors have been taken to determine therapy in order to apply machine learning and multilayer ANN.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
127
+ page_content=' For the training and testing of the systems, we divided the data set into 80% training and 20% test set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
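+ A minimal sketch of the 80/20 train/test split described above, with X and y
+ standing in for the preprocessed features and medication labels:
+ import numpy as np
+ from sklearn.model_selection import train_test_split
+ X, y = np.random.rand(9483, 7), np.random.randint(0, 4, 9483)   # placeholder data
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)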
128
+ page_content=' The training dataset is used to find out the appropriate model and best hyper-parameters and testing data set contains unseen data to predict the performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
129
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
130
+ page_content=' Data preprocessing Data preprocessing involves raw data converting into a recognizable format from various sources.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
131
+ page_content=' The well- preprocessed data aids for the best training of algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
132
+ page_content=' Multi pre-processing training is held in our presented systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
133
+ page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
134
+ page_content=' Missing Value Check Usually, missing values may occur due to data incompleteness, missing field, programming error, manual data transfer from a database and so on.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
135
+ page_content=' We may ignore missing values but it causes problems in parameter calculation and data accuracy for features such as age, wages and fare.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
136
+ page_content=' We need to inspect whether a dataset has any missing value or not.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
137
+ page_content=' There are many ways to handle missing values such as delete rows, missing values prediction, mean, median, mode and so on.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
138
+ page_content=' But the most prominent policy for missing value replacement is the mean method and also it is used to exchange the approximate results in the dataset [19].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
139
+ page_content=' Mean is written in this way in mathematics, (1) Where, denotes the mean and provides the average number of n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
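+ A brief sketch of the mean-replacement method described above, using pandas;
+ the column names and values are hypothetical:
+ import numpy as np
+ import pandas as pd
+ df = pd.DataFrame({"BMI": [24.1, np.nan, 31.0], "Age": [45, 52, np.nan]})
+ df = df.fillna(df.mean())   # replace each missing entry with the column mean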
140
+ page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
141
+ page_content=' Handling Categorical value Categorical encoding identifies data type and transfers categorical features into numerical numbers as the majority of machine learning algorithms could not cope up with label data directly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
142
+ page_content=' Then numerical values are fed into the specific model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
143
+ page_content=' In our data set, there are five categorical variable names as „Name of patients‟, “Heart Diseases”, “Kidney Diseases”, “Sex”, and “Medications”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
144
+ page_content=' There are two popular ways of transforming categorical data into numerical data such as Integer encoding and one-hot encoding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
145
+ page_content=' In the label encoder, categorical features are an integer value and contain a natural order relationship, but the multiclass relationship will provide different values for various classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
146
+ page_content=' One hot encoding maps categorical value into binary vectors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
147
+ page_content=' Firstly, it is obvious to assign binary value to an integer value of female and male is 0 and 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
148
+ page_content=' Then converting it to a 2 size of 2 possible integers in a binary vector.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
149
+ page_content=' Here, a female is encoded as 0 and represented as [1, 0] in which index 0 has value 1 and vice versa.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
150
+ page_content=' It chooses this value as a feature to influence model training [20].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
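+ A small sketch of the label and one-hot encoding step described above, using
+ scikit-learn; the sample values are illustrative:
+ from sklearn.preprocessing import LabelEncoder, OneHotEncoder
+ sex = [["Female"], ["Male"], ["Female"]]
+ labels = LabelEncoder().fit_transform([s[0] for s in sex])      # Female -> 0, Male -> 1
+ onehot = OneHotEncoder().fit_transform(sex).toarray()           # Female -> [1, 0], Male -> [0, 1]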
151
+ page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
152
+ page_content=' Features Selection Feature selection incorporates the identification and reduction of unnecessary features that have no impact on the objective function and high impact features are kept.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
153
+ page_content=' Our dataset contains 14 types of elements and we have checked p-value which is a statistical process for finding out the probability for the null hypothesis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
154
+ page_content=' The features are taken out whose p-value indicates less than 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
155
+ page_content='05.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
156
+ page_content=' Moreover, multicollinearity refers to determine the high correlation which exists between two or more independent features and features that are influential to each other.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
157
+ page_content=' It is called redundancy when two features are highly correlated.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
158
+ page_content=' As we have to handle redundancy, it is essential to choose some methods such as χ2Test and Correlation Coefficient.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
159
+ page_content=' Authorized licensed use limited to: Newcastle University.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
160
+ page_content=' Downloaded on May 18,2020 at 05:34:25 UTC from IEEE Xplore.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
161
+ page_content=' Restrictions apply.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
162
+ page_content=' x KThe Correlation Coefficient can be calculated by numerical data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
163
+ page_content=' Assume that A and B are two features and it can be defined as, (2) After performing both p-value and multicollinearity test, we could come forward with seven features among thirteen independent features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
164
+ page_content=' Those are “Fasting”, “2 Hours after Glucose Load” “Duration”, “BMI”, “High Cholesterols”, “Heart Diseases”, and “Kidney Diseases”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
165
+ page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
166
+ page_content=' Feature scaling Most of the time, the dataset does not remain on the same scale or even not normalized.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
167
+ page_content=' So, feature scaling is a fundamental data transformation method for coping the dataset to algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
168
+ page_content=' We need to scale value of features and provide equal weight to all features in order to obtain the same scale for all data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
169
+ page_content=' Moreover, it is possible for scaling to change in different values for different features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
170
+ page_content=' There are lots of techniques for feature scaling for example Standardization, Mean Normalization, Min-Max Scaling, Unit Vector and so on.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
171
+ page_content=' In our research work, we have taken Min-Max Scaling or normalization process as the features are confined within a bounded area.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
172
+ page_content=' Minmax normalization is a z-series normalization to transform linearly x to x‟ where maxX and minX are the maximum and minimum value for X respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
173
+ page_content=' (3) When x=max, then y =1 and x=min, y=1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
174
+ page_content=' The scaling range belongs between 0 and 1(positive value) and -1 to 1(negative) and we have taken the value between 0 and 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
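+ A minimal sketch of min-max scaling to the [0, 1] range with scikit-learn;
+ the glucose readings are placeholders:
+ import numpy as np
+ from sklearn.preprocessing import MinMaxScaler
+ fasting = np.array([[5.1], [7.8], [11.4]])
+ scaled = MinMaxScaler(feature_range=(0, 1)).fit_transform(fasting)  # x' = (x - min) / (max - min)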
175
+ page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
176
+ page_content=' Dimension Reducing Dimensionality reduction refers to minimizing random variables by considering the principal set of variables that avoids overfitting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
177
+ page_content=' For a large number of dataset, we need to use dimension reduction technique.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
178
+ page_content=' In our study, we prefer dimension reduction for dimensional graphical visualization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
179
+ page_content=' There are a lot of methods for reducing dimension, for instance, LDA, PCA, SVD, NMF, etc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
180
+ page_content=' In our system, we have applied Principal Component Analysis (PCA).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
181
+ page_content=' It is a linear transformation based on the correlation between features in order to identify patterns.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
182
+ page_content=' High dimensional data are estimated into equal or lower dimensions through maximum variance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
183
+ page_content=' We have taken two components of PCA according to their high variance so that we can graphically visualize in Cartesian coordinate system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
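+ A short sketch of projecting the features onto the two highest-variance
+ principal components used for visualization; the data are placeholders:
+ import numpy as np
+ from sklearn.decomposition import PCA
+ X = np.random.rand(100, 7)                    # placeholder feature matrix
+ X_2d = PCA(n_components=2).fit_transform(X)   # two components with the highest variance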
184
+ page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
185
+ page_content=' Training Algorithms The training dataset for T2DM is applied to each algorithm to find out medications and model performance is assessed by obtaining accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
186
+ page_content=' a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
187
+ page_content=' Machine Learning Classifier Since we focus on the performance of treatment predictions, we have implemented seven machine learning classifiers such as logistic regression, KNN, SVM, Naive Bayes, decision tree, LDA, random forest tree.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
188
+ page_content=' Logistic regression is based on the probability model;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
189
+ page_content=' it is derived from linear regression that mapped the dataset into two categories by considering existing data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
190
+ page_content=' At first, features are mapped linearly that are transferred to a sigmoid function layer for prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
191
+ page_content=' It shows the relationship between the dependent and independent values but output limits the prediction range on [0, 1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
192
+ page_content=' As we need to predict the right treatment of a diabetes person, it is beneficial to use a binary classification problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
193
+ page_content=' Linear Discriminant Analysis (LDA) belongs to a linear classifier to find out the linear correlation between elements in order to support binary and multiclass classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
194
+ page_content=' The chance of inserting a new dataset into every class is detected by LDA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
195
+ page_content=' Then, the class that contains the dataset is detected as output.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
196
+ page_content=' It can calculate the mean function for each class and it is estimated by vectors for finding group variance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
197
+ page_content=' Support Vector Machine (SVM) is the most recognized classifier to make decision boundary as hyperplane to keep the widest distance from both sides of points.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
198
+ page_content=' This hyperplane refers to separating data into two groups in two- dimensional space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
199
+ page_content=' It performs better with non-linear classification by the kernel function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
200
+ page_content=' It is capable of separating and classifying unsupported data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
201
+ page_content=' K-nearest neighbours (KNN) works instant learning algorithm and input labeled data that act as training instance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
202
+ page_content=' Then, the output produces a group of data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
203
+ page_content=' When k=1, 2, 5 then it means the class has 1, 2 or 5 neighbours of every data point.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
204
+ page_content=' For this system, we choose k=5 that means 5 neighbours for every data point.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
205
+ page_content=' We have taken Minkowski distance to provide distance between two points in N- dimensional vector space to run data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
206
+ page_content=' Suppose, points p1(x1, y1) and p2(x2, y2) illustrates Minkowski distance as, (4) Here, d denotes Minkowski distance between p1 and p2 point.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
207
+ page_content=' Naive Bayes Classifier is constructed from Bayes theorem, in which features are independent of each other in present class and classification that counts the total number of observations by calculating the probability to create a predictive model in the fastest time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
208
+ page_content=' It outperformed with a huge dataset of categorical variables.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
209
+ page_content=' The main benefits of that it involves limited training data to estimate better results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
210
+ page_content=' Naive Bayes theorem probability can be derived from P (T), P(X) and P (X|T).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
211
+ page_content=' Therefore, (5) Authorized licensed use limited to: Newcastle University.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
+ page_content=' The decision tree is a decision-supporting predictive model based on a tree structure that applies logical rules to interpret features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
217
+ page_content=' It provides a conditional control system and marks red or green for died or alive leaves.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
218
+ page_content=' It has three types of nodes: root node, decision nodes and leaf nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
219
+ page_content=' The root node is the topmost node, and the data are split into branches in order to reach the decision’s result.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
220
+ page_content=' Decision nodes comprise the decision rules that produce the output by considering the information gain, and are denoted by an oval shape.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
221
+ page_content=' The terminal node represents the action that needs to be taken after getting the outcome of all decisions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
222
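A minimal decision-tree sketch corresponding to the description above; setting criterion='entropy' makes scikit-learn split on information gain, though the paper's exact settings are not stated and the data here are placeholders.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(120, 5)
y = np.random.randint(0, 2, 120)

tree = DecisionTreeClassifier(criterion='entropy', random_state=0)  # splits chosen by information gain
tree.fit(X, y)
print('Tree depth:', tree.get_depth())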
+ page_content=' Multiple random trees lead to the random forest to calculate elements of molecular structure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
223
+ page_content=' A decision tree looks like a tree that is the storehouse of results from the random forest algorithm and bagging is applied to it in order to reduce bias-variance trade-off.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
224
+ page_content=' It can perform feature selection directly, and its output is the mode of the classes predicted by the individual trees.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
225
+ page_content=' For the random forest we set the total number of trees in the forest to 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
226
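The stated forest of 10 trees might look as follows in scikit-learn; bagging (bootstrap sampling) is the default behaviour, and the data are again synthetic placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(120, 5)
y = np.random.randint(0, 2, 120)

# 10 trees, each trained on a bootstrap sample; the prediction is the mode of the trees.
rf = RandomForestClassifier(n_estimators=10, bootstrap=True, random_state=0)
rf.fit(X, y)
print('Training accuracy:', rf.score(X, y))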
+ page_content=' b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
227
+ page_content=' Artificial Neural Network An ANN is loosely modelled on the human brain, consisting of many interconnected neurons that communicate with each other.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
228
+ page_content=' It has three layers;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
229
+ page_content=' the input layer feeds the raw data into the network; the hidden layer is the middle layer, whose output depends on the inputs, the weights and the relationship given by the activation function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
230
+ page_content=' The output layer’s values are determined by the activations, weights and relationships coming from the hidden layer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
231
+ page_content=' Since we need the probability of each treatment and the objective is not binary, we used a softmax activation function instead of a sigmoid between the hidden layer and the output layer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
232
+ page_content=' There is no rule of thumb to choose hidden layer in ANN.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
233
+ page_content=' If our data are linearly separable then we don’t need any hidden layer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
234
+ page_content=' Otherwise, a hidden-layer size around the average of the number of input and output nodes is preferable.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
235
+ page_content=' In our system, we prefer six hidden layers between the input node and the hidden layer and 25 epochs to train a neural network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
236
+ page_content=' The ReLU activation function avoids the vanishing-gradient problem and allows the model to be trained on the dataset without pretraining.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
237
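A hedged Keras sketch of the network described above: ReLU hidden units, a softmax output over the treatment classes, and 25 training epochs. The paper's wording on the hidden-layer size is ambiguous, so six hidden units and the input/output dimensions below are assumptions, and the data are synthetic.

import numpy as np
from tensorflow import keras

n_features, n_classes = 10, 4                      # assumed dimensions
X = np.random.rand(500, n_features)
y = keras.utils.to_categorical(np.random.randint(0, n_classes, 500), n_classes)

model = keras.Sequential([
    keras.layers.Dense(6, activation='relu', input_shape=(n_features,)),  # hidden layer (assumed width)
    keras.layers.Dense(n_classes, activation='softmax'),                  # treatment probabilities
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=25, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))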
+ page_content=' c.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
238
+ page_content=' Validation The validation is a technique of evaluating the performance of algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
239
+ page_content=' It cooperates to evaluate the model and reduce overfitting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
240
+ page_content=' Different types of validation method includes Holdout method, K-Fold Cross-Validation, Stratified K-Fold Cross-Validation and Leave-P-Out Cross- Validation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
241
+ page_content=' We picked k-fold cross-validation, in which the dataset is divided into k subsets and the procedure is repeated k times.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
242
+ page_content=' In each trial one subset acts as the test set, and the error is estimated by averaging over the k trials.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
243
+ page_content=' The remaining k-1 subsets therefore form the training set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
244
+ page_content=' We generally prefer k = 10, i.e. 10 folds, repeated once, with stratified sampling so that each fold contains a similar number of samples from each class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
245
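Stratified 10-fold cross-validation as described above can be run with scikit-learn roughly as follows; the classifier choice and the data are placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X = np.random.rand(200, 6)
y = np.random.randint(0, 2, 200)

# 10 folds, each with a similar class balance; the score is averaged over the folds.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(RandomForestClassifier(n_estimators=10, random_state=0), X, y, cv=cv)
print('Mean accuracy over 10 folds:', scores.mean())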
+ page_content=' IV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
246
+ page_content='EXPERIMENTAL RESULT ANALYSIS A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
247
+ page_content=' Experimental tool The whole task has been implemented in the Python 3.6 programming language using the Anaconda distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
249
+ page_content=' Python library offers various facilities to implement machine learning and deep learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
250
+ page_content=' pandas is an outstanding library for data representation, providing a rich set of commands and handling of large data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
251
+ page_content=' We used it to read and analyse the data with minimal code.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
252
+ page_content=' Afterwards, scikit-learn provides various classification and clustering algorithms for building models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
253
+ page_content=' Also, Keras combines the advantages of theano and TensorFlow to train a neural network model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
254
+ page_content=' We use the fit and evaluate functions to train and assess the neural network model, respectively, by passing the same inputs and outputs, and then apply matplotlib for graphical visualization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
255
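Illustrative glue code for the toolchain described above (pandas for data handling, scikit-learn for splitting, matplotlib for plotting); the tiny synthetic DataFrame and its column names stand in for the real patient data, which would normally be loaded with pandas.read_csv.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({'age': rng.integers(30, 80, 100),
                   'bmi': rng.normal(28, 4, 100),
                   'treatment': rng.integers(0, 4, 100)})   # hypothetical columns

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns=['treatment']), df['treatment'], test_size=0.2, random_state=0)

df.hist(figsize=(8, 6))   # quick graphical overview with matplotlib
plt.tight_layout()
plt.show()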
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
256
+ page_content=' Model performance For boosting performance, it is always a better idea to increase data size instead of depending on prediction and weak correlations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
257
+ page_content=' Also, adding a hidden layer may increase accuracy and speed due to its tendency to make a training dataset overfit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
258
+ page_content=' But partially it is dependent on the complexity of the model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
259
+ page_content=' Contrarily, increasing the epochs number ameliorate performance though it sometimes overfits training data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
260
+ page_content=' It works well for the deep network than shallow network when considering regulation factor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
261
+ page_content=' Hereafter, we have added another hidden layer;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
262
+ page_content=' choose epoch 100; then the Deep ANN accuracy rose to 95.14%, which is superior among all of them.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
264
+ page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
265
+ page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
266
+ page_content=' Models Performance Comparison.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
267
+ page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
268
+ page_content=' Improving Model performance For boosting performance, it is always a better idea to increase data size instead of depending on prediction and weak correlations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
269
+ page_content=' Also, adding a hidden layer may increase training accuracy and speed due to its tendency to make training dataset overfit.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
270
+ page_content=' But partially it is dependent on the complexity of the model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
271
+ page_content=' Contrarily, increasing the epochs number ameliorate performance though it sometimes overfits training data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
272
+ page_content=' It works well for the deep network than shallow network when considering regulation factor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
273
+ page_content=' Hereafter, we added another hidden layer;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
274
+ page_content=' choose epoch 100; then the Deep ANN accuracy on the training and test sets rose to 96.42% and 95.14%, respectively, which is superior among all of them.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
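The modification described here (one extra hidden layer and 100 epochs) would change the earlier Keras sketch roughly as below; the layer widths and input/output sizes remain assumptions.

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(6, activation='relu', input_shape=(10,)),
    keras.layers.Dense(6, activation='relu'),          # the additional hidden layer
    keras.layers.Dense(4, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(X_train, y_train, epochs=100, validation_data=(X_test, y_test))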
277
+ page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
281
+ page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
282
+ page_content=' 2-D Graphical Visualization of Test set D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
283
+ page_content='Final Result After applying feature extraction to the dataset and implementing several types of classifier and a deep neural network, we found the artificial neural network to be the best performer with the best validity, while the random forest is preferable among the other machine learning classifiers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
284
+ page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
285
+ page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
286
+ page_content=' Final Result Comparison V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
287
+ page_content=' CONCLUSIONS Type-2 diabetes can lead to many complications such as heart attack, kidney damage, blurred vision, hearing problems and Alzheimer’s disease.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
288
+ page_content=' The main problem is lower accuracy of the prediction model, small datasets and inadaptability to various datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
289
+ page_content=' In this paper, the medication and treatment are predicted by a comparative study of seven machine learning algorithms and deep neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
290
+ page_content=' Artificial neural networks play a vital role in medical science by minimizing classification error that leads to greater accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
291
+ page_content=' The experimental results show that the designed ANN system achieved a high accuracy of 94.7%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
293
+ page_content=' It can cooperate with experts to detect T2DM patients at a very early age and provide the best treatment option.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
294
+ page_content=' In the future, we can enhance the accuracy of early treatment to lessen the suffering of patients.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
295
+ page_content=' Also, we can implement more classifiers to pick up the leading one for record-breaking performance and extend it to automation analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
296
+ page_content=' There is a plan to apply this designed system in diabetes or for other diseases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
297
+ page_content=' It may increase the performance of prediction of various diseases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
298
+ page_content=' A larger dataset leads to a larger training set, which helps to achieve higher accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
299
+ page_content=' It is convenient for people to have an application on their smartphones related to T2DM that may have T2DM symptoms, treatment, risk factors, and health management.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
300
+ page_content=' REFERENCES [1] https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
301
+ page_content='niddk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
302
+ page_content='nih.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
303
+ page_content='gov/health- information/diabetes/overview/what-is- diabetes?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
304
+ page_content='fbclid=IwAR36jKI7GXUE4D0PhZ1Wk4zAa49kKXtn3hB7 OqrYSoAqA925MzkXa_1u_Sk [Accessed: 24 June, 2019] [2] https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
305
+ page_content='who.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
306
+ page_content='int/health-topics/diabetes [Accessed: 24 June, 2019] [3] Rawal LB, Tapp RJ, Williams ED, Chan C, Yasin S, Oldenburg B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
307
+ page_content=' Prevention of type 2 diabetes and its complications in developing countries: a review.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
308
+ page_content=' Int J Behav Med.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
309
+ page_content=' 2012;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
310
+ page_content=' 19:121–133.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
311
+ page_content=' [4] https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
312
+ page_content='diabetes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
313
+ page_content='org.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
314
+ page_content='uk/diabetes-the-basics/what-is-type-2- diabetes [Accessed: 24 June, 2019] [5] https://en.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
315
+ page_content='m.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
316
+ page_content='wikipedia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
317
+ page_content='org/wiki/Diabetes?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
318
+ page_content='fbclid=IwAR3c20p4V8Np MvAwkTZmEK-rXxnBCZ61jhV87-ZnfPMNUJDpm9Easq9dDzA [Accessed: 24 June, 2019] [6] https://idf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
319
+ page_content='org/52-about-diabetes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
320
+ page_content='html [Accessed: 24 June, 2019] [7] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
321
+ page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
322
+ page_content=' Mohamed, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
323
+ page_content=' Linde, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
324
+ page_content=' Perriello, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
325
+ page_content=' Di Daniele, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
326
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
327
+ page_content=' Pöppl and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
328
+ page_content=' De Lorenzo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
329
+ page_content=' "Predicting type 2 diabetes using an electronic nose- based artificial neural network analysis," in Diabetes nutrition & metabolism Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
330
+ page_content='15, No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
331
+ page_content='4, (2002).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
332
+ page_content=' pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
333
+ page_content=' 222-215.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
334
+ page_content=' [8] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
335
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
336
+ page_content=' Dunne, Wiley, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
337
+ page_content=', Inc, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
338
+ page_content=' "A Statistical Approach to Neural Networks for Pattern Recognition", New Jersey: John Wiley & Sons Inc;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
339
+ page_content=' (2007).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
340
+ page_content=' [9] Ebenezer Obaloluwa Olaniyi and Khashman Adnan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
341
+ page_content='.“Onset diabetes diagnosis using artificial neural network”, International Journal of Scientific and Engineering research 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
342
+ page_content='10 (2014).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
343
+ page_content=' [10] Nai-Arun, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
344
+ page_content=', Moungmai, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
345
+ page_content=' “Comparison of Classifiers for the Risk of Diabetes Prediction”, Procedia Computer Science vol: 69, pp: 132– 142, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
346
+ page_content=' [11] Deepti Sisodiaa, Dilip Singh Sisodia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
347
+ page_content=' “Prediction of Diabetes using Classification Algorithms” International Conference on Computational Intelligence and Data Science, 2018 [12] Kowsher, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
348
+ page_content=', Tithi, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
349
+ page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
350
+ page_content=', Rabeya, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
351
+ page_content=', Afrin, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
352
+ page_content=', & Huda, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
353
+ page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
354
+ page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
355
+ page_content=' Type 2 Diabetics Treatment and Medication Detection with Machine Learning Classifier Algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
356
+ page_content=' In Proceedings of International Joint Conference on Computational Intelligence (pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
357
+ page_content=' 519-531).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
358
+ page_content=' Springer, Singapore.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
359
+ page_content=' [13] https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
360
+ page_content='ncbi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
361
+ page_content='nlm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
362
+ page_content='nih.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
363
+ page_content='gov/pubmed/24586457 [Accessed: 24 June, 2019] [14] Veena Vijayan V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
364
+ page_content=' and Anjali C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
365
+ page_content=' Decision support systems for predicting diabetes mellitus –a review.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
366
+ page_content=' Proceedings of 2015 global conference on communication technologies (GCCT 2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
367
+ page_content=' [15] https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
368
+ page_content='researchgate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
369
+ page_content='net/publication/331352518_A_Multi- layer_Feed_Forward_Neural_Network_Approach_for_Diagnosing_D iabetes [16] https://pdfs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
370
+ page_content='semanticscholar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
371
+ page_content='org/ab93/6e4630720cb7f7ead833222b94 5dc3801438.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
372
+ page_content='pdf [17] Pradhan, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
373
+ page_content='A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
374
+ page_content=', Rahman, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
375
+ page_content=', Acharya, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
376
+ page_content=', Gawade, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
377
+ page_content=', Pateria, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
378
+ page_content=' Design of classifier for Detection of Diabetes using Genetic Programming.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
379
+ page_content=' In: International Conference on Computer Science and Information Technology, Pattaya, Thailand, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
380
+ page_content=' 125–130 (2011).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
381
+ page_content=' [18] Yang Guo, Karlskrona, S Guohua Bai and Yan Hu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
382
+ page_content=' Using Bayes Network for Prediction of Type-2 diabetes, IEEE: International Conference on Internet Technology And Secured Transactions, pp: 471 - 472, Dec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
383
+ page_content=' 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
384
+ page_content=' [19] https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
385
+ page_content='analyticsindiamag.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
386
+ page_content='com/5-ways-handle-missing-values- machine-learning-datasets/ [Accessed: 24 June, 2019] [20] https://medium.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
387
+ page_content='com/@contactsunny/label-encoder-vs-one-hot- encoder- [Accessed: 5 August, 2019]' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
+ page_content=' ANN (Test set) CI' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tE1T4oBgHgl3EQfUwM8/content/2301.03093v1.pdf'}
.gitattributes CHANGED
@@ -15294,3 +15294,60 @@ n9FPT4oBgHgl3EQf6DVd/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -tex
15294
  ktE4T4oBgHgl3EQftA2c/content/2301.05221v1.pdf filter=lfs diff=lfs merge=lfs -text
15295
  M9AyT4oBgHgl3EQfs_mP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15296
  O9E2T4oBgHgl3EQfVQdj/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15297
+ YdAyT4oBgHgl3EQfWffL/content/2301.00166v1.pdf filter=lfs diff=lfs merge=lfs -text
15298
+ KNE1T4oBgHgl3EQfsQX7/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15299
+ V9A0T4oBgHgl3EQfE_8w/content/2301.02025v1.pdf filter=lfs diff=lfs merge=lfs -text
15300
+ P9FAT4oBgHgl3EQfzx6B/content/2301.08700v1.pdf filter=lfs diff=lfs merge=lfs -text
15301
+ qNE4T4oBgHgl3EQfvg2M/content/2301.05243v1.pdf filter=lfs diff=lfs merge=lfs -text
15302
+ VdFKT4oBgHgl3EQfmi5y/content/2301.11858v1.pdf filter=lfs diff=lfs merge=lfs -text
15303
+ UdE2T4oBgHgl3EQfXQfL/content/2301.03843v1.pdf filter=lfs diff=lfs merge=lfs -text
15304
+ 5tE4T4oBgHgl3EQfbwzK/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15305
+ o9E3T4oBgHgl3EQf7gve/content/2301.04800v1.pdf filter=lfs diff=lfs merge=lfs -text
15306
+ P9FOT4oBgHgl3EQf5DRG/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15307
+ P9FOT4oBgHgl3EQf5DRG/content/2301.12952v1.pdf filter=lfs diff=lfs merge=lfs -text
15308
+ HtAzT4oBgHgl3EQfxv41/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15309
+ -9AyT4oBgHgl3EQfdffn/content/2301.00305v1.pdf filter=lfs diff=lfs merge=lfs -text
15310
+ CdFJT4oBgHgl3EQfASzG/content/2301.11420v1.pdf filter=lfs diff=lfs merge=lfs -text
15311
+ o9E3T4oBgHgl3EQf7gve/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15312
+ xdE5T4oBgHgl3EQfMw71/content/2301.05485v1.pdf filter=lfs diff=lfs merge=lfs -text
15313
+ ItE4T4oBgHgl3EQfhQ0t/content/2301.05123v1.pdf filter=lfs diff=lfs merge=lfs -text
15314
+ CdFJT4oBgHgl3EQfASzG/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15315
+ AtFQT4oBgHgl3EQf9DeI/content/2301.13449v1.pdf filter=lfs diff=lfs merge=lfs -text
15316
+ 8dAyT4oBgHgl3EQfQvbW/content/2301.00054v1.pdf filter=lfs diff=lfs merge=lfs -text
15317
+ 19FIT4oBgHgl3EQf4Su_/content/2301.11385v1.pdf filter=lfs diff=lfs merge=lfs -text
15318
+ ptE4T4oBgHgl3EQfUQxX/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15319
+ UNFJT4oBgHgl3EQfNCyL/content/2301.11476v1.pdf filter=lfs diff=lfs merge=lfs -text
15320
+ adFLT4oBgHgl3EQfXC8y/content/2301.12059v1.pdf filter=lfs diff=lfs merge=lfs -text
15321
+ mNE3T4oBgHgl3EQf6QvR/content/2301.04789v1.pdf filter=lfs diff=lfs merge=lfs -text
15322
+ -tAyT4oBgHgl3EQfdfed/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15323
+ kb_51/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15324
+ 1NE4T4oBgHgl3EQfaAwh/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15325
+ UdE2T4oBgHgl3EQfXQfL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15326
+ G9AzT4oBgHgl3EQfjP2K/content/2301.01513v1.pdf filter=lfs diff=lfs merge=lfs -text
15327
+ 19FIT4oBgHgl3EQf4Su_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15328
+ qNE4T4oBgHgl3EQfvg2M/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15329
+ -dAyT4oBgHgl3EQfRPaF/content/2301.00062v1.pdf filter=lfs diff=lfs merge=lfs -text
15330
+ NNE3T4oBgHgl3EQfwgvw/content/2301.04704v1.pdf filter=lfs diff=lfs merge=lfs -text
15331
+ Y9FRT4oBgHgl3EQfPDf9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15332
+ -tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf filter=lfs diff=lfs merge=lfs -text
15333
+ x9E3T4oBgHgl3EQfPAm3/content/2301.04399v1.pdf filter=lfs diff=lfs merge=lfs -text
15334
+ G9AzT4oBgHgl3EQfjP2K/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15335
+ 8dAyT4oBgHgl3EQfQvbW/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15336
+ 89FAT4oBgHgl3EQfpR2f/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15337
+ P9FAT4oBgHgl3EQfzx6B/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15338
+ AtFQT4oBgHgl3EQf9DeI/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15339
+ YtFOT4oBgHgl3EQf-DRf/content/2301.12972v1.pdf filter=lfs diff=lfs merge=lfs -text
15340
+ D9FQT4oBgHgl3EQfQDZ4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15341
+ kdFJT4oBgHgl3EQfYiwU/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15342
+ GNAzT4oBgHgl3EQfi_3N/content/2301.01510v1.pdf filter=lfs diff=lfs merge=lfs -text
15343
+ ONAzT4oBgHgl3EQfWfxA/content/2301.01301v1.pdf filter=lfs diff=lfs merge=lfs -text
15344
+ M9FIT4oBgHgl3EQfcitV/content/2301.11266v1.pdf filter=lfs diff=lfs merge=lfs -text
15345
+ CtE2T4oBgHgl3EQfSAdG/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15346
+ LtFLT4oBgHgl3EQfMS8S/content/2301.12015v1.pdf filter=lfs diff=lfs merge=lfs -text
15347
+ dNFAT4oBgHgl3EQfYx1Q/content/2301.08541v1.pdf filter=lfs diff=lfs merge=lfs -text
15348
+ udE0T4oBgHgl3EQfbgBl/content/2301.02349v1.pdf filter=lfs diff=lfs merge=lfs -text
15349
+ v9E5T4oBgHgl3EQfMQ4D/content/2301.05479v1.pdf filter=lfs diff=lfs merge=lfs -text
15350
+ p9FAT4oBgHgl3EQfex0W/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15351
+ p9FAT4oBgHgl3EQfex0W/content/2301.08577v1.pdf filter=lfs diff=lfs merge=lfs -text
15352
+ M9FIT4oBgHgl3EQfcitV/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
15353
+ s9E3T4oBgHgl3EQfjgr4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
19FIT4oBgHgl3EQf4Su_/content/2301.11385v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b23cc52fe932f30e61468fda9aff13192bba5218885699f36ad02fbf46448a0e
3
+ size 3252097
19FIT4oBgHgl3EQf4Su_/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1c064af3c1c379c943f3a0b52974aea228286813c74c71e481bd5c47c2d75d3d
3
+ size 2162733
1NE4T4oBgHgl3EQfaAwh/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:db377eb2a796afffe21a3f06a77fe5024f80b766b0c2bc81bb78d9591beb9125
3
+ size 1376301
1tAzT4oBgHgl3EQfe_wu/content/tmp_files/2301.01444v1.pdf.txt ADDED
@@ -0,0 +1,1409 @@
1
+ arXiv:2301.01444v1 [cond-mat.str-el] 4 Jan 2023
2
+ Magnetic properties of the layered heavy fermion antiferromagnet CePdGa6
3
+ H. Q. Ye,1 T. Le,1 H. Su,1 Y. N. Zhang,1 S. S. Luo,1 M. J. Gutmann,2 H. Q. Yuan,1, 3, 4, 5 and M. Smidman1, 3, ∗
4
+ 1Center for Correlated Matter and Department of Physics, Zhejiang University, Hangzhou 310058, China
5
+ 2ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot Oxon OX11 0QX, United Kingdom
6
+ 3Zhejiang Province Key Laboratory of Quantum Technology and Device,
7
+ Department of Physics, Zhejiang University, Hangzhou 310058, China
8
+ 4State Key Laboratory of Silicon Materials, Zhejiang University, Hangzhou 310058, China
9
+ 5Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093, China
10
+ (Dated: January 5, 2023)
11
+ We report the magnetic properties of the layered heavy fermion antiferromagnet CePdGa6, and
12
+ their evolution upon tuning with the application of magnetic field and pressure. CePdGa6 orders
13
+ antiferromagnetically below TN = 5.2 K, where there is evidence for heavy fermion behavior from an
14
+ enhanced Sommerfeld coefficient. Our results are best explained by a magnetic ground state of fer-
15
+ romagnetically coupled layers of Ce 4f-moments orientated along the c-axis, with antiferromagnetic
16
+ coupling between layers. At low temperatures we observe two metamagnetic transitions for fields
17
+ applied along the c-axis corresponding to spin-flip transitions, where the lower transition is to a dif-
18
+ ferent magnetic phase with a magnetization one-third of the saturated value. From our analysis of
19
+ the magnetic susceptibility, we propose a CEF level scheme which accounts for the Ising anisotropy
20
+ at low temperatures, and we find that the evolution of the magnetic ground state can be explained
21
+ considering both antiferromagnetic exchange between nearest neighbor and next nearest neighbor
22
+ layers, indicating the influence of long-range interactions. Meanwhile we find little change of TN
23
+ upon applying hydrostatic pressures up to 2.2 GPa, suggesting that significantly higher pressures
24
+ are required to examine for possible quantum critical behaviors.
25
+ I.
26
+ INTRODUCTION
27
+ Heavy fermion compounds are prototypical examples
28
+ of strongly correlated electron systems, and have been
29
+ found to host a range of emergent phenomena including
30
+ unconventional superconductivity, complex magnetic or-
31
+ der and strange metal behavior [1–3]. Ce-based heavy
32
+ fermions contain a Kondo lattice of Ce-ions with an un-
33
+ paired 4f electron, which can both couple to other 4f mo-
34
+ ments via the Ruderman-Kittel-Kasuya-Yosida (RKKY)
35
+ interaction and undergo the Kondo interaction due to
36
+ hybridization with the conduction electrons.
37
+ Here the
38
+ RKKY interaction gives rise to long-range magnetic or-
39
+ der, while the Kondo interaction favors a non-magnetic
40
+ Fermi-liquid ground state with greatly enhanced quasi-
41
+ particle masses. Due to the small energy scales, the rel-
42
+ ative strengths of these competing interactions can often
43
+ be tuned by non-thermal parameters such as pressure,
44
+ magnetic fields and chemical doping [4], and in many
45
+ cases the magnetic ordering can be continuously sup-
46
+ pressed to zero temperature at a quantum critical point
47
+ (QCP).
48
+ A major question for heavy fermion systems is the
49
+ relationship between quantum criticality, and the dome
50
+ of unconventional superconductivity sometimes found to
51
+ encompass the QCP. CeIn3 is a canonical example of this
52
+ phenomenon, which at ambient pressure orders antiferro-
53
+ magnetically below TN = 10.1 K, but exhibits a pressure-
54
+ induced QCP around 2.6 GPa, which is surrounded by
55
+ a superconducting dome with a maximum Tc of 0.2 K
56
+ [5]. The layered CeMIn5 (M= transition metal) com-
57
+ pounds consist of alternating layers of MIn2 and CeIn3
58
+ along the c-axis [6], and among the remarkable proper-
59
+ ties is a significantly enhanced superconducting Tc for
60
+ the M= Rh and Co systems, reaching over 2 K [7, 8],
61
+ giving a strong indication that quasi-two-dimensionality
62
+ is important for promoting heavy fermion superconduc-
63
+ tivity. Meanwhile the Ce2MIn8 compounds correspond
64
+ to a stacked arrangement of two units of CeIn3, and one
65
+ of MIn2 [9], and are expected to have an intermediate
66
+ degree of two dimensionality relative to CeMIn5. Cor-
67
+ respondingly, the superconducting phases have lower Tc
68
+ values of 0.4 and 0.68 K for Ce2CoIn8 [10] and Ce2PdIn8
69
+ [11] at ambient pressure, and a maximum of Tc = 2 K
70
+ at 2.3 GPa for Ce2RhIn8 [12]. On the other hand, these
71
+ different series of related Ce-based heavy fermion sys-
72
+ tems also exhibit different magnetic ground states and
73
+ crystalline electric field (CEF) level schemes [13–17] and
74
+ therefore it is challenging to disentangle the role of these
75
+ factors from that of the reduced dimensionality.
76
+ The
77
+ elucidation of the interplay between these different as-
78
+ pects requires examining additional families of layered
79
+ Ce-based heavy fermion systems for quantum critical be-
80
+ haviors, as well as detailed characterizations of the mag-
81
+ netic ground states and exchange interactions.
82
+ The properties of layered Ce-based heavy fermion gal-
83
+ lides have been less studied than the indium-based sys-
84
+ tems. CeGa6 has a layered tetragonal structure (space
85
+ group P4/nbm), with four Ga-layers between each Ce
86
+ layer [18].
87
+ This compound orders magnetically below
88
+ TN = 1.7 K, and there is evidence for the build-up of
89
+ magnetic correlations at significantly higher tempera-
90
+ tures [19]. A more layered structure is realized in the
91
+ Ce2MGa12 (M= Cu, Ni, Rh, Pd, Ir, Pt) series, where the
92
+ Ce-layers are alternately separated by four Ga-layers, and
93
+ units of MGa6, leading to a larger interlayer separation
96
+ of the Ce-atoms [20, 21]. Several members of this series
97
+ show evidence for both antiferromagnetism and heavy
98
+ fermion behavior [20–25], where pressure can readily sup-
99
+ press the antiferromagnetic transitions of Ce2NiGa12 and
100
+ Ce2PdGa12 [26, 27], while evidence for field-induced crit-
101
+ ical fluctuations is revealed in Ce2IrGa12 [25].
102
+ CePdGa6 has a different layered tetragonal structure
103
+ (space group P4/mmm) displayed in Fig. 1(a), consist-
104
+ ing of square layers of Ce-atoms, with each Ce con-
105
+ tained in a CeGa4 prism, separated by PdGa2 layers
106
+ [28]. Correspondingly, there is a distance between Ce-
107
+ layers of 7.92 ˚A, while the nearest neighbor in-plane Ce-
108
+ Ce separation is 4.34 ˚A, compared to respective values
109
+ of 7.54 ˚A
110
+ and 4.65 ˚A in CeRhIn5 [29]. CePdGa6 or-
111
+ ders antiferromagnetically below TN = 5.2 K, and heavy
112
+ fermion behavior is evidenced by an enhanced Sommer-
113
+ feld coefficient [20, 28]. As such, CePdGa6 is a good can-
114
+ didate to look for novel behaviors arising in quasi-two-
115
+ dimensional heavy fermion systems, but there is both a
116
+ lack of detailed characterizations of the magnetic ground
117
+ state, and no reports of the evolution under pressure. In
118
+ addition, most measurements of CePdGa6 are reported
119
+ in Ref. 28, where the results are affected by the inclu-
120
+ sion of an extrinsic antiferromagnetic phase Ce2PdGa12,
121
+ which can be eliminated using a modified crystal growth
122
+ procedure [20].
123
+ In this article we report detailed measurements of the
124
+ magnetic properties of single crystals of CePdGa6, in-
125
+ cluding their evolution upon applying magnetic fields and
126
+ hydrostatic pressure. We find that CePdGa6 orders an-
127
+ tiferromagnetically in zero-field, where the Ce-moments
128
+ are orientated along the c-axis and align ferromagneti-
129
+ cally within the ab-plane, but there is antiferromagnetic
130
+ coupling between layers. At low temperatures, two meta-
131
+ magnetic transitions are observed for fields along the c-
132
+ axis, the lower of which corresponds to a spin-flip transi-
133
+ tion to a phase with magnetization one-third of the sat-
134
+ urated value. From our analysis of the magnetic suscep-
135
+ tibility, we propose a CEF level scheme which can ex-
136
+ plain the low temperature Ising anisotropy, and we find
137
+ that from considering interactions between the nearest-
138
+ neighbor and next nearest neighbor Ce-layers, the field
139
+ evolution of the magnetic state can be well accounted for.
140
+ II.
141
+ EXPERIMENTAL DETAILS
142
+ Single crystals of CePdGa6 were grown using a Ga self-
143
+ flux method with a molar ratio of Ce:Pd:Ga of 1:1.5:15
144
+ [20].
145
+ Starting materials of Ce ingot (99.9%), Pd pow-
146
+ der (99.99%) and Ga pieces (99.99%) were loaded into
147
+ an alumina crucible which was sealed in an evacuated
148
+ quartz tube. The tube was heated to 1150 ◦C and held
149
+ at this temperature for two hours, before being rapidly
150
+ cooled to 500 ◦C at a rate of 150 K/h and then cooled
151
+ more slowly to 400
152
+ ◦C at 8 K/h.
153
+ After being held
154
+ at 400 ◦C for two weeks, the tube was removed from
155
+ the furnace, and centrifuged to remove excess Ga. The
159
+ FIG. 1.
160
+ (Color online) (a) Crystal structure of CePdGa6
161
+ where the red, blue and green atoms correspond to Ce, Pd
162
+ and Ga, respectively.
163
+ J0 represents magnetic exchange in-
164
+ teractions between nearest neighbor Ce atoms within the ab-
165
+ plane, J1 is between nearest neighboring layers and J2 is
166
+ between next nearest layers.
167
+ An image of a typical single
168
+ crystal of CePdGa6 is also displayed, where each square in
169
+ the background is 2 mm × 2 mm. (b) X-ray diffraction pat-
170
+ tern measured on a single crystal of CePdGa6. The red dashes
171
+ correspond to the positions of the (00l) Bragg peaks, indicat-
172
+ ing that the [001] direction is perpendicular to the large face
173
+ of the plate-like samples.
174
+ obtained crystals are plate-like with typical dimensions
175
+ 2 × 1.5 × 0.3 mm3. Note that when slower cooling rates
176
+ of 6 K/h or 4 K/h were used, the resulting crystals were
177
+ significantly smaller. Single crystals of the non-magnetic
178
+ analog LaPdGa6 were also obtained using a similar pro-
179
+ cedure. The composition was confirmed using a cold field
180
+ emission scanning electron microscope (SEM) equipped
181
+ with an energy dispersive x-ray spectrometer. The phase
182
+ of the crystals were checked using both a PANalytical
183
+ X’Pert MRD powder diffractometer using Cu-Kα radi-
184
+ ation, and a Rigaku-Oxford diffraction Xtalab synergy
185
+ single crystal diffractometer equipped with a HyPix hy-
186
+ brid pixel array detector using Mo-Kα radiation. The ob-
187
+ tained lattice parameters from the single crystal diffrac-
188
+ tion data of a = 4.3446(3) ˚A and c = 7.9173(10) ˚A are
189
+
190
+ pg
191
+ c3
192
+ 0
193
+ 100
194
+ 200
195
+ 300
196
+ 2
197
+ 3
198
+ 4
199
+ 5
200
+ 4
201
+ 8
202
+ 1.5
203
+ 2.0
204
+
205
+
206
+
207
+ (
208
+ cm)
209
+ T (K)
210
+ T
211
+ N
212
+ ~ 5.2 K
213
+
214
+
215
+
216
+ (
217
+ cm)
218
+ T (K)
219
+ FIG. 2.
220
+ (Color online) Temperature dependence of the re-
221
+ sistivity ρ(T ) of CePdGa6 between 1.8 and 300 K. The inset
222
+ displays the low temperature resistivity, where there is a sharp
223
+ anomaly at the antiferromagnetic transition.
224
+ in excellent agreement with previous reports [28]. Mea-
225
+ surements of a crystal using the powder diffractometer
226
+ are displayed in Fig. 1(b), where all the Bragg peaks are
227
+ well-indexed by the (00l) reflections of CePdGa6, demon-
228
+ strating that the c-axis is perpendicular to the large face
229
+ of the crystals.
230
+ Resistivity and specific heat measure-
231
+ ments were performed in applied fields up to 14 T using
232
+ a Quantum Design Physical Property Measurement Sys-
233
+ tem (PPMS-14) down to 1.8 K, and to 0.3 K using a 3He
234
+ insert.
235
+ Resistivity measurements were performed after
236
+ spot welding four Pt wires to the surface, with the exci-
237
+ tation current in the ab-plane. Magnetization measure-
238
+ ments were performed in the range 1.8 - 300 K in applied
239
+ fields up to 5 T using a Quantum Design Magnetic Prop-
240
+ erty Measurement System (MPMS) SQUID magnetome-
241
+ ter.
242
+ Heat capacity measurements under pressure were
243
+ carried out in a piston cylinder cell, using an ac calori-
244
+ metric method.
245
+ III.
246
+ RESULTS
247
+ A.
248
+ Antiferromagnetic transition and CEF
249
+ excitations of CePdGa6
250
+ Figure 2 displays the temperature dependence of
251
+ the resistivity ρ(T ) of CePdGa6 between 1.8 and 300
252
+ K, which has a residual resistivity ratio [RRR =
253
+ ρ(300 K)/ρ(2 K)] = 3.8.
254
+ A broad shoulder is ob-
255
+ served at around 50 K, which likely arises due to both
256
+ the Kondo effect, and as a consequence of CEF excita-
257
+ tions.
258
+ At higher temperatures, quasilinear behavior is
259
+ observed, which could be due to electron-phonon cou-
260
+ pling.
261
+ As shown in the inset, there is an anomaly at
262
+ around TN = 5.2 K, below which ρ(T ) decreases more
263
+ rapidly with decreasing temperature, which corresponds
264
+ FIG. 3. (Color online) (a) Magnetic contribution to the spe-
307
+ cific heat Cm at low temperatures, where the red solid line
308
+ shows the results from fitting with Eq. 1. The inset shows
309
+ the total specific heat C of CePdGa6 and the non-magnetic
310
+ analog LaPdGa6. (b) Temperature dependence of Cm/T and
311
+ the magnetic entropy Sm of CePdGa6. The pink dotted line
312
+ displays the low temperature contribution to the specific heat
313
+ calculated from the CEF scheme deduced from the analysis
314
+ of χ(T ).
315
+ to the antiferromagnetic transition reported previously
316
+ [20], while no signature of the spurious transition at
317
+ higher temperatures is detected [28].
318
+ The total spe-
319
+ cific heat of CePdGa6 and nonmagnetic isostructural
320
+ LaPdGa6 are shown in the inset of Fig. 3(a). The tem-
321
+ perature dependence of the magnetic contribution to the
322
+ specific heat Cm was estimated by subtracting the data
323
+ of LaPdGa6, which is shown in Fig. 3(a), while the spe-
324
+ cific heat coefficient Cm/T and the magnetic entropy Sm
325
+ of CePdGa6 are displayed in Fig. 3(b). A pronounced
326
+ λ-like anomaly is observed at TN = 5.2 K, as is typi-
327
+ cal for a second-order magnetic phase transition.
328
+ For
329
+ T > TN, Cm/T increases with decreasing temperature,
330
+ and extrapolates to a relatively large zero temperature
331
+ value of 250 mJ/mol K2. As discussed below, the analysis
332
+ of the magnetic susceptibility χ(T ) suggests the presence
333
+ of a low lying CEF level, which could contribute to Cm/T
334
+ in this temperature range. The dotted line in Fig. 3(b)
335
+ shows the calculated Cm/T for the CEF level scheme de-
336
+ FIG. 4.
372
+ (Color online) (a) Low temperature magnetic sus-
373
+ ceptibility χ(T ) of CePdGa6, with an applied field of µ0H =
374
+ 0.1 T both parallel to the c-axis and within the ab-plane. (b)
375
+ Temperature dependence of 1/(χ-χ0) up to 300 K for 0.5 T
376
+ applied along the two field directions, where the dashed and
377
+ solid lines show the results from fitting with the CEF model
378
+ described in the text.
379
+ scribed below, which has a sizeable value in the vicinity
380
+ of the transition. Subtracting the contribution from the
381
+ CEF at TN yields an estimate of γ ∼ 121.4 mJ/mol K2
382
+ associated with the ground state doublet, and such an
383
+ enhanced value could arise both due to heavy fermion
384
+ behavior, as well as the presence of short range magnetic
385
+ correlations, as inferred in CeRhIn5[30, 31].
386
+ The data
387
+ below TN were analyzed using [32]:
388
+ Cm = γT + c∆7/2
389
+ SW
390
+
391
+ T exp
392
+ �−∆SW
393
+ T
394
+
395
+ ×
396
+
397
+ 1 +
398
+ 39T
399
+ 20∆SW
400
+ + 51
401
+ 32
402
+
403
+ T
404
+ ∆SW
405
+ �2�
406
+ (1)
407
+ where the first term corresponds to the electronic con-
408
+ tribution and the second term arises due to antiferro-
409
+ magnetic spin-waves. Here the coefficient c is related to
410
+ the spinwave stiffness D via c ∝ D−3, while ∆SW
411
+ is
412
+ the spin-wave gap. The results from fitting the zero-field
413
+ data are displayed in the main panel of Fig. 3(a), where
414
+ γ = 121.4 mJ/mol K2 was fixed, yielding ∆SW = 2.3 K
415
+ and c = 23 mJ/mol K2. The moderate value of ∆SW
416
+ is smaller than TN, unlike the layered heavy fermions
417
+ gallides Ce2PdGa12 and Ce2IrGa12 where ∆SW
418
+ > TN
419
+ [24, 25], likely reflecting the weaker magnetocrystalline
420
+ anisotropy in CePdGa6. The temperature dependence of
421
+ the magnetic entropy Sm of CePdGa6 is also displayed
422
+ in Fig. 3(b), obtained by integrating Cm/T , where Cm/T
423
+ was linearly extrapolated below 0.4 K. At TN, Sm reaches
424
+ 0.76R ln 2, which together with the expected sizeable con-
425
+ tribution from the excited CEF level discussed above,
426
+ suggests a reduced entropy corresponding to the ground
427
+ state doublet due to Kondo screening.
428
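The spin-wave analysis of Eq. (1) and the entropy integration can be reproduced numerically. The sketch below is not part of the original analysis; it assumes numpy arrays T_K (temperature in K) and Cm (magnetic specific heat in J mol^-1 K^-1) obtained as described above, fixes γ = 121.4 mJ/mol K^2 as in the text, and would fit ∆_SW and c with scipy before integrating C_m/T for S_m.

```python
# Hedged sketch: fit the zero-field C_m(T) below T_N with Eq. (1) and integrate
# C_m/T to estimate S_m.  T_K and Cm are assumed measured arrays (not provided here).
import numpy as np
from scipy.optimize import curve_fit

gamma = 0.1214  # J mol^-1 K^-2, fixed to the value quoted in the text

def cm_spinwave(T, c, delta):
    """Eq. (1): electronic term plus gapped antiferromagnetic spin-wave term."""
    x = T / delta
    return (gamma * T
            + c * delta**3.5 * np.sqrt(T) * np.exp(-delta / T)
            * (1.0 + 39.0 * x / 20.0 + (51.0 / 32.0) * x**2))

# fit only data below T_N ~ 5.2 K; the text quotes delta ~ 2.3 K:
# popt, _ = curve_fit(cm_spinwave, T_K[T_K < 5.2], Cm[T_K < 5.2], p0=(0.02, 2.0))

# magnetic entropy from a trapezoidal integration of Cm/T (after extrapolating
# Cm/T linearly below 0.4 K, as in the text); compare Sm(T_N) with 0.76 R ln 2:
# Sm = np.concatenate(([0.0], np.cumsum(0.5 * np.diff(T_K) *
#                      (Cm[1:] / T_K[1:] + Cm[:-1] / T_K[:-1]))))
```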
+ Figure 4(a) displays the temperature dependence of
429
+ the magnetic susceptibility χ(T ) of CePdGa6 at low tem-
430
+ peratures, with an applied field of µ0H = 0.1 T along
431
+ the c-axis and within the ab-plane, which both exhibit
432
+ an anomaly at TN.
433
+ At low temperatures, χ(T ) is sig-
434
+ nificantly larger for fields along the c-axis than in the
435
+ ab-plane, demonstrating that the c-axis is the easy-axis
436
+ of magnetization.
437
+ At TN, there is a peak in χ(T ) for
438
+ H ∥ c, while for H ∥ ab χ(T ) weakly increases below TN,
439
+ indicating that this corresponds to an antiferromagnetic
440
+ transition with moments ordered along the easy c-axis.
441
+ At higher temperatures, the data above 100 K can be analyzed using the Curie-Weiss law: χ = χ0 + C/(T − θCW), where χ0 is a temperature-independent term, C is the Curie constant and θCW is the Curie-Weiss temperature, yielding θ^c_CW = −11.7(3) K and an effective moment of µ^c_eff = 2.35 µB/Ce for H ∥ c, as well as θ^ab_CW = −12.9(8) K and µ^ab_eff = 2.49 µB/Ce for H ∥ ab. The obtained values of µeff for both directions are close to the full value of 2.54 µB for the J = 5/2 ground state multiplet of Ce3+. At lower temperatures, there is a deviation of χ(T) from Curie-Weiss behavior, due to the splitting of the ground state multiplet by crystalline-electric
+ fields. To analyze the CEF level scheme, we considered
459
+ the following Hamiltonian for a Ce3+ ion in a tetragonal
460
+ CEF [33]
461
+ H_{CF} = B_2^0 O_2^0 + B_4^0 O_4^0 + B_4^4 O_4^4 \qquad (2)
+ where O_l^m and B_l^m are Stevens operator equivalents and parameters, respectively. The B_2^0 parameter can be estimated from the high temperature susceptibility using [34]
+ B_2^0 = \frac{10 k_B\left(\theta^{ab}_{CW} - \theta^{c}_{CW}\right)}{3(2J-1)(2J+3)}, \qquad (3)
+ where J = 5/2 for the ground state multiplet of Ce3+, yielding B_2^0 = −0.01077 meV. χ(T) along both directions was analyzed taking into account the contribution from the CEF χ^i_CEF, as well as molecular field parameters λ^i using
+ \chi^{i} = \chi_0^{i} + \frac{\chi^{i}_{CEF}}{1 - \lambda^{i}\chi^{i}_{CEF}}, \qquad (4)
+ where the superscript i denotes the c-axis or ab-plane. With B_2^0 fixed from Eq. 3, values of B_4^0 = −0.0746 meV and |B_4^4| = 0.496 meV were obtained, together with molecular field parameters of λ^c = −3.55 mol/emu and λ^ab = 8.15 mol/emu, χ^c_0 = 2.2 × 10^{−4} emu/mol and χ^ab_0 = −2.3 × 10^{−3} emu/mol, and the fitted results are shown in Fig. 4(b). These parameters yield a CEF scheme with a Γ7 ground state Kramers doublet |ψ^±_1⟩ = 0.883|±5/2⟩ − 0.469|∓3/2⟩ (for positive B_4^4), and excitations to Γ6 and Γ7 levels of ∆1 = 2.8 meV and ∆2 = 32.1 meV, respectively. At high temperatures, the small
+ FIG. 5. (Color online) Temperature dependence of the spe-
576
+ cific heat of CePdGa6 in various applied magnetic fields (a)
577
+ parallel to the c-axis, and (b) within the ab-plane.
578
+ FIG. 6. (Color online) Temperature dependence of the mag-
625
+ netic susceptibility χ(T ) of CePdGa6 in different magnetic
626
+ fields parallel to the c-axis for fields (a) below, and (b) above
627
+ 1 T. The vertical arrows mark the position of the antiferro-
628
+ magnetic transition. Panel (c) shows χ(T ) for various fields
629
+ applied within the ab-plane, where the dashed line shows the
630
+ evolution of TN with field.
631
+ negative B_2^0 leads to a nearly isotropic χ(T), while at low temperatures, the negative B_4^0 leads to the observed Ising anisotropy with an easy c-axis. The predicted moment along the c-axis is given by ⟨µz⟩ = ⟨ψ^±_1|g_J J_z|ψ^±_1⟩ = 1.4 µB/Ce, which is larger than the value obtained from the saturated magnetization. The positive value of λ^ab is consistent with ferromagnetic coupling between spins within the basal plane, while the smaller negative λ^c is consistent with weaker antiferromagnetic coupling between Ce layers.
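As a cross-check of the level scheme quoted above, the tetragonal CEF Hamiltonian of Eq. (2) can be diagonalized directly in the J = 5/2 basis. The following sketch is ours (not from the paper); it uses B_2^0 estimated from Eq. (3) with the fitted Curie–Weiss temperatures and the B_4^0, B_4^4 values quoted above, and should approximately reproduce splittings of 2.8 and 32.1 meV, a ground doublet close to 0.883|±5/2⟩ − 0.469|∓3/2⟩, and ⟨µz⟩ ≈ 1.4 µB with gJ = 6/7.

```python
# Hedged sketch: diagonalize H_CF = B20*O20 + B40*O40 + B44*O44 for Ce3+ (J = 5/2).
import numpy as np

J = 2.5
kB = 0.08617                      # Boltzmann constant in meV / K
m = np.arange(-J, J + 1)          # Jz eigenvalues -5/2 ... +5/2
I6 = np.eye(6)
Jz = np.diag(m)
Jp = np.diag(np.sqrt(J * (J + 1) - m[:-1] * (m[:-1] + 1)), -1)   # J+
Jm = Jp.T                                                        # J-
X = J * (J + 1)
Jz2, Jz4 = Jz @ Jz, Jz @ Jz @ Jz @ Jz

# Stevens operator equivalents appearing in Eq. (2)
O20 = 3 * Jz2 - X * I6
O40 = 35 * Jz4 - (30 * X - 25) * Jz2 + (3 * X**2 - 6 * X) * I6
O44 = 0.5 * (np.linalg.matrix_power(Jp, 4) + np.linalg.matrix_power(Jm, 4))

# B20 from Eq. (3) with the fitted Curie-Weiss temperatures; B40, B44 from the fit
theta_ab, theta_c = -12.9, -11.7                                      # K
B20 = 10 * kB * (theta_ab - theta_c) / (3 * (2*J - 1) * (2*J + 3))    # ~ -0.0108 meV
B40, B44 = -0.0746, 0.496                                             # meV

H = B20 * O20 + B40 * O40 + B44 * O44
E, V = np.linalg.eigh(H)
print("CEF splittings (meV):", E[2] - E[0], E[4] - E[0])   # expect ~2.8 and ~32.1

# moment of the ground doublet along c: diagonalize gJ*Jz within the lowest doublet
gJ = 6.0 / 7.0
P = V[:, :2]
print("mu_z (mu_B):", np.abs(np.linalg.eigvalsh(gJ * P.T @ Jz @ P)).max())  # ~1.4
```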
+ B.
650
+ Field dependence of the magnetic properties
651
+ In order to determine the behavior of the magnetic
652
+ ground state in magnetic fields, and to map the field-
653
+ temperature phase diagrams, measurements of the spe-
654
+ FIG. 7. (Color online) (a) Isothermal field dependence of the
693
+ magnetization M(H) of CePdGa6 for fields along the c-axis,
694
+ at three temperatures below TN. The lower inset displays the
695
+ low field region of the data in the main panel, demonstrating
696
+ hysteresis about the metamagnetic transition, while the up-
697
+ per inset shows M(H) at 2 K for fields within the ab-plane.
698
+ (b) Field dependence of the resistivity ρ(H) of CePdGa6 at
699
+ several temperatures for fields along the c-axis. The dashed
700
+ lines show the evolution of the two metamagnetic transitions.
701
+ cific heat and magnetization were performed in differ-
702
+ ent applied fields. Figure 5(a) displays the low tempera-
703
+ ture specific heat of CePdGa6 with different fields applied
704
+ along the c-axis. It can be seen that TN is gradually sup-
705
+ pressed with increasing field, and at fields greater than
706
+ 2 T, no magnetic transition is observed. Instead, there
707
+ is a broad hump in C/T , which shifts to higher tempera-
708
+ ture with increasing field, corresponding to the Schottky
709
+ anomaly from the splitting of the ground state doublet
710
+ in the applied field. In Fig. 5(b), C/T is displayed for
711
+ fields within the ab-plane, where the antiferromagnetic
712
+ transition is more robust than for fields along the c-axis,
713
+ and the broad Schottky anomaly is only clearly resolved
714
+ in a field of 12 T. The differences in the field dependence
715
+ for the two different field directions is consistent with the
716
+ low temperature Ising anisotropy in CePdGa6, where a
717
+ smaller field along the easy c-axis can bring the system
718
+ to the spin-polarized state.
719
+ The low temperature χ(T ) in different applied fields
720
+ FIG. 8. (Color online) Temperature dependence of the ac heat
748
+ capacity of CePdGa6 at various hydrostatic pressures up to
749
+ 2.2 GPa. The vertical dashed line shows the position of the
750
+ ambient pressure TN, which remains nearly unchanged with
751
+ pressure.
752
+ are displayed in Fig. 6. For fields along the c-axis dis-
753
+ tinctly different behaviors are observed for different field
754
+ ranges. In a field of 0.1 T, there is a sharp peak at TN,
755
+ corresponding to entering the antiferromagnetic ground
756
+ state. At a larger field of 0.5 T, only a small hump is
757
+ observed at TN, while at low temperatures there is an
758
+ increase in χ(T ), and at higher fields there is broad peak
759
+ which is gradually suppressed with field. Meanwhile for
760
+ fields within the ab-plane up to at least 8 T, there is a
761
+ gradual suppression of TN, in line with the specific heat
762
+ results.
763
+ The isothermal magnetization as a function of field
764
+ along the c-axis at three temperatures below TN is dis-
765
+ played in Fig. 7(a), measured upon both sweeping the
766
+ field up and down.
767
+ In zero-field there is no remanent
768
+ magnetization, consistent with a purely antiferromag-
769
+ netic ground state. At 2 K, there are two metamagnetic
770
+ transitions at Hm1 = 0.4 T and Hm2 = 2.1 T, where
771
+ hysteresis is also observed indicating a first-order nature,
772
+ whereas otherwise the magnetization plateaus, with only
773
+ a weak change of the magnetization with field. This is
774
+ consistent with Hm1 and Hm2 corresponding to spin-flip
775
+ transitions, with the spins remaining orientated along the
776
+ c-axis. For fields above Hm2, no magnetic transition is
777
+ observed in the specific heat, and therefore this likely cor-
778
+ responds to the system reaching the spin polarized state,
779
+ with a saturation magnetization of Ms = 1.1 µB/Ce. On
780
+ the other hand, above Hm1 the magnetization reaches
781
+ a value of 0.35 µB/Ce, corresponding to ≈ Ms/3, in-
782
+ dicating a change of magnetic structure with a ferro-
783
+ magnetic component. While there is little change in the
784
+ field-dependence of the magnetization at 3 K, the curves
785
+ at 4 K are drastically different. Instead of there being
786
+ abrupt step-like metamagnetic transitions, the magneti-
787
+ FIG. 9. (Color online) Temperature-field phase diagram of
828
+ CePdGa6 at ambient pressure for fields along the easy c-axis,
829
+ from measurements of the resistivity, magnetization, and spe-
830
+ cific heat.
831
+ The solid line shows the evolution of TN, while
832
+ the dashed lines show the positions of the low temperature
833
+ metamagnetic transitions.
834
+ The magnetic structures at low
835
+ temperature are also illustrated by the orange arrows, where
836
+ in zero-field there is an antiferromagnetic ground state, while
837
+ upon applying a field the system passes through an interme-
838
+ diate ↑↑↓ phase, before entering the spin polarized state. The
839
+ inset shows the field dependence of the magnetization based
840
+ on mean-field calculations of the magnetic ground state cal-
841
+ culated using the McPhase software package [35], with the
842
+ parameters described in the text.
843
+ zation smoothly increases with field, reaching a very sim-
844
+ ilar saturation value. This suggests that at higher tem-
845
+ peratures, the spins continuously rotate upon increasing
846
+ the applied field, rather than undergoing abrupt spin flip
847
+ transitions. The field dependent magnetization at 2 K
848
+ for fields in the ab-plane is also shown in the inset of
849
+ Fig. 7(a), which smoothly changes with field, with no
850
+ sign of saturation up to at least 5 T, consistent with this
851
+ being the hard direction of magnetization. The metam-
852
+ agnetic transitions are also revealed in the field depen-
853
+ dence of the resistivity ρ(H), as displayed in Fig. 7(b) for
854
+ fields along the c-axis. At 0.3 K, two abrupt anomalies
855
+ are observed corresponding to Hm1 and Hm2, which are
856
+ also detected at 1.8 K and 3 K. Above these transitions,
857
+ there is a decrease of ρ(H), consistent with the reduced
858
+ spin-flip scattering arising from a larger ferromagnetic
859
+ component to the magnetism.
860
+ On the other hand, no
861
+ metamagnetic transitions are detected at 5 K, where in-
862
+ stead there is a broad peak in ρ(H), again consistent
863
+ with a more gradual reorientation of the spins with field
864
+ at higher temperatures.
865
+ C.
868
+ Magnetism of CePdGa6 under pressure
869
+ To determine the evolution of the magnetic order un-
870
+ der pressure, the temperature dependence of the ac spe-
871
+ cific heat of CePdGa6 was measured at several differ-
872
+ ent hydrostatic pressures up to 2.2 GPa, which are dis-
873
+ played in Fig. 8.
874
+ It can be seen from the dotted line
875
+ that there is little change of TN with pressure indicat-
876
+ ing the robustness of magnetic order.
877
+ In the case of
878
+ the layered Ce2MGa12 compounds, the TN of Ce2NiGa12
879
+ and Ce2PdGa12 decrease with pressure, and antiferro-
880
+ magnetism is suppressed entirely above 5.5 and 7 GPa,
881
+ respectively [26, 27].
882
+ On the other hand the TN of
883
+ Ce2IrGa12 undergoes a moderate enhancement from 3.1
884
+ to 3.7 K for pressures up to 2.3 GPa, indicating that
885
+ this compound is located on the left side of the Doniach
886
+ phase diagram [25]. In the case of CePdGa6, the robust-
887
+ ness of TN suggests that measurements to higher pres-
888
+ sures are required to situate this compound within the
889
+ framework of the Doniach phase diagram and to exam-
890
+ ine whether there is pressure-induced quantum criticality
891
+ in CePdGa6.
892
+ IV.
893
+ DISCUSSION
894
+ Our measurements of the resistivity, magnetic sus-
895
+ ceptibility and specific heat show that CePdGa6 orders
896
+ antiferromagnetically below TN = 5.2 K, with the mo-
897
+ ments orientated along the c-axis. Figure 9 displays the
898
+ temperature-field phase diagram for magnetic fields ap-
899
+ plied along the c-axis. The phase boundaries obtained
900
+ from different measurements are highly consistent, show-
901
+ ing that TN shifts to lower temperatures with field, before
902
+ abruptly disappearing in a field of 2 T. At low temper-
903
+ atures, there are two step-like metamagnetic transitions
904
+ shown by the dashed lines, where the second transition is
905
+ to the spin polarized state, while the lower transition cor-
906
+ responds to a change of magnetic state to a phase with
907
+ a magnetization of 0.35 µB/Ce, about one-third of the
908
+ saturated value. Such step-like changes in the magne-
909
+ tization suggest that the spins are strongly constrained
910
+ along the c-axis, and therefore there are abrupt spin-
911
+ flip transitions for fields applied along the ordering di-
912
+ rection. On the other hand, at 4 K the magnetization
913
+ changes smoothly with field, reaching the same saturated
914
+ magnetization, indicating that at this temperature the
915
+ spins continuously rotate in the applied field.
916
+ Such a
917
+ change with temperature may be a consequence of only
918
+ a moderate magnetocrystalline anisotropy, as also evi-
919
+ denced by the relatively small value of the spin-wave gap
920
+ ∆SW /TN ≈ 0.4, as compared to the other heavy fermion
921
+ gallides Ce2IrGa12 and Ce2PdGa12 which have ∆SW /TN
922
+ of 1.5 and 2.8, respectively [24, 25].
923
+ From the analysis of the magnetic susceptibility includ-
924
+ ing the CEF contribution, the molecular field parameter
925
+ is positive in the ab-plane (λab), while a smaller negative
926
+ value is obtained along the c-axis (λc). Together with the
927
+ fact that only a relatively small field along the c-axis is re-
928
+ quired to reach the spin polarized state, this suggests that
929
+ the antiferromagnetic ground state consists of ferromag-
930
+ netically ordered Ce-layers coupled antiferromagnetically
931
+ along the c-axis. The simplest model for such a system
932
+ would consist of ferromagnetic Heisenberg exchange in-
933
+ teractions between nearest neighbor Ce atoms within the
934
+ ab-plane J0 > 0, and antiferromagnetic exchange inter-
935
+ actions J1 < 0 between nearest neighboring layers, as
936
+ well as a sufficiently strong Ising anisotropy. This yields
937
+ an A-type antiferromagnetic ground state consisting of
938
+ ferromagnetic layers with moments orientated along the
939
+ c-axis, where the moment direction alternates between
940
+ adjacent layers, “↑↓↑↓”. This model however cannot ac-
941
+ count for the field induced phase with one-third magneti-
942
+ zation, since for fields along the c-axis, only a metamag-
943
+ netic transition directly from the ↑↓↑↓ phase to the spin
944
+ polarized state is anticipated.
945
+ In order to realize the intermediate field-induced phase,
946
+ it is necessary to consider an antiferromagnetic exchange
947
+ J2 between next nearest neighboring layers. In this case,
948
+ from considering the classical ground state energies with
949
+ sufficiently strong Ising anisotropy, the same ↑↓↑↓ ground
950
+ state is realized for J1/J2 > 2, while a ↑↑↓↓ state oc-
951
+ curs for J1/J2 < 2 [36]. Upon applying a magnetic field
952
+ along the c-axis, there is a metamagnetic transition at
953
+ a field Hm1 to an ↑↑↓ state with a net magnetization
954
+ one-third of the saturated value, and another at Hm2
955
+ to the spin polarized state, where Hm2/Hm1 is deter-
956
+ mined by J1/J2. We performed mean-field calculations
957
+ of the magnetic ground state and magnetization using
958
+ the McPhase software package [35], which determines
959
+ the most stable magnetic structure at a given temper-
960
+ ature and magnetic field from considering multiple ran-
961
+ dom starting moment configurations.
962
+ These took into
963
+ account the Heisenberg exchange interactions described
964
+ above, as well as the CEF Hamiltonian HCF with our de-
965
+ duced values of the Stevens parameters. As shown in the
966
+ inset of Fig. 9, the observed values of Hm1 = 0.4 T and
967
+ Hm2 = 2.1 T, from the midpoints of the metamagnetic
968
+ transitions at 2 K, are well reproduced from the mean-
969
+ field calculations at 2 K with J1 = −0.023 meV and
970
+ J2 = −0.0085 meV, where for Hm1 < H < Hm2 the ↑↑↓
971
+ ground state has the lowest energy. Keeping these values
972
+ fixed, we find that a nearest neighbor in-plane ferromag-
973
+ netic interaction J0 = 0.034 meV can yield the observed
974
+ value of TN = 5.2 K. Therefore our analysis suggests
975
+ stronger in-plane ferromagnetic interactions, where the
976
+ value of 4J0/(2J1 + 2J2) = 2.16 is close to our fitted
977
+ value of λab/λc = 2.3. Note that here we have assumed
978
+ a ↑↓↑↓ ground state with J1/J2 > 2. Although a ↑↑↓↓
979
+ phase has been reported in CeCoGe3 [37], such a scenario
980
+ is less likely in CePdGa6 due to the larger interlayer dis-
981
+ tances.
982
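The phase sequence ↑↓↑↓ → ↑↑↓ → spin polarized described above can also be illustrated with a purely classical check. The sketch below is our own simplified calculation (not the McPhase mean-field calculation of the paper): it compares Ising stacking patterns of fully polarized layers coupled by J1 and J2, with the convention H = −Σ J s_i s_j − h Σ s_i and the Zeeman energy per layer h left in energy units. With the quoted J1 and J2 it produces the intermediate ↑↑↓ phase and a ratio of critical fields h2/h1 ≈ 5.25, close to the observed Hm2/Hm1 = 2.1 T / 0.4 T.

```python
# Hedged sketch (not the paper's McPhase calculation): classical Ising stacking
# energies for ferromagnetic layers coupled by J1 (nearest) and J2 (next-nearest),
# H = -sum_ij J_ij s_i s_j - h sum_i s_i with s_i = +/-1.
import itertools
import numpy as np

J1, J2 = -0.023, -0.0085          # meV, antiferromagnetic (J < 0), from the text

def energy_per_layer(s, h):
    s = np.array(s, dtype=float)
    e1 = np.mean(s * np.roll(s, 1))   # nearest-layer correlation
    e2 = np.mean(s * np.roll(s, 2))   # next-nearest-layer correlation
    return -J1 * e1 - J2 * e2 - h * np.mean(s)

def ground_state(h, max_period=4):
    best = None
    for p in range(1, max_period + 1):
        for s in itertools.product([1, -1], repeat=p):
            e = energy_per_layer(s, h)
            if best is None or e < best[0] - 1e-12:
                best = (e, s)
    return best[1]

# scan the Zeeman energy per layer and record changes of the ground state:
# up-down-up-down (m = 0)  ->  up-up-down (m = 1/3)  ->  polarized (m = 1)
prev, transitions = None, []
for h in np.linspace(0.0, 0.08, 8001):
    mag = abs(np.mean(ground_state(h)))
    if prev is not None and mag != prev:
        transitions.append((round(h, 5), mag))
    prev = mag
print(transitions)   # two steps whose field ratio is ~5.25, as for Hm2/Hm1
```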
+ Compared to the layered heavy fermion antiferromag-
983
+ net CeRhIn5, the magnetism in CePdGa6 appears to
984
+ have a much more three dimensional character, whereas
985
+ it is rather two-dimensional in the former, with J1/J0 =
986
+ 0.13 deduced from inelastic neutron scattering [38]. In
989
+ addition, in CeRhIn5 the easy plane anisotropy and pres-
990
+ ence of in-plane antiferromagnetic interactions give rise
991
+ to spiral magnetic order which is incommensurate along
992
+ the c-axis [13, 14], and these features may be important
993
+ factors for realizing the unconventional quantum critical-
994
+ ity and superconductivity. On the other hand, the TN of
995
+ CePdGa6 is much more robust with pressure, remaining
996
+ almost unchanged at pressures up to 2.2 GPa. Therefore
997
+ an understanding of the relationship between the mag-
998
+ netism and any quantum critical behaviors will require
999
+ measurements at considerably higher pressures.
1000
+ In addition, despite the layered arrangement of Ce
1001
+ atoms, the local environment of the Ce atoms is rel-
1002
+ atively three dimensional, as evidenced by the derived
1003
+ CEF parameters being close to that for a cubic sys-
1004
+ tem (where B0
1005
+ 2 = 0 and |B4
1006
+ 4| = 5|B0
1007
+ 4|).
1008
+ This CEF
1009
+ scheme can correctly predict the low-temperature Ising
1010
+ anisotropy, but the predicted moment along the c-axis
1011
+ is larger than that observed. While such a reduced mo-
1012
+ ment compared to that predicted from the CEF level-
1013
+ scheme is often observed in heavy fermion antiferromag-
1014
+ nets due to screening of the moments by the Kondo effect
1015
+ [14, 16, 37, 39, 40], confirming whether such a scenario is
1016
+ applicable to CePdGa6 requires a more precise determi-
1017
+ nation of the CEF parameters, by measurements such as
1018
+ inelastic neutron scattering.
1019
+ V.
1020
+ CONCLUSION
1021
+ In summary, we have characterized the magnetic prop-
1022
+ erties of the heavy fermion antiferromagnet CePdGa6,
1023
+ and their
1024
+ evolution upon the application of mag-
1025
+ netic fields and pressure.
1026
+ We have constructed the
1027
+ temperature-field phase diagram for fields along the c-
1028
+ axis, where at low temperatures there are two abrupt
1029
+ metamagnetic transitions corresponding to spin-flip tran-
1030
+ sitions. From the analysis of the magnetic susceptibility,
1031
+ we propose a CEF level scheme for the splitting of the
1032
+ ground state J = 5/2 multiplet, indicating that the Ising
1033
+ anisotropy at low temperatures is driven by the sizeable
1034
+ B0
1035
+ 4 parameter. Moreover, our results are consistent with
1036
+ an antiferromagnetic ground state consisting of ferromag-
1037
+ netically coupled Ce-layers, with antiferromagnetic cou-
1038
+ pling between layers. We have proposed a model for the
1039
+ exchange interactions which can explain the evolution of
1040
+ the magnetic ordering with applied magnetic field, which
1041
+ has sizeable nearest neighbor and next-nearest neighbor
1042
+ layer interactions, indicating the presence of significant
1043
+ long-range magnetic interactions. Despite evidence for
1044
+ heavy fermion behavior, there is negligible change of TN
1045
+ upon applying pressures up to 2.2 GPa, and hence measure-
1046
+ ments at much higher pressures are necessary to look for
1047
+ evidence of quantum criticality.
1048
+ VI.
1049
+ ACKNOWLEDGMENTS
1050
+ We are grateful to Martin Rotter for advice with the
1051
+ McPhase software. This work was supported by the Na-
1052
+ tional Key R&D Program of China (2017YFA0303100),
1053
+ the Key R&D Program of Zhejiang Province, China
1054
+ (2021C01002), and the National Natural Science Foun-
1055
+ dation of China (12174332, 12034017 and 11974306).
1056
+ ∗ msmidman@zju.edu.cn
1057
+ [1] Z.
1058
+ Weng,
1059
+ M.
1060
+ Smidman,
1061
+ L.
1062
+ Jiao,
1063
+ X.
1064
+ Lu,
1065
+ and
1066
+ H.
1067
+ Q.
1068
+ Yuan,
1069
+ Multiple
1070
+ quantum
1071
+ phase
1072
+ transitions
1073
+ and
1074
+ superconductivity in
1075
+ Ce-based
1076
+ heavy
1077
+ fermions,
1078
+ Rep. Prog. Phys. 79, 094503 (2016).
1079
+ [2] Q. Si and F. Steglich, Heavy fermions and quantum phase
1080
+ transitions, Science 329, 1161 (2010).
1081
+ [3] P.
1082
+ Coleman,
1083
+ Heavy
1084
+ fermions:
1085
+ Elec-
1086
+ trons
1087
+ at
1088
+ the
1089
+ edge
1090
+ of
1091
+ magnetism,
1092
+ Handbook of magnetism and advanced magnetic materials (2007).
1093
+ [4] S. Doniach, The Kondo lattice and weak antiferromag-
1094
+ netism, Physica B+C 91, 231 (1977).
1095
+ [5] N. Mathur, F. Grosche, S. Julian, I. Walker, D. Freye,
1096
+ R. Haselwimmer, and G. Lonzarich, Magnetically me-
1097
+ diated superconductivity in heavy fermion compounds,
1098
+ Nature 394, 39 (1998).
1099
+ [6] J. D. Thompson
1100
+ and Z. Fisk, Progress
1101
+ in heavy-
1102
+ fermion superconductivity: Ce115 and related materials,
1103
+ J. Phys. Soc. Jpn. 81, 011002 (2012).
1104
+ [7] T. Park, F. Ronning, H. Q. Yuan, M. B. Salamon,
1105
+ R. Movshovich, and J. D. Thompson, Hidden magnetism
1106
+ and quantum criticality in the heavy fermion supercon-
1107
+ ductor CeRhIn5, Nature 440, 65 (2006).
1108
+ [8] C.
1109
+ Petrovic,
1110
+ P.
1111
+ G.
1112
+ Pagliuso,
1113
+ M.
1114
+ F.
1115
+ Hund-
1116
+ ley,
1117
+ R.
1118
+ Movshovich,
1119
+ J.
1120
+ L.
1121
+ Sarrao,
1122
+ J.
1123
+ D.
1124
+ Thompson,
1125
+ Z.
1126
+ Fisk,
1127
+ and
1128
+ P.
1129
+ Monthoux,
1130
+ Heavy-
1131
+ fermion
1132
+ superconductivity
1133
+ in
1134
+ CeCoIn5
1135
+ at
1136
+ 2.3
1137
+ K,
1138
+ J. Phys. Condens. Matter 13, L337 (2001).
1139
+ [9] A.
1140
+ L.
1141
+ Cornelius,
1142
+ P.
1143
+ G.
1144
+ Pagliuso,
1145
+ M.
1146
+ F.
1147
+ Hund-
1148
+ ley, and J. L. Sarrao, Field-induced magnetic tran-
1149
+ sitions
1150
+ in
1151
+ the
1152
+ quasi-two-dimensional
1153
+ heavy-fermion
1154
+ antiferromagnets
1155
+ CenRhIn3n+2
1156
+ (n
1157
+ =
1158
+ 1
1159
+ or
1160
+ 2),
1161
+ Phys. Rev. B 64, 144411 (2001).
1162
+ [10] G. Chen, S. Ohara, M. Hedo, Y. Uwatoko, K. Saito,
1163
+ M. Sorai, and I. Sakamoto, Observation of supercon-
1164
+ ductivity in heavy-fermion compounds of Ce2CoIn8,
1165
+ J. Phys. Soc. Jpn. 71, 2836 (2002).
1166
+ [11] D.
1167
+ Kaczorowski,
1168
+ A.
1169
+ P.
1170
+ Pikul,
1171
+ D.
1172
+ Gnida,
1173
+ and
1174
+ V. H. Tran, Emergence of a superconducting state
1175
+ from
1176
+ an
1177
+ antiferromagnetic
1178
+ phase
1179
+ in
1180
+ single
1181
+ crys-
1182
+ tals
1183
+ of
1184
+ the
1185
+ heavy
1186
+ fermion
1187
+ compound
1188
+ Ce2PdIn8,
1189
+ Phys. Rev. Lett. 103, 027003 (2009).
1190
+ [12] M. Nicklas, V. A. Sidorov, H. A. Borges, P. G. Pagliuso,
1191
+ C. Petrovic, Z. Fisk, J. L. Sarrao, and J. D. Thomp-
1192
+ son, Magnetism and superconductivity in Ce2RhIn8,
1195
+ Phys. Rev. B 67, 020506(R) (2003).
1196
+ [13] N. J. Curro, P. C. Hammel, P. G. Pagliuso, J. L. Sar-
1197
+ rao, J. D. Thompson, and Z. Fisk, Evidence for spiral
1198
+ magnetic order in the heavy fermion material CeRhIn5,
1199
+ Phys. Rev. B 62, R6100 (2000).
1200
+ [14] W.
1201
+ Bao,
1202
+ P.
1203
+ G.
1204
+ Pagliuso,
1205
+ J.
1206
+ L.
1207
+ Sarrao,
1208
+ J.
1209
+ D.
1210
+ Thompson,
1211
+ Z. Fisk,
1212
+ J. W. Lynn,
1213
+ and R. W. Er-
1214
+ win, Incommensurate magnetic structure of CeRhIn5,
1215
+ Phys. Rev. B 62, R14621 (2000).
1216
+ [15] W. Bao, P. G. Pagliuso, J. L. Sarrao, J. D. Thompson,
1217
+ Z. Fisk, and J. W. Lynn, Magnetic structure of heavy-
1218
+ fermion Ce2RhIn8, Phys. Rev. B 64, 020401(R) (2001).
1219
+ [16] A. D. Christianson, J. M. Lawrence, P. G. Pagliuso, N. O.
1220
+ Moreno, J. L. Sarrao, J. D. Thompson, P. S. Risebor-
1221
+ ough, S. Kern, E. A. Goremychkin, and A. H. Lacerda,
1222
+ Neutron scattering study of crystal fields in CeRhIn5,
1223
+ Phys. Rev. B 66, 193102 (2002).
1224
+ [17] D. S. Christovam,
1225
+ C. Giles,
1226
+ L. Mendonca-Ferreira,
1227
+ J. Le˜ao, W. Ratcliff, J. W. Lynn, S. Ramos, E. N. Hering,
1228
+ H. Hidaka, E. Baggio-Saitovich, Z. Fisk, P. G. Pagliuso,
1229
+ and C. Adriano, Spin rotation induced by applied pres-
1230
+ sure in the Cd-doped Ce2RhIn8 intermetallic compound,
1231
+ Phys. Rev. B 100, 165133 (2019).
1232
+ [18] J. Pelleg, G. Kimmel, and D. Dayan, RGa6 (R= rare
1233
+ earth atom), a common intermetallic compound of the
1234
+ R-Ga systems, J. Less Common Met. 81, 33 (1981).
1235
+ [19] E. Lidstr¨om, R. W¨appling, O. Hartmann, M. Ekstr¨om,
1236
+ and G. M. Kalvius, A µSR and neutron scattering
1237
+ study of REGa6, where RE = Ce, Nd, Gd and Tb,
1238
+ J. Phys. Condens. Matter 8, 6281 (1996).
1239
+ [20] R. T. Macaluso, J. N. Millican, S. Nakatsuji, H. O.
1240
+ Lee, B. Carter, N. O. Moreno, Z. Fisk, and J. Y.
1241
+ Chan, A comparison of the structure and localized mag-
1242
+ netism in Ce2PdGa12 with the heavy fermion CePdGa6,
1243
+ J. Solid State Chem. 178, 3547 (2005).
1244
+ [21] J. Y. Cho, J. N. Millican, C. Capan, D. A. Sokolov,
1245
+ M. Moldovan, A. B. Karki, D. P. Young, M. C. Aronson,
1246
+ and J. Y. Chan, Crystal growth, structure, and physical
1247
+ properties of Ln2MGa12 (Ln = La, Ce; M = Ni, Cu),
1248
+ Chem. Mater. 20, 6116 (2008).
1249
+ [22] S.
1250
+ Nallamuthu,
1251
+ T.
1252
+ P.
1253
+ Rashid,
1254
+ V.
1255
+ Krishnakumar,
1256
+ C. Besnard, H. Hagemann, M. Reiffers, and R. Nagalak-
1257
+ shmi, Anisotropic magnetic, transport and thermody-
1258
+ namic properties of novel tetragonal Ce2RhGa12 com-
1259
+ pound, J. Alloys Compd. 604, 379 (2014).
1260
+ [23] O.
1261
+ Sichevych,
1262
+ C.
1263
+ Krellner,
1264
+ Y.
1265
+ Prots,
1266
+ Y.
1267
+ Grin,
1268
+ and
1269
+ F.
1270
+ Steglich,
1271
+ Physical
1272
+ prop-
1273
+ erties
1274
+ and
1275
+ crystal
1276
+ chemistry
1277
+ of
1278
+ Ce2PtGa12,
1279
+ J. Phys. Condens. Matter 24, 256006 (2012).
1280
+ [24] D.
1281
+ Gnida
1282
+ and
1283
+ D.
1284
+ Kaczorowski,
1285
+ Magnetism
1286
+ and
1287
+ weak
1288
+ electronic
1289
+ correlations
1290
+ in
1291
+ Ce2PdGa12,
1292
+ Journal of Physics: Condensed Matter 25, 145601 (2013).
1293
+ [25] Y. J. Zhang, B. Shen, F. Du, Y. Chen, J. Y. Liu,
1294
+ H. Lee, M. Smidman, and H. Q. Yuan, Structural
1295
+ and magnetic properties of antiferromagnetic Ce2IrGa12,
1296
+ Phys. Rev. B 101, 024421 (2020).
1297
+ [26] N. Kawamura, R. Sasaki, K. Matsubayashi, N. Ishimatsu,
1298
+ M. Mizumaki, Y. Uwatoko, S. Ohara, and S. Watan-
1299
+ abe, High pressure properties for electrical resistivity
1300
+ and Ce valence state of heavy-fermion antiferromagnet
1301
+ Ce2NiGa12, J. Phys. Conf. Ser. 568, 042015 (2014).
1302
+ [27] S. Ohara, T. Yamashita,
1303
+ T. Shiraishi,
1304
+ K. Matsub-
1305
+ ayashi, and Y. Uwatoko, Pressure effects on electrical
1306
+ resistivity of heavy-fermion antiferromagnet Ce2PdGa12,
1307
+ J. Phys. Conf. Ser. 400, 042048 (2012).
1308
+ [28] R.
1309
+ T.
1310
+ Macaluso,
1311
+ S.
1312
+ Nakatsuji,
1313
+ H.
1314
+ Lee,
1315
+ Z.
1316
+ Fisk,
1317
+ M.
1318
+ Moldovan,
1319
+ D.
1320
+ Young,
1321
+ and
1322
+ J.
1323
+ Y.
1324
+ Chan,
1325
+ Synthesis,
1326
+ structure,
1327
+ and
1328
+ magnetism
1329
+ of
1330
+ a
1331
+ new
1332
+ heavy-fermion
1333
+ antiferromagnet,
1334
+ CePdGa6,
1335
+ J. Solid State Chem. 174, 296 (2003).
1336
+ [29] H. Hegger, C. Petrovic, E. G. Moshopoulou, M. F.
1337
+ Hundley, J. L. Sarrao, Z. Fisk, and J. D. Thomp-
1338
+ son, Pressure-induced superconductivity in Quasi-2D
1339
+ CeRhIn5, Phys. Rev. Lett. 84, 4986 (2000).
1340
+ [30] B. E. Light, R. S. Kumar, A. L. Cornelius, P. G. Pagliuso,
1341
+ and J. L. Sarrao, Heat capacity studies of Ce and Rh
1342
+ site substitution in the heavy-fermion antiferromagnet
1343
+ CeRhIn5 : Short-range magnetic interactions and non-
1344
+ Fermi-liquid behavior, Phys. Rev. B 69, 024419 (2004).
1345
+ [31] P. G. Pagliuso, N. O. Moreno, N. J. Curro, J. D. Thomp-
1346
+ son, M. F. Hundley, J. L. Sarrao, Z. Fisk, A. D. Chris-
1347
+ tianson, A. H. Lacerda, B. E. Light, and A. L. Cor-
1348
+ nelius, Ce-site dilution studies in the antiferromagnetic
1349
+ heavy fermions CemRhnIn3m+2n (m = 1, 2; n = 0, 1),
1350
+ Phys. Rev. B 66, 054433 (2002).
1351
+ [32] S. de Medeiros, M. Continentino, M. Orlando, M. Fontes,
1352
+ E. Baggio-Saitovitch, A. Rosch, and A. Eichler, Quan-
1353
+ tum critical point in CeCo(Ge1−xSix)3: Oral presenta-
1354
+ tion, Physica B: Condensed Matter 281, 340 (2000).
1355
+ [33] M. T. Hutchings, Point-charge calculations of energy
1356
+ levels of magnetic ions in crystalline electric fields, in
1357
+ Solid state physics, Vol. 16 (Elsevier, 1964) pp. 227–273.
1358
+ [34] J.
1359
+ Jensen
1360
+ and
1361
+ A.
1362
+ R.
1363
+ Mackintosh,
1364
+ Rare earth magnetism: structures and excitations
1365
+ (Oxford University Press, 1991).
1366
+ [35] M.
1367
+ Rotter,
1368
+ Using
1369
+ McPhase
1370
+ to
1371
+ calculate
1372
+ mag-
1373
+ netic
1374
+ phase
1375
+ diagrams
1376
+ of
1377
+ rare
1378
+ earth
1379
+ compounds,
1380
+ J. Magn. Magn. Mater. 272, E481 (2004).
1381
+ [36] B. Li, Y. Sizyuk, N. S. Sangeetha, J. M. Wilde, P. Das,
1382
+ W. Tian, D. C. Johnston, A. I. Goldman, A. Kreyssig,
1383
+ P. P. Orth, R. J. McQueeney, and B. G. Ueland, Antifer-
1384
+ romagnetic stacking of ferromagnetic layers and doping-
1385
+ controlled phase competition in Ca1−xSrxCo2−yAs2,
1386
+ Phys. Rev. B 100, 024415 (2019).
1387
+ [37] M. Smidman, D. T. Adroja, A. D. Hillier, L. C. Chapon,
1388
+ J. W. Taylor,
1389
+ V. K. Anand,
1390
+ R. P. Singh,
1391
+ M. R.
1392
+ Lees, E. A. Goremychkin, M. M. Koza, V. V. Kr-
1393
+ ishnamurthy, D. M. Paul, and G. Balakrishnan, Neu-
1394
+ tron scattering and muon spin relaxation measurements
1395
+ of the noncentrosymmetric antiferromagnet CeCoGe3,
1396
+ Phys. Rev. B 88, 134416 (2013).
1397
+ [38] P. Das, S.-Z. Lin, N. J. Ghimire, K. Huang, F. Ron-
1398
+ ning, E. D. Bauer, J. D. Thompson, C. D. Batista,
1399
+ G. Ehlers, and M. Janoschek, Magnitude of the Magnetic
1400
+ Exchange Interaction in the Heavy-Fermion Antiferro-
1401
+ magnet CeRhIn5, Phys. Rev. Lett. 113, 246403 (2014).
1402
+ [39] O. Stockert, E. Faulhaber, G. Zwicknagl, N. St¨ußer, H. S.
1403
+ Jeevan, M. Deppe, R. Borth, R. K¨uchler, M. Loewen-
1404
+ haupt, C. Geibel, and F. Steglich, Nature of the A Phase
1405
+ in CeCu2Si2, Phys. Rev. Lett. 92, 136401 (2004).
1406
+ [40] E. A. Goremychkin and R. Osborn, Crystal-field excita-
1407
+ tions in CeCu2Si2, Phys. Rev. B 47, 14280 (1993)
1408
+ .
1409
+
1tAzT4oBgHgl3EQfe_wu/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
2tE2T4oBgHgl3EQf5gjq/content/tmp_files/2301.04192v1.pdf.txt ADDED
@@ -0,0 +1,1706 @@
 
 
 
 
 
 
1
+ arXiv:2301.04192v1 [math.AG] 10 Jan 2023
2
+ QUANTIZATIONS OF LOCAL CALABI–YAU THREEFOLDS
3
+ AND THEIR MODULI OF VECTOR BUNDLES
4
+ E. BALLICO, E. GASPARIM, F. RUBILAR, B. SUZUKI
5
+ Abstract. We describe the geometry of noncommutative deformations of local Calabi–Yau
6
+ threefolds, showing that the choice of Poisson structure strongly influences the geometry of the
7
+ quantum moduli space.
8
+ Contents
9
+ 1.
10
+ Introduction
11
+ 1
12
+ 2.
13
+ Noncommutative deformations
14
+ 2
15
+ 3.
16
+ Vector bundles on noncommutative deformations
17
+ 3
18
+ 4.
19
+ Moduli of bundles on noncommutative deformations
20
+ 6
21
+ 5.
22
+ Quantum moduli of bundles on W1
23
+ 8
24
+ 6.
25
+ Quantum moduli of bundles on W2
26
+ 11
27
+ Appendix A.
28
+ Computations of H1
29
+ 15
30
+ References
31
+ 16
32
+ 1. Introduction
33
+ We discuss moduli of vector bundles on those noncommutative local Calabi–Yau threefolds that
34
+ occur in noncommutative crepant resolutions of the generalised conifolds xy − znwm = 0. Such
35
+ crepant resolutions require lines of type (−1, −1) and (−2, 0), that is, those locally modelled by
36
+ W1 := Tot(OP1(−1) ⊕ OP1(−1))
37
+ or
38
+ W2 := Tot(OP1(−2) ⊕ OP1(0)).
39
+ Their appearance is balanced in a precise sense described in [GKMR] so that no particular
40
+ configuration of such lines is more likely to occur in a crepant resolution than any other.
41
+ Our results show that the structure of the quantum moduli space (Def. 4.3) of vector bundles
42
+ over a noncommutative deformation varies drastically depending on the choice of a Poisson
43
+ structure.
44
+ In the 2-dimensional case, [BG] described the geometry of noncommutative deformations of the
45
+ local surfaces Zk := Tot(OP1(−k)), showing that the quantum moduli space of instantons over
46
+ a noncommutative deformation (Zk, σ) can be viewed as the ´etale space of a constructible sheaf
47
+ over the classical moduli space of instantons on Zk. While in 2 dimensions vector bundles occur
48
+ as mathematical representations of instantons, in the 3-dimensional case vector bundles occur
49
+ as mathematical descriptions of BPS states, with W1 and W2 appearing as building blocks, as
50
+ described in [GKMR, GSTV, OSY].
51
+ In this work, we describe the geometry of noncommutative deformations W of a Calabi–Yau threefold W, showing that the quantum moduli space of vector bundles on W, together with the map taking a vector bundle on W to its classical limit, M^ℏ_j(W, σ) → M_j(W),
+ has the structure of a constructible sheaf, whose rank and singularity set depend explicitly on
62
+ the choice of noncommutative deformation. In particular, we describe the geometry of noncom-
63
+ mutative deformations of some crepant resolutions. It is at this point yet unclear how these
64
+ compare with Van den Bergh’s noncommutative crepant resolutions [V].
65
+ To each Poisson structure σ on Wk, with k = 1 or k = 2 there corresponds a noncommutative
66
+ deformation (Wk, Aσ) with Aσ = (O[[ℏ]], ⋆σ) where ⋆σ is the star product corresponding to σ. All
67
+ of these Poisson structures were described in [BGKS] in terms of generators over global functions;
68
+ when σ is one of such generators, we refer to it as a basic Poisson structure. There exist Poisson
69
+ structures for which all brackets vanish on the first formal neighbourhood of P1 ⊂ Wk; we call
70
+ them extremal Poisson structures, they behave very differently from the basic ones. Our main
71
+ results are:
72
+ Theorem (5.4, 6.4). Let k = 1 or 2. If σ is an extremal Poisson structure on Wk, then the quantum moduli space M^ℏ_j(Wk, σ) can be viewed as the étale space of a constructible sheaf Ek of generic rank 2j − k − 1 over the classical moduli space Mj(Wk) with singular stalks of all ranks up to 4j − k − 4.
+ If σ′ is another Poisson structure on Wk, then the corresponding sheaf E′k is a subsheaf of Ek, with the smallest possible sheaf occurring for basic Poisson structures.
+ Theorem (5.2, 6.2). Let k = 1 or 2. If σ is a basic Poisson structure on Wk, then the quantum moduli space M^ℏ_j(Wk, σ) and its classical limit are isomorphic:
+ M^ℏ_j(Wk, σ) ≃ Mj(Wk) ≃ P^{4j−5}.
+ Therefore, comparing these results, we see that the choice of Poisson structure has a strong
86
+ influence on the geometry of the quantum moduli space.
87
+ 2. Noncommutative deformations
88
+ A holomorphic Poisson structure on a complex manifold (or smooth complex algebraic variety)
89
+ X is given by a holomorphic bivector field σ ∈ H0(X, Λ2TX) whose Schouten–Nijenhuis bracket
90
+ [σ, σ] ∈ H0(X, Λ3TX) is zero. The associated Poisson bracket is then given by the pairing ⟨ · , · ⟩
91
+ between vector fields and forms {f, g}σ = ⟨σ, df ∧ dg⟩.
92
+ To obtain a noncommutative deformation of X one must first promote the Poisson structure to
93
+ a
94
+ star product on X, that is, a C[[ℏ]]-bilinear associative product ⋆: OX[[ℏ]] × OX[[ℏ]] → OX[[ℏ]]
95
+ which is of the form f ⋆ g = fg + �∞
96
+ n=1 Bn(f, g) ℏn where the Bn are bidifferential operators.
97
+ The pair (X, ⋆σ) is called a deformation quantization of (X, σ) when the star product on X
98
+ satisfies B1(f, g) = {f, g}σ.
99
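As a small worked illustration, using only the definition above: on the chart U ⊂ Wk with the basic Poisson structure σ1 = ∂z ∧ ∂u1 one has {z, u1}σ1 = 1 and {z, u2}σ1 = 0, so to first order in ℏ any star product quantizing σ1 satisfies
\[ z \star u_1 = z u_1 + \hbar + O(\hbar^2), \qquad u_1 \star z = z u_1 - \hbar + O(\hbar^2), \qquad [z, u_1]_\star = 2\hbar + O(\hbar^2), \]
while z and u2 still commute to first order.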
+ For a holomorphic Poisson manifold (X, σ) with associated Poisson bracket { · , · }σ, the sheaf
100
+ of formal functions with holomorphic coefficients on the quantization (X, ⋆σ) is
101
+ Aσ := (O[[ℏ]], ⋆σ).
102
+
103
+ We call Wk(σ) = (Wk, Aσ) a noncommutative deformation of Wk, and a vector bundle on a
106
+ noncommutative deformation is by definition a locally free sheaf of Aσ-modules. These vector
107
+ bundles and their moduli are our objects of study here.
108
+ When we work with a fixed Poisson structure, we use the abbreviated notations A, { · , · } and
109
+ ⋆. We also use the cut to order n represented as A(n) = O[[ℏ]]/ℏn+1.
110
+ The existence of star products on Poisson manifolds was proven in the seminal papers of Kontse-
111
+ vich [Ko1, Ko2]. For a complex algebraic variety X with structure sheaf OX, if both H1(X, OX)
112
+ and H2(X, OX) vanish, then there is a bijection
113
+ {Poisson deformations of OX}/∼ ↔ {associative deformations of OX}/∼
114
+ where ∼ denotes gauge equivalence [Y, Cor. 11.2]. These cohomological hypotheses are verified
115
+ in the cases of W1 and W2 (but not for W3, see App. A). We now recall the basic properties of
116
+ Poisson structures on Wk for k = 1, 2. All Poisson structures on Wk may be described by giving
117
+ their generators over global functions. This is a consequence of the following result.
118
+ Lemma 2.1. [BGKS, Prop. 1] Let X be a smooth complex threefold and σ a Poisson structure
119
+ on X, then fσ is integrable for all f ∈ O(X).
120
+ Local Calabi–Yau threefolds. For k ≥ 1, we set
121
+ Wk = Tot(OP1(−k) ⊕ OP1(k − 2)).
122
+ The canonical charts for the complex manifold structure of Wk are obtained by gluing the open sets U = C³_{z, u1, u2} and V = C³_{ξ, v1, v2} by the relation (ξ, v1, v2) = (z^{−1}, z^{k} u_1, z^{−k+2} u_2).
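For instance, specialising the gluing relation above: for k = 1 it reads (ξ, v1, v2) = (z^{−1}, z u_1, z u_2), while for k = 2 it reads (ξ, v1, v2) = (z^{−1}, z^{2} u_1, u_2), exhibiting W1 = Tot(OP1(−1) ⊕ OP1(−1)) and W2 = Tot(OP1(−2) ⊕ OP1(0)).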
+ All Poisson structures on W1 can be obtained using the following generators [BGKS, Thm. 3.2]
132
+ σ1 = ∂z ∧ ∂u1,
133
+ σ2 = ∂z ∧ ∂u2,
134
+ σ3 = u1∂u1 ∧ ∂u2 − z∂z ∧ ∂u2,
135
+ σ4 = u2∂u1 ∧ ∂u2 + z∂z ∧ ∂u1.
136
+ The W1-Poisson structures σ1, σ2, σ3, σ4 are pairwise isomorphic.
137
+ All Poisson structures on W2 can be obtained using the following generators [BGKS, Lem. 3]
138
+ σ1 = ∂z ∧ ∂u1,
139
+ σ2 = ∂z ∧ ∂u2,
140
+ σ3 = z∂z ∧ ∂u2,
141
+ σ4 = u1∂u1 ∧ ∂u2,
142
+ σ5 = 2zu1∂u1 ∧ ∂u2 − z2∂z ∧ ∂u2.
143
+ The W2-Poisson structures σ2 and σ5 are isomorphic.
144
+ Moreover, the Poisson structures
145
+ σ1, σ2, σ3, σ4 on W2 are pairwise inequivalent, giving 4 distinct Poisson manifolds.
146
+ 3. Vector bundles on noncommutative deformations
147
+ To discuss moduli of vector bundles on noncommutative deformations of Wk, for k = 1 or 2 we
148
+ will consider those bundles that are formally algebraic.
149
+ Definition 3.1. We say that p = � pnℏn ∈ O[[ℏ]] is formally algebraic if pn is a polynomial
150
+ for every n. We say that a vector bundle over (Wk, σ) is formally algebraic if it is isomorphic to
151
+ a vector bundle given by formally algebraic transition functions. In addition, if there exists N
152
+ such that pn = 0 for all n > N, we then say that p is algebraic.
153
+ Lemma 3.2. Let A be a deformation quantization of O. Then an A-module S is acyclic if and
154
+ only if S̄ = S/ℏS is acyclic.
155
+
156
+ Proof. Consider the short exact sequence
159
+ 0 \longrightarrow S \xrightarrow{\ \hbar\ } S \longrightarrow \bar{S} \longrightarrow 0.
+ It gives, for j > 0, surjections
+ H^{j}(X, S) \xrightarrow{\ \hbar\ } H^{j}(X, S) \longrightarrow 0.
+ This immediately implies that Hj(X, S) = 0 for j > 0. The converse is immediate.
167
+
168
+ Notation 3.3. Let Wk be a noncommutative deformation of Wk. Denote by A(j) the line
169
+ bundle over Wk with transition function z−j, hence the pull back of O(j) on P1.
170
+ Proposition 3.4. For k = 1, 2 any line bundle on Wk is isomorphic to A(j) for some j ∈ Z,
171
+ i.e., Pic(Wk) = Z when k = 1, 2.
172
+ Proof. Let f = f0 + �∞
173
+ n=1 �fn ℏn ∈ A∗(U ∩ V ) be the transition function for the line bundle L.
174
+ Then there exist functions a0 ∈ O∗(U) and α0 ∈ O∗(V ) such that α0f0a0 = z−j and viewing
175
+ a0 resp. α0 as elements in A∗(U) resp. A∗(V ) one has α0 ⋆ f ⋆ a0 = z−j + �∞
176
+ n=1 fnℏn for some
177
+ fn ∈ O(U ∩ V ). We may thus assume that the transition function of L is z−j + �∞
178
+ n=1 fnℏn.
179
+ To give an isomorphism L ≃ A(j) it suffices to define functions an ∈ O(U) and αn ∈ O(V )
180
+ satisfying
181
+ \Big(1 + \sum_{n=1}^{\infty} \alpha_n\hbar^{n}\Big) \star \Big(z^{-j} + \sum_{n=1}^{\infty} f_n\hbar^{n}\Big) \star \Big(1 + \sum_{n=1}^{\infty} a_n\hbar^{n}\Big) = z^{-j}. \qquad (3.5)
+ Collecting terms by powers of ℏ, (3.5) is equivalent to the system of equations
189
+ S_n + z^{-j}a_n + z^{-j}\alpha_n = 0, \qquad n = 1, 2, \dots
+ where S_n is a finite sum involving f_i, B_i for i ≤ n, but only a_i, α_i for i < n. The first terms are
+ S_1 = f_1
+ S_2 = f_2 + \alpha_1 f_1 + a_1 f_1 + B_1(\alpha_1, z^{-j}) + B_1(z^{-j}, a_1) + \alpha_1 z^{-j} a_1
+ S_3 = f_3 + B_2(\alpha_1, z^{-j}) + B_2(z^{-j}, a_1) + B_1(\alpha_2, z^{-j}) + B_1(z^{-j}, a_2) + B_1(\alpha_1, f_1) + B_1(\alpha_1, z^{-j}a_1) + B_1(z^{-j}, a_1) + \alpha_2 f_1 + \alpha_2 z^{-j} a_1 + \alpha_1 f_2 + \alpha_1 f_1 a_1 + \alpha_1 z^{-j} a_2 + f_2 a_1 + f_1 a_2
+ Since by Lem. A.1 we have H1(Wk, O) = 0 when k = 1, 2, we can solve these equations recur-
213
+ sively, by defining an to cancel out all terms of zjSn having positive powers of z and setting
214
+ αn = zjSn − an.
215
+
216
+ Note that this is essentially the same proof as [BG, Prop. 6.7], and it does not work for k ≥ 3,
217
+ in fact Pic(W3) is much larger, see Lem. A.3.
218
+ We now consider vector bundles of higher rank.
219
+ Theorem 3.6. For k = 1, 2, vector bundles over Wk(σ) are filtrable.
220
+ Proof. This is a generalisation of Ballico–Gasparim–K¨oppe [BGK1, Thm. 3.2] to the noncom-
221
+ mutative case. Let E be a sheaf of A-modules. Lem. 3.2 gives that the classical limit E0 = E/ℏE
222
+ is acyclic as a sheaf of A-modules (and equivalently as a sheaf of O-modules) if and only if E is
223
+ acyclic as a sheaf of A-modules.
224
+ Filtrability for a bundle E over Wk, for k = 1, 2 was proved in [K] and is obtained from the
225
+ vanishing of cohomology groups Hi(Wk, E ⊗ SymnN ∗) for i = 1, 2, where N ∗ is the conormal
226
+
227
+ bundle of ℓ ⊂ Wk and n > 0 are integers, the proof proceeds by induction on n.
230
+ In the
231
+ noncommutative case, let S denote the kernel of the projection A(n) → A(n−1). By construction
232
+ we have that S/ℏS = SymnN ∗ and the required vanishing of cohomologies is guaranteed by
233
+ Lem. 3.2.
234
+
235
+ The analogous proof does not work for W3, see [K, Rem. 3.13]. It is unknown whether bundles
236
+ on Wk are filtrable when k ≥ 3.
237
+ Remark 3.7. There are also some particular features happening only when k = 1.
238
+ Every
239
+ holomorphic vector bundle on W1 is algebraic [K, Thm. 3.10], and W1 is formally rigid [GKRS,
240
+ Thm. 11]. In contrast, if k > 1, then Wk has an infinite-dimensional family of deformations. In particular, a deformation family for W2 can be given by (ξ, v1, v2) = \left(z^{-1},\; z^{2}u_1 + z\sum_{j>0} t_j u_2^{j},\; u_2\right)
+ [GKRS, Thm. 13] and this family contains infinitely many distinct manifolds [BGS, Thm. 1.13].
248
+ Furthermore, for k > q > 0, Wk can be deformed to Wq [BGS, Thm. 1.28].
249
+ For each Poisson manifold (Wk, σ), we want to study moduli spaces of vector bundles over
250
+ (Wk, ⋆) where ⋆ is the corresponding star product.
251
+ [K, Prop. 3.1] showed that a rank 2 bundle E on Wk with first Chern class c1(E) = 0 is deter-
252
+ mined by a canonical transition matrix
+ \begin{pmatrix} z^{j} & p \\ 0 & z^{-j} \end{pmatrix}
+ where, using ǫ = 0, 1 we have:
+ p = \sum_{s=\epsilon}^{2j-2} \sum_{i=1-\epsilon}^{2j-2-s} \sum_{l=i+s-j+1}^{j-1} p_{lis}\, z^{l} u_1^{i} u_2^{s} \qquad \text{for } k = 1, \qquad (3.8)
+ and
+ p = \sum_{s=\epsilon} \sum_{i=1-\epsilon}^{j-1} \sum_{l=2i-j+1}^{j-1} p_{lis}\, z^{l} u_1^{i} u_2^{s} \qquad \text{for } k = 2. \qquad (3.9)
+ Accordingly, for a noncommutative deformation (Wk, σ) we define the notion of canonical transition matrix as:
+ T = \begin{pmatrix} z^{j} & p \\ 0 & z^{-j} \end{pmatrix} \quad \text{with} \quad p = \sum_{n=0}^{\infty} p_n \hbar^{n} \in \operatorname{Ext}^1(A(j), A(-j)). \qquad (3.10)
+ Where we have that each pn can be given the same canonical form of the classical case, which
308
+ can be seen using:
309
+ Lemma 3.11. Let A be a deformation quantization of OWk with k = 1 or 2. There is an
+ injective map of C-vector spaces
+ Ext1_A(A(j), A(−j)) → ⊕_{n=0}^{∞} Ext1_O(O(j), O(−j)) ℏ^n ≃ Ext1_O(O(j), O(−j))[[ℏ]]
+ p = p0 + Σ_{n=1}^{∞} pn ℏ^n  ↦  (p0, p1ℏ, p2ℏ^2, . . . )
+ where pi ∈ Ext1(O(j), O(−j)).
+ Proof. Ext1_A(A(j), A(−j)) is the quotient of Ext1_O(O(j), O(−j))[[ℏ]] by the relations
+ qn ≃ qn + Σ pi pn−i.
330
+
331
+
332
334
+ We wish to describe the structure of moduli spaces of vector bundles on Wk. Using the results
335
+ of this section, we may proceed analogously to the classical (commutative) setup, to extract
336
+ moduli spaces out of extension groups of line bundles, by considering extension classes up to
337
+ bundle isomorphism.
338
+ 4. Moduli of bundles on noncommutative deformations
339
+ We recall the notion of isomorphism of vector bundles on a noncommutative deformation of Wk.
340
+ Definition 4.1. Let E and E′ be vector bundles over (Wk, σ) defined by transition matrices T
341
+ and T ′ respectively. An isomorphism between E and E′ is given by a pair of matrices AU and
342
+ AV with entries in Aσ(U) and Aσ(V ), respectively, which are invertible with respect to ⋆ and
343
+ such that
344
+ T ′ = AV ⋆ T ⋆ AU.
345
+ Notation 4.2. Denoting by Ext1
346
+ Alg(A(j), A(−j)) the subset of formally algebraic extension
347
+ classes, we denote by Mj(Wk) the quotient
348
+ Mj(Wk) := Ext1
349
+ Alg(A(j), A(−j))/∼
350
+ consisting of those classes of formally algebraic vector bundles (Def. 3.1), whose classical limit is a
351
+ stable vector bundle of charge j. Here ∼ denotes bundle isomorphism as in Def. 4.1 and following
352
+ [BGK2] stability means that the classical limit does not split on the 0-th formal neighbourhood.
353
+ We denote by Mℏn
354
+ j (Wk, σ) the moduli of bundles obtained by imposing the cut-off ℏn+1 = 0,
355
+ that is, the superscript ℏn means quantised to level n.
356
+ Note that Mj(Wk, σ) := Mℏ0
357
+ j (Wk, σ) = Mj(Wk) recovers the classical moduli space obtained
358
+ when ℏ = 0, while Mℏ
359
+ j(Wk, σ) denotes the moduli on the first order quantization, which will be
360
+ the focus of this work. Accordingly:
361
+ Definition 4.3. We call Mj(Wk, σ) the classical moduli space and Mℏ
362
+ j(Wk, σ) the quantum
363
+ moduli space of bundles on Wk.
364
+ Lemma 4.4. [BGS, Thm. 2.7] The classical moduli spaces of vector bundles of rank 2 and
365
+ splitting type j on Wk has dimension 4j − 5.
366
+ Definition 4.5. The splitting type of a vector bundle E on (Wk, σ) is the one of its classical
367
+ limit [BG, Def. 5.2]. Hence, when the classical limit is an SL(2, C) bundle, the splitting type of
368
+ E is the smallest integer j such that E can be written as an extension of A(j) by A(−j).
369
+ We fix a splitting type j and look at rank 2 bundles on the first formal neighbourhood ℓ(1) of
370
+ ℓ ≃ P1 ⊂ W1 together with their extensions up to first order in ℏ. We now calculate isomorphism
371
+ classes. Let p + p′ℏ and q + q′ℏ be two extension classes in Ext1
372
+ A(A(j), A(−j)) which are of
373
+ splitting type j, i.e. in canonical U-coordinates p, p′, q, q′ are multiples of u1, u2.
374
+ According to Def. 4.1 bundles defined by p + p′ℏ and q + q′ℏ are isomorphic, if there exist
375
+ invertible matrices
376
+ ( a + a′ℏ   b + b′ℏ )          ( α + α′ℏ   β + β′ℏ )
+ ( c + c′ℏ   d + d′ℏ )   and    ( γ + γ′ℏ   δ + δ′ℏ )
+ whose entries are holomorphic on U and V , respectively, such that
+ ( α + α′ℏ   β + β′ℏ ) ( zj   q + q′ℏ )     ( zj   p + p′ℏ ) ( a + a′ℏ   b + b′ℏ )
+ ( γ + γ′ℏ   δ + δ′ℏ ) ( 0    z−j     )  =  ( 0    z−j     ) ( c + c′ℏ   d + d′ℏ ).        (4.6)
413
+
414
416
+ We wish to determine the constraints such an isomorphism imposes on the coefficients of q and
417
+ q′. This is more conveniently rewritten by multiplying by the right-inverse of ( zj  q + q′ℏ ; 0  z−j ),
+ which (modulo ℏ2) is
+ ( z−j   −q − q′ℏ + 2z−j{zj, q}ℏ )
+ ( 0     zj                      ).
431
+ We have that the zero section ℓ ≃ P1 is cut out inside Wk by u1 = u2 = 0. Hence, the n-th
432
+ formal neighbourhood of ℓ is by definition ℓ(n) = OW1/I^{n+1} where I = ⟨u1, u2⟩. So, on ℓ(1) we
+ have that u1^2 = u2^2 = u1u2 = 0 and therefore we may write
437
+ a = a0 + a^1_1 u1 + a^2_1 u2,        α = α0 + α^1_1 u1 + α^2_1 u2,
+ etc., where a^i_1, α^i_1, etc. are holomorphic functions of z.
446
+ Following the details of the proof of [G, Prop. 3.3] we assume in (4.6) that a0 = α0, d0 = δ0 are
447
+ constant and b = β = 0. Since we already know that on the classical limit the only equivalence
448
+ on ℓ(1) is projectivization [G, Prop. 3.2], we assume p = q, keeping in mind a projectivization to
449
+ be done in the end. We may also assume that the determinants of the changes of coordinates
450
+ on the classical limit are 1. Accordingly, we rewrite (4.6) as:
451
+ ( α + α′ℏ   β′ℏ     )     ( zj   p + p′ℏ ) ( a + a′ℏ   b′ℏ     ) ( z−j   −p − q′ℏ + 2{zj, p}z−jℏ )
+ ( γ + γ′ℏ   δ + δ′ℏ )  =  ( 0    z−j     ) ( c + c′ℏ   d + d′ℏ ) ( 0     zj                      )        (4.7)
475
+ where a0 = d0 = α0 = δ0 = 1.
476
+ Since we already know the moduli in the classical limit, we only need to study terms containing
477
+ ℏ, which after multiplying are:
478
+ (1, 1) = a′ + {zja, z−j} + {zj, a}z−j + {pc, z−j} + {p, c}z−j + (pc′ + p′c)z−j
479
+ (1, 2) = {p, d}zj − {a, p}zj − {zj, a}p + {zj, p}a + {pd, zj} + 2z−j{zj, p}pc + z2jb′
480
+ − (pa′ + q′a)zj + (pd′ + p′d)zj − (pc′ + p′c + q′c)p
481
+ (2, 1) = z−2jc′
482
+ (2, 2) = d′ + {z−jd, zj} + {z−j, d}zj − {z−jc, p} − {z−j, c}p − (pc′ + q′c)z−j + 2{zj, p}z−2jc.
483
+ All four terms must be adjusted using the free variables to only contain expressions which are
484
+ holomorphic on V to satisfy (4.7). For example, in the (2, 1) term this condition is satisfied
485
+ precisely when c′ is a section of O(2j). Computing Poisson brackets, we see that the (1, 1) and
486
+ (2, 2) terms can always be made holomorphic on V by appropriate choices of c and d′, leaving
487
+ the coefficients of a′ free. We will need to use these free coefficients for the next step.
488
+ It remains to analyse the (1, 2) term. Because we are working on the first formal neighbourhood
489
+ of ℓ, terms in u2
490
+ 1, u1u2, u2
491
+ 2 or higher vanish (recall that we assume that p, p′, q′ are multiples of u1
492
+ or u2). Since z2jb′ is there to cancel out any possible terms having power of z greater or equal
493
+ to 2j, we remove it from the expression, keeping in mind that we only need to cancel out the
494
+ coefficients of the monomials ziu1 and ziu2 with i ≤ 2j − 1 in the expression:
495
+ (1, 2) = {p, d+a}zj −{zj, a}p+{zj, p}a+{pd, zj}+2z−j{zj, p}pc+p(d′−a′)zj +(p′−q′)zj. (�)
496
+ To determine the quantum moduli spaces, we must verify what restrictions are imposed on q′ so
497
+ that p′ and q′ define isomorphic bundles. Since this requires computing brackets, the analysis
498
+ must be carried out separately for each noncommutative deformation.
499
+
500
502
+ 5. Quantum moduli of bundles on W1
503
+ The Calabi–Yau threefold we consider in this section is the crepant resolution of the conifold
504
+ singularity xy − zw = 0, that is,
505
+ W1 := Tot(OP1(−1) ⊕ OP1(−1)).
506
+ We will carry out calculations using the canonical coordinates W1 = U ∪ V where U ≃ C3 ≃ V
507
+ with U = {z, u1, u2}, V = {ξ, v1, v2}, and change of coordinates on U ∩ V ≃ C∗ × C × C given
508
+ by
509
+ { ξ = z−1 ,   v1 = zu1 ,   v2 = zu2 } .
513
+ Consequently, global functions on W1 are generated over C by the monomials 1, u1, zu1, u2, zu2.
514
+ For each specific noncommutative deformation (W1, Aσ), we wish to compare the quantum and
515
+ classical moduli spaces of vector bundles, see Def. 4.3.
516
+ This is part of the general quest to
517
+ understand how deformations of a variety affect moduli of bundles on it, and it is worth noting
518
+ that no commutative deformation of W1 is known to exist.
519
+ For a rank 2 bundle E on a noncommutative deformation W1 with a canonical matrix
520
+ ( zj  p ; 0  z−j )
+ as in (3.10) where p = Σ_{n=0}^{∞} pn ℏ^n, expression (3.8) gives us the general form of the coefficients
+ pn. In particular, on the first formal neighbourhood, we have:
+ p = Σ_{l=−j+2}^{j−1} p_{l10} z^l u1 + Σ_{l=−j+2}^{j−1} p_{l01} z^l u2,        (5.1)
539
+ where p = 0 if j = 1.
540
+ Each noncommutative deformation comes from some Poisson structure which determines the first
541
+ order terms of the corresponding star product, see Sec. 2. The most basic Poisson structures σ
542
+ on W1 are those which generate all others over global functions. We call these generators the
543
+ basic Poisson structures.
544
+ Theorem 5.2. If σ is a basic Poisson structure on W1, then the quantum moduli space Mℏj(W1, σ)
+ and its classical limit are isomorphic:
+ Mℏj(W1, σ) ≃ Mj(W1) ≃ P4j−5.
549
+ Proof. We perform the computations using the bracket σ1 = ∂z ∧ ∂u1; the choice of such a
550
+ generator is irrelevant, since all the 4 generators give pairwise isomorphic Poisson manifolds. To
551
+ obtain an isomorphism, we need to cancel out all coefficients of the terms
552
+ z2u1, . . . , z2j−1u1
553
+ and
554
+ z2u2, . . . , z2j−1u2
555
+ appearing in expression �. Calculating σ1 brackets, we have {zj, f} = j z^{j−1} ∂f/∂u1, and the
+ following expressions for a and d coming from the classical part,
+ a = 1 + a^1_1 u1 + a^2_1 u2,        d = 1 − a^1_1 u1 − a^2_1 u2,
+ where a^i_1 and d^i_1 are functions of z, give ∂a/∂u1 = a^1_1 and ∂d/∂u1 = −a^1_1, so that
+ {p, d} − {a, p}zj = −2 ( (∂p/∂z) a^1_1 − (∂a/∂z)(∂p/∂u1) ) zj. Therefore, expression � becomes
+ � = −2 ( (∂p/∂z) a^1_1 − (∂a/∂z)(∂p/∂u1) ) zj + 2j ( (∂p/∂u1)(a^1_1 u1 + a^2_1 u2 + pc) ) z^{j−1}
+ + p(d′ − a′)zj + (p′ − q′)zj.
600
+ Now we need to cancel out separately the coefficients of each monomial ziu1 and ziu2 for 2 ≤
601
+ i ≤ 2j − 1, that is, all those terms potentially giving nonholomorphic functions. To determine
602
+
603
605
+ the classes in the moduli space we need to verify what constraints are imposed on q′. Take for
606
+ instance the monomial ziu1 in (p′d − q′a)zj. Since a′ remains free we can always choose its
607
+ corresponding coefficient in order to cancel out the term in ziu1 in the entire expression of (1, 2).
608
+ Indeed, notice that the expressions p(d′ − a′)zj and (p′d − q′a)zj contain monomials of the same
609
+ orders, all of which may be adjusted to zero by choosing a′. Moreover the first three summands
610
+ in � also contain the same list of monomials, hence may also be absorbed by the appropriate
611
+ choices of coefficients of a, a′ and c.
612
+ Since this process can be independently carried out for each monomial, we then conclude that
613
+ the expression � can be made holomorphic on V for any choice of q′.
614
+ Hence, there are no
615
+ restrictions on q′. Thus, we obtain an equivalence p + p′ℏ ∼ p + q′ℏ for all q′ and the projection
616
+ onto the classical limit (the first coordinate)
617
+ π1 : Mℏ
618
+ j(W1, σ) → Mj(W1)
619
+ taking (p, p′) to p is an isomorphism. The isomorphism type of the moduli space is given in
620
+ [BGS, Lem. 6.2] as P4j−5.
621
+
622
+ We now calculate the quantum moduli space for the particular choice of splitting type j = 2 and
623
+ for a different choice of Poisson structure on W1. We use the notation p ∈ Mj(W1) to refer to
624
+ a point in the classical moduli space, that is, a rank 2 bundle is labelled by its extension class.
625
+ Example 5.3 (j = 2 and σ = u1σ1). Here we write
626
+ p = p0zu1 + p1u1 + p2zu2 + p3u2,
627
+ p′ = p′0zu1 + p′1u1 + p′2zu2 + p′3u2.
632
+ for the first order part of the extension class, where we have renamed the coefficients to simplify
633
+ notation ( p0 := p110, p1 := p010, p2 := p101, p3 := p001). Lem. 2.1 implies that σ = u1σ1 is also
634
+ a Poisson structure on W1. With this choice, all brackets acquire an extra u1 in comparison
635
+ to the bracket σ1 used in the proof of Thm. 5.2, so that in the first formal neighbourhood the
636
+ (1, 2)-term described in � simplifies to just:
637
+ � = z2p(d′ − a′) + z2(p′ − q′).
638
+ Here a′ = a′0 + a′1u1 + a′2u2,    d′ = d′0 − d′1u1 − d′2u2, so that
642
+ d′ − a′ = (d′ − a′)0 + (d′ − a′)1u1 + (d′ − a′)2u2.
643
+ Hence, the total expression of � is
644
+
645
+ � = (p0z3u1 + p1z2u1 + p2z3u2 + p3z2u2)((d′ − a′)0 + (d′ − a′)1u1 + (d′ − a′)2u2)
+ + (p′0 − q′0)z3u1 + (p′1 − q′1)z2u1 + (p′2 − q′2)z3u2 + (p′3 − q′3)z2u2,
+ where we canceled out all the monomials containing u1^2, u1u2, and u2^2, since we work on the first
660
+ and since all terms in (1, 2) having powers of z equal to 4 and higher can be cancelled out by
661
+ the appropriate choice of the z2jb′, it suffices to analyse the expression
662
+ � = (p0z3u1 + p1z2u1 + p2z3u2 + p3z2u2)(λ0 + λ1z)
+ + (q′0 − p′0)z3u1 + (q′1 − p′1)z2u1 + (q′2 − p′2)z3u2 + (q′3 − p′3)z2u2.
674
+ To have an isomorphism q′ ∼ p′, we need to cancel out the coefficients of z3u1, z2u1, z3u2, z2u2
675
+ in � with appropriate choices of λi. Consequently, q′ ∼ p′ if and only if the following equality
676
+ holds for some choice of λ0 and λ1:
677
+ ( q′0 − p′0 )          ( p0 )          ( p1 )
+ ( q′1 − p′1 )          ( p1 )          ( 0  )
+ ( q′2 − p′2 )  =  λ0   ( p2 )  +  λ1   ( p3 )
+ ( q′3 − p′3 )          ( p3 )          ( 0  ) .
721
+
722
724
+ When the vectors v1 = (p0, p1, p2, p3) and v2 = (0, p1, 0, p3) are linearly independent, the point
725
+ q′ belongs to the plane that passes through the point p′ with v1 and v2 as direction vectors.
726
+ Therefore, whenever v1 and v2 are linearly independent vectors, the fibre over p = (p0, p1, p2, p3)
727
+ is a copy of C4 foliated by 2-planes. The leaf containing a point p′ forms the equivalence class
728
+ of p′. Thus, the moduli space over the fibre over p is parametrised by the 2-plane through the
729
+ origin in the direction perpendicular to v1, v2 over the point p, except when p1 = p3 = 0.
730
+ In contrast, the fibre over a point p = (p0, 0, p2, 0) is a copy of C4 foliated by lines in the
731
+ direction of v1 = (p0, 0, p2, 0). In this case, the moduli space over p is parametrised by a copy of
732
+ C3 perpendicular to v1.
733
+ We conclude that Mℏ
734
+ 2(W1, σ) → M2(W1) ≃ P3 (where the isomorphism is given by Lem. 4.4) is
735
+ the ´etale space of a constructible sheaf, whose stalks have
736
+ • dimension 2 over the Zariski open set (p1, p3) ̸= (0, 0), and
737
+ • dimension 3 over the P1 cut out by p1 = p3 = 0 in P3.
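+ A quick numerical sanity check of these stalk dimensions (an illustration added here, not part of the
+ original argument): the dimension over a point p is 4 minus the rank of the matrix whose columns are
+ the two direction vectors appearing in the displayed system above. A minimal sketch, assuming numpy:
+ import numpy as np
+ def stalk_dim(p0, p1, p2, p3):
+     # columns: the directions multiplied by lambda_0 and lambda_1 in the system above
+     U = np.array([[p0, p1], [p1, 0.0], [p2, p3], [p3, 0.0]])
+     return 4 - np.linalg.matrix_rank(U)
+ print(stalk_dim(1.0, 2.0, 3.0, 4.0))   # generic point: 2
+ print(stalk_dim(1.0, 0.0, 5.0, 0.0))   # p1 = p3 = 0: 3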
738
+ The same techniques readily generalise to give a description of the quantum moduli spaces for
739
+ other choices of noncommutative deformations.
740
+ Theorem 5.4. If σ is an extremal Poisson structure on W1, then the quantum moduli space
741
+ Mℏj(W1, σ) can be viewed as the étale space of a constructible sheaf of generic rank 2j − 2 over
743
+ the classical moduli space Mj(W1) with singular stalks up to rank 4j − 5.
744
+ Proof. We give the details of the case j = 3, for an extremal Poisson structure, that is, the case
745
+ when all brackets vanish on the first formal neighbourhood. The general case is clear from these
746
+ calculations, just notationally more complicated.
747
+ When j = 3 and σ = u1σ1, expression � becomes:
748
+ � = p(d′ − a′)z3 + (p′d − q′a)z3,
749
+ and we get a system of equations:
750
+ � = ( p0z5u1 + p1z4u1 + p2z3u1 + p3z2u1 + p4z5u2 + p5z4u2 + p6z3u2 + p7z2u2 )
+ ·(λ0 + λ1z + λ2z2 + λ3z3 + λ4z4)
+ + (p′ − q′)0z5u1 + (p′ − q′)1z4u1 + (p′ − q′)2z3u1 + (p′ − q′)3z2u1
+ + (p′ − q′)4z5u2 + (p′ − q′)5z4u2 + (p′ − q′)6z3u2 + (p′ − q′)7z2u2.
758
+ To have an isomorphism q′ ∼ p′, we need to cancel out the coefficients of z5u1, z4u1, z3u1, z2u1,
759
+ z5u2, z4u2, z3u2, z2u2 in � with appropriate choices of λi. Consequently, q′ ∼ p′ if and only if
760
+ the following equality holds for some choice of λ0, λ1, λ2, λ3:
761
+ ( q′0 − p′0 )          ( p0 )          ( p1 )          ( p2 )          ( p3 )
+ ( q′1 − p′1 )          ( p1 )          ( p2 )          ( p3 )          ( 0  )
+ ( q′2 − p′2 )          ( p2 )          ( p3 )          ( 0  )          ( 0  )
+ ( q′3 − p′3 )  =  λ0   ( p3 )  +  λ1   ( 0  )  +  λ2   ( 0  )  +  λ3   ( 0  )
+ ( q′4 − p′4 )          ( p4 )          ( p5 )          ( p6 )          ( p7 )
+ ( q′5 − p′5 )          ( p5 )          ( p6 )          ( p7 )          ( 0  )
+ ( q′6 − p′6 )          ( p6 )          ( p7 )          ( 0  )          ( 0  )
+ ( q′7 − p′7 )          ( p7 )          ( 0  )          ( 0  )          ( 0  ) .
952
+
953
955
+ Consider now the family U of vector spaces over M3(W1) ≃ P7 whose fibre at p is given by
+         ( p0  p1  p2  p3 )
+         ( p1  p2  p3  0  )
+         ( p2  p3  0   0  )
+ Up  =   ( p3  0   0   0  )
+         ( p4  p5  p6  p7 )
+         ( p5  p6  p7  0  )
+         ( p6  p7  0   0  )
+         ( p7  0   0   0  ).
+ Now, the quantum moduli space is obtained from this family after dividing by the equivalence
+ relation ∼ over each point p. Hence
+ Mℏ3(W1, σ) = U/∼ .
+ We conclude that Mℏ3(W1, σ) → M3(W1) ≃ P7 (where the isomorphism is given by Lem. 4.4) is
+ the étale space of a constructible sheaf of rank 4, with stalk at p having dimension equal to the
+ corank of Up, in this case
+ 4 ≤ dim Mℏ3(W1, σ)p = 8 − rk Up ≤ 7.
+ In the general case we have
+ 2j − 2 ≤ dim Mℏj(W1, σ)p = corank Up ≤ 4j − 5.
1029
+
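+ The corank computation above is elementary linear algebra and is easy to check numerically. The
+ following minimal sketch (ours, added only as an illustration), assuming numpy, verifies the bounds
+ for the case j = 3 treated in the proof:
+ import numpy as np
+ def U_p(p):
+     p0, p1, p2, p3, p4, p5, p6, p7 = p
+     # the 8x4 matrix Up whose columns are the directions multiplied by lambda_0,...,lambda_3
+     return np.array([[p0, p1, p2, p3],
+                      [p1, p2, p3, 0],
+                      [p2, p3, 0, 0],
+                      [p3, 0, 0, 0],
+                      [p4, p5, p6, p7],
+                      [p5, p6, p7, 0],
+                      [p6, p7, 0, 0],
+                      [p7, 0, 0, 0]], dtype=float)
+ for _ in range(5):
+     p = np.random.randn(8)
+     corank = 8 - np.linalg.matrix_rank(U_p(p))
+     assert 4 <= corank <= 7      # 2j-2 <= corank <= 4j-5 for j = 3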
1030
+ 6. Quantum moduli of bundles on W2
1031
+ The Calabi–Yau threefold we consider in this section is a crepant resolution of the singularity
1032
+ xy − w2 = 0 in C4, that is
1033
+ W2 := Tot(OP1(−2) ⊕ OP1) = Z2 × C.
1034
+ Similarly to what we did for W1, we will carry out calculations using the canonical coordinates
1035
+ W2 = U ∪V where U ≃ C3 ≃ V with U = {z, u1, u2}, V = {ξ, v1, v2}, and change of coordinates
1036
+ on U ∩ V ≃ C∗ × C × C given by
1037
+ { ξ = z−1 ,   v1 = z2u1 ,   v2 = u2 } .
1041
+ Consequently, global holomorphic functions on W2 are generated by 1, u1, zu1, z2u1, u2.
1042
+ For each specific noncommutative deformation (W2, Aσ), we wish to compare the quantum and
1043
+ classical moduli spaces of vector bundles, see Def. 4.3.
1044
+ For a rank 2 bundle E on a noncommutative deformation W2 with a canonical matrix
1045
+ ( zj  p ; 0  z−j )
+ as in (3.10) where p = Σ_{n=0}^{∞} pn ℏ^n, expression (3.9) gives us the general form of the coefficients
+ pn. In particular, on the first formal neighbourhood, we have:
+ p = Σ_{l=−j+3}^{j−1} p_{l10} z^l u1 + Σ_{l=−j+1}^{j−1} p_{l01} z^l u2        (6.1)
1064
+ where in case j = 1 we have only p001u2.
1065
+ To describe the quantum moduli for Poisson structures on W2, we consider the expression �:
1066
+ � = {p, d + a}zj − {zj, a}p + {zj, p}a + {pd, zj} + 2z−j{zj, p}pc + p(d′ − a′)zj + (p′d − q′a)zj,
1067
+ where we need to cancel out the coefficients of z3u1, . . . , z2j−1u1
1068
+ and
1069
+ zu2, . . . , z2j−1u2.
1070
+
1071
1073
+ Each noncommutative deformation comes from some Poisson structure. The most basic Poisson
1074
+ structures σ on W2 are those which generate all others over global functions. We call these
1075
+ generators the basic Poisson structures. Now, we compute the quantum moduli of bundles for
1076
+ them.
1077
+ Remark. We observe that the 4 Poisson manifolds (W2, σi) for i = 1, 2, 3, 4, are pairwise
1078
+ nonisomorphic. This can be verified by the table of their degeneracy loci:
1079
+ W2 Poisson structures
1080
+ bracket
1081
+ degeneracy
1082
+ σ1
1083
+ σ2
1084
+
1085
+ σ3
1086
+ σ4
1087
+
1088
+ Nevertheless, the 4 quantum moduli spaces defined by these basic Poisson structures turn out to
1089
+ be all isomorphic.
1090
+ Theorem 6.2. If σ is a basic Poisson structure on W2, then the quantum moduli space Mℏj(W2, σ)
+ and its classical limit are isomorphic:
+ Mℏj(W2, σ) ≃ Mj(W2) ≃ P4j−5.
1095
+ Proof. We carry out calculations for the basic bracket σ4 = u1∂u1 ∧ ∂u2. It does turn out that
1096
+ the result is the same for all the basic brackets. The calculation for σ4 is shorter, since any
1097
+ of the brackets having one entry equal to zj vanishes. Because we work on the first formal
1098
+ neighbourhood, we also remove the expressions that are quadratic in the ui variables.
1099
+ So, the expression � that remains to be analysed simplifies to:
1100
+ � = {p, d + a}zj + p(d′ − a′)zj + (p′d − q′a)zj,
1101
+ where we must cancel out the coefficients of the monomials z3u1, . . . , z2j−1u1 and zu2, . . . , z2j−1u2.
1102
+ On the first formal neighbourhood, we write
1103
+ a = 1 + a1(z)u1 + a2(z)u2,
1104
+ d = 1 + d1(z)u1 + d2(z)u2,
1105
+ and
1106
+ a′ = a′0(z) + a′1(z)u1 + a′2(z)u2,        d′ = d′0(z) + d′1(z)u1 + d′2(z)u2,
1114
+ so that the partials are
1115
+ ∂uia = ai(z)
1116
+ ∂uid = di(z)
1117
+ and
1118
+ ∂u2a = a2(z)
1119
+ ∂u2d = d2(z).
1120
+ The extension class given in (3.9) becomes p = Σ_{l=3−j}^{j−1} p_{l10} z^l u1 + Σ_{l=1−j}^{j−1} p_{l01} z^l u2, and computing
+ the bracket gives
+ {p, d + a}zj = ( Σ_{l=3−j}^{j−1} p_{l10} z^l ) (d2(z) + a2(z)) zj u1 + ( Σ_{l=1−j}^{j−1} p_{l01} z^l ) (d1(z) + a1(z)) zj u1.
1147
+ To work with a simpler notation, we present details of � when j = 2, in which case we can
1148
+ express the extension class as
1149
+ p = p0zu1 + p1zu2 + p2u2 + p3z−1u2,
1150
+
1151
1153
+ having renamed the coefficients for simplicity (making p0 := p110, p1 := p101, p2 := p001, p3 :=
1154
+ p−101). We will point out the steps for generalising to higher j.
1155
+ Assuming j = 2, we have
1156
+ {p, d + a}z2 = p0(d2(z) + a2(z))z3u1 + (p1z3 + p2z2 + p3z)(d1(z) + a1(z))u1.
1157
+ To obtain equivalence between q′ and p′, we must cancel out coefficients of z3u1, zu2, z2u2, z3u2
1158
+ in the expression of �, which becomes
1159
+ � = p0(d2(z) + a2(z))z3u1 + (p1z3 + p2z2 + p3z)(d1(z) + a1(z))u1
+ + (p0z3u1 + p1z3u2 + p2z2u2 + p3zu2)(d′0(z) − a′0(z))
+ + (p′0z3u1 + p′1z3u2 + p′2z2u2 + p′3zu2)
+ − (q′0z3u1 + q′1z3u2 + q′2z2u2 + q′3zu2).
1175
+ Since the highest power of z to be considered is 3, we observe that d2(z) + a2(z) may be chosen
+ conveniently to cancel out all terms in z3u1. We may also choose d1(z) + a1(z) = 0, leaving
+ � = (p1z3u2 + p2z2u2 + p3zu2)(d′0(z) − a′0(z))
+ + (p′1z3u2 + p′2z2u2 + p′3zu2)
+ − (q′1z3u2 + q′2z2u2 + q′3zu2).
1190
+ Now we may choose d′0 − a′0 appropriately to cancel out all terms in u2. We conclude that there
1193
+ are no conditions imposed on q′. In other words, here p + p′ℏ is equivalent to p + q′ℏ for any
1194
+ choice of q′. Hence, the quantum and classical moduli spaces are isomorphic.
1195
+ The generalisation to higher j works out similarly, we can first choose di + ai for i > 0 to cancel
1196
+ out the coefficients of u1 and then choose d′
1197
+ 0 −a′
1198
+ 0 to take care of the coefficients of u2. So, for all
1199
+ j using the bracket σ4 we conclude that the quantum and classical moduli spaces are isomorphic
1200
+ Mℏj(W2, σ4) ≃ Mj(W2) ≃ P4j−5
1202
+ where the second isomorphism is proven in [K, Prop. 3.24].
1203
+
1204
+ Example 6.3. Now choose any Poisson structure of W2 for which all brackets in � vanish on
1205
+ neighbourhood 1; for example σ = u1σ4 = u1^2 ∂u1 ∧ ∂u2 works. In such a case, the expression for
+ � reduces to:
+ � = p(d′ − a′)zj + (p′d − q′a)zj.
+ Now, consider the case of j = 2, when we have:
+ � = (p0z3u1 + p1z3u2 + p2z2u2 + p3zu2)(d′0(z) − a′0(z))
+ + (p′0z3u1 + p′1z3u2 + p′2z2u2 + p′3zu2)
+ − (q′0z3u1 + q′1z3u2 + q′2z2u2 + q′3zu2).
1225
+ Setting
1226
+ d′0(z) − a′0(z) = λ0 + λ1z + λ2z2,
+ we get a system of equations:
+ ( q′0 − p′0 )     ( λ0  0   0   0  ) ( p0 )
+ ( q′1 − p′1 )     ( 0   λ0  λ1  λ2 ) ( p1 )
+ ( q′2 − p′2 )  =  ( 0   0   λ0  λ1 ) ( p2 )
+ ( q′3 − p′3 )     ( 0   0   0   λ0 ) ( p3 ) .
1286
+ Since we can choose λ1 and λ2 to solve the second and third equations, we see that q′1 and q′2
+ are free. Hence (q′0, q′1, q′2, q′3) ∼ λ0(q′0, ∗, ∗, q′3), and our system of equations reduces to
+ ( q′0 − p′0 )          ( p0 )
+ ( q′3 − p′3 )  =  λ0   ( p3 ) ,
1308
+
1309
1311
+ which is the parametric equation of a line in the (q′0, q′3)-plane whenever (p0, p3) ̸= (0, 0). The
+ entire question of moduli now reduces to the 2-dimensional case, disregarding the p1, p2 coordinates.
+ If (p0, p3) ̸= (0, 0), then the equivalence class of q′ in the fibre over the point p is the 1-dimensional
+ subspace L directed by the vector (p0, p3) and passing through (q′0, q′3) in the (p′0, p′3)-plane.
+ If p0 = p3 = 0, then we must have the equality (q′0, q′3) = (p′0, p′3). So, the equivalence class
+ consists of a single point.
+ Accordingly, the set of equivalence classes over p can be represented either by the line L⊥ through
+ the origin perpendicular to L (directed by (−p3, p0)) when (p0, p3) ̸= (0, 0), or else by the entire
+ (p′0, p′3)-plane over (0, 0).
1332
+ We conclude that Mℏ
1333
+ 2(W2, σ) → M2(W2) ≃ P3 (where the isomorphism is given by Lem. 4.4) is
1334
+ the ´etale space of a constructible sheaf, whose stalks have
1335
+ • dimension 1 over the Zariski open set (p0, p3) ̸= (0, 0), and
1336
+ • dimension 2 over the P1 cut out by p0 = p3 = 0 in P3.
1337
+ In fact, we could express this moduli space as a sheaf given by an extension of OP3(+1) by a
1338
+ torsion sheaf.
1339
+ Theorem 6.4. If σ is an extremal Poisson structure on W2, then the quantum moduli space
1340
+ Mℏj(W2, σ) can be viewed as the étale space of a constructible sheaf of generic rank 2j − 3 over
1342
+ the classical moduli space Mj(W2) with singular stalks up to rank 4j − 6.
1343
+ Proof. Now, for j = 3, we write down the extremal example when the brackets vanish on the
+ first formal neighbourhood. The generalisation of the extremal cases to all j becomes clear from
+ this example. Assuming all brackets vanish on the first formal neighbourhood, we need to
+ cancel out the coefficients of z3u1, . . . , z2j−1u1 and zu2, . . . , z2j−1u2 in
+ � = p(d′ − a′)zj + (p′d − q′a)zj.
1350
+ For j = 3 we have
1351
+ p = Σ_{l=0}^{2} p_{l10} z^l u1 + Σ_{l=−2}^{2} p_{l01} z^l u2,
+ which we rewrite as
+ p = p0z2u1 + p1zu1 + p2u1 + p3z2u2 + p4zu2 + p5u2 + p6z−1u2 + p7z−2u2.
+ Setting
+ d′0(z) − a′0(z) = λ0 + λ1z + λ2z2 + λ3z3 + λ4z4,
1366
+ expression
1367
+ � = p(d′ − a′)z3 + (p′d − q′a)z3
1368
+ becomes
1369
+ � = ( p0z5u1 + p1z4u1 + p2z3u1 + p3z5u2 + p4z4u2 + p5z3u2 + p6z2u2 + p7zu2 )
+ ·(λ0 + λ1z + λ2z2 + λ3z3 + λ4z4)
+ + (p′ − q′)0z5u1 + (p′ − q′)1z4u1 + (p′ − q′)2z3u1
+ + (p′ − q′)3z5u2 + (p′ − q′)4z4u2 + (p′ − q′)5z3u2 + (p′ − q′)6z2u2 + (p′ − q′)7zu2.
1377
+ To start with we notice that λ3 and λ4 can always be chosen to solve the equations involving q′3
+ and q′4 so that these 2 coordinates can take any value, that is, there are isomorphisms
+ (q′0, q′1, q′2, q′3, q′4, q′5, q′6, q′7) ∼ (q′0, q′1, q′2, ∗, ∗, q′5, q′6, q′7).
1396
+
1397
1399
+ Consequently, we may remove q′3, q′4 and rewrite the reduced system as:
+ ( q′0 − p′0 )          ( p0 )          ( p1 )          ( p2 )
+ ( q′1 − p′1 )          ( p1 )          ( p2 )          ( 0  )
+ ( q′2 − p′2 )  =  λ0   ( p2 )  +  λ1   ( 0  )  +  λ2   ( 0  )
+ ( q′5 − p′5 )          ( p5 )          ( p6 )          ( p7 )
+ ( q′6 − p′6 )          ( p6 )          ( p7 )          ( 0  )
+ ( q′7 − p′7 )          ( p7 )          ( 0  )          ( 0  ) .
1514
+ Here q′ ∼ p′ if and only if the equality holds for some choice of λ0, λ1, λ2. Consider now the
+ family U of vector spaces over M3(W2) ≃ P7 whose fibre at p is given by
+         ( p0  p1  p2 )
+         ( p1  p2  0  )
+ Up  =   ( p2  0   0  )
+         ( p5  p6  p7 )
+         ( p6  p7  0  )
+         ( p7  0   0  ).
+ Now, the quantum moduli space is obtained from this family after dividing by the equivalence
+ relation ∼ over each point p. Hence
+ Mℏ3(W2, σ) = U/∼ .
+ We conclude that Mℏ3(W2, σ) → M3(W2) ≃ P7 (where the isomorphism is given by Lem. 4.4) is
+ the étale space of a constructible sheaf, with stalk at p having dimension equal to the corank of
+ Up, in this case
+ 3 ≤ dim Mℏ3(W2, σ)p = corank Up = 6 − rk Up ≤ 6.
+ In the general case we then have
+ 2j − 3 ≤ dim Mℏj(W2, σ)p = corank Up = 2j − rk Up ≤ 4j − 6.
1567
+
1568
+ Appendix A. Computations of H1
1569
+ Lemma A.1. H1(W1, O) = H1(W2, O) = 0.
1570
+ Proof. A 1-cocycle τ ∈ O(U ∩ V ) may be written in the form
1571
+ τU = Σ_{l=−∞}^{∞} Σ_{i=0}^{∞} Σ_{s=0}^{∞} τ_{lis} z^l u_1^i u_2^s.
+ Since terms containing only positive powers of z are holomorphic on the U-chart,
+ τU ∼ Σ_{l=−∞}^{−1} Σ_{i=0}^{∞} Σ_{s=0}^{∞} τ_{lis} z^l u_1^i u_2^s,
+ where ∼ denotes cohomological equivalence. Changing to V coordinates we have
+ τV = Σ_{l=−∞}^{−1} Σ_{i=0}^{∞} Σ_{s=0}^{∞} τ_{lis} ξ^{−l+ki+(−k+2)s} v_1^i v_2^s,        (A.2)
+ where, for k = 1, 2, the exponents of ξ are non-negative. Thus, τV is holomorphic on V , and
+ τ ∼ 0.
1616
+
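+ For the reader's convenience, the sign claim in the last step can be spelled out (this short verification
+ is ours, added in the notation of (A.2), where the exponent of ξ is e = −l + ki + (2 − k)s with l ≤ −1
+ and i, s ≥ 0):
+ \[
+ k=1:\; e = -l + i + s \ge 1, \qquad
+ k=2:\; e = -l + 2i \ge 1, \qquad
+ k=3:\; e = -l + 3i - s < 0 \iff s > 3i - l .
+ \]
+ The k = 3 case is exactly the obstruction exploited in Lemma A.3 below.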
1617
+ Lemma A.3. H1(W3, O) is infinite dimensional over C.
1618
+
1619
1621
+ Proof. As in the proof of Lem. A.1 we arrive at the expression (A.2) for the 1-cocycle τ on the
1622
+ V -chart, which in the case k = 3, gives
1623
+ τV ∼ Σ_{l=−∞}^{−1} Σ_{i=0}^{∞} Σ_{s=0}^{∞} τ_{lis} ξ^{−l+3i−s} v_1^i v_2^s.
+ The terms that are not holomorphic on V are all of those satisfying −l + 3i − s < 0.
+ We conclude that all terms having s > 3i − l, namely all of
+ Σ_{l=−∞}^{−1} Σ_{i=0}^{∞} Σ_{s=3i−l+1}^{∞} τ_{lis} z^l u_1^i u_2^s
+ are nontrivial in first cohomology, so that dim H1(W3, O) = ∞.
1651
+
1652
+ Acknowledgements. E. Ballico is a member of GNSAGA of INdAM (Italy). E. Gasparim
+ acknowledges support of Vicerrectoría de Investigación y Desarrollo Tecnológico, UCN Chile.
+ F. Rubilar acknowledges support of ANID-FAPESP cooperation 2019/13204-0. B. Suzuki was
+ supported by Grant 2021/11750-7 São Paulo Research Foundation - FAPESP.
1656
+ References
1657
+ [BG]
1658
+ S. Barmeier, E. Gasparim, Quantization of local surfaces and rebel instantons, J. Noncommut. Geom.
1659
+ 16 (2022) 311–351.
1660
+ [BGK1]
1661
+ E. Ballico, E. Gasparim, T. K¨oppe, Local moduli of holomorphic bundles, J. Pure Appl. Algebra 213
1662
+ n.4 (2009) 397–408.
1663
+ [BGK2]
1664
+ E. Ballico, E. Gasparim, T. K¨oppe, Vector bundles near negative curves: moduli and local Euler char-
1665
+ acteristic. Comm. Algebra 37 n.8 (2009) 2688–2713.
1666
+ [BGKS]
1667
+ E. Ballico, E. Gasparim, T. K¨oppe, B. Suzuki, Poisson structures on the conifold and local Calabi-Yau
1668
+ threefolds, Rep. Math. Phys. 90 n.3 (2022) 299–324.
1669
+ [BGS]
1670
+ E. Ballico, E. Gasparim, B. Suzuki, Infinite dimensional families of Calabi–Yau threefolds and moduli
1671
+ of vector bundles, J. Pure Appl. Algebra 225 n.4 (2021) 106554, 24 pp..
1672
+ [G]
1673
+ E. Gasparim, Rank two bundles on the blow-up of C2, J. Algebra 199 n.2 (1998) 581–590.
1674
+ [GKMR] E. Gasparim, T. K¨oppe, P. Majumdar, K. Ray, BPS state counting on singular varieties, J. Phys. A 45
1675
+ n. 26 (2012) 265401 20pp..
1676
+ [GKRS]
1677
+ E. Gasparim, T. K¨oppe, F. Rubilar, and B. Suzuki., Deformations of noncompact Calabi–Yau threefolds,
1678
+ Rev. Colombiana Mat. 52 n.1 (2018)41–57.
1679
+ [GSTV]
1680
+ E. Gasparim, B. Suzuki, A. Torres-Gomez, C. Varea, Topological String Partition Function on Gener-
1681
+ alised Conifolds, Journal of Mathematical Physics, 58 (2017) 1–16.
1682
+ [Ko1]
1683
+ M. Kontsevich, Deformation quantization of Poisson manifolds, Lett. Math. Phys. 66 n.3 (2003) 157–
1684
+ 216.
1685
+ [Ko2]
1686
+ M. Kontsevich, Deformation quantization of algebraic varieties, Lett. Math. Phys. 56 n.3 (2001) 271–
1687
+ 294.
1688
+ [K]
1689
+ T. K¨oppe, Moduli of bundles on local surfaces and threefolds, PhD thesis, The University of Edinburgh
1690
+ (2010).
1691
+ [OSY]
1692
+ H. Ooguri, P. Sułkowski, M. Yamazaki, Wall Crossing as Seen by Matrix Models, Commun. Math. Phys.
1693
+ 307 (2011) 429–462.
1694
+ [S]
1695
+ B. Szendr˝oi, Non-commutative Donaldson–Thomas invariants and the conifold, Geom. Topol. 12 (2008)
1696
+ 1171–1202.
1697
+ [V]
1698
+ M. Van den Bergh, Non-commutative crepant resolutions. In: The legacy of Niels Henrik Abel. Springer,
1699
+ Berlin (2004) 749–770.
1700
+ [Y]
1701
+ A. Yekutieli, Twisted deformation quantization of algebraic varieties, Adv. Math. 268 (2015) 271–294.
1702
+ Ballico - Dept. Mathematics, Univ. of Trento, Povo Italy; ballico@science.unitn.it,
1703
+ Gasparim - Depto. Matemáticas, Univ. Católica del Norte, Chile; etgasparim@gmail.com,
+ Rubilar - Depto. Matemáticas, Univ. Sant. Concepción, Chile; francisco.rubilar.arriagada@gmail.com,
+ Suzuki - Depto. Matemática, Univ. de São Paulo, Brazil; obrunosuzuki@gmail.com.
1706
+
2tE2T4oBgHgl3EQf5gjq/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
4dE4T4oBgHgl3EQfbgxF/content/tmp_files/2301.05073v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
4dE4T4oBgHgl3EQfbgxF/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
4dFST4oBgHgl3EQfZzjU/content/tmp_files/2301.13793v1.pdf.txt ADDED
@@ -0,0 +1,881 @@
 
 
 
 
 
 
 
 
 
 
 
1
+ Protecting the Texas power grid from tropical cyclones:
2
+ Increasing resilience by protecting critical lines
3
+ Julian Stürmer1,2, Anton Plietzsch1, Thomas Vogt1, Frank Hellmann1, Jürgen Kurths1,3,4,
4
+ Christian Otto1,*, Katja Frieler, Mehrnaz Anvari1,*
5
+ 1Potsdam Institute for Climate Impact Research, Telegrafenberg A56, 14473 Potsdam,
6
+ Germany
7
+ 2Institute for Theoretical Physics, TU Berlin, 10623 Germany
8
+ 3Institute of Physics and Astronomy, University of Potsdam, 14476 Potsdam, Germany
9
+ 4Institute of Physics, Humboldt Universität zu Berlin, 12489 Berlin, Germany
10
+ *Corresponding author(s): anvari@pik-potsdam.de; christian.otto@pik-potsdam.de
11
+ Abstract
12
+ The Texan electric network in the Gulf Coast of the United States is frequently hit by Tropical
13
+ Cyclones (TC) causing widespread power outages, a risk that is expected to substantially
14
+ increase under global warming. Here, we introduce a new approach of combining a
15
+ probabilistic line fragility model with a network model of the Texas grid to simulate the
16
+ temporal evolution of wind-induced failures of transmission lines and the resulting cascading
17
+ power outages from seven major historical hurricanes. The approach allows reproducing
18
+ observed supply failures. In addition, compared to a static approach, it provides a significant
19
+ advantage in identifying critical lines whose failure can trigger large supply shortages. We
20
+ show that protecting only 1% of total lines can reduce the likelihood of the most destructive
21
+ type of outages by a factor of between 5 and 20. The proposed modelling approach could
22
+ represent a tool so far missing to effectively strengthen the power grids against future
23
+ hurricane risks even under limited knowledge.
24
+ Keywords: Electric networks, Extreme weather events (hurricane), Cascading failures
25
+ Introduction
26
+ Modern societies depend heavily on reliable access to electricity. Power outages have the
27
+ potential to disrupt transportation and telecommunication networks, heating and health
28
+ systems, the cooling chain underpinning food delivery and more1–3. Depending on the cause
29
+ of power outages and the amount of physical damages to infrastructures, the recovery of
30
+ the electric network, and the social infrastructures dependent on it, often takes days or even
31
+ months4. Such outages are often driven by extreme weather events. In Norway
32
+ of all
33
+ overhead line failures are caused by extreme weather which involves strong winds, icing and
34
+ lightning strikes5. In February 2021 a winter storm in Texas led to outages that in turn caused
35
+ a breakdown of the gas supply and thus the heating sector6–8. Impacts are particularly
36
+
37
+ devastating when it comes to tropical cyclones. In the summer months, the Gulf Coast and
38
+ the East Coast of the United States are frequently hit by tropical cyclones (TC) that entail
39
+ widespread outages and costs of billions of dollars. For example, hurricane Ike hitting
40
+ southeast Texas on September 13, 2008 destroyed around 100 towers holding high voltage
41
+ transmission lines and cut off electric power for between 2.8 and 4.5 million customers for
42
+ weeks to months9, 10. On August 29, 2021 hurricane Ida made landfall in Louisiana, and
43
+ destroyed major transmission lines delivering power into New Orleans, causing more than a
44
+ million customers to lose power11.
45
+ Resilience against line failures in power grids is usually discussed in terms of the N-1 (rarely
46
+ also N-2) security of the system, that is, the ability of the system to stay fully functional upon
47
+ the failure of one or two elements12. When a line fails, the power flow automatically
48
+ reroutes through the intact grid. To avoid overloads in the rerouting, relevant lines are
49
+ intentionally taken out of the grid. These secondary failures of lines can trigger a cascade13–19
50
+ of additional failures. N-1 security asserts that single line failures do not trigger such
51
+ cascades. Significant secondary failures do occur in larger events and were, e.g., observed in
52
+ response to the software error leading to the U.S.-Canadian blackout on August 14th, 200320.
53
+ They are typically also induced by the widespread primary damages and line failures caused
54
+ by TCs.
55
+ The N-1 approach to system resilience does not scale to extreme weather events. The tens
56
+ or even hundreds of primary failures during events such as hurricanes can not be fully
57
+ mitigated by an electric network, because N-100 security is not realistic to achieve. N-1
58
+ security is typically studied by simulating the reaction of the system to every possible failure
59
+ scenario. As the number of possible failure scenarios scales exponentially with the number
60
+ of failures, it is computationally infeasible to consider all possible such scenarios in larger
61
+ events. Initialising failure cascade models designed for N-1 studies with many initial failures
62
+ is challenging.
63
+ Here, we present an approach that solves these issues by temporally resolving the potential
64
+ damages induced by hurricanes and a stepwise application of a failure cascade model. This
65
+ approach particularly allows us to identify critical power lines whose protection could most
66
+ effectively reduce the risk of severe widespread power outages. Although the frequency of
67
+ severe hurricanes is expected to increase12–14, such an approach does not exist so far.
68
+ Main text
69
+ Our approach explicitly models the dynamical interplay of an extreme wind event with
70
+ the power grid. It temporally resolves both, the primary wind damages, and the cascades
71
+ and secondary failures that result from them. We will use this approach to study the
72
+ impact of massive TCs on the Texan power grid. Strong hurricanes, such as Harvey that
73
+
74
+ made landfall on Texas and Louisiana in August 2017, can destroy hundreds of
75
+ transmission lines in an electric grid (see Fig. 1(a)). These lines do not collapse
76
+ simultaneously, but over the hours or days the TC passage takes. Making use of the
77
+ chronological order of the line destructions, we divide each overall TC scenario into a
78
+ sequence of 5 minute long scenarios. In most of these individual steps, only one line fails.
79
+ We then solve individual scenarios by representing the Texan transmission network in a DC
80
+ power flow approximation with conservative load balancing assumptions (see Methods and
81
+ Supplementary Methods 3 and 4). This approach accounts for the ‘path dependency’ of the
82
+ solution: Every time a line collapses, secondary failures can occur, but also control
83
+ mechanisms are immediately activated and try to bring back the energy balance to the
84
+ system and, consequently, mitigate the effect of the failure (see Supplementary Methods 4).
85
+ Later primary damages along the TC track then meet a partially destroyed, rebalanced grid.
86
+ Thus, the effect of later failures can be more or, even, less intense. It is the resilience of
87
+ these intermediate, partially destroyed states that ultimately decides whether the impact of
88
+ the TC is amplified by secondary failures.
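+ The DC power-flow approximation and the rerouting of flows after a line failure can be illustrated
+ with a self-contained toy example (this sketch and its 4-bus data are ours, not the authors' code; the
+ actual network model, load-balancing rules and cascade handling are described in Methods):
+ import numpy as np
+ # lines: (from_bus, to_bus, susceptance); injections are generation minus load and sum to zero
+ lines = [(0, 1, 10.0), (1, 2, 10.0), (0, 2, 10.0), (2, 3, 10.0)]
+ injection = np.array([2.0, -0.5, -0.5, -1.0])
+ def dc_flows(active):
+     n = 4
+     B = np.zeros((n, n))                      # weighted graph Laplacian (susceptance matrix)
+     for (i, j, b) in active:
+         B[i, i] += b; B[j, j] += b; B[i, j] -= b; B[j, i] -= b
+     theta = np.zeros(n)
+     theta[1:] = np.linalg.solve(B[1:, 1:], injection[1:])   # bus 0 is the slack/reference bus
+     return {(i, j): b * (theta[i] - theta[j]) for (i, j, b) in active}
+ print(dc_flows(lines))        # flows in the intact toy grid
+ print(dc_flows(lines[1:]))    # flows reroute after line (0,1) is destroyed
+ In the co-evolution model this recomputation, followed by deactivation of any overloaded lines, is
+ repeated after every 5-minute batch of wind-induced primary damages.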
89
+ Fig. 1: Probability distributions of primary line failures and final power outages (a)
90
+ Probability distribution of the total number of wind-induced line failures
91
+ as generated by
92
+ the probabilistic line fragility model for each of the seven recent hurricanes hitting Texas
93
+ (category in brackets behind the name). TCs are sorted according to the means of the
94
+ distributions which are indicated as solid vertical lines. (b) Probability distribution of the
95
+ associated total power outage
96
+ after TC passage. The inset highlights large cascading
97
+ failures that can also occur for the weaker hurricanes. The dashed vertical lines indicate the
98
+ reported power outages listed in the Supplementary Table 1 and the solid vertical lines
99
+ represent the means. See Methods section for the model parameters used in the
100
+ simulations.
101
+ Unfortunately, neither detailed information about the topology of the exposed power grid
102
+ nor about the exact power lines destroyed by the considered TC is publicly accessible. So
103
+ [Figure 1, panels (a) and (b): distributions for Harvey (4), Ike (2), Claudette (1), Hanna (1), Erin (TS), Hermine (TS), Laura (4); axes Np and pout [GW].]
130
+ 40
131
+ Np
132
+ here, we use a synthetic model of the Texan grid introduced by Birchfield et al21 (see
133
+ Supplementary Fig. 2 as well as Methods).
134
+ To represent the TCs impact on the energy supply we combine this grid model with a
135
+ probabilistic line destruction model (see Methods) forced by modelled historical wind fields
136
+ from seven different TCs (see Supplementary Methods 2). The probabilistic
137
+ model provides the probability of line failure in terms of wind speeds and allows to generate
138
+ a large sample of temporally resolved realisations of line failure maps. In the default setting
139
+ considered here we assume a homogeneous base failure rate for all transmission lines. This
140
+ is our main adjustable parameter and is tuned to reproduce observed power outages (see
141
+ Fig. 1(b) and Supplementary Methods 5). The TCs are selected to cover several different
142
+ types of trajectories and intensities and particularly include storms that continue to move
143
+ westward after landfall and affect the southern and western parts of Texas such as Hurricane
144
+ Claudette, Tropical Storm Erin, and Hurricane Hanna, contrary to most hurricanes that are
145
+ steered northward by the Coriolis effect before western parts of Texas are reached22.
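+ The sampling of temporally resolved primary failures can be sketched as follows. The functional form
+ and parameter values below are illustrative placeholders, not the paper's fragility curve; only the idea
+ of a wind-speed-dependent per-step failure probability scaled by a tunable homogeneous base rate is
+ taken from the text:
+ import numpy as np
+ rng = np.random.default_rng(0)
+ def failure_probability(wind_speed, base_rate=1e-4, v_ref=50.0, exponent=6.0):
+     # per-5-minute failure probability of a line as a function of local wind speed (m/s)
+     return 1.0 - np.exp(-base_rate * (np.asarray(wind_speed) / v_ref) ** exponent)
+ def sample_primary_failures(wind_speeds):
+     # boolean mask of lines newly destroyed in one 5-minute step
+     p = failure_probability(wind_speeds)
+     return rng.random(len(p)) < p
+ print(sample_primary_failures([20.0, 45.0, 60.0, 75.0]))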
146
+ Core result
147
+ While the number of primary line failures follows a Poisson binomial distribution, the
148
+ derived distribution of outages is heavily multimodal for all storm tracks with the potential
149
+ of large
150
+ to
151
+ outages (see Fig. 1(b)). These large damages turn out to not
152
+ accumulate gradually over the course of the hurricane but occur suddenly in one or few time
153
+ steps (see Fig. 2(b)). This sudden increase in outages is induced by cascading line failures
154
+ taking the Houston and a weakly connected North-Western section of the grid offline (see
155
+ Fig. 2(d) and Fig. 3).
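+ The Poisson binomial distribution mentioned above is simply the distribution of a sum of independent,
+ non-identical Bernoulli failures; it can be computed exactly by convolution. A small self-contained
+ example (with made-up per-line probabilities, for illustration only):
+ import numpy as np
+ def poisson_binomial_pmf(probs):
+     # pmf[k] = probability of exactly k primary line failures
+     pmf = np.array([1.0])
+     for p in probs:
+         pmf = np.convolve(pmf, [1.0 - p, p])
+     return pmf
+ probs = np.full(500, 0.05)             # e.g. 500 exposed lines, 5% failure chance each
+ pmf = poisson_binomial_pmf(probs)
+ print(pmf.argmax(), pmf.sum())         # most likely failure count, total probability 1.0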
156
+ Figure 3 shows what damage patterns correspond to the various modes of the outage
157
+ distribution. The disconnection of the North-West occurs due to the non-local effects of
158
+ cascading failures in areas not directly affected by high wind speeds. For example, hurricanes
159
+ Harvey and Hanna never reach this region, but cause a considerable probability of outages
160
+ affected by Harvey (Fig. 3(a)-(c)), but also due to non-local cascades as seen for Hanna (Fig.
161
+ 3(h) and (i)). As the most populous city in Texas and a major load centre, the disconnection
162
+ of Houston from the electrical networks causes the disconnection of a huge number of
163
+ consumers from the electrical network and, consequently, the overproduction of generators
164
+ located in the west of Texas, which have key roles to provide the required energy in Houston
165
+ (see Supplementary Methods 5 and Supplementary Fig. 8). Interestingly, the northern part
166
+ of the electric grid is never impacted by outages caused by these three hurricanes. Same
167
+ figures for other hurricanes have been shown in Supplementary Fig. 7.
168
+
169
+ Figure 2: Simulation of hurricane-induced cascading failures in the Texan electric grid (a)
170
+ The schematic variation of the supplied load in an electric grid before (pre-hurricane), during
171
+ (hurricane phase) and after (restoration phase) a hurricane is loosely based on ERCOT's23.
172
+ The total power outage
173
+ after a hurricane has passed, and the total energy
174
+ (red
175
+ area) that was not supplied are measures for the severity of an outage scenario. (b)
176
+ Summary of all realisations of power outage trajectories simulated for hurricane Claudette
177
+ (see Methods section and Supplementary Methods 5 for specification of the model
178
+ parameters). Trajectories shown in red come in two types, those that aggregate damages
179
+ gradually over time (Type I in the figure) and those that include a large cascade (Type II). The
180
+ distribution of cascade sizes is multimodal and we use an empirical threshold of
181
+ to define large cascades (see Supplementary Fig. 9). (c) and (d) show
182
+ respectively the state of the power grid at the beginning and the end of the hurricane. These
183
+ two states are shown in panel (b), for one realisation of primary line failures. Lines shown in
184
+ black were destroyed by the hurricane or deactivated due to the secondary effects, for the
185
+ other lines the relative line loading is shown, with red lines close to overload. In addition,
186
+ the panel includes the track and a snapshot of the windfields of hurricane Claudette in blue.
187
+ In the Supplement, we also provide a video of the simulation showing how the wind
188
+ damages spread along the passage of hurricane Claudette.
189
+
190
+ [Figure 2, panels (a)-(d): supplied-load schematic, simulated outage trajectories over time t [h], and maps of the grid showing wind speed, line loading, inactive parts and the hurricane track.]
265
+ Longitude ["]noC
266
+ Figure 4: Probability of line failure for different parts of the total power outage
267
+ distribution (a-i) Probability that the failure of a given power line is involved in three
268
+ different modes of the power outage distribution. The modes are indicated by the insets and
269
+ the exact range of considered power outages are shown below these insets. The
270
+ probabilities
271
+ are calculated as: number of realisations with a total outage within the
272
+ specified range in each figure where the considered line failed / number of total realisations.
273
+ The rows describe the probabilities for different hurricanes as indicated in the panel. Texan
274
+ electric grid with grid elements colored according to their respective outage probability.
275
+ The probability distributions shown in the insets are identical to the ones shown in Fig. 1(b) .
276
+
277
+ a
278
+ [Figure 4, panels (a)-(i): maps for Harvey, Claudette and Hanna over the indicated pout ranges; generators and loads are marked, and major cities such as Dallas, Austin, San Antonio, Houston and Corpus Christi are labelled.]
331
+ For all seven hurricanes, the cascades play a major role in the total line failures associated
332
+ with the event (see Fig. 4 ). They are induced by the overload of remaining lines and the
333
+ isolation of grid elements, as well as the failure of islands with unavoidable overproduction.
334
+ Figure 3: The probability of primary damages and secondary failures induced by hurricane
335
+ Harvey In this plot the transmission lines are colored according to their high probability to
336
+ be directly damaged by Harvey (blue lines) or to be deactivated due to the secondary effect
337
+ of the hurricane (red lines). As expected the primary damages are located around the path
338
+ of Harvey. However, secondary failures can occur far away from the hurricane track, which is
339
+ related to the non-local effect of the primary damages in the power grid (see supplementary
340
+ Methods 5). In this plot, grey lines have a higher probability of remaining operational than
341
+ failing due to any reason.
342
+ Our results are not sensitive to the assumption of a homogeneous base failure rate as similar
343
+ characteristics are also derived when assuming randomised base failure rates (see
344
+ Supplementary Table 3). In addition, a temporal resolution of 5 minutes turned out to be
345
+ adequate as time steps where several lines fail are rare. At this resolution it is also
346
+ reasonable to assume that cascades of secondary failures have run their course before
347
+ further lines are destroyed by the hurricane24,25 (for further discussion regarding the
348
+ temporal resolution, see Supplementary Note 2).
349
+ Increasing Resilience
350
+ The fact that large cascades are triggered by the failure of specific lines suggests targeting
351
+ these lines for protection. To identify the critical lines that should be protected we define a
352
+
353
+ priority index as the probability that the wind-induced damage of this specific line triggers a
371
+ large cascade, that is, a cascade that increases the outage by more than 15 GW, averaged
372
+ over all seven hurricanes (see Fig. 2(b) and Eq. (4) in Methods).
373
+ As a baseline we also consider a conventional, static model (see Methods). The static index
374
+ of a line is the conditional probability of a large outage given that the line is damaged by a
375
+ TC. In both the co-evolution model and the static baseline (see Fig. 5(a) and (b)) the critical
376
+ lines are mostly located around Houston.
377
+ To estimate the reduction in power outages that can be reached by protecting critical lines,
378
+ we order them according to their priority index and evaluate the impact of the TC on the
379
+ system with the first one to twenty lines protected, e.g. by being replaced by underground
380
+ cables. It is worth noting that the co-evolution priority index value for most transmission
381
+ lines is zero. Only
382
+ of them have a value above
383
+ , and only
384
+ lines above
385
+ . By
386
+ protecting these
387
+ lines, large power outages and cascading failures are almost completely
388
+ prevented for smaller storms and dramatically reduced for the larger ones (see Fig. 5 and
389
+ Supplementary Fig.9). For the stronger hurricanes Harvey and Ike, the power outage
390
+ distributions are shifted from the second peak to the first peak with
391
+ (see
392
+ Fig. 5(c)). Protecting the lines one by one shows that the reduction of the largest power
393
+ outages improves smoothly, thus it is effective to protect up to twenty lines (see Fig. 5(c) and
394
+ (d)). While in the original system damage amplification was almost guaranteed, it rarely
395
+ occurs in the reinforced one. In summary
396
+ of the total lines reinforced leads to a 5- to 20-fold
+ reduction of the largest-scale outages. The level of protection that can be reached by
398
+ protecting the lines according to the priority index derived from the co-evolution models is
399
+ generally higher than the protection of the same number of lines selected according to the
400
+ priority index derived by the static model (see panel (d) of Fig. 5). The static baseline also
401
+ identifies some of the most critical lines (see Supplementary 4), but additional protections
402
+ stop being effective after the first 6-10 lines (see Fig. 5(c) and (d)). This demonstrates that
403
+ the co-evolution model, with its detailed picture of the partially destroyed states, reveals
404
+ genuinely new and critical information for increasing the resilience of the system.
405
+ It is worth mentioning that the results obtained from homogeneous base failure rates are
406
+ similar to the randomised ones (see Supplementary Methods 5 and Supplementary Table 3).
407
+
408
+ Figure 5: Level of risk reduction that can be reached by protecting power lines according to
410
+ the priority index: The co-evolution model against the static model (a)-(b) 20 lines of the
411
+ Texan power grid with the highest priority index (see Eq. (4)) obtained from the static
412
+ model (orange lines), the co-evolution model (blue lines), and both approaches (green lines).
413
+ The inset (b) shows a close-up view of Houston and Harris County, which contain most of the
414
+ critical lines. As seen in (b) the critical lines obtained from both models are located in the
415
+ same region, however, the co-evolutionary model identifies additional lines whose
416
+ protection has a dramatic effect on increasing resilience. (c) Power outage distributions of
417
+ hurricane Harvey in terms of the number of critical lines protected in both the co-evolution
418
+ (blue) and the static model (orange). The second peak in the power outage distribution is
419
+ strongly reduced as the number of protected lines increases. However, protecting lines
420
+ obtained from the static model does not increase the resilience of the power grid as much as
421
+ occurs in the co-evolution model. (d) Reduction of the large power outages obtained from
422
+ both models. For all three strong hurricanes, i.e. Harvey, Ike and Claudette, the reduction in
423
+ power outages is much greater in co-evolution model than the static one.
424
+
425
+ [Figure 5 panels (a)-(d): maps of the 20 most critical lines (coevolution method, static method, both methods) over longitude/latitude with a close-up of the Houston region, and power outage distributions p(pout) [GW] versus the number of protected lines (0-20) for the coevolution and static methods applied to Harvey, Ike and Claudette.]
496
+ Conclusion and outlook
497
+ The co-evolution model of the Texan power grid has been introduced as an efficient
498
+ approach to temporally resolve the line failures and secondary grid outages induced by TCs.
499
+ The model can resolve to considerable detail the way secondary failure cascades amplify the
500
+ impact of extreme events. This information can then be used to identify critical lines that
501
+ should be protected to effectively increase the system's resilience and prevent the most
502
+ severe outages. Our model goes significantly beyond the state of the art so far represented
503
+ by statistical and economic models that can only capture a static picture of the event and
504
+ the network24–29. We have seen that such static approaches do not easily identify all of the
505
+ critical lines during extended events. Their importance is only revealed by stepwise ‘tracking’
506
+ the destruction of the system and associated power outages and overloads. We expect that
507
+ this co-evolution approach will also be a promising tool to understand and protect other
508
+ grids exposed to spatio-temporally extended extreme events.
509
+ The results of our study are in agreement with a recent TC related risk assessment for
510
+ Texas26. Combining our priority index with additional information about the cost of a
511
+ reinforcement of the considered lines could also enable the identification of the most cost
512
+ efficient way to reduce the probability of power outages above a critical limit to an intended
513
+ value (see Supplementary Methods 6).
514
+ While the model based on wind speeds and historical hurricane tracks already identified
515
+ crucial structures in the grid, the co-evolution approach could naturally be extended to more
516
+ sophisticated models and broader settings. One particularly important goal for future
517
+ research will be to drive the model with potential future storm tracks due to climate
518
+ change27. As the frequency of particularly strong TCs is expected to increase under global
519
+ warming (WGI contribution to the AR6), understanding what lines are critical in the face of
520
+ the weather of the next decades is crucial. Another important avenue of broadening the
521
+ model is to account for TC induced flooding (coastal flooding, pluvial or fluvial flooding) and
522
+ associated destructions. These may follow a different temporal pattern where the adequacy
523
+ of the approach proposed here has to be newly tested. This would also provide a first step
524
+ towards an assessment of genuine compound events in which several stresses for the grid
525
+ coincide.
526
+ Methods
527
+ Electric grid data of Texas
528
+ For the study we used the publicly available electric grid test case ACTIVSg200028, which
529
+ covers the area of the so-called ERCOT Interconnection, which supplies
530
+ percent of the
531
+ electricity demand in Texas29. The test case is synthetic but resembles fundamental
532
+
533
+ properties of the real grid, such as the spatial distribution of power generation and
534
+ demand21. It encompasses
535
+ buses with geographic locations,
536
+ branches (both
537
+ transmission lines and transformers) and covers four different voltage levels. The test
538
+ case comes with all required electrical parameters ranging from the power injections of
539
+ buses to the power flow capacities of transmission lines and transformers. The flow
540
+ capacities
541
+ play a particularly important role for the simulation of cascading failures
542
+ as they determine the amount of power that can be transported by individual lines and
543
+ transformers without potentially damaging the equipment.
544
+ Historical hurricane data
545
+ Hurricane storm tracks are extracted from the International Best Track Archive for
546
+ Climate Stewardship (IBTrACS)30, 31 as time series of cyclone center coordinates along
547
+ with meteorological variables like maximum sustained wind speeds and minimum
548
+ pressure on a
549
+ h snapshot basis. For this study, a hand-picked selection of seven
550
+ historical storms is used (Supplementary Fig. 1 and 2) to cover several different types of
551
+ trajectories and intensities. Particularly, the selection also includes storms that continue
552
+ to move westward after landfall and affect the southern and western parts of Texas (see
553
+ Hurricane Claudette, Tropical Storm Erin, and Hurricane Hanna in Supplementary Fig. 2
554
+ and the Supplementary Fig. 1), contrary to most hurricanes that are steered northward
555
+ by the Coriolis effect before western parts of Texas are reached22. From the track records,
556
+ we compute time series of wind fields within a radius of
557
+ km from the storm center
558
+ using the Holland model for surface winds, as implemented in the Python-package
559
+ CLIMADA32, 33, at a spatial resolution of
560
+ degrees (approximately
561
+ km) and a
562
+ temporal resolution of
563
+ minutes. The intensities of the considered storms are also
564
+ shown along the respective tracks in Supplementary Fig.1 while other properties of the
565
+ storms are listed in Supplementary Table 1.
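+ A minimal sketch of how such wind fields can be scripted with CLIMADA is given below. The storm identifier, grid bounds and resolution are placeholders, and the method names follow CLIMADA's public interface as we understand it rather than code from this study.
+ # Hedged sketch (not the study's code): Holland-model wind fields via CLIMADA.
+ from climada.hazard import Centroids, TCTracks, TropCyclone
+
+ # Load one historical track from IBTrACS; the storm id below is a placeholder.
+ tracks = TCTracks.from_ibtracs_netcdf(storm_id=["2017228N14314"])
+ tracks.equal_timestep(time_step_h=0.5)      # interpolate the sparse track records in time
+
+ # Regular lat/lon grid roughly covering Texas (bounds and spacing are illustrative).
+ cent = Centroids.from_pnt_bounds((-107.0, 25.0, -93.0, 37.0), res=0.1)
+
+ # Evaluate the Holland surface-wind model along the track.
+ tc = TropCyclone.from_tracks(tracks, centroids=cent)
+ # tc.intensity holds the per-event maximum wind per grid cell [m/s]; the time-resolved
+ # fields used in this study require the per-time-step output instead.
+ max_wind = tc.intensity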
566
+ Transmission line fragility model
567
+ To model wind-induced failures of transmission lines, we first differentiate between
568
+ overhead transmission lines and underground cables in the electric grid of Texas.
569
+ Following Birchfield et al., we analyse lines that are shorter than
570
+ km (
571
+ miles)
572
+ and connect a total load of at least
573
+ MW as underground cables21. All other lines are
574
+ assumed to be overhead transmission lines. The latter are then divided into segments of
575
+ length
576
+ m, which corresponds to the average distance between transmission
577
+ towers in Texas34. Our fragility model assigns failure rates to individual line segments
578
+ according to
579
+
580
+ where,
581
+ denotes the wind force acting on the line segment
582
+ for a given wind
583
+ speed
584
+ and is calculated according to the guidelines published by the American Society
585
+ of Civil Engineers35. The parameter
586
+ represents the inverse of the so-called time to
587
+ failure, which indicates how long a line segment can withstand a wind force equal to the
588
+ breaking force
589
+ . It is used as a free parameter to calibrate the model such that
590
+ historically reported power outages are reproduced in our simulations (see
600
+ Supplementary Methods 5). The full wind force equation as well as the meaning and the
601
+ values of all parameters can be found in Supplementary Methods 2 and Supplementary
602
+ Table 2. In all figures shown in the main text,
603
+ . Using the failure rates
604
+ , we define the probability that a line segment
605
+ fails during the time interval
606
+ as
607
+ This failure probability is inspired by the line fragility model established by Winkler et al.,
608
+ which assumes that the failure probability is proportional to the ratio of the wind force
609
+ and the breaking force36. However, in contrast to their model, we define the failure
610
+ probability
611
+ using a time-dependent failure rate
612
+ that allows us to take the time
613
+ evolution of a field into account. A line is removed from the test case if any of its line
614
+ segments fails during a time interval. It should be noted that multiple lines may be
615
+ destroyed in the same time step, meaning that they are removed from the network
616
+ simultaneously. According to Eq. (2), the probability of simultaneous failures increases
617
+ with time step size
618
+ . A discussion of the role of the time resolution can be found in
619
+ Supplementary Note 2.
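+ To make the fragility model concrete, the following schematic sketch shows one plausible reading of the failure rate and failure probability described above: an exponential waiting time whose rate scales with the ratio of wind force to breaking force. The functional form, variable names and all numerical values are illustrative assumptions; the exact expressions and calibrated parameters are given in Supplementary Methods 2 and Supplementary Table 2.
+ import numpy as np
+
+ RHO_AIR = 1.225            # air density [kg/m^3]
+ SEGMENT_LENGTH_M = 200.0   # placeholder for the tower-to-tower span stated in the text
+ CONDUCTOR_DIAM_M = 0.03    # placeholder conductor diameter
+ DRAG_COEFF = 1.0           # placeholder drag coefficient
+
+ def wind_force(v):
+     """Simplified drag-type wind force on one line segment for wind speed v [m/s]."""
+     return 0.5 * RHO_AIR * DRAG_COEFF * CONDUCTOR_DIAM_M * SEGMENT_LENGTH_M * v ** 2
+
+ def segment_failure_probability(v, dt_h, breaking_force=2.0e4, time_to_failure_h=0.01):
+     """P(segment fails within dt_h hours), assuming an exponential waiting time whose
+     rate is the force ratio F(v)/F_br divided by a time-to-failure parameter."""
+     rate_per_h = (wind_force(v) / breaking_force) / time_to_failure_h
+     return 1.0 - np.exp(-rate_per_h * dt_h)
+
+ # A line is removed in a 5-minute step as soon as any of its segments fails:
+ winds = np.array([35.0, 42.0, 51.0])                  # wind speeds at the segments [m/s]
+ p_line = 1.0 - np.prod(1.0 - segment_failure_probability(winds, dt_h=5 / 60))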
620
+ Cascading failure model
621
+ Wind-induced line failures can trigger cascades of overload failures in the branches of
622
+ the electric grid. As cascading failures typically evolve on smaller time scales than the
623
+ temporal resolution
624
+ of the wind field, we can assume a time scale separation. When
625
+ the network topology is changed by a primary damage event, the power flows
626
+ on
627
+ the branches are rerouted using the DC power flow model
628
+ here,
629
+ are the net active power injections at the buses,
630
+ are the bus voltage angles
631
+ and
632
+ are the elements of the nodal susceptance matrix that comprises the network
633
+ topology. More details on the assumptions of the DC power flow model and the software
634
+ used can be found in Supplementary Methods 3. If the new state of the network exhibits
635
+ any overloaded branches (
636
+ ), they are deactivated and the process is repeated.
637
+ When the network reaches a state without overloads, the algorithm advances to the
638
+
639
+ next primary damage event. When a load or generator gets disconnected or the grid is
641
+ split into several parts, the global active power balance (GAPB) has to be restored in each
642
+ network component. Motivated by a primary frequency control in real electric grids, we
643
+ adjust the outputs of generators uniformly, while respecting their output limits defined
644
+ in the data set. Whenever the generator limits do not allow to fully restore the GAPB, we
645
+ either conduct a uniform minimal load shedding or consider the blackout of the whole
646
+ network component in the case of an unavoidable overproduction. The details of the
647
+ algorithm are explained in Supplementary Methods 4.
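+ The overload cascade described above can be sketched as follows. This is a bare-bones Python illustration: the susceptance matrix construction, the slack-bus convention and the toy data are our own simplifications, and islanding as well as the restoration of the global active power balance (Supplementary Methods 4) are omitted.
+ import numpy as np
+
+ def dc_flows(n_bus, lines, injections):
+     """Solve B*theta = P (DC power flow) with bus 0 as slack; return (flow, capacity) per line."""
+     B = np.zeros((n_bus, n_bus))
+     for i, j, b, _cap in lines:                      # b: line susceptance, _cap: flow capacity
+         B[i, i] += b; B[j, j] += b
+         B[i, j] -= b; B[j, i] -= b
+     theta = np.zeros(n_bus)
+     theta[1:] = np.linalg.solve(B[1:, 1:], injections[1:])
+     return [(b * (theta[i] - theta[j]), cap) for i, j, b, cap in lines]
+
+ def cascade(n_bus, lines, injections):
+     """Deactivate overloaded branches and re-solve until no overloads remain."""
+     active = list(lines)
+     while True:
+         flows = dc_flows(n_bus, active, injections)
+         survivors = [ln for ln, (f, cap) in zip(active, flows) if abs(f) <= cap]
+         if len(survivors) == len(active):
+             return active                            # steady state without overloads
+         active = survivors                           # overloads are removed, process repeats
+
+ # Toy 3-bus example (balanced injections; no overload is triggered here):
+ lines = [(0, 1, 10.0, 1.0), (1, 2, 10.0, 0.2), (0, 2, 10.0, 1.0)]
+ surviving = cascade(3, lines, np.array([1.0, -0.5, -0.5]))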
648
+ Quantification of power outages
649
+ We use the following three different quantities to track the power outages arising in our
650
+ simulations: (i)
651
+ denotes the total supplied load at the end of each time step, i.e.,
652
+ after the cascading algorithm has finished. It is calculated by adding up the
653
+ demands of all connected loads across all islands that exist at the given time. Since our
654
+ co-evolution model assumes that cascading failures happen instantaneously,
655
+ represents a step function for each individual TC scenario as shown in Fig. 2(b). We have
656
+ simulated
657
+ scenarios for each hurricane. (ii) Any cascading failure that actually causes
658
+ a loss of supplied load results in a vertical transition of size
659
+ in
660
+ . One such
661
+ transition is annotated with
662
+ for the highlighted scenario in Fig. 2(b). (iii) All
663
+ cascading failures that are triggered in a given TC scenario lead to a final power outage
664
+ . The interesting statistics of
665
+ are
666
+ shown and discussed in Fig. 1(b) .
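+ For a single scenario, these quantities can be extracted from the supplied-load step function roughly as follows (illustrative numbers, not simulation output; quantity (i) is the series itself):
+ import numpy as np
+
+ supplied = np.array([60.0, 60.0, 57.5, 57.5, 49.0, 49.0])   # (i) supplied load over time [GW]
+ drops = np.diff(supplied)
+ delta_p = -drops[drops < 0]             # (ii) load lost in each individual cascade
+ p_out = supplied[0] - supplied[-1]      # (iii) final power outage of this scenario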
667
+ Identification of critical lines
668
+ We identify critical overhead transmission lines by means of a priority index defined for
669
+ each line
670
+ as
671
+ where
672
+ denotes the set of considered hurricanes (seven hurricanes in this study) and
673
+ is the probability of a large cascade being triggered by the wind-induced failure of
674
+ line
675
+ . More specifically, we call cascades large or belonging to type II if their
676
+ associated power outage
677
+ lies above an empirical threshold of
678
+ GW (indicated
679
+ as type II in Fig. 2(b) and Fig. 5(d)). Eq. (4) includes an averaging over all considered
680
+ hurricanes to discern lines that are critical for multiple hurricanes. This allows us to
681
+ propose line reinforcements that increase the resilience not only for a particular
682
+ hurricane. Some properties of the
683
+ most critical lines found in this study are listed in
684
+
685
+ Supplementary Table 3. Fig. 5(a) and (b) show the location of these lines and
688
+ demonstrates that reinforcing them indeed increases the resilience of the electric grid
689
+ substantially. More details of the critical lines and a possibility to incorporate economic
690
+ considerations into our analysis are discussed in Supplementary Methods 6.
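+ A sketch of how the priority index of Eq. (4) can be evaluated from the simulation output is shown below. The data layout and variable names are our own; the only input is, per hurricane and realisation, whether the wind-induced failure of a given line triggered a cascade with an outage above the threshold.
+ import numpy as np
+
+ def priority_index(triggered):
+     """triggered[h][r][l] is True when, in realisation r of hurricane h, the wind-induced
+     failure of line l triggered a cascade whose outage exceeds the threshold."""
+     per_hurricane = [np.asarray(h, dtype=float).mean(axis=0) for h in triggered]
+     return np.mean(per_hurricane, axis=0)   # average over the considered hurricanes
+
+ # Lines would then be ranked by the index, e.g. ranking = np.argsort(priority_index(data))[::-1]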
691
+ Baseline Method
692
+ Here, we apply the static model as a baseline method. By static model, we mean that all
693
+ primary damages occur simultaneously and, then, the DC power model along with global
694
+ active power balance (see Supplementary Methods 6) are activated once to bring back
695
+ the energy balance in the system and to evaluate the total final power outages
696
+ . As
697
+ discussed in Supplementary Note 2 the final power outage distributions are independent
698
+ of the time resolution of the wind field, however the primary damages leading to large
699
+ outages, i.e.
700
+ to
701
+ , can be completely different ones. To indicate the
702
+ critical lines obtained from the static model, first, we separate all scenarios in which
703
+ . Then, we use Eq. (4) to calculate the priority index of the primary
704
+ damages leading to large cascades. The top
705
+ lines with the highest priority index have
706
+ been listed in Supplementary Table 5. As seen in this table, except for the six lines
707
+ highlighted in red, the other lines are completely different from lines obtained from the
708
+ co-evolution model.
709
+ Code availability
710
+ All code necessary to reproduce the findings in this work is openly available. The
711
+ time-dependent wind fields are computed using the open-source platform CLIMADA32, 33.
712
+ The implementation of the transmission line fragility and the DC power model is
713
+ available from https://gitlab.pik-potsdam.de/stuermer/itcpg.jl.
714
+ Data availability
715
+ The observed TCs from IBTrACS30, 31 are distributed under the permissive WMO open data
716
+ licence through the IBTrACS website (https://www.ncei.noaa.gov/products/international-best-track-archive) and can be
725
+ directly retrieved through the CLIMADA32, 33 platform. The electrical network data is
726
+ openly available from the Texas
727
+ A&M University’s electric grid test case repository
728
+ (https://electricgrids.engr.tamu.edu/electricgrid-test-cases/activsg2000/).
729
+ Acknowledgements
730
+ This project has received funding from the ConNDyNet2 project under grant no.
731
+ 03EF3055F. This research has received funding from the German Academic Scholarship
732
+ Foundation and the German Federal Ministry of Education and Research (BMBF) under
733
+
734
+ 20 GM30 GMnoC
735
+ 15GMthe research projects QUIDIC (01LP1907A) and SLICE (FKZ: 01LA1829A), and from the
736
+ CHIPS project, part of AXIS, an ERA-NET initiated by JPI Climate, funded by FORMAS
737
+ (Sweden), DLR/BMBF (Germany, grant no. 01LS1904A), AEI (Spain) and ANR (France)
738
+ with co-funding by the European Union (grant no. 776608).
739
+ Author Contribution
740
+ M. Anvari, F. Hellmann and C. Otto contributed to designing and conceiving the research. The
741
+ co-evolution model is designed and developed by M. Anvari, J. Stürmer, A. Plietzsch and
742
+ F. Hellmann. All simulations and data analyses of this work have been done by J.
743
+ Stürmer under the supervision of M. Anvari. All hurricane data have been provided by
744
+ T. Vogt during this research. All authors contributed to discussing and interpreting the
745
+ results, and contributed to writing the manuscript.
746
+ Competing Interests
747
+ The authors declare that they have no competing interests.
748
+ References
749
+ 1.
750
+ J. Bialek, What Does the Power Outage on 9 August 2019 Tell Us about GB Power
751
+ System, University of Cambridge, Energy Policy Research Group, Cambridge, Technical
752
+ Report 2006, 2020.
753
+ 2.
754
+ S. V. Buldyrev, R. Parshani, G. Paul, H. E. Stanley, and S. Havlin, Catastrophic cascade of
755
+ failures in interdependent networks, Nature, vol. 464, no. 7291, pp. 1025–1028, 2010.
756
+ 3.
757
+ N. E. Observatory, Extreme Winter Weather Causes U.S. Blackouts. 2021.
758
+ 4.
759
+ T. P. N. I. A. Council, Surviving a Catastrophic Power Outage. 2018.
760
+ 5.
761
+ Ø. R. Solheim, T. Trötscher, and G. Kjølle, Wind dependent failure rates for overhead
762
+ transmission lines using reanalysis data and a Bayesian updating scheme, in 2016
763
+ International Conference on Probabilistic Methods Applied to Power Systems (PMAPS),
764
+ 2016, pp. 1–7.
765
+ 6.
766
+ A. Technica, New report suggests Texas’ grid was 5 minutes from catastrophic failure.
767
+ 2021.
768
+ 7.
769
+ J. W. Busby et al., Cascading risks: Understanding the 2021 winter blackout in Texas,
770
+ Energy Res. Soc. Sci., vol. 77, p. 102106, 2021.
771
+ 8.
772
+ FERC, NERC and Regional Entity Staff, The February 2021 Cold Weather Outages in
773
+ Texas and the South Central United States. Nov. 2021. [Online]. Available:
778
+ https://www.ferc.gov/media/february-2021-cold-weather-outages-texas-and-south-ce
779
+ ntral-united-states-ferc-nerc-and
780
+ 9.
781
+ R. B. N. H. Center, Tropical Cyclone Report Hurricane Ike. Jan. 2009.
782
+ 10.
783
+ B. L. Preston et al., Resilience of the US electricity system: a multi-hazard perspective,
784
+ US Dep. Energy Off. Policy Wash. DC, 2016.
785
+ 11.
786
+ F. News, Ida: At least 1 dead, more than a million customers without power in
787
+ Louisiana. 2021.
788
+ 12.
789
+ A. J. Wood, B. F. Wollenberg, and G. B. Sheblé, Power generation, operation, and
790
+ control. John Wiley & Sons, 2013.
791
+
792
+ 13.
793
+ I. Dobson, B. A. Carreras, V. E. Lynch, and D. E. Newman, Complex systems analysis of
794
+ series of blackouts: Cascading failure, critical points, and self-organization, Chaos
795
+ Interdiscip. J. Nonlinear Sci., vol. 17, no. 2, p. 026103, 2007.
796
+ 14.
797
+ P. Hines, E. Cotilla-Sanchez, and S. Blumsack, Do topological models provide good
798
+ information about electricity infrastructure vulnerability?, Chaos Interdiscip. J.
799
+ Nonlinear Sci., vol. 20, no. 3, p. 033122, 2010.
800
+ 15.
801
+ D. Witthaut and M. Timme, Nonlocal effects and countermeasures in cascading failures,
802
+ Phys. Rev. E, vol. 92, no. 3, p. 032809, 2015.
803
+ 16.
804
+ M. Rohden, D. Jung, S. Tamrakar, and S. Kettemann, Cascading failures in ac electricity
805
+ grids, Phys. Rev. E, vol. 94, no. 3, p. 032209, 2016.
806
+ 17.
807
+ A. Plietzsch, P. Schultz, J. Heitzig, and J. Kurths, Local vs. global redundancy–trade-offs
808
+ between resilience against cascading failures and frequency stability, Eur. Phys. J. Spec.
809
+ Top., vol. 225, no. 3, pp. 551–568, 2016.
810
+ 18.
811
+ S. Pahwa, C. Scoglio, and A. Scala, Abruptness of cascade failures in power grids, Sci.
812
+ Rep., vol. 4, no. 1, pp. 1–9, 2014.
813
+ 19.
814
+ I. Simonsen, L. Buzna, K. Peters, S. Bornholdt, and D. Helbing, Transient dynamics
815
+ increasing network vulnerability to cascading failures, Phys. Rev. Lett., vol. 100, no. 21,
816
+ p. 218701, 2008.
817
+ 20.
818
+ U.S.-Canada Power System Outage Task Force, Final Report on the August 14, 2003
819
+ Blackout in the United States and Canada: Causes and Recommendations. 2004.
820
+ [Online].
821
+ Available:
822
+ https://www.energy.gov/sites/default/files/oeprod/DocumentsandMedia/BlackoutFina
823
+ l-Web.pdf
824
+ 21.
825
+ A. B. Birchfield, T. Xu, K. M. Gegner, K. S. Shetye, and T. J. Overbye, Grid structural
826
+ characteristics as validation criteria for synthetic networks, IEEE Trans. Power Syst., vol.
827
+ 32, no. 4, pp. 3258–3265, 2016.
828
+ 22.
829
+ E. A. Keller and D. E. DeVecchio, Natural hazards: earth’s processes as hazards,
830
+ disasters, and catastrophes. Routledge, 2016.
831
+ 23.
832
+ S. Morris, P. Rocha, K. Donohoo, B. Blevins, and M. K, ERCOT Hurricane Ike Summary,
833
+ 2009,
834
+ [Online].
835
+ Available:
836
+ https://www.ercot.com/files/docs/
837
+ 2009/01/30/ros_hurricane_ike_report___tac_adopted.pdf
838
+ 24.
839
+ B. Schäfer, D. Witthaut, M. Timme, and V. Latora, Dynamically induced cascading
840
+ failures in power grids, Nat. Commun., vol. 9, no. 1, pp. 1–13, 2018.
841
+ 25.
842
+ Federal Network Agency for Electricity, Gas, Telecommunications, Post and Railways,
843
+ On the disturbance in the German and European power system on the 4th of
844
+ November 2006. Feb. 2007.
845
+ 26.
846
+ A. B. Smith, U.S. Billion-dollar Weather and Climate Disasters, 1980 - present. NOAA
847
+ National Centers for Environmental Information, 2020. doi: 10.25921/stkw-7w73.
848
+ 27.
849
+ T. Geiger, J. Gütschow, D. N. Bresch, K. Emanuel, and K. Frieler, Double benefit of
850
+ limiting global warming for tropical cyclone exposure, Nat. Clim. Change, pp. 1–6, 2021.
851
+ 28.
852
+ A. Birchfield, ACTIVSg2000: 2000-bus synthetic grid on footprint of Texas. Sep. 2020.
853
+ 29.
854
+ E. R. C. of Texas (ERCOT), ERCOT fact sheet. 2021.
855
+ 30.
856
+ K. R. Knapp, M. C. Kruk, D. H. Levinson, H. J. Diamond, and C. J. Neumann, The
857
+ international best track archive for climate stewardship (IBTrACS) unifying tropical
858
+ cyclone data, Bull. Am. Meteorol. Soc., vol. 91, no. 3, pp. 363–376, 2010.
859
+ 31.
860
+ K. R. Knapp, H. J. Diamond, J. P. Kossin, M. C. Kruk, and C. J. Schreck, International
861
+ Best Track Archive for Climate Stewardship (IBTrACS) Project, Version 4, NOAA Natl.
862
+
863
+ Cent. Environ. Inf., 2018.
864
+ 32.
865
+ T. Geiger, K. Frieler, and D. N. Bresch, A global historical data set of tropical cyclone
866
+ exposure (TCE-DAT), Earth Syst. Sci. Data, vol. 10, no. 1, pp. 185–194, 2018.
867
+ 33.
868
+ G. Holland, A revised hurricane pressure–wind model, Mon. Weather Rev., vol. 136, no.
869
+ 9, pp. 3432–3445, 2008.
870
+ 34.
871
+ E. B. Watson and A. H. Etemadi, Modeling Electrical Grid Resilience Under Hurricane
872
+ Wind Conditions With Increased Solar and Wind Power Generation, IEEE Trans Power
873
+ Syst, vol. 35, no. 2, pp. 929–937, Mar. 2020, doi: 10.1109/TPWRS.2019.2942279.
874
+ 35.
875
+ C. J. Wong and M. D. Miller, Guidelines for electrical transmission line structural
876
+ loading, 2009.
877
+ 36.
878
+ J. Winkler, L. Duenas-Osorio, R. Stein, and D. Subramanian, Performance assessment of
879
+ topologically diverse power systems subjected to hurricane events, Reliab. Eng. Syst.
880
+ Saf., vol. 95, no. 4, pp. 323–336, 2010.
881
+
4dFST4oBgHgl3EQfZzjU/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
5tE4T4oBgHgl3EQfbwzK/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d874d50510d6624ef59bed82f766b623fc600d9d4b1dc4e24368a4fe981f79b2
3
+ size 6225965
89FAT4oBgHgl3EQfpR2f/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e4d52fee0247b17e3b505fb97eb1fbdd935af23426398b27e86573282fb57b7d
3
+ size 6946861
89FAT4oBgHgl3EQfpR2f/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3c26ec27ffd78a4d003e856000832906b325b21516ee0d58a162015ba4da3db3
3
+ size 249046
8dAyT4oBgHgl3EQfQvbW/content/2301.00054v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:caca0054e96adee333193b1bc7c202fd36e2a538eda50e96f3ceedce65bd9f70
3
+ size 834566
8dAyT4oBgHgl3EQfQvbW/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:928249e31c2a9380ef8ed3f490acad660a2a40d786eecfab3ce3a77dc5263f81
3
+ size 2752557
8dAyT4oBgHgl3EQfQvbW/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c7ce82b009e77d70be4bc51944f6b16a0fa9a3d1b3bb9e5c98d2292c9d6cb7b8
3
+ size 117393
9dE2T4oBgHgl3EQflwec/content/tmp_files/2301.03992v1.pdf.txt ADDED
@@ -0,0 +1,2044 @@
1
+ Vision Transformers Are Good Mask Auto-Labelers
2
+ Shiyi Lan1
3
+ Xitong Yang2
4
+ Zhiding Yu1
5
+ Zuxuan Wu3
6
+ Jose M. Alvarez1
7
+ Anima Anandkumar1,4
8
+ 1NVIDIA
9
+ 2Meta AI, FAIR
10
+ 3Fudan University
11
+ 4Caltech
12
+ https://github.com/NVlabs/mask-auto-labeler
13
+ Figure 1. Examples of mask pseudo-labels generated by Mask Auto-Labeler on COCO. Only human-annotated bounding boxes are
14
+ used as supervision during training to obtain these results.
15
+ Abstract
16
+ We propose Mask Auto-Labeler (MAL), a high-quality
17
+ Transformer-based mask auto-labeling framework for in-
18
+ stance segmentation using only box annotations. MAL takes
19
+ box-cropped images as inputs and conditionally generates
20
+ their mask pseudo-labels. We show that Vision Transform-
21
+ ers are good mask auto-labelers. Our method significantly
22
+ reduces the gap between auto-labeling and human annota-
23
+ tion regarding mask quality. Instance segmentation models
24
+ trained using the MAL-generated masks can nearly match
25
+ the performance of their fully-supervised counterparts, re-
26
+ taining up to 97.4% performance of fully supervised mod-
27
+ els. The best model achieves 44.1% mAP on COCO in-
28
+ stance segmentation (test-dev 2017), outperforming state-
29
+ of-the-art box-supervised methods by significant margins.
30
+ Qualitative results indicate that masks produced by MAL
31
+ are, in some cases, even better than human annotations.
32
+ 1. Introduction
33
+ Computer vision has seen significant progress over the
34
+ last decade. Tasks such as instance segmentation have made
35
+ it possible to localize and segment objects with pixel-level
36
+ accuracy.
37
+ However, these tasks rely heavily on expen-
38
+ sive human mask annotations. For instance, when creat-
39
+ ing the COCO dataset, about 55k worker hours were spent
40
+ on masks, which takes about 79% of the total annotation
41
+ time [1]. Moreover, humans also make mistakes. Human
42
+ annotations are often misaligned with actual object bound-
43
+ aries. On complicated objects, human annotation quality
44
+ tends to drop significantly if there is no quality control. Due
45
+ to the expensive cost and difficulty of quality control, some
46
+ other large-scale detection datasets such as Open Images [2]
47
+ and Objects365 [3], only contain partial or even no instance
48
+ segmentation labels.
49
+ In light of these limitations, there is an increasing in-
50
+ terest in pursuing box-supervised instance segmentation,
51
+ where the goal is to predict object masks from bounding
52
+ box supervision directly. Recent box-supervised instance
53
+ segmentation methods [4–8] have shown promising perfor-
54
+ mance. The emergence of these methods challenges the
55
+ long-held belief that mask annotations are needed to train
56
+ instance segmentation models. However, there is still a non-
57
+ negligible gap between state-of-the-art approaches and their
58
+ fully-supervised oracles.
59
+ Our contributions: To address box-supervised instance
60
+ segmentation, we introduce a two-phase framework consist-
61
+ ing of a mask auto-labeling phase and an instance segmenta-
62
+ tion training phase (see Fig. 2). We propose a Transformer-
63
+ based mask auto-labeling framework, Mask Auto-Labeler
64
+ (MAL), that takes Region-of-interest (RoI) images as inputs
65
+ 1
66
+ arXiv:2301.03992v1 [cs.CV] 10 Jan 2023
67
+
68
195
+ [Figure 2 schematic: cropped regions trained with a box-supervised loss in Phase 1 (Mask Auto-labeling); MAL-generated masks then supervise a standard mask loss in Phase 2 (Instance Segmentation Training).]
210
+ Figure 2.
211
+ An overview of the two-phase framework of box-
212
+ supervised instance segmentation. For the first phase, we train
213
+ Mask Auto-Labeler using box supervision and conditionally gen-
214
+ erate masks of the cropped regions in training images (top). We
215
+ then train the instance segmentation models using the generated
216
+ masks (bottom).
217
+ and conditionally generates high-quality masks (demon-
218
+ strated in Fig. 1) within the box. Our contributions can be
219
+ summarized as follows:
220
+ • Our two-phase framework presents a versatile design
221
+ compatible with any instance segmentation architecture.
222
+ Unlike existing methods, our framework is simple and
223
+ agnostic to instance segmentation module designs.
224
+ • We show that Vision Transformers (ViTs) used as image
225
+ encoders yield surprisingly strong auto-labeling results.
226
+ We also demonstrate that some specific designs in MAL,
227
+ such as our attention-based decoder, multiple-instance
228
+ learning with box expansion, and class-agnostic training,
229
+ are crucial for strong auto-labeling performance. Thanks to
230
+ these components, MAL sometimes even surpasses hu-
231
+ mans in annotation quality.
232
+ • Using MAL-generated masks for training, instance seg-
233
+ mentation models achieve up to 97.4% of their fully
234
+ supervised performance on COCO and LVIS. Our re-
235
+ sult significantly narrows down the gap between box-
236
+ supervised and fully supervised approaches. We also
237
+ demonstrate the outstanding open-vocabulary general-
238
+ ization of MAL by labeling novel categories not seen
239
+ during training.
240
+ Our method outperforms all the existing state-of-the-
241
+ art box-supervised instance segmentation methods by large
242
+ margins. This might be attributed to good representations
243
+ of ViTs and their emerging properties such as meaningful
244
+ grouping [9], where we observe that the attention to objects
245
+ might benefit our task significantly (demonstrated in Fig.
246
+ 6). We also hypothesize that our class-agnostic training de-
247
+ sign enables MAL to focus on learning general grouping
248
+ instead of focusing on category information. Our strong re-
249
+ sults pave the way to remove the need for expensive human
250
+ annotation for instance segmentation in real-world settings.
251
+ 2. Related work
252
+ 2.1. Vision Transformers
253
+ Transformers were initially proposed in natural language
254
+ processing [10].
255
+ Vision Transformers [11] (ViTs) later
256
+ emerged as highly competitive visual recognition models
257
+ that use multi-head self-attention (MHSA) instead of con-
258
+ volutions as the basic building block. These models are re-
259
+ cently marked by their competitive performance in many vi-
260
+ sual recognition tasks [12]. We broadly categorize existing
261
+ ViTs into two classes: plain ViTs, and hierarchical ViTs.
262
+ Standard Vision Transformers. Standard ViTs [11] are
263
+ the first vision transformers. Standard ViTs have the sim-
264
+ plest structures, which consist of a tokenization embedding
265
+ layer followed by a sequence of MHSA layers. However,
266
+ global MHSA layers can be heavy and usually face signif-
267
+ icant optimization issues. To improve their performance,
268
+ many designs and training recipes are proposed to train
269
+ ViTs in data-efficient manners [9,13–19].
270
+ Hierarchical Vision Transformers. Hierarchical Vision
271
+ Transformers [12,20–22] are pyramid-shaped architectures
272
+ that aim to benefit other tasks besides image classification
273
+ with their multi-scale designs. On top of plain ViTs, these
274
+ ViTs [20,21] separate their multi-head self-attention layers
275
+ into hierarchical stages. Between the stages, there are spa-
276
+ tial reduction layers, such as max-pooling layers. These ar-
277
+ chitectures are usually mixed with convolutional layers [23]
278
+ and often adopt efficient self-attention designs to deal with
279
+ long sequence lengths.
280
+ 2.2. Instance segmentation
281
+ Instance segmentation is a visual recognition task that
282
+ predicts the bounding boxes and masks of objects.
283
+ Fully supervised instance segmentation. In this setting,
284
+ both bounding boxes and instance-level masks are provided
285
+ as the supervision signals. Early works [24–27] follow a
286
+ two-stage architecture that generates box proposals or seg-
287
+ mentation proposals in the first stage and then produces the
288
+ final segmentation and classification information in the sec-
289
+ ond stage. Later, instance segmentation models are broadly
290
+ divided into two categories: some continue the spirit of
291
+ the two-stage design and extend it to multi-stage architec-
292
+ tures [28, 29].
293
+ Others simplify the architecture and pro-
294
+ pose one-stage instance segmentation, e.g., YOLACT [30],
295
+ SOLO [31, 32], CondInst [33], PolarMask [34, 35]. Re-
296
+ cently, DETR and Deformable DETR [36, 37] show great
297
+ potential of query-based approaches in object detection.
298
+ Then, methods like MaxDeepLab [38], MaskFormer [39],
299
+ 2
300
+
301
+ [Figure 3 schematic: task and teacher networks with image encoders E, E_t and mask decoders D, D_t (MaxPool/FC heads K, V), EMA weight updates, positive/negative bags for the multiple instance learning loss, and the mean-field CRF loss used for self-training.]
361
+ Figure 3. Overview of MAL architecture. We visualize the architecture of Mask Auto-Labeler. Mask Auto-Labeler takes cropped images
362
+ as inputs. Mask Auto-Labeler consists of two symmetric networks, Task Network and Teacher Network. Each network contains the image
363
+ encoder E(or Et), and the mask decoder D(or Dt). We use the exponential moving average (EMA) to update the weights of the teacher
364
+ network. We apply multiple instance learning (MIL) loss and conditional random fields (CRFs) loss. The CRF loss takes the average mask
365
+ predictions of the teacher network and the task network to make the training more stable and generate refined masks for self-training.
366
+ PanopticSegFormer [40], Mask2Former [41] and Mask
367
+ DINO [42] are introduced along this line and have pushed
368
+ the boundary of instance segmentation. On the other hand,
369
+ the instance segmentation also benefits from more power-
370
+ ful backbone designs, such as Swin Transformers [12, 22],
371
+ ViTDet [43], and ConvNeXt [44].
372
+ Weakly supervised instance segmentation. There are two
373
+ main styles of weakly supervised instance segmentation:
374
+ learning with image-level and box-level labels. The former
375
+ uses image-level class information to perform instance seg-
376
+ mentation [45–49], while the latter uses box-supervision.
377
+ Hsu et al. [4] leverages the tight-box priors. Later, Box-
378
+ Inst [5] proposes to leverage color smoothness to improve
379
+ accuracy. Besides that, DiscoBox [7] proposes to leverage
380
+ both color smoothness and inter-image correspondence for
381
+ the task. Other follow-ups [6,8] also leverage tight-box pri-
382
+ ors and color smoothness priors.
383
+ 2.3. Deep learning interpretation
384
+ The interest in a deeper understanding of deep net-
385
+ works has inspired many works to study the interpreta-
386
+ tion of deep neural networks.
387
+ For example, Class Ac-
388
+ tivation Map (CAM) [50] and Grad-CAM [51] visualize
389
+ the emerging localization during image classification train-
390
+ ing of convolutional neural networks (CNNs). This abil-
391
+ ity has also inspired much weakly-supervised localization
392
+ and shows deep connections to general weakly-supervised
393
+ learning, which partly motivates our decoder design in this
394
+ paper. DINO [9] further shows that meaningful visual group-
395
+ ing emerges during self-supervised learning with ViTs. In
396
+ addition, FAN [52] shows that such emerging properties in
397
+ ViTs are linked to their robustness.
398
+ 3. Method
399
+ Our work differs from previous box-supervised instance
400
+ segmentation frameworks [4–8] that simultaneously learn
401
+ detection and instance segmentation. We leverage a two-
402
+ phase framework as visualized in Fig. 2, which allows us to
403
+ have a network focused on generating mask pseudo-labels
404
+ in phase 1, and another network focused on learning in-
405
+ stance segmentation [24, 28, 41, 43] in phase 2. Our pro-
406
+ posed auto-labeling framework is used in phase 1 to gener-
407
+ ate high-quality mask pseudo-labels.
408
+ We propose this two-phase framework because it brings
409
+ the following benefits:
410
+ • We can relax the learning constraints in phase 1 and
411
+ focus only on mask pseudo-labels. Therefore, in this
412
+ phase, we can take Region-of-interest (RoI) images in-
413
+ stead of untrimmed images as inputs. This change al-
414
+ lows us to use a higher resolution for small objects and
415
+ a strong training technique mentioned in Sec. 3.1, which
416
+ helps improve the mask quality.
417
+ • We can leverage different image encoders and mask de-
418
+ coders in phases 1 and 2 to achieve higher performance.
419
+ We empirically found that phases 1 and 2 favor different
420
+ architectures for the image encoders and mask decoders.
421
+ See the ablation study in Tab. 3 and 4.
422
+ • We can use MAL-generated masks to directly train the
423
+ most fully supervised instance segmentation models in
424
+ phase 2. This makes our approach more flexible than
425
+ previous architecture-specific box-supervised instance
426
+ segmentation approaches [4–8].
427
+ As phase 2 follows the previous standard pipelines,
428
+ which do not need to be re-introduced here, we focus on
429
+ introducing phase 1 (MAL) in the following subsections.
430
+ 3
431
+
432
+ 0
433
+ 50
434
+ 100
435
+ 150
436
+ 200
437
+ 250
438
+ 300
439
+ 350
440
+ 400
441
+ 0
442
+ 100
443
+ 200
444
+ 300
445
+ 400
446
+ 500
447
+ 6003.1. RoI input generation
448
+ Most
449
+ box-supervised
450
+ instance
451
+ segmentation
452
+ ap-
453
+ proaches [4–7] are trained using the entire images.
454
+ However, we find that using RoI images might have
455
+ more benefits in box-supervised instance segmentation.
456
+ Moreover, we compare two intuitive sampling strategies
457
+ of RoI images to obtain foreground and background pixels
458
+ and explain the better strategy, box expansion, in detail.
459
+ Benefits of using RoI inputs. There are two advantages of
460
+ using RoI images for inputs. First, using the RoI images
461
+ as inputs is naturally good for handling small objects be-
462
+ cause no matter how small the objects are, the RoI images
463
+ are enlarged to avoid the issues caused by low resolution.
464
+ Secondly, having RoI inputs allows MAL to focus on learn-
465
+ ing segmentation and avoid being distracted from learning
466
+ other complicated tasks, e.g., object detection.
+ RoI sampling strategy. The sampling strategy should ensure both
468
+ positive and negative pixels are included. We present two
469
+ straightforward sampling strategies:
470
+ • The first strategy is to use bounding boxes to crop the
471
+ images for positive inputs. We crop the images using
472
+ randomly generated boxes containing only background
473
+ pixels for negative inputs. MAL does not generate good
474
+ mask pseudo-labels with this cropping strategy. We observe
475
+ that the networks tend to learn the trivial solution (all
476
+ pixels are predicted as either foreground or background).
477
+ • The second is to expand the bounding boxes randomly
478
+ and include background pixels, where negative bags are
479
+ chosen from the expanded rows and columns. We visu-
480
+ alize how we define positive/negative bags in Fig. 3 and
481
+ explain the detail in Sec. 3.3. This detailed design is
482
+ critical to make MAL work as it prevents MAL from
483
+ learning trivial solutions. Without this design, the gen-
484
+ erated masks tend to fill the entire bounding box.
485
+ Box expansion specifics. Given an untrimmed image Iu ∈ R^(C×Hu×Wu) and the bounding box
+ b = (x0, y0, x1, y1) indicating the x, y coordinates of the top-left and bottom-right corners, we
+ obtain background pixels by randomly expanding the bounding box b to
+ b′ = (xc + βx(x0 − xc), yc + β′x(y0 − yc), xc + βy(x1 − xc), yc + β′y(y1 − yc)),
+ where xc = (x0 + x1)/2 and yc = (y0 + y1)/2. To generate random values of βx, β′x, βy, β′y, we
+ randomly draw θx, θy ∈ [0, θ] for the x- and y-directions, where θ is the upper bound of the box
+ expansion rate. Next, we randomly draw βx ∈ [0, θx] and βy ∈ [0, θy]. In the end, we assign β′x as
+ θx − βx and β′y as θy − βy. Finally, we use b′ to crop the image and obtain the trimmed image It.
+ We conduct the ablation study for θ in Tab. 5. At last, we resize the trimmed image It to the size
+ of C × Hc × Wc as the input image Ic.
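+ The random box expansion can be transcribed directly from the expressions above; the sketch below does exactly that (the default value of θ is a placeholder, since the paper ablates θ in Tab. 5).
+ import random
+
+ def expand_box(box, theta=2.0):
+     """box = (x0, y0, x1, y1) in pixels; theta is the upper bound of the expansion rate."""
+     x0, y0, x1, y1 = box
+     xc, yc = (x0 + x1) / 2.0, (y0 + y1) / 2.0
+     theta_x, theta_y = random.uniform(0, theta), random.uniform(0, theta)
+     beta_x, beta_y = random.uniform(0, theta_x), random.uniform(0, theta_y)
+     beta_xp, beta_yp = theta_x - beta_x, theta_y - beta_y        # beta'_x and beta'_y
+     return (xc + beta_x * (x0 - xc), yc + beta_xp * (y0 - yc),
+             xc + beta_y * (x1 - xc), yc + beta_yp * (y1 - yc))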
506
518
+ Figure 4. (a) The fully connected decoder (b) The fully convolu-
519
+ tional Decoder (c) The attention-based decoder (used in MAL) (d)
520
+ The query-based Decoder.
521
+ 3.2. MAL architecture
522
+ MAL can be divided into two symmetric networks: the
523
+ task network and the teacher network. The task network
524
+ consists of an image encoder denoted as E, and a mask de-
525
+ coder denoted as D, demonstrated in Fig. 3. The architec-
526
+ ture of the teacher network is identical to the task network.
527
+ We denote the segmentation output of the task network and
528
+ the teacher network as m, mt ∈ {0, 1}N, respectively.
529
+ Image encoder. We use Standard ViTs [11] as the image
530
+ encoder and drop the classification head of Standard ViTs.
531
+ We compare different image encoders in Sec. 4.4. We also
532
+ try feature pyramid networks on top of Standard ViTs, e.g.,
533
+ FPN [53], but it causes a performance drop. Similar con-
534
+ clusions were also found in ViTDet [43].
535
+ Mask decoder. For the mask decoder D, we use a simple
536
+ attention-based network inspired by YOLACT [30], which
537
+ includes an instance-aware head K and a pixel-wise head
538
+ V , where D(E(I)) = K(E(I)) · V (E(I)), and “ · ” repre-
539
+ sents the inner-product operator.
540
+ For the instance-aware head K, we use a max-pooling
541
+ layer followed by a fully connected layer. The input chan-
542
+ nel dimension of K is equivalent to the output channel di-
543
+ mension of E. The output channel dimension of K is 256.
544
+ For the pixel-wise head V , we use four sequential convo-
545
+ lutional layers. Each is followed by a ReLU layer. Between
546
+ the second and the third convolutional layer, we insert a bi-
547
+ linear interpolation layer to increase the feature resolution
548
+ by 2. The input channel dimension is equivalent to the out-
549
+ put channel dimension of E. We use 256 dimensions for
550
+ hidden channels and output channels. We also compare dif-
551
+ ferent design choices of mask decoders in Sec. 4.5.
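+ A minimal PyTorch sketch of this decoder is given below. It is an interpretation of the description above rather than the released implementation; the kernel sizes and the final sigmoid are our assumptions, while the channel widths, the max-pooling/fully-connected head K, the four-convolution head V with a bilinear ×2 upsample, and the inner product K(E(I)) · V(E(I)) follow the text.
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class AttentionDecoder(nn.Module):
+     def __init__(self, in_ch, hid_ch=256):
+         super().__init__()
+         self.k_fc = nn.Linear(in_ch, hid_ch)                      # instance-aware head K
+         self.v = nn.Sequential(                                   # pixel-wise head V
+             nn.Conv2d(in_ch, hid_ch, 3, padding=1), nn.ReLU(),
+             nn.Conv2d(hid_ch, hid_ch, 3, padding=1), nn.ReLU(),
+             nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
+             nn.Conv2d(hid_ch, hid_ch, 3, padding=1), nn.ReLU(),
+             nn.Conv2d(hid_ch, hid_ch, 3, padding=1), nn.ReLU(),
+         )
+
+     def forward(self, feat):                                      # feat: (B, C, H, W) encoder output
+         k = self.k_fc(F.adaptive_max_pool2d(feat, 1).flatten(1))  # (B, hid_ch)
+         v = self.v(feat)                                          # (B, hid_ch, 2H, 2W)
+         mask_logits = torch.einsum("bc,bchw->bhw", k, v)          # inner product K(E(I)) . V(E(I))
+         return mask_logits.sigmoid()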
552
+ Exponential moving average (EMA) teacher. Instead of
553
+ training the teacher network directly, we leverage exponen-
554
+ tial moving averages (EMA) to update the parameters in the
555
+ teacher network using the parameters in the task network
556
+ similar to MOCO [54]. The goal of using EMA Teacher
557
+ is to eliminate the loss-explosion issues in training since
558
+ optimizing Standard Vision Transformers is usually non-
559
+ trivial [13, 14, 16]. We do not observe any significant per-
560
+ formance drop or improvement on DeiT-small-based MAL
561
+ after removing the teacher network. However, it makes the
562
+ training more stable when we use larger-scale image en-
563
+ coders in MAL, e.g. ViT-MAE-Base [13].
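+ The EMA update itself is the standard recipe (the momentum of 0.996 is the value reported in Sec. 4.2); buffers, if any, would be synchronised analogously.
+ import torch
+
+ @torch.no_grad()
+ def ema_update(teacher, student, momentum=0.996):
+     # teacher weights drift slowly towards the task-network weights
+     for p_t, p_s in zip(teacher.parameters(), student.parameters()):
+         p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)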
564
+ 4
565
+
566
+ 3.3. Losses
567
+ We use Multiple Instance Learning Loss Lmil and Con-
568
+ ditional Random Field Loss Lcrf as the box-supervised loss:
569
+ L = αmilLmil + αcrfLcrf
570
+ (1)
571
+ Multiple Instance Learning Loss. The motivation of the
572
+ Multiple Instance Segmentation is to exploit the priors of
573
+ tight-bounding box annotations.
574
+ After the student network produces the output m, we ap-
575
+ ply the Multiple Instance Learning (MIL) Loss on the out-
576
+ put mask m. We demonstrate the process in Fig. 3.
577
+ We denote mi,j as the mask score at the location i, j in
578
+ the image Ic. We define each pixel as an instance in the
579
+ MIL loss. Inspired by BBTP [4], we treat each row or col-
580
+ umn of pixels as a bag. We determine whether a bag is pos-
581
+ itive or negative based on whether it passes a ground-truth
582
+ box. We define the bags as B, and each bag Bi contains a
583
+ row or column of pixels. Additionally, we define the label
584
+ for each bag g, and each label gi corresponds to a bag Bi.
585
+ Therefore, we use the max pooling as the reduction func-
586
+ tion and dice loss [55]:
587
+ Lmil = 1 − (2 Σi gi · max{Bi}) / (Σi max{Bi}^2 + Σi gi^2)    (2)
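+ A compact PyTorch reading of this loss is sketched below: one bag per row and per column, max-pooling as the reduction, and a dice-style objective. The shapes and the integer box coordinates are illustrative assumptions, not the authors' code.
+ import torch
+
+ def mil_loss(mask, box, eps=1e-6):
+     """mask: (H, W) predicted foreground scores in [0, 1]; box: integer (x0, y0, x1, y1)."""
+     x0, y0, x1, y1 = box
+     col_scores = mask.max(dim=0).values               # one bag per column, reduced by max
+     row_scores = mask.max(dim=1).values               # one bag per row
+     scores = torch.cat([col_scores, row_scores])
+     labels = torch.zeros_like(scores)                 # a bag is positive iff it crosses the box
+     labels[x0:x1 + 1] = 1.0
+     labels[mask.shape[1] + y0:mask.shape[1] + y1 + 1] = 1.0
+     inter = (labels * scores).sum()
+     return 1.0 - 2.0 * inter / (scores.pow(2).sum() + labels.pow(2).sum() + eps)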
595
+ Conditional Random Field Loss. The goal of CRF loss
596
+ is to refine the mask prediction by imposing the smooth-
597
+ ness priors via energy minimization. Then, we leverage this
598
+ refined mask as pseudo-labels to self-train the mask predic-
599
+ tion in an online-teacher manner. We use the average mask
600
+ prediction ma = (1/2)(m + mt) as the mask prediction to be
602
+ refined for more stable training.
603
+ Next, we define a random field X = {X1, ..., XN},
604
+ where N = Hc × Wc is the size of the cropped image and
605
+ each Xi represents the label that corresponds to a pixel in
606
+ Ic, therefore we have X ∈ {0, 1}N, meaning the back-
607
+ ground or the foreground. We use l ∈ {0, 1}N to represent
608
+ a labeling of X minimizing the following CRF energy:
+ E(l | ma, Ic) = µ(X | ma, Ic) + ψ(X | Ic),    (3)
+ where µ(X | ma, Ic) represents the unary potentials, which
+ are used to align Xi and ma_i, since we assume that most of
+ the mask predictions are correct. Meanwhile, ψ(X | Ic) rep-
+ resents the pairwise potentials, which sharpen the refined
+ mask. Specifically, we define the pairwise potentials as:
+ ψ(X | Ic) = Σ_{i∈{0,...,N−1}, j∈N(i)} ω exp(−|Ic_i − Ic_j|² / (2ζ²)) [Xi ≠ Xj],    (4)
626
+ where N(i) represents the set of 8 immediate neighbors to
627
+ Xi as shown in Fig. 3. Then, we use the MeanField al-
628
+ gorithm [7, 56] to efficiently approximate the optimal so-
629
+ lution, denoted as l = MeanField(Ic, ma). We attach
630
+ the derivation and PyTorch code in the supplementary. At
631
+ last, we apply Dice Loss to leverage the refined masks l to
632
+ self-train the models as:
+ Lcrf = 1 − (2 Σi li mi) / (Σi li² + Σi mi²)    (5)
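+ A corresponding sketch of this self-training dice loss on the refined labels is given below; it is illustrative only, and the refined labels l are assumed to come from the mean field step described above.
+ import torch
+
+ def crf_loss(mask, refined, eps=1e-6):
+     # mask:    (H, W) mask scores predicted by the task network.
+     # refined: (H, W) refined pseudo-labels l in {0, 1} (or soft values),
+     #          e.g., obtained via the MeanField(Ic, ma) step of Eq. (3)-(4).
+     inter = (refined * mask).sum()
+     union = (refined ** 2).sum() + (mask ** 2).sum()
+     return 1.0 - 2.0 * inter / (union + eps)       # dice loss on refined labels, Eq. (5)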
640
+ 4. Experiments
641
+ We evaluate MAL on COCO dataset [1], and LVIS [57].
642
+ The main results on COCO and LVIS are shown in Tab. 1
643
+ and 2. The qualitative results are shown in Fig. 1 and Fig. 5.
644
+ 4.1. Datasets
645
+ COCO dataset. contains 80 semantic categories. We fol-
646
+ low the standard partition, which includes train2017 (115K
647
+ images), val2017 (5K images), and test-dev (20k images).
648
+ LVIS dataset. contains 1200+ categories and 164K images.
649
+ We follow the standard partition of training and validation.
650
+ 4.2. Implementation Details
651
+ We use 8 NVIDIA Tesla V100s to run the experiments.
652
+ Phase 1 (mask auto-labeling). We use AdamW [58] as
653
+ the network optimizer and set the two momentum terms to 0.9
+ and 0.9. We use a cosine annealing scheduler to adjust the
+ learning rate, which is set to 1.5 · 10^-6 per image. The MIL
656
+ loss weight αmil, CRF loss weight αcrf, ζ, and ω in CRF
657
+ pairwise potentials are set to 4, 0.5, 0.5, 2, respectively. We
658
+ analyze the sensitivity of the loss weights and CRF hyper-
659
+ parameters in Fig. 8. We use the input resolution of 512 ×
660
+ 512, and a batch size of 32 (4 per GPU). For EMA, we use
661
+ a momentum of 0.996. For the task and teacher network,
662
+ we apply random flip data augmentation. On top of that,
663
+ we apply extra random color jittering, random grey-scale
664
+ conversion, and random Gaussian blur for the task network.
665
+ We train MAL for 10 epochs. It takes around 23 hours and
666
+ 35 hours to train MAL with Standard ViT-Base [11] on the
667
+ COCO and LVIS datasets, respectively.
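+ For clarity, the optimization setup described above can be sketched as follows. This is a hedged illustration, not the original training code: steps_per_epoch is a placeholder, and we assume the per-image learning rate is scaled by the total batch size.
+ import torch
+
+ def build_phase1_optimizer(model, batch_size=32, lr_per_image=1.5e-6,
+                            epochs=10, steps_per_epoch=1000):
+     # AdamW with both momentum terms set to 0.9 and a cosine-annealed learning rate.
+     optimizer = torch.optim.AdamW(model.parameters(),
+                                   lr=lr_per_image * batch_size, betas=(0.9, 0.9))
+     scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
+         optimizer, T_max=epochs * steps_per_epoch)
+     return optimizer, scheduler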
668
+ Phase 2 (Training instance segmentation models). We
669
+ select a couple of high-performance fully supervised in-
670
+ stance segmentation models, which are ConvNeXts [44]
671
+ with Cascade R-CNN [28], Swin Transformers [12] with
672
+ Mask2Former [41], ResNets [59] and ResNeXts [60] with
673
+ SOLOv2 [31]. MAL works extremely well with these ar-
674
+ chitectures, which demonstrates the great power of Mask
675
+ Auto-Labeler from the perspective of accuracy and gener-
676
+ alization. We leverage the codebase in MMDetection [61]
677
+ for phase 2. Again, we only replace the GT masks with
678
+ MAL-generated mask pseudo-labels to adjust all these fully
679
+ supervised models to box-supervised learning.
680
+ 4.3. Instance segmentation results
681
+ Retention Rate. We argue that mask mAP alone is not fair
+ enough to evaluate box-supervised
683
+ instance segmentation since the performance gain can be
684
+ 5
685
+
686
+ Method | Labeler Backbone | InstSeg Backbone | InstSeg Model | Sup | (%)Mask APval | (%)Mask APtest | (%)Ret.val | (%)Ret.test
+ Mask R-CNN∗ [24] | - | ResNet-101 | Mask R-CNN | Mask | 38.6 | 38.8 | - | -
+ Mask R-CNN∗ [24] | - | ResNeXt-101 | Mask R-CNN | Mask | 39.5 | 39.9 | - | -
+ CondInst [33] | - | ResNet-101 | CondInst | Mask | 38.6 | 39.1 | - | -
+ SOLOv2 [31] | - | ResNet-50 | SOLOv2 | Mask | 37.5 | 38.4 | - | -
+ SOLOv2 [31] | - | ResNet-101-DCN | SOLOv2 | Mask | 41.7 | 41.8 | - | -
+ SOLOv2 [31] | - | ResNeXt-101-DCN | SOLOv2 | Mask | 42.4 | 42.7 | - | -
+ ConvNeXt [44] | - | ConvNeXt-Small [44] | Cascade R-CNN | Mask | 44.8 | 45.5 | - | -
+ ConvNeXt [44] | - | ConvNeXt-Base [44] | Cascade R-CNN | Mask | 45.4 | 46.1 | - | -
+ Mask2Former [41] | - | Swin-Small | Mask2Former | Mask | 46.1 | 47.0 | - | -
+ BBTP† [4] | - | ResNet-101 | Mask R-CNN | Box | - | 21.1 | - | 59.1
+ BoxInst [5] | - | ResNet-101 | CondInst | Box | 33.0 | 33.2 | 85.5 | 84.9
+ BoxLevelSet [6] | - | ResNet-101-DCN | SOLOv2 | Box | 35.0 | 35.4 | 83.9 | 83.5
+ DiscoBox [7] | - | ResNet-50 | SOLOv2 | Box | 30.7 | 32.0 | 81.9 | 83.3
+ DiscoBox [7] | - | ResNet-101-DCN | SOLOv2 | Box | 35.3 | 35.8 | 84.7 | 85.9
+ DiscoBox [7] | - | ResNeXt-101-DCN | SOLOv2 | Box | 37.3 | 37.9 | 88.0 | 88.8
+ BoxTeacher [8] | - | Swin-Base | CondInst | Box | - | 40.0 | - | -
+ Mask Auto-Labeler | ViT-MAE-Base [13] | ResNet-50 | SOLOv2 | Box | 35.0 | 35.7 | 93.3 | 93.0
+ Mask Auto-Labeler | ViT-MAE-Base [13] | ResNet-101-DCN | SOLOv2 | Box | 38.2 | 38.7 | 91.6 | 92.6
+ Mask Auto-Labeler | ViT-MAE-Base [13] | ResNeXt-101-DCN | SOLOv2 | Box | 38.9 | 39.1 | 91.7 | 91.6
+ Mask Auto-Labeler | ViT-MAE-Base [13] | ConvNeXt-Small [44] | Cascade R-CNN | Box | 42.3 | 43.0 | 94.4 | 94.5
+ Mask Auto-Labeler | ViT-MAE-Base [13] | ConvNeXt-Base [44] | Cascade R-CNN | Box | 42.9 | 43.3 | 94.5 | 93.9
+ Mask Auto-Labeler | ViT-MAE-Base [13] | Swin-Small [12] | Mask2Former [41] | Box | 43.3 | 44.1 | 93.9 | 93.8
+ Table 1. Main results on COCO. Ret means the retention rate, i.e., box-supervised mask AP / fully supervised mask AP. MAL with
+ SOLOv2/ResNeXt-101 outperforms DiscoBox with SOLOv2/ResNeXt-101 by 1.6% on val2017 and 1.3% on test-dev. Our best model
+ (Mask2Former/Swin-Small) achieves 43.3% AP on val and 44.1% AP on test-dev.
897
+ achieved by improving box quality, which is unrelated to segmenta-
898
+ tion quality. However, the retention rate can better reflect
899
+ the real mask quality because the fully supervised counter-
900
+ parts also get boosted by the better box results.
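+ In other words, the retention rate is simply the ratio below; for example, Cascade R-CNN (ConvNeXt-Small) retains 42.3 / 44.8 ≈ 94.4% of its fully supervised mask AP on val2017 (Tab. 1). The tiny helper is only an illustration of the metric.
+ def retention_rate(box_supervised_ap, fully_supervised_ap):
+     # Retention rate (%): box-supervised mask AP / fully supervised mask AP.
+     return 100.0 * box_supervised_ap / fully_supervised_ap
+
+ print(round(retention_rate(42.3, 44.8), 1))  # -> 94.4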
901
+ Results on COCO. In Table 1, we show that various mod-
+ ern instance segmentation models trained with MAL pseudo-labels
+ retain up to 94.5% of the performance of their fully supervised
+ oracles. Our best results are 43.3% mAP on COCO val2017
+ and 44.1% mAP on COCO test-dev, achieved by using MAL
906
+ (Standard ViT-Base [11] pretrained with MAE) for phase
907
+ 1, and using Mask2Former (Swin-Small) [12,41] for phase
908
+ 2. There is no significant retention drop when we use the
909
+ mask pseudo-labels to train more powerful instance seg-
910
+ mentation models. On the contrary, the higher retention
911
+ rates on COCO are achieved by the heavier instance seg-
912
+ mentation models, e.g., Cascade R-CNN with ConvNeXts
913
+ and Mask2Former with Swin-Small. However, other meth-
914
+ ods have significantly lower retention rates compared with
915
+ MAL. These results quantitatively imply that the quality of
+ MAL-generated masks exceeds that of other methods by a large margin.
917
+ Results on LVIS. In table 2, we also observe that all in-
918
+ stance segmentation models work very well with the mask
919
+ pseudo-labels generated by MAL (Ret. = 93%–98%). We
920
+ visualize part of the results in figure 5. We also evaluate
921
+ the open-vocabulary ability of MAL by training MAL on the
+ COCO dataset but generating mask pseudo-labels on LVIS,
+ and then training instance segmentation models using these
924
+ mask pseudo-labels.
925
+ 4.4. Image encoder variation
926
+ To support our claim that Vision Transformers are good
927
+ auto-labelers, we compare three popular networks as the im-
928
+ age encoders of MAL: Standard Vision Transformers [11,
929
+ 13,16], Swin Transformer [12], ConvNeXts [44] in Tab. 4.
930
+ First,
931
+ we compare the fully supervised pretrained
932
+ weights of these three models. We choose the official fully
933
+ supervised pre-trained weights of ConvNeXts and Swin
934
+ Transformers. For Standard Vision Transformers, we adopt
935
+ a popular fully supervised approach, DeiT [16]. We ob-
936
+ serve that fully supervised Standard Vision Transformers
937
+ (DeiT) as image encoders of Mask Auto-Labeler are better
938
+ than Swin Transformers and ConvNeXts even though the
939
+ ImageNet-1k performance of Swin Transformers and Con-
940
+ vNeXts is higher than that of DeiT. We argue that the suc-
941
+ cess of Standard Vision Transformers might be attributed to the
+ self-emerging properties of Standard ViTs [9, 11] (visual-
+ ized in Fig. 6), and the larger receptive field brought by
944
+ global multi-head self-attention layers.
945
+ Second, the mask pseudo-labels can be further improved
946
+ by Masked Autoencoder (MAE) pretraining [13]. The poten-
947
+ tial reason might be that MAE pretraining enhances Stan-
948
+ dard ViTs via learning pixel-level information, which is
949
+ very important for dense-prediction tasks like segmentation.
950
+ 4.5. Mask decoder variation
951
+ We compare four different modern designs of mask de-
952
+ coders: the fully connected Decoder [62], the fully convolu-
953
+ tional decoder [24,63], the attention-based decoder [30,31],
954
+ and the query-based decoder [41] in Tab.
955
+ 3. We visual-
956
+ ize different designs of mask decoders in Figure 4. For the
957
+ fully connected Decoder, we use two fully connected layers
958
+ with a hidden dimension of 2048 and then output a confi-
959
+ dence map for each pixel. We reshape this output vector as
960
+ the 2D confidence map. We introduce the attention-based
961
+ decoder in Sec 3.2. For the fully convolutional Decoder,
962
+ we adopt the pixel-wise head V in the attention-based De-
963
+ 6
964
+
965
+ Figure 5. Qualitative results of mask pseudo-labels generated by Mask Auto-Labeler on LVIS v1.
966
+ Method | Autolabeler Backbone | InstSeg Backbone | InstSeg Model | Training Data | Sup | (%)Mask APval | (%)Ret.val
+ Mask R-CNN [24] | - | ResNet-50-DCN | Mask R-CNN [24] | - | Mask | 21.7 | -
+ Mask R-CNN [24] | - | ResNet-101-DCN | Mask R-CNN [24] | - | Mask | 23.6 | -
+ Mask R-CNN [24] | - | ResNeXt-101-32x4d-FPN | Mask R-CNN [24] | - | Mask | 25.5 | -
+ Mask R-CNN [24] | - | ResNeXt-101-64x4d-FPN | Mask R-CNN [24] | - | Mask | 25.8 | -
+ Mask Auto-Labeler | ViT-MAE-Base [13] | ResNet-50-DCN | Mask R-CNN [24] | LVIS v1 | Box | 20.7 | 95.4
+ Mask Auto-Labeler | ViT-MAE-Base [13] | ResNet-101-DCN | Mask R-CNN [24] | LVIS v1 | Box | 23.0 | 97.4
+ Mask Auto-Labeler | ViT-MAE-Base [13] | ResNeXt-101-32x4d-FPN | Mask R-CNN [24] | LVIS v1 | Box | 23.7 | 92.9
+ Mask Auto-Labeler | ViT-MAE-Base [13] | ResNeXt-101-64x4d-FPN | Mask R-CNN [24] | LVIS v1 | Box | 24.5 | 95.0
+ Mask Auto-Labeler | ViT-MAE-Base [13] | ResNeXt-101-32x4d-FPN | Mask R-CNN [24] | COCO | Box | 23.3 | 91.8
+ Mask Auto-Labeler | ViT-MAE-Base [13] | ResNeXt-101-64x4d-FPN | Mask R-CNN [24] | COCO | Box | 24.2 | 93.8
+ Table 2. Main results on LVIS v1. Training data means the dataset we use for training MAL. We also finetune MAL on COCO and then
+ generate pseudo-labels for LVIS v1. Compared with training on LVIS v1 directly, MAL finetuned on COCO only causes around a 0.35%
+ mAP drop in the final results, which indicates the great potential of the open-set ability of MAL. Ret means the retention rate, i.e.,
+ box-supervised mask AP / fully supervised mask AP.
1058
+ Mask decoder | (%)Mask APval | (%)Ret.val
+ Fully connected decoder | 35.5 | 79.2
+ Fully convolutional decoder | 36.1 | 80.5
+ Attention-based decoder | 42.3 | 94.4
+ Query-based decoder | - | -
+ Table 3. Ablation study of mask decoder designs. We use Standard ViT-MAE-Base as the image encoder of MAL in phase 1 and Cascade
+ R-CNN with ConvNeXt-Small as the instance segmentation model in phase 2. The numbers are reported in % mask mAP. Among the
+ different designs, the attention-based decoder performs the best. We cannot obtain reasonable results with the query-based decoder.
1079
+ coder. For the query-based decoder, we follow the design
1080
+ in Mask2Former [41]. We spend much effort exploring the
1081
+ query-based Decoder on MAL since it performs extremely
1082
+ well on fully supervised instance segmentation. However,
1083
+ the results are surprisingly unsatisfactory. We suspect the
1084
+ slightly heavier layers might cause optimization issues un-
1085
+ der the box-supervised losses.
1086
+ Experiments show that box-supervised instance segmen-
1087
+ tation favors the attention-based decoder. However, state-
1088
+ of-the-art instance segmentation and object detection meth-
1089
+ ods often adopt the fully convolutional decoder [15, 43]
1090
+ or the query-based decoder [41]. Our proposed two-phase
1091
+ framework resolves this dilemma and allows the networks
1092
+ to enjoy the merits of both the attention-based Decoder and
1093
+ the non-attention-based Decoders.
1094
+ 4.6. Clustering analysis
1095
+ Given the results shown in Tab. 4, we investigate why
+ Standard ViTs outperform other modern image encoders in
+ auto-labeling. Since the comparison of classification ability
1098
+ Backbone | IN-1k Acc@1 | Mask APval | Ret.val
+ ConvNeXt-Base [44] | 83.8 | 39.6 | 88.4
+ Swin-Base [12] | 83.5 | 40.2 | 89.7
+ ViT-DeiT-Small [64] | 79.9 | 40.8 | 91.0
+ ViT-DeiT-Base [64] | 81.8 | 41.1 | 91.7
+ ViT-MAE-Base [13] | 83.6 | 42.3 | 94.4
+ ViT-MAE-Large [13] | 85.9 | 42.3 | 94.4
+ Table 4. Ablation study of different backbones. All models are pre-trained on ImageNet-1k. ConvNeXt and Swin Transformer outperform
+ DeiT on image classification, but Standard ViT-Small [16] (ViT-DeiT-Small) outperforms ConvNeXt-Base and Swin-Base on mask
+ auto-labeling. Standard ViT-Base (ViT-MAE-Base) and Standard ViT-Large (ViT-MAE-Large) pretrained via MAE achieve the best
+ performance on mask auto-labeling.
1137
+ does not seem to reflect the actual auto-labeling ability,
+ we instead use a clustering metric to evaluate the image en-
+ coders, because foreground (FG) / background (BG) segmen-
+ tation is very similar to a binary clustering problem.
1141
+ Specifically, we extract the feature map output by the last
1142
+ layers of Swin Transformers [12], ConvNeXts [44], Stan-
1143
+ dard ViTs [11]. Then, we use the GT mask to divide the
1144
+ feature vectors into the FG and BG feature sets. By evalu-
1145
+ ating the average distance from the FG/BG feature vectors
1146
+ to their clustering centers, we can reveal the ability of the
1147
+ networks to distinguish FG and BG pixels empirically.
1148
+ Formally, we define the feature vector of token i gener-
1149
+ ated by backbone E as f E
1150
+ i . We define the FG/BG clustering
1151
+ centers f ′
1152
+ 1, f ′
1153
+ 0 as the mean of the FG/BG feature vectors.
1154
+ Then, we use the following metric as the clustering score:
1155
+ S = (1/N) Σi ( f^E_i / |f^E_i| − f′_γ(i) / |f′_γ(i)| )²,    (6)
1169
+ 7
1170
+
1171
+ Figure 6. Attention visualization of two RoI images produced by MAL. In each image group, the left-most image is the original image.
1248
+ We visualize the attention map output by the 4th, 8th, 12th MHSA layers of the Standard ViTs in MAL.
1249
+ [Figure 7 panels: (a) MAL-generated masks are sharper and more boundary-sticky; (b) occlusion issues. Top row: MAL mask pseudo-labels; bottom row: ground-truth masks.]
1255
+ Figure 7. The lateral comparison between MAL-generated pseudo-labels (top) and GT masks (bottom) on COCO val2017. On the left, we
1256
+ observe that MAL-generated pseudo-labels are sharper and more boundary-sticky than GT masks in some cases. On the right, we observe
1257
+ that in highly occluded situations, human-annotated masks are still better.
1258
+ [Figure 8 shows four panels sweeping the hyper-parameters αmil, αcrf, ω, and ζ, respectively.]
1289
+ Figure 8. Sensitivity analysis of loss weights and CRF hyper-
1290
+ parameters. We use ViT-Base [11] pretrained via MAE [13] as the
1291
+ image encoder for the first phase and SOLOv2 (ResNet-50) for the
1292
+ second phase. The x-axis and y-axis indicate the hyper-parameter
1293
+ values and the (%)mask AP, respectively.
1294
+ θ | Mask APval | Ret.val
+ 0.6 | 41.3 | 92.2
+ 0.8 | 41.7 | 93.1
+ 1.0 | 42.2 | 94.2
+ 1.2 | 42.3 | 94.4
+ 1.4 | 42.0 | 93.8
+ 1.6 | 41.8 | 93.3
+ Table 5. Ablation on box expansion ratio. We use Standard ViT-Base pretrained via MAE (ViT-MAE-Base) and Cascade R-CNN
+ (ConvNeXt-Small) for phases 1 and 2.
1321
+ Backbone | Score (↓)
+ ConvNeXt-Base [44] | 0.459
+ Swin-Base [12] | 0.425
+ ViT-DeiT-Small [64] | 0.431
+ ViT-DeiT-Base [64] | 0.398
+ ViT-MAE-Base [13] | 0.324
+ ViT-MAE-Large [13] | 0.301
+ Table 6. Clustering scores for different image encoders. A smaller clustering score implies a better ability to distinguish foreground and
+ background features.
1342
+ where if pixel i is FG, γ(i) = 1, otherwise γ(i) = 0.
1343
+ We show the clustering evaluation on the COCO val
1344
+ 2017 in Tab. 6. The results align with our conclusion that Stan-
1345
+ dard Vision Transformers are better at mask auto-labeling.
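+ The clustering score of Eq. (6) can be computed as in the sketch below. This is illustrative only; we assume per-token features from the encoder's last layer and a boolean FG/BG assignment derived from the GT mask.
+ import torch
+ import torch.nn.functional as F
+
+ def clustering_score(features, fg_mask):
+     # features: (N, D) token features f^E_i; fg_mask: (N,) bool, True for FG tokens.
+     f = F.normalize(features, dim=1)                       # f^E_i / |f^E_i|
+     centers = torch.stack([features[~fg_mask].mean(0),     # f'_0 (BG center)
+                            features[fg_mask].mean(0)])     # f'_1 (FG center)
+     centers = F.normalize(centers, dim=1)                  # f'_γ(i) / |f'_γ(i)|
+     assigned = centers[fg_mask.long()]                     # each token's own class center
+     return ((f - assigned) ** 2).sum(dim=1).mean()         # Eq. (6); lower is better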
1346
+ 4.7. MAL masks v.s. GT masks
1347
+ We show the apples-to-apples qualitative comparison in
+ Fig. 7 and make the following observations. First, MAL-
+ generated mask pseudo-labels are considerably sharper and
+ more boundary-sticky than human-annotated ones since humans
1351
+ have difficulties in aligning with the true boundaries. Sec-
1352
+ ond, severe occlusion also presents a challenging issue.
1353
+ 5. Conclusion
1354
+ In this work, we propose a novel two-phase frame-
1355
+ work for box-supervised instance segmentation and a
1356
+ novel Transformer-based architecture, Mask Auto-Labeler
1357
+ (MAL), to generate high-quality mask pseudo-labels in
1358
+ phase 1. We reveal that Standard Vision Transformers are
1359
+ good mask auto-labelers. Moreover, we find that random
+ box expansion of RoI inputs, the attention-based De-
1361
+ coder, and class-agnostic training are crucial to the strong
1362
+ mask auto-labeling performance. Moreover, thanks to the
1363
+ two-phase framework design and MAL, we can adjust al-
1364
+ most all kinds of fully supervised instance segmentation
1365
+ models to box-supervised learning with little performance
1366
+ drop, which shows the great generalization of MAL.
1367
+ Limitations. Although great improvement has been made
1368
+ by our approaches in mask auto-labeling, we still observe
1369
+ many failure cases in the occlusion situation, where human
1370
+ annotations are much better than MAL-generated masks.
1371
+ Additionally, we meet saturation problems when scaling the
1372
+ model from Standard ViT-Base to Standard ViT-Large. We
1373
+ leave those problems for future work.
1374
+ Broader impacts. Our proposed Transformer-based mask
1375
+ auto-labeler and the two-phase architecture serve as a stan-
1376
+ dard paradigm for high-quality box-supervised instance
1377
+ segmentation. If follow-up work can find and fix the issues
1378
+ under our proposed paradigm, there is great potential that
1379
+ expensive human-annotated masks are no longer needed for
1380
+ instance segmentation in the future.
1381
+ 8
1382
+
1383
+ References
1384
+ [1] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays,
1385
+ Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence
1386
+ Zitnick. Microsoft coco: Common objects in context. In
1387
+ European conference on computer vision, pages 740–755.
1388
+ Springer, 2014. 1, 5
1389
+ [2] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Ui-
1390
+ jlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan
1391
+ Popov, Matteo Malloci, Alexander Kolesnikov, Tom Duerig,
1392
+ and Vittorio Ferrari. The open images dataset v4: Unified
1393
+ image classification, object detection, and visual relationship
1394
+ detection at scale. IJCV, 2020. 1
1395
+ [3] Shuai Shao, Zeming Li, Tianyuan Zhang, Chao Peng, Gang
1396
+ Yu, Xiangyu Zhang, Jing Li, and Jian Sun. Objects365: A
1397
+ large-scale, high-quality dataset for object detection. In Pro-
1398
+ ceedings of the IEEE/CVF international conference on com-
1399
+ puter vision, pages 8430–8439, 2019. 1
1400
+ [4] Cheng-Chun Hsu, Kuang-Jui Hsu, Chung-Chi Tsai, Yen-Yu
1401
+ Lin, and Yung-Yu Chuang. Weakly supervised instance seg-
1402
+ mentation using the bounding box tightness prior. Advances
1403
+ in Neural Information Processing Systems, 32, 2019. 1, 3, 4,
1404
+ 5, 6
1405
+ [5] Zhi Tian, Chunhua Shen, Xinlong Wang, and Hao Chen.
1406
+ Boxinst: High-performance instance segmentation with box
1407
+ annotations. In Proceedings of the IEEE/CVF Conference
1408
+ on Computer Vision and Pattern Recognition, pages 5443–
1409
+ 5452, 2021. 1, 3, 4, 6
1410
+ [6] Wentong Li, Wenyu Liu, Jianke Zhu, Miaomiao Cui, Xi-
1411
+ ansheng Hua, and Lei Zhang.
1412
+ Box-supervised instance
1413
+ segmentation with level set evolution.
1414
+ arXiv preprint
1415
+ arXiv:2207.09055, 2022. 1, 3, 4, 6
1416
+ [7] Shiyi Lan, Zhiding Yu, Christopher Choy, Subhashree Rad-
1417
+ hakrishnan, Guilin Liu, Yuke Zhu, Larry S Davis, and An-
1418
+ ima Anandkumar. Discobox: Weakly supervised instance
1419
+ segmentation and semantic correspondence from box super-
1420
+ vision. In Proceedings of the IEEE/CVF International Con-
1421
+ ference on Computer Vision, pages 3406–3416, 2021. 1, 3,
1422
+ 4, 5, 6
1423
+ [8] Tianheng Cheng, Xinggang Wang, Shaoyu Chen, Qian
1424
+ Zhang, and Wenyu Liu. Boxteacher: Exploring high-quality
1425
+ pseudo labels for weakly supervised instance segmentation.
1426
+ arXiv preprint arXiv:2210.05174, 2022. 1, 3, 6
1427
+ [9] Mathilde Caron, Hugo Touvron, Ishan Misra, Herv´e J´egou,
1428
+ Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerg-
1429
+ ing properties in self-supervised vision transformers.
1430
+ In
1431
+ Proceedings of the IEEE/CVF International Conference on
1432
+ Computer Vision, pages 9650–9660, 2021. 2, 3, 6
1433
+ [10] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszko-
1434
+ reit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia
1435
+ Polosukhin. Attention is all you need. Advances in neural
1436
+ information processing systems, 30, 2017. 2
1437
+ [11] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov,
1438
+ Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner,
1439
+ Mostafa Dehghani, Matthias Minderer, Georg Heigold, Syl-
1440
+ vain Gelly, et al. An image is worth 16x16 words: Trans-
1441
+ formers for image recognition at scale.
1442
+ arXiv preprint
1443
+ arXiv:2010.11929, 2020. 2, 4, 5, 6, 7, 8, 12
1444
+ [12] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng
1445
+ Zhang, Stephen Lin, and Baining Guo. Swin transformer:
1446
+ Hierarchical vision transformer using shifted windows. In
1447
+ Proceedings of the IEEE/CVF International Conference on
1448
+ Computer Vision, pages 10012–10022, 2021. 2, 3, 5, 6, 7, 8,
1449
+ 12
1450
+ [13] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr
1451
+ Doll´ar, and Ross Girshick. Masked autoencoders are scalable
1452
+ vision learners. In Proceedings of the IEEE/CVF Conference
1453
+ on Computer Vision and Pattern Recognition, pages 16000–
1454
+ 16009, 2022. 2, 4, 6, 7, 8, 12
1455
+ [14] Hangbo Bao, Li Dong, and Furu Wei. Beit: Bert pre-training
1456
+ of image transformers.
1457
+ arXiv preprint arXiv:2106.08254,
1458
+ 2021. 2, 4
1459
+ [15] Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhil-
1460
+ iang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mo-
1461
+ hammed, Saksham Singhal, Subhojit Som, et al. Image as a
1462
+ foreign language: Beit pretraining for all vision and vision-
1463
+ language tasks. arXiv preprint arXiv:2208.10442, 2022. 2,
1464
+ 7
1465
+ [16] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco
1466
+ Massa, Alexandre Sablayrolles, and Herv´e J´egou. Training
1467
+ data-efficient image transformers & distillation through at-
1468
+ tention. In International Conference on Machine Learning,
1469
+ pages 10347–10357. PMLR, 2021. 2, 4, 6, 7, 12
1470
+ [17] Hugo Touvron, Matthieu Cord, and Herve Jegou. Deit iii:
1471
+ Revenge of the vit. arXiv preprint arXiv:2204.07118, 2022.
1472
+ 2
1473
+ [18] Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles,
1474
+ Gabriel Synnaeve, and Herv´e J´egou. Going deeper with im-
1475
+ age transformers. In Proceedings of the IEEE/CVF Interna-
1476
+ tional Conference on Computer Vision (ICCV), pages 32–42,
1477
+ October 2021. 2
1478
+ [19] Lingchen Meng, Hengduo Li, Bor-Chun Chen, Shiyi Lan,
1479
+ Zuxuan Wu, Yu-Gang Jiang, and Ser-Nam Lim.
1480
+ Adavit:
1481
+ Adaptive vision transformers for efficient image recognition.
1482
+ In Proceedings of the IEEE/CVF Conference on Computer
1483
+ Vision and Pattern Recognition, pages 12309–12318, 2022.
1484
+ 2
1485
+ [20] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao
1486
+ Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao.
1487
+ Pyramid vision transformer: A versatile backbone for dense
1488
+ prediction without convolutions.
1489
+ In Proceedings of the
1490
+ IEEE/CVF International Conference on Computer Vision,
1491
+ pages 568–578, 2021. 2
1492
+ [21] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao
1493
+ Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pvt
1494
+ v2: Improved baselines with pyramid vision transformer.
1495
+ Computational Visual Media, 8(3):415–424, 2022. 2
1496
+ [22] Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie,
1497
+ Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, et al.
1498
+ 9
1499
+
1500
+ Swin transformer v2: Scaling up capacity and resolution. In
1501
+ Proceedings of the IEEE/CVF Conference on Computer Vi-
1502
+ sion and Pattern Recognition, pages 12009–12019, 2022. 2,
1503
+ 3
1504
+ [23] Daquan Zhou, Bingyi Kang, Xiaojie Jin, Linjie Yang, Xi-
1505
+ aochen Lian, Zihang Jiang, Qibin Hou, and Jiashi Feng.
1506
+ Deepvit: Towards deeper vision transformer. arXiv preprint
1507
+ arXiv:2103.11886, 2021. 2
1508
+ [24] Kaiming He, Georgia Gkioxari, Piotr Doll´ar, and Ross Gir-
1509
+ shick. Mask r-cnn. In Proceedings of the IEEE international
1510
+ conference on computer vision, pages 2961–2969, 2017. 2,
1511
+ 3, 6, 7, 12
1512
+ [25] Yi Li, Haozhi Qi, Jifeng Dai, Xiangyang Ji, and Yichen Wei.
1513
+ Fully convolutional instance-aware semantic segmentation.
1514
+ In Proceedings of the IEEE conference on computer vision
1515
+ and pattern recognition, pages 2359–2367, 2017. 2
1516
+ [26] Ke Xu, Kaiyu Guan, Jian Peng, Yunan Luo, and Sibo Wang.
1517
+ Deepmask: An algorithm for cloud and cloud shadow de-
1518
+ tection in optical satellite remote sensing images using deep
1519
+ residual network. arXiv preprint arXiv:1911.03607, 2019. 2
1520
+ [27] Hexiang Hu, Shiyi Lan, Yuning Jiang, Zhimin Cao, and Fei
1521
+ Sha. Fastmask: Segment multi-scale object candidates in one
1522
+ shot. In Proceedings of the IEEE Conference on Computer
1523
+ Vision and Pattern Recognition, pages 991–999, 2017. 2
1524
+ [28] Zhaowei Cai and Nuno Vasconcelos. Cascade r-cnn: Delv-
1525
+ ing into high quality object detection. In Proceedings of the
1526
+ IEEE conference on computer vision and pattern recogni-
1527
+ tion, pages 6154–6162, 2018. 2, 3, 5
1528
+ [29] Kai Chen, Jiangmiao Pang, Jiaqi Wang, Yu Xiong, Xiaox-
1529
+ iao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jianping
1530
+ Shi, Wanli Ouyang, et al. Hybrid task cascade for instance
1531
+ segmentation. In Proceedings of the IEEE/CVF Conference
1532
+ on Computer Vision and Pattern Recognition, pages 4974–
1533
+ 4983, 2019. 2
1534
+ [30] Daniel Bolya, Chong Zhou, Fanyi Xiao, and Yong Jae Lee.
1535
+ Yolact: Real-time instance segmentation. In Proceedings of
1536
+ the IEEE/CVF international conference on computer vision,
1537
+ pages 9157–9166, 2019. 2, 4, 6
1538
+ [31] Xinlong Wang, Rufeng Zhang, Tao Kong, Lei Li, and Chun-
1539
+ hua Shen.
1540
+ Solov2: Dynamic and fast instance segmenta-
1541
+ tion. Advances in Neural information processing systems,
1542
+ 33:17721–17732, 2020. 2, 5, 6
1543
+ [32] Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong,
1544
+ and Lei Li. Solo: A simple framework for instance segmen-
1545
+ tation. IEEE Transactions on Pattern Analysis and Machine
1546
+ Intelligence, 2021. 2
1547
+ [33] Zhi Tian, Chunhua Shen, and Hao Chen. Conditional convo-
1548
+ lutions for instance segmentation. In European conference
1549
+ on computer vision, pages 282–298. Springer, 2020. 2, 6
1550
+ [34] Enze Xie, Peize Sun, Xiaoge Song, Wenhai Wang, Xuebo
1551
+ Liu, Ding Liang, Chunhua Shen, and Ping Luo. Polarmask:
1552
+ Single shot instance segmentation with polar representation.
1553
+ In Proceedings of the IEEE/CVF conference on computer vi-
1554
+ sion and pattern recognition, pages 12193–12202, 2020. 2
1555
+ [35] Enze Xie, Wenhai Wang, Mingyu Ding, Ruimao Zhang, and
1556
+ Ping Luo. Polarmask++: Enhanced polar representation for
1557
+ single-shot instance segmentation and beyond. IEEE Trans-
1558
+ actions on Pattern Analysis and Machine Intelligence, 2021.
1559
+ 2
1560
+ [36] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas
1561
+ Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-
1562
+ end object detection with transformers. In European confer-
1563
+ ence on computer vision, pages 213–229. Springer, 2020. 2
1564
+ [37] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang,
1565
+ and Jifeng Dai. Deformable DETR: Deformable transform-
1566
+ ers for end-to-end object detection. In International Confer-
1567
+ ence on Learning Representations, 2021. 2
1568
+ [38] Huiyu Wang, Yukun Zhu, Hartwig Adam, Alan Yuille, and
1569
+ Liang-Chieh Chen.
1570
+ Max-deeplab:
1571
+ End-to-end panoptic
1572
+ segmentation with mask transformers.
1573
+ In Proceedings of
1574
+ the IEEE/CVF conference on computer vision and pattern
1575
+ recognition, pages 5463–5474, 2021. 2
1576
+ [39] Bowen Cheng, Alex Schwing, and Alexander Kirillov. Per-
1577
+ pixel classification is not all you need for semantic segmen-
1578
+ tation. Advances in Neural Information Processing Systems,
1579
+ 34:17864–17875, 2021. 2
1580
+ [40] Zhiqi Li, Wenhai Wang, Enze Xie, Zhiding Yu, Anima
1581
+ Anandkumar, Jose M Alvarez, Ping Luo, and Tong Lu.
1582
+ Panoptic segformer: Delving deeper into panoptic segmen-
1583
+ tation with transformers. In Proceedings of the IEEE/CVF
1584
+ Conference on Computer Vision and Pattern Recognition,
1585
+ pages 1280–1289, 2022. 3
1586
+ [41] Bowen Cheng, Anwesa Choudhuri, Ishan Misra, Alexan-
1587
+ der Kirillov, Rohit Girdhar, and Alexander G Schwing.
1588
+ Mask2former for video instance segmentation.
1589
+ arXiv
1590
+ preprint arXiv:2112.10764, 2021. 3, 5, 6, 7
1591
+ [42] Feng Li, Hao Zhang, Shilong Liu, Lei Zhang, Lionel M Ni,
1592
+ Heung-Yeung Shum, et al. Mask dino: Towards a unified
1593
+ transformer-based framework for object detection and seg-
1594
+ mentation. arXiv preprint arXiv:2206.02777, 2022. 3
1595
+ [43] Yanghao Li, Hanzi Mao, Ross Girshick, and Kaiming He.
1596
+ Exploring plain vision transformer backbones for object de-
1597
+ tection. arXiv preprint arXiv:2203.16527, 2022. 3, 4, 7, 12
1598
+ [44] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feicht-
1599
+ enhofer, Trevor Darrell, and Saining Xie. A convnet for the
1600
+ 2020s. In Proceedings of the IEEE/CVF Conference on Com-
1601
+ puter Vision and Pattern Recognition, pages 11976–11986,
1602
+ 2022. 3, 5, 6, 7, 8, 12, 13
1603
+ [45] Thibaut Durand, Taylor Mordan, Nicolas Thome, and
1604
+ Matthieu Cord.
1605
+ Wildcat: Weakly supervised learning of
1606
+ deep convnets for image classification, pointwise localiza-
1607
+ tion and segmentation. In Proceedings of the IEEE confer-
1608
+ ence on computer vision and pattern recognition, pages 642–
1609
+ 651, 2017. 3
1610
+ [46] Bin Jin, Maria V Ortiz Segovia, and Sabine Susstrunk. We-
1611
+ bly supervised semantic segmentation. In Proceedings of the
1612
+ IEEE Conference on Computer Vision and Pattern Recogni-
1613
+ tion, pages 3626–3635, 2017. 3
1614
+ 10
1615
+
1616
+ [47] Jiwoon Ahn and Suha Kwak. Learning pixel-level semantic
1617
+ affinity with image-level supervision for weakly supervised
1618
+ semantic segmentation.
1619
+ In Proceedings of the IEEE con-
1620
+ ference on computer vision and pattern recognition, pages
1621
+ 4981–4990, 2018. 3
1622
+ [48] Ruochen Fan, Qibin Hou, Ming-Ming Cheng, Gang Yu,
1623
+ Ralph R Martin, and Shi-Min Hu. Associating inter-image
1624
+ salient instances for weakly supervised semantic segmenta-
1625
+ tion. In Proceedings of the European conference on com-
1626
+ puter vision (ECCV), pages 367–383, 2018. 3
1627
+ [49] Guolei Sun, Wenguan Wang, Jifeng Dai, and Luc Van Gool.
1628
+ Mining cross-image semantics for weakly supervised seman-
1629
+ tic segmentation. In European conference on computer vi-
1630
+ sion, pages 347–365. Springer, 2020. 3
1631
+ [50] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva,
1632
+ and Antonio Torralba. Learning deep features for discrimina-
1633
+ tive localization. In Proceedings of the IEEE conference on
1634
+ computer vision and pattern recognition, pages 2921–2929,
1635
+ 2016. 3
1636
+ [51] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das,
1637
+ Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra.
1638
+ Grad-cam:
1639
+ Visual explanations from deep networks via
1640
+ gradient-based localization. In Proceedings of the IEEE in-
1641
+ ternational conference on computer vision, pages 618–626,
1642
+ 2017. 3
1643
+ [52] Daquan Zhou, Zhiding Yu, Enze Xie, Chaowei Xiao, An-
1644
+ imashree Anandkumar, Jiashi Feng, and Jose M Alvarez.
1645
+ Understanding the robustness in vision transformers. In In-
1646
+ ternational Conference on Machine Learning, pages 27378–
1647
+ 27394. PMLR, 2022. 3
1648
+ [53] Tsung-Yi Lin, Piotr Doll´ar, Ross Girshick, Kaiming He,
1649
+ Bharath Hariharan, and Serge Belongie.
1650
+ Feature pyra-
1651
+ mid networks for object detection.
1652
+ In Proceedings of the
1653
+ IEEE conference on computer vision and pattern recogni-
1654
+ tion, pages 2117–2125, 2017. 4, 12, 13
1655
+ [54] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross
1656
+ Girshick. Momentum contrast for unsupervised visual repre-
1657
+ sentation learning. In Proceedings of the IEEE/CVF Confer-
1658
+ ence on Computer Vision and Pattern Recognition (CVPR),
1659
+ June 2020. 4
1660
+ [55] Carole H Sudre, Wenqi Li, Tom Vercauteren, Sebastien
1661
+ Ourselin, and M Jorge Cardoso. Generalised dice overlap as
1662
+ a deep learning loss function for highly unbalanced segmen-
1663
+ tations. In Deep learning in medical image analysis and mul-
1664
+ timodal learning for clinical decision support, pages 240–
1665
+ 248. Springer, 2017. 5
1666
+ [56] Philipp Kr¨ahenb¨uhl and Vladlen Koltun. Efficient inference
1667
+ in fully connected crfs with gaussian edge potentials. Ad-
1668
+ vances in neural information processing systems, 24, 2011.
1669
+ 5, 12
1670
+ [57] Agrim Gupta, Piotr Dollar, and Ross Girshick.
1671
+ Lvis: A
1672
+ dataset for large vocabulary instance segmentation. In Pro-
1673
+ ceedings of the IEEE/CVF conference on computer vision
1674
+ and pattern recognition, pages 5356–5364, 2019. 5
1675
+ [58] Ilya Loshchilov and Frank Hutter. Decoupled weight decay
1676
+ regularization. arXiv preprint arXiv:1711.05101, 2017. 5
1677
+ [59] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
1678
+ Deep residual learning for image recognition. In Proceed-
1679
+ ings of the IEEE conference on computer vision and pattern
1680
+ recognition, pages 770–778, 2016. 5, 13
1681
+ [60] Saining Xie, Ross Girshick, Piotr Doll´ar, Zhuowen Tu, and
1682
+ Kaiming He. Aggregated residual transformations for deep
1683
+ neural networks. In Proceedings of the IEEE conference on
1684
+ computer vision and pattern recognition, pages 1492–1500,
1685
+ 2017. 5
1686
+ [61] Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao,
1687
+ Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei
1688
+ Liu, Jiarui Xu, Zheng Zhang, Dazhi Cheng, Chenchen Zhu,
1689
+ Tianheng Cheng, Qijie Zhao, Buyu Li, Xin Lu, Rui Zhu,
1690
+ Yue Wu, Jifeng Dai, Jingdong Wang, Jianping Shi, Wanli
1691
+ Ouyang, Chen Change Loy, and Dahua Lin.
1692
+ MMDetec-
1693
+ tion: Open mmlab detection toolbox and benchmark. arXiv
1694
+ preprint arXiv:1906.07155, 2019. 5
1695
+ [62] Jifeng Dai, Kaiming He, and Jian Sun. Instance-aware se-
1696
+ mantic segmentation via multi-task network cascades.
1697
+ In
1698
+ Proceedings of the IEEE conference on computer vision and
1699
+ pattern recognition, pages 3150–3158, 2016. 6
1700
+ [63] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully
1701
+ convolutional networks for semantic segmentation. In Pro-
1702
+ ceedings of the IEEE conference on computer vision and pat-
1703
+ tern recognition, pages 3431–3440, 2015. 6
1704
+ [64] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco
1705
+ Massa, Alexandre Sablayrolles, and Herve Jegou. Training
1706
+ data-efficient image transformers & distillation through at-
1707
+ tention. In International Conference on Machine Learning,
1708
+ volume 139, pages 10347–10357, July 2021. 7, 8
1709
+ 11
1710
+
1711
+ A. Appendix
1712
+ A.1. Additional details of CRF
1713
+ In the main paper, we define the energy terms of CRF but
1714
+ skip the details on how we use the Mean Field algorithm to
1715
+ minimize the energy. Here, we provide more details on how
1716
+ we use the Mean Field algorithm [56].
1717
+ We define l = {l1, ..., lN} as the labeling being inferred,
+ where N = H × W is the size of the input image and li
+ is the label of the i-th pixel in I. We also assume that the
+ network predicts a mask m = {m1, ..., mN}, where mi
+ is the unary mask score of the i-th pixel in I. The pseudo-
+ code to obtain l using mean field is given in Alg. 1:
1723
+ Algorithm 1 Mean field algorithm for CRFs.
+ 1: procedure MEANFIELD(m, I)
+ 2:     K_{i,j} ← ω exp(−|Ii − Ij|² / (2ζ²))        ▷ Initialize the Gaussian kernels
+ 3:     l ← m                                        ▷ Initialize l using m
+ 4:     while not converged do                       ▷ Iterate until convergence
+ 5:         for i ← 1 to |l| do
+ 6:             l̂i ← li
+ 7:             for j ∈ N(i) do
+ 8:                 l̂i ← l̂i + K_{i,j} · lj           ▷ Message passing
+ 9:             end for
+ 10:        end for
+ 11:        l ← ϕ(l̂)                                 ▷ ϕ is a clamp function
+ 12:    end while
+ 13:    return λ(l)                                  ▷ λ is a threshold function
+ 14: end procedure
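+ The PyTorch code referenced in the main paper is in the supplementary material and is not reproduced here; the following is only a rough, assumption-based re-implementation of Alg. 1 with torch.roll-based message passing over the 8-neighborhood (boundary wrap-around is ignored for simplicity; ω and ζ default to the values in Sec. 4.2).
+ import torch
+
+ def mean_field(mask, image, omega=2.0, zeta=0.5, n_iters=10):
+     # mask:  (1, 1, H, W) unary mask scores m in [0, 1].
+     # image: (1, 3, H, W) cropped image Ic, used for the Gaussian edge kernels.
+     l = mask.clone()
+     shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
+     for _ in range(n_iters):
+         l_hat = l.clone()
+         for dy, dx in shifts:
+             img_s = torch.roll(image, shifts=(dy, dx), dims=(2, 3))
+             l_s = torch.roll(l, shifts=(dy, dx), dims=(2, 3))
+             # Gaussian kernel K on the color difference to each neighbor.
+             k = omega * torch.exp(-((image - img_s) ** 2).sum(1, keepdim=True)
+                                   / (2.0 * zeta ** 2))
+             l_hat = l_hat + k * l_s                    # message passing
+         l = l_hat.clamp(0.0, 1.0)                      # clamp function ϕ
+     return (l > 0.5).float()                           # threshold function λ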
1760
+ A.2. Additional implementation details
1761
+ We use the same hyper-parameters on all benchmarks
1762
+ for all image encoders (Standard ViTs [11, 13, 16], Swin
1763
+ Transformers [12], and ConvNeXts [44]) and mask de-
1764
+ coders (fully connected decoder, fully convolutional de-
1765
+ coder, attention-based decoder, ), including batch size, opti-
1766
+ mization hyper-parameters. We observe a performance drop
1767
+ when we add parametric layers or multi-scale lateral/skip
1768
+ connections [43, 53] between the image encoder (Standard
1769
+ ViTs, Swin Transformers, ConvNeXts) and the mask de-
1770
+ coder (attention-based decoder). We insert a couple of the
1771
+ bi-linear interpolation layers to resize the feature map be-
1772
+ tween the image encoder and the mask decoder and resize
1773
+ the segmentation score map. Specifically, we resize the fea-
1774
+ ture map produced by the image encoder to 1/16 (small),
1775
+ 1/8 (medium), 1/4 (large) size of the raw input according
1776
+ to the size of the objects. We divide the objects into three
1777
+ scales according to the area of their bounding boxes. We use
1778
+ the area ranges of [0, 322), [322, 962), [962, ∞) to cover
1779
+ small, medium, and large objects, respectively. We resize
1780
+ the mask prediction map to 512 × 512 to reach the original
1781
+ resolution of the input images.
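+ The scale selection described above can be sketched as follows; this is an illustration of the stated area thresholds, not the original code.
+ def output_scale_for_box(box):
+     # Choose the feature-map scale (relative to the 512x512 input) from the box area,
+     # using the ranges [0, 32^2), [32^2, 96^2), [96^2, inf) for small/medium/large objects.
+     x0, y0, x1, y1 = box
+     area = max(x1 - x0, 0) * max(y1 - y0, 0)
+     if area < 32 ** 2:
+         return 1 / 16
+     if area < 96 ** 2:
+         return 1 / 8
+     return 1 / 4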
1782
+ Moreover, we also try three naive ways to add classifica-
1783
+ tion loss, but it does not work well with MAL. First, we add
1784
+ another fully connected layer as the classification decoder,
1785
+ which takes the feature map of the first fully connected layer
1786
+ of the instance-aware head K. With this design, the classi-
1787
+ fication causes a significant performance drop. Secondly,
1788
+ we use two extra fully connected layers or the original clas-
1789
+ sification decoder of standard ViTs as the classification de-
1790
+ coder, which directly takes the feature map of the image
1791
+ encoder. However, the classification loss does not provide
1792
+ performance improvement or loss in this scenario.
1793
+ A.3. Benefits for object detection
1794
+ The supervised object detection models benefit from the
1795
+ extra mask supervision [24], which improves detection re-
1796
+ sults.
1797
+ Specifically, we follow the settings in Mask R-
1798
+ CNN [24]. First, we use RoI Align, the box branch, and
1799
+ the box supervision without mask supervision. Second, we
1800
+ add the mask branch and ground-truth mask supervision on
1801
+ top of the first baseline. The second baseline is the original
1802
+ Mask R-CNN. Thirdly, we replace the ground-truth masks
1803
+ with the mask pseudo-labels generated by MAL on top of
1804
+ the second baseline. It turns out that using MAL-generated
1805
+ mask pseudo-labels for mask supervision brings in an im-
1806
+ provement similar to ground-truth masks on detection. We
1807
+ show the results in Tab. 7.
1808
+ A.4. Additional qualitative results
1809
+ We also visualize the prediction results produced by
1810
+ the instance segmentation models trained with ground-truth
1811
+ masks and mask pseudo-labels in Fig. 9. In most cases, we
1812
+ argue that humans cannot tell which results are produced by
1813
+ the models supervised by human-annotated labels.
1814
+ 12
1815
+
1816
+ InstSeg Backbone | Dataset | Mask Labels | (%)AP | (%)AP50 | (%)AP75 | (%)APS | (%)APM | (%)APL
+ ResNet-50-DCN [59] | LVIS v1 | None | 22.0 | 36.4 | 22.9 | 16.8 | 29.1 | 33.4
+ ResNet-50-DCN [59] | LVIS v1 | GT mask | 22.5 | 36.9 | 23.8 | 16.8 | 29.7 | 35.0
+ ResNet-50-DCN [59] | LVIS v1 | MAL mask | 22.6 | 37.2 | 23.8 | 17.3 | 29.8 | 34.6
+ ResNet-101-DCN [59] | LVIS v1 | None | 24.4 | 39.5 | 26.1 | 17.9 | 32.2 | 36.7
+ ResNet-101-DCN [59] | LVIS v1 | GT mask | 24.6 | 39.7 | 26.1 | 18.3 | 32.1 | 38.3
+ ResNet-101-DCN [59] | LVIS v1 | MAL mask | 25.1 | 40.0 | 26.7 | 18.4 | 32.5 | 37.8
+ ResNeXt-101-32x4d-FPN [53,59] | LVIS v1 | None | 25.5 | 41.0 | 27.1 | 18.8 | 33.7 | 38.0
+ ResNeXt-101-32x4d-FPN [53,59] | LVIS v1 | GT mask | 26.7 | 42.1 | 28.6 | 19.7 | 34.7 | 39.4
+ ResNeXt-101-32x4d-FPN [53,59] | LVIS v1 | MAL mask | 26.3 | 41.5 | 28.3 | 19.5 | 34.5 | 39.6
+ ResNeXt-101-64x4d-FPN [53,59] | LVIS v1 | None | 26.6 | 42.0 | 28.3 | 19.8 | 34.7 | 39.9
+ ResNeXt-101-64x4d-FPN [53,59] | LVIS v1 | GT mask | 27.2 | 42.8 | 29.2 | 20.2 | 35.7 | 41.0
+ ResNeXt-101-64x4d-FPN [53,59] | LVIS v1 | MAL mask | 27.2 | 42.7 | 29.1 | 19.8 | 35.9 | 40.7
+ ConvNeXt-Small [44] | COCO | None | 51.5 | 70.6 | 56.1 | 34.8 | 55.2 | 66.9
+ ConvNeXt-Small [44] | COCO | GT mask | 51.8 | 70.6 | 56.3 | 34.5 | 55.9 | 66.6
+ ConvNeXt-Small [44] | COCO | MAL mask | 51.7 | 70.5 | 56.2 | 35.2 | 55.7 | 66.8
+ Table 7. Results of detection by adding different mask supervision. The models are evaluated on COCO val2017 and LVIS v1. By adding
+ mask supervision using ground-truth masks or mask pseudo-labels, we can get around 1% improvement on different AP metrics on LVIS
+ v1. On COCO val2017, the detection performance also benefits from mask pseudo-labels. Although the improvement on COCO is smaller
+ than on LVIS v1, it is consistent over different random seeds.
1964
+ [Figure 9 column labels: Mask2Former (Swin-S) trained with GT masks vs. Mask2Former (Swin-S) trained with MAL masks.]
1980
+ Figure 9. The qualitative comparison between Mask2Former trained with GT mask and Mask2Former trained with MAL-generated mask
1981
+ pseudo-labels. Note that we use ViT-MAE-Base as the image encoder of MAL and Swin-Small as the backbone of the Mask2Former.
1982
+ 13
1983
+
1984
9dE2T4oBgHgl3EQflwec/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
ANE5T4oBgHgl3EQfSg9u/content/tmp_files/2301.05529v1.pdf.txt ADDED
@@ -0,0 +1,2921 @@
1
UNIFORM GLOBAL STABILITY OF SWITCHED NONLINEAR SYSTEMS IN THE KOOPMAN OPERATOR FRAMEWORK∗

CHRISTIAN MUGISHO ZAGABE† AND ALEXANDRE MAUROY‡

∗ Submitted to the editors.
† Department of Mathematics and Namur Research Institute for Complex Systems (naXys), University of Namur (christian.mugisho@unamur.be).
‡ Department of Mathematics and Namur Research Institute for Complex Systems (naXys), University of Namur (alexandre.mauroy@unamur.be).

arXiv:2301.05529v1 [math.DS] 13 Jan 2023

Abstract. In this paper, we provide a novel solution to an open problem on the global uniform stability of switched nonlinear systems. Our results are based on the Koopman operator approach and, to our knowledge, this is the first theoretical contribution to an open problem within that framework. By focusing on the adjoint of the Koopman generator in the Hardy space on the polydisk, we define equivalent linear (but infinite-dimensional) switched systems and we construct a common Lyapunov functional for those systems, under a solvability condition on the Lie algebra generated by the linearized vector fields. A common Lyapunov function for the original switched nonlinear systems is derived from the Lyapunov functional by exploiting the reproducing kernel property of the Hardy space. The Lyapunov function is shown to converge in a bounded region of the state space, which proves global uniform stability of specific switched nonlinear systems on bounded invariant sets.

Key words. Koopman operator, Hardy space on the polydisk, switched systems, uniform stability, common Lyapunov function.

AMS subject classifications. 47B32, 47B33, 47D06, 70K20, 93C10, 93D05.

1. Introduction. Switched systems are hybrid-type models encountered in applications where the dynamics abruptly jump from one behavior to another. They are typically described by a family of subsystems that alternate according to a given commutation law. Stability properties of switched systems have been the focus of intense research effort (see e.g. [32] for a review). In this context, a natural question is whether a switched system with an equilibrium point is uniformly stable, that is, stable for any commutation law. It turns out that the uniform stability problem is counter-intuitive and challenging. In the linear case, it is well known that stable subsystems may induce an unstable switched system. However, uniform stability is guaranteed if the matrices associated with the subsystems are stable and commute pairwise [24], a result which is extended in [15] to subsystems described by stable matrices generating a solvable Lie algebra. This latter result can be explained by the well-known equivalence between the solvability of a Lie algebra of matrices and the existence of a common invariant flag for those matrices, which allows one to construct a common Lyapunov function for the subsystems [34].

In the case of switched nonlinear systems, an open problem was posed in [13] on the relevance of Lie-algebraic conditions on the vector fields for global uniform stability. Partial solutions have been proposed in this context. It was proven in [17] that uniform stability holds if the vector fields are individually stable and commute, in which case a common Lyapunov function can be constructed [30, 33]. Uniform stability was also shown for a pair of vector fields generating a third-order nilpotent Lie algebra [29] and for particular r-order nilpotent Lie algebras [18]. However, no result has been obtained which relies solely on the more general solvability property of the Lie algebra generated by the subsystems' vector fields.

In this paper, we provide a partial solution to the problem introduced in [13] by proving global uniform stability results for switched nonlinear systems under a general solvability property of Lie algebras. To do so, we rely on the Koopman operator framework [3, 21]: we depart from the classical pointwise description of dynamical systems and consider instead the evolution of observable functions (here in the Hardy space of holomorphic functions defined on the complex polydisk). Through this approach, equivalent infinite-dimensional dynamics are generated by linear Koopman generators, so that nonlinear systems are represented by Koopman linear systems that are amenable to global stability analysis [19]. In particular, building on preliminary results obtained in [34], we construct a common Lyapunov functional for switched Koopman linear systems. A key point is to focus on the adjoint of the Koopman generators and to notice that these operators have a common invariant maximal flag if the linear parts of the subsystems generate a solvable Lie algebra, a condition that is milder than the original assumption proposed in [13]. Finally, we derive a common Lyapunov function for the original switched nonlinear system and prove its convergence under specific algebraic conditions on the vector fields. This allows us to obtain a bounded invariant region where the switched nonlinear system is globally uniformly asymptotically stable. To our knowledge, this is the first time that a novel solution to an open theoretical problem is obtained within the Koopman operator framework.

The rest of the paper is organized as follows. In Section 2, we present some preliminary notions on uniform stability of switched nonlinear systems and give a general introduction to the Koopman operator framework, as well as some specific properties in the Hardy space on the polydisk. In Section 3, we state and prove our main result. We recast the open problem given in [13] in terms of the existence of an invariant maximal flag and we provide a constructive proof for the existence of a common Lyapunov function. Additional corollaries are also given, which focus on specific classes of vector fields. Our main results are illustrated with two examples in Section 4. Finally, concluding remarks and perspectives are given in Section 5.

Notations. We will use the following notation throughout the manuscript. For multi-indices α = (α_1, ..., α_n) ∈ ℕ^n, we define |α| = α_1 + · · · + α_n and z^α = z_1^{α_1} · · · z_n^{α_n}. The complex conjugate and real part of a complex number a are denoted by ā and ℜ(a), respectively. The transpose-conjugate of a matrix (or vector) A is denoted by A†. The Jacobian matrix of the vector field F at x is denoted by JF(x). The complex polydisk centered at 0 and of radius ρ is defined by
\[
\mathbb{D}^n(0, \rho) = \{ z \in \mathbb{C}^n : |z_1| < \rho, \cdots, |z_n| < \rho \}.
\]
In particular, D^n denotes the unit polydisk (i.e. with ρ = 1) and ∂D^n is its boundary. Finally, the floor of a real number x is denoted by ⌊x⌋.
+ on the stability theory for switched systems and on the Koopman operator framework.
96
+ 2.1. Stability of switched systems. We focus on the uniform asymptotic sta-
97
+ bility property of switched systems and on the existence of a common Lyapunov
98
+ function. Some existing results that connect these two main concepts are presented
99
+ in both linear and nonlinear cases.
100
+ Definition 2.1 (Switched system). A switched system ˙x = F (σ)(x) is a (finite)
101
+ set of subsystems
102
+ (2.1)
103
+
104
+ ˙x = F (i)(x), x ∈ X ⊂ Rn�m
105
+ i=1
106
+ This manuscript is for review purposes only.
107
+
108
+ UNIFORM STABILITY OF SWITCHED NONLINEAR SYSTEMS
109
+ 3
110
+ associated with a commutation law σ : R+ → {1, · · · , m} indicating which subsystem
111
+ is activated at a given time.
112
+ In this paper, we make the following standing assumption.
113
+ Assumption 1. The commutation law σ is a piecewise constant function with a
114
+ finite number of discontinuities on every bounded time interval (see e.g. [12]).
115
+ 2.1.1. Uniform stability. According to [16], stability analysis of switched sys-
116
+ tems revolves around three important problems:
117
+ • decide whether an equilibrium is stable under the action of the switched
118
+ system for any commutation law σ, in which case the equilibrium is said to
119
+ be uniformly stable,
120
+ • identify the commutation laws for which the equilibrium is stable, and
121
+ • construct the commutation law for which the equilibrium is stable.
122
+ In this paper we focus on the first problem related to uniform stability.
123
+ Definition 2.2 (Uniform stability). Assume that F (i)(xe) = 0 for all i = 1, . . . , m.
124
+ The equilibrium xe is
125
+ • uniformly asymptotically stable (UAS) if ∀ϵ > 0, ∃δ > 0 such that
126
+ ∥x(0) − xe∥ ≤ δ ⇒ ∥x(t) − xe∥ ≤ ϵ, ∀t > 0, ∀σ
127
+ and
128
+ ∥x(0) − xe∥ ≤ δ ⇒ lim
129
+ t→∞ x(t) = xe, ∀σ,
130
+ • globally uniformly asymptotically stable (GUAS) on D ⊆ Rn if it is UAS
131
+ and
132
+ x(0) ∈ D ⇒ lim
133
+ t→∞ x(t) = xe, ∀σ,
134
+ • globally uniformly exponentially stable (GUES) on D ⊆ Rn if ∃β, λ > 0 such
135
+ that
136
+ x(0) ∈ D ⇒ ∥x(t) − xe∥ ≤ β∥x(0) − xe∥e−λt, ∀t > 0, ∀σ.
137
+ This definition implies that the subsystems share a common equilibrium. More-
138
+ over, a necessary condition is that this equilibrium is asymptotically stable with re-
139
+ spect to the dynamics of all individual subsystems. However, this condition is not
140
+ sufficient, since the switched system might be unstable for a specific switching law.
141
+ A sufficient condition for uniform asymptotic stability is the existence of a common
142
+ Lyapunov function (CLF).
143
+ Definition 2.3 (Common Lyapunov function [12]). A positive C1- function V :
144
+ D ⊆ Rn → R is a common Lyapunov function on D ⊆ Rn for the family of subsystems
145
+ (2.1) if
146
+ ∇V · F (i)(x) < 0
147
+ ∀x ∈ D \ {xe},
148
+ ∀i = 1, . . . , m.
149
+ For switched systems with a finite number of subsystems, a converse Lyapunov result
150
+ also holds ([12], [17]).
151
+ Theorem 2.4 ([17]). Suppose that D ⊆ Rn is compact and forward-invariant
152
+ with respect to the flow induced by the subsystems (2.1). The switched system (2.1)
153
+ is GUAS on D if and only if all subsystems share a CLF on D.
154
+ This manuscript is for review purposes only.
155
+
156
+ 4
157
+ C. M. ZAGABE AND A. MAUROY
158
+ A corollary of this result provides a necessary condition for GUAS, which is based on
159
+ convex combinations of vector fields.
160
+ Corollary 2.5 ([12]). If the equilibrium of the switched system (2.1) is GUAS,
161
+ then it is a globally asymptotically stable equilibrium for the dynamics
162
+ ˙x = αF (i)(x) + (1 − α)F (j)(x),
163
+ for all i, j ∈ {1, · · · , m} and for all α ∈ [0, 1].
164
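To make the notion of a commutation law concrete, the following minimal Python sketch (our own illustration, not taken from the paper) integrates a switched system ẋ = F^(σ(t))(x) for a piecewise-constant σ with finitely many switches on bounded intervals, as in Assumption 1. The matrices A1 and A2 are hypothetical stable subsystems; whether the trajectory converges for every such σ is precisely the uniform stability question of Definition 2.2.

```python
import numpy as np

# Two hypothetical stable linear subsystems (not taken from the paper).
A1 = np.array([[-0.1, 1.0], [-10.0, -0.1]])
A2 = np.array([[-0.1, 10.0], [-1.0, -0.1]])
fields = [lambda x: A1 @ x, lambda x: A2 @ x]

def simulate(x0, switch_times, modes, dt=1e-3, T=10.0):
    """Integrate dx/dt = F^{sigma(t)}(x) for a piecewise-constant commutation law sigma,
    given by the switching instants and the active mode on each interval (Assumption 1)."""
    x, t, k, traj = np.array(x0, float), 0.0, 0, []
    while t < T:
        while k + 1 < len(switch_times) and t >= switch_times[k + 1]:
            k += 1
        x = x + dt * fields[modes[k]](x)   # explicit Euler step of the active subsystem
        t += dt
        traj.append(x.copy())
    return np.array(traj)

# A periodic switching law with period 0.5; both subsystems are individually stable,
# but the switched trajectory may behave very differently from either of them.
times = np.arange(0.0, 10.0, 0.5)
modes = [k % 2 for k in range(len(times))]
traj = simulate([1.0, 1.0], times, modes)
print(np.linalg.norm(traj[-1]))
```

Simulating a handful of switching laws can of course only falsify, never certify, the GUAS property; that is what the common Lyapunov function machinery discussed next is for.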
2.1.2. Lie-algebraic conditions in the linear case. In the case of switched linear systems { ẋ = A^(i)x, A^(i) ∈ ℂ^{n×n} }_{i=1}^{m}, several results related to uniform stability have been proved (see [32] for a review). We focus here on specific results based on Lie-algebraic conditions.

Let g = span{A^(i)}_Lie denote the Lie algebra generated by the matrices A^(i), i = 1, ..., m, equipped with the Lie bracket [A^(i), A^(j)] = A^(i)A^(j) − A^(j)A^(i).

Definition 2.6 (Solvable Lie algebra). A Lie algebra g equipped with the Lie bracket [·, ·] is said to be solvable if there exists k ∈ ℕ such that g^k = 0, where {g^j}_{j∈ℕ*} is the descending sequence of ideals defined by
  g^1 := g,   g^{j+1} := [g^j, g^j].

A general Lie-algebraic criterion for uniform exponential (asymptotic) stability of switched linear systems is given in the following theorem.

Theorem 2.7 ([15]). If all matrices A^(i), i = 1, ..., m, are stable (i.e. with eigenvalues λ_j^(i) such that ℜ(λ_j^(i)) < 0) and if the Lie algebra g is solvable, then the switched linear system { ẋ = A^(i)x }_{i=1}^{m} is GUES.

As shown in [23, 31], this result follows from the simultaneous triangularization of the matrices A^(i), which is a well-known property of solvable Lie algebras (see Lie's theorem A.5 in Appendix A). This property is in fact equivalent to the existence of a common invariant flag for complex matrices [6].

Definition 2.8 (Invariant flag). An invariant maximal flag of the set of matrices {A^(i)}_{i=1}^{m} is a set of subspaces {S_j}_{j=1}^{n} ⊆ ℂ^n such that (i) A^(i)S_j ⊂ S_j for all i, j, (ii) dim(S_j) = j for all j, and (iii) S_j ⊂ S_{j+1} for all j < n.

The subspaces S_j can be described through an orthonormal basis (v_1, ..., v_n), so that S_j = span{v_1, ..., v_j}. Note that the vector v_1 is a common eigenvector of the matrices A^(i). This basis can be used to construct a CLF.

Proposition 2.9 ([34]). Let

(2.2)   { ẋ = A^(i) x,  A^(i) ∈ ℂ^{n×n},  x ∈ ℂ^n }_{i=1}^{m}

be a switched linear system. Suppose that all matrices A^(i) are stable and admit a common invariant maximal flag
  {0} ⊂ S_1 ⊂ · · · ⊂ S_n = ℂ^n,   S_j = span{v_1, ..., v_j}.
Then there exist ϵ_j > 0, j = 1, ..., n, such that

(2.3)
\[
V(x) = \sum_{j=1}^{n} \epsilon_j \,\bigl| v_j^\dagger x \bigr|^2
\]
is a CLF for (2.2).

The values ϵ_j must satisfy the condition

(2.4)
\[
\epsilon_j > \max_{\substack{i \in \{1,\dots,m\} \\ k \in \{1,\dots,j-1\}}} \epsilon_k \,\frac{(n-1)^2}{4}\, \frac{\bigl| v_k^\dagger A^{(i)} v_j \bigr|^2}{\bigl| \Re\bigl(\lambda_j^{(i)}\bigr) \bigr| \,\bigl| \Re\bigl(\lambda_k^{(i)}\bigr) \bigr|},
\]
where λ_j^(i) are the eigenvalues of A^(i). They can be obtained iteratively from an arbitrary value ϵ_1 > 0. The geometric approach followed in [34] provides a constructive way to obtain a CLF, a result that we will leverage in an infinite-dimensional setting for switched nonlinear systems.
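The construction of Proposition 2.9 is easy to carry out numerically. The sketch below (our own illustration, with two hypothetical stable upper-triangular matrices so that the canonical basis already provides a common invariant maximal flag) picks the weights ϵ_j iteratively according to (2.4) and then checks numerically that V in (2.3) decreases along both subsystems.

```python
import numpy as np

# Hypothetical stable upper-triangular matrices: the canonical basis (v_1, ..., v_n)
# spans a common invariant maximal flag.
A_list = [np.array([[-1.0, 3.0, 0.5],
                    [ 0.0,-2.0, 1.0],
                    [ 0.0, 0.0,-1.5]]),
          np.array([[-2.0, 1.0,-1.0],
                    [ 0.0,-1.0, 2.0],
                    [ 0.0, 0.0,-3.0]])]
n = 3

# Iterative choice of eps_j following condition (2.4), starting from eps_1 = 1.
eps = np.zeros(n)
eps[0] = 1.0
for j in range(1, n):
    bound = 0.0
    for A in A_list:
        lam = np.diag(A)                        # eigenvalues of a triangular matrix
        for k in range(j):
            bound = max(bound, eps[k] * (n - 1)**2 / 4 * abs(A[k, j])**2
                        / (abs(lam[j].real) * abs(lam[k].real)))
    eps[j] = 1.1 * bound + 1e-6                 # any value strictly above the bound

# V(x) = sum_j eps_j |v_j^dagger x|^2 = x^dagger diag(eps) x; check dV/dt < 0 along each subsystem.
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    for A in A_list:
        dV = 2 * np.real(np.vdot(x, np.diag(eps) @ (A @ x)))   # d/dt V along dx/dt = A x
        assert dV < 0
print("eps =", eps, "- V of (2.3) decreases along both subsystems at the sampled points")
```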
2.1.3. Lie-algebraic condition in the nonlinear case. In the context of switched nonlinear systems, one has to consider the Lie algebra of vector fields

(2.5)   g_F = span{ F^(i), i = 1, ..., m }_Lie

equipped with the Lie bracket

(2.6)   [F^(i), F^(j)](x) = JF^(j)(x) F^(i)(x) − JF^(i)(x) F^(j)(x).

It has been conjectured in [13] that Lie-algebraic conditions on (2.5) could be used to characterize uniform stability. This problem has been solved partially in [29] for third-order nilpotent Lie algebras and in [18] for particular r-order nilpotent Lie algebras. Another step toward more general Lie-algebraic conditions based on solvability has been made in [34], a preliminary result that relies on the so-called Koopman operator framework. However, the results obtained in [34] are restricted to specific switched nonlinear systems that can be represented as finite-dimensional linear ones. In this paper, we build on this preliminary work, further exploiting the Koopman operator framework to obtain general conditions that characterize the GUAS property of switched nonlinear systems.
2.2. Koopman operator approach to dynamical systems. In this section, we present the Koopman operator framework, which is key to extending the result of Proposition 2.9 to switched nonlinear systems. We introduce the Koopman semigroup along with its generator, cast the framework in the context of Lie algebras, and describe the infinite-matrix representation of the operator.

2.2.1. Koopman operator. Consider a continuous-time dynamical system

(2.7)   ẋ = F(x),   x ∈ X ⊂ ℝ^n,   F ∈ C^1,

which generates a flow φ_t : X → X, with t ∈ ℝ^+. The Koopman operator is defined on a (Banach) space F and acts on observables, i.e. functions f : X → ℝ, f ∈ F.

Definition 2.10 (Koopman semigroup [11]). The semigroup of Koopman operators (in short, Koopman semigroup) is the family of linear operators (U_t)_{t≥0} defined by
  U_t : F → F,   U_t f = f ∘ φ_t.

We can also define the associated Koopman generator.

Definition 2.11 (Koopman generator [11]). The Koopman generator associated with the vector field (2.7) is the linear operator

(2.8)   L_F : D(L_F) → F,   L_F f := F · ∇f

with the domain D(L_F) = { f ∈ F : F · ∇f ∈ F }.

As shown below (see Lemma 2.13), the Koopman semigroup and the Koopman generator are directly related. When the Koopman semigroup is strongly continuous [7], i.e. lim_{t→0^+} ∥U_t f − f∥_F = 0, the Koopman generator is the infinitesimal generator L_F f := lim_{t→0^+} (U_t f − f)/t of the Koopman semigroup. Since the Koopman operator U_t and the generator L_F are both linear, we can describe the dynamics of an observable f on F through the linear abstract ordinary differential equation

(2.9)   ḟ = L_F f.
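A quick numerical sanity check of Definitions 2.10 and 2.11 for a hypothetical scalar system (our own toy example, not taken from the paper): along a trajectory, the time derivative of the observable U_t f = f ∘ φ_t equals the generator L_F f = F f′ evaluated at φ_t(x), which is the content of the abstract equation (2.9).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy system dx/dt = F(x) = -x + x**2 near the stable equilibrium x_e = 0,
# with observable f(x) = x**3 (all choices are ours, for illustration only).
F = lambda x: -x + x**2
f = lambda x: x**3
Lf = lambda x: F(x) * 3 * x**2        # Koopman generator: (L_F f)(x) = F(x) f'(x)

x0, t, dt = 0.4, 1.0, 1e-5
flow = lambda tau: solve_ivp(lambda s, x: F(x), (0, tau), [x0],
                             rtol=1e-10, atol=1e-12).y[0, -1]

# (U_t f)(x0) = f(phi_t(x0)); its time derivative should equal (L_F f)(phi_t(x0)).
lhs = (f(flow(t + dt)) - f(flow(t - dt))) / (2 * dt)   # central difference of d/dt (U_t f)(x0)
rhs = Lf(flow(t))
print(lhs, rhs)   # the two values agree up to discretization error
```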
We can also briefly discuss the spectral properties of the Koopman operator.

Definition 2.12 (Koopman eigenfunction and eigenvalue [3, 21]). An eigenfunction of the Koopman operator is an observable φ_λ ∈ F \ {0} such that
  L_F φ_λ = λ φ_λ.
The value λ ∈ ℂ is the associated Koopman eigenvalue.

Under the strong continuity property, the Koopman eigenfunction also satisfies
  U_t φ_λ = e^{λt} φ_λ,   ∀t ≥ 0.

For a linear system ẋ = Ax, with x ∈ ℝ^n, we denote an eigenvalue of A by λ̃_j and its associated left eigenvector by w_j. Then λ̃_j is a Koopman eigenvalue and the associated Koopman eigenfunction is given by φ_{λ̃_j}(x) = w_j† x [22]. For a nonlinear system of the form (2.7) which admits a stable equilibrium x_e, the eigenvalues of JF(x_e) are typically Koopman eigenvalues and the associated eigenfunctions are the so-called principal Koopman eigenfunctions (see Remark 2.14 below).
2.2.2. Koopman operator in the Hardy space H²(D^n). From this point on, we define the Koopman operator in the Hardy space on the polydisk (see e.g. [25, 26, 28] for more details). This choice is well-suited to the case of analytic vector fields that admit a stable hyperbolic equilibrium, where it allows us to exploit convenient spectral properties of the operator.

Let D be the open unit disk in ℂ, ∂D its boundary, and D^n the unit polydisk in ℂ^n. The Hardy space of holomorphic functions on D^n is the space
\[
H^2(\mathbb{D}^n) = \Bigl\{ f : \mathbb{D}^n \to \mathbb{C} \ \text{holomorphic} : \|f\|^2 = \lim_{r \to 1^-} \int_{(\partial \mathbb{D})^n} |f(r\omega)|^2 \, dm_n(\omega) < \infty \Bigr\},
\]
where m_n is the normalized Lebesgue measure on (∂D)^n. It is equipped with the inner product
\[
\langle f, g \rangle = \int_{(\partial \mathbb{D})^n} f(\omega)\, \overline{g(\omega)} \, dm_n(\omega),
\]
so that the set {z^α : α ∈ ℕ^n} is the standard orthonormal basis of monomials of H²(D^n). The monomials will be denoted by e_k(z) = z^{α(k)}, where the map α : ℕ → ℕ^n, k ↦ α(k), refers to the lexicographic order, i.e. e_{k_1} < e_{k_2} if |α(k_1)| < |α(k_2)|, or if |α(k_1)| = |α(k_2)| and α_j(k_1) > α_j(k_2) for the smallest j such that α_j(k_1) ≠ α_j(k_2).

For f and g in H²(D^n), with f = Σ_{k∈ℕ} f_k e_k and g = Σ_{k∈ℕ} g_k e_k, the isomorphism Σ_{k∈ℕ} f_k e_k ↦ (f_k)_{k≥0} between H²(D^n) and the l²-space allows us to rewrite the norm and the inner product as
\[
\|f\|^2 = \sum_{k \in \mathbb{N}} |f_k|^2 \qquad \text{and} \qquad \langle f, g \rangle = \sum_{k \in \mathbb{N}} f_k \,\bar{g}_k.
\]
We also note that H²(D^n) is a reproducing kernel Hilbert space (RKHS) with the Cauchy kernel ([25, Chapter 1])

(2.10)
\[
k(z, \xi) = \prod_{i=1}^{n} \frac{1}{1 - \bar{\xi}_i z_i}, \qquad z, \xi \in \mathbb{D}^n.
\]
It follows that one can define the evaluation functional f(z) = ⟨f, k_z⟩ with k_z(ω) = k(z, ω).
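The ordering of the monomial basis and the Cauchy kernel (2.10) are used repeatedly below, so we include a small Python helper (our own sketch; the ordering function reproduces the graded ordering described above, and the final check uses the fact that the kernel is the generating function of the monomials).

```python
import numpy as np
from itertools import product

def multi_indices(n, max_degree):
    """Multi-indices alpha(k) of the monomial basis e_k(z) = z^alpha(k), ordered as above:
    by total degree |alpha|, and within a degree with the larger exponent on the first
    differing variable coming first. k = 0 gives alpha = (0,...,0), i.e. e_0 = 1."""
    indices = []
    for deg in range(max_degree + 1):
        block = [a for a in product(range(deg + 1), repeat=n) if sum(a) == deg]
        block.sort(reverse=True)            # e.g. for n=2, deg=2: (2,0), (1,1), (0,2)
        indices.extend(block)
    return indices

def cauchy_kernel(z, xi):
    """Cauchy kernel k(z, xi) of H^2(D^n), eq. (2.10)."""
    z, xi = np.asarray(z, dtype=complex), np.asarray(xi, dtype=complex)
    return np.prod(1.0 / (1.0 - np.conj(xi) * z))

# The kernel is the generating function of the monomials: k(z, xi) = sum_alpha z^alpha conj(xi)^alpha.
n, z, xi = 2, np.array([0.3 + 0.1j, -0.2j]), np.array([0.1, 0.4 - 0.1j])
alphas = multi_indices(n, 30)
truncated = sum(np.prod(z**np.array(a)) * np.prod(np.conj(xi)**np.array(a)) for a in alphas)
print(truncated, cauchy_kernel(z, xi))   # close for a large enough truncation degree
```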
If the vector field F is analytic, we can consider its analytic continuation on D^n. Moreover, if it generates a holomorphic flow that leaves D^n invariant, we can define the Koopman semigroup on H²(D^n), which is also known as the composition operator with symbol φ_t. The required assumptions are summarized as follows.

Assumption 2. The components F_l, l = 1, ..., n, of the vector field F belong to the Hardy space H²(D^n). Moreover, F generates a flow which is holomorphic and maps D^n to D^n (forward invariance).

It is shown in [4] that the flow φ_t is holomorphic on D^n if and only if the vector field components have a specific form (see Proposition A.1 in the Appendix, and the works [4, 5]). Note also that this property holds if the dynamics possess a globally stable hyperbolic equilibrium (Assumption 3 below) in the case of non-resonant eigenvalues (see Remark 2.14).

Now, we recall some important properties that we will use to prove our results.

Lemma 2.13. Consider a function f ∈ H²(D^n) and an evaluation functional k_z, with z ∈ D^n. Then,
1. L_F z^α ∈ H²(D^n) and the domain D(L_F) is dense in H²(D^n),
2. U_t^* k_z = k_{φ_t(z)},
3. d/dt ⟨U_t^* k_z, f⟩ = ⟨L_F^* U_t^* k_z, f⟩.

Proof.
1. For all z ∈ D^n, we have
\[
L_F z^\alpha = F(z) \cdot \nabla z^\alpha = \sum_{l=1}^{n} F_l(z)\, \alpha_l\, z^{(\alpha_1, \dots, \alpha_l - 1, \dots, \alpha_n)}.
\]
Since ∥f z^α∥ = ∥f∥ for all f ∈ H²(D^n) and all α ∈ ℕ^n, it follows from Assumption 2 that
\[
\| L_F e_\alpha \| = \Bigl\| \sum_{l=1}^{n} \alpha_l\, F_l\, z^{(\alpha_1, \dots, \alpha_l - 1, \dots, \alpha_n)} \Bigr\| \le \sum_{l=1}^{n} |\alpha_l| \, \|F_l\| < \infty.
\]
Moreover, D(L_F) is dense in H²(D^n) since the monomials z^α form a complete basis.
2. For all f ∈ H²(D^n), we have
  ⟨U_t^* k_z, f⟩ = ⟨k_z, U_t f⟩ = (U_t f)(z)  and  ⟨k_{φ_t(z)}, f⟩ = f(φ_t(z)) = (U_t f)(z),
so that U_t^* k_z = k_{φ_t(z)}.
3. For all z ∈ D^n and all f ∈ D(L_F),
\[
\frac{d}{dt} \langle U_t^* k_z, f \rangle = \frac{d}{dt} \langle k_{\phi_t(z)}, f \rangle = \frac{d}{dt} f \circ \phi_t(z) = F(\phi_t(z)) \cdot \nabla f(\phi_t(z)) = \langle k_{\phi_t(z)}, L_F f \rangle = \langle L_F^* U_t^* k_z, f \rangle.
\]
The result follows for all f ∈ H²(D^n) since D(L_F) is dense in H²(D^n). ∎

In the previous lemma, the second property is a well-known property of composition operators on a RKHS. The third property is also known in the context of strongly continuous semigroup theory (see [7]).

Finally, we make the following additional standing assumption.

Assumption 3. The vector field F admits on D^n a unique hyperbolic stable equilibrium at 0 (without loss of generality), i.e. F(0) = 0 and the eigenvalues λ̃_j of the Jacobian matrix JF(0) satisfy ℜ(λ̃_j) < 0.

Remark 2.14 (Holomorphic flow and spectral properties). If Assumption 3 holds and if the eigenvalues λ̃_j are non-resonant¹, then the Poincaré linearization theorem [2] implies that the flow φ_t is topologically conjugate to the linear flow φ̃_t(z) = e^{JF(0)t} z, i.e. there exists a bi-holomorphic map h such that φ_t = h^{-1} ∘ φ̃_t ∘ h. In this case, the flow φ_t is clearly holomorphic. Moreover, the components of h are associated with holomorphic Koopman eigenfunctions φ_{λ̃_j} ∈ H²(D^n) associated with the eigenvalues λ̃_j [9, 20]. These eigenfunctions are called principal eigenfunctions. Also, it can easily be shown that, for all α ∈ ℕ^n, Σ_{j=1}^{n} α_j λ̃_j is a Koopman eigenvalue, associated with the eigenfunction φ_{λ̃_1}^{α_1} · · · φ_{λ̃_n}^{α_n}.

¹ The eigenvalues λ̃_j are non-resonant if Σ_{j=1}^{n} α_j λ̃_j = 0 with α ∈ ℤ^n implies that α = 0.
2.2.3. Koopman infinite matrix. Since H²(D^n) is isomorphic to l², the Koopman generator can be represented by the Koopman infinite matrix

(2.11)
\[
\bar{L}_F =
\begin{pmatrix}
\langle L_F e_0, e_0 \rangle & \langle L_F e_0, e_1 \rangle & \langle L_F e_0, e_2 \rangle & \langle L_F e_0, e_3 \rangle & \cdots \\
\langle L_F e_1, e_0 \rangle & \langle L_F e_1, e_1 \rangle & \langle L_F e_1, e_2 \rangle & \langle L_F e_1, e_3 \rangle & \cdots \\
\langle L_F e_2, e_0 \rangle & \langle L_F e_2, e_1 \rangle & \langle L_F e_2, e_2 \rangle & \langle L_F e_2, e_3 \rangle & \cdots \\
\langle L_F e_3, e_0 \rangle & \langle L_F e_3, e_1 \rangle & \langle L_F e_3, e_2 \rangle & \langle L_F e_3, e_3 \rangle & \cdots \\
\langle L_F e_4, e_0 \rangle & \langle L_F e_4, e_1 \rangle & \langle L_F e_4, e_2 \rangle & \langle L_F e_4, e_3 \rangle & \cdots \\
\vdots & \vdots & \vdots & \vdots &
\end{pmatrix},
\]
where the kth row contains the components of L_F e_k in the basis of monomials. For f = Σ_{k∈ℕ} f_k e_k, we also have that
\[
\langle L_F f, e_j \rangle = \sum_{k \in \mathbb{N}} f_k \,\langle L_F e_k, e_j \rangle.
\]

Remark 2.15. We note that, since L_F e_0 = 0, the first row and column of L̄_F contain only zero entries. By removing the first row and column, one obtains the representation of the restriction of the Koopman generator to the subspace of functions f that satisfy f(0) = 0. This subspace is spanned by the basis (e_k)_{k≥1}. Note that k_z − k_0 belongs to this subspace, since (2.10) implies that k_z(0) − k_0(0) = 0.

For F_l(z) = Σ_{|β|≥1} a_{l,β} z^β, the action of the Koopman generator on a basis element is given by
\[
L_F z^\alpha = \sum_{l=1}^{n} F_l(z)\, \alpha_l\, z^{(\alpha_1, \dots, \alpha_l - 1, \dots, \alpha_n)}
= \sum_{l=1}^{n} \alpha_l \sum_{|\beta| \ge 1} a_{l,\beta}\, z^{(\beta_1 + \alpha_1, \dots, \beta_l + \alpha_l - 1, \dots, \beta_n + \alpha_n)}.
\]
By setting γ_1 = β_1 + α_1, ..., γ_l = β_l + α_l − 1, ..., γ_n = β_n + α_n, we obtain

(2.12)
\[
L_F z^\alpha = \sum_{l=1}^{n} \alpha_l \sum_{|\gamma| \ge |\alpha|} a_{l, (\gamma - \alpha)_l}\, z^{\gamma},
\]
where we denote

(2.13)
\[
(\gamma - \alpha)_l = (\gamma_1 - \alpha_1, \cdots, \gamma_l - \alpha_l + 1, \cdots, \gamma_n - \alpha_n).
\]
It follows that the entries of (2.11) are given by

(2.14)
\[
\langle L_F e_k, e_j \rangle =
\begin{cases}
\displaystyle \sum_{l=1}^{n} \alpha_l(k)\, a_{l, (\alpha(j) - \alpha(k))_l} & \text{if } |\alpha(j)| \ge |\alpha(k)|, \\[2mm]
0 & \text{if } |\alpha(j)| < |\alpha(k)|.
\end{cases}
\]

Remark 2.16. For the linear part of the vector field F, where |α(j)| = 1, j = 1, ..., n, it is clear that α(j) is the jth canonical basis vector of ℂ^n, i.e. α_i(j) = δ_{ij}, and we have that a_{l, α(j)} = [JF(0)]_{lj}. Also, if |α(j)| = |α(k)|, we have that (α(j) − α(k))_l = α(r) for some r ≤ n (i.e. |α(r)| = 1), with α_r(j) = α_r(k) + 1, α_l(j) = α_l(k) − 1, and α_i(j) = α_i(k) for all i ∉ {l, r}. Then, it follows from (2.14) that

(2.15)
\[
\langle L_F e_k, e_j \rangle =
\begin{cases}
\displaystyle \sum_{l=1}^{n} \alpha_l(j)\, [JF(0)]_{ll} & \text{if } j = k, \\[2mm]
\alpha_l(k)\, [JF(0)]_{lr} & \text{if } \alpha(j) = (\alpha_1(k), \cdots, \alpha_l(k) - 1, \cdots, \alpha_r(k) + 1, \cdots, \alpha_n(k)), \\[1mm]
0 & \text{otherwise}.
\end{cases}
\]
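Formula (2.14) translates directly into a finite "section" of the Koopman matrix (2.11) that can be assembled numerically. The sketch below (our own illustration, for a hypothetical polynomial vector field with upper-triangular Jacobian at 0) builds that finite section from the coefficients a_{l,β}; the triangular structure it exhibits is the content of Lemma 3.1 below.

```python
import numpy as np
from itertools import product

def multi_indices(n, max_degree):
    """Basis multi-indices alpha(k), graded by total degree as in the paper (alpha(0) = 0)."""
    out = []
    for deg in range(max_degree + 1):
        out += sorted((a for a in product(range(deg + 1), repeat=n) if sum(a) == deg),
                      reverse=True)
    return out

def koopman_matrix(F_coeffs, n, max_degree):
    """Finite section of (2.11) with entries <L_F e_k, e_j> given by (2.14).
    F_coeffs[l] is a dict {beta (tuple): a_{l,beta}} for the (l+1)-th component of F."""
    alphas = multi_indices(n, max_degree)
    index = {a: i for i, a in enumerate(alphas)}
    L = np.zeros((len(alphas), len(alphas)), dtype=complex)
    for k, alpha in enumerate(alphas):
        for l in range(n):
            if alpha[l] == 0:
                continue
            for beta, a in F_coeffs[l].items():
                gamma = tuple(b + g for b, g in zip(beta, alpha))      # beta + alpha ...
                gamma = gamma[:l] + (gamma[l] - 1,) + gamma[l + 1:]    # ... with alpha_l reduced by 1
                j = index.get(gamma)
                if j is not None:                                      # drop terms beyond the truncation
                    L[k, j] += alpha[l] * a
    return alphas, L

# Hypothetical example: F(z) = (-z1 + 0.5*z2**2, -2*z2), with upper-triangular Jacobian at 0.
F_coeffs = [{(1, 0): -1.0, (0, 2): 0.5}, {(0, 1): -2.0}]
alphas, L = koopman_matrix(F_coeffs, n=2, max_degree=3)
print(np.allclose(L, np.triu(L)))   # the finite section is upper triangular (cf. Lemma 3.1)
```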
2.2.4. Switched Koopman systems and Lie-algebraic conditions. In the case of a switched nonlinear system (2.1), the Koopman operator description yields a switched linear infinite-dimensional system (in short, switched Koopman system) of the form

(2.16)   { ḟ = L_{F^(i)} f,  f ∈ D }_{i=1}^{m}

with D = ∩_{i=1}^{m} D(L_{F^(i)}). Similarly, the Lie algebra g_F spanned by the F^(i) (see (2.5)) is replaced by g_L = span{ L_{F^(i)}, i = 1, ..., m }_Lie, equipped with the Lie bracket
  [L_{F^(i)}, L_{F^(j)}] = L_{F^(i)} L_{F^(j)} − L_{F^(j)} L_{F^(i)}.
In particular, we have the well-known relationship

(2.17)   [L_{F^(i)}, L_{F^(j)}] = L_{[F^(i), F^(j)]},

so that the two algebras g_F and g_L are isomorphic. It follows that Lie-algebraic conditions in g_F can be recast into Lie-algebraic criteria in g_L, a framework where we can expect to obtain new results on switched systems that are reminiscent of the linear case. In particular, since the solvability property of g_F is equivalent to the solvability property of g_L, we will investigate whether this latter condition implies the existence of a common Lyapunov functional for the switched Koopman system (2.16).
3. Main result. This section presents our main result. We first use an illustrative example to show that Lie's theorem A.5 cannot be used for nonlinear vector fields, in contrast to the linear case (see Proposition 2.9). We then relax the algebraic conditions suggested in [13] in order to obtain a triangular form of the Koopman matrix representation (2.11), a property which is equivalent to the existence of an invariant flag for the adjoint operator L_F^*. We finally prove uniform stability of switched nonlinear systems under these conditions.

3.1. A first remark on the existence of the common invariant flag. The following example shows that Lie's theorem does not hold for infinite-dimensional switched Koopman systems.

Example 1. Consider the two vector fields
\[
F^{(1)}(x_1, x_2) = (-\alpha x_1, -\alpha x_2) \quad \text{and} \quad
F^{(2)}(x_1, x_2) = \bigl( -\beta x_1 + \gamma (x_1^2 - x_2^2),\; -\beta x_2 + 2\gamma x_1 x_2 \bigr),
\]
where α, β and γ are real parameters. These two vector fields generate the Lie algebra g = span{F^(1), F^(2), F^(3)}_Lie with F^(3)(x_1, x_2) = (αγ(x_1^2 − x_2^2), 2αγ x_1 x_2), since [F^(1), F^(2)] = F^(3), [F^(1), F^(3)] = αF^(3) and [F^(2), F^(3)] = βF^(3). Moreover, one has g^1 = [g, g] = span{F^(3)}_Lie and g^2 = [g^1, g^1] = 0, which implies that g is a solvable Lie algebra. However, the Koopman generators L_{F^(1)} and L_{F^(2)} associated with the two vector fields do not share a common eigenfunction, and therefore cannot have a common invariant flag. Indeed, the principal eigenfunctions of L_{F^(1)} are φ_{λ̃_1^(1)}(x_1, x_2) = x_1 and φ_{λ̃_2^(1)}(x_1, x_2) = x_2, while those of L_{F^(2)} are given by
\[
\varphi_{\tilde{\lambda}_1^{(2)}}(x_1, x_2) = \frac{\beta\gamma \bigl( \beta x_1 - \gamma (x_1^2 - x_2^2) \bigr)}{(\beta - \gamma x_1)^2 + \gamma^2 x_2^2}
\quad \text{and} \quad
\varphi_{\tilde{\lambda}_2^{(2)}}(x_1, x_2) = \frac{\beta^2 \gamma\, x_2}{(\beta - \gamma x_1)^2 + \gamma^2 x_2^2}.
\]

We conclude that Lie's theorem A.5 does not hold in the setting of the above example, so that we cannot directly extend Proposition 2.9 to this case. The two Koopman generators are not simultaneously triangularizable and do not have a common invariant flag (see [10] for more details about simultaneous triangularization of operators and its connection to the existence of an invariant infinite maximal flag). However, it can easily be seen that the Koopman infinite matrices (2.11) related to the vector fields F^(1) and F^(2) are both upper triangular, and therefore admit a common infinite invariant maximal flag. In fact, this implies that the adjoint operators L^*_{F^(i)} have a common invariant flag. For this reason, we will depart from the solvability condition on the vector fields (i.e. on the Koopman generators), and we will deal with simultaneous triangularization of the adjoints of the Koopman generators. The following result provides a sufficient condition on the vector fields for the simultaneous triangularization of the adjoints of the Koopman generators, which appears to be less restrictive than the solvability condition.
Lemma 3.1. Let F be an analytic vector field on D^n such that the Jacobian matrix JF(0) is upper triangular. Then the Koopman matrix (2.11) is upper triangular, i.e. ⟨L_F e_k, e_j⟩ = 0 for all k > j. Moreover, the adjoint L_F^* of the Koopman generator admits an infinite invariant maximal flag generated by the monomials e_k, i.e. S_k = span{e_1, ..., e_k}.

Proof. It follows from (2.14) that ⟨L_F e_k, e_j⟩ = 0 if |α(k)| > |α(j)| (i.e. the Koopman matrix (2.11) is always upper triangular by matrix blocks related to monomials of the same total degree). In the case |α(k)| = |α(j)| with k > j, the lexicographic order implies that one can have α(j) = (α_1(k), ..., α_l(k) − 1, ..., α_r(k) + 1, ..., α_n(k)) only with r < l. Since [JF(0)]_{lr} = 0 for all l > r, it follows from (2.15) that ⟨L_F e_k, e_j⟩ = 0 when k > j. Finally, it is clear that L_F^* e_j ∈ span{e_1, ..., e_j} since ⟨e_k, L_F^* e_j⟩ = 0 for all k > j. ∎

Remark 3.2. When the Jacobian matrix is upper triangular, it is well known that [JF(0)]_{jj} = λ̃_j. In this case, it follows from (2.15) that the diagonal entries of the (upper triangular) Koopman matrix are given by

(3.1)
\[
\langle L_F e_j, e_j \rangle = \sum_{l=1}^{n} \alpha_l(j)\, \tilde{\lambda}_l.
\]
Since these values are the Koopman eigenvalues in the case of non-resonant eigenvalues λ̃_j (see Remark 2.14), we will denote λ_j = ⟨L_F e_j, e_j⟩ by a slight abuse of notation.

Corollary 3.3. Let {F^(i)}_{i=1}^{m} be a switched nonlinear system on D^n and suppose that the Lie algebra of matrices span{JF^(i)(0)}_Lie is solvable. Then there exists a change of variables z ↦ ẑ = P^{-1}z on ℂ^n such that the adjoint operators L^*_{F̂^(i)} of the Koopman generators (with F̂^(i)(ẑ) = P^{-1}F^(i)(Pẑ)) admit a common infinite invariant maximal flag. Moreover, {F̂^(i)}_{i=1}^{m} is a switched nonlinear system on D^n(0, ∥P^{-1}∥_∞).

Proof. Since span{JF^(i)(0)}_Lie is solvable, Lie's theorem A.5 implies that the matrices JF^(i)(0) are simultaneously triangularizable, i.e. there exists a matrix P such that JF^(i)(0) = P T^(i) P^{-1} for all i, where T^(i) is upper triangular. Let us set F^(i)(z) = JF^(i)(0)z + F̃^(i)(z) to separate the linear and the nonlinear parts of the dynamics. In the new coordinates ẑ = P^{-1}z, we obtain the dynamics F̂^(i)(ẑ) = P^{-1}JF^(i)(0)Pẑ + P^{-1}F̃^(i)(Pẑ) = T^(i)ẑ + \tilde{F̂}^{(i)}(ẑ). It follows from Lemma 3.1 that the monomials ê_k, with ê_k(ẑ) = ẑ^{α(k)}, generate a common invariant maximal flag for L^*_{F̂^(i)}. In addition, for all z ∈ D^n and all j = 1, ..., n, we have
\[
|\hat{z}_j| \le \|\hat{z}\|_\infty = \| P^{-1} z \|_\infty \le \| P^{-1} \|_\infty \| z \|_\infty < \| P^{-1} \|_\infty. \qquad \blacksquare
\]

Remark 3.4. It is clear that the change of coordinates z ↦ ẑ = P^{-1}z is defined up to a multiplicative constant. Without loss of generality, we will consider in the sequel that ∥P^{-1}∥_∞ = 1, so that {F̂^(i)}_{i=1}^{m} is a switched nonlinear system on the unit polydisk D^n.

Instead of a nilpotency or solvability condition on the vector fields F^(i), we only require a milder solvability condition on the Jacobian matrices JF^(i)(x_e) to guarantee the triangular form of the Koopman matrix (2.11). It is noticeable that this local condition is much less restrictive than the global solvability condition mentioned in the original open problem [13]. Also, it was shown in [1] that the triangular form of the vector fields (and therefore of the Jacobian matrices) is not sufficient to guarantee the GUAS property of a switched nonlinear system on ℝ^n. In the next section, however, we use the solvability condition on the Jacobian matrices to prove the GUAS property in a bounded invariant region of the state space. This result is consistent with the local stability result derived in [14].
3.2. A common Lyapunov function for switched nonlinear systems. We now aim to show that, for some positive sequence (ϵ_k)_{k=1}^{∞}, the series

(3.2)
\[
\mathcal{V}(f) = \sum_{k=1}^{\infty} \epsilon_k \,|\langle f, e_k \rangle|^2
\]
is a Lyapunov functional for the switched Koopman system (2.16). Before stating our main result, we need a few lemmas.

Lemma 3.5. Let ż = F(z) be a vector field on the polydisk D^n which generates a flow φ_t. Suppose that there exist a sequence of positive numbers (ϵ_k)_{k≥1} and ρ ∈ ]0, 1] such that D^n(0, ρ) is forward invariant with respect to φ_t and such that the series
\[
\sum_{k \ge 1} |\alpha(k)| \,\epsilon_k \,\rho^{2|\alpha(k)|}
\]
is convergent. Then the series

(3.3)
\[
\mathcal{V}(k_z - k_0) = \sum_{k=1}^{\infty} \epsilon_k \,|\langle k_z, e_k \rangle|^2
\]
and

(3.4)
\[
\sum_{k=1}^{\infty} \epsilon_k \,\frac{d}{dt} \bigl| \langle U_t^*(k_z - k_0), e_k \rangle \bigr|^2
\]
are absolutely and uniformly convergent on D^n(0, ρ) for all t > 0.

Proof. For the first series, we have
\[
\epsilon_k |\langle k_z - k_0, e_k \rangle|^2 = \epsilon_k |\langle k_z, e_k \rangle|^2 = \epsilon_k \bigl| z^{\alpha(k)} \bigr|^2 < \epsilon_k \rho^{2|\alpha(k)|} \le |\alpha(k)|\, \epsilon_k \rho^{2|\alpha(k)|}
\]
for all z ∈ D^n(0, ρ) and all k ≥ 1. For the second series, we have
\[
\begin{aligned}
\epsilon_k \Bigl| \tfrac{d}{dt} |\langle U_t^*(k_z - k_0), e_k \rangle|^2 \Bigr|
&= \epsilon_k \Bigl| \tfrac{d}{dt} \bigl| (\phi_t(z))^{\alpha(k)} - (\phi_t(0))^{\alpha(k)} \bigr|^2 \Bigr|
= 2\epsilon_k \Bigl| \Re\Bigl( \tfrac{d}{dt} (\phi_t(z))^{\alpha(k)} \cdot \overline{(\phi_t(z))^{\alpha(k)}} \Bigr) \Bigr| \\
&\le 2\epsilon_k \Bigl| \tfrac{d}{dt} (\phi_t(z))^{\alpha(k)} \Bigr| \cdot \bigl| (\phi_t(z))^{\alpha(k)} \bigr|
< 2\epsilon_k \rho^{|\alpha(k)|} \Bigl| \tfrac{d}{dt} (\phi_t(z))^{\alpha(k)} \Bigr| \\
&\le 2\epsilon_k \rho^{|\alpha(k)|} \sum_{s=1}^{n} \alpha_s(k) \bigl| F^{(s)}(\phi_t(z)) \bigr| \,\bigl| (\phi_t(z))_s \bigr|^{\alpha_s(k)-1} \prod_{l=1, l \ne s}^{n} \bigl| (\phi_t(z))_l \bigr|^{\alpha_l(k)} \\
&< 2\epsilon_k \rho^{2|\alpha(k)|-1} \sum_{s=1}^{n} \alpha_s(k) \bigl| F^{(s)}(\phi_t(z)) \bigr|
\end{aligned}
\]
for all z ∈ D^n(0, ρ), t > 0 and k ≥ 1. By using the maximum modulus principle for bounded domains (Theorem A.2) with the holomorphic function F^{(s)} ∘ φ_t, we can denote
\[
M = \max_{z \in \partial \mathbb{D}^n,\; s = 1, \cdots, n} \bigl| (F^{(s)} \circ \phi_t)(z) \bigr|
\]
and we obtain
\[
\epsilon_k \Bigl| \tfrac{d}{dt} |\langle U_t^*(k_z - k_0), e_k \rangle|^2 \Bigr| < 2 M |\alpha(k)|\, \epsilon_k \rho^{2|\alpha(k)|-1}.
\]
Finally, absolute and uniform convergence of both series follows from the Weierstrass test (A.4). ∎
Lemma 3.6. Let ż = F(z) be a nonlinear system on D^n with an upper triangular Jacobian matrix JF(0) and let ḟ = L_F f be its corresponding Koopman system on D(L_F) ⊂ H²(D^n). If the series (3.3) and (3.4) are absolutely and uniformly convergent in D^n for all t > 0, then, for all double sequences of positive real numbers (b_{jk})_{j≥1, k≥1} such that

(3.5)
\[
\sum_{k=1}^{\infty} b_{jk} \le 1,
\]
one has

(3.6)
\[
\frac{d}{dt} \mathcal{V}\bigl( U_t^*(k_z - k_0) \bigr)
\le 2 \sum_{j=1}^{\infty} b_{jj}\, \epsilon_j |c_j|^2\, \Re(\lambda_j)
+ 2 \sum_{j=2}^{\infty} \sum_{k=1}^{j-1} \Bigl( b_{jk}\, \epsilon_j |c_j|^2\, \Re(\lambda_j) + b_{kj}\, \epsilon_k |c_k|^2\, \Re(\lambda_k) + \epsilon_k\, \Re\bigl( c_j \bar{c}_k \langle e_j, L_F e_k \rangle \bigr) \Bigr),
\]
with c_j = ⟨U_t^*(k_z − k_0), e_j⟩ and λ_j = ⟨L_F e_j, e_j⟩.

Proof. Suppose that z ∈ D^n is such that the series (3.3) and (3.4) are absolutely and uniformly convergent. Then, by using Lemma 2.13(3), we obtain
\[
\begin{aligned}
\frac{d}{dt}\, \epsilon_k |\langle U_t^*(k_z - k_0), e_k \rangle|^2
&= 2\epsilon_k \Re\Bigl( \bigl\langle \tfrac{d}{dt} U_t^*(k_z - k_0), e_k \bigr\rangle\, \overline{\langle U_t^*(k_z - k_0), e_k \rangle} \Bigr) \\
&= 2\epsilon_k \Re\Bigl( \langle L_F^* U_t^*(k_z - k_0), e_k \rangle\, \overline{\langle U_t^*(k_z - k_0), e_k \rangle} \Bigr)
= 2\epsilon_k \sum_{j=1}^{\infty} \Re\bigl( c_j \bar{c}_k \langle e_j, L_F e_k \rangle \bigr),
\end{aligned}
\]
where we used the decomposition U_t^*(k_z − k_0) = Σ_{j=1}^{∞} c_j e_j. Since (3.4) is absolutely and uniformly convergent, term-by-term differentiation yields
\[
\begin{aligned}
\frac{d}{dt} \mathcal{V}\bigl( U_t^*(k_z - k_0) \bigr)
&= \sum_{k=1}^{\infty} \frac{d}{dt}\, \epsilon_k |\langle U_t^*(k_z - k_0), e_k \rangle|^2
= 2 \sum_{k=1}^{\infty} \sum_{j=1}^{\infty} \epsilon_k\, \Re\bigl( c_j \bar{c}_k \langle e_j, L_F e_k \rangle \bigr) \\
&= 2 \sum_{j=1}^{\infty} \epsilon_j |c_j|^2\, \Re(\lambda_j)
+ 2 \sum_{j=2}^{\infty} \sum_{k=1}^{j-1} \epsilon_k\, \Re\bigl( c_j \bar{c}_k \langle e_j, L_F e_k \rangle \bigr),
\end{aligned}
\]
where we used the triangular form of L̄_F (which follows from Lemma 3.1 since JF(0) is triangular) and λ_j = ⟨L_F e_j, e_j⟩. Using (3.5), together with ℜ(λ_j) < 0 (which holds under the standing Assumption 3), we have
\[
\begin{aligned}
\frac{d}{dt} \mathcal{V}\bigl( U_t^*(k_z - k_0) \bigr)
&\le 2 \sum_{j=1}^{\infty} \Bigl( \sum_{k=1}^{j} b_{jk} + \sum_{k=j+1}^{\infty} b_{jk} \Bigr) \epsilon_j |c_j|^2\, \Re(\lambda_j)
+ 2 \sum_{j=2}^{\infty} \sum_{k=1}^{j-1} \epsilon_k\, \Re\bigl( c_j \bar{c}_k \langle e_j, L_F e_k \rangle \bigr) \\
&= 2 \sum_{j=1}^{\infty} b_{jj}\, \epsilon_j |c_j|^2\, \Re(\lambda_j)
+ 2 \sum_{j=2}^{\infty} \sum_{k=1}^{j-1} b_{jk}\, \epsilon_j |c_j|^2\, \Re(\lambda_j)
+ 2 \sum_{j=1}^{\infty} \sum_{k=j+1}^{\infty} b_{jk}\, \epsilon_j |c_j|^2\, \Re(\lambda_j)
+ 2 \sum_{j=2}^{\infty} \sum_{k=1}^{j-1} \epsilon_k\, \Re\bigl( c_j \bar{c}_k \langle e_j, L_F e_k \rangle \bigr) \\
&= 2 \sum_{j=1}^{\infty} b_{jj}\, \epsilon_j |c_j|^2\, \Re(\lambda_j)
+ 2 \sum_{j=2}^{\infty} \sum_{k=1}^{j-1} \Bigl( b_{jk}\, \epsilon_j |c_j|^2\, \Re(\lambda_j) + b_{kj}\, \epsilon_k |c_k|^2\, \Re(\lambda_k) + \epsilon_k\, \Re\bigl( c_j \bar{c}_k \langle e_j, L_F e_k \rangle \bigr) \Bigr). \qquad \blacksquare
\end{aligned}
\]

Under Assumption 3, it follows from (3.1) that ℜ(λ_j) < 0 for all j. Therefore, the time derivative (3.6) of the Lyapunov functional is negative if the negative terms related to the diagonal entries ⟨L_F e_j, e_j⟩ = λ_j and ⟨L_F e_k, e_k⟩ = λ_k compensate the (possibly positive) cross-terms related to ⟨e_j, L_F e_k⟩. We note that a term associated with a diagonal entry will be used to compensate an infinity of cross-terms (associated with entries in the corresponding row and column of the Koopman matrix), and the values b_{jk} play the role of weights in the compensation process.

We are now in a position to state our main result.
Theorem 3.7. Let

(3.7)   { ż = F^(i)(z) }_{i=1}^{m}

be a switched nonlinear system on D^n and assume that
• all subsystems of (3.7) have a common hyperbolic equilibrium z_e = 0 that is globally asymptotically stable on the polydisk D^n,
• the Lie algebra span{JF^(i)(0)}_Lie is solvable (and therefore there exists a matrix P such that the matrices P^{-1}JF^(i)(0)P are upper triangular),
• there exists ρ ∈ ]0, 1] such that D^n(0, ρ) is forward invariant with respect to the flows φ_t^(i) of F^(i).

Consider double sequences of positive real numbers (b_{jk}^(i))_{j≥1, k≥1}, with i = 1, ..., m, such that b_{jk}^(i) b_{kj}^(i) > 0 if ⟨L_{F̂^(i)} ê_k, ê_j⟩ ≠ 0 (where ê_j(ẑ) = ẑ^{α(j)} are the monomials in the new coordinates ẑ = P^{-1}z) and such that Σ_{k=1}^{∞} b_{jk}^(i) ≤ 1, and define the double sequence

(3.8)
\[
Q^{(i)}_{jk} \overset{\text{def}}{=}
\begin{cases}
\dfrac{\bigl| \langle L_{\hat{F}^{(i)}} \hat{e}_k, \hat{e}_j \rangle \bigr|^2}{4 \bigl| \Re\bigl( \langle L_{\hat{F}^{(i)}} \hat{e}_j, \hat{e}_j \rangle \bigr) \bigr| \,\bigl| \Re\bigl( \langle L_{\hat{F}^{(i)}} \hat{e}_k, \hat{e}_k \rangle \bigr) \bigr|} \;\dfrac{1}{b^{(i)}_{jk} b^{(i)}_{kj}} & \text{if } \langle L_{\hat{F}^{(i)}} \hat{e}_k, \hat{e}_j \rangle \ne 0, \\[3mm]
0 & \text{otherwise},
\end{cases}
\qquad j \ge 2, \; 1 \le k \le j-1.
\]
If the series

(3.9)
\[
\sum_{k=1}^{+\infty} |\alpha(k)| \,\epsilon_k \,\rho^{2|\alpha(k)|}
\]
is convergent with

(3.10)
\[
\epsilon_j > \max_{\substack{i = 1, \cdots, m \\ k = 1, \dots, j-1}} \epsilon_k \, Q^{(i)}_{jk},
\]
then the switched system (3.7) is GUAS on D^n(0, ρ). Moreover, the series
\[
V(z) = \sum_{k=1}^{\infty} \epsilon_k \,\Bigl| \bigl( P^{-1} z \bigr)^{\alpha(k)} \Bigr|^2
\]
is a common global Lyapunov function on D^n(0, ρ).

Proof. Consider the switched system

(3.11)   { ẑ̇ = F̂^(i)(ẑ) }_{i=1}^{m}

defined on D^n. By Corollary 3.3, the monomials ê_k(ẑ) = ẑ^{α(k)} generate a common infinite invariant maximal flag for L̄_{F̂^(i)}. We first show that the candidate Lyapunov functional 𝒱̂(f) = Σ_{k=1}^{∞} ϵ_k |⟨f, ê_k⟩|² satisfies
\[
\frac{d}{dt} \hat{\mathcal{V}}\bigl( (\hat{U}_t^{(i)})^* (k_{\hat{z}} - k_0) \bigr) < 0
\]
for all i = 1, ..., m, where Û_t^(i) denotes the Koopman semigroup associated with the subsystem ẑ̇ = F̂^(i)(ẑ). Lemma 3.5 with (3.9) implies that the series (3.3) and (3.4) are absolutely convergent on D^n(0, ρ). Then, it follows from Lemma 3.6 that
\[
\frac{d}{dt} \hat{\mathcal{V}}\bigl( (\hat{U}_t^{(i)})^* (k_{\hat{z}} - k_0) \bigr)
\le 2 \sum_{j=1}^{\infty} b^{(i)}_{jj} \epsilon_j \bigl| c^{(i)}_j \bigr|^2 \Re\bigl( \lambda^{(i)}_j \bigr)
+ 2 \sum_{j=2}^{\infty} \sum_{k=1}^{j-1} \Bigl( b^{(i)}_{jk} \epsilon_j \bigl| c^{(i)}_j \bigr|^2 \Re\bigl( \lambda^{(i)}_j \bigr) + b^{(i)}_{kj} \epsilon_k \bigl| c^{(i)}_k \bigr|^2 \Re\bigl( \lambda^{(i)}_k \bigr) + \epsilon_k \Re\bigl( c^{(i)}_j \bar{c}^{(i)}_k \langle \hat{e}_j, L_{\hat{F}^{(i)}} \hat{e}_k \rangle \bigr) \Bigr),
\]
where c_j^(i) = ⟨(Û_t^(i))^*(k_ẑ − k_0), ê_j⟩ and λ_j^(i) = ⟨L_{F̂^(i)} ê_j, ê_j⟩. Since ℜ(λ_j^(i)) < 0 (see (3.1)), one has to find a sequence of positive numbers (ϵ_j)_{j≥1} such that
\[
b^{(i)}_{jk} \epsilon_j \bigl| c^{(i)}_j \bigr|^2 \bigl| \Re\bigl( \lambda^{(i)}_j \bigr) \bigr| + b^{(i)}_{kj} \epsilon_k \bigl| c^{(i)}_k \bigr|^2 \bigl| \Re\bigl( \lambda^{(i)}_k \bigr) \bigr|
> \epsilon_k \bigl| \Re\bigl( c^{(i)}_j \bar{c}^{(i)}_k \langle \hat{e}_j, L_{\hat{F}^{(i)}} \hat{e}_k \rangle \bigr) \bigr|
\]
for all i = 1, ..., m and for all j, k with j > k such that

(3.12)   ⟨ê_j, L_{F̂^(i)} ê_k⟩ ≠ 0.

By using the inequality
\[
\bigl| \Re\bigl( c^{(i)}_j \bar{c}^{(i)}_k \langle \hat{e}_j, L_{\hat{F}^{(i)}} \hat{e}_k \rangle \bigr) \bigr| \le \bigl| c^{(i)}_j \bigr| \,\bigl| c^{(i)}_k \bigr| \,\bigl| \langle \hat{e}_j, L_{\hat{F}^{(i)}} \hat{e}_k \rangle \bigr|,
\]
one has to satisfy
\[
b^{(i)}_{jk} \epsilon_j \bigl| c^{(i)}_j \bigr|^2 \bigl| \Re\bigl( \lambda^{(i)}_j \bigr) \bigr| + b^{(i)}_{kj} \epsilon_k \bigl| c^{(i)}_k \bigr|^2 \bigl| \Re\bigl( \lambda^{(i)}_k \bigr) \bigr|
> \epsilon_k \bigl| c^{(i)}_j \bigr| \,\bigl| c^{(i)}_k \bigr| \,\bigl| \langle L_{\hat{F}^{(i)}} \hat{e}_k, \hat{e}_j \rangle \bigr|,
\]
or equivalently

(3.13)
\[
\epsilon_j > \epsilon_k \left( - \frac{b^{(i)}_{kj}}{b^{(i)}_{jk}} \frac{\bigl| \Re\bigl( \lambda^{(i)}_k \bigr) \bigr|}{\bigl| \Re\bigl( \lambda^{(i)}_j \bigr) \bigr|} \left| \frac{c^{(i)}_k}{c^{(i)}_j} \right|^2
+ \frac{\bigl| \langle L_{\hat{F}^{(i)}} \hat{e}_k, \hat{e}_j \rangle \bigr|}{b^{(i)}_{jk} \bigl| \Re\bigl( \lambda^{(i)}_j \bigr) \bigr|} \left| \frac{c^{(i)}_k}{c^{(i)}_j} \right| \right)
\overset{\text{def}}{=} \epsilon_k \, h\!\left( \left| \frac{c^{(i)}_k}{c^{(i)}_j} \right| \right).
\]
It is easy to see that the real quadratic function h has the maximal value
\[
Q^{(i)}_{jk} = \frac{\bigl| \langle L_{\hat{F}^{(i)}} \hat{e}_k, \hat{e}_j \rangle \bigr|^2}{4 \bigl| \Re\bigl( \lambda^{(i)}_j \bigr) \bigr| \,\bigl| \Re\bigl( \lambda^{(i)}_k \bigr) \bigr|} \, \frac{1}{b^{(i)}_{jk} b^{(i)}_{kj}},
\]
so that (3.13) is satisfied if we choose ϵ_j iteratively according to (3.10). It follows that we have
\[
\frac{d}{dt} \hat{\mathcal{V}}\bigl( (\hat{U}_t^{(i)})^* (k_{\hat{z}} - k_0) \bigr)
< 2 \sum_{j=1}^{\infty} b^{(i)}_{jj} \epsilon_j \bigl| c^{(i)}_j \bigr|^2 \Re\bigl( \lambda^{(i)}_j \bigr)
< - \min_j \Bigl( b^{(i)}_{jj} \bigl| \Re\bigl( \lambda^{(i)}_j \bigr) \bigr| \Bigr) \sum_{j=1}^{\infty} \epsilon_j \bigl| c^{(i)}_j \bigr|^2
= - \min_j \Bigl( b^{(i)}_{jj} \bigl| \Re\bigl( \lambda^{(i)}_j \bigr) \bigr| \Bigr) \hat{\mathcal{V}}\bigl( (\hat{U}_t^{(i)})^* (k_{\hat{z}} - k_0) \bigr).
\]
With the evaluation functional k_ẑ, we can define
\[
\hat{V} : \mathbb{D}^n(0, \rho) \to \mathbb{R}^+, \qquad \hat{V}(\hat{z}) = \hat{\mathcal{V}}(k_{\hat{z}} - k_0)
\]
and, using Lemma 2.13, we verify that
\[
\hat{V}\bigl( \hat{\phi}_t^{(i)}(\hat{z}) \bigr) = \hat{\mathcal{V}}\bigl( k_{\hat{\phi}_t^{(i)}(\hat{z})} - k_0 \bigr) = \hat{\mathcal{V}}\bigl( k_{\hat{\phi}_t^{(i)}(\hat{z})} - k_{\hat{\phi}_t^{(i)}(0)} \bigr) = \hat{\mathcal{V}}\bigl( (\hat{U}_t^{(i)})^*(k_{\hat{z}} - k_0) \bigr) < \hat{\mathcal{V}}(k_{\hat{z}} - k_0) = \hat{V}(\hat{z}).
\]
In addition, if we define V = V̂ ∘ P^{-1} : D^n(0, ρ) → ℝ^+, we have
\[
V\bigl( \phi_t^{(i)}(z) \bigr) = \hat{V}\bigl( P^{-1} \phi_t^{(i)}(P \hat{z}) \bigr) = \hat{V}\bigl( \hat{\phi}_t^{(i)}(\hat{z}) \bigr) < \hat{V}(\hat{z}) = V(P\hat{z}) = V(z).
\]
Therefore, we have the CLF

(3.14)
\[
V(z) = \sum_{k=1}^{\infty} \epsilon_k \,|\langle k_{\hat{z}} - k_0, \hat{e}_k \rangle|^2 = \sum_{k=1}^{\infty} \epsilon_k \,|\langle k_{\hat{z}}, \hat{e}_k \rangle|^2 = \sum_{k=1}^{\infty} \epsilon_k \,\bigl| (P^{-1} z)^{\alpha(k)} \bigr|^2
\]
for the switched nonlinear system (3.7). Finally, since D^n(0, ρ) is forward invariant with respect to φ_t^(i), the switched system (3.7) is GUAS on D^n(0, ρ). ∎

Note that, if the assumptions of Theorem 3.7 are satisfied but the polydisk D^n(0, ρ) is not forward invariant with respect to the flows generated by the subsystems, then the switched system is GUAS in the largest sublevel set of the Lyapunov function that is contained in D^n(0, ρ).

The condition on the boundedness of the double sequence (3.8) can be interpreted as the dominance of the diagonal entries of the matrix L̄_{F̂^(i)} (i.e., the Koopman eigenvalues (3.1)) with respect to the other entries. Moreover, the number of nonzero cross-terms (3.12) to be compensated affects the way we define the sequence of weights b_{jk}^(i), and therefore the sequence ϵ_j in (3.10). If the double sequence (3.8) has an upper bound Q < 1, one can set ϵ_k = Q for all k. However, such a case rarely appears. Instead, if Q > 1, one might have ϵ_k = O(Q^k), and it is clear that (3.9) diverges for all ρ since k − 2|α(k)| → ∞ as k → ∞ (except in the case n = 1, where |α(k)| = k; see also Remark 3.10 below). In the following, we will consider specific vector fields such that the series (3.9) converges for a proper choice of the sequence b_{jk}^(i), so that Theorem 3.7 can be used.

For polynomial vector fields of the form F_l^(i)(z) = Σ_{k=1}^{r} a_{l,k}^(i) z^{α(k)}, we denote by K^(i) the number of nonzero terms (without counting the monomial z_l in F_l^(i)), i.e.

(3.15)
\[
K^{(i)} = \sum_{l=1}^{n} \# \bigl\{ k \ne l : a^{(i)}_{l,k} \ne 0 \bigr\},
\]
where # denotes the cardinality of a set. In this case, we have the following result.

Corollary 3.8. Let

(3.16)   { ż = F^(i)(z) }_{i=1}^{m}

be a switched nonlinear system on D^n, where the F^(i) are polynomial vector fields. Assume that
• all subsystems of (3.16) have a common hyperbolic equilibrium z_e = 0 that is globally asymptotically stable on D^n,
• the Lie algebra span{JF^(i)(0)}_Lie is solvable (and therefore there exists a matrix P such that the matrices P^{-1}JF^(i)(0)P are upper triangular),
• the unit polydisk D^n is forward invariant with respect to the flows φ_t^(i) generated by F^(i).
If

(3.17)
\[
\max_{i=1,\cdots,m} \; \limsup_{j \in \mathbb{N}} \; \max_{k=1,\dots,j-1} \;
\frac{(\hat{K}^{(i)})^2 \,\bigl| \langle L_{\hat{F}^{(i)}} \hat{e}_k, \hat{e}_j \rangle \bigr|^2}{\bigl| \Re\bigl( \langle L_{\hat{F}^{(i)}} \hat{e}_j, \hat{e}_j \rangle \bigr) \bigr| \,\bigl| \Re\bigl( \langle L_{\hat{F}^{(i)}} \hat{e}_k, \hat{e}_k \rangle \bigr) \bigr|} < 1,
\]
where K̂^(i) is the number of nonzero terms of F̂^(i)(ẑ) = P^{-1}F^(i)(Pẑ) (see (3.15)) and where ê_j(ẑ) = ẑ^{α(j)} are the monomials in the new coordinates ẑ = P^{-1}z, then (3.16) is GUAS on D^n.

Proof. The result follows from Theorem 3.7 with the sequence

(3.18)
\[
\begin{cases}
b^{(i)}_{jj} = 1 - \xi, \\[1mm]
b^{(i)}_{jk} = \dfrac{\xi}{2\hat{K}^{(i)}} & \text{if } j \ne k \text{ with } \langle L_{\hat{F}^{(i)}} \hat{e}_k, \hat{e}_j \rangle \ne 0 \text{ or } \langle L_{\hat{F}^{(i)}} \hat{e}_j, \hat{e}_k \rangle \ne 0, \\[2mm]
b^{(i)}_{jk} = 0 & \text{if } j \ne k \text{ with } \langle L_{\hat{F}^{(i)}} \hat{e}_k, \hat{e}_j \rangle = 0 \text{ and } \langle L_{\hat{F}^{(i)}} \hat{e}_j, \hat{e}_k \rangle = 0,
\end{cases}
\]
with ξ ∈ ]0, 1[. It is clear from (2.14) that, for a fixed j and for all k ∈ ℕ \ {j}, there are at most K̂^(i) nonzero values ⟨L_{F̂^(i)} ê_k, ê_j⟩ and at most K̂^(i) nonzero values ⟨L_{F̂^(i)} ê_j, ê_k⟩, so that the sequence (3.18) satisfies Σ_{k=1}^{∞} b_{jk}^(i) ≤ 1. The elements Q_{jk}^(i) of the double sequence (3.8) are given by

(3.19)
\[
Q^{(i)}_{jk} = \frac{(\hat{K}^{(i)})^2 \,\bigl| \langle L_{\hat{F}^{(i)}} \hat{e}_k, \hat{e}_j \rangle \bigr|^2}{\xi^2 \,\bigl| \Re\bigl( \langle L_{\hat{F}^{(i)}} \hat{e}_j, \hat{e}_j \rangle \bigr) \bigr| \,\bigl| \Re\bigl( \langle L_{\hat{F}^{(i)}} \hat{e}_k, \hat{e}_k \rangle \bigr) \bigr|}.
\]
The condition (3.17) implies that max_{i=1,···,m} limsup_{j∈ℕ} max_{k=1,...,j−1} Q_{jk}^(i) =: Q < 1 for some ξ ∈ ]0, 1[, so that (3.10) is satisfied with

(3.20)
\[
\epsilon_j \sim \max_{k \in K_j} \{ \epsilon_k \, Q \}
\]
for j ≫ 1, with K_j = { k ∈ {1, ..., j−1} : ⟨L_{F̂^(i)} ê_k, ê_j⟩ ≠ 0 for some i ∈ {1, ..., m} }. The sequence (3.20) yields ϵ_j = O(Q^j) for j ≫ 1. It follows that (3.9) is convergent for any ρ ≤ 1 and Theorem 3.7 implies that the switched system (3.16) is GUAS on D^n. ∎
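The constants appearing in Theorem 3.7 and in the proof of Corollary 3.8 can be computed explicitly for a given polynomial switched system. The sketch below (our own illustration, for a hypothetical pair of subsystems whose Jacobian matrices are already upper triangular, so that P = I and ẑ = z) assembles truncated Koopman matrices via (2.14), evaluates Q_{jk}^(i) as in (3.19), selects the ϵ_j iteratively according to (3.10), and evaluates the truncated Lyapunov function (3.14). The truncation degree is purely a numerical convenience; the theorem itself concerns the full infinite series.

```python
import numpy as np
from itertools import product

def multi_indices(n, max_deg):
    out = []
    for d in range(max_deg + 1):
        out += sorted((a for a in product(range(d + 1), repeat=n) if sum(a) == d), reverse=True)
    return out

def koopman_matrix(F_coeffs, alphas, index):
    """Finite section of (2.11) via (2.14): entry [k, j] = <L_F e_k, e_j>."""
    n = len(F_coeffs)
    L = np.zeros((len(alphas), len(alphas)))
    for k, al in enumerate(alphas):
        for l, comp in enumerate(F_coeffs):
            if al[l] == 0:
                continue
            for beta, a in comp.items():
                g = tuple(x + y for x, y in zip(beta, al))
                g = g[:l] + (g[l] - 1,) + g[l + 1:]
                if g in index:
                    L[k, index[g]] += al[l] * a
    return L

# Hypothetical subsystems with upper-triangular (here diagonal) Jacobians, so P = I:
#   F1(z) = (-z1 + 0.3 z2^2, -2 z2),   F2(z) = (-2 z1 + 0.4 z1 z2, -z2 + 0.2 z1^2).
F_list = [[{(1, 0): -1.0, (0, 2): 0.3}, {(0, 1): -2.0}],
          [{(1, 0): -2.0, (1, 1): 0.4}, {(0, 1): -1.0, (2, 0): 0.2}]]
Ks = [1, 2]                                  # \hat K^(i): non-diagonal monomials of each field, cf. (3.15)
xi = 0.9

alphas = multi_indices(2, 4)[1:]             # drop the constant monomial e_0 (Remark 2.15)
index = {a: i for i, a in enumerate(alphas)}
Ls = [koopman_matrix(F, alphas, index) for F in F_list]

# Iterative choice of eps_j following (3.10), with Q_jk^(i) given by (3.19).
m = len(alphas)
eps = np.zeros(m)
eps[0] = 1.0
for j in range(1, m):
    bound = 0.0
    for L, K in zip(Ls, Ks):
        for k in range(j):
            if L[k, j] != 0.0:
                Q = (K**2 * abs(L[k, j])**2) / (xi**2 * abs(L[j, j].real) * abs(L[k, k].real))
                bound = max(bound, eps[k] * Q)
    eps[j] = 1.05 * bound if bound > 0 else 0.5 * eps[j - 1]   # any admissible value above the bound

def V(z):
    """Truncated common Lyapunov function (3.14) with P = I: V(z) = sum_k eps_k |z^alpha(k)|^2."""
    z = np.asarray(z, dtype=complex)
    return float(sum(e * abs(np.prod(z**np.array(a)))**2 for e, a in zip(eps, alphas)))

print(eps[:6])
print(V([0.3, -0.2]))
```

For this particular example the ratios Q_{jk}^(i) stay below one, so the ϵ_k decay geometrically and the series (3.9) converges for any ρ ≤ 1, in line with Corollary 3.8.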
+ Another result is obtained when a diagonal dominance property is assumed for
1837
+ the Jacobian matrices JF (i)(0).
1838
+ Corollary 3.9. Let
1839
+ (3.21)
1840
+
1841
+ ˙z = F (i)(z)
1842
+ �m
1843
+ i=1
1844
+ be a switched nonlinear system on Dn, with F (i)
1845
+ l
1846
+ (z) =
1847
+ +∞
1848
+
1849
+ k=1
1850
+ a(i)
1851
+ l,kzα(k) and
1852
+ +∞
1853
+
1854
+ k=1
1855
+ |a(i)
1856
+ l,k| < ∞
1857
+ for all i = {1, . . . , m} and l ∈ {1, . . . , n}. Assume that
1858
+ • all subsystems of (3.21) have a common hyperbolic equilibrium ze = 0 that is
1859
+ globally asymptotically stable on Dn,
1860
+ • the Lie algebra span
1861
+
1862
+ JF (i)(0)
1863
+
1864
+ Lie is solvable (and therefore there exists a
1865
+ matrix P such that PJF (i)(0)P −1 are upper triangular),
1866
+ • there exists ρ ∈]0, 1] such that Dn (0, ρ) is forward invariant with respect to
1867
+ the flows ϕ(i)
1868
+ t
1869
+ generated by F (i).
1870
+ If there exist ξ ∈]0, 1[ and κ ∈]0, 1[ with ξ + κ < 1 such that, for all q, r ∈
1871
+ {1, · · · , n} with q < r (when n > 1),
1872
+ (3.22)
1873
+ ���J �F (i)(0)]qr
1874
+ ���
1875
+ 2
1876
+ <
1877
+
1878
+
1879
+ n2 − n
1880
+ �2 ���ℜ([J �F (i)(0)]rr)
1881
+ ���
1882
+ ���ℜ([J �F (i)(0)]qq)
1883
+ ��� ,
1884
+ (3.23)
1885
+ ���[J �F (i)(0)]qr
1886
+ ��� <
1887
+
1888
+ n2 − n
1889
+ ���ℜ([J �F (i)(0)]qq)
1890
+ ���
1891
+ and
1892
+ (3.24)
1893
+ max
1894
+ i=1,··· ,m lim sup
1895
+ j∈N
1896
+ max
1897
+ k=1,...,j−1
1898
+ ⟨L �
1899
+ F (i) �ek,�ej⟩̸=0
1900
+ �∞
1901
+ l=1
1902
+ ���
1903
+ L �
1904
+ F (i)�el, �ej
1905
+ ��� �∞
1906
+ l=1
1907
+ ���
1908
+ L �
1909
+ F (i)�ek, �el
1910
+ ���
1911
+ κ2 ��ℜ
1912
+ ��
1913
+ L �
1914
+ F (i)�ej, �ej
1915
+ ���� ��ℜ
1916
+ ��
1917
+ L �
1918
+ F (i)�ek, �ek
1919
+ ���� < 1
1920
+ ρ2 ,
1921
+ where �ej(�z) = �zα(j) are monomials in the new coordinates �z = P −1z, then (3.21) is
1922
+ GUAS on Dn(0, ρ).
1923
+ Proof. We will denote by D := (n²−n)/2 the number of upper off-diagonal entries of the Jacobian matrices J \tilde F^{(i)}(0). The result follows from Theorem 3.7 with the sequence
+     b^{(i)}_{jj} = 1 − ξ − κ,
+     b^{(i)}_{jk} = ξ/(2D)    if j ≠ k with |α(j)| = |α(k)|, and if ⟨L_{\tilde F^{(i)}} \tilde e_k, \tilde e_j⟩ ≠ 0 or ⟨L_{\tilde F^{(i)}} \tilde e_j, \tilde e_k⟩ ≠ 0,
+     b^{(i)}_{jk} = 0    if |α(j)| = |α(k)|, ⟨L_{\tilde F^{(i)}} \tilde e_k, \tilde e_j⟩ = 0 and ⟨L_{\tilde F^{(i)}} \tilde e_j, \tilde e_k⟩ = 0,
+     b^{(i)}_{jk} = (κ/2) |⟨L_{\tilde F^{(i)}} \tilde e_k, \tilde e_j⟩| / Σ_{l=1}^∞ |⟨L_{\tilde F^{(i)}} \tilde e_l, \tilde e_j⟩|    if |α(k)| < |α(j)|,
+     b^{(i)}_{jk} = (κ/2) |⟨L_{\tilde F^{(i)}} \tilde e_j, \tilde e_k⟩| / Σ_{l=1}^∞ |⟨L_{\tilde F^{(i)}} \tilde e_j, \tilde e_l⟩|    if |α(k)| > |α(j)|,
+ with ξ ∈ ]0,1[ and κ ∈ ]0,1[. It follows from (2.15) in Remark 2.16 and the fact that the Jacobian matrices J \tilde F^{(i)}(0) are upper triangular that, for a fixed j and all k ≠ j with |α(k)| = |α(j)|, there are at most D nonzero values ⟨L_{\tilde F^{(i)}} \tilde e_k, \tilde e_j⟩ and at most D nonzero values ⟨L_{\tilde F^{(i)}} \tilde e_j, \tilde e_k⟩. Therefore, the sequence b^{(i)}_{jk} satisfies
+     Σ_{k=1}^∞ b^{(i)}_{jk} < (1 − ξ − κ) + ξ + (κ/2) ( Σ_{k=1}^{j} |⟨L_{\tilde F^{(i)}} \tilde e_k, \tilde e_j⟩| / Σ_{l=1}^∞ |⟨L_{\tilde F^{(i)}} \tilde e_l, \tilde e_j⟩| ) + (κ/2) ( Σ_{k=j+1}^∞ |⟨L_{\tilde F^{(i)}} \tilde e_j, \tilde e_k⟩| / Σ_{l=1}^∞ |⟨L_{\tilde F^{(i)}} \tilde e_j, \tilde e_l⟩| ) < 1.
+ The elements Q^{(i)}_{jk} of the double sequence (3.8) are given by
+ (3.25)    Q^{(i)}_{jk} =
+     D² |⟨L_{\tilde F^{(i)}} \tilde e_k, \tilde e_j⟩|² / ( ξ² |ℜ(⟨L_{\tilde F^{(i)}} \tilde e_j, \tilde e_j⟩)| |ℜ(⟨L_{\tilde F^{(i)}} \tilde e_k, \tilde e_k⟩)| )    if |α(j)| = |α(k)|,
+     ( Σ_{l=1}^∞ |⟨L_{\tilde F^{(i)}} \tilde e_l, \tilde e_j⟩| · Σ_{l=1}^∞ |⟨L_{\tilde F^{(i)}} \tilde e_k, \tilde e_l⟩| ) / ( κ² |ℜ(⟨L_{\tilde F^{(i)}} \tilde e_j, \tilde e_j⟩)| |ℜ(⟨L_{\tilde F^{(i)}} \tilde e_k, \tilde e_k⟩)| )    if |α(k)| ≠ |α(j)| and ⟨L_{\tilde F^{(i)}} \tilde e_k, \tilde e_j⟩ ≠ 0,
+     0    otherwise.
+ We note that Σ_{l=1}^∞ |⟨L_{\tilde F^{(i)}} \tilde e_l, \tilde e_j⟩| and Σ_{l=1}^∞ |⟨L_{\tilde F^{(i)}} \tilde e_k, \tilde e_l⟩| are finite according to the assumption.
+ Next, we show that the conditions (3.22) and (3.23) imply that Q^{(i)}_{jk} < 1 if |α(j)| = |α(k)|. Indeed, it follows from (2.15) and (3.25) that this latter inequality is equivalent to
+     α_q(k)² |[J \tilde F^{(i)}(0)]_{qr}|² < (ξ²/D²) | Σ_{l=1}^n α_l(j) ℜ([J \tilde F^{(i)}(0)]_{ll}) | · | Σ_{l=1}^n α_l(k) ℜ([J \tilde F^{(i)}(0)]_{ll}) |
+ for all j > k such that α(j) = (α_1(k), ..., α_q(k) − 1, ..., α_r(k) + 1, ..., α_n(k)) for some q < r. Since the diagonal entries of the (upper-triangular) Jacobian matrices J \tilde F^{(i)}(0) are the eigenvalues and therefore have negative real parts, the most restrictive case is obtained with α_l(k) = 0 for all l ≠ q, which yields
+     α_q(k)² |[J \tilde F^{(i)}(0)]_{qr}|² < (ξ²/D²) | (α_q(k) − 1) ℜ([J \tilde F^{(i)}(0)]_{qq}) + ℜ([J \tilde F^{(i)}(0)]_{rr}) | · | α_q(k) ℜ([J \tilde F^{(i)}(0)]_{qq}) |.
+ When α_q(k) = 1, this inequality is equivalent to (3.22). When α_q(k) > 1, we can rewrite
+     (α_q(k) − 1) |[J \tilde F^{(i)}(0)]_{qr}|² + |[J \tilde F^{(i)}(0)]_{qr}|² < (ξ²/D²) ( (α_q(k) − 1) |ℜ([J \tilde F^{(i)}(0)]_{qq})|² + |ℜ([J \tilde F^{(i)}(0)]_{rr})| |ℜ([J \tilde F^{(i)}(0)]_{qq})| ).
+ Using (3.22), we have that the above inequality is satisfied if
+     (α_q(k) − 1) |[J \tilde F^{(i)}(0)]_{qr}|² < (ξ²/D²) (α_q(k) − 1) |ℜ([J \tilde F^{(i)}(0)]_{qq})|²,
+ which is equivalent to (3.23).
+ While Q^{(i)}_{jk} < 1 for |α(j)| = |α(k)|, it is easy to see that Q^{(i)}_{jk} > 1 for |α(j)| > |α(k)|. The condition (3.24) therefore implies that
+     max_{i=1,...,m} limsup_{j∈ℕ} max_{k=1,...,j−1} Q^{(i)}_{jk} =: Q < 1/ρ²
+ and (3.10) is satisfied with
+ (3.26)    ϵ_j ∼ max_{k∈K_j} { ϵ_k Q }
+ for j ≫ 1, with
+     K_j = { k ∈ {1,...,j−1} : ⟨L_{\tilde F^{(i)}} \tilde e_k, \tilde e_j⟩ ≠ 0 for some i ∈ {1,...,m} and |α(k)| < |α(j)| }.
+ Hence, the sequence (3.26) yields ϵ_j = O(Q^{|α(j)|}). It follows that (3.9) is convergent and Theorem 3.7 implies that the switched system (3.21) is GUAS on D^n(0,ρ).
+ For the particular case where the Jacobian matrices JF^{(i)}(0) are simultaneously diagonalizable (i.e. they are diagonalizable and they commute), the diagonal dominance conditions (3.22) and (3.23) are trivially satisfied. We should mention that the Lie-algebraic property of commutation is only needed for the Jacobian matrices JF^{(i)}(0), an assumption which contrasts with the commutation property imposed on vector fields in [17], [30] and [33].
+ Remark 3.10. In the case n = 1, we recover the trivial GUAS property of switched systems from Corollary 3.9. Indeed, consider the vector fields F^{(i)}(z) = Σ_{k=1}^∞ a^{(i)}_k z^k on D, with Σ_{k=1}^∞ |a^{(i)}_k| < ∞, and assume that the subsystems have a globally stable equilibrium at the origin. The Lie algebra generated by the scalars JF^{(i)}(0) is trivially solvable and D(0,ρ) is forward invariant for all ρ. Moreover, the conditions (3.22) and (3.23) are trivially satisfied. Then Corollary 3.9 implies that the switched system is GUAS on D(0,ρ) for ρ ∈ ]0,1[ which satisfies (3.24). It follows from (2.14) that
+     Σ_{l=1}^∞ |⟨L_{F^{(i)}} e_l, e_j⟩| = Σ_{l=1}^{j} l |a^{(i)}_{j−l+1}| = Σ_{l=1}^{j} (j−l+1) |a^{(i)}_l|,
+     Σ_{l=1}^∞ |⟨L_{F^{(i)}} e_k, e_l⟩| = k Σ_{l=1}^∞ |a^{(i)}_l|,
+ and |ℜ(⟨e_j, L_{F^{(i)}} e_j⟩)| = j |ℜ(a^{(i)}_1)|. With κ arbitrarily close to 1 (since ξ can be taken arbitrarily small in (3.22) and (3.23)), condition (3.24) is rewritten as
+     max_{i=1,...,m} limsup_{j∈ℕ}  ( Σ_{l=1}^{j} (j−l+1) |a^{(i)}_l| · Σ_{l=1}^∞ |a^{(i)}_l| ) / ( j |ℜ(a^{(i)}_1)|² )  <  1/ρ²
+ and, using (j − l + 1) |a^{(i)}_l| ≤ j |a^{(i)}_l| for all l, we obtain
+     ρ < min_{i=1,...,m}  |ℜ(a^{(i)}_1)| / Σ_{l=1}^∞ |a^{(i)}_l| .
+ 4. Examples. This section presents two examples that illustrate our results. We will focus on specific cases that satisfy the assumptions of Corollaries 3.8 and 3.9 and, without loss of generality, we will directly consider Jacobian matrices in triangular form.
+ 4.1. Example 1: polynomial vector fields. Similarly to Example 1, we consider the vector fields on the bidisk D²
+ (4.1)    F^{(1)}(z_1, z_2) = ( −a z_1 , −a z_2 )    and    F^{(2)}(z_1, z_2) = ( −a z_1 + b (z_1² − z_1 z_2²) , −a z_2 + (b/2) z_1 z_2 ),
+ where b > 0 and a > 3b. For all ρ < 1, the bidisk D²(0,ρ) is invariant with respect to the flows of F^{(i)}. Indeed, for all z ∈ ∂D²(0,ρ) (i.e. |z_l| = ρ for some l), one has to verify that ℜ( F^{(i)}_l(z) \bar z_l ) < 0. We have
+ • |z_l| = ρ ⇒ ℜ( F^{(1)}_l(z) \bar z_l ) = −a ρ² < 0,
+ • |z_1| = ρ ⇒ ℜ( F^{(2)}_1(z) \bar z_1 ) = −a ρ² + b ρ² ℜ( z_1 − z_2² ) < 0,
+ • |z_2| = ρ ⇒ ℜ( F^{(2)}_2(z) \bar z_2 ) = ρ² ( −a + (b/2) ℜ(z_1) ) < 0.
+ It is clear that the vector field F^{(1)} generates a holomorphic flow on D². The same property holds for F^{(2)} since the conditions of Proposition A.1 are satisfied with h_1(z'_1) = h_2(z'_2) = 0 (i.e. ℜ{G_1(z)} = ℜ{ −a + b (z_1 − z_2²) } < 0 and ℜ{G_2(z)} = ℜ{ −a + (b/2) z_1 } < 0). The unique global stable equilibrium of the subsystems in the bidisk D² is the origin. According to (2.14), the entries of the Koopman matrices \bar L_{F^{(1)}} and \bar L_{F^{(2)}} are given by
+     ⟨L_{F^{(1)}} e_k, e_j⟩ = −a |α(j)|  if k = j,  and 0 otherwise,
+ and
+     ⟨L_{F^{(2)}} e_k, e_j⟩ =
+         −a |α(j)|                                   if k = j,
+         b ( α_1(j) + (α_2(j) − 2)/2 )                if α_1(k) = α_1(j) − 1 ≥ 0 and α_2(k) = α_2(j),
+         −b α_1(j)                                    if α_1(k) = α_1(j) and α_2(k) = α_2(j) − 2 ≥ 0,
+         0                                            otherwise.
+ Since ⟨L_{F^{(1)}} e_k, e_j⟩ = 0 for all k ≠ j, we have Q^{(1)}_{jk} = 0 for all k, j in (3.17). Moreover, K^{(2)} = 3 and the condition (3.17) can be rewritten as
+     Q = limsup_{j∈ℕ, |α(j)|>1} max{ (9b²/a²) ( α_1(j) + (α_2(j) − 2)/2 )² / ( |α(j)| (|α(j)| − 1) ) , (9b²/a²) α_1(j)² / ( |α(j)| (|α(j)| − 2) ) } = 9b²/a² < 1
+ and is satisfied since a > 3b. Hence, it follows from Corollary 3.8 that the switched system (4.1) is GUAS in D². Note that, in this case, a CLF is given by
+     V(z) = Σ_{k=1}^∞ Q^{−k} | z^{α(k)} |².
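+ As a quick numerical sanity check of the value Q = 9b²/a² obtained above (not part of the proof), the following Python sketch evaluates the two candidate ratios over a truncated range of multi-indices, for hypothetical parameter values a = 1 and b = 0.3 satisfying a > 3b:
+
+ import itertools
+
+ # Hypothetical parameter values with a > 3*b (assumption of this sketch).
+ a, b = 1.0, 0.3
+
+ def q_candidates(a1, a2):
+     """Ratios appearing in the expression of Q for Example 1, for the multi-index (a1, a2)."""
+     n = a1 + a2
+     vals = []
+     if a1 >= 1 and n > 1:      # coupling alpha(k) = alpha(j) - e_1
+         vals.append(9 * b**2 / a**2 * (a1 + (a2 - 2) / 2) ** 2 / (n * (n - 1)))
+     if a2 >= 2 and n > 2:      # coupling alpha(k) = alpha(j) - 2*e_2
+         vals.append(9 * b**2 / a**2 * a1**2 / (n * (n - 2)))
+     return vals
+
+ sup_q = max(v for a1, a2 in itertools.product(range(31), repeat=2) for v in q_candidates(a1, a2))
+ print(sup_q, 9 * b**2 / a**2)   # the supremum approaches 9*b^2/a^2 < 1 when a > 3*b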
+ 4.2. Example 2: analytic vector fields. The following example is taken from [1] and [12]. Consider the switched system defined by the vector fields
+     F^{(1)}(x_1, x_2) = ( −x_1 + (1/µ) sin²(x_1) x_1² x_2 , −x_2 ),
+     F^{(2)}(x_1, x_2) = ( −x_1 + (1/µ) cos²(x_1) x_1² x_2 , −x_2 ),
+ where µ ≥ 12/5. Both subsystems are globally asymptotically stable, but the switched system is not GUAS in R², as shown in [1] by using the fact that the convex combination F = ( F^{(1)} + F^{(2)} )/2 of the two subsystems is not globally asymptotically stable in R² (see Corollary 2.5). Yet, our result allows to infer the GUAS property in a specific region of the invariant real square ]−1,1[². To do so, we complexify the dynamics and define the vector fields on the bidisk D². For all ρ < 1, the bidisk D²(0,ρ) is invariant with respect to the flows of F^{(i)}. Indeed, we have
+ • |z_1| = ρ ⇒ ℜ( F^{(1)}_1(z) \bar z_1 ) = ρ² ( −1 + (1/µ) ℜ( z_1 z_2 sin²(z_1) ) ) < 0, since (1/µ) |ℜ( z_1 z_2 sin²(z_1) )| ≤ ρ² |sin²(z_1)| / µ < 1, where we used max_{z_1∈D} |sin²(z_1)| < 12/5 ≤ µ,
+ • |z_2| = ρ ⇒ ℜ( F^{(1)}_2(z) \bar z_2 ) = −ρ² < 0.
+ The same result follows for F^{(2)} (with max_{z_1∈D} |cos²(z_1)| < 12/5 ≤ µ). The vector field F^{(1)} generates a holomorphic flow on D² since the conditions of Proposition A.1 are satisfied with h_1(z'_1) = h_2(z'_2) = 0 (i.e. ℜ{G_1(z)} = ℜ{ −1 + (1/µ) sin²(z_1) z_1 z_2 } < 0 and ℜ{G_2(z)} = −1 < 0). The same result holds for F^{(2)}. The Taylor expansion of the vector fields yields
+     F^{(1)}(z) = ( −z_1 + (1/µ) Σ_{p=1}^∞ ( (−1)^{p+1} 2^{2p−1} / (2p)! ) z_1^{2p+2} z_2 , −z_2 ),
+     F^{(2)}(z) = ( −z_1 + (1/µ) z_1² z_2 + (1/µ) Σ_{p=1}^∞ ( (−1)^{p} 2^{2p−1} / (2p)! ) z_1^{2p+2} z_2 , −z_2 ).
+ According to (2.14), the entries of the Koopman matrices \bar L_{F^{(1)}} and \bar L_{F^{(2)}} are given by
+     ⟨L_{F^{(1)}} e_k, e_j⟩ =
+         −|α(k)|                                          if k = j,
+         ( α_1(k)/µ ) (−1)^{p+1} 2^{2p−1} / (2p)!          if α_1(k) = α_1(j) − 1 − 2p ≥ 0 and α_2(k) = α_2(j) − 1 ≥ 0,
+         0                                                 otherwise,
+ and
+     ⟨L_{F^{(2)}} e_k, e_j⟩ =
+         −|α(k)|                                          if k = j,
+         α_1(k)/µ                                          if α_1(k) = α_1(j) − 1 ≥ 0 and α_2(k) = α_2(j) − 1 ≥ 0,
+         ( α_1(k)/µ ) (−1)^{p} 2^{2p−1} / (2p)!            if α_1(k) = α_1(j) − 1 − 2p ≥ 0 and α_2(k) = α_2(j) − 1 ≥ 0,
+         0                                                 otherwise,
+ where p = 1, ..., ⌊ (α_1(j) − 1)/2 ⌋. This implies that we have
+ (4.2)
+     Σ_{l=1}^∞ |⟨L_{F^{(1)}} e_l, e_j⟩| = |α(j)| + (1/µ) Σ_{l=1}^{⌊(α_1(j)−1)/2⌋} ( α_1(j) − 1 − 2l ) 2^{2l−1} / (2l)! ,
+     Σ_{l=1}^∞ |⟨L_{F^{(2)}} e_l, e_j⟩| = |α(j)| + ( α_1(j) − 1 )/µ + (1/µ) Σ_{l=1}^{⌊(α_1(j)−1)/2⌋} ( α_1(j) − 1 − 2l ) 2^{2l−1} / (2l)! ,
+     Σ_{l=1}^∞ |⟨L_{F^{(1)}} e_k, e_l⟩| = |α(k)| + ( α_1(k)/µ ) Σ_{p=1}^∞ 2^{2p−1} / (2p)! = |α(k)| + ( α_1(k)/(2µ) ) ( cosh(2) − 1 ),
+     Σ_{l=1}^∞ |⟨L_{F^{(2)}} e_k, e_l⟩| = |α(k)| + ( α_1(k)/µ ) ( 1 + Σ_{p=1}^∞ 2^{2p−1} / (2p)! ) = |α(k)| + ( α_1(k)/(2µ) ) ( cosh(2) + 1 ).
+ Since the Jacobian matrices JF^{(i)}(0) are diagonal, the conditions (3.22) and (3.23) are trivially satisfied (with ξ arbitrarily small). Moreover, we observe from (4.2) that Σ_{l=1}^∞ |⟨L_{F^{(1)}} e_l, e_j⟩| ≤ Σ_{l=1}^∞ |⟨L_{F^{(2)}} e_l, e_j⟩| and Σ_{l=1}^∞ |⟨L_{F^{(1)}} e_k, e_l⟩| ≤ Σ_{l=1}^∞ |⟨L_{F^{(2)}} e_k, e_l⟩| for all k, j. It follows that, with κ arbitrarily close to 1, condition (3.24) can be rewritten as
+     limsup_{j∈ℕ} max_{k=1,...,j−1}  ( Σ_{l=1}^∞ |⟨L_{F^{(2)}} e_l, e_j⟩| · Σ_{l=1}^∞ |⟨L_{F^{(2)}} e_k, e_l⟩| ) / ( |ℜ(⟨L_{F^{(2)}} e_j, e_j⟩)| |ℜ(⟨L_{F^{(2)}} e_k, e_k⟩)| )  <  1/ρ²,
+ which is verified for ρ = ( 1 + (cosh(2) + 1)/(2µ) )^{-1}. Indeed, from (4.2), we have
+     Σ_{l=1}^∞ |⟨L_{F^{(2)}} e_l, e_j⟩| < |α(j)| + ( α_1(j)/µ ) ( 1 + Σ_{l=1}^∞ 2^{2l−1} / (2l)! ) = |α(j)| + ( α_1(j)/(2µ) ) ( cosh(2) + 1 ) ≤ |α(j)| ( 1 + (cosh(2) + 1)/(2µ) )
+ and
+     Σ_{l=1}^∞ |⟨L_{F^{(2)}} e_k, e_l⟩| ≤ |α(k)| ( 1 + (cosh(2) + 1)/(2µ) ).
+ It follows from Corollary 3.9 that the switched system is GUAS on D²(0,ρ). See Figure 1 for the different values of ρ depending on µ. Note that a CLF is given by
+     V(z) = Σ_{k=1}^∞ Q^{−2|α(k)|} | z^{α(k)} |²,    where Q = 1/ρ.
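+ The curve of Figure 1 follows directly from the closed-form radius ρ = ( 1 + (cosh(2) + 1)/(2µ) )^{-1} obtained above; a short Python sketch (plotting choices are ours, not from the paper) reproduces it:
+
+ import numpy as np
+ import matplotlib.pyplot as plt
+
+ mu = np.linspace(12 / 5, 50, 200)                         # the result requires mu >= 12/5
+ rho = 1.0 / (1.0 + (np.cosh(2.0) + 1.0) / (2.0 * mu))     # GUAS radius from Example 2
+
+ plt.plot(mu, rho)
+ plt.xlabel("mu")
+ plt.ylabel("rho")
+ plt.title("Radius of the GUAS polydisk as a function of mu")
+ plt.show()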
+ where Q = 1/ρ.
2747
+ 5. Conclusion and perspectives. This paper provides new advances on the
2748
+ uniform stability problem for switched nonlinear systems satisfying Lie-algebraic solv-
2749
+ ability conditions. First, we have shown that the solvability condition on nonlinear
2750
+ This manuscript is for review purposes only.
2751
+
2752
+ UNIFORM STABILITY OF SWITCHED NONLINEAR SYSTEMS
2753
+ 25
2754
+ Fig. 1. The switched system is shown to be GUAS on a polydisk of radius ρ that depends on
2755
+ the parameter µ.
2756
+ vector fields does not guarantee the existence of a common invariant flag and, instead,
2757
+ we have imposed the solvability condition only on the linear part of the vector fields.
2758
+ Then we have constructed a common Lyapunov functional for an equivalent infinite-
2759
+ dimensional switched linear system obtained with the adjoint of the Koopman genera-
2760
+ tor on the Hardy space of the polydisk. Finally we have derived a common Lyapunov
2761
+ function via evaluation functionals to prove that specific switched nonlinear systems
2762
+ are uniformly globally asymptotically stable on invariant sets. Our results heavily
2763
+ rely on the Koopman operator framework, which appears to be a valid tool to tackle
2764
+ theoretical questions from a novel angle.
2765
+ We envision several perspectives for future research. Our results apply to specific
2766
+ types of switched nonlinear systems within the frame of Lie-algebraic solvability con-
2767
+ ditions. They could be extended to more general dynamics, including dynamics that
2768
+ possess a limit cycle or a general attractor. In the same line, the Koopman operator-
2769
+ based techniques developed in this paper could be applied to other types of stability
2770
+ than uniform stability. More importantly, the obtained stability results are limited
2771
+ to bounded invariant sets, mainly due to the convergence properties of the Lyapunov
2772
+ functions and the very definition of the Hardy space on the polydisk. We envision
2773
+ that these results could possibly be adapted to infer global stability in Rn. Finally,
2774
+ our results are not restricted to switched systems and have direct implications in the
2775
+ global stability properties of nonlinear dynamical systems, which will be investigated
2776
+ in a future publication.
2777
+ Appendix A. General theorems. We recall here some general results that are used in the proofs of our results.
+
+ Proposition A.1 ([4]). Let F : D^n → C^n be holomorphic. Then F is an infinitesimal generator on D^n if and only if, for all l = 1,...,n and for all z ∈ D^n,
+     F_l(z) = G_l(z) ( z_l − h_l(z'_l) ),
+ where z'_l = (z_1,...,z_{l−1}, z_{l+1},...,z_n), h_l : D^{n−1} → D is holomorphic, G_l : D^n → C is holomorphic, and ℜ( (1 − h_l(z'_l) \bar z_l) G_l(z) ) ≤ 0.
+
+ Theorem A.2 (Maximum Modulus Principle for bounded domains [27]). Let D^n ⊂ C^n be a bounded domain and f : \bar D^n → C be a continuous function whose restriction to D^n is holomorphic. Then |f| attains a maximum on the boundary ∂D^n.
+
+ Theorem A.3 (Abel's multidimensional lemma [27], p. 36). Let Σ_{α∈ℕ^n} a_α z^α be a power series. If there exists r ∈ C^n such that sup_{α∈ℕ^n} |a_α r^α| < ∞, then the series Σ_{α∈ℕ^n} a_α z^α is normally convergent for all z ∈ C^n such that |z_1| < |r_1|, ..., |z_n| < |r_n|.
+
+ Theorem A.4 (Weierstrass's M-test). Let Σ_{k=1}^{+∞} f_k(z) be a series of functions on a domain D^n of C^n. If there exists a sequence of real numbers M_k such that
+ • M_k > 0 for all k,
+ • the numerical series Σ_{k=1}^{+∞} M_k is convergent, and
+ • for all k and all z ∈ D^n, |f_k(z)| ≤ M_k,
+ then the series Σ_{k=1}^{+∞} f_k(z) is absolutely and uniformly convergent on D^n.
+
+ Theorem A.5 (Lie's theorem [8], p. 49). Let X be a nonzero n-dimensional complex vector space, and let g be a solvable Lie subalgebra of the Lie algebra of n × n complex matrices. Then X has a basis (v_1,...,v_n) with respect to which every element of g has an upper triangular form.
+ REFERENCES
+ [1] D. Angeli and D. Liberzon, A note on uniform global asymptotic stability of nonlinear switched systems in triangular form, in Proc. 14th Int. Symp. on Mathematical Theory of Networks and Systems (MTNS), 2000.
+ [2] V. I. Arnold, Geometrical methods in the theory of ordinary differential equations, vol. 250, Springer Science & Business Media, 2012.
+ [3] M. Budišić, R. Mohr, and I. Mezić, Applied Koopmanism, Chaos: An Interdisciplinary Journal of Nonlinear Science, 22 (2012), p. 047510.
+ [4] R.-Y. Chen and Z.-H. Zhou, Parametric representation of infinitesimal generators on the polydisk, Complex Analysis and Operator Theory, 10 (2016), pp. 725–735.
+ [5] M. Contreras, C. De Fabritiis, and S. Díaz-Madrigal, Semigroups of holomorphic functions in the polydisk, Proceedings of the American Mathematical Society, 139 (2011), pp. 1617–1624.
+ [6] C. Dubi, An algorithmic approach to simultaneous triangularization, Linear Algebra and its Applications, 430 (2009), pp. 2975–2981.
+ [7] K.-J. Engel, R. Nagel, and S. Brendle, One-parameter semigroups for linear evolution equations, vol. 194, Springer, 2000.
+ [8] K. Erdmann and M. J. Wildon, Introduction to Lie algebras, vol. 122, Springer, 2006.
+ [9] P. Gaspard, G. Nicolis, A. Provata, and S. Tasaki, Spectral signature of the pitchfork bifurcation: Liouville equation approach, Physical Review E, 51 (1995), p. 74.
+ [10] A. Katavolos and H. Radjavi, Simultaneous triangularization of operators on a Banach space, Journal of the London Mathematical Society, 2 (1990), pp. 547–554.
+ [11] A. Lasota and M. C. Mackey, Chaos, fractals, and noise: stochastic aspects of dynamics, vol. 97, Springer Science & Business Media, 1998.
+ [12] D. Liberzon, Switching in systems and control, vol. 190, Springer, 2003.
+ [13] D. Liberzon, Lie algebras and stability of switched nonlinear systems, Princeton University Press, Princeton, NJ/Oxford, 2004, pp. 203–207.
+ [14] D. Liberzon, Switched systems: stability analysis and control synthesis, 2013.
+ [15] D. Liberzon, J. P. Hespanha, and A. S. Morse, Stability of switched systems: a Lie-algebraic condition, Systems & Control Letters, 37 (1999), pp. 117–122.
+ [16] D. Liberzon and A. S. Morse, Basic problems in stability and design of switched systems, IEEE Control Systems Magazine, 19 (1999), pp. 59–70.
+ [17] J. L. Mancilla-Aguilar, A condition for the stability of switched nonlinear systems, IEEE Transactions on Automatic Control, 45 (2000), pp. 2077–2079.
+ [18] M. Margaliot and D. Liberzon, Lie-algebraic stability conditions for nonlinear switched systems and differential inclusions, Systems & Control Letters, 55 (2006), pp. 8–16.
+ [19] A. Mauroy and I. Mezić, Global stability analysis using the eigenfunctions of the Koopman operator, IEEE Transactions on Automatic Control, 61 (2016), pp. 3356–3369.
+ [20] A. Mauroy, I. Mezić, and J. Moehlis, Isostables, isochrons, and Koopman spectrum for the action–angle representation of stable fixed point dynamics, Physica D: Nonlinear Phenomena, 261 (2013), pp. 19–30.
+ [21] A. Mauroy, Y. Susuki, and I. Mezić, Koopman operator in systems and control, Springer, 2020.
+ [22] I. Mezić, Analysis of fluid flows via spectral properties of the Koopman operator, Annual Review of Fluid Mechanics, 45 (2013), pp. 357–378.
+ [23] Y. Mori, T. Mori, and Y. Kuroe, A solution to the common Lyapunov function problem for continuous-time systems, in Proceedings of the 36th IEEE Conference on Decision and Control, vol. 4, 1997, pp. 3530–3531, https://doi.org/10.1109/CDC.1997.652397.
+ [24] K. S. Narendra and J. Balakrishnan, A common Lyapunov function for stable LTI systems with commuting A-matrices, IEEE Transactions on Automatic Control, 39 (1994), pp. 2469–2471.
+ [25] W. Rudin, Function Theory in Polydiscs, Mathematics Lecture Note Series, W. A. Benjamin, 1969, https://books.google.be/books?id=9waoAAAAIAAJ.
+ [26] W. Rudin, Function theory in the unit ball of C^n, Springer Science & Business Media, 2008.
+ [27] V. Scheidemann, Introduction to complex analysis in several variables, Springer, 2005.
+ [28] J. H. Shapiro, Composition operators and classical function theory, Springer Science & Business Media, 2012.
+ [29] Y. Sharon and M. Margaliot, Third-order nilpotency, finite switchings and asymptotic stability, in Proceedings of the 44th IEEE Conference on Decision and Control, IEEE, 2005, pp. 5415–5420.
+ [30] H. Shim, D. Noh, and J. H. Seo, Common Lyapunov function for exponentially stable nonlinear systems, 2001.
+ [31] R. Shorten and K. Narendra, On the stability and existence of common Lyapunov functions for stable linear switching systems, in Proceedings of the 37th IEEE Conference on Decision and Control, vol. 4, IEEE, 1998, pp. 3723–3724.
+ [32] R. Shorten, F. Wirth, O. Mason, K. Wulff, and C. King, Stability criteria for switched and hybrid systems, SIAM Review, 49 (2007), pp. 545–592.
+ [33] L. Vu and D. Liberzon, Common Lyapunov functions for families of commuting nonlinear systems, Systems & Control Letters, 54 (2005), pp. 405–416.
+ [34] C. M. Zagabe and A. Mauroy, Switched nonlinear systems in the Koopman operator framework: Toward a Lie-algebraic condition for uniform stability, in 2021 European Control Conference (ECC), IEEE, 2021, pp. 281–286.
ANE5T4oBgHgl3EQfSg9u/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
AtFQT4oBgHgl3EQf9DeI/content/2301.13449v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:97007270d2939ce6e5842a9a60014771b23af8dbc02568337eb8cee710aebae6
3
+ size 494466
AtFQT4oBgHgl3EQf9DeI/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a8e9a9ac12a0e5664b8dc5d6e0777d2a38f7eb577c4cf1a0362054a268c269fd
3
+ size 3801133
AtFQT4oBgHgl3EQf9DeI/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3e9a57e6cfe9eb6651b6ab6fd7972b7551c9800f83a5fb9cd051ea7e1f50d113
3
+ size 158213
CNAzT4oBgHgl3EQfwP6m/content/tmp_files/2301.01720v1.pdf.txt ADDED
@@ -0,0 +1,657 @@
1
+ Augmenting data-driven models for energy systems
2
+ through feature engineering: A Python framework
3
+ for feature engineering
4
+ Sandra Wilfling
5
+ Abstract—Data-driven modeling is an approach in energy systems
6
+ modeling that has been gaining popularity. In data-driven mod-
7
+ eling, machine learning methods such as linear regression, neural
8
+ networks or decision-tree based methods are being applied.
9
+ While these methods do not require domain knowledge, they are
10
+ sensitive to data quality. Therefore, improving data quality in a
11
+ dataset is beneficial for creating machine learning-based models.
12
+ The improvement of data quality can be implemented through
13
+ preprocessing methods. A selected type of preprocessing is feature
14
+ engineering, which focuses on evaluating and improving the
15
+ quality of certain features inside the dataset. Feature engineering
16
+ methods include methods such as feature creation, feature ex-
17
+ pansion, or feature selection. In this work, a Python framework
18
+ containing different feature engineering methods is presented.
19
+ This framework contains different methods for feature creation,
20
+ expansion and selection; in addition, methods for transforming
21
+ or filtering data are implemented. The implementation of the
22
+ framework is based on the Python library scikit-learn. The
23
+ framework is demonstrated on a case study of a use case
24
+ from energy demand prediction. A data-driven model is created
25
+ including selected feature engineering methods. The results show
26
+ an improvement in prediction accuracy through the engineered
27
+ features.
28
+ Keywords: Energy Systems Modeling, Data-driven Modeling,
29
+ Feature Engineering, Python, Frameworks
30
+ I. INTRODUCTION
31
+ Modeling and simulation is a crucial step in the design and
32
+ optimization of energy systems. While traditional modeling
33
+ methods rely on system parameters, a recent approach focuses
34
+ on creating data-driven models based on measurement data
35
+ from an underlying system. In data-driven modeling, models
36
+ are not created based on system parameters, but on existing
37
+ measurement data. These models are based on machine learn-
38
+ ing (ML) methods [1]. While the area of machine learning
39
+ includes a wide range of methods such as clustering algorithms
40
+ or classifiers, the focus in data-driven modeling is set to
41
+ regression analysis for prediction and forecasting [2]. In re-
42
+ gression analysis, methods such as linear regression, decision-
43
+ tree based regression, or neural networks are being applied
44
+ [3]. While some of these methods, such as linear regression,
45
+ can be classified as white-box ML methods, others, such as
46
+ neural networks, are classified as black-box ML methods due
47
+ to their lack of comprehensibility [4]. While white-box ML
48
+ methods give more insight about their internal structure than
49
+ black box ML methods, their architecture is simpler, making
50
+ it more difficult to model complex dependencies, for instance
51
+ non-linearities [5]. To capture such dependencies using white-
52
+ box ML models, information about the dependencies can be
53
+ passed to the model through the dataset. This step is called
54
+ feature engineering. The main purpose of feature engineering
55
+ is to augment the existing dataset [6]. This can be done through
56
+ adding new information, or expanding or reducing the existing
57
+ feature set. In addition, the quality of a single feature can be
58
+ improved, for instance through transformation or filtering [7].
59
+ The area of feature engineering covers a wide number of
60
+ methods, such as feature expansion [8] or feature selection
61
+ [9]. The term feature creation covers the creation of features
62
+ to add new information. Methods of feature creation include
63
+ encodings of time-based features, such as cyclic features [10],
64
+ or categorical encoding [11]. Similarly, feature expansion
65
+ is the method of creating new features based on existing
66
+ features. Feature expansion covers classical methods such as
67
+ polynomial expansion [8] or spline interpolation [12].
68
+ In contrast to feature creation and expansion, feature selection
69
+ aims to reduce the size of the feature set. While large feature
70
+ sets may contain more information than smaller feature sets,
71
+ there may be redundancy in the data, as well as sparsity [13] or
72
+ multicollinearity [14]. To reduce the sparsity or multicollinear-
73
+ ity, as well as to remove redundant features, feature selection
74
+ mechanisms are applied. While methods such as Principal
75
+ Component Analysis (PCA) [15] aim to reduce the feature
76
+ set through transformation, feature selection methods discard
77
+ features based on certain criteria [16]. Feature selection can
78
+ be implemented for instance through sequential methods, such
79
+ as forward or backward selection [17], or through correlation
80
+ criteria [9]. Correlation criteria include measures based on
81
+ the Pearson Correlation Coefficient, as well as entropy-based
82
+ criteria [16]. The feature selection is then implemented through
83
+ a threshold-based selection. Threshold-based feature selection
84
+ analyzes features based on the selected criterion, and discards
85
+ features below a certain threshold.
86
+ Mainly, the methods of feature engineering are applied during
87
+ the first steps of creating a data-driven model, creating an en-
88
+ gineered dataset. This engineered dataset is then used to train
89
+ the model [18]. However, feature engineering methods can
90
+ also be used in combination with model selection procedures,
91
+ such as grid search [19]. Feature engineering methods are
92
+ widely used in applications from the energy domain, such as
93
+ in prediction for building energy demand [20] or photovoltaic
94
+ power prediction [18].
95
+ arXiv:2301.01720v1 [cs.LG] 4 Jan 2023
96
+
97
+ A. Main Contribution
98
+ In the creation of data-driven models, a significant factor is
99
+ the quality of the underlying dataset. To improve the dataset
100
+ quality, feature engineering methods can be applied.
101
+ The main contribution of this work is a Python framework for
102
+ feature engineering that can be used for data-driven model
103
+ creation. The framework implements different methods for
104
+ feature creation, feature expansion, feature selection or trans-
105
+ formation. The feature engineering framework is implemented
106
+ in Python based on the scikit-learn framework and can be
107
+ imported as a Python package. The functionality of the frame-
108
+ work is demonstrated on a case study of an energy demand
109
+ prediction use case. The results of the case study show an
110
+ improvement in prediction accuracy through the applied feature
111
+ engineering steps.
112
+ II. METHOD
113
+ The presented framework implements various feature engi-
114
+ neering methods in Python based on the research in [21] and
115
+ on the interfaces defined by scikit-learn. The methods are im-
116
+ plemented using either scikit-learn’s TransformerMixin or
117
+ SelectorMixin interface. The framework implements meth-
118
+ ods for feature expansion, feature creation, feature selection,
119
+ as well as transformation and filtering operations.
120
+ A. Feature Creation and Expansion
121
+ In the framework, different methods for feature creation and
122
+ expansion are implemented. These methods create new fea-
123
+ tures from time values or from expansion of existing features.
124
+ To create new features, the implemented framework supports
125
+ categorical encoding and cyclic encoding of time-based values.
126
+ Cyclic Features Cyclic features can be used to model time
127
+ values through cyclic functions [10]. Cyclic features were
128
+ implemented in [21], as well as in [22] and [23]. In the
129
+ implementation of the framework, sinusoidal signals xsin, xcos
130
+ with a selected frequency f can be created based on a sample
131
+ series n:
132
+ xsin[n] = sin(2πfn)
133
+ (1)
134
+ xcos[n] = cos(2πfn)
135
+ (2)
136
+ The implementation offers the creation of features with a zero-
137
+ order hold function for a certain time period, for instance TS =
138
+ 1 day for a signal with a time period of T = 1 week.
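+ To illustrate the idea (this is a minimal sketch with pandas/numpy, not the framework's actual API), cyclic features for a weekly period with an assumed hourly DatetimeIndex can be built as follows:
+
+ import numpy as np
+ import pandas as pd
+
+ # Hypothetical hourly index; the frequency f corresponds to one cycle per week.
+ index = pd.date_range("2019-05-01", periods=24 * 14, freq="H")
+ hours_per_week = 24 * 7
+ n = np.arange(len(index))
+
+ cyclic = pd.DataFrame(index=index)
+ cyclic["week_sin"] = np.sin(2 * np.pi * n / hours_per_week)
+ cyclic["week_cos"] = np.cos(2 * np.pi * n / hours_per_week)
+ # Zero-order hold with T_S = 1 day: keep one value per day and forward-fill it.
+ cyclic_zoh = cyclic.resample("D").first().reindex(index, method="ffill")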
139
+ Categorical Features Categorical encoding creates a repre-
140
+ sentation of discrete numerical values through a number of
141
+ features with boolean values [11], [21]. In this implementation,
142
+ for a number of categorical features x0,....,N for a feature x
143
+ with discrete possible values v0,....,N, a single feature xi is
144
+ defined as:
145
+ (3)    x_i = 1 if x = v_i,  and x_i = 0 otherwise
152
+ The framework offers categorical encoding for time-based
153
+ values. In addition, a division factor is implemented to create
154
+ an encoding of a downsampled version of the time values.
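+ As a sketch of this kind of encoding (using plain pandas rather than the framework's own classes), weekday values 0–6 can be mapped to boolean indicator features, optionally after downsampling with a division factor:
+
+ import pandas as pd
+
+ df = pd.DataFrame({"weekday": [0, 1, 2, 5, 6, 6, 3]})            # hypothetical discrete feature
+ one_hot = pd.get_dummies(df["weekday"], prefix="weekday")        # x_i = 1 iff x = v_i, see (3)
+ # With a division factor of e.g. 2, a coarser encoding of the same values is obtained.
+ coarse = pd.get_dummies(df["weekday"] // 2, prefix="weekday_div2")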
155
+ Feature Expansion For feature expansion, the framework im-
156
+ plements wrappers for scikit-learn’s PolynomialFeatures and
157
+ SplineTransformer classes. The method of polynomial expan-
158
+ sion was applied in [21]. The parameters for the expansion
159
+ methods are passed through the wrapper.
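+ Since the framework wraps scikit-learn's expansion classes, the underlying behaviour can be sketched directly with scikit-learn (the parameters shown here are illustrative):
+
+ import numpy as np
+ from sklearn.preprocessing import PolynomialFeatures, SplineTransformer
+
+ X = np.random.default_rng(0).normal(size=(100, 2))               # hypothetical input features
+ X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
+ X_spline = SplineTransformer(degree=3, n_knots=5).fit_transform(X)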
160
+ Time-based Features The framework implements a method of
161
+ dynamic timeseries unrolling to create features xn−1, xn−2,
162
+ ... xn−N from an existing feature x. The method of dynamic
163
+ timeseries unrolling is based on the research in [24], [25], and
164
+ [22]. While [25] and [24] use dynamic timeseries unrolling for
165
+ both input and target features of a model, allowing the creation
166
+ of auto-recursive models, this implementation only supports
167
+ dynamic timeseries unrolling for the input features, similar
168
+ to the method used in [22]. In this implementation, dynamic
169
+ timeseries unrolling is implemented through filter operations
170
+ from the scipy.signal library. The dynamic features are created
171
+ through the convolution of the signal x with a Kronecker delta
172
+ for i = 1...N:
173
+ xdyn,i[n] = x[n] ∗ δ[n − i]
174
+ (4)
175
+ This operation creates delayed signals xdyn,1, ..., xdyn,N. In
176
+ our implementation, for the samples in the delayed signals,
177
+ for which no values are available, zero values are used.
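+ A minimal sketch of this unrolling (equation (4)) with numpy convolution and zero padding; it illustrates the operation, not the framework's implementation:
+
+ import numpy as np
+
+ def unroll(x, n_lags):
+     """Create delayed copies x[n-1], ..., x[n-N] by convolving with shifted unit impulses."""
+     x = np.asarray(x, dtype=float)
+     lagged = []
+     for i in range(1, n_lags + 1):
+         delta = np.zeros(i + 1)
+         delta[i] = 1.0                                   # Kronecker delta delayed by i samples
+         lagged.append(np.convolve(x, delta)[: len(x)])   # leading samples are zero-padded
+     return np.column_stack(lagged)
+
+ X_dyn = unroll([1.0, 2.0, 3.0, 4.0], n_lags=2)           # columns: x[n-1], x[n-2]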
178
+ B. Feature Selection
179
+ In the framework, several threshold-based feature selection
180
+ methods are implemented. These methods analyze the input
181
+ and target features based on a certain criterion, and then
182
+ discard features with a low value of the criterion. A widely
183
+ used criterion is the Pearson Correlation Coefficient, which
184
+ is used to detect linear correlations between features [18].
185
+ The Pearson Correlation Coefficient calculates the correlation
186
+ between two features for samples x0,....,N, y0,...,N with mean
187
+ values ¯x and ¯y:
188
+ (5)    r_{x,y} = Σ_{i=0}^{N} (x_i − x̄)(y_i − ȳ)  /  sqrt( Σ_{i=0}^{N} (x_i − x̄)² · Σ_{i=0}^{N} (y_i − ȳ)² )
195
+ While the Pearson correlation identifies linear correlations,
196
+ non-linear dependencies are not detected. To detect non-linear
197
+ dependencies, criteria such as Maximum Information Coeffi-
198
+ cient (MIC) [26], ennemi [27], dCor [28] or the Randomized
199
+ Dependence Coefficient (RDC) [29] can be used.
200
+ The framework provides classes for the criteria Pearson Corre-
201
+ lation Coefficient, F-statistic based on the Pearson Correlation
202
+ Coefficient, as well as thresholds based on the MIC, ennemi
203
+ and RDC.
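+ A sketch of threshold-based selection with the Pearson criterion (equation (5)), using numpy; the threshold value is an assumption, and the framework's own selector classes follow the same idea through the SelectorMixin interface:
+
+ import numpy as np
+
+ def select_by_pearson(X, y, threshold=0.3):
+     """Keep the columns of X whose absolute Pearson correlation with y exceeds the threshold."""
+     X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
+     r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
+     mask = np.abs(r) > threshold
+     return X[:, mask], mask
+
+ rng = np.random.default_rng(1)
+ X = rng.normal(size=(200, 3))
+ y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)   # only the first feature is informative
+ X_sel, kept = select_by_pearson(X, y)            # kept is approximately [True, False, False]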
204
+ C. Transformation and Filtering Operations
205
+ To transform features, the framework implements the Box-
206
+ cox transformation as well as the square root and inverse
207
+ transformation. In addition, the framework provides filtering
208
+ operations, which were applied in timeseries prediction for
209
+ instance in [7]. Discrete-time based filters can be implemented
210
+ in Python through the functions implemented in scipy.signal.
211
+ The scipy.signal library offers functions for calculating the
212
+ coefficients for different types of digital filters. A digital filter
213
+
214
+ of order N can be defined through the transfer function H(z)
215
+ in a direct form:
216
+ (6)    H(z) = ( Σ_{i=0}^{N} b_i z^i ) / ( Σ_{i=0}^{N} a_i z^i )
222
+ The filter coefficients ai and bi define the behavior of the
223
+ filter. The scipy.signal library offers functions to compute the
224
+ filter coefficients for filter types such as the Butterworth or
225
+ Chebyshev filter [30]. While scipy.signal offers the compu-
226
+ tation of analog and digital filter coefficients, the framework
227
+ implementation focuses on digital filter implementations. The
228
+ framework implements the Butterworth and Chebyshev fil-
229
+ ter as scikit-learn TransformerMixin classes. In addition, an
230
+ envelope detection filter was implemented for demodulation
231
+ of modulated signals. This filter was implemented using the
232
+ pandas rolling average function. For all filters, offset com-
233
+ pensation before and after applying the filter operation and a
234
+ mask for handling NaN values were implemented. The direct
235
+ form filter classes of the framework offer a simple option for
236
+ extension. Different architectures can be implemented by re-
237
+ defining the implemented method for coefficient calculation.
238
+ This allows to create filters with different Finite Impulse
239
+ Response (FIR) or Infinite Impulse Response (IIR) structures.
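+ The filtering step can be sketched as a small scikit-learn transformer built on scipy.signal (a minimal illustration, not the framework's Butterworth class; the filter order and cutoff are placeholder values):
+
+ import numpy as np
+ from scipy import signal
+ from sklearn.base import BaseEstimator, TransformerMixin
+
+ class ButterworthLowpass(BaseEstimator, TransformerMixin):
+     def __init__(self, order=2, cutoff=0.1):
+         self.order = order
+         self.cutoff = cutoff              # normalized cutoff frequency (Nyquist = 1)
+
+     def fit(self, X, y=None):
+         self.b_, self.a_ = signal.butter(self.order, self.cutoff)
+         return self
+
+     def transform(self, X):
+         X = np.asarray(X, dtype=float)
+         offset = X.mean(axis=0)           # simple offset compensation around the filter
+         return signal.lfilter(self.b_, self.a_, X - offset, axis=0) + offset
+
+ X_filtered = ButterworthLowpass().fit_transform(np.random.default_rng(2).normal(size=(500, 2)))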
240
+ D. Composite Transformers
241
+ In feature engineering, it is often the case that only a se-
242
+ lected subset of features should be transformed. To offer the
243
+ possibility to transform only selected features, a composite
244
+ transformer wrapper was implemented. This wrapper offers to
245
+ either automatically replace features through their transformed
246
+ versions, or add transformed features separately to the dataset.
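+ For comparison, plain scikit-learn offers a related mechanism through ColumnTransformer, which applies a transformer to selected columns only (a sketch with illustrative column names; the framework's composite wrapper additionally allows replacing or appending the transformed features):
+
+ import pandas as pd
+ from sklearn.compose import ColumnTransformer
+ from sklearn.preprocessing import StandardScaler
+
+ df = pd.DataFrame({"temperature": [10.0, 12.0, 9.0], "holiday": [0, 1, 0]})   # hypothetical data
+ ct = ColumnTransformer(
+     [("scale_temp", StandardScaler(), ["temperature"])],
+     remainder="passthrough",              # leave the other features untouched
+ )
+ X_out = ct.fit_transform(df)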
247
+ E. Implementation
248
+ The framework offers compatibility with the sklearn.Pipeline implementation, making it possible to use objects as part of a ML pipeline. The parameters of each object
258
+ can be adapted through grid search, for instance using
259
+ sklearn.model_selection.GridSearchCV. In addition, every cre-
260
+ ated object can be stored to and loaded from a Pickle file using
261
+ the save_pkl or load_pkl method.
262
+ While the filtering, feature expansion and feature cre-
263
+ ation methods support operations on a numpy.ndarray or
264
+ pd.Dataframe or pd.Series object, the feature creation methods
265
+ require a pd.Dataframe or pd.Series object with a DateTimeIn-
266
+ dex or TimedeltaIndex to create samples based on a certain
267
+ date.
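+ A sketch of how such objects can be combined, tuned and persisted with standard scikit-learn and pickle (generic code, not the framework's save_pkl/load_pkl helpers):
+
+ import pickle
+ from sklearn.linear_model import LinearRegression
+ from sklearn.model_selection import GridSearchCV
+ from sklearn.pipeline import Pipeline
+ from sklearn.preprocessing import PolynomialFeatures
+
+ pipe = Pipeline([("expand", PolynomialFeatures()), ("reg", LinearRegression())])
+ search = GridSearchCV(pipe, {"expand__degree": [1, 2, 3]}, cv=5)
+ # search.fit(X_train, y_train)            # assuming training data is available
+
+ with open("model.pkl", "wb") as f:         # persist the search object (typically after fitting)
+     pickle.dump(search, f)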
268
+ III. CASE STUDY
269
+ The framework is demonstrated on a use case from prediction
270
+ for energy systems modeling. For this purpose, a mixed office-
271
+ campus building is selected. A prediction model should be
272
+ trained based on existing measurement data. The data-driven
273
+ model is created using a workflow based on the implemented
274
+ methods.
275
+ A. Application
276
+ In this case study, the energy demand of a mixed office-campus
277
+ building should be evaluated. The data was provided from the
278
+ research in [24]. The energy demand of a building is subject
279
+ to various factors. Main factors that influence building energy
280
+ demand are thermal characteristics and Heating, Ventilation,
281
+ Air Conditioning and Cooling (HVAC) system behavior [31].
282
+ Additionally, building energy demand may be dependent on
283
+ occupancy [3] or subject to seasonal trends [10]. Many of these
284
+ factors show non-linear behavior, which makes it difficult to
285
+ address them through a purely linear model. Therefore, feature
286
+ engineering was used to model additional factors.
287
+ B. Data-driven Model
288
+ For the selected application, a data-driven model of the build-
289
+ ing energy demand should be created. To demonstrate the
290
+ effect of feature engineering, two models were trained based
291
+ on the existing measurement data: a basic regression model
292
+ and a regression model with engineered features.
293
+ Measurement Data The energy demand was measured during
294
+ a period from 05/2019 to 03/2020, with a sampling time of
295
+ 1h [24]. The measurement data includes features based on
296
+ weather data, such as temperature, as well as occupancy data,
297
+ such as registrations. The rest of the features are time-based,
298
+ such as daytime or weekday.
299
+ TABLE I
+ FEATURE SET FOR ENERGY CONSUMPTION PREDICTION
+ Feature Name     Unit   Description
+ temperature      °C     Outdoor Temperature
+ daytime          h      Daytime
+ weekday          d      Weekday from 0 to 6
+ holiday          -      Public holiday
+ daylight         -      Day or night
+ registrations    -      Registrations for lectures
+ Consumption      kWh    Energy Consumption
322
+ Model Architecture For the energy demand, a linear regression
323
+ model should be trained. The linear regression architecture was
324
+ selected due to its simplicity and comprehensibility as a white-
325
+ box ML model. Non-linear behavior of the underlying system
326
+ should be incorporated through feature engineering.
327
+ Feature Engineering To model the non-linear behavior of
328
+ the energy demand, categorical features and cyclical features
329
+ were used in combination with Butterworth Filtering, dynamic
330
+ timeseries unrolling and feature selection through the Pearson
331
+ Correlation Coefficient. An overview of the implemented
332
+ workflow is depicted in Figure 1.
333
+ Training Parameters For the model training, a train-test split
334
+ of 0.8 was selected together with a 5-fold cross-validation.
335
+ For the model with engineered features, the parameters for
336
+ the steps timeseries unrolling and feature selection were deter-
337
+ mined through a grid search based on the metrics Coefficient
338
+ of Determination (R2), mean squared error (MSE) and Mean
339
+ Absolute Percentage Error (MAPE).
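+ The workflow of Figure 1 can be sketched with generic scikit-learn building blocks as follows (feature names and the selection step are illustrative stand-ins; the study itself uses the framework's own cyclic, categorical, Butterworth and unrolling transformers):
+
+ import numpy as np
+ import pandas as pd
+ from sklearn.feature_selection import SelectKBest, f_regression
+ from sklearn.linear_model import LinearRegression
+ from sklearn.model_selection import GridSearchCV, train_test_split
+ from sklearn.pipeline import Pipeline
+
+ # df is assumed to hold the measured features of Table I with a DatetimeIndex.
+ def build_extended_features(df):
+     X = pd.get_dummies(df["weekday"], prefix="weekday")          # categorical encoding
+     X["day_sin"] = np.sin(2 * np.pi * df.index.hour / 24)        # cyclic daytime feature
+     X["day_cos"] = np.cos(2 * np.pi * df.index.hour / 24)
+     X["temperature"] = df["temperature"]
+     X["temp_lag1"] = df["temperature"].shift(1, fill_value=0)    # simple unrolling step
+     return X
+
+ # X, y = build_extended_features(df), df["Consumption"]
+ # X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, shuffle=False)
+ pipe = Pipeline([("select", SelectKBest(f_regression)), ("reg", LinearRegression())])
+ search = GridSearchCV(pipe, {"select__k": [5, 10, "all"]}, cv=5, scoring="r2")
+ # search.fit(X_train, y_train)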
340
+ [Figure 1: block diagram of the workflow — the basic feature set is expanded to an extended feature set and then reduced to the engineered feature set by feature selection via the Pearson correlation.]
+ Fig. 1. Implemented Workflow.
350
+ C. Experimental Results
351
+ The two models were trained on the measurement data and
352
+ compared in terms of performance metrics. Additionally, anal-
353
+ yses of the predicted values through timeseries analysis and
354
+ prediction error plots were performed.
355
+ Performance Metrics To evaluate the performance of the
356
+ model, the metrics R2, Coefficient of Variation of the Root
357
+ Mean Square Error (CV-RMSE) and MAPE were used [21].
358
+ Table II gives an overview of the metrics.
359
+ TABLE II
+ PERFORMANCE METRICS
+ Model                  R2      CV-RMSE   MAPE
+ Basic Regression       0.548   0.267     22.764 %
+ Engineered Features    0.638   0.201     17.493 %
373
+ From the performance metrics, an improvement in prediction
374
+ accuracy for the linear regression model through the engi-
375
+ neered features could be observed.
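+ For reference, the three reported metrics can be computed as follows (a small helper assuming arrays y_true and y_pred of measured and predicted consumption):
+
+ import numpy as np
+ from sklearn.metrics import mean_absolute_percentage_error, mean_squared_error, r2_score
+
+ def report_metrics(y_true, y_pred):
+     rmse = np.sqrt(mean_squared_error(y_true, y_pred))
+     return {
+         "R2": r2_score(y_true, y_pred),
+         "CV-RMSE": rmse / np.mean(y_true),      # RMSE normalized by the mean measurement
+         "MAPE": 100 * mean_absolute_percentage_error(y_true, y_pred),
+     }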
376
+ Timeseries Analysis The improvement in prediction accuracy
377
+ could also be observed from the timeseries analysis depicted
378
+ in Figure 2.
379
+ [Figure 2: measured consumption compared with the predictions of the basic regression and of the model with engineered features over 25 days (2020-01-05 to 2020-01-25) of the test set; axes: Time [Days] vs. Consumption [kWh].]
+ Fig. 2. Timeseries Analysis for period of 25 days from test set.
410
+ The timeseries analysis showed that the cyclic behavior of
411
+ the day-night changes in the energy demand could be more
412
+ accurately replicated by the model with engineered features.
413
+ Additionally, the prediction using engineered features shows a
414
+ higher accuracy in replicating low energy demand values than
415
+ the basic regression. This effect can be observed in Figure 3.
416
+ [Figure 3: measured consumption compared with the predictions of the basic regression and of the model with engineered features over five days (2020-01-15 to 2020-01-20) of the test set; axes: Time [Days] vs. Consumption [kWh].]
+ Fig. 3. Timeseries Analysis for period of five days from test set.
431
+ For both models, the residual error was analyzed through
432
+ prediction error plots (Figure 4). The prediction error plots
433
+ show that the residual error is decreased for the model with
434
+ engineered features. In addition, the homogeneity of the error
435
+ distribution is improved through the applied feature engineer-
436
+ ing methods.
437
+ [Figure 4: prediction error plots (predicted vs. true consumption in kWh) for the basic regression and for the model with engineered features, each compared against the optimal prediction line.]
+ Fig. 4. Prediction Error Plots for Energy Consumption
476
+ Since performance metrics, timeseries analysis and prediction
477
+ error plots show an improvement in accuracy, the feature engi-
478
+ neering steps are suggested to be beneficial for the prediction
479
+ model.
480
+ IV. RELATED WORK
481
+ In the creation of data-driven models in Python, many frame-
482
+ works have been implemented. One of the most well-known
483
+ Python ML frameworks is the scikit-learn framework, which
484
+ provides methods such as data preprocessing, feature engineer-
485
+ ing, clustering, and implementations of various ML models.
486
+ The scikit-learn framework offers interfaces which can be used
487
+ to implement additional methods. Due to the popularity of
488
+ scikit-learn, various frameworks extending scikit-learn have
489
+ been implemented. For instance, the imblearn framework [32]
490
+ focuses on extending scikit-learn’s functionality to processing
491
+ imbalanced datasets. In addition, the imblearn framework
492
+ offers different resampling methods. The mlxtend framework
493
+ [33] offers feature extraction methods such as PCA, or fea-
494
+ ture selection methods such as sequential feature selection.
495
+ Additionally, different evaluation and utility functions are
496
+ implemented. In contrast, libraries such as statsmodels [34]
497
+ provide their own interface for their regression models. The
498
+
499
+ statsmodels framework provides models based on stochastic
500
+ and statistical methods, such as the Weighted Least Squares
501
+ (WLS). In the area of feature engineering, different Python
502
+ packages have been created. The feature-engine [35] library
503
+ contains a large collection of feature engineering methods,
504
+ which are implemented based on scikit-learn. The featuretools
505
+ framework [36] allows the synthesis of features from relational
506
+ databases. It offers functionality for feature encoding, as well as
+ different transformations and aggregate functions, including
+ coordinate transformations.
510
+ V. CONCLUSION
511
+ This paper presents a Python framework for feature engineer-
512
+ ing that provides different methods through a standardized
513
+ interface. The framework is based on the scikit-learn package
514
+ and offers different methods. The framework offers classic
515
+ feature engineering methods such as feature expansion, as well
+ as feature creation, feature selection or transformation and
517
+ filter operations. The framework is implemented as a Python
518
+ package and can be included in different projects. Through
519
+ the specifically defined interfaces of the framework, additional
520
+ methods can be added with low effort. Finally, we demonstrate
521
+ the framework on a case study of energy demand prediction,
522
+ using a workflow created from a subset of the implemented
523
+ methods for data-driven model creation.
524
+ A. Future Work
525
+ The current version of the framework gives many options
526
+ for extensions. For instance, additional feature engineering
527
+ methods can be added using the provided interfaces of the
528
+ framework. In addition, combinations of the implemented
529
+ feature engineering methods can be used for prediction in
530
+ different use cases.
531
+ REFERENCES
532
+ [1] A. Mosavi, M. Salimi, S. F. Ardabili, T. Rabczuk, S. Shamshirband,
533
+ and A. Varkonyi-Koczy, “State of the art of machine learning models in
534
+ energy systems, a systematic review,” Energies, vol. 12, no. 7, p. 1301,
535
+ Apr. 2019. [Online]. Available: https://doi.org/10.3390/en12071301
536
+ [2] K. Arendt, M. Jradi, H. R. Shaker, and C. T. Veje, “Comparative Analysis
537
+ of white-, gray- and black-box models for thermal simulation of indoor
538
+ environment: Teaching Building Case Study,” in 2018 Building Perfor-
539
+ mance Modeling Conference and SimBuild Co-Organized by ASHRAE
540
+ and IBPSA-USA Chicago, 2018, p. 8.
541
+ [3] A. Ghofrani, S. D. Nazemi, and M. A. Jafari, “Prediction of building
542
+ indoor temperature response in variable air volume systems,” Journal of
543
+ Building Performance Simulation, vol. 13, no. 1, pp. 34–47, Jan. 2020.
544
+ [4] C. Rudin, “Stop explaining black box machine learning models for high
545
+ stakes decisions and use interpretable models instead,” Nature Machine
546
+ Intelligence, vol. 1, no. 5, pp. 206–215, May 2019.
547
+ [5] O. Loyola-Gonzalez, “Black-box vs. white-box: Understanding their
548
+ advantages and weaknesses from a practical point of view,” IEEE access
549
+ : practical innovations, open solutions, vol. 7, pp. 154 096–154 113,
550
+ 2019.
551
+ [6] M. Kuhn and K. Johnson, Feature Engineering and Selection: A Prac-
552
+ tical Approach for Predictive Models.
553
+ CRC Press, Jul. 2019.
554
+ [7] V. Gómez, “The Use of Butterworth Filters for Trend and Cycle
555
+ Estimation in Economic Time Series,” Journal of Business & Economic
556
+ Statistics, vol. 19, no. 3, pp. 365–373, Jul. 2001.
557
+ [8] X. Cheng, B. Khomtchouk, N. Matloff, and P. Mohanty, “Polynomial
558
+ Regression As an Alternative to Neural Nets,” arXiv:1806.06850 [cs,
559
+ stat], Apr. 2019.
560
+ [9] H. Peng, F. Long, and C. Ding, “Feature selection based on mutual
561
+ information: Criteria of Max-Dependency, Max-Relevance, and Min-
562
+ Redundancy,” IEEE Transactions on Pattern Analysis and Machine
563
+ Intelligence, vol. 27, no. 8, pp. 1226–1238, 2005.
564
+ [10] G. Zhang, C. Tian, C. Li, J. J. Zhang, and W. Zuo, “Accurate forecasting
565
+ of building energy consumption via a novel ensembled deep learning
566
+ method considering the cyclic feature,” Energy, vol. 201, p. 117531,
567
+ Jun. 2020.
568
+ [11] J. T. Hancock and T. M. Khoshgoftaar, “Survey on categorical data for
569
+ neural networks,” Journal of Big Data, vol. 7, no. 1, p. 28, Dec. 2020.
570
+ [12] P. H. C. Eilers and B. D. Marx, “Flexible smoothing with B-splines and
571
+ penalties,” Statistical Science, vol. 11, no. 2, May 1996.
572
+ [13] A. J. Rothman, E. Levina, and J. Zhu, “Sparse Multivariate Regression
573
+ With Covariance Estimation,” Journal of Computational and Graphical
574
+ Statistics, vol. 19, no. 4, pp. 947–962, Jan. 2010.
575
+ [14] D. O’Driscoll and D. Ramirez, “Mitigating collinearity in linear re-
576
+ gression models using ridge, surrogate and raised estimators,” Cogent
577
+ Mathematics, vol. 3, p. 1144697, Jan. 2016.
578
+ [15] V. Gupta and M. Mittal, “Respiratory signal analysis using PCA, FFT
579
+ and ARTFA,” in 2016 International Conference on Electrical Power and
580
+ Energy Systems (ICEPES), Dec. 2016, pp. 221–225.
581
+ [16] J. Cai, J. Luo, S. Wang, and S. Yang, “Feature selection in machine
582
+ learning: A new perspective,” Neurocomputing, vol. 300, pp. 70–79,
583
+ Jul. 2018.
584
+ [17] I. Guyon and A. Elisseeff, “An Introduction to Variable and Feature
585
+ Selection,” Journal of machine learning research, vol. 3, no. Mar, pp.
586
+ 1157–1182, 2003.
587
+ [18] H. Chen and X. Chang, “Photovoltaic power prediction of LSTM model
588
+ based on Pearson feature selection,” Energy Reports, vol. 7, pp. 1047–
589
+ 1054, Nov. 2021.
590
+ [19] M. F. Akay, “Support vector machines combined with feature selection
591
+ for breast cancer diagnosis,” Expert Systems with Applications, vol. 36,
592
+ no. 2, pp. 3240–3247, Mar. 2009.
593
+ [20] A. Zheng and A. Casari, Feature Engineering for Machine Learning:
594
+ Principles and Techniques for Data Scientists, 1st ed.
595
+ O’Reilly Media,
596
+ Inc., 2018.
597
+ [21] S. Wilfling, M. Ebrahimi, Q. Alfalouji, G. Schweiger, and M. Basirat,
598
+ “Learning non-linear white-box predictors: A use case in energy sys-
599
+ tems,” in 21st IEEE International Conference on Machine Learning and
600
+ Applications.
601
+ IEEE, 2022.
602
+ [22] M. Dogliani, N. Nord, Á. Doblas, I. Calixto, S. Wilfling, Q. Alfalouji,
603
+ and G. Schweiger, “Machine Learning for Building Energy Prediction:
604
+ A Case Study of an Office Building,” p. 8.
605
+ [23] T. Schranz, G. Schweiger, S. Pabst, and F. Wotawa, “Machine Learning
606
+ for Water Supply Supervision,” in Trends in Artificial Intelligence Theory
607
+ and Applications. Artificial Intelligence Practices, H. Fujita, P. Fournier-
608
+ Viger, M. Ali, and J. Sasaki, Eds.
609
+ Cham: Springer International
610
+ Publishing, 2020, vol. 12144, pp. 238–249.
611
+ [24] T. Schranz, J. Exenberger, C. Legaard, J. Drgona, and G. Schweiger, “En-
612
+ ergy Prediction under Changed Demand Conditions: Robust Machine
613
+ Learning Models and Input Feature Combinations,” in 17th Interna-
614
+ tional Conference of the International Building Performance Simulation
615
+ Association (Building Simulation 2021), 2021.
616
+ [25] B. Falay, S. Wilfling, Q. Alfalouji, J. Exenberger, T. Schranz, C. M.
617
+ Legaard, I. Leusbrock, and G. Schweiger, “Coupling physical and ma-
618
+ chine learning models: Case study of a single-family house,” Modelica
619
+ Conferences, pp. 335–341, Sep. 2021.
620
+ [26] Y. A. Reshef, D. N. Reshef, H. K. Finucane, P. C. Sabeti, and M. Mitzen-
621
+ macher, “Measuring Dependence Powerfully and Equitably,” Journal of
622
+ Machine Learning Research, p. 63, 2016.
623
+ [27] P. Laarne, M. A. Zaidan, and T. Nieminen, “Ennemi: Non-linear
624
+ correlation detection with mutual information,” SoftwareX, vol. 14, p.
625
+ 100686, Jun. 2021.
626
+ [28] G. J. Székely, M. L. Rizzo, and N. K. Bakirov, “Measuring and testing
627
+ dependence by correlation of distances,” The Annals of Statistics, vol. 35,
628
+ no. 6, Dec. 2007.
629
+ [29] D. Lopez-Paz, P. Hennig, and B. Schölkopf, “The Randomized Depen-
630
+ dence Coefficient,” p. 9.
631
+ [30] M. Sandhu, S. Kaur, and J. Kaur, “A Study on Design and Implementa-
632
+ tion of Butterworth, Chebyshev and Elliptic Filter with MatLab,” vol. 4,
633
+ no. 6, p. 4, 2016.
634
+ [31] A. Maccarini, E. Prataviera, A. Zarrella, and A. Afshari, “Development
635
+ of a Modelica-based simplified building model for district energy
637
+ simulations,” Journal of Physics: Conference Series, vol. 2042, no. 1,
638
+ p. 012078, Nov. 2021.
639
+ [32] G. Lemaître, F. Nogueira, and C. K. Aridas, “Imbalanced-learn: A
640
+ python toolbox to tackle the curse of imbalanced datasets in machine
641
+ learning,” Journal of Machine Learning Research, vol. 18, no. 17, pp.
642
+ 1–5, 2017. [Online]. Available: http://jmlr.org/papers/v18/16-365
643
+ [33] S. Raschka, “Mlxtend: Providing machine learning and data science
644
+ utilities and extensions to python’s scientific computing stack,” The
645
+ Journal of Open Source Software, vol. 3, no. 24, Apr. 2018. [Online].
646
+ Available: http://joss.theoj.org/papers/10.21105/joss.00638
647
+ [34] S. Seabold and J. Perktold, “statsmodels: Econometric and statistical
648
+ modeling with python,” in 9th Python in Science Conference, 2010.
649
+ [35] S. Galli, “Feature-engine: A Python package for feature engineering for
650
+ machine learning,” Journal of Open Source Software, vol. 6, no. 65, p.
651
+ 3642, Sep. 2021.
652
+ [36] J. M. Kanter and K. Veeramachaneni, “Deep feature synthesis: Towards
653
+ automating data science endeavors,” in 2015 IEEE International Con-
654
+ ference on Data Science and Advanced Analytics, DSAA 2015, Paris,
655
+ France, October 19-21, 2015.
656
+ IEEE, 2015, pp. 1–10.
657
+
CNAzT4oBgHgl3EQfwP6m/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,501 @@
 
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf,len=500
2
+ page_content='Augmenting data-driven models for energy systems through feature engineering: A Python framework for feature engineering Sandra Wilfling Abstract—Data-driven modeling is an approach in energy systems modeling that has been gaining popularity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
3
+ page_content=' In data-driven modeling, machine learning methods such as linear regression, neural networks or decision-tree based methods are being applied.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
4
+ page_content=' While these methods do not require domain knowledge, they are sensitive to data quality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
5
+ page_content=' Therefore, improving data quality in a dataset is beneficial for creating machine learning-based models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
6
+ page_content=' The improvement of data quality can be implemented through preprocessing methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
7
+ page_content=' A selected type of preprocessing is feature engineering, which focuses on evaluating and improving the quality of certain features inside the dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
8
+ page_content=' Feature engineering methods include methods such as feature creation, feature ex- pansion, or feature selection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
9
+ page_content=' In this work, a Python framework containing different feature engineering methods is presented.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
10
+ page_content=' This framework contains different methods for feature creation, expansion and selection;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
11
+ page_content=' in addition, methods for transforming or filtering data are implemented.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
12
+ page_content=' The implementation of the framework is based on the Python library scikit-learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
13
+ page_content=' The framework is demonstrated on a case study of a use case from energy demand prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
14
+ page_content=' A data-driven model is created including selected feature engineering methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
15
+ page_content=' The results show an improvement in prediction accuracy through the engineered features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
16
+ page_content=' Keywords: Energy Systems Modeling, Data-driven Modeling, Feature Engineering, Python, Frameworks I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
17
+ page_content=' INTRODUCTION Modeling and simulation is a crucial step in the design and optimization of energy systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
18
+ page_content=' While traditional modeling methods rely on system parameters, a recent approach focuses on creating data-driven models based on measurement data from an underlying system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
19
+ page_content=' In data-driven modeling, models are not created based on system parameters, but on existing measurement data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
20
+ page_content=' These models are based on machine learning (ML) methods [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
21
+ page_content=' While the area of machine learning includes a wide range of methods such as clustering algorithms or classifiers, the focus in data-driven modeling is set to regression analysis for prediction and forecasting [2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
22
+ page_content=' In re- gression analysis, methods such as linear regression, decision- tree based regression, or neural networks are being applied [3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
23
+ page_content=' While some of these methods, such as linear regression, can be classified as white-box ML methods, others, such as neural networks, are classified as black-box ML methods due to their lack of comprehensibility [4].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
24
+ page_content=' While white-box ML methods give more insight about their internal structure than black box ML methods, their architecture is simpler, making it more difficult to model complex dependencies, for instance non-linearities [5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
25
+ page_content=' To capture such dependencies using white- box ML models, information about the dependencies can be passed to the model through the dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
26
+ page_content=' This step is called feature engineering.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
27
+ page_content=' The main purpose of feature engineering is to augment the existing dataset [6].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
28
+ page_content=' This can be done through adding new information, or expanding or reducing the existing feature set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
29
+ page_content=' In addition, the quality of a single feature can be improved, for instance through transformation or filtering [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
30
+ page_content=' The area of feature engineering covers a wide number of methods, such as feature expansion [8] or feature selection [9].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
31
+ page_content=' The term feature creation covers the creation of features to add new information.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
32
+ page_content=' Methods of feature creation include encodings of time-based features, such as cyclic features [10], or categorical encoding [11].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
33
+ page_content=' Similarly, feature expansion is the method of creating new features based on existing features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
34
+ page_content=' Feature expansion covers classical methods such as polynomial expansion [8] or spline interpolation [12].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
35
+ page_content=' In contrast to feature creation and expansion, feature selection aims to reduce the size of the feature set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
36
+ page_content=' While large feature sets may contain more information than smaller feature sets, there may be redundancy in the data, as well as sparsity [13] or multicollinearity [14].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
37
+ page_content=' To reduce the sparsity or multicollinear- ity, as well as to remove redundant features, feature selection mechanisms are applied.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
38
+ page_content=' While methods such as Principal Component Analysis (PCA) [15] aim to reduce the feature set through transformation, feature selection methods discard features based on certain criteria [16].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
39
+ page_content=' Feature selection can be implemented for instance through sequential methods, such as forward or backward selection [17], or through correlation criteria [9].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
40
+ page_content=' Correlation criteria include measures based on the Pearson Correlation Coefficient, as well as entropy-based criteria [16].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
41
+ page_content=' The feature selection is then implemented through a threshold-based selection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
42
+ page_content=' Threshold-based feature selection analyzes features based on the selected criterion, and discards features below a certain threshold.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
43
+ page_content=' Mainly, the methods of feature engineering are applied during the first steps of creating a data-driven model, creating an en- gineered dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
44
+ page_content=' This engineered dataset is then used to train the model [18].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
45
+ page_content=' However, feature engineering methods can also be used in combination with model selection procedures, such as grid search [19].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
46
+ page_content=' Feature engineering methods are widely used in applications from the energy domain, such as in prediction for building energy demand [20] or photovoltaic power prediction [18].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
47
+ page_content=' arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
48
+ page_content='01720v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
49
+ page_content='LG] 4 Jan 2023 A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
50
+ page_content=' Main Contribution In the creation of data-driven models, a significant factor is the quality of the underlying dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
51
+ page_content=' To improve the dataset quality, feature engineering methods can be applied.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
52
+ page_content=' The main contribution of this work is a Python framework for feature engineering that can be used for data-driven model creation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
53
+ page_content=' The framework implements different methods for feature creation, feature expansion, feature selection or trans- formation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
54
+ page_content=' The feature engineering framework is implemented in Python based on the scikit-learn framework and can be imported as a Python package.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
55
+ page_content=' The functionality of the frame- work is demonstrated on a case study of an energy demand prediction use case.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
56
+ page_content=' The results of the case study show an improvement in prediction accuracy through the applied feature engineering steps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
57
+ page_content=' II.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
58
+ page_content=' METHOD The presented framework implements various feature engi- neering methods in Python based on the research in [21] and on the interfaces defined by scikit-learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
59
+ page_content=' The methods are im- plemented using either scikit-learn’s TransformerMixin or SelectorMixin interface.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
60
+ page_content=' The framework implements meth- ods for feature expansion, feature creation, feature selection, as well as transformation and filtering operations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
61
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
62
+ page_content=' Feature Creation and Expansion In the framework, different methods for feature creation and expansion are implemented.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
63
+ page_content=' These methods create new fea- tures from time values or from expansion of existing features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
64
+ page_content=' To create new features, the implemented framework supports categorical encoding and cyclic encoding of time-based values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
65
+ page_content=' Cyclic Features Cyclic features can be used to model time values through cyclic functions [10].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
66
+ page_content=' Cyclic features were implemented in [21], as well as in [22] and [23].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
67
+ page_content=' In the implementation of the framework, sinusoidal signals x_sin, x_cos with a selected frequency f can be created based on a sample series n: x_sin[n] = sin(2π f n) (1), x_cos[n] = cos(2π f n) (2). The implementation offers the creation of features with a zero-order hold function for a certain time period, for instance T_S = 1 day for a signal with a time period of T = 1 week.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
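A minimal illustration of Eqs. (1) and (2) using plain numpy and pandas; the helper name make_cyclic_features and its period_hours argument are chosen for illustration and are not the framework's actual API.

    import numpy as np
    import pandas as pd

    def make_cyclic_features(index: pd.DatetimeIndex, period_hours: float = 24.0) -> pd.DataFrame:
        # Phase of each timestamp within the chosen period (here: one day).
        hours = index.hour + index.minute / 60.0
        phase = 2 * np.pi * hours / period_hours
        # Sine/cosine pair as in Eqs. (1) and (2); together they encode the cycle without a jump at midnight.
        return pd.DataFrame({"cyc_sin": np.sin(phase), "cyc_cos": np.cos(phase)}, index=index)

    idx = pd.date_range("2020-01-01", periods=168, freq="H")  # one week of hourly samples
    cyclic_features = make_cyclic_features(idx)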
68
+ page_content=' Categorical Features Categorical encoding creates a representation of discrete numerical values through a number of features with boolean values [11], [21].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
69
+ page_content=' In this implementation, for a number of categorical features x0,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
70
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
71
+ page_content='.,N for a feature x with discrete possible values v0,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
72
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
73
+ page_content='.,N, a single feature x_i is defined as: x_i = 1 if x = v_i, and 0 otherwise (3). The framework offers categorical encoding for time-based values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
74
+ page_content=' In addition, a division factor is implemented to create an encoding of a downsampled version of the time values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
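A minimal sketch of such a categorical time encoding with a division factor, using pandas get_dummies; encode_categorical_time, attribute and divisor are illustrative names, not the framework's interface.

    import pandas as pd

    def encode_categorical_time(index: pd.DatetimeIndex, attribute: str = "hour", divisor: int = 1) -> pd.DataFrame:
        # One boolean column per observed category (cf. Eq. (3)); the divisor downsamples the time value first.
        values = getattr(index, attribute) // divisor
        return pd.get_dummies(pd.Series(values, index=index), prefix=f"{attribute}_div{divisor}")

    idx = pd.date_range("2020-01-01", periods=48, freq="H")
    onehot = encode_categorical_time(idx, attribute="hour", divisor=4)  # six 4-hour bins per day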
75
+ page_content=' Feature Expansion For feature expansion, the framework implements wrappers for scikit-learn’s PolynomialFeatures and SplineTransformer classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
76
+ page_content=' The method of polynomial expansion was applied in [21].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
77
+ page_content=' The parameters for the expansion methods are passed through the wrapper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
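Because the expansion is delegated to scikit-learn, the effect of the wrapped classes can be shown with a direct call; the wrapper itself is not reproduced here.

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures

    X = np.random.rand(100, 2)                         # two original features
    expander = PolynomialFeatures(degree=2, include_bias=False)
    X_poly = expander.fit_transform(X)                 # adds x1^2, x2^2 and the interaction term x1*x2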
78
+ page_content=' Time-based Features The framework implements a method of dynamic timeseries unrolling to create features xn−1, xn−2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
79
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
80
+ page_content=' xn−N from an existing feature x.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
81
+ page_content=' The method of dynamic timeseries unrolling is based on the research in [24], [25], and [22].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
82
+ page_content=' While [25] and [24] use dynamic timeseries unrolling for both input and target features of a model, allowing the creation of auto-recursive models, this implementation only supports dynamic timeseries unrolling for the input features, similar to the method used in [22].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
83
+ page_content=' In this implementation, dynamic timeseries unrolling is implemented through filter operations from the scipy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
84
+ page_content='signal library.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
85
+ page_content=' The dynamic features are created through the convolution of the signal x with a Kronecker delta for i = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
86
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
87
+ page_content='N: xdyn,i[n] = x[n] ∗ δ[n − i] (4) This operation creates delayed signals xdyn,1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
88
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
89
+ page_content=', xdyn,N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
90
+ page_content=' In our implementation, for the samples in the delayed signals, for which no values are available, zero values are used.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
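A minimal sketch of this unrolling with scipy.signal.lfilter and a delayed Kronecker delta, mirroring Eq. (4); unroll_timeseries is an illustrative helper rather than the framework's own class, and the leading samples are zero-padded as described above.

    import numpy as np
    from scipy.signal import lfilter

    def unroll_timeseries(x: np.ndarray, n_lags: int) -> np.ndarray:
        # Column i holds x delayed by i samples; unavailable leading samples are filled with zeros.
        lagged = []
        for i in range(1, n_lags + 1):
            delta = np.zeros(i + 1)
            delta[i] = 1.0                             # Kronecker delta delayed by i samples
            lagged.append(lfilter(delta, [1.0], x))
        return np.column_stack(lagged)

    x = np.arange(1.0, 11.0)
    X_dyn = unroll_timeseries(x, n_lags=3)             # columns correspond to x[n-1], x[n-2], x[n-3]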
91
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
92
+ page_content=' Feature Selection In the framework, several threshold-based feature selection methods are implemented.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
93
+ page_content=' These methods analyze the input and target features based on a certain criterion, and then discard features with a low value of the criterion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
94
+ page_content=' A widely used criterion is the Pearson Correlation Coefficient, which is used to detect linear correlations between features [18].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
95
+ page_content=' The Pearson Correlation Coefficient calculates the correlation between two features for samples x0,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
96
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
97
+ page_content='.,N, y0,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
98
+ page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
99
+ page_content=',N with mean values x̄ and ȳ: r_x,y = Σ_{i=0..N} (x_i − x̄)(y_i − ȳ) / sqrt( Σ_{i=0..N} (x_i − x̄)² · Σ_{i=0..N} (y_i − ȳ)² ) (5) While the Pearson correlation identifies linear correlations, non-linear dependencies are not detected.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
100
+ page_content=' To detect non-linear dependencies, criteria such as Maximum Information Coefficient (MIC) [26], ennemi [27], dCor [28] or the Randomized Dependence Coefficient (RDC) [29] can be used.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
101
+ page_content=' The framework provides classes for the criteria Pearson Correlation Coefficient, F-statistic based on the Pearson Correlation Coefficient, as well as thresholds based on the MIC, ennemi and RDC.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
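A minimal sketch of threshold-based selection with the Pearson criterion of Eq. (5); select_by_pearson and its threshold argument are illustrative, and the selector classes for MIC, ennemi and RDC are not reproduced here.

    import numpy as np
    import pandas as pd

    def select_by_pearson(X: pd.DataFrame, y: pd.Series, threshold: float = 0.1) -> list:
        # Keep features whose absolute Pearson correlation with the target exceeds the threshold.
        r = X.apply(lambda col: col.corr(y))
        return r.index[r.abs() >= threshold].tolist()

    rng = np.random.default_rng(0)
    X = pd.DataFrame({"a": rng.normal(size=200), "b": rng.normal(size=200)})
    y = 2.0 * X["a"] + 0.1 * rng.normal(size=200)
    selected = select_by_pearson(X, y, threshold=0.3)  # typically ['a']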
102
+ page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
103
+ page_content=' Transformation and Filtering Operations To transform features, the framework implements the Box-Cox transformation as well as the square root and inverse transformation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
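The Box-Cox part can be sketched with scikit-learn's PowerTransformer (the framework's own transformer classes are not shown); note that Box-Cox requires strictly positive inputs.

    import numpy as np
    from sklearn.preprocessing import PowerTransformer

    X = np.random.exponential(scale=2.0, size=(100, 1)) + 1.0  # strictly positive feature
    boxcox = PowerTransformer(method="box-cox")
    X_transformed = boxcox.fit_transform(X)                    # approximately Gaussianized feature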
104
+ page_content=' In addition, the framework provides filtering operations, which were applied in timeseries prediction for instance in [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
105
+ page_content=' Discrete-time based filters can be implemented in Python through the functions implemented in scipy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
106
+ page_content='signal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
107
+ page_content=' The scipy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
108
+ page_content='signal library offers functions for calculating the coefficients for different types of digital filters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
109
+ page_content=' A digital filter of order N can be defined through the transfer function H(z) in a direct form: H(z) = ( Σ_{i=0..N} b_i z^i ) / ( Σ_{i=0..N} a_i z^i ) (6). The filter coefficients a_i and b_i define the behavior of the filter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
110
+ page_content=' The scipy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
111
+ page_content='signal library offers functions to compute the filter coefficients for filter types such as the Butterworth or Chebyshev filter [30].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
112
+ page_content=' While scipy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
113
+ page_content='signal offers the compu- tation of analog and digital filter coefficients, the framework implementation focuses on digital filter implementations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
114
+ page_content=' The framework implements the Butterworth and Chebyshev fil- ter as scikit-learn TransformerMixin classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
115
+ page_content=' In addition, an envelope detection filter was implemented for demodulation of modulated signals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
116
+ page_content=' This filter was implemented using the pandas rolling average function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
117
+ page_content=' For all filters, offset com- pensation before and after applying the filter operation and a mask for handling NaN values were implemented.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
118
+ page_content=' The direct form filter classes of the framework offer a simple option for extension.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
119
+ page_content=' Different architectures can be implemented by re- defining the implemented method for coefficient calculation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
120
+ page_content=' This allows the creation of filters with different Finite Impulse Response (FIR) or Infinite Impulse Response (IIR) structures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
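A minimal sketch of a low-pass Butterworth smoother in the TransformerMixin style described above, including the offset compensation; ButterworthSmoother is an illustrative class rather than the framework's implementation, and the NaN mask is omitted for brevity.

    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.base import BaseEstimator, TransformerMixin

    class ButterworthSmoother(BaseEstimator, TransformerMixin):
        def __init__(self, order: int = 2, cutoff: float = 0.1):
            self.order = order
            self.cutoff = cutoff                       # normalized cutoff frequency, 1.0 = Nyquist

        def fit(self, X, y=None):
            # Low-pass Butterworth coefficients in the direct form of Eq. (6).
            self.b_, self.a_ = butter(self.order, self.cutoff)
            return self

        def transform(self, X):
            X = np.asarray(X, dtype=float)
            offset = X.mean(axis=0)                    # offset compensation before filtering
            filtered = filtfilt(self.b_, self.a_, X - offset, axis=0)
            return filtered + offset

Calling ButterworthSmoother().fit_transform(X) on a column of measurements would return a smoothed copy with the mean restored.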
121
+ page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
122
+ page_content=' Composite Transformers In feature engineering, it is often the case that only a selected subset of features should be transformed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
123
+ page_content=' To offer the possibility to transform only selected features, a composite transformer wrapper was implemented.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
124
+ page_content=' This wrapper can either automatically replace features with their transformed versions, or add the transformed features separately to the dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
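The replace-selected-features mode can be sketched with stock scikit-learn, using a ColumnTransformer that transforms chosen columns and passes the rest through; the framework's own composite wrapper is not reproduced, and the column name 'registrations' is only an example.

    import numpy as np
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import FunctionTransformer

    # Square-root transform only for the 'registrations' column; all other columns pass through unchanged.
    selective = ColumnTransformer(
        transformers=[("sqrt_reg", FunctionTransformer(np.sqrt), ["registrations"])],
        remainder="passthrough",
    )

The add-separately mode would instead concatenate the transformed columns to the original frame.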
125
+ page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
126
+ page_content=' Implementation The framework offers compatibility with the sklearn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
127
+ page_content='Pipeline implementation, making it possible to use objects as part of a ML pipeline.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
128
+ page_content=' The parameters of each objects can be adapted through grid search, for instance using sklearn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
129
+ page_content='model_selection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
130
+ page_content='GridSearchCV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
131
+ page_content=' In addition, every created object can be stored to and loaded from a Pickle file using the save_pkl or load_pkl method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
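A minimal sketch of combining such objects in a scikit-learn pipeline and tuning them with GridSearchCV; the steps and parameter grid are illustrative, and persistence is indicated with the standard pickle module rather than the framework's save_pkl/load_pkl helpers.

    from sklearn.pipeline import Pipeline
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.model_selection import GridSearchCV

    pipe = Pipeline([
        ("expand", PolynomialFeatures(include_bias=False)),
        ("model", LinearRegression()),
    ])
    search = GridSearchCV(pipe, param_grid={"expand__degree": [1, 2, 3]}, cv=5, scoring="r2")
    # search.fit(X_train, y_train) would select the expansion degree by cross-validated R2;
    # the fitted object could then be persisted with pickle.dump.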
132
+ page_content=' While the filtering, feature expansion and feature cre- ation methods support operations on a numpy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
133
+ page_content='ndarray or pd.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
134
+ page_content='Dataframe or pd.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
135
+ page_content='Series object, the feature creation methods require a pd.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
136
+ page_content='Dataframe or pd.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
137
+ page_content='Series object with a DateTimeIn- dex or TimedeltaIndex to create samples based on a certain date.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
138
+ page_content=' III.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
139
+ page_content=' CASE STUDY The framework is demonstrated on a use case from prediction for energy systems modeling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
140
+ page_content=' For this purpose, a mixed office- campus building is selected.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
141
+ page_content=' A prediction model should be trained based on existing measurement data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
142
+ page_content=' The data-driven model is created using a workflow based on the implemented methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
143
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
144
+ page_content=' Application In this case study, the energy demand of a mixed office-campus building should be evaluated.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
145
+ page_content=' The data was provided from the research in [24].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
146
+ page_content=' The energy demand of a building is subject to various factors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
147
+ page_content=' Main factors that influence building energy demand are thermal characteristics and Heating, Ventilation, Air Conditioning and Cooling (HVAC) system behavior [31].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
148
+ page_content=' Additionally, building energy demand may be dependent on occupancy [3] or subject to seasonal trends [10].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
149
+ page_content=' Many of these factors show non-linear behavior, which makes it difficult to address them through a purely linear model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
150
+ page_content=' Therefore, feature engineering was used to model additional factors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
151
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
152
+ page_content=' Data-driven Model For the selected application, a data-driven model of the build- ing energy demand should be created.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
153
+ page_content=' To demonstrate the effect of feature engineering, two models were trained based on the existing measurement data: a basic regression model and a regression model with engineered features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
154
+ page_content=' Measurement Data The energy demand was measured during a period from 05/2019 to 03/2020, with a sampling time of 1h [24].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
155
+ page_content=' The measurement data includes features based on weather data, such as temperature, as well as occupancy data, such as registrations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
156
+ page_content=' The rest of the features are time-based, such as daytime or weekday.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
157
+ page_content=' TABLE I (feature set for energy consumption prediction): temperature [°C] = outdoor temperature; daytime [h]; weekday [d] = weekday from 0 to 6; holiday = public holiday; daylight = day or night; registrations = registrations for lectures; Consumption [kWh] = energy consumption. Model Architecture For the energy demand, a linear regression model should be trained.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
158
+ page_content=' The linear regression architecture was selected due to its simplicity and comprehensibility as a white- box ML model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
159
+ page_content=' Non-linear behavior of the underlying system should be incorporated through feature engineering.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
160
+ page_content=' Feature Engineering To model the non-linear behavior of the energy demand, categorical features and cyclical features were used in combination with Butterworth Filtering, dynamic timeseries unrolling and feature selection through the Pearson Correlation Coefficient.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
161
+ page_content=' An overview of the implemented workflow is depicted in Figure 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
162
+ page_content=' Training Parameters For the model training, a train-test split of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
163
+ page_content='8 was selected together with a 5-fold cross-validation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
164
+ page_content=' For the model with engineered features, the parameters for the steps timeseries unrolling and feature selection were deter- mined through a grid search based on the metrics Coefficient of Determination (R2), mean squared error (MSE) and Mean Absolute Percentage Error (MAPE).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
165
+ page_content=' Basic Feature Set Extended Feature Set Engineered Feature Set Feature Selection – Pearson Correlation Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
166
+ page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
167
+ page_content=' Implemented Workflow.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
168
+ page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
169
+ page_content=' Experimental Results The two models were trained on the measurement data and compared in terms of performance metrics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
170
+ page_content=' Additionally, anal- yses of the predicted values through timeseries analysis and prediction error plots were performed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
171
+ page_content=' Performance Metrics To evaluate the performance of the model, the metrics R2, Coefficient of Variation of the Root Mean Square Error (CV-RMSE) and MAPE were used [21].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
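The two scale-free error measures can be computed as below; these are the usual textbook definitions and may differ in detail from the exact formulas used in [21].

    import numpy as np

    def cv_rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        # Root mean square error normalized by the mean of the measurements.
        return float(np.sqrt(np.mean((y_true - y_pred) ** 2)) / np.mean(y_true))

    def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        # Mean absolute percentage error, in percent.
        return float(100.0 * np.mean(np.abs((y_true - y_pred) / y_true)))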
172
+ page_content=' Table II gives an overview of the metrics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
173
+ page_content=' TABLE II PERFORMANCE METRICS Model R2 CV-RMSE MAPE Basic Regression 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
174
+ page_content='548 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
175
+ page_content='267 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
176
+ page_content='764 % Engineered Features 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
177
+ page_content='638 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
178
+ page_content='201 17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
179
+ page_content='493% From the performance metrics, an improvement in prediction accuracy for the linear regression model through the engi- neered features could be observed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
180
+ page_content=' Timeseries Analysis The improvement in prediction accuracy could also be observed from the timeseries analysis depicted in Figure 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
181
+ page_content=' [Figure axes: Consumption [kWh] over Time [Days]; series: Measurement value, Basic Regression, Engineered Features.] Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
182
+ page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
183
+ page_content=' Timeseries Analysis for period of 25 days from test set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
184
+ page_content=' The timeseries analysis showed that the cyclic behavior of the day-night changes in the energy demand could be more accurately replicated by the model with engineered features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
185
+ page_content=' Additionally, the prediction using engineered features shows a higher accuracy in replicating low energy demand values than the basic regression.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
186
+ page_content=' This effect can be observed in Figure 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
187
+ page_content=' [Figure axes: Consumption [kWh] over Time [Days]; series: Measurement value, Basic Regression, Engineered Features.] Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
188
+ page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
189
+ page_content=' Timeseries Analysis for period of five days from test set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
190
+ page_content=' For both models, the residual error was analyzed through prediction error plots (Figure 4).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
191
+ page_content=' The prediction error plots show that the residual error is decreased for the model with engineered features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
192
+ page_content=' In addition, the homogeneity of the error distribution is improved through the applied feature engineering methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
193
+ page_content=' [Figure panels: Predicted Value [kWh] vs. True Value [kWh] for Basic Regression and for Engineered Features, each with an Optimal Prediction reference line.] Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
194
+ page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
195
+ page_content=' Prediction Error Plots for Energy Consumption Since performance metrics, timeseries analysis and prediction error plots show an improvement in accuracy, the feature engineering steps are suggested to be beneficial for the prediction model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
196
+ page_content=' IV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
197
+ page_content=' RELATED WORK In the creation of data-driven models in Python, many frame- works have been implemented.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
198
+ page_content=' One of the most well-known Python ML frameworks is the scikit-learn framework, which provides methods such as data preprocessing, feature engineer- ing, clustering, and implementations of various ML models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
199
+ page_content=' The scikit-learn framework offers interfaces which can be used to implement additional methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
200
+ page_content=' Due to the popularity of scikit-learn, various frameworks extending scikit-learn have been implemented.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
201
+ page_content=' For instance, the imblearn framework [32] focuses on extending scikit-learn’s functionality to processing imbalanced datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
202
+ page_content=' In addition, the imblearn framework offers different resampling methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
203
+ page_content=' The mlxtend framework [33] offers feature extraction methods such as PCA, or fea- ture selection methods such as sequential feature selection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
204
+ page_content=' Additionally, different evaluation and utility functions are implemented.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
205
+ page_content=' In contrast, libraries such as statsmodels [34] provide their own interface for their regression models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
206
+ page_content=' The statsmodels framework provides models based on stochastic and statistical methods, such as the Weighted Least Squares (WLS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
207
+ page_content=' In the area of feature engineering, different Python packages have been created.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
208
+ page_content=' The feature-engine [35] library contains a large collection of feature engineering methods, which are implemented based on scikit-learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
209
+ page_content=' The featuretools framework [36] allows the synthesis of features from relational databases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
210
+ page_content=' It offers functionality for feature encoding, as well as different transformations or aggregate functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
211
+ page_content=' Additionally, this framework offers transformations, feature encoding, aggregate functions, as well as coordinate transformations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
212
+ page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
213
+ page_content=' CONCLUSION This paper presents a Python framework for feature engineering that provides different methods through a standardized interface.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
214
+ page_content=' The framework is based on the scikit-learn package and offers different methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
215
+ page_content=' The framework offers classic feature engineering methods such as feature expansion, as well as feature creation, feature selection, and transformation and filter operations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
216
+ page_content=' The framework is implemented as a Python package and can be included in different projects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
217
+ page_content=' Through the specifically defined interfaces of the framework, additional methods can be added with low effort.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
218
+ page_content=' Finally, we demonstrate the framework on a case study of energy demand prediction, using a workflow created from a subset of the implemented methods for data-driven model creation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
219
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
220
+ page_content=' Future Work The current version of the framework gives many options for extensions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
221
+ page_content=' For instance, additional feature engineering methods can be added using the provided interfaces of the framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
222
+ page_content=' In addition, combinations of the implemented feature engineering methods can be used for prediction in different use cases.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
223
+ page_content=' REFERENCES [1] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
224
+ page_content=' Mosavi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
225
+ page_content=' Salimi, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
226
+ page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
227
+ page_content=' Ardabili, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
228
+ page_content=' Rabczuk, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
229
+ page_content=' Shamshirband, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
230
+ page_content=' Varkonyi-Koczy, “State of the art of machine learning models in energy systems, a systematic review,” Energies, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
231
+ page_content=' 12, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
232
+ page_content=' 7, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
233
+ page_content=' 1301, Apr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
234
+ page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
235
+ page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
236
+ page_content=' Available: https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
237
+ page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
238
+ page_content='3390/en12071301 [2] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
239
+ page_content=' Arendt, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
240
+ page_content=' Jradi, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
241
+ page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
242
+ page_content=' Shaker, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
243
+ page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
244
+ page_content=' Veje, “Comparative Analysis of white-, gray- and black-box models for thermal simulation of indoor environment: Teaching Building Case Study,” in 2018 Building Perfor- mance Modeling Conference and SimBuild Co-Organized by ASHRAE and IBPSA-USA Chicago, 2018, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
245
+ page_content=' 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
246
+ page_content=' [3] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
247
+ page_content=' Ghofrani, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
248
+ page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
249
+ page_content=' Nazemi, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
250
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
251
+ page_content=' Jafari, “Prediction of building indoor temperature response in variable air volume systems,” Journal of Building Performance Simulation, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
252
+ page_content=' 13, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
253
+ page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
254
+ page_content=' 34–47, Jan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
255
+ page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
256
+ page_content=' [4] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
257
+ page_content=' Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,” Nature Machine Intelligence, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
258
+ page_content=' 1, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
259
+ page_content=' 5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
260
+ page_content=' 206–215, May 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
261
+ page_content=' [5] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
262
+ page_content=' Loyola-Gonzalez, “Black-box vs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
263
+ page_content=' white-box: Understanding their advantages and weaknesses from a practical point of view,” IEEE access : practical innovations, open solutions, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
264
+ page_content=' 7, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
265
+ page_content=' 154 096–154 113, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
266
+ page_content=' [6] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
267
+ page_content=' Kuhn and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
268
+ page_content=' Johnson, Feature Engineering and Selection: A Prac- tical Approach for Predictive Models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
269
+ page_content=' CRC Press, Jul.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
270
+ page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
271
+ page_content=' [7] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
272
+ page_content=' Gómez, “The Use of Butterworth Filters for Trend and Cycle Estimation in Economic Time Series,” Journal of Business & Economic Statistics, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
273
+ page_content=' 19, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
274
+ page_content=' 3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
275
+ page_content=' 365–373, Jul.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
276
+ page_content=' 2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
277
+ page_content=' [8] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
278
+ page_content=' Cheng, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
279
+ page_content=' Khomtchouk, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
280
+ page_content=' Matloff, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
281
+ page_content=' Mohanty, “Polynomial Regression As an Alternative to Neural Nets,” arXiv:1806.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
282
+ page_content='06850 [cs, stat], Apr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
283
+ page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
284
+ page_content=' [9] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
285
+ page_content=' Peng, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
286
+ page_content=' Long, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
287
+ page_content=' Ding, “Feature selection based on mutual information: Criteria of Max-Dependency, Max-Relevance, and Min- Redundancy,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
288
+ page_content=' 27, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
289
+ page_content=' 8, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
290
+ page_content=' 1226–1238, 2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
291
+ page_content=' [10] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
292
+ page_content=' Zhang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
293
+ page_content=' Tian, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
294
+ page_content=' Li, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
295
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
296
+ page_content=' Zhang, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
297
+ page_content=' Zuo, “Accurate forecasting of building energy consumption via a novel ensembled deep learning method considering the cyclic feature,” Energy, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
298
+ page_content=' 201, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
299
+ page_content=' 117531, Jun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
300
+ page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
301
+ page_content=' [11] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
302
+ page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
303
+ page_content=' Hancock and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
304
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
305
+ page_content=' Khoshgoftaar, “Survey on categorical data for neural networks,” Journal of Big Data, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
306
+ page_content=' 7, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
307
+ page_content=' 1, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
308
+ page_content=' 28, Dec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
309
+ page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
310
+ page_content=' [12] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
311
+ page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
312
+ page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
313
+ page_content=' Eilers and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
314
+ page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
315
+ page_content=' Marx, “Flexible smoothing with B-splines and penalties,” Statistical Science, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
316
+ page_content=' 11, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
317
+ page_content=' 2, May 1996.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
318
+ page_content=' [13] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
319
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
320
+ page_content=' Rothman, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
321
+ page_content=' Levina, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
322
+ page_content=' Zhu, “Sparse Multivariate Regression With Covariance Estimation,” Journal of Computational and Graphical Statistics, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
323
+ page_content=' 19, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
324
+ page_content=' 4, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
325
+ page_content=' 947–962, Jan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
326
+ page_content=' 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
327
+ page_content=' [14] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
328
+ page_content=' O’Driscoll and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
329
+ page_content=' Ramirez, “Mitigating collinearity in linear re- gression models using ridge, surrogate and raised estimators,” Cogent Mathematics, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
330
+ page_content=' 3, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
331
+ page_content=' 1144697, Jan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
332
+ page_content=' 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
333
+ page_content=' [15] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
334
+ page_content=' Gupta and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
335
+ page_content=' Mittal, “Respiratory signal analysis using PCA, FFT and ARTFA,” in 2016 International Conference on Electrical Power and Energy Systems (ICEPES), Dec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
336
+ page_content=' 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
337
+ page_content=' 221–225.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
338
+ page_content=' [16] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
339
+ page_content=' Cai, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
340
+ page_content=' Luo, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
341
+ page_content=' Wang, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
342
+ page_content=' Yang, “Feature selection in machine learning: A new perspective,” Neurocomputing, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
343
+ page_content=' 300, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
344
+ page_content=' 70–79, Jul.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
345
+ page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
346
+ page_content=' [17] I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
347
+ page_content=' Guyon and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
348
+ page_content=' Elisseeff, “An Introduction to Variable and Feature Selection,” Journal of machine learning research, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
349
+ page_content=' 3, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
350
+ page_content=' Mar, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
351
+ page_content=' 1157–1182, 2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
352
+ page_content=' [18] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
353
+ page_content=' Chen and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
354
+ page_content=' Chang, “Photovoltaic power prediction of LSTM model based on Pearson feature selection,” Energy Reports, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
355
+ page_content=' 7, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
356
+ page_content=' 1047– 1054, Nov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
357
+ page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
358
+ page_content=' [19] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
359
+ page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
360
+ page_content=' Akay, “Support vector machines combined with feature selection for breast cancer diagnosis,” Expert Systems with Applications, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
361
+ page_content=' 36, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
362
+ page_content=' 2, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
363
+ page_content=' 3240–3247, Mar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
364
+ page_content=' 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
365
+ page_content=' [20] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
366
+ page_content=' Zheng and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
367
+ page_content=' Casari, Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists, 1st ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
368
+ page_content=' O’Reilly Media, Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
369
+ page_content=', 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
370
+ page_content=' [21] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
371
+ page_content=' Wilfling, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
372
+ page_content=' Ebrahimi, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
373
+ page_content=' Alfalouji, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
374
+ page_content=' Schweiger, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
375
+ page_content=' Basirat, “Learning non-linear white-box predictors: A use case in energy sys- tems,” in 21st IEEE International Conference on Machine Learning and Applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
376
+ page_content=' IEEE, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
377
+ page_content=' [22] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
378
+ page_content=' Dogliani, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
379
+ page_content=' Nord, Á.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
380
+ page_content=' Doblas, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
381
+ page_content=' Calixto, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
382
+ page_content=' Wilfling, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
383
+ page_content=' Alfalouji, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
384
+ page_content=' Schweiger, “Machine Learning for Building Energy Prediction: A Case Study of an Office Building,” p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
385
+ page_content=' 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
386
+ page_content=' [23] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
387
+ page_content=' Schranz, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
388
+ page_content=' Schweiger, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
389
+ page_content=' Pabst, and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
390
+ page_content=' Wotawa, “Machine Learning for Water Supply Supervision,” in Trends in Artificial Intelligence Theory and Applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
391
+ page_content=' Artificial Intelligence Practices, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
392
+ page_content=' Fujita, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
393
+ page_content=' Fournier- Viger, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
394
+ page_content=' Ali, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
395
+ page_content=' Sasaki, Eds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
396
+ page_content=' Cham: Springer International Publishing, 2020, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
397
+ page_content=' 12144, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
398
+ page_content=' 238–249.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
399
+ page_content=' [24] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
400
+ page_content=' Schranz, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
401
+ page_content=' Exenberger, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
402
+ page_content=' Legaard, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
403
+ page_content=' Drgona and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
404
+ page_content=' Schweiger, “En- ergy Prediction under Changed Demand Conditions: Robust Machine Learning Models and Input Feature Combinations,” in 17th Interna- tional Conference of the International Building Performance Simulation Association (Building Simulation 2021), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
405
+ page_content=' [25] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
406
+ page_content=' Falay, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
407
+ page_content=' Wilfling, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
408
+ page_content=' Alfalouji, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
409
+ page_content=' Exenberger, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
410
+ page_content=' Schranz, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
411
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
412
+ page_content=' Legaard, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
413
+ page_content=' Leusbrock, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
414
+ page_content=' Schweiger, “Coupling physical and ma- chine learning models: Case study of a single-family house,” Modelica Conferences, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
415
+ page_content=' 335–341, Sep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
416
+ page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
417
+ page_content=' [26] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
418
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
419
+ page_content=' Reshef, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
420
+ page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
421
+ page_content=' Reshef, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
422
+ page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
423
+ page_content=' Finucane, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
424
+ page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
425
+ page_content=' Sabeti, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
426
+ page_content=' Mitzen- macher, “Measuring Dependence Powerfully and Equitably,” Journal of Machine Learning Research, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
427
+ page_content=' 63, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
428
+ page_content=' [27] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
429
+ page_content=' Laarne, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
430
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
431
+ page_content=' Zaidan, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
432
+ page_content=' Nieminen, “Ennemi: Non-linear correlation detection with mutual information,” SoftwareX, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
433
+ page_content=' 14, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
434
+ page_content=' 100686, Jun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
435
+ page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
436
+ page_content=' [28] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
437
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
438
+ page_content=' Székely, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
439
+ page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
440
+ page_content=' Rizzo, and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
441
+ page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
442
+ page_content=' Bakirov, “Measuring and testing dependence by correlation of distances,” The Annals of Statistics, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
443
+ page_content=' 35, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
444
+ page_content=' 6, Dec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
445
+ page_content=' 2007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
446
+ page_content=' [29] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
447
+ page_content=' Lopez-Paz, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
448
+ page_content=' Hennig, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
449
+ page_content=' Schölkopf, “The Randomized Depen- dence Coefficient,” p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
450
+ page_content=' 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
451
+ page_content=' [30] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
452
+ page_content=' Sandhu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
453
+ page_content=' Kaur, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
454
+ page_content=' Kaur, “A Study on Design and Implementa- tion of Butterworth, Chebyshev and Elliptic Filter with MatLab,” vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
455
+ page_content=' 4, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
456
+ page_content=' 6, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
457
+ page_content=' 4, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
458
+ page_content=' [31] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
459
+ page_content=' Maccarini, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
460
+ page_content=' Prataviera, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
461
+ page_content=' Zarrella, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
462
+ page_content=' Afshari, “Development of a Modelica-based simplified building model for district energy simulations,” Journal of Physics: Conference Series, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
463
+ page_content=' 2042, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
464
+ page_content=' 1, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
465
+ page_content=' 012078, Nov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
466
+ page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
467
+ page_content=' [32] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
468
+ page_content=' Lemaître, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
469
+ page_content=' Nogueira, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
470
+ page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
471
+ page_content=' Aridas, “Imbalanced-learn: A python toolbox to tackle the curse of imbalanced datasets in machine learning,” Journal of Machine Learning Research, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
472
+ page_content=' 18, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
473
+ page_content=' 17, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
474
+ page_content=' 1–5, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
475
+ page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
476
+ page_content=' Available: http://jmlr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
477
+ page_content='org/papers/v18/16-365 [33] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
478
+ page_content=' Raschka, “Mlxtend: Providing machine learning and data science utilities and extensions to python’s scientific computing stack,” The Journal of Open Source Software, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
479
+ page_content=' 3, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
480
+ page_content=' 24, Apr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
481
+ page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
482
+ page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
483
+ page_content=' Available: http://joss.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
484
+ page_content='theoj.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
485
+ page_content='org/papers/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
486
+ page_content='21105/joss.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
487
+ page_content='00638 [34] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
488
+ page_content=' Seabold and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
489
+ page_content=' Perktold, “statsmodels: Econometric and statistical modeling with python,” in 9th Python in Science Conference, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
490
+ page_content=' [35] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
491
+ page_content=' Galli, “Feature-engine: A Python package for feature engineering for machine learning,” Journal of Open Source Software, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
492
+ page_content=' 6, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
493
+ page_content=' 65, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
494
+ page_content=' 3642, Sep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
495
+ page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
496
+ page_content=' [36] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
497
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
498
+ page_content=' Kanter and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
499
+ page_content=' Veeramachaneni, “Deep feature synthesis: Towards automating data science endeavors,” in 2015 IEEE International Con- ference on Data Science and Advanced Analytics, DSAA 2015, Paris, France, October 19-21, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
500
+ page_content=' IEEE, 2015, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
501
+ page_content=' 1–10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/CNAzT4oBgHgl3EQfwP6m/content/2301.01720v1.pdf'}
CdFJT4oBgHgl3EQfASzG/content/2301.11420v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0a95247a2a1200c7a860bb8ac0b0ef32461ad05e829c326fa4b1568651520484
3
+ size 617189
CdFJT4oBgHgl3EQfASzG/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ecb3f01de295af30a218cd03143c489f3c65627e69f6fc564d0a314a7f031e98
3
+ size 2752557
CtE2T4oBgHgl3EQfSAdG/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e0700446865f5cbe078132f2202857d6e99ab7c0a01d01fae008e296ddf09adc
3
+ size 3145773
CtE2T4oBgHgl3EQfSAdG/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ff72f9682ad429ed57132e8c275bd36f30650626523aad52e396d58febf19095
3
+ size 114865
D9FQT4oBgHgl3EQfQDZ4/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:747a1559457b81125d7489fd195b67910120a588bec927cbfb9c81c91b89c62d
3
+ size 10747949
DtFKT4oBgHgl3EQfZS4v/content/tmp_files/2301.11802v1.pdf.txt ADDED
@@ -0,0 +1,1239 @@
 
 
 
 
1
+ arXiv:2301.11802v1 [cs.LG] 27 Jan 2023
2
+ Decentralized Online Bandit Optimization on Directed Graphs with Regret
3
+ Bounds
4
+ Johan ¨Ostman 1 Ather Gattami 1 Daniel Gillblad 1
5
+ Abstract
6
+ We consider a decentralized multiplayer game,
7
+ played over T rounds, with a leader-follower hi-
8
+ erarchy described by a directed acyclic graph.
9
+ For each round, the graph structure dictates the
10
+ order of the players and how players observe
11
+ the actions of one another. By the end of each
12
+ round, all players receive a joint bandit-reward
13
+ based on their joint action that is used to update
14
+ the player strategies towards the goal of minimiz-
15
+ ing the joint pseudo-regret. We present a learn-
16
+ ing algorithm inspired by the single-player multi-
17
+ armed bandit problem and show that it achieves
18
+ sub-linear joint pseudo-regret in the number of
19
+ rounds for both adversarial and stochastic ban-
20
+ dit rewards. Furthermore, we quantify the cost
21
+ incurred due to the decentralized nature of our
22
+ problem compared to the centralized setting.
23
+ 1. Introduction
24
+ Decentralized multi-agent online learning concerns agents
25
+ that, simultaneously, learn to behave over time in order
26
+ to achieve their goals.
27
+ Compared to the single-agent
28
+ setup, novel challenges are present as agents may not
29
+ share the same objectives, the environment becomes non-
30
+ stationary, and information asymmetry may exist between
31
+ agents (Yang & Wang, 2020).
32
+ Traditionally, the multi-
33
+ agent problem has been addressed by either relying on
34
+ a central controller to coordinate the agents’ actions or
35
+ to let the agents learn independently.
36
+ However, access
37
+ to a central controller may not be realistic and indepen-
38
+ dent learning suffers from convergence issues (Zhang et al.,
39
+ 2019). To circumvent these issues, a common approach
40
+ is to drop the central coordinator and allow informa-
41
+ tion exchange between agents (Zhang et al., 2018; 2019;
42
+ Cesa-Bianchi et al., 2021).
43
+ Decision-making that involves multiple agents is often
44
+ 1AI Sweden, Gothenburg, Sweden. Correspondence to: Johan
45
+ ¨Ostman <johan.ostman@ai.se>.
46
+ modeled as a game and studied under the lens of game
47
+ theory to describe the learning outcomes.1
48
+ Herein, we
49
+ consider games with a leader-follower structure in which
50
+ players act consecutively. For two players, such games
51
+ are known as Stackelberg games (Hicks, 1935). Stackel-
52
+ berg games have been used to model diverse learning situ-
53
+ ations such as airport security (Balcan et al., 2015), poach-
54
+ ing (Sessa et al., 2020), tax planning (Zheng et al., 2020),
55
+ and generative adversarial networks (Moghadam et al.,
56
+ 2021).
57
+ In a Stackelberg game, one is typically con-
58
+ cerned with finding the Stackelberg equilibrium, some-
59
+ times called Stackelberg-Nash equilibrium, in which the
60
+ leader uses a mixed strategy and the follower is best-
61
+ responding. A Stackelberg equilibrium may be obtained by
62
+ solving a bi-level optimization problem if the reward func-
63
+ tions are known (Sch¨afer et al., 2020; Aussel & Svensson,
64
+ 2020) or, otherwise, it may be learnt via online learn-
65
+ ing techniques (Bai et al., 2021; Zhong et al., 2021), e.g.,
66
+ no-regret algorithms (Shalev-Shwartz, 2012; Deng et al.,
67
+ 2019; Goktas et al., 2022).
68
+ No-regret algorithms have emerged from the single-player
69
+ multi-armed bandit problem as a means to alleviate
70
+ the exploitation-exploration trade-off (Bubeck & Slivkins,
71
+ 2012). An algorithm is called no-regret if the difference be-
72
+ tween the cumulative rewards of the learnt strategy and the
73
+ single best action in hindsight is sublinear in the number
74
+ of rounds (Shalev-Shwartz, 2012). In the multi-armed ban-
75
+ dit problem, rewards may be adversarial (based on random-
76
+ ness and previous actions), oblivious adversarial (random),
77
+ or stochastic (independent and identically distributed) over
78
+ time (Auer et al., 2002). Different assumptions on the ban-
79
+ dit rewards yield different algorithms and regret bounds.
80
+ Indeed, algorithms tailored for one kind of rewards are
81
+ sub-optimal for others, e.g., the EXP3 algorithm due
82
+ to Auer et al. (2002) yields the optimal scaling for adversar-
83
+ ial rewards but not for stochastic rewards. For this reason,
84
+ best-of-two-worlds algorithms, able to optimally handle
85
+ both the stochastic and adversarial rewards, have recently
86
+ been pursued and resulted in algorithms with close to op-
87
+ timal performance in both settings (Auer & Chiang, 2016;
88
+ 1The convention is to use agents in learning applications and
89
+ players in game theoretic applications, we shall use the game-
90
+ theoretic nomenclature in the remainder of the paper.
91
+
92
+ Decentralized Online Bandit Optimization on Directed Graphs with Regret Bounds
93
+ Wei & Luo, 2018; Zimmert & Seldin, 2021). Extensions to
94
+ multiplayer multi-armed bandit problems have been pro-
95
+ posed in which players attempt to maximize the sum of
96
+ rewards by pulling an arm each, see, e.g., (Kalathil et al.,
97
+ 2014; Bubeck et al., 2021).
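As background on the single-player algorithms discussed above, the following is a compact sketch of the standard EXP3 update for adversarial bandit rewards; it is illustrative only and is not the multiplayer algorithm proposed in this paper.

```python
# Minimal EXP3 sketch for a K-armed adversarial bandit with rewards in [0, 1].
# Illustrative only; not the multiplayer algorithm developed in the paper.
import numpy as np

def exp3(reward_fn, K, T, gamma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    weights = np.ones(K)
    total_reward = 0.0
    for t in range(T):
        probs = (1 - gamma) * weights / weights.sum() + gamma / K
        arm = rng.choice(K, p=probs)
        reward = reward_fn(arm, t)        # bandit feedback for the pulled arm only
        total_reward += reward
        estimate = reward / probs[arm]    # importance-weighted reward estimate
        weights[arm] *= np.exp(gamma * estimate / K)
    return total_reward

# Example use: total = exp3(lambda arm, t: float(arm == 0), K=3, T=1000)
```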
98
+ No-regret algorithms are a common element also when an-
99
+ alyzing multiplayer games.
100
+ For example, in continuous
101
+ two-player Stackelberg games, the leader strategy, based
102
+ on a no-regret algorithm, converges to the Stackelberg equi-
103
+ librium if the follower is best-responding (Goktas et al.,
104
+ 2022). In contrast, if also the follower adopts a no-regret al-
105
+ gorithm, the regret dynamics is not guaranteed to converge
106
+ to a Stackelberg equilibrium point (Goktas et al., 2022,
107
+ Ex. 3.2).
108
+ In (Deng et al., 2019), it was shown for two-
109
+ player Stackelberg games that a follower playing a, so-
110
+ called, mean-based no-regret algorithm, enables the leader
111
+ to achieve a reward strictly larger than the reward achieved
112
+ at the Stackelberg equilibrium.
113
+ This result does, how-
114
+ ever, not generalize to n-player games as demonstrated
115
+ by D’Andrea (2022). Apart from studying the Stackelberg
116
+ equilibrium, several papers have analyzed the regret. For
117
+ example, Sessa et al. (2020) presented upper-bounds on the
118
+ regret of a leader, employing a no-regret algorithm, playing
119
+ against an adversarial follower with an unknown response
120
+ function. Furthermore, Stackelberg games with states were
121
+ introduced by Lauffer et al. (2022) along with an algorithm
122
+ that was shown to achieve no-regret.
123
+ As the follower in a Stackelberg game observes the leader’s
124
+ action, there is information exchange. A generalization
125
+ to multiple players has been studied in a series of pa-
126
+ pers (Cesa-Bianchi et al., 2016; 2020; 2021). In this line
127
+ of work, players with a common action space form an ar-
128
+ bitrary graph and are randomly activated in each round.
129
+ Active players share information with their neighbors by
130
+ broadcasting their observed loss, previously received neigh-
131
+ bor losses, and their current strategy. The goal of the play-
132
+ ers is to minimize the network regret, defined with respect
133
+ to the cumulative losses observed by active players over
134
+ the rounds. The players, however, update their strategies
135
+ according to their individually observed loss. Although we
136
+ consider players connected on a graph, our work differs
137
+ significantly from (Cesa-Bianchi et al., 2016; 2020; 2021),
138
+ e.g., we allow only actions to be observed between players
139
+ and the players update their strategies based on a common
140
+ bandit reward rather than an individual reward.
141
+ Contributions:
142
+ We introduce the joint pseudo-regret,
143
+ defined with respect to the cumulative reward where
144
+ all the players observe the same bandit-reward in each
145
+ round. We provide an online learning algorithm for general
146
+ consecutive-play games that relies on no-regret algorithms
147
+ developed for the single-player multi-armed bandit prob-
148
+ lem. The main novelty of our contribution resides in the
149
+ joint analysis of players with coupled rewards where we
150
+ derive upper bounds on the joint pseudo-regret and prove
151
+ our algorithm to be no-regret in the adversarial setting. Fur-
152
+ thermore, we quantify the penalty incurred by our decen-
153
+ tralized setting in relation to the centralized setting.
154
+ 2. Problem formulation
155
+ In this section, we formalize the consecutive structure of
156
+ the game and introduce the joint pseudo-regret that will
157
+ be used as a performance metric throughout. We consider
158
+ a decentralized setting where, in each round of the game,
159
+ players pick actions consecutively. The consecutive nature
160
+ of the game allows players to observe preceding players’
161
+ actions and may be modeled by a DAG. For example, in
162
+ Fig. 1, a seven-player game is illustrated in which player 1
163
+ initiates the game and her action is observed by players 2, 5,
164
+ and 6. The observations available to the remaining players
165
+ follow analogously. Note that for a two-player consecutive
166
+ game, the DAG models a Stackelberg game.
167
+ We let G = (V, E) denote a DAG where V denotes the ver-
168
+ tices and E denotes the edges. For our setting, V constitutes
169
+ the n different players and E = {(j, i) : j → i, j ∈ V, i ∈
170
+ V} describes the observation structure where j → i indi-
171
+ cates that player i observes the action of player j. Accord-
172
+ ingly, a given player i ∈ V observes the actions of its direct
173
+ parents, i.e., players j ∈ Ei = {k : (k, i) ∈ E}. Further-
174
+ more, each player i ∈ V is associated with a discrete action
175
+ space Ai of size Ai. We denote by πi(t), the mixed strat-
176
+ egy of player i over the action space Ai in round t ∈ [T ]
177
+ such that πi(t) = a with probability pi,a for a ∈ Ai. In
178
+ the special case when pi,a = 1 for some a ∈ Ai, the strat-
179
+ egy is referred to as pure. Let AB denote the joint action
180
+ space of players in a set B given by the Cartesian product
181
+ AB = Π_{i∈B} Ai. If a player i has no parents, i.e., Ei = ∅,
183
+ we use the convention |AEi| = 1.
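+ To make the observation structure concrete, the following minimal
+ Python sketch (ours, not part of the paper) stores a DAG via parent
+ sets Ei and derives a consecutive play order; the edge list is only
+ an assumed reading of Fig. 1.
+ from collections import defaultdict
+
+ edges = [(1, 2), (1, 5), (1, 6), (2, 3), (2, 4), (3, 4), (5, 7), (6, 7)]  # assumed
+ parents = defaultdict(set)                  # E_i = {j : (j, i) in E}
+ for j, i in edges:
+     parents[i].add(j)
+
+ def play_order(n_players):
+     """A player acts once all of its parents have acted (topological order)."""
+     order, done = [], set()
+     while len(order) < n_players:
+         for i in range(1, n_players + 1):
+             if i not in done and parents[i] <= done:
+                 order.append(i)
+                 done.add(i)
+     return order
+
+ print(play_order(7))                        # [1, 2, 3, 4, 5, 6, 7] for these edges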
184
+ We consider a collaborative setting with bandit rewards
185
+ given by a mapping rt : AV → [0, 1] in each round t ∈ [T ].
186
+ The bandit rewards are assumed to be adversarial. Let C de-
187
+ note a set of cliques in the DAG (Koller & Friedman, 2009,
188
+ Def. 2.13) and let Nk ∈ C for k ∈ [|C|] denote the players
189
+ in the kth clique in C with joint action space ANk such that
190
+ Nk ∩ Nj = ∅ for j ̸= k. For a joint action a(t) ∈ AV,
191
+ we consider bandit rewards given by a linear combination
192
+ of the clique-rewards as
193
+ rt(a(t)) = Σ_{k=1}^{|C|} βk r^k_t(P^k(a(t))),                        (1)
+ where r^k_t : ANk → [0, 1], βk ≥ 0 is the weight of the kth clique
+ reward such that Σ_{k=1}^{|C|} βk = 1, and P^k(a(t)) denotes the
+ joint action of the players in Nk.
214
216
+ Figure 1. A game with seven players.
224
+ Figure 2. Colored cliques comprising the bandit reward.
232
+ As an example, Fig. 2 highlights the cliques C = {{2, 3, 4}, {1, 5},
+ {6}, {7}} and we have, e.g., N1 = {2, 3, 4} and P^1(a(t)) =
+ (a2(t), a3(t), a4(t)). Note that each player influences only a single
+ term in the reward (1).
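+ To illustrate the structure of (1), a small Python sketch (ours) that
+ evaluates a clique-weighted bandit reward for the cliques of Fig. 2;
+ the per-clique reward functions and weights below are placeholders,
+ not quantities from the paper.
+ cliques = [(2, 3, 4), (1, 5), (6,), (7,)]   # C, as highlighted in Fig. 2
+ betas = [0.4, 0.3, 0.2, 0.1]                # placeholder weights summing to 1
+
+ def clique_reward(k, joint_action):
+     """Placeholder r^k_t: any map into [0, 1] over the clique's joint actions."""
+     return (hash((k,) + joint_action) % 1000) / 999.0
+
+ def bandit_reward(actions):
+     """actions: dict player -> action; returns r_t(a(t)) as in (1)."""
+     return sum(
+         beta * clique_reward(k, tuple(actions[i] for i in clique))
+         for k, (clique, beta) in enumerate(zip(cliques, betas))
+     )
+
+ print(bandit_reward({i: 0 for i in range(1, 8)}))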
236
+ In each round t ∈ [T ], the game proceeds as follows for
237
+ player i ∈ V:
238
+ 1) the player is idle until the actions of all parents in Ei
239
+ have been observed,
240
+ 2) the player picks an action ai(t) ∈ Ai according to its
241
+ strategy πi(t),
242
+ 3) once all the n players in V have chosen an action,
243
+ the player observes the bandit reward rt(a(t)) and up-
244
+ dates its strategy.
245
+ The goal of the game is to find policies {πi(t)}_{i=1}^n that depend
+ on past actions and rewards in order to minimize the joint
+ pseudo-regret R(T), which is defined similarly to the pseudo-regret
+ (Shalev-Shwartz, 2012, Ch. 4.2) as
+ R(T) = r(a⋆) − E[ Σ_{t=1}^T rt(a(t)) ],                              (2)
+ where
+ r(a⋆) = max_{a∈AV} E[ Σ_{t=1}^T rt(a) ],
267
+ and the expectations are taken with respect to the rewards
268
+ and the player actions.2 Note that r(a⋆) corresponds to the
269
+ largest expected reward obtainable if all players use pure
270
+ strategies. Hence, the pseudo-regret in (2) quantifies the
271
+ difference between the expected reward accumulated by the
272
+ learnt strategies and the reward-maximizing pure strategies
273
+ in hindsight.
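+ For intuition, the joint pseudo-regret has a natural empirical
+ counterpart; the sketch below (ours) estimates it from logged play,
+ assuming a helper reward_t(t, a) that evaluates the round-t reward at
+ any fixed joint action a.
+ import itertools
+
+ def empirical_pseudo_regret(reward_t, played, action_spaces):
+     """played: list of logged joint actions a(t); action_spaces: one
+     iterable of actions per player (their Cartesian product is A_V)."""
+     T = len(played)
+     earned = sum(reward_t(t, a) for t, a in enumerate(played))
+     best = max(                                  # best fixed joint action in hindsight
+         sum(reward_t(t, a) for t in range(T))
+         for a in itertools.product(*action_spaces)
+     )
+     return best - earned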
274
+ Our problem formulation pertains to a plethora of appli-
275
+ cations.
276
+ Examples include resource allocation in cog-
277
+ nitive radio networks where available frequencies are
278
+ obtained via channel sensing (Janatian et al., 2015) and
279
+ semi-autonomous vehicles with adaptive cruise control,
280
+ i.e., vehicles ahead are observed before an action is de-
281
+ cided (Marsden et al., 2001).
282
+ Also recently, the impor-
283
+ tance of coupled rewards and partner awareness through
284
+ implicit communications, e.g., by observation, has been
285
+ highlighted in human-robot and human-AI collaborative
286
+ settings (Bıyık et al., 2022). Furthermore, our formulation
287
+ is applicable to simple scenarios within reinforcement
288
+ learning (Ibarz et al., 2021).
289
+ As will be shown in the next section, any no-regret al-
290
+ gorithm can be used as a building block for the games
291
+ considered herein to guarantee a sub-linear pseudo-regret
292
+ in the number of rounds T .
293
+ As our goal is to study
294
+ the joint pseudo-regret (2) for adversarial rewards, we start from
295
+ a state-of-the-art algorithm for the adversarial multi-
296
+ armed bandit problem. In particular, we will utilize the
297
+ TSALLIS-INF algorithm that guarantees a pseudo-regret
298
+ with the optimal scaling in the adversarial single-player set-
299
+ ting (Zimmert & Seldin, 2021).
300
+ 3. Analysis of the joint pseudo-regret
301
+ Our analysis of the joint pseudo-regret builds upon learning
302
+ algorithms for the single-player multi-armed bandit prob-
303
+ lem. First, let us build intuition on how to use a multi-
304
+ armed bandit algorithm in the DAG-based game described
305
+ in Section 2. Consider a 2-player Stackelberg game where
306
+ the players choose actions from A1 and A2, respectively,
307
+ and where player 2 observes the actions of player 1. For
308
+ simplicity, we let player 1 use a mixed strategy whereas
309
+ player 2 is limited to a pure strategy. Furthermore, con-
310
+ sider the rewards to be a priori known by the players and let
311
+ T = 1 for which the Stackelberg game may be viewed as a
312
+ bi-level optimization problem (Aussel & Svensson, 2020).
313
+ In this setting, the action of player 1 imposes a Nash game
314
+ on player 2, who attempts to play optimally given the ob-
315
+ servation. Hence, player 2 has A1 pure strategies, one for
316
+ each of the A1 actions of player 1.
317
+ We may generalize this idea to the DAG-based multiplayer game with
+ unknown bandit-rewards and T ≥ 1 to achieve no-regret. Indeed, a
+ player i ∈ V may run |AEi| different multi-armed bandit algorithms,
+ one for each of the joint actions of its parents. Algorithm 1
+ illustrates this idea in conjunction with the TSALLIS-INF update rule
+ introduced by Zimmert & Seldin (2021), which is given in Algorithm 2
+ for completeness.3 In particular, for the 2-player Stackelberg game,
+ the leader runs a single multi-armed bandit algorithm whereas the
+ follower runs A1 learning algorithms. For simplicity, Algorithm 1
+ assumes that player i knows the size of the joint action space of its
+ parents, i.e., |AEi|. Dropping this assumption is straightforward:
+ simply keep track of the observed joint actions and initiate a new
+ multi-armed bandit learner upon a unique observation.
+ 2This is called pseudo-regret as r(a⋆) is obtained by a maximization
+ outside of the expectation.
337
+ Algorithm 1 Learning algorithm of player i ∈ V
+ 1: Input: for ease of notation, let the actions in AEi be labeled as 1, 2, . . . , |AEi|
+ 2: initialize cumulative loss Lk ← 0 ∈ R^{Ai} for k ∈ [|AEi|]
+ 3: initialize fixed-point xk ← 0 for k ∈ [|AEi|]
+ 4: initialize counter nk ← 0 for k ∈ [|AEi|]
+ 5: for t = 1, 2, . . . , T do
+ 6:   observe the joint action j ∈ [|AEi|] of the preceding players
+ 7:   increase counter nj ← nj + 1
+ 8:   obtain new strategy and new fixed-point (πi(t), xj) ← TSALLIS-INF(nj, Lj, xj)
+ 9:   play action ai(t) ∼ πi(t)
+ 10:  observe the joint bandit-reward rt(a(t))
+ 11:  update the cumulative loss for all k ∈ [Ai] as Lj,k ← Lj,k + 1{ai(t) = k}(1 − rt(a(t)))/pk
+ 12: end for
366
+ Algorithm 2 Strategy update for player i ∈ V
+ 1: Input: time step t, cumulative losses L ∈ R^{Ai}_+, previous fixed point x. Output: strategy πi(t), fixed point x
+ 2: set learning rate η ← 2√(1/t)
+ 3: repeat
+ 4:   pj ← 4(η(Lj − x))^{−2} for all j ∈ [Ai]
+ 5:   x ← x − ( Σ_{j=1}^{Ai} pj − 1 ) / ( η Σ_{j=1}^{Ai} pj^{3/2} )
+ 6: until convergence
+ 7: update strategy πi(t) ← (p1, . . . , pAi)
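+ To make Algorithms 1 and 2 concrete, here is a minimal Python sketch
+ (ours, not the authors' implementation): a fixed number of Newton
+ iterations stands in for the repeat-until-convergence loop of
+ Algorithm 2, and the initialization x = −√Ai is our choice (it is the
+ exact fixed point in the first round, where L = 0 and η = 2).
+ import numpy as np
+
+ def tsallis_inf_update(t, L, x, n_iter=20):
+     """Algorithm 2: Newton iteration for the normalizing fixed point x;
+     returns a probability vector over the A_i arms and the new x."""
+     eta = 2.0 * np.sqrt(1.0 / t)
+     for _ in range(n_iter):                     # "repeat ... until convergence"
+         p = 4.0 * (eta * (L - x)) ** (-2.0)     # p_j = 4 (eta (L_j - x))^(-2)
+         x = x - (p.sum() - 1.0) / (eta * (p ** 1.5).sum())
+     p = 4.0 * (eta * (L - x)) ** (-2.0)
+     return p / p.sum(), x                       # renormalize for numerical safety
+
+ class Player:
+     """Algorithm 1: one TSALLIS-INF learner per observed joint parent action."""
+     def __init__(self, n_actions, rng=None):
+         self.A = n_actions
+         self.rng = rng if rng is not None else np.random.default_rng()
+         self.L, self.x, self.n = {}, {}, {}     # keyed by observed parent action
+
+     def act(self, parent_action):
+         j = parent_action
+         if j not in self.L:                     # start a learner upon a unique observation
+             self.L[j] = np.zeros(self.A)
+             self.x[j] = -np.sqrt(self.A)        # exact fixed point for L = 0, eta = 2
+             self.n[j] = 0
+         self.n[j] += 1
+         p, self.x[j] = tsallis_inf_update(self.n[j], self.L[j], self.x[j])
+         a = int(self.rng.choice(self.A, p=p))
+         self._last = (j, a, p[a])
+         return a
+
+     def update(self, reward):
+         j, a, pa = self._last
+         self.L[j][a] += (1.0 - reward) / pa     # line 11: importance-weighted loss
+
+ The per-context dictionaries mean that a player only allocates a
+ learner for parent actions it has actually observed, matching the
+ remark above on dropping the |AEi| assumption.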
390
+ Next, we go on to analyze the joint pseudo-regret of Algo-
391
+ rithm 1. First, we present a result on the pseudo-regret for
392
+ the single-player multi-armed bandit problem that will be
393
+ used throughout.
394
+ 3The original TSALLIS-INF Algorithm is given in terms of
395
+ losses. To use rewards, one may simply use the relationship l =
396
+ 1 − r.
397
+ Theorem 3.1 (Pseudo-regret of TSALLIS-INF). Consider
398
+ a single-player multi-armed bandit problem with A1 arms,
399
+ played over T rounds. Let the player operate according to
400
+ Algorithm 1. Then, the pseudo-regret satisfies
401
+ R(T) ≤ 4√(A1 T) + 1.
404
+ Proof. For a single player, E1 = ∅ and we have |AE1| =
405
+ 1 by convention. Hence, our setting becomes equivalent
406
+ to that of Zimmert & Seldin (2021, Th. 1), from which the result
+ follows.
408
+ Next, we consider a two-player Stackelberg game with
409
+ joint bandit-rewards defined over a two-player clique. We
410
+ have the following upper bound on the joint pseudo-regret.
411
+ Theorem 3.2 (Joint pseudo-regret over cliques of size 2).
412
+ Consider a 2-player Stackelberg game with bandit-rewards,
413
+ given by (1), defined over a single clique containing both
414
+ players. Furthermore, let each of the players follow Algo-
415
+ rithm 1. Then, the joint pseudo-regret satisfies
416
+ R(T) ≤ 4√(A1 A2 T) + 4√(A1 T) + A1 + 1.
421
+ Proof. Without loss of generality, let player 2 observe the
422
+ actions of player 1. Let a1(t) ∈ A1 and a2(t) ∈ A2 denote
423
+ the actions of player 1 and player 2, respectively, at time t ∈
424
+ [T], and let a⋆_1 and a⋆_2(a1) denote the reward-maximizing pure
+ strategies of the players in hindsight, i.e.,
+ a⋆_1 = arg max_{a1∈A1} E[ Σ_{t=1}^T rt(a1, a⋆_2(a1)) ],              (3)
+ a⋆_2(a1) = arg max_{a2∈A2} E[ Σ_{t=1}^T rt(a1, a2) ].                (4)
449
+ Note that the optimal joint decision in hindsight is given by
+ (a⋆_1, a⋆_2(a⋆_1)). The joint pseudo-regret is given by
+ R(T) = Σ_{t=1}^T E[ rt(a⋆_1, a⋆_2(a⋆_1)) − rt(a⋆_1, a2(t)) ]
+                 + E[ rt(a⋆_1, a2(t)) − rt(a1(t), a2(t)) ]
+      ≤ Σ_{t=1}^T max_{at∈A1} E[ rt(at, a⋆_2(at)) − rt(at, a2(t)) ]
+                 + E[ Σ_{t=1}^T rt(a⋆_1, a2(t)) − rt(a1(t), a2(t)) ].   (5)
481
+ Next, let
+ a+_1(t) = arg max_{at∈A1} E[ rt(at, a⋆_2(at)) − rt(at, a2(t)) ]
+ and let 𝒯a = {t : a+_1(t) = a}, for a ∈ A1, denote all the rounds that
+ player 1 chose action a, and introduce Ta = |𝒯a|.
489
491
+ Then, the first term in (5) is upper-bounded as
+ Σ_{t=1}^T max_{at∈A1} E[ rt(at, a⋆_2(at)) − rt(at, a2(t)) ]
+   = Σ_{a∈A1} Σ_{t∈𝒯a} E[ rt(a, a⋆_2(a)) − rt(a, a2(t)) ]
+   ≤ Σ_{a∈A1} ( 4√(A2 Ta) + 1 )                                       (6)
+   ≤ max_{{Ta} : Σ_a Ta = T} Σ_{a∈A1} ( 4√(A2 Ta) + 1 )
+   = 4√(A1 A2 T) + A1                                                 (7)
525
+ where (6) follows from Theorem 3.1 and because player 2
526
+ follows Algorithm 1. Note that the rounds in 𝒯a may not be
527
+ consecutive. However, as we consider adversarial rewards,
528
+ Theorem 3.1 is still applicable.
529
+ Next, we consider the second term in (5). Note that, according to (3)
+ and (4), a⋆_1 is obtained from the optimal pure strategies in
+ hindsight of both the players. Let
+ a◦_1 = arg max_{a1∈A1} Σ_{t=1}^T E[ rt(a1, a2(t)) ]
+ and note that
+ E[ Σ_{t=1}^T rt(a⋆_1, a2(t)) ] ≤ E[ Σ_{t=1}^T rt(a◦_1, a2(t)) ].
+ By adding and subtracting rt(a◦_1, a2(t)) to the second term in (5),
+ we get
+ E[ Σ_{t=1}^T rt(a⋆_1, a2(t)) − rt(a1(t), a2(t)) ]
+   ≤ E[ Σ_{t=1}^T rt(a◦_1, a2(t)) − rt(a1(t), a2(t)) ]
+   ≤ 4√(A1 T) + 1                                                     (8)
+ where the last inequality follows from Theorem 3.1. The result
+ follows from (7) and (8).
579
+ From Theorem 3.2, we note that the joint pseudo-regret
580
+ scales with the size of the joint action space as R(T) =
+ O(√(A1 A2 T)). This is expected as a centralized version of the
+ cooperative Stackelberg game may be viewed as a single-player
+ multi-armed bandit problem with A1 A2 arms where, according to
+ Theorem 3.1, the pseudo-regret is upper-bounded by 4√(A1 A2 T) + 1.
+ Hence, from Theorem 3.2, we observe a penalty of 4√(A1 T) + A1
588
+ due to the decentralized nature of our setup.
589
+ More-
590
+ over, in the single-player setting, Algorithm 2 was shown
591
+ in Zimmert & Seldin (2021) to achieve the same scaling as
592
+ the lower bound in Cesa-Bianchi & Lugosi (2006, Th. 6.1).
593
+ Hence, Algorithm 1 achieves the optimal scaling. Next, we
594
+ extend Theorem 3.2 to cliques of size larger than two.
595
+ Theorem 3.3 (Joint pseudo-regret over a clique of arbitrary
596
+ size). Consider a DAG-based game with bandit rewards
597
+ given by (1), defined over a single clique containing m
598
+ players. Let each of the players operate according to Al-
599
+ gorithm 1. Then, the joint pseudo-regret satisfies
600
+ R(T) ≤ 4√T Σ_{i=1}^{m} Π_{k=1}^{i} √Ak + Σ_{i=1}^{m−1} Π_{k=1}^{i} Ak + 1.
616
+ Proof. Let Rub(T, m) denote an upper bound on the joint
617
+ pseudo-regret when the bandit-reward is defined over a
618
+ clique containing m players. From Theorem 3.1 and Theo-
619
+ rem 3.2, we have that
620
+ Rub(T, 1) = 4√(A1 T) + 1,
+ Rub(T, 2) = 4√(A1 T) + 4√(A1 A2 T) + A1 + 1,
628
+ respectively. Therefore, we form an induction hypothesis
629
+ as
630
+ Rub(T, m) = 4√T Σ_{i=1}^{m} Π_{k=1}^{i} √Ak + Σ_{i=1}^{m−1} Π_{k=1}^{i} Ak + 1.   (9)
646
+ Assume that (9) is true for a clique containing m − 1
647
+ players and add an additional player, assigned player in-
648
+ dex 1, whose actions are observable to the original m − 1
649
+ players.
650
+ The m players now form a clique C of size
651
+ m.
652
+ Let a(t) ∈ AC denote the joint action of all the
653
+ players in the clique at time t ∈ [T ] and let a−i(t) =
654
+ (a1(t), . . . , ai−1(t), ai+1(t), . . . , am(t)) ∈ AC\i denote
655
+ the joint action excluding the action of player i. Further-
656
+ more, let
657
+ a⋆_1 = arg max_{a1∈A1} E[ Σ_{t=1}^T rt(a1, a⋆_{−1}(a1)) ],
+ a⋆_{−1}(a1) = arg max_{a∈AC\1} E[ Σ_{t=1}^T rt(a1, a) ]
674
+ denote the optimal actions in hindsight of player 1 and the optimal
+ joint action of the original m − 1 players given the action of
+ player 1, respectively. The optimal joint action in hindsight is
+ given as a⋆ = (a⋆_1, a⋆_{−1}(a⋆_1)). Following the steps in the proof
+ of Theorem 3.2 verbatim, we obtain
684
+ R(T) = Σ_{t=1}^T E[ rt(a⋆) − rt(a⋆_1, a−1(t)) ]
+                 + E[ rt(a⋆_1, a−1(t)) − rt(a(t)) ]
+      ≤ Σ_{t=1}^T max_{a1} E[ rt(a1, a⋆_{−1}(a1)) − rt(a1, a−1(t)) ]
+        + Σ_{t=1}^T E[ rt(a⋆_1, a−1(t)) − rt(a1(t), a−1(t)) ]
+      ≤ Σ_{a∈A1} Σ_{t∈𝒯a} E[ rt(a, a⋆_{−1}(a)) − rt(a, a−1(t)) ]
+        + Σ_{t=1}^T E[ rt(a◦_1, a−1(t)) − rt(a1(t), a−1(t)) ]
+      ≤ Σ_{a∈A1} Rub(Ta, m − 1) + 4√(A1 T) + 1
+      ≤ A1 Rub(T/A1, m − 1) + 4√(A1 T) + 1                            (10)
734
+ where 𝒯a, Ta, and a◦_1 are defined analogously as in the
736
+ proof of Theorem 3.2. By using the induction hypothe-
737
+ sis (9) in (10) and by accounting for the original m − 1
738
+ players being indexed from 2 to m, we obtain
739
+ R(T) ≤ A1 [ 4√(T/A1) Σ_{i=2}^{m} Π_{k=2}^{i} √Ak + Σ_{i=2}^{m−1} Π_{k=2}^{i} Ak + 1 ]
+          + 4√(A1 T) + 1
+      = Rub(T, m),
762
+ which is what we wanted to show.
763
+ As in the two-player game, the joint pseudo-regret of Algorithm 1
+ achieves the optimal scaling, i.e., R(T) = O(√T Π_{k=1}^{m} √Ak), but
+ exhibits a penalty due to the decentralized setting which is equal to
+ 4√T Σ_{i=1}^{m−1} Π_{k=1}^{i} √Ak + Σ_{i=1}^{m−2} Π_{k=1}^{i} Ak.
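+ The bound of Theorem 3.3 is straightforward to evaluate numerically,
+ e.g., to plot the guarantee alongside simulations as in Section 4; a
+ small Python sketch (ours):
+ import math
+
+ def regret_upper_bound(T, A):
+     """Theorem 3.3 bound; A = [A_1, ..., A_m] along the observation order."""
+     m = len(A)
+     sqrt_term = sum(math.prod(math.sqrt(A[k]) for k in range(i + 1)) for i in range(m))
+     const_term = sum(math.prod(A[k] for k in range(i + 1)) for i in range(m - 1))
+     return 4.0 * math.sqrt(T) * sqrt_term + const_term + 1.0
+
+ print(regret_upper_bound(T=10**4, A=[9, 3, 3, 3]))  # planner + three workers (assumed clique)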
788
+ Up until this point, we have considered the pseudo-regret
789
+ when the bandit-reward (1) is defined over a single clique.
790
+ The next theorem leverages the previous results to provide
791
+ an upper bound on the joint pseudo-regret when the bandit-
792
+ reward is defined over an arbitrary number of independent
793
+ cliques in the DAG.
794
+ Theorem 3.4 (Joint pseudo-regret in DAG-based games).
795
+ Consider a DAG-based game with bandit rewards given as
796
+ in (1) and let C contain a collection of independent cliques
797
+ associated with the DAG. Let each player operate accord-
798
+ ing to Algorithm 1. Then, the joint pseudo-regret satisfies
799
+ R(T) = O( √( T max_{k∈[|C|]} |ANk| ) ),
804
+ where ANk denotes the joint action-space of the players in
805
+ the kth clique Nk ∈ C.
806
+ Proof. Let Nk ∈ C denote the players belonging to the kth
807
+ clique in C with joint action space ANk. The structure of (1)
808
+ allows us to express the joint pseudo-regret as
809
+ R(T) = E[ Σ_{t=1}^T rt(a⋆) − rt(a(t)) ]
+      ≤ Σ_{k=1}^{|C|} βk E[ Σ_{t=1}^T r^k_t(a⋆_k) − r^k_t(P^k(a(t))) ]   (11)
829
+ where
+ a⋆ = arg max_{a∈AV} E[ Σ_{t=1}^T rt(a) ],
+ a⋆_k = arg max_{a∈ANk} E[ Σ_{t=1}^T r^k_t(a) ],
+ and the inequality follows since
+ E[ Σ_{t=1}^T r^k_t(P^k(a⋆)) ] ≤ E[ Σ_{t=1}^T r^k_t(a⋆_k) ]. Now, for
+ each clique Nk ∈ C, let the
862
+ player indices in Nk be ordered according to the order of
863
+ player observations within the clique.
864
+ As Theorem 3.3
865
+ holds for any Nk ∈ C, we may, with a slight abuse of nota-
866
+ tion, bound the joint pseudo-regret of each clique as
867
+ R(T) ≤ Σ_{k=1}^{|C|} βk Rub(T, Nk) ≤ max_{k∈[|C|]} βk Rub(T, Nk),
873
+ where Rub(T, Nk) follows from Theorem 3.3 as
874
+ Rub(T, Nk) = 4√T Σ_{i∈Nk} Π_{j∈Nk, j≤i} √Aj + Σ_{i∈N⁻k} Π_{j∈N⁻k, j≤i} Aj + 1,
+ where N⁻k excludes the last element in Nk. The result follows as
+ Rub(T, Nk) = O( √( T |ANk| ) ), where the βk has been consumed in the
+ prefactor.
897
+ 4. Numerical results
898
+ The experimental setup in this section is inspired by the
899
+ socio-economic simulation in (Zheng et al., 2020).4
900
+ We consider a simple taxation game where one player acts as a
+ socio-economic planner and the remaining M players act as workers
+ that earn an income by performing actions, e.g., constructing houses.
+ The socio-economic planner divides the possible incomes into N
+ brackets where [βi−1, βi] denotes the ith bracket with β0 = 0 and
+ βN = ∞. In each round t ∈ [T], the socio-economic planner picks an
+ action ap(t) = (ap,1(t), . . . , ap,N(t)) that determines the taxation
+ rate, where ap,i(t) ∈ Ri denotes the marginal taxation rate in income
+ bracket i and Ri is a finite set. We use the discrete set
+ Ap = Π_{i=1}^{N} Ri of size Ap to denote the action space of the
+ planner.
+ 4The source code of our experiments is available on
+ https://anonymous.4open.science/r/bandit_optimization_dag-242C/.
918
+ In each round, the workers observe the taxation policy
919
+ ap(t) ∈ Ap and choose their actions consecutively, see
920
+ Fig. 3. Worker j ∈ [M] takes actions aj(t) ∈ Aj where
921
+ Aj is a finite set. A chosen action aj(t) ∈ Aj translates
922
+ into a tuple (xj(t), ˜lj(t)) consisting of a gross income and
923
+ a marginal default labor cost, respectively. Furthermore,
924
+ each worker has a skill level sj that serves as a divisor of
925
+ the default labor, resulting in an effective marginal labor
926
+ lj(t) = ˜lj(t)/sj. Hence, given a common action, high-
927
+ skilled workers exhibit less labor than low-skilled workers.
928
+ The gross income xj(t) of worker j in round t is taxed ac-
929
+ cording to ap(t) as
930
+ ξ(xj(t)) = Σ_{i=1}^{N} ap,i(t) [ (βi − βi−1) 1{xj(t) > βi}
+                                  + (xj(t) − βi−1) 1{xj(t) ∈ [βi−1, βi]} ]
936
+ where ap,i(t) is the taxation rate of the ith income bracket
937
+ and ξ(xj(t)) denotes the collected tax. Hence, worker j’s
938
+ cumulative net income zj(t) and cumulative labor ℓj(t) in
939
+ round t are given as
940
+ zj(t) = Σ_{u=1}^{t} ( xj(u) − ξ(xj(u)) ),     ℓj(t) = Σ_{u=1}^{t} lj(u).
950
+ In round t, the utility of worker j depends on the cumula-
951
+ tive net income and the cumulative labor as
952
+ r^j_t(zj(t), ℓj(t)) = ( (zj(t))^{1−η} − 1 ) / (1 − η) − ℓj(t)        (12)
957
+ where η > 0 determines the non-linear impact of income.
958
+ An example of the utility function in (12) is shown in Fig. 4
959
+ for η = 0.3, income xj(t) = 10, and a default marginal la-
960
+ bor ˜lj(t) = 1 at different skill levels. It can be seen that the
961
+ utility initially increases with income until a point at which
962
+ the cumulative labor outweighs the benefits of income and
963
+ the worker gets burnt out.
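+ The worker-side mechanics are easy to code up; the following Python
+ sketch (ours) mirrors the marginal tax ξ, the cumulative quantities
+ zj and ℓj, and the utility (12), under our reading of the bracket
+ structure in ξ.
+ import numpy as np
+
+ def collected_tax(x, rates, brackets):
+     """Marginal tax ξ(x); brackets = [β0, β1, ..., βN] with β0 = 0, βN = inf."""
+     tax = 0.0
+     for i, rate in enumerate(rates, start=1):
+         lo, hi = brackets[i - 1], brackets[i]
+         if x > hi:
+             tax += rate * (hi - lo)         # bracket completely filled
+         elif lo <= x <= hi:
+             tax += rate * (x - lo)          # bracket partially filled
+     return tax
+
+ def worker_utility(z_cum, l_cum, eta=0.3):
+     """Utility (12) from cumulative net income and cumulative labor."""
+     return (z_cum ** (1.0 - eta) - 1.0) / (1.0 - eta) - l_cum
+
+ rates, brackets, skill = [0.1, 0.3], [0.0, 14.0, np.inf], 2.0
+ z, labor = 0.0, 0.0
+ for x, default_labor in [(10.0, 1.0), (15.0, 2.0)]:   # two example rounds
+     z += x - collected_tax(x, rates, brackets)
+     labor += default_labor / skill                    # l_j = l̃_j / s_j
+     print(worker_utility(z, labor))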
964
+ Figure 3. Socio-economic setup with 4 players among which 3 are
+ designated workers.
+ Figure 4. Example of utility functions for different skill levels
+ when xj(t) = 10 and ˜ℓj(t) = 1 (worker utility vs. houses built for
+ s = 1, 2, 3).
+ We consider bandit-rewards defined with respect to the worker
+ utilities and the total collected tax as
991
+ rt(ap(t), a1(t), . . . , aM(t))
+   = (1/(M + 1)) [ Σ_{j=1}^{M} w r^j_t(zj(t), ℓj(t)) + wp Σ_{j=1}^{M} ξ(xj(t)) ]   (13)
+ where the weights trade off worker utility for the collected tax and
+ satisfy Mw + wp = M + 1. The individual rewards are all normalized to
+ [0, 1]; hence, rt(ap(t), a1(t), . . . , aM(t)) ∈ [0, 1].
1015
+ For the numerical experiment, we consider N = 2 in-
1016
+ come brackets where the boundaries of the income brackets
1017
+ are {0, 14, ∞} and the socio-economic planner chooses a
1018
+ marginal taxation rate from R = {0.1, 0.3, 0.5} in each
1019
+ income bracket, hence, Ap = 9. We consider M = 3 work-
1020
+ ers with the same action set A of size 3. Consequently,
1021
+ the joint action space is of size 243. Furthermore, we let
1022
+ the skill level of the workers coincide with the worker in-
1023
+ dex, i.e., sj = j for j ∈ [M].
1024
+ Put simply, workers that are able to observe others have a higher
+ skill level. The worker actions translate to a gross marginal income
+ and a marginal labor as aj(t) → (xj(t), lj(t)) where xj(t) = 5aj(t) and
1030
+ lj(t) = aj(t)/sj for aj(t) ∈ {1, 2, 3}. Finally, we set
1031
+ η = 0.3 and let w = 1/M and wp = M to model a sit-
1032
+ uation where the collected tax is preferred over workers’
1033
+ individual utility.
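+ Putting the pieces together, a rough Python sketch (ours) of one
+ round of the taxation game with one planner and three workers; the
+ Player class from the earlier sketch and a reward_fn implementing
+ (13) are assumed, and the worker observation order follows our
+ reading of Fig. 3.
+ import itertools
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ tax_rates = [0.1, 0.3, 0.5]
+ planner_actions = list(itertools.product(tax_rates, repeat=2))   # Ap = 9
+ planner = Player(n_actions=len(planner_actions), rng=rng)
+ workers = [Player(n_actions=3, rng=rng) for _ in range(3)]
+
+ def play_round(t, reward_fn):
+     ap_idx = planner.act(parent_action=0)       # the planner has no parents
+     context = (ap_idx,)                         # observed by the workers
+     worker_arms = []
+     for w in workers:
+         worker_arms.append(w.act(parent_action=context))
+         context = context + (worker_arms[-1],)  # later workers also see this action
+     r = reward_fn(t, planner_actions[ap_idx], [a + 1 for a in worker_arms])
+     for p in [planner] + workers:
+         p.update(r)                             # all players observe the same bandit reward
+     return r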
1034
+ The joint pseudo-regret of the socio-economic simulation is
1035
+ illustrated in Fig. 5 and Fig. 6 (different scales) along with
1036
+ the upper bound in Theorem 3.4. We collect 100 realiza-
1037
+ tions of the experiment and, along with the pseudo-regret
1038
+ R(T ), two standard deviations are also presented. It can
1039
+ be seen that the players initially explore the action space
1040
+ and are able to eventually converge on an optimal strategy
1041
+ from a pseudo-regret perspective. The upper bound in the
1042
+ figures is admittedly loose and does not exhibit the same
1043
+ asymptotic decay as the simulation due to different con-
1044
+ stants in the scaling law, see Fig. 6. However, it remains
1045
+ valuable as it provides an asymptotic no-regret guarantee
1046
+ for the learning algorithm.
1047
+ Figure 5. Pseudo-regret vs. the upper bound in Theorem 3.4 (linear
+ scale).
1065
+ 5. Conclusion
1066
+ We have studied multiplayer games with joint bandit-
1067
+ rewards where players execute actions consecutively and
1068
+ observe the actions of the preceding players.
1069
+ We intro-
1070
+ duced the notion of joint pseudo-regret and presented an
1071
+ algorithm that is guaranteed to achieve no-regret for adver-
1072
+ sarial bandit rewards. A bottleneck of many multi-agent
1073
+ algorithms is that the complexity scales with the joint ac-
1074
+ tion space (Jin et al., 2021) and our algorithm is no ex-
1075
+ ception. An interesting avenue for further study is to find
1076
+ algorithms that have more benign scaling properties, see
1077
+ e.g., (Jin et al., 2021; Daskalakis et al., 2021).
1078
+ Figure 6. Pseudo-regret vs. the upper bound in Theorem 3.4 (log
+ scale).
+ Furthermore, recent results on correlated multi-armed bandits have
+ demonstrated that multi-armed bandits with many arms may become
+ significantly more feasible if one is able to
1100
+ exploit dependencies among arms (Gupta et al., 2021). It
1101
+ would be interesting to explore how the scaling of our algo-
1102
+ rithm is affected by modelling and exploiting dependencies
1103
+ among players.
1104
+ References
1105
+ Auer, P. and Chiang, C.-K. An algorithm with nearly op-
1106
+ timal pseudo-regret for both stochastic and adversarial
1107
+ bandits. In Proceedings of the 29th Annual Conference
1108
+ on Learning Theory (COLT), 2016.
1109
+ Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E.
1110
+ The nonstochastic multiarmed bandit problem.
1111
+ SIAM
1112
+ Journal on Computing, 32(1):48–77, 2002.
1113
+ Aussel, D. and Svensson, A. A short state of the art on
1114
+ multi-leader-follower games.
1115
+ In Bilevel Optimization.
1116
+ Springer, Cham, Switzerland, 2020.
1117
+ Bai, Y., Jin, C., Wang, H., and Xiong, C. Sample-efficient
1118
+ learning of Stackelberg equilibria in general-sum games.
1119
+ In NeurIPS, 2021.
1120
+ Balcan, M.-F., Blum, A., Haghtalab, N., and Procaccia,
1121
+ A. D. Commitment without regrets: Online learning in
1122
+ Stackelberg security games. In Proceedings of the 16th
1123
+ ACM Conference on Economics and Computation, 2015.
1124
+ Bubeck, S. and Slivkins, A.
1125
+ The best of both worlds:
1126
+ Stochastic and adversarial bandits. In Proceedings of the
1127
+ 25th Annual Conference on Learning Theory (COLT),
1128
+ 2012.
1129
+ Bubeck, S., Budzinski, T., and Sellke, M. Cooperative and
1130
+ stochastic multi-player multi-armed bandit: Optimal re-
1131
+ gret with neither communication nor collisions. In Pro-
1132
1134
+ ceedings of the 34th Annual Conference on Learning
1135
+ Theory (COLT), 2021.
1136
+ Bıyık, E., Lalitha, A., Saha, R., Goldsmith, A., and Sadigh,
1137
+ D. Partner-aware algorithms in decentralized coopera-
1138
+ tive bandit teams. In The Thirty-Sixth AAAI Conference
1139
+ on Artificial Intelligence (AAAI), 2022.
1140
+ Cesa-Bianchi, N. and Lugosi, G. Prediction, Learning, and
1141
+ Games. Cambridge University Press, Cambridge, UK,
1142
+ 2006.
1143
+ Cesa-Bianchi, N., Gentile, C., Mansour, Y., and Minora, A.
1144
+ Delay and cooperation in nonstochastic bandits. In the
1145
+ 29th Annual Conference on Learning Theory (COLT),
1146
+ 2016.
1147
+ Cesa-Bianchi, N., Cesari, T., and Monteleoni, C. Coopera-
1148
+ tive online learning: Keeping your neighbors updated. In
1149
+ the 31st International Conference on Algorithmic Learn-
1150
+ ing Theory (ALT), 2020.
1151
+ Cesa-Bianchi, N., Cesari, T. R., and Della Vecchia, R. Co-
1152
+ operative online learning with feedback graphs, 2021.
1153
+ arXiv:2106.04982.
1154
+ D’Andrea, M.
1155
+ Playing against no-regret players, 2022.
1156
+ arXiv:2202.09364.
1157
+ Daskalakis, C., Fishelson, M., and Golowich, N.
1158
+ Near-
1159
+ optimal no-regret learning in general games. In NeurIPS,
1160
+ 2021.
1161
+ Deng, Y., Schneider, J., and Sivan, B. Strategizing against
1162
+ no-regret learners. In NeurIPS, 2019.
1163
+ Goktas, D., Zhao, J., and Greenwald, A. Robust no-regret
1164
+ learning in min-max Stackelberg games. In The AAAI-22
1165
+ Workshop on Adversarial Machine Learning and Beyond,
1166
+ 2022.
1167
+ Gupta, S., Chaudhari, S., Joshi, G., and Yagan, O. Multi-
1168
+ armed bandits with correlated arms. IEEE Transactions
1169
+ on Information Theory, 67(10):6711–6732, 2021.
1170
+ Hicks, J. R. Marktform und gleichgewicht. The Economic
1171
+ Journal, 45(178):334–336, 1935.
1172
+ Ibarz, J., Tan, J., Finn, C., Kalakrishnan, M., Pastor, P., and
1173
+ Levine, S. How to train your robot with deep reinforce-
1174
+ ment learning: lessons we have learned. The Interna-
1175
+ tional Journal of Robotics Research, 40(4-5):698–721,
1176
+ 2021.
1177
+ Janatian, N., Modarres-Hashemi, M., and Sun, S. Sensing-
1178
+ based resource allocation in multi-channel cognitive ra-
1179
+ dio networks. In the IEEE Symposium on Communica-
1180
+ tions and Vehicular Technology (SCVT), 2015.
1181
+ Jin, C., Liu, Q., Wang, Y., and Yu, T. V-learning – a sim-
1182
+ ple, efficient, decentralized algorithm for multiagent RL,
1183
+ 2021. arXiv:2110.14555.
1184
+ Kalathil, D., Nayyar, N., and Jain, R. Decentralized learn-
1185
+ ing for multiplayer multiarmed bandits. IEEE Transac-
1186
+ tions on Information Theory, 60(4):2331–2345, 2014.
1187
+ Koller, D. and Friedman, N. Probabilistic Graphical Mod-
1188
+ els: Principles and Techniques - Adaptive Computation
1189
+ and Machine Learning. The MIT Press, Cambridge, MA,
1190
+ USA, 2009.
1191
+ Lauffer, N., Ghasemi, M., Hashemi, A., Savas, Y., and
1192
+ Topcu, U. No-regret learning in dynamic Stackelberg
1193
+ Games, 2022. arXiv:2202.04786.
1194
+ Marsden, G., McDonald, M., and Brackstone, M. Towards
1195
+ an understanding of adaptive cruise control. Transporta-
1196
+ tion research Part C: Emerging Technologies, 9(1):33–
1197
+ 51, 2001.
1198
+ Moghadam, M. M., Boroomand, B., Jalali, M., Zareian, A.,
1199
+ DaeiJavad, A., Manshaei, M. H., and Krunz, M. Game
1200
+ of GANs: Game-Theoretical Models for Generative Ad-
1201
+ versarial Networks, 2021. arXiv:2106.06976.
1202
+ Schäfer, F., Anandkumar, A., and Owhadi, H. Competitive
1203
+ mirror descent, 2020. arXiv:2006.10179.
1204
+ Sessa, P. G., Bogunovic, I., Kamgarpour, M., and Krause,
1205
+ A. Learning to play sequential games versus unknown
1206
+ opponents. In NeurIPS, 2020.
1207
+ Shalev-Shwartz, S. Online learning and online convex op-
1208
+ timization. Foundations and Trends® in Machine Learn-
1209
+ ing, 4(2):107–194, 2012.
1210
+ Wei, C.-Y. and Luo, H. More adaptive algorithms for ad-
1211
+ versarial bandits. In Proceedings of the 31st Annual Con-
1212
+ ference On Learning Theory (COLT), 2018.
1213
+ Yang, Y. and Wang, J.
1214
+ An overview of multi-agent re-
1215
+ inforcement learning from game theoretical perspective,
1216
+ 2020. arXiv:2011.00583.
1217
+ Zhang, K., Yang, Z., Liu, H., Zhang, T., and Basar, T.
1218
+ Fully decentralized multi-agent reinforcement learning
1219
+ with networked agents. In Proceedings of the 35th Inter-
1220
+ national Conference on Machine Learning (ICML), 2018.
1221
+ Zhang, K., Yang, Z., and Bas¸ar, T. Multi-agent reinforce-
1222
+ ment learning: A selective overview of theories and al-
1223
+ gorithms. In Handbook of Reinforcement Learning and
1224
+ Control. Springer, Cham, Switzerland, 2019.
1225
+ Zheng, S., Trott, A., Srinivasa, S., Naik, N., Gruesbeck,
1226
+ M., Parkes, D. C., and Socher, R. The AI economist:
1227
+ Improving equality and productivity with AI-driven tax
1228
+ policies, 2020. arXiv:2004.13332.
1229
1231
+ Zhong, H., Yang, Z., Wang, Z., and Jordan, M. I.
1232
+ Can
1233
+ reinforcement learning find Stackelberg-Nash equilibria
1234
+ in general-sum Markov games with myopic followers?,
1235
+ 2021. arXiv:2112.13521.
1236
+ Zimmert, J. and Seldin, Y. Tsallis-inf: An optimal algo-
1237
+ rithm for stochastic and adversarial bandits. Journal of
1238
+ Machine Learning Research, 22(28):1–49, 2021.
1239
+
DtFKT4oBgHgl3EQfZS4v/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
G9AzT4oBgHgl3EQfjP2K/content/2301.01513v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5224caf3ec1c79fc819179a848a32da940cb1fa64dc40f425d68fb12863c4bb5
3
+ size 105948
G9AzT4oBgHgl3EQfjP2K/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9c5e32c53269d0bfa7de27c0e9633313c7cff91dab2fb32d870630342c6b7985
3
+ size 655405
G9AzT4oBgHgl3EQfjP2K/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f15eb1771a364cd35842c21539dbb01f7d4165c8610657ffac3e7803441ce41f
3
+ size 31176