Implementation of hyperbolic complex numbers in Julia language

Anna V. Korolkova,1,* Migran N. Gevorkyan,1,† and Dmitry S. Kulyabov1,2,‡
1 Peoples' Friendship University of Russia (RUDN University), 6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation
2 Joint Institute for Nuclear Research, 6 Joliot-Curie, Dubna, Moscow region, 141980, Russian Federation

Background: Hyperbolic complex numbers are used in the description of hyperbolic spaces. A well-known example of such a space is the Minkowski space, which plays a leading role in the problems of special relativity and electrodynamics. However, such numbers are not widely available in programming languages. Purpose: It is therefore of interest to implement hyperbolic complex numbers in scientific programming languages, in particular in Julia. Methods: The Julia language is built around the concept of multiple dispatch, an extension of the concept of polymorphism found in object-oriented programming languages. We used Julia's multiple dispatch to implement hyperbolic complex numbers. Results: The result is a library that implements hyperbolic numbers. Conclusions: Based on the results of the study, we conclude that multiple dispatch is a convenient and natural concept for scientific programming languages.

Keywords: Julia programming language, multiple dispatch, abstract data types, type conversion, parametric structures, hyperbolic complex numbers
I. INTRODUCTION

The Julia programming language [1, 2] is a promising language for scientific computing, and it has by now reached a stable state. By design, Julia addresses the "two-language problem": for rapid prototyping, data processing, and visualization one uses an interpreted dynamic language or a mathematical package (Python, Matlab, etc.), while for intensive numerical calculations the program has to be rewritten in a statically typed compiled language (C/C++, Fortran).

Python illustrates this problem well. It gained wide popularity as an interface "glue" language, and numerous wrapper libraries were written in it that use Python code to call C/C++ and Fortran functions from precompiled libraries. For example, the well-known NumPy library [3] consists of 51% C code and only 47% Python code (the remaining percentage is divided between C++, Fortran, JavaScript, and Unix shell).

The Julia language combines the flexibility of dynamically typed interpreted languages with the performance of statically typed compiled languages.

The basic part of Julia is very similar to other scientific programming languages, so it presents no difficulty in mastering. However, Julia's core is built around the concept of multiple dispatch [4], which is rare in other languages. It is this mechanism that essentially distinguishes Julia from other languages, and understanding it is essential for taking full advantage of Julia.

A. Paper structure

In this article we pay particular attention to illustrating the mechanism of multiple dispatch and the mechanisms closely related to it.

In the first part of the article we give the necessary definitions and illustrate the concept of multiple dispatch with simple examples that convey the syntax associated with this part of the language and capture the essence of the approach. In the second part we give an example implementation of hyperbolic complex numbers in Julia. This example touches not only on multiple dispatch but also on the type casting mechanism, the abstract type hierarchy, the overloading of arithmetic operators, and the definition of user-defined data types.

* korolkova-av@rudn.ru
† gevorkyan-mn@rudn.ru
‡ kulyabov-ds@rudn.ru

arXiv:2301.01707v1 [cs.MS] 4 Jan 2023
II. MULTIPLE DISPATCH

A. Common definitions

Dynamic dispatch is a mechanism that selects which of the many implementations of a polymorphic function (or operator) should be called in a given case [5]; the choice of implementation is made at program execution time. Multiple dispatch builds on dynamic dispatch: the implementation of a polymorphic function is chosen based on the type, number, and order of the function's arguments. This is how runtime polymorphic dispatch is implemented [6, 7]. Note also that besides the term multiple dispatch, the term multimethod is used.

The mechanism of multiple dispatch is similar to the overloading of functions and operators implemented, for example, in C++. Function overloading, however, is resolved exclusively at compile time, while multiple dispatch also works at runtime (runtime polymorphism).
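Since dispatch takes into account the type, number, and order of arguments, all three can distinguish methods. A minimal sketch (the function g is our own illustration, not part of the paper's later examples):

```julia
# Three methods of one function g, distinguished by argument order and count.
g(x::Int, y::String) = "int, then string"
g(x::String, y::Int) = "string, then int"
g(x::Int) = "single integer"

println(g(1, "a"))   # int, then string
println(g("a", 1))   # string, then int
println(g(1))        # single integer
```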
B. Multiple dispatch in Julia

To illustrate the mechanism of multiple dispatch, consider the following code example in the Julia language.

function f(x, y)
    println("Generic implementation")
    return x + y
end

function f(x)
    println("For single argument")
    return x
end

function f(x::Integer, y::Integer)
    println("Implementation for integers")
    return x + y
end

function f(x::String, y::String)
    println("Implementation for strings")
    return x * " " * y
end

function f(x::Tuple{Int, Int}, y::Tuple{Int, Int})
    println("Implementation for tuples of two integer elements")
    return (x[1], x[2], y[1], y[2])
end
In this example we have created five implementations of the function f that differ in their signatures. In Julia terms, this means that the single function f now has five different methods. In the first two methods we did not use type annotations, so the types of the arguments will be determined either at compile time or at run time (as in interpreted languages). It is also worth noting that Julia uses dynamic JIT (just-in-time) compilation, so the compilation stage is not explicitly separated from the execution stage for the user.

The arguments of the remaining three methods are annotated with types, so these methods will only be called if the argument types match the annotations. The method of f for strings uses the concatenation operator *. The language's creators justify the choice of the multiplication sign * over the more traditional addition sign + by the fact that string concatenation is not a commutative operation: it is more logical to denote it with the multiplication sign than with the addition sign, which conventionally denotes commutative operations.
The following code snippet illustrates how multiple dispatch works at compile time. The @show macro prints the name of the function and the arguments passed to it.

@show f(2.0, 1)
@show f(2, 2)
@show f(0x2, 0x1) # numbers in hexadecimal notation
@show f("Text", "line")
@show f(3)
@show f([1, 2], [3, 4])
@show f((1, 2), (3, 4))
• In the first line we passed floating-point arguments, so the generic implementation was called. Since the operator + is defined for floating-point numbers, the call succeeded and gave the correct result.
• The method for integers was called in the second and third lines. Note that Integer is an abstract type covering the signed and unsigned integers from 1 to 16 bytes in size defined in the language core. Numbers written in hexadecimal are interpreted by default as unsigned integers.
• The method for strings was called in the fourth line; in the fifth line, the method for a single argument.
• The sixth line passed two arrays as arguments. The + operation is defined for arrays, so the generic implementation ran without error and returned the element-wise sum.
• In the seventh line the arguments are tuples of two integers. Since we defined a method for this combination of arguments, the function worked correctly.
Generic implementation
f(2.0, 1) = 3.0
Implementation for integers
f(2, 2) = 4
Implementation for integers
f(0x02, 0x01) = 0x03
Implementation for strings
f("Text", "line") = "Text line"
For single argument
f(3) = 3
Generic implementation
f([1, 2], [3, 4]) = [4, 6]
Implementation for tuples of two integer elements
f((1, 2), (3, 4)) = (1, 2, 3, 4)
The above examples would work correctly in any language that supports function overloading; they do not demonstrate the specifics of dynamic dispatch, since the argument types are known at compile time and are available to the translator.

To test dynamic method selection, consider the following code:

print("Enter an integer:")
# Read a string and convert it to an integer type
@show n = parse(Int32, readline())
if n > 0
    x = 1.2; y = 0.1
else
    x = 1; y = 2
end
f(x, y)

Here the types of the values of x and y are not known at compile time, since they depend on the number the user enters during program execution. Nevertheless, when x and y are integers, the corresponding integer method is called.
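The method actually selected for a given combination of argument types can also be inspected programmatically. A small sketch using a hypothetical function p (not the f defined above, so that the snippet is self-contained):

```julia
# Two methods of p; `methods` lists the method table and `which`
# reports the method that a call with the given types dispatches to.
p(x, y) = "generic"
p(x::Integer, y::Integer) = "integers"

println(p(1.2, 0.1))               # generic
println(p(1, 2))                   # integers
println(length(methods(p)))        # 2
println(which(p, Tuple{Int,Int}))  # the Integer-typed method
```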
III. HYPERBOLIC NUMBERS

We will use hyperbolic numbers to illustrate the multiple dispatch capabilities of the Julia language, so we limit ourselves to the definition and the basic arithmetic operations.

Hyperbolic numbers [8–11], along with elliptic and parabolic numbers, are a generalization of complex numbers. Hyperbolic numbers can be defined as follows:

z = x + jy,  j^2 = 1,  j ≠ ±1.

The quantity j is called the hyperbolic imaginary unit, and the quantities x and y are called the real and imaginary parts, respectively.

For two hyperbolic numbers z1 = x1 + jy1 and z2 = x2 + jy2 the following arithmetic operations are defined.

Addition: z1 + z2 = (x1 + x2) + j(y1 + y2).
Multiplication: z1 z2 = (x1 x2 + y1 y2) + j(x1 y2 + x2 y1).
Conjugation: z* = x - jy.
Inverse number: z^(-1) = x/(x^2 - y^2) - j y/(x^2 - y^2).
Division: z1/z2 = (x1 x2 - y1 y2)/(x2^2 - y2^2) + j (x2 y1 - x1 y2)/(x2^2 - y2^2).
The implementation of hyperbolic numbers is in many respects similar to that of complex numbers. The operators +, -, * and / must be overloaded, along with root extraction, exponentiation, the elementary mathematical functions, and so on. For the purpose of illustrating multiple dispatch, however, it is the arithmetic operations that are of primary interest. Elementary functions take only one argument, so it is enough to define a single method for each of them. The arithmetic operators, by contrast, must provide for combinations of arguments of different numeric types: it should be possible, for example, to add a hyperbolic number to an integer, rational, or irrational number, which brings in not only multiple dispatch but also the type casting mechanisms, the abstract type hierarchy, and default constructor overloading.

We therefore confine ourselves to implementing precisely the arithmetic operations, without touching on the mathematically more involved computation of the various elementary functions of a hyperbolic number.

Note that in addition to the term hyperbolic numbers, the literature also uses the terms double numbers, split-complex numbers, and perplex numbers [8, 12–15].
IV. IMPLEMENTATION OF HYPERBOLIC NUMBERS IN JULIA

A. Declaring a Data Structure

The implementation of hyperbolic numbers in Julia is based on the code for complex numbers available in the official Julia repository. We also used our earlier experience implementing parabolic complex numbers [16]. The new type Hyperbolic is defined as an immutable structure:

struct Hyperbolic{T<:Real} <: Number
    "Real part"
    re::T
    "Imaginary part"
    jm::T
end

The structure is simple and contains only two fields of parametric type T, which is required to be a subtype of the abstract type Real (syntax T<:Real). The type Hyperbolic itself is a subtype of the abstract type Number (see Fig. 1), so hyperbolic numbers are built into the existing hierarchy of numeric types.

After the structure is defined, a new object of type Hyperbolic can be created by calling the default constructor. For example, the number h = 1 + j3 is created as follows:

h = Hyperbolic{Float64}(1, 3)

After creation you can access the fields of the structure as h.re and h.jm, but an attempt to change the value of a field of an existing object will result in an error, since structs are immutable entities. When both arguments have the same type, the parameter T is inferred and need not be written out:

h = Hyperbolic(1, 3)
Number
├─ Complex (structure)
├─ Hyperbolic (structure)
└─ Real
   ├─ Integer
   │  ├─ Signed: Int8, Int16, Int32, Int64, Int128
   │  ├─ Bool
   │  └─ Unsigned: UInt8, UInt16, UInt32, UInt64, UInt128
   ├─ Rational (structure)
   └─ AbstractFloat: Float16, Float32, Float64

Figure 1. Location of Hyperbolic Numbers in Julia's Type Hierarchy (Number, Real, Integer, Signed, Unsigned, and AbstractFloat are abstract types; the integer and floating-point leaves are primitive types; Complex, Hyperbolic, and Rational are structures)
However, if the argument types differ, the default constructor cannot implicitly cast them and create the new object. In that case the parametric type must be specified explicitly:

# Float64 and Int64
h = Hyperbolic(1.0, 3) # Error
h = Hyperbolic{Float64}(1.0, 3) # Correct
B. Additional constructors

The default constructor is an ordinary function whose name coincides with the type name. By creating additional methods for this function, you can create additional constructors that handle various special cases.

For example, in order not to specify the parametric type every time, add a new constructor of the following form:

"""Constructor №2"""
function Hyperbolic(x::Real, y::Real)
    return Hyperbolic(promote(x, y)...)
end
The promote function casts the arguments passed to it to a common type and returns the result as a tuple. The postfix operator ... unpacks the tuple and passes its elements as arguments to the constructor function. The language core defines casting rules for all subtypes of the abstract type Real, so the constructor now works correctly for any combination of arguments, as long as the rule T<:Real is satisfied. For example, the following code works correctly:

# Rational and Float64
h = Hyperbolic(1//2, pi)
>> Hyperbolic{Float64}(0.5, 3.141592653589793)

We passed a rational number (type Rational) and the built-in global constant π (an irrational constant) to the constructor. The type casting rules then converted both arguments to Float64 as the more general type.
Declaring two more constructors makes it possible to specify hyperbolic numbers with zero imaginary part:

"""Constructor №3"""
function Hyperbolic{T}(x::Real) where {T<:Real}
    return Hyperbolic{T}(x, 0)
end

"""Constructor №4"""
function Hyperbolic(x::Real)
    return Hyperbolic(promote(x, 0)...)
end

Constructor №3 is a parametric function declared using the where construct; T is a subtype of the abstract type Real. Constructor №4 works similarly to constructor №2.
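The constructors above can be exercised as follows; this is a minimal self-contained sketch (the struct is repeated here so the snippet runs on its own):

```julia
struct Hyperbolic{T<:Real} <: Number
    re::T
    jm::T
end
Hyperbolic(x::Real, y::Real) = Hyperbolic(promote(x, y)...)   # constructor №2
Hyperbolic{T}(x::Real) where {T<:Real} = Hyperbolic{T}(x, 0)  # constructor №3
Hyperbolic(x::Real) = Hyperbolic(promote(x, 0)...)            # constructor №4

h1 = Hyperbolic{Float32}(5)  # explicit parameter: fields converted to Float32
h2 = Hyperbolic(5)           # parameter inferred: Hyperbolic{Int64}(5, 0)
h3 = Hyperbolic(2.5)         # Hyperbolic{Float64}(2.5, 0.0)
```

Note that constructor №2 does not recurse into itself: after promote both arguments have the same type, so the more specific default constructor Hyperbolic(re::T, jm::T) is selected by dispatch.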
Two more constructors allow passing another hyperbolic number as the argument to the constructor:

"""Constructor №5"""
function Hyperbolic{T}(h::Hyperbolic) where {T<:Real}
    return Hyperbolic{T}(h.re, h.jm)
end

"""Constructor №6"""
function Hyperbolic(h::Hyperbolic)
    return Hyperbolic(promote(h.re, h.jm)...)
end

For convenience, you can also create a separate constant for the imaginary unit j:

const jm = Hyperbolic(0, 1)
C. Data printing

To print hyperbolic values in a compact and readable form, add an appropriate method to the show function from the Base module:

function Base.show(io::IO, h::Hyperbolic)
    print(io, h.re, "+", h.jm, "j")
end

The show function is used when printing data to the console; in particular, it is called by println and by the @show macro. The code and output listings below assume that this show method has been added for hyperbolic numbers.
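The effect of the show method can be checked directly, since string() also routes through print and hence through show. A self-contained sketch (the struct is repeated so the snippet runs on its own):

```julia
struct Hyperbolic{T<:Real} <: Number
    re::T
    jm::T
end
# Compact printing: "re+jmj"
Base.show(io::IO, h::Hyperbolic) = print(io, h.re, "+", h.jm, "j")

s = string(Hyperbolic(1, 3))  # "1+3j"
println(s)
```

One limitation of this simple method, worth noting, is that a negative imaginary part prints as, for example, "1+-3j" rather than "1-3j".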
D. Type casting

Before proceeding to the implementation of methods for arithmetic operations on hyperbolic numbers, it is necessary to define the rules for type casting. To do this, create new methods for the promote_rule function from the Base module.

function Base.promote_rule(::Type{Hyperbolic{T}}, ::Type{S}) where {T<:Real, S<:Real}
    return Hyperbolic{promote_type(T, S)}
end

function Base.promote_rule(::Type{Hyperbolic{T}}, ::Type{Hyperbolic{S}}) where {T<:Real, S<:Real}
    return Hyperbolic{promote_type(T, S)}
end
The arguments of promote_rule are the parametric types that should be cast to one enclosing type. In our case this is possible when one of the types is a subtype of Real; the enclosing type is then Hyperbolic.

After these methods are added for promote_rule, the functions promote, promote_type, and convert become usable.

>> h = Hyperbolic(1 // 2)
>> promote(h, 1)
(1//2+0//1j, 1//1+0//1j)
>> promote_type(Hyperbolic{Int64}, Float32)
Hyperbolic{Float32}

The first function is already familiar to us. The second infers the enclosing type not from specific values but from the types themselves: a type in Julia is a first-class object (of type DataType) and can be assigned to variables, passed as a function argument, and so on.

The convert function converts a specific value to a given type, for example:

>> convert(Hyperbolic, 1)
1+0j
After the type casting methods are in place, we can start adding methods for the arithmetic operations. A feature of Julia is that arithmetic operations are implemented not as operators but as functions. For example, the following calls are correct:

>> +(1, 2)
3
>> +(1, 2, 3, 4)
10
>> +((i for i in 1:10)...) # numbers from 1 to 10
55

Adding methods for arithmetic operations is therefore no different from the corresponding process for any other function.
Methods for the unary operations + and - are added as follows, using the abbreviated function declaration syntax:

Base.:+(h::Hyperbolic) = Hyperbolic(+h.re, +h.jm)
Base.:-(h::Hyperbolic) = Hyperbolic(-h.re, -h.jm)

Methods for binary addition, subtraction, multiplication, and division are added similarly. Here is the code for addition and multiplication:

# Binary + and *
function Base.:+(x::Hyperbolic, y::Hyperbolic)
    xx = x.re + y.re
    yy = x.jm + y.jm
    return Hyperbolic(xx, yy)
end

function Base.:*(x::Hyperbolic, y::Hyperbolic)
    xx = x.re * y.re + x.jm * y.jm
    yy = x.re * y.jm + x.jm * y.re
    return Hyperbolic(xx, yy)
end
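The remaining binary operations can be sketched along the same lines, following the formulas of Section III. This is our own illustration rather than the published library code, and the struct and promoting constructor are repeated so the snippet is self-contained:

```julia
struct Hyperbolic{T<:Real} <: Number
    re::T
    jm::T
end
Hyperbolic(x::Real, y::Real) = Hyperbolic(promote(x, y)...)

# Binary subtraction: componentwise
Base.:-(x::Hyperbolic, y::Hyperbolic) = Hyperbolic(x.re - y.re, x.jm - y.jm)

# Conjugation: z* = x - jy
Base.conj(h::Hyperbolic) = Hyperbolic(h.re, -h.jm)

# Division: z1/z2 = (x1x2 - y1y2)/(x2^2 - y2^2) + j(x2y1 - x1y2)/(x2^2 - y2^2)
function Base.:/(x::Hyperbolic, y::Hyperbolic)
    d = y.re^2 - y.jm^2  # zero when |re| == |jm|: such y are zero divisors
    return Hyperbolic((x.re * y.re - x.jm * y.jm) / d,
                      (x.jm * y.re - x.re * y.jm) / d)
end
```

Unlike the ordinary complex numbers, the denominator x2^2 - y2^2 vanishes on the light cone |x2| = |y2|, so division by such numbers is undefined; a production implementation would need to handle this case.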
V. CONCLUSION

We have examined the mechanism of multiple dispatch underlying the Julia language, using the implementation of hyperbolic numbers as an example. This example allowed us to touch upon such language concepts as the data type hierarchy, composite data types, type casting mechanisms, and function overloading (in Julia terms, the creation of new methods for functions).

ACKNOWLEDGMENTS

This paper has been supported by the RUDN University Strategic Academic Leadership Program.
[1] J. Bezanson, A. Edelman, S. Karpinski, V. B. Shah, Julia: A fresh approach to numerical computing, SIAM Review 59 (1) (2017) 65–98. doi:10.1137/141000671.
[2] M. N. Gevorkyan, D. S. Kulyabov, L. A. Sevastyanov, Review of Julia programming language for scientific computing, in: The 6th International Conference "Distributed Computing and Grid-technologies in Science and Education", 2014, p. 27.
[3] T. E. Oliphant, Guide to NumPy, 2nd Edition, CreateSpace Independent Publishing Platform, 2015.
[4] F. Zappa Nardelli, J. Belyakova, A. Pelenitsyn, B. Chung, J. Bezanson, J. Vitek, Julia subtyping: a rational reconstruction, Proceedings of the ACM on Programming Languages 2 (OOPSLA) (2018) 1–27. doi:10.1145/3276483.
[5] K. Driesen, U. Hölzle, J. Vitek, Message Dispatch on Pipelined Processors, Lecture Notes in Computer Science, Springer Berlin Heidelberg, 1995. doi:10.1007/3-540-49538-x_13.
[6] R. Muschevici, A. Potanin, E. Tempero, J. Noble, Multiple dispatch in practice, in: OOPSLA'08: Proceedings of the 23rd ACM SIGPLAN conference on Object-oriented programming systems languages and applications, ACM Press, 2008, pp. 563–582. doi:10.1145/1449764.1449808.
[7] S. Gowda, Y. Ma, A. Cheli, M. Gwóźdź, V. B. Shah, A. Edelman, C. Rackauckas, High-performance symbolic-numerics via multiple dispatch, ACM Communications in Computer Algebra 55 (3) (2022) 92–96. doi:10.1145/3511528.3511535.
[8] I. M. Yaglom, Complex Numbers in Geometry, Academic Press, 1968.
[9] I. M. Yaglom, B. A. Rozenfel'd, E. U. Yasinskaya, Projective metrics, Russian Mathematical Surveys 19 (5) (1964) 49–107. doi:10.1070/RM1964v019n05ABEH001159.
[10] D. S. Kulyabov, A. V. Korolkova, L. A. Sevastianov, Complex numbers for relativistic operations (Dec 2021). doi:10.20944/preprints202112.0094.v1.
[11] D. S. Kulyabov, A. V. Korolkova, M. N. Gevorkyan, Hyperbolic numbers as Einstein numbers, Journal of Physics: Conference Series 1557 (2020) 012027. doi:10.1088/1742-6596/1557/1/012027.
[12] P. Fjelstad, Extending special relativity via the perplex numbers, American Journal of Physics 54 (5) (1986) 416–422. doi:10.1119/1.14605.
[13] W. Band, Comments on extending relativity via the perplex numbers, American Journal of Physics 56 (5) (1988) 469. doi:10.1119/1.15582.
[14] J. Rooney, On the three types of complex number and planar transformations, Environment and Planning B: Planning and Design 5 (1) (1978) 89–99. doi:10.1068/b050089.
[15] J. Rooney, Generalised complex numbers in mechanics, in: M. Ceccarelli, V. A. Glazunov (Eds.), Advances on Theory and Practice of Robots and Manipulators, Vol. 22 of Mechanisms and Machine Science, Springer International Publishing, Cham, 2014, pp. 55–62. doi:10.1007/978-3-319-07058-2_7.
[16] M. N. Gevorkyan, A. V. Korolkova, D. S. Kulyabov, Approaches to the implementation of generalized complex numbers in the Julia language, in: D. S. Kulyabov, K. E. Samouylov, L. A. Sevastianov (Eds.), Workshop on information technology and scientific computing in the framework of the X International Conference Information and Telecommunication Technologies and Mathematical Modeling of High-Tech Systems (ITTMM-2020), Vol. 2639 of CEUR Workshop Proceedings, Aachen, 2020, pp. 141–157. URL http://ceur-ws.org/Vol-2639/paper-13.pdf
408
+ Реализация гиперболических комплексных чисел на языке Julia
409
+ А. В. Королькова,1, \ast М. Н. Геворкян,1, \dagger and Д. С. Кулябов1, 2, ‡
410
+ 1Российский университет дружбы народов,
411
+ 117198, Москва, ул. Миклухо-Маклая, д. 6
412
+ 2Объединённый институт ядерных исследований,
413
+ ул. Жолио-Кюри 6, Дубна, Московская область, Россия, 141980
414
+ Предпосылки. Гиперболические комплексные числа применяются при описании гиперболи-
415
+ ческих пространств. Одним из известных примером таких пространств является пространство
416
+ Минковского, играющее ведущее значение в задачах частной теории относительности, электро-
417
+ динамики. Однако такие числа не очень распространены в разных языках программирования.
418
+ Цель. Представляет интерес реализация гиперболических комплексных в языках научного
419
+ программирования, в частности, в языке Julia. Методы. В основе языка Julia лежит концепция
420
+ множественной диспетчеризации (multiple dispatch). Эта концепция является расширением
421
+ концепции полиморфизма для объектно-ориентированных языков программирования. Для
422
+ реализации гиперболических комплексных чисел использован подход множественной дис-
423
+ петчеризацию языка Julia. Результаты. В результате получена библиотека, реализующая
424
+ гиперболические числа. Выводы. По результатам исследования можно сделать вывод об
425
+ удобстве и естественности концепции множественной диспетчеризации в языках научного
426
+ программирования.
427
+ Keywords: язык программирования Julia, множественная диспетчеризация, абстрактные типы данных,
428
+ конвертация типов, параметрические структуры, гиперболические комплексные числа
429
I. INTRODUCTION
The Julia programming language [1, 2] is a promising language designed for scientific computing. At present, Julia has reached a stable state. As conceived by its developers, Julia solves the two-language problem. This problem is that for rapid prototyping, data processing, and visualization one uses an interpreted dynamic language or a mathematical package (Python, Matlab, etc.), while for intensive numerical computation the program has to be rewritten in a statically typed compiled language (C/C++, Fortran).
This problem can be illustrated by the example of Python, which has gained wide popularity as an interface "glue language". A large number of wrapper libraries have been written in it, using Python code to call C/C++ and Fortran functions from precompiled libraries. For example, the well-known NumPy library [3] consists of 51% C code and only 47% Python code (the remaining percentage is split between C++, Fortran, JavaScript, and Unix shell).
The Julia language combines the flexibility of interpreted dynamically typed languages with the performance of compiled statically typed languages.
The basic part of the Julia language is very similar to other scientific programming languages and therefore presents no difficulty to learn. However, the core of Julia is built around the concept of multiple dispatch [4], which is rarely found in other languages. This mechanism is what essentially distinguishes Julia from other languages, and understanding it is essential for taking full advantage of Julia.
A. Structure of the paper
In this paper the authors pay particular attention to illustrating the mechanism of multiple dispatch and other mechanisms closely related to it.
In the first part of the paper we give the necessary definitions and illustrate the concept of multiple dispatch with simple examples that convey the syntax associated with this part of the language and capture the essence of the approach. In the second part we give an example implementation of hyperbolic complex numbers in the Julia language. This example touches not only on multiple dispatch, but also on the type promotion mechanism, the hierarchy of abstract types, overloading of arithmetic operators, and the definition of user-defined data types.
\ast korolkova-av@rudn.ru
\dagger gevorkyan-mn@rudn.ru
‡ kulyabov-ds@rudn.ru
arXiv:2301.01707v1 [cs.MS] 4 Jan 2023
II. MULTIPLE DISPATCH
A. General definitions
Dynamic dispatch is a mechanism that selects which of the several implementations of a polymorphic function (or operator) should be called in a given concrete case [5]. The choice of implementation is made at program run time. Multiple dispatch is based on dynamic dispatch: the implementation of a polymorphic function is chosen based on the types, the number, and the order of the function's arguments. This realizes runtime polymorphic dispatch [6, 7]. Note also that besides the term "multiple dispatch", the term multimethod is also used.
The multiple dispatch mechanism is similar to the function and operator overloading implemented, for example, in C++. Function overloading, however, is resolved entirely at compile time, whereas multiple dispatch must also work at program run time (runtime polymorphism).
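The contrast with single dispatch can be made concrete. Python's standard library (used here only for comparison; it is not part of the paper's Julia code) provides functools.singledispatch, which selects an implementation by the runtime type of the first argument alone, whereas multiple dispatch considers all arguments:

```python
from functools import singledispatch

# Python's functools.singledispatch chooses an implementation by the
# runtime type of the FIRST argument only -- single dispatch.
@singledispatch
def describe(x, y):
    return "generic"

@describe.register(int)
def _(x, y):
    return "int first"

@describe.register(str)
def _(x, y):
    return "str first"

# The first argument drives the choice; the second is ignored by dispatch.
print(describe(1, "a"))    # int first
print(describe("a", 1))    # str first
print(describe(1.0, 1))    # generic
```

A multiple-dispatch system would also be able to distinguish, say, (int, str) from (int, int), which here would require manual type checks on the second argument.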
B. Multiple dispatch in Julia
To illustrate the multiple dispatch mechanism, consider the following example of Julia code.
function f(x, y)
println("Generic implementation")
return x + y
end
function f(x)
println("Implementation for one argument")
return x
end
function f(x::Integer, y::Integer)
println("Implementation for integers")
return x + y
end
function f(x::String, y::String)
println("Implementation for strings")
return x * " " * y
end
function f(x::Tuple{Int, Int}, y::Tuple{Int, Int})
println("Implementation for tuples of two integer elements")
return (x[1], x[2], y[1], y[2])
end
In this example we created five implementations of the function f that differ from each other in their signatures. In Julia terms, this means that the single function f now has five different methods. In the first two methods we used no type annotations, so the argument types will be determined either at compile time or at run time (as in interpreted languages). It is also worth noting that Julia uses dynamic JIT (just-in-time) compilation, so the compilation stage is not explicitly separated from the execution stage from the user's point of view.
The arguments of the next three methods are annotated with types, so these methods will be called only when the argument types match the annotations. The method of f for strings uses the concatenation operator *. The creators of the language justify the choice of the multiplication sign * over the more traditional addition sign + by the fact that string concatenation is a non-commutative operation, so it is more logical to denote it by the multiplication sign rather than the addition sign, which is most often used for commutative operations.
The following code fragment illustrates how multiple dispatch works at compile time. The macro @show prints the name of the function and the arguments passed to it.
@show f(2.0, 1)
@show f(2, 2)
@show f(0x2, 0x1) # hexadecimal numbers
@show f("Hello", "world")
@show f(3)
@show f([1, 2], [3, 4])
@show f((1, 2), (3, 4))
• In the first line we passed the function arguments of floating-point type, so the generic implementation was called. Since the operator + is defined for floating-point numbers, the function completed successfully and gave the correct result.
• In the second and third lines the methods for integers were called. Note that the type Integer is an abstract type and includes the signed and unsigned integers from 1 to 16 bytes in size defined in the language core. Numbers written in hexadecimal notation are interpreted as unsigned integers by default.
• In the fourth line the method for strings was called; in the fifth line, the method for one argument.
• In the sixth line two arrays were passed as arguments. The operation + is defined for arrays, so the function executed without errors and returned the element-wise sum.
• In the seventh line the arguments of the function are tuples consisting of two integers. Since we defined a method for this combination of arguments, the function worked correctly.
Generic implementation
f(2.0, 1) = 3.0
Implementation for integers
f(2, 2) = 4
Implementation for integers
f(0x02, 0x01) = 0x03
Implementation for strings
f("Hello", "world") = "Hello world"
Implementation for one argument
f(3) = 3
Generic implementation
f([1, 2], [3, 4]) = [4, 6]
Implementation for tuples of two integer elements
f((1, 2), (3, 4)) = (1, 2, 3, 4)
The examples given would also work correctly in languages supporting function overloading and do not demonstrate the specifics of dynamic dispatch, since the argument types are known at compile time and available to the translator.
To verify genuinely dynamic method invocation, consider the following code:
print("Enter an integer:")
# Read a line and convert it to an integer type
@show n = parse(Int32, readline())
if n > 0
x = 1.2; y = 0.1
else
x = 1; y = 2
end
f(x, y)
Here the types of the values of the variables x and y are not known at compile time, since they depend on the number the user enters while the program is running. Nevertheless, for integer x and y the corresponding method is called.
III. HYPERBOLIC NUMBERS
We will use hyperbolic numbers to illustrate the multiple dispatch capabilities of the Julia language, so we restrict ourselves to the definition and the basic arithmetic operations.
Hyperbolic numbers [8–11], along with elliptic and parabolic numbers, are a generalization of complex numbers. Hyperbolic numbers can be defined as follows:
z = x + jy, j² = 1, j ≠ ±1.
We will call the quantity j the hyperbolic imaginary unit, and the quantities x and y the real and imaginary parts, respectively.
For two hyperbolic numbers z1 = x1 + jy1 and z2 = x2 + jy2 the following arithmetic operations hold.
Addition: z1 + z2 = (x1 + x2) + j(y1 + y2).
Multiplication: z1 z2 = (x1 x2 + y1 y2) + j(x1 y2 + x2 y1).
Conjugation: z* = x − jy.
Inverse: z⁻¹ = x/(x² − y²) − j y/(x² − y²).
Division: z1/z2 = (x1 x2 − y1 y2)/(x2² − y2²) + j (x2 y1 − x1 y2)/(x2² − y2²).
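These formulas are easy to cross-check numerically. The following Python fragment is an illustrative sketch (not the library developed in this paper): it encodes the operations above directly and verifies that z·z⁻¹ = 1 and that division agrees with multiplication by the inverse, provided x² ≠ y².

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hyp:
    """A hyperbolic number z = x + j*y with j^2 = 1 (illustrative sketch)."""
    re: float
    jm: float

    def __add__(self, other):
        return Hyp(self.re + other.re, self.jm + other.jm)

    def __mul__(self, other):
        # (x1 + j y1)(x2 + j y2) = (x1 x2 + y1 y2) + j (x1 y2 + x2 y1)
        return Hyp(self.re * other.re + self.jm * other.jm,
                   self.re * other.jm + self.jm * other.re)

    def conj(self):
        return Hyp(self.re, -self.jm)

    def inv(self):
        # z^{-1} = x/(x^2 - y^2) - j y/(x^2 - y^2); requires x^2 != y^2
        d = self.re**2 - self.jm**2
        return Hyp(self.re / d, -self.jm / d)

    def __truediv__(self, other):
        return self * other.inv()

z1, z2 = Hyp(3.0, 1.0), Hyp(2.0, 1.0)
print(z2 * z2.inv())   # the unit 1 + 0j, up to rounding
print(z1 / z2)         # same result as z1 * z2.inv()
```

Note that hyperbolic numbers with x² = y² are zero divisors and have no inverse, which is why the denominator check matters.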
The implementation of hyperbolic numbers is largely analogous to that of complex numbers. One needs to overload the operators +, -, * and /, the root and power functions, the elementary mathematical functions, and so on. For the purpose of illustrating the mechanism of multiple dispatch, however, the main interest lies precisely in the arithmetic operations. This is because the elementary functions take only one argument, and for them it suffices to define a single method. In the case of the arithmetic operators, one must provide for combinations of arguments of different numeric types. For example, it should be possible to add a hyperbolic number to an integer, a rational, or an irrational number, which automatically involves not only multiple dispatch, but also the type promotion mechanisms, the hierarchy of abstract types, and overloading of the default constructor.
We therefore restrict ourselves to examples implementing the arithmetic operations, leaving aside the mathematically more involved evaluation of various elementary functions of a hyperbolic number.
Note that besides the term hyperbolic numbers, the literature also uses the terms double numbers, split complex numbers, perplex numbers, and complex numbers of hyperbolic type [8, 12–15].
IV. IMPLEMENTATION OF HYPERBOLIC NUMBERS IN JULIA
A. Declaring the data structure
When implementing hyperbolic numbers in Julia, the code for complex numbers available in the official Julia repository was taken as a basis. We also used the groundwork obtained when implementing parabolic complex numbers [16]. The new type Hyperbolic is defined by an immutable structure:
struct Hyperbolic{T<:Real} <: Number
"Real part"
re::T
"Imaginary part"
jm::T
end
Fig. 1. The place of hyperbolic numbers in the Julia type hierarchy (the abstract types Number, Real, Integer, Signed, Unsigned, AbstractFloat; the primitive integer and floating-point types; and the structures Hyperbolic, Complex, and Rational)
The structure is simple and contains just two fields of parametric type T. It is required that the type T be a subtype of the abstract type Real (the syntax T<:Real). The type Hyperbolic itself is a subtype of the abstract type Number (see Fig. 1). In this way hyperbolic numbers are embedded into the already existing hierarchy of numeric types.
Once the structure is defined, a new object of type Hyperbolic can be created by calling the default constructor. For example, the number h = 1 + j3 is defined as follows:
h = Hyperbolic{Float64}(1, 3)
After creation, the fields of the structure can be accessed as h.re and h.jm, but an attempt to change a field value of an already existing object will lead to an error, since structures are immutable entities.
If both arguments of the constructor have the same type T, it need not be specified explicitly in curly braces, since it will be inferred automatically from the type of the arguments passed.
h = Hyperbolic(1, 3)
However, if the argument types differ, the default constructor cannot perform an implicit type conversion and create a new object. In this case the parametric type must be specified explicitly.
# Float64 and Int64
h = Hyperbolic(1.0, 3) # Error
h = Hyperbolic{Float64}(1.0, 3) # Correct
B. Additional constructors
The default constructor is an ordinary function whose name coincides with the name of the type. By creating additional methods for this function, one can create additional constructors to handle various special cases.
For example, to avoid specifying the parametric type every time, one should add a new constructor of the following form:
"""Constructor №2"""
function Hyperbolic(x::Real, y::Real)
return Hyperbolic(promote(x, y)...)
end
The function promote converts the types of the arguments passed to it to a common type and returns the result as a tuple. The postfix operator ... unpacks the tuple and passes its elements as arguments to the constructor function. The language core defines promotion rules for all subtypes of the abstract type Real, so the constructor will now work correctly for any combination of arguments, as long as the rule T<:Real holds. For example, the following code works correctly:
# Rational and Float64
h = Hyperbolic(1//3, pi)
>> Hyperbolic{Float64}(0.3333333333333333, 3.141592653589793)
We passed the constructor a rational number (type Rational) and a built-in global constant (the number π) of type Float64. The type promotion rule then fired, and both arguments were promoted to Float64 as the more general type.
Declaring two more additional constructors makes it possible to define hyperbolic numbers with a zero imaginary part:
"""Constructor №3"""
function Hyperbolic{T}(x::Real) where {T<:Real}
return Hyperbolic{T}(x, 0)
end
"""Constructor №4"""
function Hyperbolic(x::Real)
return Hyperbolic(promote(x, 0)...)
end
Constructor №3 is a parametric function, declared using the where construct. The parameter T is a subtype of the abstract type Real. Constructor №4 works similarly to constructor №2.
Two more constructors allow other hyperbolic numbers to be passed as the constructor argument.
"""Constructor №5"""
function Hyperbolic{T}(h::Hyperbolic) where {T<:Real}
Hyperbolic{T}(h.re, h.jm)
end
"""Constructor №6"""
function Hyperbolic(h::Hyperbolic)
return Hyperbolic(promote(h.re, h.jm)...)
end
For greater convenience, one can also create a separate constant for the imaginary unit j:
const jm = Hyperbolic(0, 1)
721
C. Data output
To be able to print values of the hyperbolic type in a compact and readable form, one should add corresponding methods for the function show from the Base module.
function Base.show(io::IO, h::Hyperbolic)
print(io, h.re, "+", h.jm, "j")
end
The function show is used when printing data to the console; in particular, it is called by the function println and the macro @show. In the code listings and outputs below it is assumed that the show method for hyperbolic numbers has been added.
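The same kind of customization exists in most languages. For comparison only (this is a hypothetical Python class, not the paper's Julia type), the analogous step in Python is to define __str__:

```python
class HNum:
    """Illustrative stand-in for a hyperbolic number (not the paper's type)."""
    def __init__(self, re, jm):
        self.re, self.jm = re, jm

    def __str__(self):
        # Mirrors the Julia method: print(io, h.re, "+", h.jm, "j")
        return f"{self.re}+{self.jm}j"

print(HNum(1, 3))  # 1+3j
```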
D. Type promotion
Before proceeding to implement methods for arithmetic operations on hyperbolic numbers, we need to define the type promotion rules. To do this, we create new methods for the function promote_rule from the Base module.
function Base.promote_rule(::Type{Hyperbolic{T}}, ::Type{S}) where {T<:Real, S<:Real}
return Hyperbolic{promote_type(T, S)}
end
function Base.promote_rule(::Type{Hyperbolic{T}}, ::Type{Hyperbolic{S}}) where {T<:Real, S<:Real}
return Hyperbolic{promote_type(T, S)}
end
The arguments to promote_rule are the parametric types that should be promoted to a single enclosing type. In our case this is possible if one of the types is a subtype of Real, in which case the enclosing type is Hyperbolic.
After the promote_rule methods are added, it becomes possible to use the functions promote, promote_type, and convert.
>>h = Hyperbolic(1 // 2)
>>promote(h, 1)
(1//2+0//1j, 1//1+0//1j)
>>promote_type(Hyperbolic{Int64}, Float32)
Hyperbolic{Float32}
The first function is already familiar to us. The second one derives the enclosing type not of concrete variable values, but of the types themselves. A type in Julia is a first-class object (of type DataType) and can be assigned to other variables, passed as a function argument, and so on.
The function convert converts the type of a concrete value, for example:
>>convert(Hyperbolic, 1)
1+0j
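The promotion machinery itself can be emulated in other languages. Below is a hand-rolled Python analogue of promote for a few numeric types; this is an illustrative sketch only, whereas Julia's actual mechanism is driven by the promote_rule methods defined above:

```python
from fractions import Fraction

def promote(*values):
    """Coerce all arguments to a common numeric type, widest type first.
    A hand-rolled analogue of Julia's promote for int/Fraction/float."""
    if any(isinstance(v, float) for v in values):
        return tuple(float(v) for v in values)
    if any(isinstance(v, Fraction) for v in values):
        return tuple(Fraction(v) for v in values)
    return values

print(promote(1, 2.5))             # (1.0, 2.5)
print(promote(Fraction(1, 3), 1))  # (Fraction(1, 3), Fraction(1, 1))
print(promote(1, 2))               # (1, 2)
```

In Julia the set of such rules is extensible by adding promote_rule methods, so user-defined types like Hyperbolic participate on the same footing as the built-in numbers.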
E. Arithmetic operations on hyperbolic numbers
After adding the type promotion methods, we can proceed to add methods for the arithmetic operations. A feature of Julia is that arithmetic operations are implemented not as operators but as functions. For example, the following calls are valid:
>>+(1,2)
3
>>+(1,2,3,4)
10
>>+((i for i in 1:10)...) # numbers from 1 to 10
55
Because of this, adding methods for arithmetic operations is no different from the corresponding process for any other function.
Methods for the unary operations + and - are added as follows:
Base.:+(h::Hyperbolic) = Hyperbolic(+h.re, +h.jm)
Base.:-(h::Hyperbolic) = Hyperbolic(-h.re, -h.jm)
Here the short form of function declaration is used.
Methods for binary addition, subtraction, multiplication, and division are added in the same way. We give here the code for addition and multiplication.
# Binary + and *
function Base.:+(x::Hyperbolic, y::Hyperbolic)
xx = x.re + y.re
yy = x.jm + y.jm
Hyperbolic(xx, yy)
end
function Base.:*(x::Hyperbolic, y::Hyperbolic)
xx = x.re * y.re + x.jm * y.jm
yy = x.re * y.jm + x.jm * y.re
return Hyperbolic(xx, yy)
end
V. CONCLUSION
We have examined the multiple dispatch mechanism underlying the Julia language, using the implementation of hyperbolic numbers as an example. This example allowed us to touch on such language concepts as the data type hierarchy, composite data types, type promotion mechanisms, function overloading (creating new methods for functions, in Julia terms), and so on.
ACKNOWLEDGMENTS
This publication has been supported by the RUDN University Strategic Academic Leadership Program.
[1] Bezanson J., Edelman A., Karpinski S., Shah V. B. Julia: A fresh approach to numerical computing // SIAM Review. 2017. Jan. Vol. 59, no. 1. P. 65–98.
[2] Gevorkyan M. N., Kulyabov D. S., Sevastyanov L. A. Review of Julia programming language for scientific computing // The 6th International Conference "Distributed Computing and Grid-technologies in Science and Education". 2014. P. 27.
[3] Oliphant T. E. Guide to NumPy. 2nd edition. CreateSpace Independent Publishing Platform, 2015. ISBN: 978-1517300074.
[4] Zappa Nardelli F., Belyakova J., Pelenitsyn A., Chung B., Bezanson J., Vitek J. Julia subtyping: a rational reconstruction // Proceedings of the ACM on Programming Languages. 2018. Oct. Vol. 2, no. OOPSLA. P. 1–27.
[5] Driesen K., Hölzle U., Vitek J. Message Dispatch on Pipelined Processors // ECOOP'95 — Object-Oriented Programming, 9th European Conference, Aarhus, Denmark, August 7–11, 1995 / Ed. by M. Tokoro, R. Pareschi. Lecture Notes in Computer Science. Springer Berlin Heidelberg, 1995. P. 253–282. ISBN: 9783540601609.
[6] Muschevici R., Potanin A., Tempero E., Noble J. Multiple dispatch in practice // OOPSLA'08: Proceedings of the 23rd ACM SIGPLAN conference on Object-oriented programming systems languages and applications. ACM Press, 2008. P. 563–582.
[7] Gowda S., Ma Y., Cheli A., Gwóźdź M., Shah V. B., Edelman A., Rackauckas C. High-Performance Symbolic-Numerics via Multiple Dispatch // ACM Communications in Computer Algebra. 2022. Jan. Vol. 55, no. 3. P. 92–96.
[8] Yaglom I. M. Complex numbers and their application in geometry [in Russian] // Matematika, ee prepodavanie, prilozheniya i istoriya. 1961. Vol. 6 of Matematicheskoe prosveshchenie, ser. 2. P. 61–106. Access mode: http://mi.mathnet.ru/mp680.
[9] Yaglom I. M., Rozenfeld B. A., Yasinskaya E. U. Projective metrics [in Russian] // Uspekhi Matematicheskikh Nauk. 1964. Vol. 19, no. 5 (119). P. 51–113.
[10] Kulyabov D. S., Korolkova A. V., Sevastianov L. A. Complex Numbers for Relativistic Operations. 2021. Dec.
[11] Kulyabov D. S., Korolkova A. V., Gevorkyan M. N. Hyperbolic numbers as Einstein numbers // Journal of Physics: Conference Series. 2020. May. Vol. 1557. P. 012027.
[12] Fjelstad P. Extending special relativity via the perplex numbers // American Journal of Physics. 1986. May. Vol. 54, no. 5. P. 416–422.
[13] Band W. Comments on "Extending relativity via the perplex numbers" // American Journal of Physics. 1988. May. Vol. 56, no. 5. P. 469.
[14] Rooney J. On the Three Types of Complex Number and Planar Transformations // Environment and Planning B: Planning and Design. 1978. Vol. 5, no. 1. P. 89–99.
[15] Rooney J. Generalised Complex Numbers in Mechanics // Advances on Theory and Practice of Robots and Manipulators / Ed. by M. Ceccarelli, V. A. Glazunov. Cham: Springer International Publishing, 2014. Vol. 22 of Mechanisms and Machine Science. P. 55–62.
[16] Gevorkyan M. N., Korolkova A. V., Kulyabov D. S. Approaches to the implementation of generalized complex numbers in the Julia language // Workshop on information technology and scientific computing in the framework of the X International Conference Information and Telecommunication Technologies and Mathematical Modeling of High-Tech Systems (ITTMM-2020) / Ed. by D. S. Kulyabov, K. E. Samouylov, L. A. Sevastianov. Vol. 2639 of CEUR Workshop Proceedings. Aachen, 2020. Apr. P. 141–157. Access mode: http://ceur-ws.org/Vol-2639/paper-13.pdf.
-9AzT4oBgHgl3EQfvf1g/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
-NAzT4oBgHgl3EQfFfpP/content/2301.01011v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c9fa6422a9d5cc2cf3b14051e44e62b8eb62e8b56d89ffd66ba361b3117ae3f8
+ size 301646
-NAzT4oBgHgl3EQfFfpP/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3b951cd38103869c8068c3dd0bc22076f6822adfec5789f85918e2cb998c93c4
+ size 126241
-NE1T4oBgHgl3EQf8QVm/content/2301.03543v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a4e221a3ba68bbdb17c03e2912435e9f8d8fc4ff0217169563de12f868a6e551
+ size 696723
-NE1T4oBgHgl3EQf8QVm/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ebd77537b72c9fd505ba8e982f27a34e7de20ccc49bf82f2a8b0646e35e3213
+ size 3801133
-NE1T4oBgHgl3EQf8QVm/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:71e6b0e3bcdf3fefe8079e39ad7db5e1195b18fb48c3e0b42064af98257ac614
+ size 159977
.gitattributes CHANGED
@@ -8451,3 +8451,74 @@ _tA0T4oBgHgl3EQfPf_J/content/2301.02177v1.pdf filter=lfs diff=lfs merge=lfs -tex
8451
 vdAzT4oBgHgl3EQfP_uD/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8452
 MNE1T4oBgHgl3EQfHAOD/content/2301.02921v1.pdf filter=lfs diff=lfs merge=lfs -text
8453
 VtE4T4oBgHgl3EQfng0j/content/2301.05176v1.pdf filter=lfs diff=lfs merge=lfs -text
8454
+ a9FAT4oBgHgl3EQf4x7K/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8455
+ eNE2T4oBgHgl3EQfbQcn/content/2301.03882v1.pdf filter=lfs diff=lfs merge=lfs -text
8456
+ ndE3T4oBgHgl3EQf7Asa/content/2301.04794v1.pdf filter=lfs diff=lfs merge=lfs -text
8457
+ IdFKT4oBgHgl3EQfdi7q/content/2301.11821v1.pdf filter=lfs diff=lfs merge=lfs -text
8458
+ QdAyT4oBgHgl3EQfU_c5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8459
+ UNE4T4oBgHgl3EQfMAxr/content/2301.04943v1.pdf filter=lfs diff=lfs merge=lfs -text
8460
+ JNE2T4oBgHgl3EQfUgdI/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8461
+ IdFKT4oBgHgl3EQfdi7q/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8462
+ 4dE2T4oBgHgl3EQf6Qjc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8463
+ PtE4T4oBgHgl3EQf-Q74/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8464
+ G9FKT4oBgHgl3EQfcS40/content/2301.11815v1.pdf filter=lfs diff=lfs merge=lfs -text
8465
+ dtAyT4oBgHgl3EQfwvmK/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8466
+ JNE4T4oBgHgl3EQfIgz1/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8467
+ 4dE2T4oBgHgl3EQf6Qjc/content/2301.04199v1.pdf filter=lfs diff=lfs merge=lfs -text
8468
+ a9FAT4oBgHgl3EQf4x7K/content/2301.08729v1.pdf filter=lfs diff=lfs merge=lfs -text
8469
+ p9AzT4oBgHgl3EQfAvo4/content/2301.00930v1.pdf filter=lfs diff=lfs merge=lfs -text
8470
+ VdE0T4oBgHgl3EQf2wKz/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8471
+ 39AyT4oBgHgl3EQfP_aH/content/2301.00036v1.pdf filter=lfs diff=lfs merge=lfs -text
8472
+ uNE0T4oBgHgl3EQfsAG-/content/2301.02574v1.pdf filter=lfs diff=lfs merge=lfs -text
8473
+ TtE0T4oBgHgl3EQf2QKO/content/2301.02710v1.pdf filter=lfs diff=lfs merge=lfs -text
8474
+ WtFRT4oBgHgl3EQfMzdN/content/2301.13507v1.pdf filter=lfs diff=lfs merge=lfs -text
8475
+ PtFQT4oBgHgl3EQfYzaZ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8476
+ dtFAT4oBgHgl3EQfZB2n/content/2301.08543v1.pdf filter=lfs diff=lfs merge=lfs -text
8477
+ PtFQT4oBgHgl3EQfYzaZ/content/2301.13313v1.pdf filter=lfs diff=lfs merge=lfs -text
8478
+ 4NAzT4oBgHgl3EQfffyv/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8479
+ 6NE0T4oBgHgl3EQfvwGn/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8480
+ 4NAzT4oBgHgl3EQfffyv/content/2301.01454v1.pdf filter=lfs diff=lfs merge=lfs -text
8481
+ DdE1T4oBgHgl3EQf-Abs/content/2301.03564v1.pdf filter=lfs diff=lfs merge=lfs -text
8482
+ G9E3T4oBgHgl3EQftwvH/content/2301.04679v1.pdf filter=lfs diff=lfs merge=lfs -text
8483
+ StA0T4oBgHgl3EQfD_-h/content/2301.02012v1.pdf filter=lfs diff=lfs merge=lfs -text
8484
+ x9AzT4oBgHgl3EQfd_xv/content/2301.01429v1.pdf filter=lfs diff=lfs merge=lfs -text
8485
+ dtFJT4oBgHgl3EQf_i1w/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8486
+ 7dA0T4oBgHgl3EQfOf8M/content/2301.02160v1.pdf filter=lfs diff=lfs merge=lfs -text
8487
+ WtFRT4oBgHgl3EQfMzdN/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8488
+ 1dE2T4oBgHgl3EQfNQZD/content/2301.03734v1.pdf filter=lfs diff=lfs merge=lfs -text
8489
+ 7dA0T4oBgHgl3EQfOf8M/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8490
+ x9AzT4oBgHgl3EQfd_xv/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8491
+ aNFPT4oBgHgl3EQfvTWR/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8492
+ 1dE2T4oBgHgl3EQfNQZD/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8493
+ LdAzT4oBgHgl3EQfyv4r/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8494
+ 79E4T4oBgHgl3EQf2g20/content/2301.05299v1.pdf filter=lfs diff=lfs merge=lfs -text
8495
+ dtFJT4oBgHgl3EQfSSx0/content/2301.11499v1.pdf filter=lfs diff=lfs merge=lfs -text
8496
+ -NE1T4oBgHgl3EQf8QVm/content/2301.03543v1.pdf filter=lfs diff=lfs merge=lfs -text
8497
+ jNAzT4oBgHgl3EQfpP3S/content/2301.01611v1.pdf filter=lfs diff=lfs merge=lfs -text
8498
+ 8tE1T4oBgHgl3EQfngT9/content/2301.03311v1.pdf filter=lfs diff=lfs merge=lfs -text
8499
+ -9AyT4oBgHgl3EQfdffn/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8500
+ jdE1T4oBgHgl3EQfgASs/content/2301.03225v1.pdf filter=lfs diff=lfs merge=lfs -text
8501
+ 5tAyT4oBgHgl3EQf2fk_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8502
+ TdE3T4oBgHgl3EQfaArM/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8503
+ G9FKT4oBgHgl3EQfcS40/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8504
+ R9A0T4oBgHgl3EQfDv_g/content/2301.02009v1.pdf filter=lfs diff=lfs merge=lfs -text
8505
+ DdE1T4oBgHgl3EQf-Abs/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8506
+ jdE1T4oBgHgl3EQfgASs/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8507
+ kdFAT4oBgHgl3EQfbB3C/content/2301.08555v1.pdf filter=lfs diff=lfs merge=lfs -text
8508
+ jNAzT4oBgHgl3EQfpP3S/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8509
+ ztE2T4oBgHgl3EQf4Qjg/content/2301.04180v1.pdf filter=lfs diff=lfs merge=lfs -text
8510
+ D9E2T4oBgHgl3EQfSgcI/content/2301.03792v1.pdf filter=lfs diff=lfs merge=lfs -text
8511
+ R9A0T4oBgHgl3EQfDv_g/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8512
+ btFLT4oBgHgl3EQfYS80/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8513
+ ndAzT4oBgHgl3EQfqf0O/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8514
+ 8tE1T4oBgHgl3EQfngT9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8515
+ v9E0T4oBgHgl3EQfsgHf/content/2301.02581v1.pdf filter=lfs diff=lfs merge=lfs -text
8516
+ -NAzT4oBgHgl3EQfFfpP/content/2301.01011v1.pdf filter=lfs diff=lfs merge=lfs -text
8517
+ -NE1T4oBgHgl3EQf8QVm/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8518
+ aNE1T4oBgHgl3EQfKQOD/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8519
+ D9E2T4oBgHgl3EQfSgcI/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8520
+ adE5T4oBgHgl3EQfeg9M/content/2301.05619v1.pdf filter=lfs diff=lfs merge=lfs -text
8521
+ u9E0T4oBgHgl3EQfsQGm/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8522
+ eNFIT4oBgHgl3EQfoyuB/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
8523
+ ZtE5T4oBgHgl3EQfeA-9/content/2301.05616v1.pdf filter=lfs diff=lfs merge=lfs -text
8524
+ s9E3T4oBgHgl3EQfNAmL/content/2301.04379v1.pdf filter=lfs diff=lfs merge=lfs -text
0NE5T4oBgHgl3EQfOQ6v/content/tmp_files/2301.05496v1.pdf.txt ADDED
@@ -0,0 +1,1515 @@
+ Learning Transformations To Reduce the Geometric Shift in Object Detection
+ Vidit Vidit1 Martin Engilberge1 Mathieu Salzmann1,2
+ CVLab, EPFL1, ClearSpace SA2
+ firstname.lastname@epfl.ch
+ Abstract
+ The performance of modern object detectors drops when the test distribution differs from the training one. Most of the methods that address this focus on object appearance changes caused by, e.g., different illumination conditions, or gaps between synthetic and real images. Here, by contrast, we tackle geometric shifts emerging from variations in the image capture process, or due to the constraints of the environment causing differences in the apparent geometry of the content itself. We introduce a self-training approach that learns a set of geometric transformations to minimize these shifts without leveraging any labeled data in the new domain, nor any information about the cameras. We evaluate our method on two different shifts, i.e., a camera's field of view (FoV) change and a viewpoint change. Our results evidence that learning geometric transformations helps detectors to perform better in the target domains.
+ 1. Introduction
+ While modern object detectors [1, 2, 17, 23, 24] achieve impressive results, their performance decreases when the test data depart from the training distribution. This problem arises in the presence of appearance variations due to, for example, differing illumination or weather conditions. Considering the difficulty and cost of acquiring annotated data in the test (i.e., target) domain, Unsupervised Domain Adaptation (UDA) has emerged as the standard strategy to address such scenarios [3, 4, 9, 26, 38].
+ In this context, much effort has been made to learn domain-invariant features, such that the source and target distributions in this feature space are similar. This has led to great progress in situations where the appearance of the objects changes drastically from one domain to the other, as in the case of real-to-sketch adaptation (e.g., Pascal VOC [10] to Comics [15]), or weather adaptation (e.g., Cityscapes [6] to Foggy Cityscapes [27]). Nevertheless, such object appearance changes are not the only sources of domain shifts. They can also have geometric origins. For example, as shown in Fig. 1, they can be due to a change in camera viewpoint or field-of-view (FoV), or a change of object scale due to different scene setups. In practice, such geometric shifts typically arise from a combination of various factors, including but not limited to the ones mentioned above.
+ Figure 1. Geometric shifts. (Left) Due to a different FoV, the cars highlighted in green undergo different distortions even though they appear in similar image regions. (Right) Different camera viewpoints (front facing vs. downward facing) yield different distortions and occlusion patterns for pedestrian detection. (Bottom) The distributions of pedestrian bounding box sizes in Cityscapes [6] and MOT [8] differ significantly, as the pedestrians are usually far away or in the periphery in Cityscapes. The top images are taken from Cityscapes [6], and the bottom-left and right ones from KITTI [12] and MOT [8], respectively.
+ In this paper, we introduce a domain adaptation approach tackling such geometric shifts. To the best of our knowledge, the recent work of [13] constitutes the only attempt at considering such geometric distortions. However, it introduces a method solely dedicated to FoV variations, assuming that the target FoV is fixed and known. Here, we develop a more general framework able to cope with a much broader family of geometric shifts.
+ arXiv:2301.05496v1 [cs.CV] 13 Jan 2023
+ To this end, we model geometric transformations as a combination of multiple homographies. We show both theoretically and empirically that this representation is sufficient to encompass a broad variety of complex geometric transformations. We then design an aggregator block that can be incorporated into the detector to provide it with the capacity to tackle geometric shifts. We use this modified detector to generate pseudo labels for the target domain, which let us optimize the homographies so as to reduce the geometric shift.
+ Our contributions can be summarized as follows. (i) We tackle the problem of general geometric shifts for object detection. (ii) We learn a set of homographies using unlabeled target data, which alleviates the geometric bias arising in source-only training. (iii) Our method does not require prior information about the target geometric distortions and generalizes to a broad class of geometric shifts. Our experiments demonstrate the benefits of our approach in several scenarios. In the presence of FoV shifts, our approach yields similar performance to the FoV-dedicated framework of [13] but without requiring any camera information. As such, it generalizes better to other FoVs. Furthermore, we show the generality of our method by using it to adapt to a new camera viewpoint in the context of pedestrian detection.
+ 2. Related Work
+ Unsupervised Domain Adaptation (UDA). UDA for image recognition [11, 21, 22, 30, 32, 35, 36] and object detection [3, 4, 9, 20, 26, 38] has made great progress in the past few years. The common trend in both tasks consists of learning domain-invariant features. For object detection, this entails aligning the global (e.g., illumination, weather) and local (foreground objects) features in the two domains. In this context, [3, 5, 26, 28] align image- and instance-level features in the two domains via adversarial learning [11]; [33] learns category-specific attention maps to better align specific image regions; [38] clusters the proposed object regions using k-means clustering and uses the centroids for instance-level alignment. While this successfully tackles domain shifts caused by object appearance variations, it fails to account for the presence of shifts due to the image capture process itself, such as changes in camera intrinsics or viewpoint. The only initial step at considering a geometric shift is the work of [13], which shows the existence of an FoV gap in driving datasets [6, 12] and proposes a Position Invariant Transform (PIT) that corrects the distortions caused specifically by an FoV change. In essence, PIT undistorts the images by assuming knowledge of the target FoV. By contrast, here, we introduce an approach that generalizes to a broad family of geometric shifts by learning transformations without requiring any camera information.
+ Self-training. Self-training, generally employed in the semi-supervised setting, offers an alternative to learning domain-invariant features and utilizes unlabeled data to improve a detector's performance. In this context, [29] uses a student-teacher architecture where the teacher model is trained with supervised data and generates pseudo-labels on unannotated data. These pseudo-labels are then used to train a student model. While effective in the standard semi-supervised learning scenario, the quality of the pseudo-labels obtained with this approach tends to deteriorate when the labeled and unlabeled data present a distribution shift. [9, 20] have therefore extended this approach to domain adaptation by using the Mean Teacher strategy of [31] to generate reliable pseudo-labels in the target domain. Other approaches include the use of CycleGAN [37] generated images to train an unbiased teacher model [9], and that of different augmentation strategies to generate robust pseudo-labels [20]. Our approach also follows a self-training strategy but, while these works focus on object appearance shifts, we incorporate learnable blocks to address geometric shifts. As shown in our experiments, this lets us outperform the state-of-the-art AdaptTeacher [20].
+ Learning Geometric Transformations. End-to-end learning of geometric transformations has been used to boost the performance of deep networks. For example, Spatial Transformer Networks (STNs) [16] reduce the classification error by learning to correct for affine transformations; deformable convolutions [7] model geometric transformations by applying the convolution kernels to non-local neighborhoods. These methods work well when annotations are available for supervision, and make the network invariant to the specific geometric transformations seen during training. Here, by contrast, we seek to learn transformations in an unsupervised manner and allow the network to generalize to unknown target transformations.
+ 3. Modeling Geometric Transformations
+ In the context of UDA, multiple geometric differences can be responsible for the gap between the domains. Some can be characterized by the camera parameters, such as a change in FoV (intrinsic) or viewpoint (extrinsic), whereas others are content specific, such as a difference in road width between different countries. Ultimately, the geometric shift is typically a combination of different geometric operations. Since the parameters of these operations are unknown, we propose to bridge the domain gap by learning a geometric transform. Specifically, we aggregate the results of multiple perspective transforms, i.e., homographies, to obtain a differentiable operation that can emulate a wide variety of geometric transforms.
+ 3.1. Theoretical Model
+ Let us first show that, given sufficiently many homographies, one can perfectly reproduce any mapping between R^2 \ (0, 0) and R^2.
+ Single homography for a single point. First, we show that a single homography with 4 degrees of freedom can map a point p ∈ R^2 \ (0, 0) to any other point in R^2. To this end, let
+ H = \begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ l_x & l_y & 1 \end{pmatrix}   (1)
+ be a homography, with (s_x, s_y) the scaling factors on the x- and y-axis, respectively, and (l_x, l_y) the perspective factors in x and y, respectively. For any destination point d ∈ R^2, there exists a set of parameters (s_x, s_y, l_x, l_y) such that d = H × p. One such set is (d_x/p_x, d_y/p_y, 0, 0).
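This single-point construction is easy to verify numerically. A minimal sketch (the function names are ours, not from the paper):

```python
import numpy as np

def homography(sx, sy, lx, ly):
    """4-DoF homography from Eq. (1): scaling (sx, sy) and perspective (lx, ly)."""
    return np.array([[sx, 0.0, 0.0],
                     [0.0, sy, 0.0],
                     [lx, ly, 1.0]])

def apply(H, p):
    """Apply H to a 2D point via homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Map p = (2, 4) to d = (3, -1) with the parameter set (dx/px, dy/py, 0, 0).
p, d = (2.0, 4.0), (3.0, -1.0)
H = homography(d[0] / p[0], d[1] / p[1], 0.0, 0.0)
print(apply(H, p))  # → [ 3. -1.]
```

Note that p = (0, 0) must be excluded: the scaling-only parameter set divides by the coordinates of p.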
+ Emulating any geometric transformation. Now that we have shown that a single homography can move a point to any other point in R^2, we describe a simple protocol to emulate any geometric transform. Given an unknown geometric transform T : R^2 \ (0, 0) → R^2, we aim to emulate T with a set of homographies. In general, for an image I ∈ R^{3×h×w}, we can restrict the domain of T to only image coordinates. To this end, we can define a set of homographies H_i ∈ H for i in {1, 2, 3, ..., h × w}, where the parameters of H_i are chosen to mimic the transform T for location i of the image. In this protocol, the aggregation mechanism is trivial, since each homography is in charge of remapping a single pixel coordinate of the original space.
+ While this works in theory, it is of course not viable in practice, since it would require too many homographies. With a smaller number of homographies, each transform needs to remap multiple points, and a more sophisticated aggregation mechanism is required. Specifically, the aggregation mechanism needs to select which transform is in charge of remapping which point. In the next section, we empirically show that this strategy lets us closely approximate the spherical projection mapping used in PIT [13].
+ 3.2. Approximating PIT with Homographies
+ To demonstrate the possibility offered by aggregating multiple homographies, we design an approximation of PIT using only homographies. PIT proposes to correct for an FoV gap by remapping images to a spherical surface. During this transformation, regions further from the center of a scene are compressed with a higher ratio. This variable compression of the space cannot be reproduced by a single homography transformation. To overcome this limitation, we combine the results of multiple homographies that all have different compression rates (scaling parameters). For the aggregation mechanism, we use the optimal strategy of selecting, for each pixel, the homography that best approximates the PIT mapping. As shown in Fig. 2, this combination closely approximates the PIT results with only 5 homographies. Further analysis of these experiments is available in the supplementary material in Fig. A.3.
+ Figure 2. Approximating PIT with homographies. We show the original image (top), the PIT [13] correction (middle), and our approximation of PIT using 5 homographies (bottom). Note that 5 homographies are sufficient to closely match the PIT spherical correction.
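The per-pixel selection idea can be sketched as follows: each pixel keeps the homography whose remapping is closest to the target transform. Here a toy radial compression stands in for the PIT spherical mapping, and the candidate set contains 5 pure scalings; everything else is our own illustrative construction:

```python
import numpy as np

def warp(params, xy):
    """Apply a 4-DoF homography (sx, sy, lx, ly) of Eq. (1) to points xy of shape (..., 2)."""
    sx, sy, lx, ly = params
    x, y = xy[..., 0], xy[..., 1]
    w = lx * x + ly * y + 1.0
    return np.stack([sx * x / w, sy * y / w], axis=-1)

def target(xy):
    """Toy stand-in for PIT: compress points more strongly away from the center."""
    r = np.linalg.norm(xy, axis=-1, keepdims=True)
    return xy / (1.0 + 0.5 * r)

# A 32x32 grid of normalized image coordinates and 5 candidate scalings.
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32)), axis=-1)
candidates = [(s, s, 0.0, 0.0) for s in np.linspace(0.4, 1.0, 5)]

# Per-pixel remapping error of each candidate, shape (5, 32, 32).
errs = np.stack([np.linalg.norm(warp(c, grid) - target(grid), axis=-1) for c in candidates])
best = errs.argmin(axis=0)                  # index of the best homography per pixel
approx_err = errs.min(axis=0).mean()        # error of the pixel-wise aggregation
single_err = errs.mean(axis=(1, 2)).min()   # error of the best single homography
print(approx_err < single_err)  # → True: aggregation beats any single homography
```

Since different pixels pick different scalings, the aggregated error (mean of per-pixel minima) is strictly below the best single-homography error, which is the variable-compression effect described above.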
+ 3.3. Homographies in a Learning Setup
+ In the two previous sections, we have demonstrated both theoretically and empirically the flexibility of aggregating homographies. This makes this representation an ideal candidate for domain adaptation, since the geometric shift between the domains is unknown and can be a combination of different transforms, such as an FoV change, viewpoint change, camera distortion, or appearance distortion. As will be discussed in the next section, by jointly learning the set of perspective transforms and the aggregation mechanism on real data, our model can reduce the geometric shift between the two domains without prior knowledge about this domain gap.
+ 4. Method
+ Let us now introduce our approach to reducing the geometric shift in object detection. Following the standard UDA setting, let D_s = {(I_s, B_s, C_s)} be a labeled source dataset containing images I_s = {I_s^i}_{i=1}^{N_s} with corresponding object bounding boxes B_s = {b_s^i}_{i=1}^{N_s} and object classes C_s = {c_s^i}_{i=1}^{N_s}. Furthermore, let D_t = {I_t} denote an unlabeled target dataset for which only images I_t = {I_t^i}_{i=1}^{N_t} are available, without annotations. Here, we tackle the case where the two domains differ by geometric shifts but assume no knowledge about the nature of these shifts. Below, we first introduce the architecture we developed to handle this and then our strategy to train this model.
+ Figure 3. Architecture: The input image is first transformed by a set of trainable homographies. The feature maps extracted from the transformed images are then unwarped by the inverse homographies to achieve spatial consistency. We then combine the unwarped feature maps using a trainable aggregator, whose output is passed to a detection head. The blocks shown in green correspond to standard FasterRCNN operations. The ⊕ symbol represents the concatenation operation.
+ 4.1. Model Architecture
+ The overall architecture of our approach is depicted in Fig. 3. In essence, and as discussed in Sec. 3, we characterize the geometric changes between the source and target data by a set of transformations T = {H_i}_{i=1}^N. Each H_i in T is a homography of the same form as in Eq. (1). For our method to remain general, we assume the transformations to be unknown, and our goal, therefore, is to learn T to bridge the gap between the domains. This requires differentiability w.r.t. the transformation parameters, which we achieve using the sampling strategy proposed in [16].
+ As shown in Fig. 3, the input image is transformed by the individual homographies in T, and the transformed images are fed to a modified FasterRCNN [24] detector. Specifically, we extract a feature map F_{H_i} ∈ R^{H×W×C} for each transformed image via a feature extractor shared by all transformations. To enforce spatial correspondence between the different F_{H_i}'s, we unwarp them with H_i^{-1}.
+ We then introduce an aggregator A_{θ_g}, parameterized by θ_g, whose goal is to learn a common representation given a fixed number of unwarped feature maps F'_{H_i}. To achieve this, the aggregator takes as input
+ G = F'_{H_1} ⊕ F'_{H_2} ⊕ ... ⊕ F'_{H_N} ∈ R^{H×W×C×N} ,   (2)
+ where ⊕ represents concatenation in the channel dimension. The aggregator outputs a feature map A_{θ_g}(G) ∈ R^{H×W×C}, whose dimension is independent of the number of transformations. This output is then passed to a detection head to obtain the objects' bounding boxes and class labels.
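The shape bookkeeping of Eq. (2) can be illustrated with a toy aggregator. The single 1×1 per-pixel linear map below is only a stand-in for the paper's small convolutional aggregator, and all sizes are illustrative:

```python
import numpy as np

H, W, C, N = 8, 8, 16, 5  # toy spatial size, channels, number of homographies

# N unwarped feature maps F'_{H_i}, each of shape (H, W, C).
feats = [np.random.randn(H, W, C) for _ in range(N)]

# Eq. (2): concatenate along the channel dimension -> (H, W, C*N),
# i.e. the R^{H x W x C x N} tensor flattened over its last two axes.
G = np.concatenate(feats, axis=-1)
assert G.shape == (H, W, C * N)

# A 1x1 convolution is a per-pixel linear map from C*N channels back to C,
# so the output size no longer depends on N.
W1 = np.random.randn(C * N, C)
out = G @ W1
print(out.shape)  # → (8, 8, 16)
```

This is why the detection head downstream of the aggregator never needs to know how many homographies were used.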
+ 4.2. Model Training
+ Our training procedure relies on three steps: (i) Following common practice in UDA, we first train the FasterRCNN detector with source-only data; (ii) We then introduce the aggregator and train it so that it learns to combine different homographies using the labeled source data; (iii) Finally, we learn the optimal transformations for adaptation using both the source and target data via a Mean Teacher [31] strategy.
+ Aggregator Training. To train the aggregator, we randomly sample a set of homographies T ∈ R^{N×4} in each training iteration.¹ This gives the aggregator the ability to robustly combine diverse input transformations but requires strong supervision to avoid training instabilities. We, therefore, perform this step using the source data.
+ The loss function for a set of transformed images T(I_s) is then defined as in standard FasterRCNN training with a combination of classification and regression terms [24]. That is, we train the aggregator by solving
+ min_{θ_g} L_cls(T(I_s)) + L_reg(T(I_s)) ,   (3)
+ ¹As our homographies involve only 4 parameters, with a slight abuse of notation, we say that H_i ∈ R^4.
+ where
+ L_cls(T(I_s)) = L_cls^rpn + L_cls^roi ,   (4)
+ L_reg(T(I_s)) = L_reg^rpn + L_reg^roi .   (5)
+ L_·^rpn and L_·^roi correspond to the Region Proposal Network (RPN) loss terms and the Region of Interest (RoI) ones, respectively. During this process, we freeze the parameters θ_b of the base network, i.e., feature extractor and detection head, which were first trained on the source data without aggregator. Ultimately, the aggregator provides the network with the capacity to encode different transformations that are not seen in the source domain. The third training step then aims to learn the best transformation for successful object detection in the target domain.
+ Learning the Transformations. As we have no annotations in the target domain, we exploit a Mean Teacher (MT) strategy to learn the optimal transformations. To this end, our starting point is the detector with a trained aggregator and a set of random transformations T. The MT strategy is illustrated in Fig. 4. In essence, MT training [31] involves two copies of the model: a student model, with parameters θ^st = {T^st, θ_b^st, θ_g^st}, that will be used during inference, and a teacher model, with parameters θ^te = {T^te, θ_b^te, θ_g^te}, that is updated as an Exponential Moving Average (EMA) of the student model. That is, the student's parameters are computed with standard backpropagation, whereas the teacher's ones are updated as
+ θ^te ← α θ^te + (1 − α) θ^st .   (6)
+ The student model is trained using both source and target detection losses. Since the target domain does not have annotations, the teacher model is used to generate pseudo-labels. These labels might be noisy, and hence we only keep the predictions with a confidence score above a threshold τ. Furthermore, non-maxima suppression (NMS) is used to remove the highly-overlapping bounding box predictions.
+ Formally, given a source image I_s and a target image I_t, the student model is trained by solving
+ min_{T^st, θ_g^st, θ_b^st} L_det(T(I_s)) + λ L_det(T(I_t)) ,   (7)
+ where λ controls the target domain contribution and
+ L_det(T(I_s)) = L_cls(T(I_s)) + L_reg(T(I_s)) ,   (8)
+ L_det(T(I_t)) = L_cls(T(I_t)) .   (9)
+ Similarly to [18, 20], we update the student model with only the classification loss in the target domain to help stabilize training.
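The EMA update of Eq. (6) amounts to a per-parameter interpolation between teacher and student. A minimal sketch (the dictionary layout and parameter name are illustrative):

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Eq. (6): theta_te <- alpha * theta_te + (1 - alpha) * theta_st."""
    for name in teacher:
        teacher[name] = alpha * teacher[name] + (1.0 - alpha) * student[name]

teacher = {"w": np.zeros(3)}
student = {"w": np.ones(3)}
ema_update(teacher, student, alpha=0.99)
print(teacher["w"])  # → [0.01 0.01 0.01]
```

With α = 0.99, as used in the experiments below, the teacher changes slowly, which is what makes its pseudo-labels more stable than the student's raw predictions.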
+ Figure 4. Mean Teacher formalism. The student model is trained with ground-truth labels in the source domain and pseudo labels in the target one. These pseudo labels are produced by the teacher model, which corresponds to an exponential moving average (EMA) of the student network.
+ 5. Experiments
+ We demonstrate the effectiveness and generality of our method on different geometric shifts. First, to compare to the only other work that modeled a geometric shift [13], we tackle the problem of a change in FoV between the source and target domain. Note that, in contrast to [13], we do not assume knowledge of the target FoV. Furthermore, while [13] was dedicated to FoV adaptation, our approach generalizes to other geometric shifts. We demonstrate this on the task of pedestrian detection under a viewpoint shift. We compare our method with the state-of-the-art AdaptTeacher [20], which also uses a Mean Teacher, but focuses on appearance shifts. In the remainder of this section, we describe our experimental setup and discuss our results.
+ 5.1. Datasets
+ Cityscapes [6] contains 2975 training and 500 test images with annotations provided for 8 categories (person, car, train, rider, truck, motorcycle, bicycle and bus). The average horizontal (FoVx) and vertical (FoVy) FoVs of the capturing cameras are 50° and 26°, respectively. We use this dataset as the source domain for both FoV adaptation and viewpoint adaptation.
+ KITTI [12] is also a street-view dataset, containing 6684 images annotated with the car category. The horizontal (FoVx) and vertical (FoVy) FoVs of the camera are 90° and 34°, respectively. We use this dataset as target domain for FoV adaptation, as the viewpoint is similar to that of Cityscapes. Following [13], we use 5684 images for unsupervised training and 1000 images for evaluation.
+ MOT [8] is a multi-object tracking dataset. We use the indoor mall sequence, MOT20-02, consisting of 2782 frames annotated with the person category. We employ this dataset as target domain for viewpoint adaptation. We use the first 2000 frames for unsupervised training and the last 782 for evaluation.
+ 5.2. Adaptation Tasks and Metric
+ FoV adaptation. As in [13], we consider the case of an increasing FoV, using Cityscapes as source domain and KITTI as target domain. The horizontal and vertical FoVs increase from (50°, 26°) in Cityscapes to (90°, 34°) in KITTI. Therefore, as can be seen in Fig. 1, the KITTI images have a higher distortion in the corners than the Cityscapes ones. Similarly to PIT [13], we use the car category in our experiments.
+ FoV generalization. Following PIT [13], we study the generalization of our approach to new FoVs by cropping the KITTI images to mimic different FoV changes in the horizontal direction (FoVx). Specifically, we treat FoVx = 50° as the source domain and the cropped images with FoVx = {70°, 80°, 90°} as different target domains. We evaluate our approach on the car category on these different pairs of domains.
+ Viewpoint adaptation. This task entails detecting objects seen from a different viewpoint in the source and target domains. We use the front-facing Cityscapes images as source domain and the downward-facing MOT ones as target one. As the MOT data depicts pedestrians, we use the bounding boxes corresponding to the person category in Cityscapes.²
+ Metric. In all of our experiments, we use the Average Precision (AP) as our metric. Specifically, following [13], we report the AP@0.5, which considers a prediction as a true positive if it matches the ground-truth label and has an intersection over union (IoU) score of more than 0.5 with the ground-truth bounding box.
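As a concrete illustration of the AP@0.5 criterion, the IoU between two axis-aligned boxes can be computed as follows (the (x1, y1, x2, y2) box format is our choice; the paper does not specify one):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred, gt = (0, 0, 10, 10), (2, 0, 12, 10)
score = iou(pred, gt)  # overlap 8x10 = 80, union 120 -> 2/3
print(score > 0.5)     # → True: counted as a true positive at AP@0.5
```

A prediction with the correct class label and IoU above 0.5 against some unmatched ground-truth box counts as a true positive; otherwise it counts as a false positive when computing AP.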
+ 5.3. Implementation Details
553
+ We use the Detectron2 [34] implementation of Faster-
554
+ RCNN [24] with a ResNet50 [14] backbone as our base
555
+ architecture. In all of our experiments, the images are re-
556
+ sized so that the shorter side has 800 pixels while maintain-
557
+ ing the aspect ratio. The base network is first trained on
558
+ source-only images with random cropping and random flip-
559
+ ping augmentation for 24k iterations with batch size 8. We
560
+ use the Stochastic Gradient Descent (SGD) optimizer with
561
+ a learning rate of 0.01, scaled down by a 0.1 factor after
562
+ 18k iterations. We use ImageNet [25] pretrained weights to
563
+ initialize the ResNet50 backbone.
564
+ 2In Cityscapes, a person may be labeled as either person or rider. Since
565
+ the rider label is used for people riding a vehicle, we omit these cases.
566
+ Figure 5. FoV Adaptation: Qualitative Results. We visualize
567
+ a car detection result in the Cityscapes-to-KITTI FoV adaptation
568
+ scenario. The top left image corresponds to the ground truth, the
569
+ bottom left to the Mean Teacher result, which confuses the orange
570
+ container with a car, the bottom right to the Mean Teacher adapta-
571
+ tion + PIT FoV adaptation result, which also mistakes the orange
572
+ container for a car and further detects the speed limit on the road.
573
+ Our approach, on the top right, correctly matches the ground truth.
574
+ We then incorporate the aggregator in the trained base
575
+ architecture.
576
+ The aggregator architecture contains three
577
+ convolutional layers with a kernel size of 3 × 3, and one
578
+ 1 × 1 convolutional layer.
579
+ We first train the aggregator
580
+ on the source data with the base frozen and using ran-
581
+ dom transformations T .
582
+ The transformations are gener-
583
+ ated by randomly sampling each Hi parameters as sx, sy ∼
584
+ U[0.5,2.0], U[0.5,2.0] and lx, ly ∼ U[−0.5,0.5], U[−0.5,0.5]. We
585
+ train the aggregator for 30k iterations using a batch size of
586
+ 8 and the SGD optimizer with a learning rate of 1e−4.
587
+ The student and teacher models are then initialized with
588
+ this detector and the random T = {Hi}N
589
+ i=1. We optimize T
590
+ using Adam [19], while the base and aggregator networks
591
+ are optimized by SGD. The learning rate is set to 1e−3 and
592
+ scaled down by a factor 0.1 after 10k iterations for the SGD
593
+ optimizer. For the first 10k iterations in FoV adaptation and
594
+ for 2k iterations for viewpoint adaptation, we only train T
595
+ keeping base and aggregator frozen. The α coefficient for
596
+ the EMA update is set to 0.99; the confidence threshold
597
+ τ = 0.6; λ = {0.01, 0.1} for FoV and viewpoint adapta-
598
+ tion, respectively. The Mean Teacher framework is trained
599
+ using both the source and target data. We set N = 5, unless
600
+ otherwise specified, and use a batch size of 4, containing 2
601
+ source and 2 target images. We apply random color jitter-
602
+ ing on both the source and target data as in [20, 31]. All
603
+ of our models are trained on a single NVIDIA V100 GPU.
604
+ A detailed hyper-parameter study is provided in the supple-
605
+ mentary material.
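The teacher update with α = 0.99 is the standard exponential moving average of the student weights [31]. A minimal sketch over flat parameter lists (a stand-in for the actual detector weights):

```python
def ema_update(teacher, student, alpha=0.99):
    """Mean Teacher EMA: teacher <- alpha * teacher + (1 - alpha) * student.
    Operates on flat lists of floats standing in for model parameters."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]
```

With α close to 1, the teacher changes slowly, which stabilizes the pseudo-labels it produces on the target data.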
606
+ 5.4. Comparison with the State of the Art
607
+ We compare our approach with the following baselines.
608
+ FR: FasterRCNN trained only on the source data with
609
+ random crop augmentation; AT: AdaptTeacher [20]; MT:
610
+ Mean Teacher initialized with FR and trained with ran-
611
+ dom color jittering on both the source and target data (i.e.,
612
+ this corresponds to our mean teacher setup in Sec. 4.2
613
+ but without the aggregator and without transformations T );
614
+ [Figure 5 panel labels: GT, Ours, MT, MT+PIT.]
+ Method          Car AP@0.5
+ FR [24]         76.1
+ AT [20]         77.2
+ FR+PIT          77.6
+ MT              78.3
+ MT+PIT [13]     79.7
+ Ours            80.4 ± 0.15
+ Table 1. FoV Adaptation.
634
+ Car AP@0.5 for FoVx
+ Method         50°           70°           80°           90°
+ FR [24]        94.3          90.2          86.8          80.6
+ FR+PIT [13]    93.6          91.4          89.2          85.9
+ Ours-h         94.1 ± 0.16   93.1 ± 0.33   91.8 ± 0.40   88.8 ± 0.21
+ Table 2. FoV Generalization.
656
+ FR+PIT: Same setup as FR but with the images corrected
657
+ with PIT [13]; MT+PIT: Same setup as MT but with the
658
+ images corrected with PIT. We refer to our complete ap-
659
+ proach (Sec. 4.2) as Ours. For the task of FoV generaliza-
660
+ tion, we report our results as Ours-h to indicate that we only
661
+ optimize the homographies (5×4 parameters) in T to adapt
662
+ to the new FoVs while keeping the base and aggregator net-
663
+ works frozen. This matches the setup of PIT [13], which
664
+ also corrects the images according to the new FoVs. As
665
+ Ours and Ours-h are trained with randomly initialized T ,
666
+ we report the average results and standard deviations over
667
+ three independent runs.
668
+ FoV adaptation.
669
+ The results of Cityscapes → KITTI FoV
670
+ adaptation are provided in Tab. 1. Both MT+PIT and Ours
+ bridge the FoV gap, outperforming the MT baseline.
672
+ Note, however, that we achieve this by learning the trans-
673
+ formations, without requiring any camera-specific informa-
674
+ tion, which is needed by PIT. Note also that MT outper-
675
+ forms FR by learning a better representation in the target do-
676
+ main, even though FR is trained with strong augmentation,
677
+ such as random cropping. AT underperforms because its
678
+ strong augmentation strategy fails to generalize for datasets
679
+ having prominent geometric shifts. Our improvement over
680
+ MT evidences that learning transformations helps to over-
681
+ come geometric shifts. We optimize with N = 9 homographies
+ in this setup. Fig. 5 shows a qualitative example.
683
+ Different homographies look into different image regions
684
+ and the aggregator learns how to combine the activations
685
+ corresponding to objects as depicted in Fig. 7.
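The warping in Fig. 7 can be made concrete on a single point: each feature map Fi is mapped back by Hi−1 before aggregation, so applying Hi and then Hi−1 is the identity. A small numpy sketch (the example matrix is ours, not a learned homography):

```python
import numpy as np

def warp_point(H, p):
    """Apply a 3x3 homography to a 2D point via homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Round trip H followed by H^{-1}, mirroring how F_i is warped back
# by H_i^{-1} before entering the aggregator.
H = np.array([[1.5, 0.0, 0.2],
              [0.0, 0.8, -0.1],
              [0.0, 0.0, 1.0]])
p = np.array([0.3, -0.4])
p_back = warp_point(np.linalg.inv(H), warp_point(H, p))  # recovers p
```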
686
+ FoV generalization.
687
+ Tab. 2 summarizes the results ob-
688
+ tained by using different FoVs as target domains while fix-
689
+ Method      Pedestrian AP@0.5
+ FR [24]     43.7
+ AT [20]     63.5
+ MT          64.7
+ Ours        65.3 ± 0.37
+ Table 3. Viewpoint Adaptation.
700
+ Figure 6. Varying the number of homographies. We evaluate
701
+ the effect of N on the FoV adaptation task.
702
+ ing the source FoV to 50°. Since both the source and tar-
703
+ get images are taken from KITTI, the domain gap is only
704
+ caused by a FoV change.
705
+ Note that the performance of
706
+ FR drops quickly as the FoV gap increases. Ours-h out-
707
+ performs FR+PIT by a growing margin as the FoV gap in-
708
+ creases. This shows that learning transformations helps to
709
+ generalize better to different amounts of geometric shifts.
710
+ Viewpoint adaptation.
711
+ As shown in Fig. 1, a change in
712
+ the camera viewpoint yields differences in the observed dis-
713
+ tortions and type of occlusions. The results in Tab. 3 show
714
+ the benefits of our method over MT in this case. Note that
715
+ PIT, which was designed for FoV changes, cannot be ap-
716
+ plied to correct for a viewpoint change. Other baselines out-
717
+ perform FR, as they use pseudo labels to fix the difference
718
+ in bounding box distribution, as shown in Fig. 1. These
719
+ results illustrate the generality of our method to different
720
+ kinds of geometric shifts. Qualitative results for this task
721
+ can be found in Fig. A.10.
722
+ 5.5. Additional Analyses
723
+ Variable number of homographies.
724
+ Let us now study
725
+ the influence of the number of homographies in T .
726
+ To
727
+ this end, we vary this number between 1 and 9. In Fig. 6,
728
+ we plot the resulting APs for the Cityscapes-to-KITTI FoV
729
+ adaptation task. Increasing the number of transformations
730
+ results in a steady increase in performance, which nonethe-
731
+ less tends to plateau starting at 4 homographies. Due to lim-
732
+ ited compute resources, we couldn’t run experiments with
733
+ [Figure 6 plot: Car AP@0.5 (78.5–80.5) vs. number of homographies (1–9).]
+ Figure 7. Feature Maps: Top row: predictions of our network and
751
+ feature map after aggregator. Left column: Image I, transformed
752
+ by learned homographies; Right Column: Feature maps F warped
753
+ by corresponding H−1 which are input to the aggregator. Each
754
+ transform distorts the image regions differently. Most of the cars
755
+ are on the left side and of small size in the image. H1 distorts
756
+ the left side leading to no activation(H−1
757
+ 1 F1) for the object. H3
758
+ which causes the zoom-in effect has the strongest activation as the
759
+ smaller objects are better visible here. These maps are generated
+ by taking the maximum over the channel dimension.
761
+ more than 9 homographies. This confirms the intuition that
762
+ a higher number of perspective transformations can better
763
+ capture the geometric shift between two domains. There-
764
+ fore, we conducted all experiments with the maximum num-
765
+ ber of homographies allowed by our compute resources.
766
+ Only optimizing T .
767
+ We also run the Ours-h baseline in
768
+ the FoV and viewpoint adaptation scenarios. The result-
769
+ ing APs are 78.2 and 49.8, respectively. By learning only
770
+ the 20 (5 × 4) homography parameters, our approach out-
771
+ performs FR (in Tab. 1 and Tab. 3, respectively) by a large
772
+ margin in both cases. This confirms that our training strat-
773
+ egy is able to efficiently optimize T to bridge the geometric
774
+ gap between different domains. We visualize in Fig. A.9 in
775
+ the supplementary material some transformations learned
776
+ for FoV adaptation by Ours-h. Note that they converge to
777
+ diverse homographies that mimic a different FoV, correctly
778
+ reflecting the adaptation task.
779
+ Diversity in T .
780
+ To show that our approach can learn
781
+ a diverse set of transformations that help in the adapta-
782
+ tion task, we initialize all the homographies with iden-
783
+ tity. Fig. 8 depicts the diversity of the learned homogra-
784
+ phies on the FoV adaptation task.
785
+ Even though we do
786
+ not enforce diversity, our approach learns a diverse set of
787
+ transformations.
788
+ With these learned homorgraphies, our
789
+ model achieves 79.5 AP@0.5 score for the FoV adaptation
790
+ task. We show additional results in the supplementary ma-
791
+ terial Sec. 4 and Sec. 5.
792
+ Figure 8. Diversity in T : We train 5 homographies initialized as
793
+ Hi = I. We plot the evolution of sx for the different homographies
794
+ as training proceeds. Each homography is shown in a different
795
+ color. Note that the values for the different homographies become
796
+ diverse. The best score is achieved at iteration = 22k, indicated
797
+ with the vertical line.
798
+ Limitations.
799
+ Our approach assumes that the geometric
800
+ gap between two domains can be bridged by a set of per-
801
+ spective transformations. We have shown that with enough
802
+ transformations this is true. However, using a large num-
803
+ ber of homographies comes at a computational cost. The
804
+ computational overhead leads to an increment in the infer-
805
+ ence time from 0.062s to 0.096s for N = 5 on an A100
806
+ Nvidia GPU with image dimension 402 × 1333. Neverthe-
807
+ less, our simple implementation shows promising results,
808
+ and we will work on reducing this overhead in future work.
809
+ Moreover since the optimization of the homography set is
810
+ done at the dataset level, only certain transformations are
811
+ beneficial to a given image. In the future, we therefore in-
812
+ tend to condition the homography on the input image, which
813
+ would reduce the total number of homographies needed.
814
+ 6. Conclusion
815
+ We have introduced an approach to bridge the gap be-
816
+ tween two domains caused by geometric shifts by learning
817
+ a set of homographies. We have shown the effectiveness of our
818
+ method on two different kinds of shifts, without relying on
819
+ any annotations in the target domain, including information
820
+ about the nature of the geometric shifts. Our analyses have
821
+ evidenced that optimizing the transformations alone brings
822
+ an improvement over the base detector and that increasing the
823
+ number of learnt homographies helps further. In the future,
824
+ we plan to learn transformations that are conditioned on the
825
+ input image to model image-dependent geometric shifts.
826
+ [Figure 8 plot: sx of H1–H5 (0.6–1.4) vs. training iterations, with a vertical line at the best score.]
+ References
849
+ [1] Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan
+ Mark Liao. Yolov4: Optimal speed and accuracy of
856
+ object detection. arXiv preprint arXiv:2004.10934, 2020. 1
857
+ [2] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nico-
858
+ las Usunier, Alexander Kirillov, and Sergey Zagoruyko.
859
+ Detr, https://github.com/facebookresearch/
860
+ detr, 2020. 1
861
+ [3] Chaoqi Chen, Zebiao Zheng, Xinghao Ding, Yue Huang, and
862
+ Qi Dou. Harmonizing transferability and discriminability for
863
+ adapting object detectors. In Proceedings of the IEEE/CVF
864
+ Conference on Computer Vision and Pattern Recognition,
865
+ pages 8869–8878, 2020. 1, 2
866
+ [4] Chaoqi Chen, Zebiao Zheng, Yue Huang, Xinghao Ding, and
867
+ Yizhou Yu. I3net: Implicit instance-invariant network for
868
+ adapting one-stage object detectors. In IEEE Conference on
869
+ Computer Vision and Pattern Recognition (CVPR), 2021. 1,
870
+ 2
871
+ [5] Yuhua Chen, Wen Li, Christos Sakaridis, Dengxin Dai, and
872
+ Luc Van Gool. Domain adaptive faster r-cnn for object de-
873
+ tection in the wild. In Proceedings of the IEEE conference on
874
+ computer vision and pattern recognition, pages 3339–3348,
875
+ 2018. 2
876
+ [6] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo
877
+ Rehfeld,
878
+ Markus Enzweiler,
879
+ Rodrigo Benenson,
880
+ Uwe
881
+ Franke, Stefan Roth, and Bernt Schiele.
882
+ The cityscapes
883
+ dataset for semantic urban scene understanding. In Proceed-
884
+ ings of the IEEE conference on computer vision and pattern
885
+ recognition, pages 3213–3223, 2016. 1, 2, 5
886
+ [7] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong
887
+ Zhang, Han Hu, and Yichen Wei. Deformable convolutional
888
+ networks. In Proceedings of the IEEE international confer-
889
+ ence on computer vision, pages 764–773, 2017. 2
890
+ [8] Patrick Dendorfer, Hamid Rezatofighi, Anton Milan, Javen
891
+ Shi, Daniel Cremers, Ian Reid, Stefan Roth, Konrad
892
+ Schindler, and Laura Leal-Taix´e.
893
+ Mot20: A benchmark
894
+ for multi object tracking in crowded scenes. arXiv preprint
895
+ arXiv:2003.09003, 2020. 1, 5
896
+ [9] Jinhong Deng, Wen Li, Yuhua Chen, and Lixin Duan. Un-
897
+ biased mean teacher for cross-domain object detection. In
898
+ Proceedings of the IEEE/CVF Conference on Computer Vi-
899
+ sion and Pattern Recognition, pages 4091–4101, 2021. 1,
900
+ 2
901
+ [10] Mark Everingham, Luc Van Gool, Christopher KI Williams,
902
+ John Winn, and Andrew Zisserman. The pascal visual object
903
+ classes (voc) challenge. International journal of computer
904
+ vision, 88(2):303–338, 2010. 1
905
+ [11] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pas-
906
+ cal Germain, Hugo Larochelle, Franc¸ois Laviolette, Mario
907
+ Marchand, and Victor Lempitsky. Domain-adversarial train-
908
+ ing of neural networks.
909
+ The journal of machine learning
910
+ research, 17(1):2096–2030, 2016. 2
911
+ [12] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we
912
+ ready for autonomous driving? the kitti vision benchmark
913
+ suite. In 2012 IEEE Conference on Computer Vision and
914
+ Pattern Recognition, pages 3354–3361. IEEE, 2012. 1, 2, 5
915
+ [13] Qiqi Gu, Qianyu Zhou, Minghao Xu, Zhengyang Feng,
916
+ Guangliang Cheng, Xuequan Lu, Jianping Shi, and Lizhuang
917
+ Ma. Pit: Position-invariant transform for cross-fov domain
918
+ adaptation. In Proceedings of the IEEE/CVF International
919
+ Conference on Computer Vision, pages 8761–8770, 2021. 1,
920
+ 2, 3, 5, 6, 7
921
+ [14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
922
+ Deep residual learning for image recognition. In Proceed-
923
+ ings of the IEEE conference on computer vision and pattern
924
+ recognition, pages 770–778, 2016. 6
925
+ [15] Naoto Inoue, Ryosuke Furuta, Toshihiko Yamasaki, and Kiy-
926
+ oharu Aizawa. Cross-domain weakly-supervised object de-
927
+ tection through progressive domain adaptation. In Proceed-
928
+ ings of the IEEE conference on computer vision and pattern
929
+ recognition, pages 5001–5009, 2018. 1
930
+ [16] Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al.
931
+ Spatial transformer networks. Advances in neural informa-
932
+ tion processing systems, 28, 2015. 2, 4
933
+ [17] Glenn Jocher, Alex Stoken, Jirka Borovec, NanoCode012,
934
+ ChristopherSTAN, Liu Changyu, Laughing, tkianai, Adam
935
+ Hogan, lorenzomammana, yxNONG, AlexWang1900, Lau-
936
+ rentiu Diaconu, Marc, wanghaoyang0106, ml5ah, Doug,
937
+ Francisco Ingham, Frederik, Guilhen, Hatovix, Jake Poznan-
938
+ ski, Jiacong Fang, Lijun Yu, changyu98, Mingyu Wang, Na-
939
+ man Gupta, Osama Akhtar, PetrDvoracek, and Prashant Rai.
940
+ ultralytics/yolov5: v3.1 - Bug Fixes and Performance Im-
941
+ provements, Oct. 2020. 1
942
+ [18] Seunghyeon Kim, Jaehoon Choi, Taekyung Kim, and Chang-
943
+ ick Kim. Self-training and adversarial background regular-
944
+ ization for unsupervised domain adaptive one-stage object
945
+ detection.
946
+ In Proceedings of the IEEE/CVF International
947
+ Conference on Computer Vision, pages 6092–6101, 2019. 5
948
+ [19] Diederik P Kingma and Jimmy Ba. Adam: A method for
949
+ stochastic optimization.
950
+ arXiv preprint arXiv:1412.6980,
951
+ 2014. 6
952
+ [20] Yu-Jhe Li, Xiaoliang Dai, Chih-Yao Ma, Yen-Cheng Liu,
953
+ Kan Chen, Bichen Wu, Zijian He, Kris Kitani, and Peter Va-
954
+ jda. Cross-domain adaptive teacher for object detection. In
955
+ IEEE Conference on Computer Vision and Pattern Recogni-
956
+ tion (CVPR), 2022. 2, 5, 6, 7
957
+ [21] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jor-
958
+ dan. Learning transferable features with deep adaptation net-
959
+ works.
960
+ In International conference on machine learning,
961
+ pages 97–105. PMLR, 2015. 2
962
+ [22] Zhongyi Pei, Zhangjie Cao, Mingsheng Long, and Jianmin
963
+ Wang.
964
+ Multi-adversarial domain adaptation.
965
+ In Thirty-
966
+ second AAAI conference on artificial intelligence, 2018. 2
967
+ [23] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali
968
+ Farhadi. You only look once: Unified, real-time object de-
969
+ tection. In Proceedings of the IEEE conference on computer
970
+ vision and pattern recognition, pages 779–788, 2016. 1
971
+ [24] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun.
972
+ Faster r-cnn: towards real-time object detection with region
973
+ proposal networks. IEEE transactions on pattern analysis
974
+ and machine intelligence, 39(6):1137–1149, 2016. 1, 4, 6, 7
975
+ [25] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, San-
976
+ jeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy,
977
+ 9
978
+
979
+ Aditya Khosla, Michael Bernstein, Alexander C. Berg, and
980
+ Li Fei-Fei. ImageNet Large Scale Visual Recognition Chal-
981
+ lenge.
982
+ International Journal of Computer Vision (IJCV),
983
+ 115(3):211–252, 2015. 6
984
+ [26] Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada, and Kate
985
+ Saenko. Strong-weak distribution alignment for adaptive ob-
986
+ ject detection. In Proceedings of the IEEE/CVF Conference
987
+ on Computer Vision and Pattern Recognition, pages 6956–
988
+ 6965, 2019. 1, 2
989
+ [27] Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Seman-
990
+ tic foggy scene understanding with synthetic data. Interna-
991
+ tional Journal of Computer Vision, 126(9):973–992, 2018.
992
+ 1
993
+ [28] Zhiqiang Shen, Harsh Maheshwari, Weichen Yao, and Mar-
994
+ ios Savvides. Scl: Towards accurate domain adaptive object
995
+ detection via gradient detach based stacked complementary
996
+ losses. arXiv preprint arXiv:1911.02559, 2019. 2
997
+ [29] Kihyuk Sohn, Zizhao Zhang, Chun-Liang Li, Han Zhang,
998
+ Chen-Yu Lee, and Tomas Pfister. A simple semi-supervised
999
+ learning framework for object detection.
1000
+ arXiv preprint
1001
+ arXiv:2005.04757, 2020. 2
1002
+ [30] Baochen Sun and Kate Saenko.
1003
+ Deep coral: Correlation
1004
+ alignment for deep domain adaptation.
1005
+ In European con-
1006
+ ference on computer vision, pages 443–450. Springer, 2016.
1007
+ 2
1008
+ [31] Antti Tarvainen and Harri Valpola. Mean teachers are better
1009
+ role models: Weight-averaged consistency targets improve
1010
+ semi-supervised deep learning results. Advances in neural
1011
+ information processing systems, 30, 2017. 2, 4, 5, 6
1012
+ [32] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell.
1013
+ Adversarial discriminative domain adaptation. In Proceed-
1014
+ ings of the IEEE conference on computer vision and pattern
1015
+ recognition, pages 7167–7176, 2017. 2
1016
+ [33] Vibashan VS, Vikram Gupta, Poojan Oza, Vishwanath A
1017
+ Sindagi, and Vishal M Patel. Mega-cda: Memory guided
1018
+ attention for category-aware unsupervised domain adaptive
1019
+ object detection. In Proceedings of the IEEE/CVF Confer-
1020
+ ence on Computer Vision and Pattern Recognition, pages
1021
+ 4516–4526, 2021. 2
1022
+ [34] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen
1023
+ Lo, and Ross Girshick. Detectron2. https://github.
1024
+ com/facebookresearch/detectron2, 2019. 6
1025
+ [35] Shaoan Xie, Zibin Zheng, Liang Chen, and Chuan Chen.
1026
+ Learning semantic representations for unsupervised domain
1027
+ adaptation. In International conference on machine learning,
1028
+ pages 5423–5432. PMLR, 2018. 2
1029
+ [36] Minghao Xu, Jian Zhang, Bingbing Ni, Teng Li, Chengjie
1030
+ Wang, Qi Tian, and Wenjun Zhang.
1031
+ Adversarial domain
1032
+ adaptation with domain mixup.
1033
+ In Proceedings of the
1034
+ AAAI Conference on Artificial Intelligence, volume 34, pages
1035
+ 6502–6509, 2020. 2
1036
+ [37] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A
1037
+ Efros.
1038
+ Unpaired image-to-image translation using cycle-
1039
+ consistent adversarial networks. In Proceedings of the IEEE
1040
+ international conference on computer vision, pages 2223–
1041
+ 2232, 2017. 2
1042
+ [38] Xinge Zhu, Jiangmiao Pang, Ceyuan Yang, Jianping Shi, and
1043
+ Dahua Lin. Adapting object detectors via selective cross-
1044
+ domain alignment. In Proceedings of the IEEE/CVF Con-
1045
+ ference on Computer Vision and Pattern Recognition, pages
1046
+ 687–696, 2019. 1, 2
1047
+ 10
1048
+
1049
+ Supplementary Material: Learning Transformations To Reduce the Geometric
1050
+ Shift in Object Detection
1051
+ Vidit Vidit1 Martin Engilberge1 Mathieu Salzmann1,2
1052
+ CVLab, EPFL1, ClearSpace SA2
1053
+ firstname.lastname@epfl.ch
1054
+ 1. Transformations through Homography
1055
+ We use homography to introduce varied perspective
1056
+ transformations so that they can distort the same image re-
1057
+ gions differently as seen in Fig. A.1. This helps the detector
1058
+ to learn robust object features and simultaneously optimize
1059
+ an aggregator with a different set of homographies which
1060
+ can bridge the gap between two domains.
1061
+ 2. Feature Maps Activation
1062
+ We show in Fig. A.2 how different homographies gen-
1063
+ erate activation in the feature maps. Not all homographies
1064
+ look at the same image region, therefore the task of the ag-
1065
+ gregator is to bring in the activations from different trans-
1066
+ formations together.
1067
+ 3. Other Aggregator Architecture
1068
+ We implement aggregator using standard functions to
1069
+ combine {FHi}N
1070
+ i=1. Tab. A.1 illustrates this study for FoV
1071
+ adaptation, where the training is done under mean teacher
1072
+ formalism to learn |T | = N = 5. We see that these non-
1073
+ learnable aggregators are able to outperform the MT baseline
+ (Sec. 5.4 in the main paper), suggesting that including trans-
1075
+ formations helps to bridge the geometric shifts.
1076
+ Function    Car AP@0.5
+ sum         78.1 ± 0.14
+ mean        78.7 ± 0.05
+ max         78.7 ± 0.12
+ min+max     78.9 ± 0.43
+ MT          78.3
+ Ours        79.9 ± 0.14
+ Table A.1. Aggregator Architecture without learnable parameters
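These parameter-free aggregators can be sketched in a few lines of numpy, assuming the N warped feature maps are stacked along a leading axis; for `min+max` we assume the elementwise minimum and maximum maps are summed, which is one plausible reading of the name:

```python
import numpy as np

def aggregate(feats, mode="mean"):
    """Combine N warped feature maps, stacked as (N, C, H, W),
    without learnable parameters."""
    if mode == "sum":
        return feats.sum(axis=0)
    if mode == "mean":
        return feats.mean(axis=0)
    if mode == "max":
        return feats.max(axis=0)
    if mode == "min+max":  # assumed: elementwise min plus elementwise max
        return feats.min(axis=0) + feats.max(axis=0)
    raise ValueError(f"unknown mode: {mode}")
```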
1091
+ 4. Diversity in T
1092
+ In order to show that diverse transformations are learned,
1093
+ we set Hi = I and train our mean teacher formulation.
1094
+ Fig. A.4 shows diverse set of transformations learned in
1095
+ FoV adaptation task. Even though we do not enforce di-
1096
+ versity among homographies, it is learned through our ap-
1097
+ proach.
1098
+ 5. Evolution of T
1099
+ We provide qualitative results for T learned in FoV and
1100
+ Viewpoint adaptation, Fig. A.5 and Fig. A.8, respectively.
1101
+ The qualitative results for the same adaptation task can be
1102
+ seen in Fig. A.6 and Fig. A.7, respectively.
1103
+ 6. Hyperparameter details
1104
+ Augmentations.
1105
+ We use Detectron2's [2] implementation
+ for random crop and torchvision1 for color jittering.
1107
+ Kind           Details
+ Random Crop    Relative Range: [0.3, 1]
+ Color Jitter   Brightness=.5, Hue=.3
+ Table A.2. Augmentations
1114
+ FasterRCNN [1] training.
1115
+ We train our base network
1116
+ with a random crop strategy on only the source data, which
+ is Cityscapes for both adaptation tasks.
1118
+ The trained
1119
+ model achieves 74.7 and 58.4 AP@0.5 scores on the source
1120
+ domain validation set for car and person detection, respec-
1121
+ tively.
1122
+ Mean Teacher Training
1123
+ For our mean teacher setup
1124
+ (Sec. 4.2, in the main paper), we choose τ = 0.6 as the con-
1125
+ fidence threshold for the pseudo-labels and evaluate the con-
+ tribution of the target-domain loss for different λ. Fig. A.11
+ summarizes this study. We see that the method performs worse
+ when we have equal contributions from the source and tar-
+ get-domain losses (λ = 1), as the false positives in the target
1130
+ 1https://pytorch.org/vision/stable/transforms.
1131
+ html
1132
+ 1
1133
+ arXiv:2301.05496v1 [cs.CV] 13 Jan 2023
1134
+
1135
+ Figure A.1. Transformations: Here we demonstrate how the two objects in the original image undergo different perspective transforma-
1136
+ tions. Our task is to learn robust object features under such transformations and use them to bring the two domains closer while being
1137
+ agnostic to the camera parameters. We train with a multiple set of transformations to change the same image region differently. With our
1138
+ trainable aggregator, we can then combine features from different regions to help in improving the detector’s performance.
1139
+ domain quickly deteriorate the training. Fig. A.12 shows the
+ evaluation for different values of τ.
1141
+ 7. Architecture details
1142
+ Our aggregator architecture consists of three convolution
1143
+ layers along with BatchNorm and Relu layers after each
1144
+ convolution. Tab. A.3 shows the details of different layers.
1145
+ Here, C = 1024 corresponds to the output of the feature
1146
+ extractor.
1147
+ Table A.3. Aggregator Architecture for |T | = N
+                             # Channels
+ Layer               Input        Output
+ Conv2d 3 × 3        N × C        N × C/2
+ BatchNorm + Relu    N × C/2      N × C/2
+ Conv2d 3 × 3        N × C/2      C
+ BatchNorm + Relu    C            C
+ Conv2d 1 × 1        C            C
+ BatchNorm + Relu    C            C
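As a quick sanity check on the channel counts in Table A.3 (pure Python; C = 1024 comes from the feature extractor, and spatial dimensions are ignored here):

```python
def aggregator_channels(n, c=1024):
    """Trace channel counts through the aggregator of Table A.3."""
    chans = [n * c]            # input: N concatenated C-channel maps
    chans.append(n * c // 2)   # Conv2d 3x3: N*C -> N*C/2
    chans.append(c)            # Conv2d 3x3: N*C/2 -> C
    chans.append(c)            # Conv2d 1x1: C -> C
    return chans

aggregator_channels(5)  # [5120, 2560, 1024, 1024]
```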
1170
+ References
1171
+ [1] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun.
1172
+ Faster r-cnn: towards real-time object detection with region
1173
+ proposal networks. IEEE transactions on pattern analysis and
1174
+ machine intelligence, 39(6):1137–1149, 2016. 1
1175
+ [2] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen
1176
+ Lo, and Ross Girshick. Detectron2. https://github.
1177
+ com/facebookresearch/detectron2, 2019. 1
1178
+ 2
1179
+
1180
+ Figure A.2. Feature Maps: Top row: predictions of our network and feature map after aggregator. Left column: Image I, transformed
1181
+ by 5 learnt homographies; Right Column: Feature maps F warped by corresponding H−1 which are input to aggregator. Each transform
1182
+ distorts the image regions differently. Most of the cars are on the left side and of small size in the image. H1 distorts the left side leading to
1183
+ no activation (H1−1F1) for the object. H3, which causes the zoom-in effect, has the strongest activation, as the smaller objects are better visible
1185
+ here. Overall aggregator feature map contains activation from the region where the objects exist. The aggregator has learnt how to combine
1186
+ regions with activations under different homographies. The feature maps are generated by taking maximum over channel dimension.
1187
+ 3
1188
+
1189
+ Figure A.3. Approximating PIT with homographies. Left column: Visualization of each homography use to approximate PIT with 5
1190
+ transforms; the top one is the identity, and the following ones are in order of increasing compression. Center column: Contribution of
1191
+ each homography to the final remapping. Right column: The top figure shows the per pixel coordinate error when compared to the PIT
1192
+ remapping as a function of the number of homographies used in the approximation; the three bottom figures depict the coordinate error
1193
+ maps for 1, 5, and 25 homographies used to approximate PIT (note the scale change in pixel coordinate error).
1194
+ 4
1195
+
1196
+ sx
1197
+ sy
1198
+ lx
1199
+ ly
1200
+ Figure A.4. Diversity in T : We train |T | = 5 initialized with Hi = I. Homographies parameterized by sx, sy, lx, ly evolve as the training
1201
+ proceeds and tend to become diverse. Each homography is shown in a different color. Even though we do not enforce any diversity, our
+ approach learns a diverse set of transformations. With these learned homographies, we achieve a 79.5 AP@0.5 score on the FoV adaptation task.
1203
+ The best score is achieved at iteration = 22k shown with the vertical line.
1204
+ [Figure A.4 plots: evolution of sx, sy, lx, and ly for H1–H5 over training iterations; panel labels: sx, sy, lx, ly.]
1287
+ ly
1288
+ Figure A.5. Quantitative results for the corresponding results in Figure A.6. The randomly initialized transforms, parameterized by
1289
+ sx, sy, lx, ly, evolve to achieve the best score at 28k iterations (shown by the vertical bar). The colors represent different homographies.
1290
+ Some sets of parameters converge to similar values, but overall each homography is unique.
1291
+ [Figure A.5 plots: evolution of sx, sy, lx, and ly for H1–H5 over training iterations.]
+ Figure A.6. FoV adaptation: The randomly initialized homographies evolve as the training progresses to improve the overall AP score.
1365
+ We train with 5 homographies and show how they transform an image for the corresponding FoV adaptation task.
1366
+ [Figure A.6 panel label: Pred.]
+ Figure A.7. Viewpoint adaptation: The randomly initialized homographies evolve as the training progresses to improve the overall AP
1369
+ score. We train with 5 homographies and show how they transform an image for the corresponding viewpoint adaptation task.
1370
+ [Figure A.7 panel label: Pred; Figure A.8 panel labels: sx, sy, lx, ly.]
+ Figure A.8. Quantitative results corresponding to Figure A.7. The randomly initialized transforms, parameterized by
1377
+ sx, sy, lx, ly, evolve to achieve the best score at 8k iterations (shown by the vertical bar). The colors represent different homographies.
1378
+ Some sy parameters start at a similar value but eventually diverge.
1379
+ [Figure A.8 plots: evolution of sx, sy, lx, and ly for H1–H5 over training iterations.]
+ Figure A.9. Evolution of T. We showcase how two homographies, H1 and H5, evolve across the training iterations and influence
+ the prediction scores. Starting from random homographies at iteration 0, the transformations converge to homographies suited for FoV
+ adaptation. The detection scores consequently increase throughout the training process. Moreover, this increase in detection score is
+ reflected in the overall AP@0.5 score, which jumps from 74.1 to 78.2.
+ Figure A.10. Viewpoint Adaptation: Qualitative Results. We visualize results for viewpoint adaptation between Cityscapes and MOT20-
+ 02. The left image depicts the ground truth, the middle one the results of Mean Teacher adaptation, and the right one those of our approach.
+ Our approach recovers more detections (e.g., the woman near the stroller in the center-left) while having fewer false positives (overlapping
+ box in bottom-left corner of the MT results).
+ Figure A.11. Study on λ for τ = 0.6, |T| = 5.
+ Figure A.12. Study on τ for FoV and Viewpoint adaptation using
+ λ = 0.01, 0.1, respectively. Here, |T| = 5 is used for the study.
+ [Plot content omitted: AP@0.5 of FoV and viewpoint adaptation plotted against λ and against τ.]
0NE5T4oBgHgl3EQfOQ6v/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
1dE2T4oBgHgl3EQfNQZD/content/2301.03734v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9b6764450acb73a62c1cc5a430c607c486f02c6ab813a615d5dc403922250c80
3
+ size 960181
1dE2T4oBgHgl3EQfNQZD/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4dd4898c3689fb02566b8b376f2a438ec316cdb0c5f5f94e59f94b7051bb9d04
3
+ size 1703981
1dE2T4oBgHgl3EQfNQZD/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:487512cc00f96d1a8420dc2604cc7d1d300721d1c2bfc89b4b9eb43daf987ef8
3
+ size 55919
1dFAT4oBgHgl3EQfCxxo/content/tmp_files/2301.08412v1.pdf.txt ADDED
@@ -0,0 +1,622 @@
1
+ Fair Credit Scorer through Bayesian Approach
2
+ Zhuo Zhao
3
+ Department of Applied Mathematics
4
+ Johns Hopkins University
5
+ zzhao62@jhu.edu
6
+ Abstract
7
+ Machine learning currently plays an increasingly important role in people’s lives
8
+ in areas such as credit scoring, auto-driving, disease diagnosing, and insurance
9
+ quoting. However, in many of these areas, machine learning models have exhibited
10
+ unfair behavior against some sub-populations, such as particular groups defined by
11
+ race, sex, and age. This unfair behavior can result from pre-existing
12
+ bias in the training dataset due to historical and social factors. In this paper, we
13
+ focus on a real-world application of credit scoring and construct a fair prediction
14
+ model by introducing latent variables to remove the correlation between protected
15
+ attributes, such as sex and age, and the observable feature inputs, including house
16
+ and job. For detailed implementation, we apply Bayesian approaches, including
17
+ the Markov Chain Monte Carlo simulation, to estimate our proposed fair model.
18
+ 1
19
+ Introduction
20
+ Nowadays, Machine Learning methods are used to automate decisions in a variety of areas, including
21
+ determining credit scores Nanni and Lumini [2009], classifying tumor components from MRI images
22
+ Lundervold and Lundervold [2019], detecting pedestrians on the road Dollar et al. [2011], and
23
+ understanding natural languages Goldberg and Levy [2014], etc. However, machine learning methods
24
+ are heavily dependent on data Mitchell and Mitchell [1997] and this data-dependent nature makes the
25
+ learned models sensitive to the latent bias existing in the training datasets Mehrabi et al. [2021]. Thus,
26
+ the final decisions made by the learned models are unfairly biased against certain sub-populations,
27
+ differentiated by some sensitive/protected attributes, such as race, sex, or age, etc. For example,
28
+ cameras sometimes fail to recognize whether Asians are blinking Sharp [2009], and a beauty
29
+ pageant judged by AI preferred light skin Guardian [2016]. However, we would expect AI to give
30
+ the same decision independent of the protected attributes, and thus we are concerned about the fairness
31
+ of machine learning methods Mehrabi et al. [2021].
32
+ In this paper, we focus on constructing fair machine learning models to predict the credit score,
33
+ using the German Credit Risk dataset Hoffman [2016] (Sec. 3). The goal is to predict the credit
34
+ score based on some observable variables, including housing and job information. However, this
35
+ personal financial information, such as income, housing, and savings, is usually highly correlated
36
+ to gender and age due to historical and social reasons Rennison and Planty [2003]. Therefore, it
37
+ is necessary to learn an effective model to filter the prediction bias against sex and age, caused by
38
+ the latent correlation between these observable variables and the protected attributes. In detail, we
39
+ analyze and compare, from the fairness perspective, the full model Montgomery et al. [2021], the
40
+ unaware model Dwork et al. [2012], and a fair model based on causality and counterfactuals Kusner et al.
41
+ [2017] (Sec. 4). Then, we apply the Markov Chain Monte Carlo (MCMC) simulation Mooney [1997]
42
+ and the Gibbs’ sampling Gelfand [2000] to solve the corresponding parameters in these models and
43
+ evaluate the performances (Sec. 5 and Sec. 6).
44
+ arXiv:2301.08412v1 [cs.LG] 20 Jan 2023
45
+
46
+ 2
47
+ Related work
48
+ Fairness. Many recent works (Calders and Verwer [2010], Bolukbasi et al. [2016], Dwork et al.
49
+ [2012], Hardt et al. [2016], Joseph et al. [2016], Kusner et al. [2017]) have been focusing on fairness
50
+ in machine learning algorithms. Bolukbasi et al. [2016] pointed out that there is a risk of amplifying
51
+ the bias introduced by the dataset if machine learning algorithms are used without taking steps to
52
+ handle the pre-existing bias. For example, in the word embedding, learned over Google News with
53
+ pre-existing gender stereotypes, the gender-neutral words widely spread along a latent embedding
54
+ direction capturing gender difference, such as "receptionist" falling far along the direction related
55
+ to "female" Bolukbasi et al. [2016]. Calders and Verwer [2010] modify the Naive Bayes classifier
56
+ by adding an independence restriction on sensitive attributes. Dwork et al. [2012] propose a
57
+ task-specific metric to evaluate the similarity between individuals relative to the classification task
58
+ and optimizes over the proposed metric with the goal that similar individuals are treated similarly
59
+ in the classification task. Kusner et al. [2017] focus on causal inference and counterfactuals,
60
+ introducing latent confounding variables, which are related to the observable variables but
61
+ independent of the protected attributes. Our work builds on the idea of Kusner et al. [2017] to
62
+ construct a fair prediction model over the German Credit Risk dataset Hoffman [2016].
63
+ 3
64
+ Dataset
65
+ We consider the Kaggle German Credit Risk dataset Hoffman [2016] to analyze and compare different
66
+ types of unfair models and our method for constructing a fair model using Bayesian approaches. In
67
+ this dataset, each entry represents a person who takes credit from a bank. The objective is to predict
68
+ the credit amount of a person based on his/her attributes. "Sex" and "age" are the sensitive/protected
69
+ attributes related to the bias during training and prediction in the unfairness problem. Feature "job" is
70
+ a binary variable representing whether a person has a job or not. Feature "house" is a binary variable
71
+ that indicates whether or not a person owns a house. The "credit amount" is our prediction target.
72
+ The dataset is composed of 1000 records. We randomly pick 800 records for training and 200 records
73
+ for testing. Figure 1 shows the detailed distributions of all these features in the whole dataset. In
74
+ Figure 2, we illustrate the covariance between all the input features and the prediction target. We can
75
+ observe a high correlation from the sensitive / protected attributes, i.e. "sex" and "age", to the "job"
76
+ and "house". Thus, it is necessary to consider the issue of fairness when constructing a prediction
77
+ model over "job" and "house".
78
+ Figure 1: Distribution of features in the German Credit Risk dataset Hoffman [2016]. "Age" and
79
+ "sex" are the sensitive / protected attributes. "Job" and "house" are the observable variables. "Credit
80
+ amount" is the prediction target.
81
+ [Figure 1 plot content omitted: histograms of age, sex, job, house, and credit amount.]
+ Figure 2: Illustration of the covariance matrix between all the input features and the prediction target.
141
+ Here, we observe a high correlation from "age" and "sex" (the sensitive/protected attributes) to "job"
142
+ and "house" (the observable variables).
143
+ 4
144
+ Methods
145
+ Full Model: The full model Montgomery et al. [2021] completely ignores fairness issues and
146
+ includes sensitive variables like sex and age in the learning process. It is easy to understand that the
147
+ full model is unfair because the predictions depend on sex and age. Figure 3 presents the directed
148
+ acyclic graph (DAG) of the full model. In the full model, all the features are assumed to be connected.
149
+ Unaware Model: The unaware model Dwork et al. [2012] does not use sensitive variables
150
+ in the learning and prediction process, but it is still unfair. Even though the sensitive variables do not
151
+ influence the target directly in the learning and prediction processes, they still have an indirect impact on
152
+ the target through the non-sensitive variables. In our example, to predict a person’s credit amount,
153
+ sex may influence whether a person can get a job. The job attribute still preserves the information of
154
+ sex. Simply ignoring the sex attribute will not fully eliminate its impact on the predictions. Figure 3
155
+ presents the DAG of an unaware model. The attributes under the grey circles are unobserved. In the
156
+ unaware model, sex and age are not directly connected with the credit amount, but they are connected
157
+ with job and house. It is still unfair because the change of sex and age will change the status of job
158
+ and house, and thus influence the credit amount predictions.
159
+ Figure 3: Two types of unfair models. Left: full model, which builds regression over all possible
160
+ attributes without the consideration of fairness. Right: unaware model, which excludes sensi-
161
+ tive/protected attributes, i.e. sex and age in our case.
162
+ Fair Model: In order to build a fair model, we need to find a proxy variable that is independent
163
+ of sensitive variables but still preserves the information in the credit amount prediction Kusner
164
+ et al. [2017]. We can introduce the concept of a latent confounding variable to resolve this issue.
165
+ [Figure 2 and Figure 3 graphics content omitted: the covariance heatmap and the DAGs of the full and unaware models.]
+ The confounding variable is a variable that influences both the independent and dependent
193
+ variables. In our fair model, we assume that there is an unobserved confounder C that reflects how
194
+ reliable a person is in paying back the loan. The confounder should be independent of the sensitive
195
+ variables to make the model fair. Figure 4 shows the DAG of the fair model structure. In the inference
196
+ stage, we assume that job, house, and credit amount are confounded by the unobserved reliability
197
+ level C and C is independent of sex and age. The reason is that sex and age can neither determine
198
+ nor be related to how reliable a person is in paying back loans. Meanwhile, reliability is co-related to
199
+ a person’s job performance, housing situation, and also credit amount. Then, in the prediction stage,
200
+ we only use the inferred C as our feature to predict the credit amount. In this way, the predicting
201
+ process does not contain any information about sex or age, and thus this procedure is an effective,
202
+ fair learning algorithm in our scenario.
203
+ Figure 4: DAG of the fair model. Here, we introduce the latent confounding variable "unobserved
204
+ reliability level", which is independent to "sex" and "age" (the sensitive/protected attributes) but
205
+ related to "job", "house", and "credit amount". Left: during the inference stage, we estimate this
206
+ latent "reliability" feature with Bayesian approaches. Right: during the prediction stage, we only use
207
+ this inferred "reliability" feature to predict the "credit amount".
208
+ 5
209
+ Experiments
210
+ We can represent the DAG of the fair model in a probabilistic way. We sample the two binary
211
+ variables, job and house from two Bernoulli distributions and sample the confounder from the normal
212
+ distribution. In the meantime, we choose the Poisson distribution as a prior for the credit amount.
213
+ Our choices of priors correspond to the nature of the data. The job and house features are binary. And
214
+ the credit amount is a positive attribute with a shape alike the Poisson distribution. The probabilistic
215
+ model can be written as:
216
+ Job ∼ Bernoulli(logit(bj + Sex × βj,s + Age × βj,a + C × βj,c))
217
+ (1)
218
+ House ∼ Bernoulli(logit(bh + Sex × βh,s + Age × βh,a + C × βh,c))
219
+ (2)
220
+ Credit ∼ Poisson(Exp(Sex × βc,s + Age × βc,a + C × βc,c))
221
+ (3)
222
+ C ∼ Normal(0, 1)
223
+ where
224
+ C ⊥ Sex,
225
+ C ⊥ Age
226
+ (4)
227
+ The parameters we need to find are in the set Θ = {βm,n, bm} where m = j, h and n = s, a, c. We
228
+ assume that these parameters are sampled from the normal distributions:
229
+ βm,n ∼ N(0, 1)
230
+ (5)
231
+ bm ∼ N(0, 1)
232
+ (6)
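As a minimal sketch of the generative model in Eqs. 1-4, the snippet below samples synthetic data with NumPy. The parameter values are illustrative placeholders, not the paper's estimates, and Bernoulli(logit(·)) is read as Bernoulli with an inverse-logit (sigmoid) link so the probabilities lie in [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # The paper writes Bernoulli(logit(...)); a valid probability needs
    # the inverse-logit, so we read it as the sigmoid.
    return 1.0 / (1.0 + np.exp(-x))

n = 1000
sex = rng.integers(0, 2, size=n)
age = rng.uniform(0.0, 1.0, size=n)
C = rng.normal(0.0, 1.0, size=n)  # latent reliability, sampled independently of sex and age

# Illustrative stand-ins for the b and beta parameters of Eqs. 1-3.
b_j, bj_s, bj_a, bj_c = 0.0, 0.3, -1.5, 3.5
b_h, bh_s, bh_a, bh_c = 0.0, -0.1, 1.0, 3.6
bc_s, bc_a, bc_c = -0.7, -0.2, 0.5

job = rng.binomial(1, sigmoid(b_j + sex * bj_s + age * bj_a + C * bj_c))
house = rng.binomial(1, sigmoid(b_h + sex * bh_s + age * bh_a + C * bh_c))
credit = rng.poisson(np.exp(sex * bc_s + age * bc_a + C * bc_c))
print(job.mean(), house.mean(), credit.mean())
```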
233
+ We implement the Metropolis–Hastings algorithm to infer the probabilistic model. M-H algorithm
234
+ Hastings [1970] is a Markov Chain Monte Carlo (MCMC) method for obtaining a sequence of
235
+ [Figure 4 graphics content omitted: DAGs of the fair model in its inference and prediction stages.]
+ Algorithm 1 Infer C by Metropolis–Hastings
249
+ for i = 1 to N do
+     Choose J(C*_i | C^(s)_i) = uniform(C^(s)_i − δ, C^(s)_i + δ);
+     Set an initial state C^(0)_i;
+     for s = 1 to 5000 do
+         Sample C*_i ∼ J(C*_i | C^(s)_i);
+         Compute the acceptance ratio r = p(C*_i | y) / p(C^(s)_i | y) = [p(y | C*_i) p(C*_i)] / [p(y | C^(s)_i) p(C^(s)_i)];
+         Sample u ∼ uniform(0, 1);
+         if u < r then
+             C^(s+1)_i = C*_i;
+         else
+             C^(s+1)_i = C^(s)_i;
+         end if
+     end for
+ end for
295
+ random samples from a probability distribution from which direct sampling is difficult. Algorithm 1
296
+ explains how to infer the reliability level C.
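A runnable sketch of the sampler in Algorithm 1, using the same uniform random-walk proposal and acceptance ratio. The target density here is a stand-in: a standard normal matching the N(0, 1) prior on C, whereas the actual target p(C_i | y) also folds in the Bernoulli and Poisson likelihoods above.

```python
import math
import random

random.seed(0)

def metropolis_hastings(log_target, x0, steps=5000, delta=1.0):
    """Random-walk Metropolis-Hastings mirroring Algorithm 1: a
    uniform(x - delta, x + delta) proposal and the ratio r = p(x*|y)/p(x|y)."""
    x = x0
    samples = []
    for _ in range(steps):
        x_star = random.uniform(x - delta, x + delta)   # propose C*
        log_r = log_target(x_star) - log_target(x)      # log acceptance ratio
        if random.random() < math.exp(min(0.0, log_r)):  # accept with prob min(1, r)
            x = x_star
        samples.append(x)
    return samples

# Stand-in target: standard normal log-density (the prior placed on C).
draws = metropolis_hastings(lambda c: -0.5 * c * c, x0=3.0)
mean_after_burn_in = sum(draws[1000:]) / len(draws[1000:])
print(mean_after_burn_in)
```

Working in log space avoids underflow when the likelihood terms of the real model are multiplied in.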
297
+ Once we obtain the posteriors of the inferred reliability level C, we can fit a new model using kernel
298
+ g(.) based on the C in the prediction stage. In our experiment, since there is a nonlinear relationship
299
+ between credit amount and "Reliability Level" in our inference stage setup (Poisson), we decide to
300
+ use a random forest as the kernel function g(.) in our second-stage prediction.
301
+ Credit ∼ g(C)
302
+ (7)
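The two-stage prediction of Eq. 7 can be sketched as below; scikit-learn's RandomForestRegressor stands in for the kernel g(.), and the reliability scores C are simulated here since the real ones come from the MCMC run.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulated stand-in for the posterior-mean reliability level C that the
# MCMC run produces (in the paper, Algorithm 1 infers these values).
n = 1000
C = rng.normal(0.0, 1.0, size=n)
credit = rng.poisson(np.exp(7.0 + 0.5 * C)).astype(float)  # credit driven by C alone

# Second-stage fair predictor: g(.) sees only C, never sex or age, so the
# final predictions carry no protected-attribute information.
X_train, y_train = C[:800, None], credit[:800]
X_test, y_test = C[800:, None], credit[800:]

g = RandomForestRegressor(n_estimators=100, random_state=0)
g.fit(X_train, y_train)
print("train R2:", round(g.score(X_train, y_train), 3))
print("test  R2:", round(g.score(X_test, y_test), 3))
```

The 800/200 split mirrors the train/test split used in Sec. 3; `score` returns the R2 reported in Table 2.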
303
+ 6
304
+ Results
305
+ In this section, we provide experimental results and a discussion of the MCMC process performance.
306
+ Specifically, in Sec. 6.1, we first present the MCMC estimation result and the convergence analysis
307
+ on the fair model’s latent confounding variable C and parameters. Then, we compare the prediction
308
+ and fairness performance across the three types of models in Sec. 6.2.
309
+ 6.1
310
+ Fair model’s MCMC performance:
311
+ Figure 5: Auto-correlation plots of parameters in Eq. 1 and Eq. 2 throughout the MCMC process.
312
+ "alpha" refers to the constant offset term b in the equations.
313
+ In Figure 5, we illustrate the auto-correlation plot of the model’s parameters in Eq. 1 and Eq. 2. We
314
+ observe a clear decrease in auto-correlation throughout the MCMC process. Thus, this is an efficient
315
+ MCMC process that leads to convergence. Further, in Figure 6, we provide the posterior estimation
316
+ and the trace plot of the fair model parameters throughout the MCMC process. Though we still
317
+ [Figure 5 plot content omitted: auto-correlation curves for the alpha and beta parameters of the job and house equations.]
+ Figure 6: Posterior estimation (left column) and trace plot (right column) of parameters in Eq. 1 and
382
+ Eq. 2 throughout the MCMC process. "alpha" refers to the constant offset term b in the equations.
383
+ observe some fluctuations until the end of the process, this is reasonable and acceptable. The
384
+ reason is that we are working with a real-world dataset rather than a simulated one. Therefore,
385
+ it is impossible to make our assumed distributions perfectly capture the behavior of the real-world
386
+ dataset. Then, in Table 1, we provide the confidence interval over the posterior estimation of the fair
387
+ model’s parameters.
388
+ 6.2
389
+ Performance comparison across models:
390
+ In this section, we compare how three distinct models perform while making predictions. In Table
391
+ 2, we present the R2 of three models in both training and testing environments. The full model
392
+ outperforms the unaware model in both fitting and predicting by including sensitive information. It is
393
+ surprising to see that the fair model outperforms the other two unfair models with R2 = 0.801 in the
394
+ training set and R2 = 0.768 in the testing set. It turns out that our fair model not only resolves
395
+ the fairness issue but also distills the information on the reliability level. The fair model is robust enough
396
+ to be used to make fair and accurate predictions.
397
+ 7
398
+ Conclusion
399
+ In this paper, we have presented a fair model focusing on predicting the German credit score from
400
+ the job and housing features. Specifically, we introduce the latent confounding variable
401
+ "reliability level", which is independent of the protected attributes, i.e., "sex" and "age", but related
402
+ to other observable variables and the prediction goal. For implementation, we apply the MCMC
403
+ [Figure 6 plot content omitted: posterior densities and trace plots for beta_job, alpha_house, beta_house, beta_credit, and C.]
+        std    5%     median  95%    ess_bulk  ess_tail
+ bj     1.02   -1.66   0.03    1.71  4643.63   3709.55
+ βj,s   0.98   -1.32   0.27    1.88  7128.36   3907.04
+ βj,a   0.65   -2.64  -1.57   -0.50  1502.60   2245.58
+ βj,c   0.47    2.82   3.46    4.36  2058.64   2494.02
+ bh     1.01   -1.61   0.03    1.67  5113.35   3167.94
+ βh,s   0.99   -1.73  -0.11    1.55  5506.02   3900.82
+ βh,a   0.67   -0.04   1.05    2.17  1625.86   2583.99
+ βh,c   0.46    3.00   3.65    4.50  1896.31   2939.44
+ βc,s   0.54   -7.78  -6.85   -5.98  3255.68   3206.54
+ βc,a   0.52   -3.17  -2.26   -1.46  4326.49   3800.31
+ βc,c   0.23   -0.37   0.01    0.38  4455.72   3211.78
547
+ Table 1: The confidence intervals of the parameters estimated in Eq. 1 and Eq. 2 through the MCMC
548
+ process.
549
+ R2        Full Model   Unaware Model   Fair Model (Random Forest Kernel)
+ Training  0.597        0.466           0.801
+ Testing   0.521        0.424           0.768
561
+ Table 2: The R2 of three types of models defined in Sec. 4.
562
+ approach to solve for the latent confounding variable and the parameters of the model. Compared
563
+ with traditional models, our model effectively eliminates the bias related to sex and age and thus
564
+ achieves a fair prediction of the credit amount. For future work, we recommend trying different
565
+ types of assumptions on the distribution for the variables over the German Credit Risk dataset and
566
+ checking the effects of the choice of distributions on the convergence of the MCMC process and
567
+ the final prediction.
568
+ References
569
+ Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. Man is
570
+ to computer programmer as woman is to homemaker? debiasing word embeddings. Advances in
571
+ neural information processing systems, 29, 2016.
572
+ Toon Calders and Sicco Verwer. Three naive bayes approaches for discrimination-free classification.
573
+ Data mining and knowledge discovery, 21(2):277–292, 2010.
574
+ Piotr Dollar, Christian Wojek, Bernt Schiele, and Pietro Perona. Pedestrian detection: An evaluation
575
+ of the state of the art. IEEE transactions on pattern analysis and machine intelligence, 34(4):
576
+ 743–761, 2011.
577
+ Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through
578
+ awareness. In Proceedings of the 3rd innovations in theoretical computer science conference,
579
+ pages 214–226, 2012.
580
+ Alan E Gelfand. Gibbs sampling. Journal of the American statistical Association, 95(452):1300–1304,
581
+ 2000.
582
+ Yoav Goldberg and Omer Levy. word2vec explained: deriving mikolov et al.’s negative-sampling
583
+ word-embedding method. arXiv preprint arXiv:1402.3722, 2014.
584
+ The Guardian. A beauty contest was judged by AI and the robots didn't like dark skin, Sep 2016.
+ URL https://www.theguardian.com/technology/2016/sep/08/
+ artificial-intelligence-beauty-contest-doesnt-like-black-people.
590
+ Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. Advances
591
+ in neural information processing systems, 29, 2016.
592
+ W Keith Hastings. Monte carlo sampling methods using markov chains and their applications. 1970.
593
+ 7
594
+
595
+ Donald Hoffman. German credit risk, Dec 2016. URL https://www.kaggle.com/datasets/
596
+ uciml/german-credit.
597
+ Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. Rawlsian fairness
598
+ for machine learning. arXiv preprint arXiv:1610.09559, 1(2):19, 2016.
599
+ Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. Advances in
600
+ neural information processing systems, 30, 2017.
601
+ Alexander Selvikvåg Lundervold and Arvid Lundervold. An overview of deep learning in medical
602
+ imaging focusing on mri. Zeitschrift für Medizinische Physik, 29(2):102–127, 2019.
603
+ Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey
604
+ on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6):1–35, 2021.
605
+ Tom M Mitchell and Tom M Mitchell. Machine learning, volume 1. McGraw-hill New York, 1997.
606
+ Douglas C Montgomery, Elizabeth A Peck, and G Geoffrey Vining. Introduction to linear regression
607
+ analysis. John Wiley & Sons, 2021.
608
+ Christopher Z Mooney. Monte carlo simulation. Number 116. Sage, 1997.
609
+ Loris Nanni and Alessandra Lumini. An experimental comparison of ensemble of classifiers for
610
+ bankruptcy prediction and credit scoring. Expert systems with applications, 36(2):3028–3033,
611
+ 2009.
612
+ Callie Rennison and Mike Planty. Nonlethal intimate partner violence: Examining race, gender, and
613
+ income patterns. Violence and victims, 18(4):433–443, 2003.
614
+ Gwen Sharp. Nikon camera says Asians: People are always blinking - Sociological Images,
+ 2009. URL https://thesocietypages.org/socimages/2009/05/29/
+ nikon-camera-says-asians-are-always-blinking/.
621
+ 8
622
+
1dFAT4oBgHgl3EQfCxxo/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,374 @@
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf,len=373
2
+ page_content='Fair Credit Scorer through Bayesian Approach Zhuo Zhao Department of Applied Mathematics Johns Hopkins University zzhao62@jhu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
3
+ page_content='edu Abstract Machine learning currently plays an increasingly important role in people’s lives in areas such as credit scoring, auto-driving, disease diagnosing, and insurance quoting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
4
+ page_content=' However, in many of these areas, machine learning models have performed unfair behaviors against some sub-populations, such as some particular groups of race, sex, and age.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
5
+ page_content=' These unfair behaviors can be on account of the pre-existing bias in the training dataset due to historical and social factors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
6
+ page_content=' In this paper, we focus on a real-world application of credit scoring and construct a fair prediction model by introducing latent variables to remove the correlation between protected attributes, such as sex and age, with the observable feature inputs, including house and job.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
7
+ page_content=' For detailed implementation, we apply Bayesian approaches, including the Markov Chain Monte Carlo simulation, to estimate our proposed fair model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
1 Introduction

Nowadays, machine learning methods are used to automate decisions in a variety of areas, including determining credit scores Nanni and Lumini [2009], classifying tumor components from MRI images Lundervold and Lundervold [2019], detecting pedestrians on the road Dollar et al. [2011], and understanding natural language Goldberg and Levy [2014]. However, machine learning methods are heavily dependent on data Mitchell and Mitchell [1997], and this data-dependent nature makes the learned models sensitive to latent bias in the training datasets Mehrabi et al. [2021]. As a result, the final decisions made by the learned models can be unfairly biased against certain sub-populations, differentiated by sensitive/protected attributes such as race, sex, or age. For example, cameras sometimes fail to recognize whether Asian users have blinked Sharp [2009], and a beauty pageant judged by AI preferred light skin Guardian [2016]. We would instead expect AI to make the same decision regardless of the protected attributes, which motivates the study of fairness in machine learning Mehrabi et al. [2021].

In this paper, we focus on constructing fair machine learning models to predict credit scores using the German Credit Risk dataset Hoffman [2016] (Sec. 3). The goal is to predict the credit amount from observable variables, including housing and job information. However, such personal financial information (income, housing, savings) is usually highly correlated with gender and age due to historical and social reasons Rennison and Planty [2003]. It is therefore necessary to learn an effective model that filters out the prediction bias against sex and age caused by the latent correlation between these observable variables and the protected attributes. In detail, we analyze and compare, from the fairness perspective, the full model Montgomery et al. [2021], the unaware model Dwork et al. [2012], and a fair model based on causal inference and counterfactuals Kusner et al. [2017] (Sec. 4). We then apply Markov Chain Monte Carlo (MCMC) simulation Mooney [1997] and Gibbs sampling Gelfand [2000] to estimate the parameters of these models and evaluate their performance (Sec. 5 and Sec. 6).

arXiv:2301.08412v1 [cs.LG] 20 Jan 2023

2 Related work
Fairness. Many recent works (Calders and Verwer [2010], Bolukbasi et al. [2016], Dwork et al. [2012], Hardt et al. [2016], Joseph et al. [2016], Kusner et al. [2017]) have focused on fairness in machine learning algorithms. Bolukbasi et al. [2016] pointed out that machine learning algorithms risk amplifying the bias introduced by the dataset if no effort is made to handle pre-existing bias. For example, in a word embedding learned over Google News text with pre-existing gender stereotypes, gender-neutral words spread widely along a latent embedding direction capturing gender difference, with "receptionist" falling far along the direction related to "female" Bolukbasi et al. [2016]. Calders and Verwer [2010] modify the naive Bayes classifier by adding an independence restriction with respect to the sensitive attributes. Dwork et al. [2012] propose a task-specific metric to evaluate the similarity between individuals relative to the classification task and optimize over this metric so that similar individuals are treated similarly. Kusner et al. [2017] focus on causal inference and counterfactuals, introducing latent confounding variables that are related to the observable variables but independent of the protected attributes. Our work builds on the idea of Kusner et al. [2017] to construct a fair prediction model over the German Credit Risk dataset Hoffman [2016].
3 Dataset

We use the Kaggle German Credit Risk dataset Hoffman [2016] to analyze and compare different types of unfair models and our method for constructing a fair model using Bayesian approaches. In this dataset, each entry represents a person who takes credit from a bank. The objective is to predict the credit amount of a person from his/her attributes. "Sex" and "age" are the sensitive/protected attributes related to bias during training and prediction. The feature "job" is a binary variable representing whether a person has a job; the feature "house" is a binary variable indicating whether a person owns a house. The "credit amount" is our prediction target. The dataset contains 1000 records; we randomly pick 800 records for training and 200 for testing. Figure 1 shows the distributions of these features over the whole dataset, and Figure 2 illustrates the covariance between all input features and the prediction target. We observe a high correlation from the sensitive/protected attributes, i.e. "sex" and "age", to "job" and "house". Thus, it is necessary to consider fairness when constructing a prediction model over "job" and "house".

Figure 1: Distribution of features in the German Credit Risk dataset Hoffman [2016]. "Age" and "sex" are the sensitive/protected attributes. "Job" and "house" are the observable variables. "Credit amount" is the prediction target.
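The preprocessing and split described above can be sketched as follows. The table here is a synthetic stand-in with assumed column names (`sex`, `age`, `job`, `house`, `credit_amt`), not the actual Kaggle file; only the binary encoding and the random 800/200 split mirror the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 1000-record German Credit Risk table;
# column names and value ranges are illustrative assumptions.
n = 1000
table = {
    "sex": rng.integers(0, 2, n),        # protected attribute, binary-encoded
    "age": rng.integers(19, 76, n),      # protected attribute
    "job": rng.integers(0, 2, n),        # has a job or not
    "house": rng.integers(0, 2, n),      # owns a house or not
    "credit_amt": rng.poisson(3000, n),  # positive prediction target
}

# Random 800/200 train/test split, as described in the paper.
perm = rng.permutation(n)
train_idx, test_idx = perm[:800], perm[800:]

features = ["sex", "age", "job", "house"]
X_train = np.column_stack([table[c][train_idx] for c in features])
y_train = table["credit_amt"][train_idx]
X_test = np.column_stack([table[c][test_idx] for c in features])
y_test = table["credit_amt"][test_idx]
```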
Figure 2: Illustration of the covariance matrix between all input features and the prediction target. Here, we observe a high correlation from "age" and "sex" (the sensitive/protected attributes) to "job" and "house" (the observable variables).
4 Methods

Full Model: The full model Montgomery et al. [2021] completely ignores the fairness issue and includes the sensitive variables, sex and age, in the learning process. It is easy to see that the full model is unfair, because its predictions depend directly on sex and age. Figure 3 presents the directed acyclic graph (DAG) of the full model, in which all features are assumed to be connected.

Unaware Model: The unaware model Dwork et al. [2012] does not use the sensitive variables in the learning and prediction process, but it is still unfair. Even though the sensitive variables do not influence the target directly, they still have an indirect impact on the target through the non-sensitive variables. In our example of predicting a person's credit amount, sex may influence whether a person can get a job, so the job attribute still preserves information about sex; simply ignoring the sex attribute does not fully eliminate its impact on the predictions. Figure 3 presents the DAG of the unaware model, where the attributes in grey circles are unobserved. Here, sex and age are not directly connected with the credit amount, but they are connected with job and house. The model is still unfair because a change in sex or age changes the status of job and house, and thus influences the credit amount predictions.

Figure 3: Two types of unfair models. Left: the full model, which builds a regression over all available attributes without considering fairness. Right: the unaware model, which excludes the sensitive/protected attributes, i.e. sex and age in our case.
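This proxy effect can be demonstrated with a small simulation (all probabilities and coefficients below are made-up for illustration): sex is excluded from the regression, yet the average prediction still differs between the sexes, because job carries information about sex.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

sex = rng.integers(0, 2, n)  # protected attribute
# Job depends on sex (historically biased hiring; illustrative rates).
job = (rng.random(n) < np.where(sex == 1, 0.7, 0.4)).astype(float)
# Credit amount is generated from job only.
credit = 1000 + 2000 * job + rng.normal(0, 100, n)

# "Unaware" model: regress credit on job alone, with sex excluded.
X = np.column_stack([np.ones(n), job])
coef, *_ = np.linalg.lstsq(X, credit, rcond=None)
pred = X @ coef

# The mean prediction still differs by sex: job acts as a proxy.
gap = pred[sex == 1].mean() - pred[sex == 0].mean()
print(f"prediction gap between sexes: {gap:.0f}")
```

With these illustrative numbers the gap is roughly 2000 x (0.7 - 0.4), i.e. far from zero even though sex never enters the regression.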
Fair Model: To build a fair model, we need a proxy variable that is independent of the sensitive variables but still preserves the information useful for credit amount prediction Kusner et al. [2017]. We introduce a latent confounding variable to resolve this issue. A confounding variable is a variable that influences both the independent and the dependent variables. In our fair model, we assume there is an unobserved confounder C that reflects how reliable a person is in paying back a loan; for the model to be fair, this confounder must be independent of the sensitive variables. Figure 4 shows the DAG of the fair model. In the inference stage, we assume that job, house, and credit amount are confounded by the unobserved reliability level C, and that C is independent of sex and age: sex and age can neither determine nor be related to how reliable a person is in paying back loans, while reliability is correlated with a person's job status, housing situation, and credit amount. In the prediction stage, we use only the inferred C as the feature to predict the credit amount. In this way, the prediction process contains no information about sex or age, making the procedure an effective, fair learning algorithm in our scenario.

Figure 4: DAG of the fair model. Here, we introduce the latent confounding variable "unobserved reliability level", which is independent of "sex" and "age" (the sensitive/protected attributes) but related to "job", "house", and "credit amount". Left: during the inference stage, we estimate this latent reliability feature with Bayesian approaches. Right: during the prediction stage, we use only the inferred reliability feature to predict the "credit amount".
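Read as a generative process, the fair-model DAG can be sketched as follows. The coefficients are arbitrary placeholders and `sigmoid` stands in for the inverse logit, so this illustrates the causal structure (C independent of sex and age, both flowing into job, house, and credit) rather than any fitted model.

```python
import numpy as np

def sigmoid(x):
    # Inverse-logit link mapping real values to probabilities.
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
n = 1000

sex = rng.integers(0, 2, n)     # protected attribute
age = rng.uniform(0, 1, n)      # protected attribute, rescaled to [0, 1]
C = rng.normal(0, 1, n)         # reliability level, independent of sex/age

# Job and house are Bernoulli given sex, age, and C; credit is Poisson
# with a log-linear rate (all coefficients are made-up placeholders).
job = rng.binomial(1, sigmoid(-0.5 + 0.8 * sex + 0.6 * age + 1.0 * C))
house = rng.binomial(1, sigmoid(-0.2 + 0.3 * sex + 0.9 * age + 1.0 * C))
credit = rng.poisson(np.exp(7.0 + 0.1 * sex + 0.2 * age + 0.5 * C))
```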
5 Experiments

We represent the DAG of the fair model probabilistically: we sample the two binary variables, job and house, from Bernoulli distributions, sample the confounder from a normal distribution, and choose a Poisson distribution as the prior for the credit amount. These choices reflect the nature of the data: the job and house features are binary, and the credit amount is a positive attribute whose shape resembles a Poisson distribution. The probabilistic model can be written as:

Job ∼ Bernoulli(logit(b_j + Sex × β_{j,s} + Age × β_{j,a} + C × β_{j,c}))    (1)
House ∼ Bernoulli(logit(b_h + Sex × β_{h,s} + Age × β_{h,a} + C × β_{h,c}))    (2)
Credit ∼ Poisson(exp(Sex × β_{c,s} + Age × β_{c,a} + C × β_{c,c}))    (3)
C ∼ Normal(0, 1), where C ⊥ Sex and C ⊥ Age    (4)

The parameters to estimate form the set Θ = {β_{m,n}, b_j, b_h}, where m ∈ {j, h, c} and n ∈ {s, a, c}. We assume these parameters are sampled from standard normal distributions:

β_{m,n} ∼ N(0, 1)    (5)
b_m ∼ N(0, 1)    (6)

We implement the Metropolis–Hastings algorithm to infer the probabilistic model.
The M–H algorithm Hastings [1970] is a Markov Chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution from which direct sampling is difficult.

Algorithm 1 Infer C by Metropolis–Hastings
for i = 1 to N do
    Choose J(C*_i | C^(s)_i) = uniform(C^(s)_i − δ, C^(s)_i + δ);
    Set an initial state C^(0)_i;
    for s = 1 to 5000 do
        Sample C*_i ∼ J(C*_i | C^(s)_i);
        Compute the acceptance ratio r = p(C*_i | y) / p(C^(s)_i | y) = [p(y | C*_i) p(C*_i)] / [p(y | C^(s)_i) p(C^(s)_i)];
        Sample u ∼ uniform(0, 1);
        if u < r then
            C^(s+1)_i = C*_i;
        else
            C^(s+1)_i = C^(s)_i;
        end if
    end for
end for
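A runnable sketch of the per-record Metropolis–Hastings update above. It keeps the N(0, 1) prior on C_i from Eq. 4 but, to stay self-contained, replaces the full likelihood of Eqs. 1–3 with a single stand-in Gaussian term p(y | C_i) = N(y; C_i, 1); the proposal width δ is an arbitrary choice.

```python
import numpy as np

def mh_infer_c(y_i, n_steps=5000, delta=1.0, seed=0):
    """Metropolis-Hastings chain for one record's reliability C_i."""
    rng = np.random.default_rng(seed)

    def log_target(c):
        # log p(C) + log p(y | C), up to an additive constant:
        # N(0, 1) prior plus the stand-in Gaussian likelihood N(y; c, 1).
        return -0.5 * c**2 - 0.5 * (y_i - c) ** 2

    c = 0.0                                  # initial state C_i^(0)
    samples = np.empty(n_steps)
    for s in range(n_steps):
        c_star = rng.uniform(c - delta, c + delta)   # symmetric proposal J
        log_r = log_target(c_star) - log_target(c)   # log acceptance ratio
        if np.log(rng.uniform()) < log_r:            # accept with prob min(1, r)
            c = c_star
        samples[s] = c
    return samples

chain = mh_infer_c(y_i=2.0)
# For this toy target the posterior is N(1.0, 0.5), so after discarding
# burn-in the chain mean should settle near 1.0.
print(round(chain[1000:].mean(), 2))
```

Because the proposal is symmetric, the J terms cancel in the acceptance ratio, which is why only the (unnormalized) target densities appear, exactly as in Algorithm 1.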
Algorithm 1 explains how we infer the reliability level C. Once we obtain the posterior of the inferred reliability level C, we fit a new model using a kernel g(·) on C in the prediction stage. In our experiment, since the inference-stage setup implies a nonlinear (Poisson) relationship between the credit amount and the reliability level, we use a random forest as the kernel function g(·) in the second-stage prediction:

Credit ∼ g(C)    (7)

6 Results

In this section, we present experimental results and a discussion of the MCMC performance.
Specifically, in Sec. 6.1 we first present the MCMC estimation results and a convergence analysis for the fair model's latent confounding variable C and parameters. We then compare the prediction and fairness performance of the three types of models in Sec. 6.2.

6.1 Fair model's MCMC performance

Figure 5: Auto-correlation plots of the parameters in Eq. 1 and Eq. 2 throughout the MCMC process. "alpha" refers to the constant offset term b in the equations.

In Figure 5, we show the auto-correlation plots of the model parameters in Eq. 1 and Eq. 2. We observe a clear decrease in auto-correlation throughout the MCMC process; thus the MCMC process is efficient and leads to convergence.
+ page_content=' Further, in Figure 6, we provide the posterior estimation and the trace plot of the fair model parameters throughout the MCMC process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
165
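The auto-correlation diagnostic shown in Figure 5 can be reproduced directly from raw chain samples. A minimal sketch, assuming the draws for one parameter are available as a 1-D NumPy array (the paper's actual plotting pipeline is not shown):

```python
import numpy as np

def autocorr(chain, max_lag=100):
    """Normalized auto-correlation of an MCMC chain for lags 0..max_lag."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    acf = np.empty(max_lag + 1)
    for lag in range(max_lag + 1):
        # Correlation between the chain and a lagged copy of itself
        acf[lag] = np.dot(x[: len(x) - lag], x[lag:]) / len(x) / var
    return acf

# A well-mixing chain shows auto-correlation decaying toward zero,
# as observed for the parameters in Figure 5.
rng = np.random.default_rng(0)
iid = rng.normal(size=5000)                 # ideal chain: independent draws
sticky = np.cumsum(rng.normal(size=5000))   # poorly mixing random walk

print(autocorr(iid, 10)[1])     # near 0
print(autocorr(sticky, 10)[1])  # near 1
```

Plotting `autocorr(chain)` against lag for each parameter yields exactly the kind of panel shown in Figure 5.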
Figure 6: Posterior estimation (left column) and trace plot (right column) of parameters in Eq. 1 and Eq. 2 throughout the MCMC process. "alpha" refers to the constant offset term b in the equations.

Though we still observe some fluctuations at the end of the process, this is reasonable and acceptable: we are working with a real-world dataset rather than a simulated one, so it is impossible for our assumed distributions to perfectly capture the dataset's behavior. In Table 1, we provide the confidence intervals over the posterior estimates of the fair model's parameters.
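The summary columns reported in Table 1 (standard deviation, 5% quantile, median, 95% quantile) can be computed directly from the posterior draws. A minimal sketch assuming the draws for one parameter are in a NumPy array; the `ess_bulk`/`ess_tail` columns come from the sampler's diagnostics (e.g. an ArviZ-style summary) and are not recomputed here:

```python
import numpy as np

def posterior_summary(draws):
    """std, 5% quantile, median, and 95% quantile of posterior draws."""
    d = np.asarray(draws, dtype=float)
    return {
        "std": d.std(ddof=1),
        "5%": np.quantile(d, 0.05),
        "median": np.median(d),
        "95%": np.quantile(d, 0.95),
    }

# Stand-in chain for one parameter (not the paper's actual draws).
rng = np.random.default_rng(1)
draws = rng.normal(loc=1.0, scale=0.5, size=4000)
s = posterior_summary(draws)
print({k: round(v, 2) for k, v in s.items()})
```

Applying this to each parameter's chain reproduces the layout of Table 1, one row per parameter.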
6.2 Performance comparison across models

In this section, we compare how the three models perform when making predictions.
In Table 2, we present the R² of the three models in both the training and testing environments. By including sensitive information, the full model outperforms the unaware model in both fitting and prediction. Surprisingly, the fair model outperforms the other two (unfair) models, with R² = 0.801 on the training set and R² = 0.768 on the testing set. It turns out that our fair model not only resolves the fairness issue but also distills the information into the reliability level. The fair model is thus robust enough to be used to make fair and accurate predictions.
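The R² values in Table 2 follow the standard coefficient-of-determination definition. A minimal sketch; the inputs below are placeholders, not the paper's actual fitted predictions:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([3.0, 5.0, 7.0, 9.0])
assert r_squared(y, y) == 1.0                      # perfect predictions
assert r_squared(y, np.full(4, y.mean())) == 0.0   # mean-only baseline
```

Evaluating `r_squared` on held-out data, as in the "Testing" row of Table 2, is what distinguishes predictive performance from in-sample fit.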
7 Conclusion

In this paper, we have presented a fair model for predicting German credit scores that considers the job and housing features. Specifically, we introduce the latent confounding variable "reliability level", which is independent of the protected attributes, i.e., "sex" and "age", but related to the other observable variables and the prediction goal.
For implementation, we apply the MCMC approach to solve for the latent confounding variable and the parameters of the model. Compared with traditional models, our model effectively eliminates the bias related to sex and age and thus achieves a fair prediction of the credit amount. For future work, we recommend trying different assumptions on the distributions of the variables in the German Credit Risk dataset and examining how the choice of distributions affects the convergence of the MCMC process and the final prediction.

Table 1: The confidence intervals of the parameters estimated in Eq. 1 and Eq. 2 through the MCMC process.

        std     5%       median   95%      ess_bulk   ess_tail
bj      1.02    1.66     0.03     1.71     4643.63    3709.55
βj,s    0.98    1.32     0.27     1.88     7128.36    3907.04
βj,a    0.65    2.64     1.57     0.50     1502.60    2245.58
βj,c    0.47    2.82     3.46     4.36     2058.64    2494.02
bh      1.01    1.61     0.03     1.67     5113.35    3167.94
βh,s    0.99    1.73     0.11     1.55     5506.02    3900.82
βh,a    0.67    0.04     1.05     2.17     1625.86    2583.99
βh,c    0.46    3.00     3.65     4.50     1896.31    2939.44
βc,s    0.54    7.78     6.85     5.98     3255.68    3206.54
βc,a    0.52    3.17     2.26     1.46     4326.49    3800.31
βc,c    0.23    0.37     0.01     0.38     4455.72    3211.78

Table 2: The R² of the three types of models defined in Sec. 4.

            Full Model   Unaware Model   Fair Model (Random Forest Kernel)
Training    0.597        0.466           0.801
Testing     0.521        0.424           0.768
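The MCMC approach referenced throughout is, at its core, a Metropolis–Hastings sampler. A minimal random-walk sketch for a single scalar parameter; the target density here is a standard-normal stand-in, not the paper's actual posterior over the reliability level and regression parameters:

```python
import numpy as np

def metropolis(log_post, init, n_iter=20000, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a scalar parameter."""
    rng = np.random.default_rng(seed)
    x, lp = init, log_post(init)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.normal()        # symmetric proposal
        lp_prop = log_post(prop)
        # Accept with probability min(1, post(prop) / post(x))
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Target: standard normal log-density (up to an additive constant).
chain = metropolis(lambda t: -0.5 * t * t, init=5.0)
burned = chain[5000:]                         # discard burn-in
print(burned.mean(), burned.std())            # should be close to 0 and 1
```

Swapping in a different `log_post`, as the future-work paragraph suggests for alternative distributional assumptions, changes only the target density; the sampling loop itself is unchanged.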
363
+ page_content=' Sage, 1997.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
364
+ page_content=' Loris Nanni and Alessandra Lumini.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
365
+ page_content=' An experimental comparison of ensemble of classifiers for bankruptcy prediction and credit scoring.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
366
+ page_content=' Expert systems with applications, 36(2):3028–3033, 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
367
+ page_content=' Callie Rennison and Mike Planty.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
368
+ page_content=' Nonlethal intimate partner violence: Examining race, gender, and income patterns.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
369
+ page_content=' Violence and victims, 18(4):433–443, 2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
370
+ page_content=' Gwen Sharp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
371
+ page_content=' Nikon camera says asians: People are always blinking - sociolog- ical images, 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
372
+ page_content=' URL https://thesocietypages.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
373
+ page_content='org/socimages/2009/05/29/ nikon-camera-says-asians-are-always-blinking/.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
374
+ page_content=' 8' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/1dFAT4oBgHgl3EQfCxxo/content/2301.08412v1.pdf'}
29FAT4oBgHgl3EQflB1V/content/tmp_files/2301.08614v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
29FAT4oBgHgl3EQflB1V/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
39AyT4oBgHgl3EQfP_aH/content/2301.00036v1.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e02d526b52fba9c7e1542cc47d63361547c7bc9faeb85e9c97721f2a1a58cf90
3
+ size 791302
3dFQT4oBgHgl3EQfGzVw/content/tmp_files/2301.13246v1.pdf.txt ADDED
@@ -0,0 +1,867 @@
1
+ CONVERSATIONAL AUTOMATED PROGRAM REPAIR
2
+ Chunqiu Steven Xia, Lingming Zhang
3
+ University of Illinois at Urbana-Champaign
4
+ {chunqiu2, lingming}@illinois.edu
5
+ ABSTRACT
6
+ Automated Program Repair (APR) can help developers automatically generate
7
+ patches for bugs. Due to the impressive performance obtained using Large Pre-
8
+ Trained Language Models (LLMs) on many code related tasks, researchers have
9
+ started to directly use LLMs for APR. However, prior approaches simply repeat-
10
+ edly sample the LLM given the same constructed input/prompt created from the
11
+ original buggy code, which not only leads to generating the same incorrect patches
12
+ repeatedly but also misses the critical information in testcases. To address these lim-
13
+ itations, we propose conversational APR, a new paradigm for program repair that
14
+ alternates between patch generation and validation in a conversational manner.
15
+ In conversational APR, we iteratively build the input to the model by combining
16
+ previously generated patches with validation feedback. As such, we leverage the
17
+ long-term context window of LLMs to not only avoid generating previously incor-
18
+ rect patches but also incorporate validation feedback to help the model understand
19
+ the semantic meaning of the program under test. We evaluate 10 different LLMs,
20
+ including the newly developed ChatGPT model to demonstrate the improvement
21
+ of conversational APR over the prior LLM for APR approach.
22
+ 1
23
+ INTRODUCTION
24
+ Bugs in software can cause significant financial losses Matteson (2018) and create dangerous health
25
+ and safety problems Hanbury (2019). Due to the high manual cost of fixing bugs O’Dell (2017),
26
+ Automated Program Repair (APR) Gazzola et al. (2019) is a promising solution to reduce developer
27
+ work by automatically generating patches given the buggy code and failing testcases.
28
+ Traditionally, APR approaches commonly use the paradigm of Generate and Validate (G&V), where
29
+ APR tools will first generate a list of candidate patches given the original buggy code and then
30
+ validate each one sequentially until a plausible patch that passes all the testcases is found. The plausible
31
+ patch is then passed on to a human developer, who has to determine whether it is a correct
32
+ patch that fixes the underlying bug. Traditional APR approaches such as template-based
33
+ tools Ghanbari et al. (2019); Liu et al. (2019); Lou et al. (2020) have been proven useful in fixing
34
+ bugs with pre-defined templates to match buggy and corresponding fix code patterns. Recently,
35
+ researchers have designed learning-based APR tools Ye et al. (2022); Zhu et al. (2021); Jiang et al.
36
+ (2021) which build a Neural Machine Translation (NMT) model by training on pairs of buggy and
37
+ patch code. However, these learning-based APR tools suffer from a lack of patch variety, as they can
38
+ only repair the types of bugs that are a part of the buggy/patch training data. Furthermore, these bug
39
+ fixing datasets can be difficult to construct, as they require scraping open-source bug-fix commits, which
40
+ may contain many false positives, adding noise to the dataset.
41
+ Recognizing the limitation of prior learning-based APR tools, researchers have started to look
42
+ into directly leveraging Large Pre-Trained Language Models (LLMs) for APR without fine-tuning.
43
+ LLMs have proven their ability in various code generation tasks Austin et al. (2021). Xia & Zhang
44
+ (2022) first introduced cloze-style APR, where an LLM directly fills in the correct code given its sur-
45
+ rounding context. Other studies Prenner et al. (2022); Kolak et al. (2022); Xia et al. (2022) have also
46
+ investigated directly applying different types of LLMs for APR by smartly applying prompts or giv-
47
+ ing original buggy code as context. Typically, directly applying LLMs for APR involves creating a
48
+ common prompt/prefix which can be just the buggy context (zero-shot) or combining buggy context
49
+ with a few examples of bug fixes (few-shot) as input to the model. Following the G&V paradigm,
50
+ arXiv:2301.13246v1 [cs.SE] 30 Jan 2023
52
+ prior approaches sample the LLMs multiple times to obtain candidate patches. However, this
54
+ pipeline has the following limitations:
55
+ First, sampling from the same prefix/prompt multiple times can lead to many repeated patches due
56
+ to the probabilistic nature of sampling. This means the LLMs may waste a lot of compute and
57
+ time generating the same patches which have already been validated as incorrect by the testsuite.
58
+ Second, prompts provided to the LLMs for APR are created only from the original buggy code and
59
+ do not include any of the testcase information, such as the expected input and output
60
+ examples that can help LLMs understand the functionality of the buggy program.
61
+ Third, prior approaches also fail to consider the outputs produced by the generated incorrect patches.
62
+ Previously incorrect patches may fail on a particular corner case, which can be exposed by looking
63
+ at the test output and providing it to the LLM to address it in future patches.
64
+ Our Work. We propose conversational APR – a new paradigm of using LLMs for APR that di-
65
+ rectly leverages the testcase validation information to provide feedback to LLMs in a conversational
66
+ manner. In conversational APR, we interleave patch generation with validation, where the LLM first
67
+ generates a patch; we then validate it against the testsuite to provide feedback and prompt the LLM with
68
+ the new feedback information to generate a new patch. While in this paper we consider simple test-
69
+ case input/output/error validation feedback, one can apply conversational APR with a wide range of
70
+ possible feedback information such as human evaluation of the patch. We refer to the process of
71
+ generating a patch followed by validation as a turn where a conversation chain is made up of mul-
72
+ tiple turns in sequence. At the start of the conversation chain, we begin with an initial prompt and
73
+ sample the LLM to obtain a candidate patch. As we continue the conversation, the input given to the
74
+ LLM in each turn is a concatenation of all previously incorrect patches along with their associated
75
+ testcase feedback within the same conversation chain. A conversation chain is terminated once a
76
+ patch that passes all the testcases is found or the maximum chain length is reached (i.e., maximum
77
+ number of turns). In the latter case, we start a new conversation chain with the initial prompt again.
78
+ Compared with prior LLM for APR tools which only use the buggy code snippet as inputs, conver-
79
+ sational APR incorporates patch validation in the form of validation feedback to help the model un-
80
+ derstand the reason why previously generated patches are incorrect. Such feedback can contain the
81
+ incorrect and expected test outputs or indicate if the generated patch contains compilation/runtime
82
+ errors. Furthermore, while prior LLM for APR tools continuously sample from the same input, our
83
+ approach iteratively builds the input by including previously incorrect patches. As such, the LLM,
84
+ through its long context window, can recognize previous generations and avoid repeatedly generat-
85
+ ing an already validated incorrect patch. We evaluated our conversational APR by using 10 popular
86
+ LLMs, where we found that our approach not only improves the number of bugs fixed but also
87
+ can arrive at the correct patch faster compared with the sampling-based baseline. Furthermore, we also
88
+ evaluate the recently developed ChatGPT Schulman et al. (2022)1, a dialogue-focused LLM trained
89
+ using reinforcement learning, and highlight the performance of conversational APR when using an
90
+ LLM designed for conversation/dialogue.
91
+ 2
92
+ BACKGROUND & RELATED WORK
93
+ 2.1
94
+ LLMS FOR APR
95
+ To combat the reliance on training using bug-fixing datasets to build learning-based APR tools based
96
+ on NMT models, researchers directly applied LLMs for APR without any fine-tuning. Xia & Zhang
97
+ (2022) proposed AlphaRepair, the first cloze-style APR to directly leverage LLMs for APR in a
98
+ zero-shot setting by removing the buggy line and replacing it with masked tokens. AlphaRepair
99
+ then queries the CodeBERT Feng et al. (2020) model to fill-in the masked tokens with the correct
100
+ tokens to generate patches. Prenner et al. (2022) investigated the ability for Codex Chen et al. (2021)
101
+ to repair bugs using a simple prompting method to generate a complete patched function given the
102
+ original buggy function. Kolak et al. (2022) evaluated the scaling effect of LLMs for APR by using
103
+ 4 LLMs of different model sizes to generate a single line fix given only the original buggy prefix
104
+ (i.e., removing all lines after and including the buggy line of the buggy function). Recently, Xia et al.
105
+ (2022) conducted an extensive study on directly applying LLMs for APR. In the study, they adopt
106
+ 1While we perform repair using ChatGPT, no part of this paper is written by ChatGPT. :)
107
+ several repair settings, including few-shot generation using a few examples of bug fixes, cloze-style
110
+ APR and also single line generation.
111
+ The findings across these prior works are consistent in showing that directly using LLMs for APR
112
+ achieves comparable if not better performance compared to prior APR tools. However, these pro-
113
+ posed LLMs for APR techniques almost exclusively use sampling where patches are generated by
114
+ sampling from the same input over and over again, leading to many repeated patches. Furthermore,
115
+ the inputs to the LLMs are only constructed from the original buggy function, missing the rich infor-
116
+ mation in the form of testcases. In this work, our conversational APR approach aims to bridge these
117
+ limitations in LLMs for APR by constructing new inputs based on prior incorrect patches to avoid
118
+ sampling repeated patches and providing the validation feedback to add another dimension of input
119
+ apart from original buggy code to help the model understand the semantic meaning of the program.
120
+ 2.2
121
+ MULTI-STEP PROGRAM REASONING AND SYNTHESIS USING LLMS
122
+ A related research direction is in applying multi-step reasoning for code understanding and synthe-
123
+ sis. Nye et al. (2021) trains an LLM designed for program understanding by introducing the idea
124
+ of a “scratchpad” in which the LLM predicts the intermediate states of a program along with the
125
+ final execution results. Chen et al. (2022) extends the chain-of-thoughts Wei et al. (2022) prompting
126
+ style in NLP to propose program-of-thoughts where the prompt contains an explicit command to
127
+ construct the program step-by-step. However, these works still generate a complete result (i.e., final
128
+ program execution or code), albeit with intermediate results, in one shot, whereas our conversational
129
+ APR samples LLMs multiple times with different inputs to obtain one plausible output patch.
130
+ Different from one-shot methods, Austin et al. (2021) investigated the ability for LLMs to use hu-
131
+ man feedback in a conversational manner for program synthesis. The approach works by keeping a
132
+ conversation of previously generated code and correcting any mistake using natural language feed-
133
+ back provided by human developers. Nijkamp et al. (2022) manually created a multi-step synthesis
134
+ dataset where each target program is broken down into multiple smaller steps where only a few lines
135
+ of code need to be generated. They then sample the model multiple times to iteratively complete
136
+ each smaller step and concatenate them together to form the final program. While these described
137
+ techniques involve iteratively sampling from the model with new feedback similar to a conversa-
138
+ tional manner, our work can automatically create this feedback through testcase execution without
139
+ any human-in-the-loop.
140
+ 3
141
+ CONVERSATIONAL APR
142
+ We propose a conversational APR approach to prompt LLM patch generation by combining previ-
143
+ ously generated patches and validation feedback in a conversational manner. Contrasting with the
144
+ classic Generate and Validate (G&V) APR approach that first generates a large number of candidate
145
+ patches and then validates each one to find a list of plausible patches, conversational APR interleaves
146
+ generation and validation to provide immediate feedback for the new candidate patch. Different
147
+ from previous APR tools which make use of LLMs through sampling given the same prefix/context
148
+ for each bug, the conversational APR approach aims to incorporate feedback information after each
149
+ generation (if the candidate patch failed to pass all tests) as new context for subsequent generations.
150
+ Specifically, the feedback information includes both the incorrect generated patch and its associated
151
+ failed testcase information.
152
+ Conversational APR involves iteratively obtaining new candidate patches from the LLM by using
153
+ previously generated patches/validation results as feedback. We refer to this process as a turn, where
154
+ each turn includes three different steps: 1) construct a new prompt based on prior feedback, 2) sam-
155
+ ple the model to produce a sample output function 3) validate the sample output function against
156
+ testcases to obtain validation feedback. A sequence of multiple turns is defined as a chain. The ter-
157
+ minating conditions are that the sample output patch is able to pass all testcases (i.e., a plausible
158
+ patch is obtained) or the maximum number of turns (length of the chain) is reached. Note that each
159
+ turn (all three steps) is carried out automatically without needing any human-in-the-loop, which allows
160
+ conversational APR to be an automatic approach for program repair.
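The three-step turn loop described above, together with the chain terminating conditions, can be sketched as a minimal harness. This is an illustration only, not the authors' implementation: `sample_llm` is a hypothetical stand-in for querying the model, and the harness assumes the function under repair is named `sieve`, as in the paper's running example.

```python
def run_tests(candidate_src, tests, fn_name="sieve"):
    """Execute a candidate patch against the testcases.
    Returns (passed, feedback); compile/runtime errors also become feedback."""
    env = {}
    try:
        exec(candidate_src, env)
        fn = env[fn_name]
        for args, expected in tests:
            got = fn(*args)
            if got != expected:
                call = f"{fn_name}({', '.join(map(repr, args))})"
                return False, f"{call} returns {got!r} but it should return {expected!r}"
    except Exception as e:
        return False, f"The fixed version fails to run: {e}"
    return True, "The fixed version is correct!"

def conversational_repair(initial_prompt, tests, sample_llm, max_turns=3):
    """One conversation chain: alternate patch generation and validation,
    feeding each incorrect patch and its feedback back into the next prompt."""
    prompt = initial_prompt
    for _ in range(max_turns):
        patch = sample_llm(prompt)                  # step 2: sample the model
        passed, feedback = run_tests(patch, tests)  # step 3: validate
        if passed:
            return patch                            # plausible patch found
        # step 1 of the next turn: extend the prompt with S_t and F_t
        prompt += (f"\n{patch}\nThe fixed version is still not correct.\n"
                   f"{feedback}\nPlease provide another fixed version.\n")
    return None  # chain length exhausted; caller may start a new chain
```

With `max_turns` acting as the maximum chain length, a caller can restart a fresh chain from the initial prompt whenever `None` is returned.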
161
+ [Figure 1: Overview of conversational APR with an illustrative example in fixing the buggy
+ sieve function. Three turns are shown: the initial prompt (I) containing the buggy sieve
+ function, the sampled candidate patches (S1, S2, S3), and their validation feedback (F1,
+ F2, F3), e.g. "sieve(4) returns [2, 4] but it should return [2, 3]"; the input at each
+ turn concatenates the initial prompt with all previous patches and feedback.]
277
+ 3.1
278
+ PIPELINE & EXAMPLE
279
+ Figure 1 shows an illustrative example of a conversation chain (multiple turns) and an overview
280
+ of the pipeline of the conversational APR approach. We first take in as input the original buggy
281
+ function and a set of testcases which contain some failing tests that expose the underlying bug.
282
+ In the example, the buggy function (sieve) attempts to use the sieve algorithm to calculate the list
283
+ of prime numbers below the integer input (max). The location of the bug occurs on line 4 where
284
+ the buggy function incorrectly uses any instead of all. This bug is exposed by the testcase of
285
+ sieve(2) = [2] where the buggy function incorrectly returns an empty array [].
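The described any/all bug is easy to reproduce directly. A sketch, assuming an inclusive range(2, max + 1) so that sieve(2) can return [2]; the exact loop bounds in the benchmark code may differ slightly:

```python
def sieve_buggy(max):
    primes = []
    for n in range(2, max + 1):
        # bug: `any` keeps n as soon as SOME known prime fails to divide it
        if any(n % p for p in primes):
            primes.append(n)
    return primes

def sieve_fixed(max):
    primes = []
    for n in range(2, max + 1):
        # fix: n is prime only if NO known prime divides it evenly
        if all(n % p for p in primes):
            primes.append(n)
    return primes

print(sieve_buggy(2))  # [] -- the failing testcase expects [2]
print(sieve_fixed(2))  # [2]
print(sieve_fixed(4))  # [2, 3]
```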
286
+ • Turn 1: We first create an initial prompt
287
+ I using the original buggy function which contains
288
+ a natural language description to indicate that the function is buggy (The following code is
289
+ buggy) and the task we want the LLM to solve (Please provide a fixed version). We
290
+ then sample the model using the initial prompt
291
+ I to obtain the first sample output function S1 .
292
+ The change is made to line 4 where the function in S1 negated the original if condition. We then
293
+ validate S1 against the list of tests and find that while the new patch is able to successfully pass
294
+ the previous failing test of sieve(2) = [2], it returns [2, 4] for sieve(4) when the correct
295
+ output should be [2, 3]. This validation information F1 is collected as feedback to use during
296
+ the next conversation turn.
297
+ • Turn 2: Different from turn 1, where the input to the LLM is just the initial prompt
298
+ I , now we
299
+ also provide the model with the previously generated patch and its failing testcase. In short, we
300
+ construct the validation feedback F1 by using the failing testcase and indicate to the model that the
301
+ previous sample S1 is still not correct (The fixed version is still not correct) and
302
+ the new task (Please provide another fixed version). We then concatenate the initial
303
+ prompt, first sample output function and the validation feedback { I , S1 , F1 } together as the input
304
+ to the LLM. As such, the model is able to not only use the original buggy function but also use the
305
+ previously generated sample and its testcase feedback to generate a new patched function. Similar
306
+ to turn 1, we obtain S2 and F2 where the correct line 4 is obtained (switching any to all) but the
307
+ candidate patch function incorrectly reduced the upper range of the for loop by 1.
308
+ • Turn 3: Similar to turn 2, we first construct the new validation feedback F2 from the previous
311
+ failing test case. We then concatenate all previously sampled outputs along with their validation
312
+ feedback in sequence to produce { I , S1 , F1 , S2 , F2 }. Using this input, we then sample the LLM
313
+ again to produce the next candidate patch S3 . We observe that this candidate patch correctly fixes
314
+ the underlying bug, as indicated by its validation feedback F3, where it is able to pass all the testcases.
315
+ The program repair process is then terminated as we have obtained our plausible patch S3 .
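The turn-by-turn input construction (e.g. { I , S1 , F1 , S2 , F2 } for turn 3) amounts to simple string concatenation. A sketch with the natural-language glue borrowed from the paper's example; the patch bodies are elided placeholders, not real model outputs:

```python
def build_turn_input(initial_prompt, history):
    """Input for turn t = I ++ S1 ++ F1 ++ ... ++ S_{t-1} ++ F_{t-1}."""
    parts = [initial_prompt]
    for patch, failing_test in history:
        parts.append(patch)
        parts.append("The fixed version is still not correct.\n"
                     f"{failing_test}\nPlease provide another fixed version.")
    return "\n".join(parts)

# Hypothetical history after two failed turns (placeholders for S1 and S2)
I = "The following code is buggy.\n<buggy sieve>\nPlease provide a fixed version."
history = [
    ("<S1: first candidate patch>",
     "sieve(4) returns [2, 4] but it should return [2, 3]"),  # F1
    ("<S2: second candidate patch>",
     "sieve(2) returns [] but it should return [2]"),         # F2
]
turn3_input = build_turn_input(I, history)
```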
316
+ Compared to prior approaches in APR based on LLMs, which simply sample from a pre-defined
317
+ prompt/context, conversational APR leverages the previously missing key feedback information in
318
+ the form of testcase results to prompt future patch generations. The testcase feedback not only tells
319
+ the LLM that the previous patches are incorrect (i.e. leading to more unique patches) but also pro-
320
+ vides input and output examples which help the model understand the underlying functionality
321
+ of the function (i.e. leading to more correct patches).
322
+ 3.2
323
+ DESIGN DECISIONS
324
+ In the above example illustrated in Figure 1, we show the overall pipeline of conversational APR.
325
+ However, there are different design decisions which can impact the performance of the approach:
326
+ Prompt engineering. Prompting has been shown to be an effective way of leveraging LLMs on
327
+ various downstream tasks without needing any explicit fine-tuning. In the conversational APR approach,
328
+ we follow the style of prior work Xia et al. (2022) in providing a short and concise prompt with
329
+ respect to the description of the input and the task we want the model to solve. Additionally, we
330
+ follow prior guidelines and keep the prompt open-ended rather than restricting the generation
331
+ with a close-ended prompt. One particularly important piece of prompt construction is the validation
332
+ feedback that provides the failing testcase to the LLM. In the Figure 1 example, we provide a functional
333
+ prompt that directly invokes the function and highlights the discrepancy between the actual and expected
334
+ testcase output. We refer to this as a functional prompt since it directly calls the function with input
335
+ parameters similar to what one would do in code. In Section 6.2, we compare this style of validation
336
+ prompting with other methods including without any testcase information to demonstrate the benefit
337
+ of including validation feedback to the model.
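A minimal sketch of this functional prompt style of validation feedback; the exact formatting is our assumption, modeled on the Figure 1 example:

```python
def functional_feedback(fn_name, args, actual, expected):
    """Render failing-test feedback as a direct function invocation, stating
    the discrepancy between the actual and expected output."""
    call = f"{fn_name}({', '.join(repr(a) for a in args)})"
    return f"{call} returns {actual!r} but it should return {expected!r}"

print(functional_feedback("sieve", (4,), [2, 4], [2, 3]))
# sieve(4) returns [2, 4] but it should return [2, 3]
```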
338
+ Maximum chain length. Recall that a conversation chain refers to the continuous sequence of turns
339
+ to fix a bug. A chain is demonstrated in Figure 1 with a chain length of 3. Along with finding a
340
+ plausible patch, a preset value for the maximum chain length is also a terminating condition since
341
+ the LLM used will have a maximum context window and cannot take in arbitrary length inputs.
342
+ Once this maximum chain length is reached, conversational APR will restart from the beginning
343
+ (i.e., by crafting the initial prompt again) with a new conversation chain. The maximum chain length
344
+ is a parameter which controls how much history the LLM may receive. A maximum chain length
345
+ of 1 refers to the base case of sampling from the initial prompt over and over again, meaning the
346
+ model does not know any of the previously generated incorrect patches. A higher maximum chain
347
+ length means the model can see multiple previously failed patches; however, this also may not be
348
+ beneficial as it can cause the LLM to repeat some of the earlier patches or get stuck on a particular
349
+ implementation of the function. In Section 6.2, we evaluate the effect the chain length has on
350
+ repair performance.
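The interplay between the maximum chain length and chain restarts can be sketched as an outer loop; `sample_llm` and `validate` are hypothetical stand-ins for the model query and testsuite run. Setting max_chain_length=1 degenerates to the baseline of repeatedly sampling from the initial prompt:

```python
def repair_with_restarts(initial_prompt, sample_llm, validate,
                         max_chain_length=3, max_chains=5):
    """Restart a fresh conversation chain from the initial prompt whenever a
    chain reaches max_chain_length turns without finding a plausible patch."""
    for _ in range(max_chains):
        prompt = initial_prompt          # new chain: discard prior history
        for _ in range(max_chain_length):
            patch = sample_llm(prompt)
            passed, feedback = validate(patch)
            if passed:
                return patch
            prompt += (f"\n{patch}\nThe fixed version is still not correct.\n"
                       f"{feedback}\nPlease provide another fixed version.\n")
    return None
```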
351
+ 4
352
+ DATASETS
353
+ In this section, we describe the LLMs used in our evaluation and also the repair benchmark used to
354
+ evaluate our proposed technique.
355
+ 4.1
356
+ LLMS
357
+ In our work, we evaluate 10 different LLMs to not only demonstrate the effect of scaling behavior
358
+ on our proposed conversational APR approach but also to evaluate how different pre-training and
359
+ model design contribute to the overall effectiveness. Table 1 presents an overview of the studied
360
+ LLMs. Column Model is the model name, #Parameters indicates the number of model parameters,
361
+ Context Window represents the size of the context window, and Training Strategy refers to the
362
+ training strategy used.
363
+ Table 1: Evaluation LLM overview
+ Model            #Parameters      Context Window   Training Strategy
+ CODEGEN-MONO     350M/2B/6B/16B   2048             Unsupervised CLM
+ CODEGEN-MULTI    350M/2B/6B/16B   2048             Unsupervised CLM
+ Codex            12B              4096             Unsupervised CLM
+ ChatGPT          ∼175B            ∼4000            Reinforcement Learning from Human Feedback + CLM
387
+ [Figure 2: Example bug in both Python and Java in QuixBugs along with the testcases
+ (bitcount.py and bitcount.java, with the fixed line highlighted in each).]
393
+ • CODEGEN Nijkamp et al. (2022). A family of autoregressive LLMs trained using Causal Lan-
394
+ guage Modeling (CLM) objective (next-token-prediction) ranging from 350M to 16B in parameter
395
+ size. CODEGEN is first trained on the open-source ThePile Gao et al. (2020), containing 22 diverse
396
+ text-based datasets. The models are then trained on BigQuery BigQuery, a dataset of open-source
397
+ code from 6 programming languages. We refer to these models (trained on ThePile then Big-
398
+ Query) as CODEGEN-MULTI. CODEGEN-MULTI is then further trained on a dataset containing
399
+ large amounts of Python GitHub code to produce CODEGEN-MONO. In our experiments, we
400
+ use CODEGEN-MONO for repair benchmarks in Python and CODEGEN-MULTI for repair bench-
401
+ marks in other programming languages, and refer to them both as CODEGEN for simplicity.
402
+ • Codex Chen et al. (2021). A programming language focused autoregressive model based on the
403
+ GPT-3 architecture Brown et al. (2020). Codex is first initialized with GPT-3 weights from training
404
+ on a natural language corpus and then fine-tuned using next-token-prediction on a large dataset of
405
+ code files. While Codex also contains a version which can take in suffix tokens (i.e., fill-in code
406
+ in the middle), for our experiments, we only use Codex by providing the prefix context.
407
+ • ChatGPT Schulman et al. (2022). A conversational-based LLM first initialized from GPT-3.5
408
+ model and then fine-tuned using Reinforcement Learning from Human Feedback (RLHF) Ziegler
409
+ et al. (2019). ChatGPT is first fine-tuned with supervised learning, where humans provide
410
+ example responses to prompts in the dataset. Using this fine-tuned model, a reward model is
411
+ then trained by sampling multiple outputs of the model from a given prompt and again using a
412
+ human to rank the outputs. The reward model is used in the reinforcement learning step where
413
+ Proximal Policy Optimization Schulman et al. (2017) is used to fine-tune ChatGPT. Different from
414
+ Codex and CODEGEN, ChatGPT, through the use of RLHF and its fine-tuning data, is designed
+ for conversation, where usage encourages a dialogue format. Note that much of the ChatGPT
416
+ model details are unknown to the public; therefore, we can only provide approximate values for
417
+ the number of parameters2 and context window size OpenAI (2022) according to verified sources.
418
+ 4.2
419
+ BENCHMARKS
420
+ We use the QuixBugs Lin et al. (2017) repair benchmark to evaluate our proposed conversational
421
+ APR approach.
422
+ QuixBugs has been widely used to evaluate many repair tools including both
423
+ learning-based Ye et al. (2022); Zhu et al. (2021); Jiang et al. (2021); Drain et al. (2021) and LLM for
424
+ APR Xia & Zhang (2022); Xia et al. (2022); Kolak et al. (2022); Prenner et al. (2022) approaches.
425
+ The QuixBugs dataset contains the same 40 bugs and their associated correct patches in both Python and
+ Java. These bugs are self-contained functions based on classic algorithms, and it usually only takes
+ a single-line change to fix the underlying bug. Each bug comes with a set of testcases that the
+ buggy function fails to pass, which can be used to evaluate any candidate patch generated. Figure 2
429
+ shows an example bug for the bitcount function in QuixBugs for both Java and Python. The bug
430
+ occurs inside the while loop, where the code incorrectly uses the ^ operator instead of the & operator. We
431
+ also show the example testcases for bitcount where it contains example inputs and the expected
432
+ outputs when evaluated using the function.
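To make the example concrete, the bitcount bug described above can be sketched in Python as follows; this is reconstructed from the description in the text, and the exact QuixBugs source may differ slightly:

```python
def bitcount(n):
    """Count the set bits of n using the trick that n & (n - 1)
    clears the lowest set bit of n."""
    count = 0
    while n:
        n &= n - 1  # the buggy version uses ^ here instead of &
        count += 1
    return count

# Testcases pair example inputs with expected outputs, e.g.:
assert bitcount(127) == 7
assert bitcount(128) == 1
```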
433
+ 2As ChatGPT is fine-tuned on GPT-3.5, we assume a similar number of parameters as GPT-3.5
434
+ 6
435
+
436
+ Out of the 40 bugs in QuixBugs, we further filter out 10 bugs whose testcases are
+ difficult to represent with our validation feedback prompt. For example, the testcases for detect cycle
+ involve a graph as an input to the function. In total, we use 60 bugs (30 each for Java
+ and Python) in our evaluation.
440
+ 5
441
+ EXPERIMENTAL SETUP
442
+ In this section, we describe the key research questions that our evaluation seeks to answer, the
+ evaluation metrics used, and the implementation details.
444
+ 5.1
445
+ RESEARCH QUESTIONS
446
+ We aim to investigate the following research questions:
447
+ • RQ1: What is the effectiveness of applying conversational APR?
448
+ • RQ2: How do different components of conversational APR affect performance?
449
+ In RQ1, we first compare the performance of conversational APR with a baseline approach used
450
+ in prior LLM for APR work where the patches are generated by continuously sampling from the
451
+ same initial prompt. We further evaluate the scaling effect of LLMs as we increase the size
+ of the model, and investigate the difference in performance across pre-training strategies
453
+ (e.g., ChatGPT vs. Codex). In RQ2, we dive deeper into the different parameters of conversational
454
+ APR. Specifically, we evaluate how the length of the conversational chain and different validation
455
+ feedback prompts affect the performance.
456
+ 5.2
457
+ EVALUATION METRICS
458
+ Our evaluation metrics consist of the standard metrics used to evaluate APR tools: the number of plausible
+ patches (patches which pass all the testcases) and correct patches (patches which are semantically
+ equivalent to the reference developer patch). Additionally, since we are sampling from LLMs, we
+ also define tries as the number of samples needed to obtain a plausible/correct patch. This metric is
+ useful when comparing two approaches/models that fix a similar number of bugs: the one
+ with fewer tries is preferred, as we want to limit the number of times we have to sample
+ the LLM.
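As an illustration, the tries metric for a single bug can be computed from the sampling order with a helper like the following; this is our own sketch, not code from the evaluation pipeline:

```python
def first_plausible_try(sample_results):
    """Given booleans in sampling order (True = the sampled patch passes
    all testcases), return the 1-indexed number of tries needed to obtain
    the first plausible patch, or None if no sample is plausible."""
    for i, passes_all_tests in enumerate(sample_results, start=1):
        if passes_all_tests:
            return i
    return None
```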
465
+ 5.3
466
+ IMPLEMENTATION
467
+ We implemented the LLM generation pipeline in Python using the Hugging Face HuggingFace
+ implementation of the CODEGEN models. We access Codex through the OpenAI API by querying the
469
+ code-davinci-002 engine. Since ChatGPT is not open-sourced and does not provide an official API
470
+ endpoint (like Codex), we manually input the prompt and extract the outputs. For all models apart
471
+ from ChatGPT, we use a default generation setting of nucleus sampling with top p = 0.95, tempera-
472
+ ture = 1, 50 samples per bug with a maximum chain length of 3. We generate and evaluate patches
473
+ on a 32-Core workstation with AMD Ryzen Threadripper PRO 5975WX CPU, 256 GB RAM and 3
474
+ NVIDIA GeForce RTX 3090 GPUs, running Ubuntu 22.04.1 LTS.
475
+ 6
476
+ RESULTS
477
+ 6.1
478
+ RQ1: CONVERSATIONAL APR EFFECTIVENESS
479
+ We first evaluate the effectiveness of applying conversational APR using validation feedback com-
480
+ pared to prior method of sampling given the same prompt without any feedback. Table 2 shows the
481
+ results on QuixBugs-Python and QuixBugs-Java. We observe that by applying our feedback driven
482
+ conversational APR, we are able to improve the # of correct and plausible patches for all
+ unsupervisedly trained LLMs across all model sizes. Additionally, conversational APR is also able to decrease
484
+ the # of tries (# of samples) needed before obtaining the first plausible/correct patch. Compared
485
+ to the traditional sampling method of producing patches, conversational APR is able to leverage the
486
+ 7
487
+
488
+ Table 2: Conversational APR performance on both QuixBugs-Python and QuixBugs-Java
+ compared with baseline sampling method. #c/#p refers to the number of correct / plausible
+ patches.
+                 |         QuixBugs-Python         |          QuixBugs-Java
+ Models          | Sampling      | Conversational  | Sampling      | Conversational
+                 | #c/#p  #tries | #c/#p   #tries  | #c/#p  #tries | #c/#p   #tries
+ CODEGEN-350M    | 7/10   20.5   | 8/11    18.4    | 4/4    24.2   | 5/5     23.5
+ CODEGEN-2B      | 22/23  16.6   | 25/26   14.3    | 12/14  18.8   | 15/16   16.4
+ CODEGEN-6B      | 22/24  14.0   | 27/28   12.1    | 18/20  19.8   | 22/22   13.5
+ CODEGEN-16B     | 29/29  5.6    | 30/30   4.8     | 24/25  14.5   | 28/29   13.2
+ Codex           | 29/30  4.6    | 30/30   3.8     | 28/30  7.2    | 29/30   5.7
+ Table 3: ChatGPT and Codex comparison on QuixBugs-Python and QuixBugs-Java where
+ each cell indicates the number of correct / plausible patches
+          |        QuixBugs-Python           |         QuixBugs-Java
+ Models   | one try | two tries | three tries | one try | two tries | three tries
+ Codex    | 16/16   | 21/21     | 24/24       | 11/12   | 18/19     | 21/22
+ ChatGPT  | 24/24   | 27/28     | 28/29       | 24/24   | 26/26     | 26/26
+ model’s understanding of natural language feedback to indicate why the patch is incorrect. LLMs
577
+ can use this validation feedback information to generate new patches that try to pass the previ-
578
+ ously failed testcase. Furthermore, conversational APR also helps to reduce the number of repeated
579
+ patches from sampling using the same prompt over and over again. By using the large context size
580
+ of many state-of-the-art LLMs, conversational APR can use recently generated incorrect patches as
581
+ previous context to prompt the model to generate a new patch that is different.
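The feedback-driven loop described above can be sketched as follows, where `generate`, `validate`, and `make_feedback` are placeholder callables standing in for the LLM, the testcase runner, and the feedback prompt builder; this illustrates the control flow rather than the paper's actual implementation:

```python
def conversational_repair(generate, validate, make_feedback,
                          initial_prompt, max_chain_len=3, max_tries=50):
    """Interleave patch generation with validation: on failure, append the
    incorrect patch plus validation feedback to the prompt before sampling
    again; restart a fresh chain from the initial prompt every
    max_chain_len turns."""
    tries = 0
    while tries < max_tries:
        prompt = initial_prompt          # start a new conversation chain
        for _ in range(max_chain_len):
            patch = generate(prompt)
            tries += 1
            plausible, failing_testcase = validate(patch)
            if plausible:
                return patch, tries      # first plausible patch found
            # concatenate the incorrect patch and feedback as new context
            prompt = prompt + patch + make_feedback(failing_testcase)
            if tries >= max_tries:
                break
    return None, tries
```

With the sample budget of 50 and a maximum chain length of 3 used in the experiments, this amounts to up to 50 generations spread over restarted conversation chains.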
582
+ ChatGPT evaluation. We now evaluate the performance of ChatGPT when using conversational
583
+ APR. Due to the requirement of manually inputting and extracting outputs from ChatGPT, we only
584
+ use a single conversation chain with at most 3 tries (i.e. maximum chain length of 3). We compare
585
+ with the best performing LLM of Codex from previous results under the same setting in Table 3.
586
+ We observe that compared to Codex, which is trained in an unsupervised manner, ChatGPT which
587
+ is fine-tuned using Reinforcement Learning from Human Feedback (RLHF) performed much better
588
+ across the two repair datasets. This improvement can be partially attributed to the increase
589
+ in model parameter size, but we believe this is also due to the dialogue-based fine-tuning dataset
590
+ used in ChatGPT. Conversational APR relies on the model understanding the validation feedback
591
+ to condition its future generations toward a patch that passes the testcase. A more
592
+ dialogue-oriented model such as ChatGPT is well suited for this task as both the training data and
593
+ algorithm contain feedback driven loops. As ChatGPT and other dialogue-based LLMs become
594
+ more popular, we believe conversational APR can also be further improved through more usage of
595
+ these LLMs.
596
+ 6.2
597
+ RQ2: COMPONENT ANALYSIS
598
+ Maximum chain length. We first investigate the effect that different maximum chain lengths have on
599
+ the repair performance. Figure 3 shows the number of plausible patches when we vary the maximum
600
+ chain length from 1 to 6 for the 4 CODEGEN models. Recall from Section 3 that chain length refers
601
+ Figure 3: Number of plausible patches for the 4 different CODEGEN models as we vary the
602
+ maximum chain length on QuixBugs-Python
603
+ 8
604
+ Table 4: Prompting Style Evaluation on QuixBugs-Python with each cell showing the number
+ of plausible patches
+ Models         | no testcase | natural language | functional
+ CODEGEN-350M   | 9           | 11               | 11
+ CODEGEN-2B     | 20          | 25               | 26
+ CODEGEN-6B     | 24          | 27               | 28
+ CODEGEN-16B    | 27          | 30               | 30
+ Codex          | 29          | 30               | 30
+ to the number of turns (each turn consists of generating and validating a new patch) in a conversation
+ chain. A maximum chain length of 1 is the simple baseline of sampling from the same initial prompt
+ (used in prior LLM for APR tools). As we increase the chain length, the model has to take in more
+ and more previous context in the form of prior generations and feedback. We observe that the
+ performance increases as we start from a small chain length, reaches its maximum around 3 or 4,
+ and then decreases as the chain length continues to increase. The decrease in the number of plausible
+ patches at high chain lengths occurs because the context may become too much for the model to handle,
+ since it can include multiple previously failed patches. We also observe that this decrease is more
+ significant in smaller models, while larger models are less affected by longer chain lengths, showing
+ the ability of larger models to better capture long-term context dependencies. This suggests that
+ the optimal chain length for conversational APR can depend on the individual LLM
+ used.
674
+ Feedback prompting style.
675
+ We now evaluate the effect of the feedback prompting style
676
+ used in our conversational APR. Table 4 shows the number of plausible patches using differ-
677
+ ent validation prompts in QuixBugs-Python.
678
+ Column no testcase does not include any test-
679
+ case feedback (only states that the patch is not correct), natural language describes the failing
680
+ testcase (e.g., when input is 2, the patch incorrectly returns [] but it should
681
+ return [2]), and functional is the default prompting style discussed in Section 3. We observe
+ that different prompting styles do have an effect on the final performance of conversational
683
+ APR. Starting from the no testcase prompt, we can improve performance by adding specific testcase
684
+ feedback information on top of telling the LLM that the patch is not correct. We also observe that
685
+ the functional prompting style, using the buggy/patch function name and passing parameters (see
686
+ Figure 1), performs the best. The functional prompting style conveys the failing testcase information in
687
+ a more concise and natural way by phrasing the testcase input and expected output relationship as a
688
+ function call.
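A minimal sketch of how such a functional feedback prompt could be assembled; the exact wording used in the actual prompts may differ:

```python
def functional_feedback(func_name, inputs, expected, actual):
    """Phrase a failing testcase as a function call, e.g.
    'bitcount(128) returns 255 but should return 1'."""
    call = f"{func_name}({', '.join(repr(x) for x in inputs)})"
    return (f"The fixed function is still not correct. {call} returns "
            f"{actual!r} but should return {expected!r}.")
```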
689
+ 7
690
+ CONCLUSION
691
+ We propose conversational APR, a new paradigm for program repair that interleaves patch generation
+ with validation to provide immediate feedback that helps LLMs better generate future
+ patches. Compared to previous LLM for APR approaches that only sample from the same input,
694
+ conversational APR iteratively builds the input by concatenating previously incorrect patches and
695
+ validation feedback. This allows the model to avoid generating previously incorrect patches and
696
+ also understand the semantic meaning of the function through validation feedback. Our evaluation
697
+ on 10 different LLMs shows the improvement of conversational APR over the baseline sampling
698
+ method used in prior LLM for APR tools. Furthermore, we demonstrate, for the first time, the promising
+ future of applying ChatGPT, a conversational/dialogue-driven LLM, to conversational APR, and to APR
+ in general.
701
+ REFERENCES
702
+ Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan,
703
+ Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesis with large
704
+ language models, 2021. arXiv:2108.07732.
705
+ BigQuery. Bigquery github repos, 2022. https://console.cloud.google.com/marketplace/details/github/github-repos.
709
+ Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhari-
710
+ wal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal,
711
+ 9
712
+
713
+ Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.
714
+ Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz
715
+ Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec
716
+ Radford, Ilya Sutskever, and Dario Amodei.
717
+ Language models are few-shot learners, 2020.
718
+ arXiv:2005.14165.
719
+ Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared
720
+ Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri,
721
+ Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan,
722
+ Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian,
723
+ Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fo-
724
+ tios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex
725
+ Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders,
726
+ Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec
727
+ Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob Mc-
728
+ Grew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large
729
+ language models trained on code, 2021. arXiv:2107.03374.
730
+ Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen.
731
+ Program of thoughts
732
+ prompting: Disentangling computation from reasoning for numerical reasoning tasks, 2022.
733
+ arXiv:2211.12588.
734
+ Dawn Drain, Colin B. Clement, Guillermo Serrato, and Neel Sundaresan.
735
+ Deepdebug: Fixing
736
+ python bugs using stack traces, backtranslation, and code skeletons, 2021. arXiv:2105.09352.
737
+ Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing
738
+ Qin, Ting Liu, Daxin Jiang, and Ming Zhou. Codebert: A pre-trained model for programming
739
+ and natural languages, 2020. arXiv:2002.08155.
740
+ Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason
741
+ Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The pile: An 800gb dataset of diverse text
742
+ for language modeling. 2020. arXiv:2101.00027.
743
+ Luca Gazzola, Daniela Micucci, and Leonardo Mariani. Automatic software repair: A survey. IEEE
744
+ Transactions on Software Engineering, 45(1):34–67, 2019.
745
+ Ali Ghanbari, Samuel Benton, and Lingming Zhang. Practical program repair via bytecode muta-
746
+ tion. In Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing
747
+ and Analysis, ISSTA 2019, pp. 19–30. ACM, 2019. ISBN 978-1-4503-6224-5.
748
+ Mary Hanbury. Investigators have reportedly found more evidence that could connect the ethiopian boeing 737
+ max crash to a deadly accident five months before. Business Insider, 2019.
+ https://www.businessinsider.com/potential-link-between-ethiopian-boeing-737-max-crash-lion-air-mishap-2019-3.
779
+ HuggingFace. Hugging face, 2022. https://huggingface.co.
780
+ Nan Jiang, Thibaud Lutellier, and Lin Tan. Cure: Code-aware neural machine translation for auto-
781
+ matic program repair. 2021 IEEE/ACM 43rd International Conference on Software Engineering
782
+ (ICSE), May 2021.
783
+ Sophia D Kolak, Ruben Martins, Claire Le Goues, and Vincent Josua Hellendoorn. Patch generation
784
+ with language models: Feasibility and scaling behavior. In Deep Learning for Code Workshop,
785
+ 2022.
786
+ Derrick Lin, James Koppel, Angela Chen, and Armando Solar-Lezama.
787
+ Quixbugs: A multi-
788
+ lingual program repair benchmark set based on the quixey challenge.
789
+ SPLASH Companion
790
+ 2017, pp. 55–56, New York, NY, USA, 2017. Association for Computing Machinery.
791
+ ISBN
792
+ 9781450355148.
793
+ Kui Liu, Anil Koyuncu, Dongsun Kim, and Tegawendé F. Bissyandé. Tbar: Revisiting template-
794
+ based automated program repair. In Proceedings of the 28th ACM SIGSOFT International Sym-
795
+ posium on Software Testing and Analysis, ISSTA 2019, pp. 31–42, New York, NY, USA, 2019.
796
+ ACM. ISBN 9781450362245.
797
+ 10
798
+
799
+ Yiling Lou, Ali Ghanbari, Xia Li, Lingming Zhang, Haotian Zhang, Dan Hao, and Lu Zhang. Can
800
+ automated program repair refine fault localization? a unified debugging approach. In Proceedings
801
+ of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 75–87,
802
+ 2020.
803
+ Scott Matteson. Report: Software failure caused $1.7 trillion in financial losses in 2017. TechRepublic, 2018.
+ https://www.techrepublic.com/article/report-software-failure-caused-1-7-trillion-in-financial-losses-in-2017/.
811
+ Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese,
812
+ and Caiming Xiong. Codegen: An open large language model for code with multi-turn program
813
+ synthesis, 2022. arXiv:2203.13474.
814
+ Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David
815
+ Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Au-
816
+ gustus Odena. Show your work: Scratchpads for intermediate computation with language models,
817
+ 2021. arXiv:2112.00114.
818
+ Devon H. O’Dell. The debugging mindset. acmqueue, 2017. https://queue.acm.org/
819
+ detail.cfm?id=3068754/.
820
+ OpenAI. Does chatgpt remember what happened earlier in the conversation? 2022.
+ https://help.openai.com/en/articles/6787051-does-chatgpt-remember-what-happened-earlier-in-the-conversation/.
834
+ Julian Aron Prenner, Hlib Babii, and Romain Robbes. Can openai’s codex fix bugs?: An evaluation
835
+ on quixbugs. In 2022 IEEE/ACM International Workshop on Automated Program Repair (APR),
836
+ pp. 69–75, 2022.
837
+ John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy
838
+ optimization algorithms, 2017. arXiv:1707.06347.
839
+ John Schulman, Barret Zoph, Jacob Hilton Christina Kim, Jacob Menick, Jiayi Weng, Juan Fe-
840
+ lipe Ceron Uribe, Liam Fedus, Luke Metz, Michael Pokorny, Rapha Gontijo Lopes, Shengjia
841
+ Zhao, Arun Vijayvergiya, Eric Sigler, Adam Perelman, Chelsea Voss, Mike Heaton, Joel Parish,
842
+ Dave Cummings, Rajeev Nayak, Valerie Balcom, David Schnurr, Tomer Kaftan, Chris Hal-
843
+ lacy, Nicholas Turley, Noah Deutsch, Vik Goel, Jonathan Ward, Aris Konstantinidis, Woj-
844
+ ciech Zaremba, Long Ouyang, Leonard Bogdonoff, Joshua Gross, David Medina, Sarah Yoo,
845
+ Teddy Lee, Ryan Lowe, Dan Mossing, Joost Huizinga, Roger Jiang, Carroll Wainwright, Diogo
846
+ Almeida, Steph Lin, Marvin Zhang, Kai Xiao, Katarina Slama, Steven Bills, Alex Gray, Jan Leike,
847
+ Jakub Pachocki, Phil Tillet, Shantanu Jain, Greg Brockman, and Nick Ryder. Chatgpt: Optimiz-
848
+ ing language models for dialogue. 2022. https://openai.com/blog/chatgpt/.
849
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc
850
+ Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models,
851
+ 2022. arXiv:2201.11903.
852
+ Chunqiu Steven Xia and Lingming Zhang. Less training, more repairing please: Revisiting auto-
853
+ mated program repair via zero-shot learning, 2022. arXiv:2207.08281.
854
+ Chunqiu Steven Xia, Yuxiang Wei, and Lingming Zhang. Practical program repair in the era of large
855
+ pre-trained language models, 2022. arXiv:2210.14179.
856
+ He Ye, Matias Martinez, and Martin Monperrus. Neural program repair with execution-based back-
857
+ propagation. In 2022 IEEE/ACM 44th International Conference on Software Engineering (ICSE),
858
+ pp. 1506–1518, 2022.
859
+ Qihao Zhu, Zeyu Sun, Yuan-an Xiao, Wenjie Zhang, Kang Yuan, Yingfei Xiong, and Lu Zhang.
860
+ A syntax-guided edit decoder for neural program repair. In Proceedings of the 29th ACM Joint
861
+ Meeting on European Software Engineering Conference and Symposium on the Foundations of
862
+ Software Engineering, pp. 341–353, New York, NY, USA, 2021. ACM. ISBN 9781450385626.
863
+ Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul
864
+ Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences, 2019.
865
+ arXiv:1909.08593.
866
+ 11
867
+
3dFQT4oBgHgl3EQfGzVw/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
4NAzT4oBgHgl3EQfffyv/content/2301.01454v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f0315eb83bdea9b167729ce171d7f2b3539a33e409c1964993c34374db443dac
3
+ size 1519728
4NAzT4oBgHgl3EQfffyv/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a9c3f36978ff9e1ee74bec38e1270df56abc060252158e41917dfb5d0f167115
3
+ size 3080237
4NAzT4oBgHgl3EQfffyv/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:56839a0f3c465c27400d41eebeee185cdecaa6b2d2cf6c12994554691973bbcf
3
+ size 114745
4dE2T4oBgHgl3EQf6Qjc/content/2301.04199v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2ae50a9ab893df7f338f153243ec79c7e9a89815fbb9767f5d2f3be69bf5c40d
3
+ size 7075619
4dE2T4oBgHgl3EQf6Qjc/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b7f3ddd172e8f892a1072fdf7106b0c9abde91fe52624d7940030663de20fc43
3
+ size 7471149
5tAyT4oBgHgl3EQf2fk_/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b09e620966bd18774e254e22d556ae044d0e2d5a24ac7b5da39682ed09971573
3
+ size 9633837
6NE0T4oBgHgl3EQfvwGn/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:27dac5927d37b5a587bbb2b644ebaefc765b3c808e680ef77bce7ba13398f5e9
3
+ size 1703981
6NE0T4oBgHgl3EQfvwGn/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9b56f2ae8d2f0051a436577f6e4913e20f56a4d68329aacf43cfd0aa3ae263fc
3
+ size 69408
6tE4T4oBgHgl3EQfcQy_/content/tmp_files/2301.05082v1.pdf.txt ADDED
@@ -0,0 +1,1039 @@
 
 
 
 
1
+ DISCOVERING AND EXPLAINING DRIVER BEHAVIOUR UNDER
2
+ HOS REGULATIONS
3
+ A PREPRINT
4
+ Ignacio Vellido1, Juan Fdez-Olivares1, and Raúl Pérez1
5
+ 1Department of Computer Science and Artificial Intelligence, University of Granada, Spain
6
+ ignaciovellido@ugr.es, {faro, fgr}@decsai.ugr.es
7
+ January 13, 2023
8
+ ABSTRACT
9
+ Worldwide, transport authorities are imposing complex Hours of Service regulations on drivers,
+ which constrain the amount of working, driving and resting time when delivering a service. As a
+ consequence, transport companies are responsible not only for scheduling driving plans aligned with
+ laws that define the legal behaviour of a driver, but also for monitoring and identifying, as soon as
+ possible, problematic patterns that can incur costs due to sanctions. Transport experts are frequently
+ in charge of many drivers and lack the time to analyse the vast amount of data recorded by
+ the onboard sensors, and companies have grown accustomed to paying sanctions rather than predicting
+ and forestalling wrongdoings. This paper presents an application for summarising raw driver activity
+ logs according to these regulations and for explaining driver behaviour in a human-readable format.
18
+ The system employs planning, constraint, and clustering techniques to extract and describe what
19
+ the driver has been doing while identifying infractions and the activities that originate them. Further-
+ more, it groups drivers based on similar driving patterns. Experimentation on real-world data
+ indicates that recurring driving patterns can be clustered, from short basic driving sequences to whole
+ drivers' working days.
23
+ 1
24
+ Introduction
25
+ Worldwide, transport authorities are imposing complex Hours of Service (from now on, HoS) regulations on drivers
+ (Meyer 2011, Goel and Vidal 2013), which constrain the amount of working, driving and resting time when delivering
+ a service. As a consequence, transport companies are responsible not only for scheduling driving plans aligned with
+ laws that define the legal behaviour of a driver, but also for monitoring and identifying as soon as possible problematic
+ patterns that can incur costs due to sanctions.
30
+ Fortunately, the widespread adoption of onboard IoT devices in vehicle fleets enables recording of the driver activities
31
+ in event logs, but the large amount of data ingested makes it difficult for transport experts to understand what happened
+ and to take actions that forestall illegal behaviour. For this reason, an important technical challenge is to come up
33
+ with easily interpretable descriptive models that help understand the huge amount of information stored in such event
34
+ logs. The main objective consists not only of finding out if a driver's workplan complies with the HoS regulation, but
+ also of summarising their activities in a concise but representative way. Additionally, the underlying patterns in the
+ event log could be analysed in order to discover driving styles, which could make possible the suggestion of routes or
+ tasks more aligned with driver preferences.
38
+ The creation of driver profiles based on driving styles under HoS can be extremely useful for managers, as they could
+ assign transport routes to the most appropriate drivers, given the length of the route and the proximity of the deadline.
+ For example, drivers who maximise their driving hours could be preferred for long-distance routes, and drivers who
+ tend to take split rests could be assigned to on-city deliveries.
42
+ arXiv:2301.05082v1 [cs.AI] 12 Jan 2023
43
+
44
+ [Figure 1 diagram: a Weekly Driving period decomposed into Normal/Extended Daily Driving periods, each made of
+ Driving Sequences (Uninterrupted or Split 1/Split 2 of Activities with Breaks of Type 1/Type 2), separated by daily
+ rests or Rest Days and ending with a Weekly Rest]
+ Figure 1: Partial example of the HoS tree. At the uppermost level, a weekly driving period is formed by several daily
+ periods, which must end with a weekly rest. Similarly, daily driving periods are separated by daily rests and, according
+ to the accumulative hours of driving in them, can be classified as Normal Daily Driving periods (up to 9 hours) or
82
+ Extended Daily Driving periods (more than 9 hours). Because each driving sequence should not surpass 4.5 hours of
83
+ driving time, they can be distinguished by the number of driving sequences in them.
84
+ Therefore, in this paper we present a method that, starting from real event logs extracted from a tachograph device, 1)
+ labels driver activities according to the HoS regulation, 2) identifies infractions and their cause, 3) extracts summarised
+ information about the log while clustering driving sequences based on similar behaviour patterns, and 4) groups drivers
+ by similarity of those clustered patterns. As a result, experts are provided with an understandable analysis of what
+ the driver has been doing at multiple levels of granularity, from a detailed description of the activities and infractions
+ under the HoS regulation to a categorisation of drivers with similar tendencies.
+ The remainder of this paper presents, firstly, a description of the problem addressed and some background concepts to
+ it. Then, we present the methodology of the approach, followed by details of the experimentation conducted over a proof
+ of concept of the application. Finally, we conclude by discussing related and future work.
+ 2 Problem Description
+ We are collaborating with a company that provides decision support based on prediction services to its customers.
+ Ultimately, they want to help them govern the behaviour of their drivers by predicting whether a driver is close to
+ committing an infraction, as well as by characterising drivers according to their driving style with respect to the HoS
+ regulation.
+ They handed us tachograph logs of multiple drivers with thousands of activities and asked us to develop a system to
+ analyse driver behaviour. Because the regulation imposes additional difficulties in interpreting the data, and given the high
+ volume that is constantly being generated, experts cannot directly interpret the original tachograph logs and require a
+ summarisation of what a driver has been doing during that period of time to make business decisions. A tachograph
+ (Baldini et al. 2018) is an automated recording device fitted into a vehicle that extracts information from the driving
+ activities such as speed, duration and distance.
+ Our dataset represents an event log where every activity is a tuple (id, start, end, dur, a), each component
+ referring to: driver identifier id; start and end timestamps; activity duration dur; and activity identifier a, respectively.
+ A value for a is any of the labels [Driving, Other, Break, Idle], meaning that the driver is either Driving, performing
+ Another Work, at Break or Idle during dur minutes, between start and end. The semantics of each event is completed
+ with the definitions provided by the HoS regulation, which are detailed in the following paragraphs.
+ [Figure 2 diagram — pipeline: Tachograph Log, Activity Recognition and HoS analysis, Labelled Log, Infringement Analysis, Extended Labelled Log, Driver Behaviour Analysis (day summaries, clustering), Driver Patterns]
+ Figure 2: General overview of our approach.
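The event-log tuple described above can be sketched as a small record type. This is an illustrative encoding (the class and field names are ours, not from the paper), including a consistency check between the timestamps and the stated duration:

```python
from dataclasses import dataclass
from datetime import datetime

# One tachograph event: the tuple (id, start, end, dur, a) described in the text.
# Field names are illustrative; the paper only fixes the tuple components.
@dataclass
class Activity:
    driver_id: str
    start: datetime
    end: datetime
    dur: int   # duration in minutes
    a: str     # one of: Driving, Other, Break, Idle

    def __post_init__(self):
        assert self.a in {"Driving", "Other", "Break", "Idle"}, "unknown activity label"
        # The stated duration must match the timestamps (in minutes).
        assert self.dur == int((self.end - self.start).total_seconds() // 60)

# First row of Table 1: driver1 driving for 4 minutes.
event = Activity("driver1",
                 datetime(2017, 1, 11, 17, 33),
                 datetime(2017, 1, 11, 17, 37),
                 4, "Driving")
```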
+ Although the HoS standard is applied in several countries, in this work we focus on the European Union regulation
+ (EC) No 561/2006, which has been extensively analysed in (Goel and Vidal 2013, Meyer 2011). The basic terms refer
+ to four types of driver activities: break (a short period for recuperation), rest (a free-disposal period with enough time to
+ sleep), driving (time during which the driver is operating a vehicle) and other work (time devoted to any work except
+ driving, like loading).
+ These activities are hierarchically grouped up to weekly intervals, based on the duration of the events contained in
+ them. To ease the explanation of this article, we refer to these whole structures as HoS trees. In Figure 1 we
+ exemplify a portion of a HoS tree displaying a Normal Daily Driving (NDD) period on the first day and an Extended
+ Daily Driving (EDD) period on the last.
+ At the lower levels, activities are joined into different types of driving sequences. A basic driving sequence is composed
+ of a totally ordered set of the elements of [Driving, Other, Break, Idle], constrained so that the duration of any
+ Break is less than 15 minutes. More constraints are defined over the duration of the rests and breaks, and over the
+ accumulated duration of driving sequences.
+ The regulation provides a set of basic rules and optional ones, should the former not be satisfied, thus allowing more
+ flexibility to generate and interpret driving schedules under such constraints. For example, either a break of 45 min
+ has to be taken after 4.5 hours of accumulated driving, or it can be split into two parts of at least 15 min and
+ 30 min, respectively. This feature is good for drivers since it provides flexibility to their work, but it complicates the
+ interpretability of what they are doing. The regulation also defines additional constraints (for example, the maximum
+ number of occurrences of a reduced rest in a weekly driving period) and relationships between the different types of
+ sub-sequences, as well as their internal structure.
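The split-break rule just described can be sketched as a predicate over the break durations of a driving period. This is a minimal reading of the 45-min / 15+30-min option only (the function name and encoding are ours, and the full regulation imposes further constraints not modelled here):

```python
# Sketch of one HoS rule: after 4.5 h of accumulated driving, a 45-min break is due,
# which may instead be split into a first part of >= 15 min followed by a second
# part of >= 30 min. `breaks` holds break durations (minutes) in chronological order.
def breaks_satisfy_rule(breaks):
    # Option 1: a single uninterrupted break of at least 45 minutes.
    if any(b >= 45 for b in breaks):
        return True
    # Option 2: a >= 15 min part followed later by a >= 30 min part.
    for i, first in enumerate(breaks):
        if first >= 15 and any(second >= 30 for second in breaks[i + 1:]):
            return True
    return False

print(breaks_satisfy_rule([45]))      # True: one full break
print(breaks_satisfy_rule([20, 35]))  # True: valid 15/30 split
print(breaks_satisfy_rule([30, 10]))  # False: second part too short
```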
+ 3 Background
+ Automated planning (Ghallab et al. 2016) is a branch of AI concerned with the study of agent acting techniques.
+ However, its uses can be broadened and, as we show in this paper, planning can also be applied to recognition tasks.
+ Two elements are required in a planning environment: (i) the action models existing in the world, referred to as the domain;
+ and (ii) a description of the initial state of the world, the objects involved in it and the desired goals, called the problem.
+ These two inputs are provided to a planner, a search-based algorithm that determines the plan (sequence of actions)
+ that achieves the goals from the starting state.
+ Our proposed methodology employs hierarchical planning, more commonly referred to as Hierarchical Task Networks
+ (HTN). HTNs form a branch of classical planning where the domain can be decomposed into hierarchical structures
+ of tasks/subtasks, with low-level tasks representing temporally annotated actions, and compound tasks representing
+ temporal ordering strategies between those actions.
+ 4 Application Overall Description
+ To solve the problem of explaining and summarising a driver's tachograph log and its compliance with the HoS
+ regulation, we propose a modular architecture divided into several main components, as seen in Figure 2:
+ • First, an initial planning process to label the input tachograph log according to the HoS regulation.
+ • Then, a system to identify and explain the causes of driver infractions, extending the previously labelled log.
+ [Figure 3 diagram — components: Tachograph Log, Temporal Observations, Transformation to HTN problem, HPDL problem; HoS Rules, Attribute Grammar, Transformation to HTN domain, HPDL domain; Planner, Labelled Log]
+ Figure 3: Labelling process for a tachograph log.
+ Table 1: Labelling output for legal activities. This example shows the second (and last) driving sequence in a normal
+ daily driving period, where the required break has been taken in two parts: a small break in the first split and a second
+ one extended as a daily rest.
+ Driver  | Start            | End              | Duration | Activity | Week | Day | DayType | Sequence | BreakType | Token | Legal
+ driver1 | 11/01/2017 17:33 | 11/01/2017 17:37 |   4      | Driving  |  1   |  4  | ndd     | second   | split 1   | A     | yes
+ driver1 | 11/01/2017 17:37 | 11/01/2017 18:16 |  39      | Break    |      |     |         |          |           | B T2  | yes
+ driver1 | 11/01/2017 18:16 | 11/01/2017 18:17 |   1      | Driving  |      |     |         |          | split 2   | A     | yes
+ driver1 | 11/01/2017 18:17 | 11/01/2017 18:25 |   8      | Other    |      |     |         |          |           | A     | yes
+ driver1 | 11/01/2017 18:25 | 11/01/2017 19:54 |  89      | Driving  |      |     |         |          |           | A     | yes
+ driver1 | 11/01/2017 19:54 | 11/01/2017 19:57 |   3      | Break    |      |     |         |          |           | B T0  | yes
+ driver1 | 11/01/2017 19:57 | 11/01/2017 19:58 |   1      | Driving  |      |     |         |          |           | A     | yes
+ driver1 | 11/01/2017 19:58 | 11/01/2017 20:01 |   3      | Other    |      |     |         |          |           | A     | yes
+ driver1 | 11/01/2017 20:01 | 12/01/2017 07:06 | 665      | Break    |      |     |         |          |           | DR T1 | yes
+ • Thirdly, a module to analyse driver behaviour via summarisation of driving sequences.
+ • Lastly, summarised driving days are used as training data to cluster drivers by similar driving patterns.
+ The following subsections provide a detailed explanation of each component.
+ 4.1 Labelling Driver Activities
+ To label our logs with HoS terms we employ our previously developed methodology proposed in (Vellido-Expósito
+ et al. 2022), where a HTN domain serves to both recognise and tag activities from a tachograph log. We provide a
+ brief summary below, but we refer the reader to the original paper for an in-depth explanation of the methodology. The
+ overall steps of this system, represented in Figure 3, are:
+ 1. Generate a set of ordered temporal observations from the tachograph activity log, which are part of the initial
+ state of a HTN problem.
+ 2. Represent the recognition of a driver activity as a temporal HTN problem, where an activity is added to
+ the plan if (i) the temporal information of the activity is consistent with the domain, and (ii) the temporal
+ constraints of the activity are consistent with the rest of the temporal constraints of the actions already added to
+ the plan.
+ 3. Codify a HoS tree in an attribute grammar (Knuth 1968) as an intermediate representation, with HoS rules as
+ productions.
+ 4. Translate the grammar into a temporal HTN domain, aimed at representing the parsing of the activity log as
+ a HTN problem where (i) terminal symbols are recognised as temporal events and (ii) nonterminal symbols
+ are recognised according to grammar rules.
+ 5. Extend the domain to both recognise and label activities from the log so that it is easily interpretable. The resulting
+ log contains five new labels according to the contexts DayType (Normal or Extended Daily Driving period),
+ Sequence (whether the activity belongs to the first, second or third sequence in the day), BreakType (whether breaks are
+ taken in one or two parts), Token (the type of activity at the lowest level of the HoS tree1) and Legal
+ (whether or not the activity complies with the regulation), as well as two counter columns for the day and the
+ week processed. An output example can be seen in Table 1.
+ 1Many types of categories exist at the lowest level, based on the duration of the action. For example, A indicates a working
+ activity, B T0 a break of less than 15 minutes, DR T1 a daily rest of more than 11 hours, and WR T1 a weekly rest with more than
+ 45 hours.
+ [Figure 4 diagram — components: Labelled Log, Test Evaluation, Test List, Labels Comparison, Labelled Log with Infringements; Original Tachograph Log, Labelling Process, Relaxed HPDL domain, Relaxed Labelled Log]
+ Figure 4: Infringement analysis process for a labelled log.
+ In summary, the recognition problem is solved with a planning process where the domain walks through an activity log
+ and its internal HTN structure simultaneously, the latter codifying the HoS tree. If activities comply with the temporal
+ and formal restrictions they are labelled with the appropriate terms; otherwise, contexts are tagged as unrecognised.
+ Nevertheless, the domain is designed to label as many contexts as possible. If a higher (more general) context cannot
+ be identified, the domain still attempts to identify lower contexts before ignoring the action. That means that when
+ a bigger sequence cannot be grouped and labelled together (e.g. when the driver exceeds the maximum number
+ of driving hours and the DayType column cannot be tagged), the domain tries to tag smaller sequences with their
+ corresponding label. An example is shown below in Table 3 where, although the DayType and Sequence tags could not be
+ recognised, the system still identifies both BreakType splits and includes the appropriate labels, as well as the correct
+ Token contexts.
+ 4.2 Explaining Infringements
+ The previous recognition process labels the tachograph log considering the terms defined by the HoS regulation, its
+ compliance with them and details of each activity's position in the HoS tree. However, when drivers commit infractions this system
+ by itself cannot provide an explanation of the cause and the exact root activity, due to the fact that planning techniques
+ rely on backtracking (that is, the ability to retract while exploring the planning graph) and there is no simple way to
+ distinguish between a genuine backtracking step while walking through the HTN domain and one forced by an illegal
+ activity in the log.
+ Therefore, we found a need to further analyse the labelled log and explain this information to users without requiring
+ them to inspect all the activities not recognised in the log. We solved this problem from two perspectives, each one
+ concerned with different kinds of violations, which are explained in the following subsections. Figure 4 shows an
+ overview of the approaches.
+ 4.2.1 Test evaluation
+ On one hand, we represent rules from the HoS regulation as tests and apply them to the sequences where the labelling
+ process found unrecognisable events (i.e., those missing at least one label). These tests, as exemplified in the left part
+ of Table 2, codify limits and restrictions on the duration of driving sequences and breaks. Whenever a test flags a
+ sequence, the system marks it and provides an explanation of the infringement, as seen in Table 3.
+ Table 2: Tests applied to driving sequences in the log in order to identify infringement causes.
+ Test                                                           | Infraction type
+ dt seq > 4.5h                                                  | Excessive Driving without breaks
+ dt day > 9h ∧ EDDs this week > 2                               | Excessive Driving in day (NDD)
+ dt day > 10h                                                   | Excessive Driving in day (EDD)
+ Token day before = DR T3 ∧ Token = ¬ (DR T4 or WR)             | Missing other half of split daily rest
+ Token = DR or WR ∧ Legal = No ∧ Remaining contexts = ¬ Unknown | Rest past the daily/weekly deadline
+ Table 3: Labelling output example for illegal activities and the infraction detected by the tests list.
+ Driver   | Start            | End              | Duration | Activity | Week | Day | DayType | Sequence | BreakType     | Token | Legal | Infraction
+ driver39 | 10/01/2017 12:12 | 10/01/2017 14:17 | 125      | Driving  |  1   |  5  | unknown | unknown  | split 1       | A     | no    | Surpassed NDD driving time
+ driver39 | 10/01/2017 14:17 | 10/01/2017 14:40 |  23      | Break    |      |     |         |          |               | B T2  | no    |
+ driver39 | 10/01/2017 14:40 | 10/01/2017 16:52 | 132      | Driving  |      |     |         |          | split 2       | A     | no    |
+ driver39 | 10/01/2017 16:52 | 10/01/2017 17:25 |  33      | Break    |      |     |         |          |               | B T3  | no    |
+ driver39 | 10/01/2017 17:25 | 10/01/2017 20:27 | 182      | Driving  |      |     | ndd     | first    | split 1       | A     | yes   |
+ driver39 | 10/01/2017 20:27 | 10/01/2017 20:42 |  15      | Break    |      |     |         |          |               | B T2  | yes   |
+ driver39 | 10/01/2017 20:42 | 10/01/2017 21:54 |  72      | Driving  |      |     |         |          | split 2       | A     | yes   |
+ driver39 | 10/01/2017 21:54 | 10/01/2017 21:59 |   5      | Break    |      |     |         |          |               | B T0  | yes   |
+ driver39 | 10/01/2017 21:59 | 10/01/2017 22:00 |   1      | Driving  |      |     |         |          |               | A     | yes   |
+ driver39 | 10/01/2017 22:00 | 10/01/2017 22:37 |  37      | Break    |      |     |         |          |               | B T3  | yes   |
+ driver39 | 10/01/2017 22:37 | 10/01/2017 23:21 |  44      | Driving  |      |     |         | second   | uninterrupted | A     | yes   |
+ driver39 | 10/01/2017 23:21 | 11/01/2017 08:53 | 572      | Break    |      |     |         |          |               | DR T2 | yes   |
+ Tests take the form of logic constraints
+ f(a_start, a_end) o V          (1)
+ being:
+ • f a function applied over the sequence defined between activities a_start and a_end (e.g. sum, context value).
+ • o a logic operator.
+ • V either the value of a context (e.g. Token, DayType), a scalar or a duration.
+ As an example, the first constraint in Table 2 could be rewritten as duration(seq_start, seq_end) > 4.5h.
+ It is important to note that, to correctly identify the infraction, some tests may consider not only the illegal activities
+ but also prior activities of other days, a situation frequently present in reduced breaks and rests, where sometimes
+ compensation breaks are not fulfilled. Therefore, the interval of activities checked by the tests depends on the test
+ itself.
+ Because tests are encoded as logic constraints, it is easy to extend the system with additional expert-provided rules or
+ modify them if the regulation changes.
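The constraint form f(a_start, a_end) o V lends itself to a small combinator. The following is an illustrative sketch (the helper names and activity encoding are ours) that builds the first test of Table 2 from a function, a comparison operator and a threshold:

```python
import operator

# Hypothetical sketch of f(a_start, a_end) o V: each test combines a function over a
# slice of activities, a logic operator, and a threshold value V.
def make_test(f, op, value, infraction):
    def run(activities):
        # Return the infraction description when the constraint fires, else None.
        return infraction if op(f(activities), value) else None
    return run

# f: accumulated driving minutes in the sequence; activities are (label, minutes) pairs.
def driving_minutes(activities):
    return sum(dur for act, dur in activities if act == "Driving")

# duration(seq_start, seq_end) > 4.5h — the first constraint in Table 2.
excessive_driving = make_test(driving_minutes, operator.gt, 4.5 * 60,
                              "Excessive Driving without breaks")

seq = [("Driving", 150), ("Break", 10), ("Driving", 140)]
print(excessive_driving(seq))  # 290 driving minutes > 270: the test fires
```

New expert rules would be added by calling `make_test` with a different function or threshold, which mirrors the extensibility argument above.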
+ 4.2.2 Re-labelling
+ A second approach consists of re-labelling the log using a domain with relaxed duration intervals; that is, the limits
+ imposed by the regulation are softened (e.g. the maximum driving time or the minimum break time are enlarged up and down)
+ and the system looks for changes between the new log and the originally tagged log.
+ This process helps to discover infringements caused by a slightly borderline duration, like the driver surpassing (probably
+ unconsciously) the restriction by a small amount. These kinds of situations are not easily identified by the tests,
+ due to the fact that the activity by itself could still be legal but labelled differently, becoming an infraction later on.
+ Table 4: Identifying infringements with a relaxed domain. In this example the fourth activity surpasses by one minute
+ the duration limit to be considered B T0, making the whole sequence illegal.
+ Original Labelled Log
+ Duration | Activity | DayType | Sequence | BreakType | Token | Legal
+ 57       | Driving  | unknown | unknown  | split 1   | A     | no
+ 3        | Break    |         |          |           | B T0  | no
+ 2        | Driving  |         |          |           | A     | no
+ 16       | Break    |         |          |           | B T2  | no
+ Relaxed Labelled Log
+ Duration | Activity | DayType | Sequence | BreakType | Token | Legal
+ 57       | Driving  | ndd     | first    | split 1   | A     | yes
+ 3        | Break    |         |          |           | B T0  | yes
+ 2        | Driving  |         |          |           | A     | yes
+ 16       | Break    |         |          |           | B T0  | yes
+ 
+ [Figure 5 diagram — components: Extended Labelled Log split into Legal Days and Illegal Days; for each split: Encoding, Paragraph Vector Model, Vectors, HDBSCAN Model, Centroids; output: Clustered Log]
+ Figure 5: Clustering process for a labelled log.
+ For example, a driver could surpass the maximum limit for a pause before it is considered a break by a few minutes
+ without noticing, and proceed as if a break had not been consumed. As a consequence, that action will be valid, but
+ after the next breaks infractions may arise, because the driver is not following their plan as expected and such actions
+ may not fit correctly under the HoS tree.
+ If the violation is related to this type of mistake, the new relabelled log will contain fewer illegal sequences than the
+ original, and we can compare the Token contexts (concerning the type of activity at the lowest level in the HoS tree) to
+ understand which changes make the sequence legal. Table 4 shows an example with a driver exceeding the break time
+ by two minutes.
+ Therefore, this method allows us to (a) discover new infringements not considered by the test list, and (b) analyse how
+ the activity should have been performed to avoid infractions.
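The Token comparison between the original and the relaxed labelling can be sketched as a positional diff. This is an illustrative helper (the function name and row layout, which follows Table 4, are ours):

```python
# Sketch of the comparison step: given two aligned labelled logs, report which
# activities changed Token, i.e. which small duration changes would legalise the day.
def token_changes(original, relaxed):
    """Each log is a list of (duration, activity, token) rows, aligned by position."""
    changes = []
    for i, ((dur, act, tok_o), (_, _, tok_r)) in enumerate(zip(original, relaxed)):
        if tok_o != tok_r:
            changes.append((i, act, dur, tok_o, tok_r))
    return changes

# Rows of Table 4 (Duration, Activity, Token).
original = [(57, "Driving", "A"), (3, "Break", "B T0"),
            (2, "Driving", "A"), (16, "Break", "B T2")]
relaxed  = [(57, "Driving", "A"), (3, "Break", "B T0"),
            (2, "Driving", "A"), (16, "Break", "B T0")]
print(token_changes(original, relaxed))
# Only the 16-min break changed label (B T2 -> B T0): trimming it slightly
# would have avoided the infraction.
```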
+ 4.3 Analysing Driver Behaviour
+ The two previous steps provide a way to understand a driver log and its compliance with the HoS regulation. However,
+ experts are usually responsible for dozens of drivers, and it is not feasible to analyse the substantial logs of each one of
+ them in order to detect problematic tendencies.
+ Therefore, we developed a module that clusters behaviour patterns in driver activities and summarises each cluster
+ with expert knowledge. This method helps to separate standard driving days from unusual ones without inspecting the
+ driver log, and lets users concentrate their efforts on analysing only the problematic sequences.
+ In order to do that, we considered our problem as an NLP (Natural Language Processing) task, where activities from
+ the log are treated as words and daily sequences as documents. That way, we can employ NLP-oriented techniques to
+ transform sequences of varying length into fixed dimensions and measure similarity between them.
+ Figure 5 shows an overview of the process, consisting of the following steps:
+ Table 5: Partial output of the clustering process. The system identifies the most similar centroid to the input sequence
+ and the description associated with it.
+ Labelled Log
+ Activity | DayType | Sequence | BreakType     | Token | Legal | Cluster
+ Driving  | ndd     | unique   | uninterrupted | A     | yes   | 2
+ Other    |         |          |               | A     | yes   |
+ Break    |         |          |               | DR T1 | yes   |
+ Most similar centroid
+ Activity | DayType | Sequence | BreakType     | Token | Legal | Cluster
+ Driving  | ndd     | unique   | uninterrupted | A     | yes   | 2
+ Other    |         |          |               | A     | yes   |
+ Break    |         |          |               | DR T3 | yes   |
+ Description: Legal and standard daily driving formed by a unique and uninterrupted driving sequence
+ 1. First, a preprocessing step is applied in which the dataset is split into two parts depending on whether or not the days
+ contain illegal activities. The reason behind this process is that an infraction recognition process
+ is already provided by the previous module, and thus there is no need for our clustering model to learn to
+ distinguish between legal and illegal sequences. On the contrary, we are providing a prior separation to help
+ the model extract more interesting patterns that are not related to the legality of the sequence.
+ 2. The subset of labelled columns (i.e. contexts) that describe the action from an overall point of view is
+ selected, namely (Activity, DayType, BreakType, Token). For the illegal subset, the Infraction column is also
+ included to generate clusters and centroids associated with already identified infringements. Specific details
+ about duration and timestamps are not relevant to summarise the days. Nevertheless, some of the information
+ they provide is encoded in the labels, as it is used by the labelling process. This step can be considered as
+ cleaning a document prior to an NLP topic categorisation task.
+ 3. Because the columns contain categorical features not suitable for computation, they are transformed into numerical
+ ones, and then joined together using a special character as a separator. After this step we can consider each
+ entry in our log as a word.
+ 4. Both previous steps are repeated for each activity in our dataset, and activities of the same day are grouped
+ into documents. As a result, we have a collection of documents, each one encoding the activities in a driving
+ day sequence as words.
+ 5. We then use Paragraph Vector (Le and Mikolov 2014) (also known as Doc2Vec) models to obtain dense
+ representations of fixed dimensions2. Although one model could be trained for both data splits (and reasonably
+ so, as both encodings are subsets of the same language), we obtained better results fine-tuning one model for
+ each split, but ultimately both transform a document into a 200-dimensional output vector.
+ 6. The resulting representations are now suitable for clustering techniques. We obtained our best results using
+ HDBSCAN (Campello et al. 2013), thanks to its robustness to noise, and choosing the number of clusters
+ based on both expert knowledge and metric results (Silhouette Coefficient, Calinski-Harabasz and Davies-
+ Bouldin indices), settling on a final value of 8 clusters for legal data and 7 for days with infractions. In the next
+ section we display a comparative analysis of other techniques on this data.
+ 7. Lastly, days are clustered and presented with the decoded centroids, which are described by an expert with a
+ meaningful description, as shown in Table 5.
+ 4.4 Generating Driver Profiles
+ Similarly to the working-day clustering previously explained, we performed a categorisation of drivers based on similar
+ behaviour, with the idea of extracting driver profiles. With enough data, we saw that the large amount of activities
+ contained in event logs can be summarised into different types of driving days, as described in the previous sections,
+ and such types encode enough information to extract a characterisation of the driver that can be informative for the
+ transport company.
+ 2Other techniques like Word2Vec or Bag of Words could be used, but we considered the paragraph weight extracted by Paragraph
+ Vector a useful source of information in our task.
+ Table 6: Example input data for extracting driver profiles. Each row encodes how frequently a driver performs one of
+ four types of driving days. Given the uneven amount of data for each driver, values are expressed as percentages and
+ each row sums to one.
+ Driver | Split Sequences Normal Rest | Uninterrupted Sequences Normal Rest | Split Sequences Reduced Rest | Uninterrupted Sequences Reduced Rest | . . .
+ 1      | 0.5                         | 0.1                                 | 0.3                          | 0.1                                  |
+ 2      | 0.2                         | 0.8                                 | 0.0                          | 0.0                                  |
+ 3      | 0.15                        | 0.5                                 | 0.3                          | 0.15                                 |
+ We performed the following steps to categorise drivers:
+ 1. Drop days with infractions: Tests including violations gave us results that grouped drivers by similar ratios
+ of infractions and by day types (e.g., those who tended to exceed their break time ended up in the same cluster).
+ However, we opted not to include such information, as it did not align with the purposes of the application.
+ Because the context around the infraction cannot be extracted exclusively from the tachograph data (e.g., was
+ the infringement voluntary, due to lack of correct planning, or caused by unexpected circumstances on the
+ road?), we believe managers should analyse case by case rather than make decisions without understanding
+ the real motive behind the infraction.
+ 2. Create frequency table: For each driver, its daily logs are processed following the methodology explained
+ in subsections 4.1 and 4.3, keeping the day type predicted by the clustering model. The training dataset is
+ created by counting the frequencies with which the driver has performed each type of driving day. Because in
+ our data the amount of information varies for each driver, these frequencies are transformed into percentages.
+ Ultimately, we obtain a D × C table as shown in Table 6, D being the number of drivers and C the number
+ of different day categories (i.e., the number of clusters discovered in the previous section).
+ 3. Training: We perform clustering with the resulting table. From multiple techniques, our best results were
+ obtained with a Gaussian mixture model trained with the Expectation-Maximization algorithm (Fraley and
+ Raftery 2002).
+ The deciding factor in choosing the best partition was based exclusively on expert knowledge. Because we
+ were informed that our data was from drivers who performed similar routes in the same country, we looked for
+ a small number of clusters that separated the data nicely.
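The frequency-table step can be sketched in a few lines. This is an illustrative helper (function name and day-type labels are ours) that normalises per-driver counts into the percentage rows of Table 6:

```python
from collections import Counter

# Sketch of the frequency-table step: count, per driver, how often each predicted
# day type occurs, then normalise so each row sums to one.
def profile_table(day_types_per_driver):
    """day_types_per_driver: {driver_id: [cluster label of each driving day]}"""
    table = {}
    for driver, days in day_types_per_driver.items():
        counts = Counter(days)
        total = len(days)
        table[driver] = {day_type: n / total for day_type, n in counts.items()}
    return table

table = profile_table({
    "driver1": ["split_ndd", "split_ndd", "uninterrupted_ndd", "split_reduced"],
    "driver2": ["uninterrupted_ndd"] * 4,
})
print(table["driver1"]["split_ndd"])  # 0.5: half of driver1's days are of this type
```

The resulting rows are what the Gaussian mixture model would be fitted on.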
+ As a closing point, we would like to note that driver profiles based only on tachograph data could be misleading, as
+ they do not account for the specific routes drivers perform. We believe a better approach would be to combine the cluster
+ information with route details like distance, type of vehicle or type of cargo, and as a result obtain a categorisation of
+ the driver given the type of route. That way, decisions based on these profiles will not be biased, and traffic managers
+ could assess their drivers for each particular service. We intend to explore those options in future work.
+ 5 Experimentation
+ We have validated our methodology with an experimentation using real tachograph logs provided by an industrial
+ collaborator. We were provided with a dataset formed by two-week-long sequences of activities from 290 different
+ drivers.
+ Because the architecture is composed of three different components, each one was validated individually. The labelling
+ process was verified against multiple driving sequences selected at random, both legal and illegal, manually verifying
+ that the output was appropriate under the HoS regulation. For the infringement analysis system, multiple tests for
+ each kind of infraction were carried out, confirming that not only the cause but also the subsequence containing the
+ infraction was detected.
+ Lastly, due to the fact that the clustering in our problem is an unsupervised task, we experimented with different
801
+ techniques and hyperparametrization to discover the best possible clusters. The quality of each partition was measured
802
+ with the Silhouette Coefficient and both Calinski-Harabasz and Davies-Bouldin indexs. The final clustering result was
803
+ selected between the best performing tests, and after expert inspection of the resulting clusters and centroids.
804
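As an aside, the partition-quality measurement can be illustrated with a compact, self-contained sketch of the Silhouette Coefficient (in practice one would use library implementations of all three indices; the toy points and labels below are hypothetical, not our tachograph data):

```python
from math import dist  # Euclidean distance, Python 3.8+

def silhouette(points, labels):
    """Mean silhouette score of a labelled partition (in [-1, 1], higher is better)."""
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    scores = []
    for p, l in zip(points, labels):
        own = clusters[l]
        if len(own) == 1:                     # singletons score 0 by convention
            scores.append(0.0)
            continue
        # a: mean distance to the rest of the point's own cluster
        a = sum(dist(p, q) for q in own) / (len(own) - 1)
        # b: mean distance to the closest other cluster
        b = min(sum(dist(p, q) for q in cl) / len(cl)
                for k, cl in clusters.items() if k != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated toy clusters yield a score close to 1
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
good = silhouette(points, [0, 0, 0, 1, 1, 1])
bad = silhouette(points, [0, 1, 0, 1, 0, 1])
print(good > bad)  # prints True
```

The same sweep over candidate cluster counts, scored this way, is what Figure 6 summarises for the library implementations.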
+ Figure 6 shows the performance of multiple algorithms in data with and without infractions, respectively. The algo-
805
+ rithms are: Gaussian Mixture models, with each component having its own covariance matrix, and controlling the number
806
+ of mixture components as the number of clusters; HDBSCAN (Campello et al. 2013), a hierarchical clustering model
807
+ employing density based measures; classical agglomerative clustering using average, complete and ward criteria; and
808
+ K-Means with cosine similarity as distance metric.
809
+ 9
810
+
811
+ Discovering and Explaining Driver Behaviour under HoS Regulations
812
+ A PREPRINT
813
+ [Figure 6 here: two panels, "Data with infractions" and "Data without infractions"; rows plot the Silhouette coefficient, Calinski-Harabasz score and Davies-Bouldin score against the Number of Clusters; legend: Gaussian Mixture, HDBSCAN, Hierarchical (avg), Hierarchical (complete), Hierarchical (ward), KMeans]
888
+ Figure 6: Silhouette Coefficient, Calinski-Harabasz and Davies-Bouldin indices as a function of the number of clusters
889
+ for multiple clustering algorithms. Notice that the y-axis scalings differ among the different panels of this figure.
890
+ Some insights can be extracted from the graphs. HDBSCAN is without doubt the best algorithm under both subsets of
891
+ this data, but we believe this is mostly due to its robustness to noise points. Furthermore, results on data
892
+ with infringements are, as expected, more variable, as this subset combines multiple types of infractions with driving
893
+ sequences that can be perfectly legal. The runner-up model is not clear, as results vary greatly with the number of
894
+ clusters.
895
+ For fully legal data we see hierarchical clustering with average and complete linkage methods vastly underperforming.
896
+ The graphs for the rest of the techniques take a similar shape, mostly agreeing that 8 clusters is an appropriate
897
+ partition for this data. Nevertheless, as the results are intended for human interpretation, it is important to remember that
898
+ the clusters should be reviewed by an expert whenever possible before settling on a final value.
899
+ Finally, we believe it is worth mentioning our experimentation with LDA (Latent Dirichlet Allocation) (Blei et al.
900
+ 2003). This technique is frequently used in NLP tasks to summarise a document with a set of topics. Due to the small
901
+ vocabulary size of our data compared to a typical NLP task, most words (i.e. driver activities) are present in many different
902
+ clusters (with the exception of illegal activities), and although the most relevant topics could be ranked and considered
903
+ as centroids there is no assurance that these topics are understandable (e.g. a B T2 break only makes sense if followed
904
908
+ Table 7: Clustering results for driver profiles and their interpretation.
+ Cluster | Interpretation                                                              | Proportion
+ 1       | No extended days and mostly takes rests uninterrupted                       | 8.6%
+ 2       | Usually splits rests as much as possible and rarely takes extended days     | 51.8%
+ 3       | Neither takes many extended days nor splits rests                           | 20.2%
+ 4       | No clear tendency, driver seems to be flexible                              | 14.4%
+ 5       | Tends to split rests as much as possible and frequently takes extended days | 5.0%
932
+ by a B T3 break. The presence of only one of them as a topic does not clarify if the driver completed the sequence or
933
+ committed an infraction).
934
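The small-vocabulary issue can be checked directly before fitting LDA: with only a handful of activity tokens, nearly every "word" occurs in nearly every "document". A minimal, self-contained illustration (the activity names and sequences are hypothetical stand-ins for real tachograph labels):

```python
from collections import Counter

# Hypothetical per-driver activity "documents"
docs = [
    ["Driving", "B_T2", "Driving", "B_T3", "Rest"],
    ["Driving", "Rest", "Driving", "B_T2", "B_T3", "Driving"],
    ["Rest", "Driving", "B_T2", "Driving", "Rest"],
    ["Driving", "B_T3", "Rest", "Driving", "B_T2"],
]

vocab = {w for d in docs for w in d}
df = Counter(w for d in docs for w in set(d))      # document frequency
# Share of the vocabulary present in at least 75% of the documents
widespread = sum(df[w] >= 0.75 * len(docs) for w in vocab) / len(vocab)
print(f"{len(vocab)} tokens; {widespread:.0%} appear in >=75% of documents")
```

With term occurrence this uniform across documents, LDA topics can hardly separate drivers, which matches the behaviour we observed.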
+ For our driver clustering experimentation, Table 7 shows the 5 resulting clusters and their interpretation after training.
935
+ Given that our training data is comprised mostly of event logs of national deliveries in Spain, we can see that more
936
+ than half of our drivers prefer to split their rests in two. Nevertheless, as mentioned above, the lack of data about
937
+ the routes performed in the tachograph hinders the expressiveness, but experts welcome any information that could help
938
+ them assign the best driver to a service as easily as possible.
939
+ The methodology and experimental results are encapsulated in a web application publicly available at https://
940
+ github.com/IgnacioVellido/Driver-Assistance-System.
941
+ 6 Related Work
943
+ This project is an extension of the authors' prior work (Fernandez-Olivares and Perez 2020) focused on the recognition
944
+ and labelling of driver activities under the HoS regulation. The novel contributions provided in this paper go a step
945
+ forward in our goal of developing an intelligent assistant to drivers and traffic managers, proposing a planning and
946
+ constraint based analysis of infractions causes and summarisation of driver behaviour with NLP techniques.
947
+ Regarding applications concerned with the HoS regulation, many approaches have been developed that aim to solve
948
+ route planning problems under these rules while minimising transportation costs (Mbiydzenyuy 2015, Omelianenko
949
+ et al. 2019, Goel 2018, Goel and Irnich 2017). Nonetheless, the authors have not found works that extract insights that
950
+ can be useful for experts in analysing and understanding driver activities from a legal perspective.
951
+ As for driver behaviour modelling from tachograph data, proposals like (Zhou and Zhang 2019) employ data mining
952
+ techniques to categorise truck drivers and analyse dangerous tendencies. Their approach is similar to ours in that
953
+ clusters are manually studied and labelled. However, PCA for dimensionality reduction and DBSCAN for clustering
954
+ are used directly instead, because their data does not contain categorical variables.
955
+ Lastly, word embedding techniques like Paragraph Vector models have previously been applied to non-textual data like
956
+ web user activities (Tagami et al. 2015) and server logs (Mimura and Tanaka 2018) as a way to transform sequential
957
+ data of variable length into fixed-dimensional data. Similarly, although oriented to process mining applications,
958
+ the trace2vec model proposed in (De Koninck et al. 2018) uses embedding techniques for discovery, monitoring and
959
+ clustering of sequences of activities. Nevertheless, to the authors' knowledge there has been no prior research
960
+ with tachograph logs.
961
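The common thread of these embedding approaches is mapping variable-length activity sequences to fixed-dimensional vectors. Paragraph Vector learns such representations from data; as a minimal stand-in that only illustrates the shape of the transformation, a normalized activity-count vector performs the same variable-to-fixed-length mapping (the vocabulary and sequences below are hypothetical):

```python
# A fixed activity vocabulary defines the vector dimension
VOCAB = ("Driving", "Rest", "B_T2", "B_T3", "Other")

def to_fixed_vector(sequence):
    """Map a variable-length activity sequence to a len(VOCAB)-dim vector."""
    counts = [sequence.count(w) for w in VOCAB]
    total = sum(counts) or 1                  # guard against empty sequences
    return [c / total for c in counts]

short_seq = to_fixed_vector(["Driving", "Rest"])
long_seq = to_fixed_vector(["Driving"] * 6 + ["B_T2", "B_T3", "Rest", "Rest"])
# Same dimension regardless of the input length
print(len(short_seq) == len(long_seq) == len(VOCAB))  # prints True
```

Unlike this count-based sketch, a learned Paragraph Vector additionally captures ordering context, which is why we use it for the profile summaries.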
+ 7 Conclusion
963
+ We have presented a novel planning application that brings the worlds of Data Analytics, IoT and Automated Planning
964
+ and Scheduling together. The approach provides support to experts on the task of interpreting what drivers are or have
965
+ been doing by recognising and summarising their activity recorded in an event log.
966
970
+ Using as a basis our prior work in driver activity recognition, the main contributions presented in this paper are an
971
+ infringement analysis process with a planning and constraint based approach, the summarisation of temporal activity
972
+ logs using word embeddings and clustering, and the creation of driver profiles based on such summaries. The overall
973
+ system provides a human readable summary of the driver behaviour under the HoS regulation while explaining
974
+ infractions and their root causes.
975
+ Regarding future work, it is worth noting that the main interest and the ultimate goal of the company is to build an
976
+ intelligent assistant to provide decision support services to both drivers and companies' decision makers. This is a
977
+ research direction aligned with the concept of assistive interaction (Freedman and Zilberstein 2017), that advocates
978
+ for the integration of plan recognition and planning. In this way, the recognition of the driver's intent is a prior stage
979
+ needed to respond with a generated plan adapted to the currently recognised task.
980
+ For our next steps we intend to enrich the driver profiling model by adding non-tachograph data about the transport service,
981
+ like type of vehicle and cargo. Additionally, we are focused on integrating descriptive support into the assistant, being
982
+ able to suggest to drivers plans of action in compliance with the HoS regulation, considering preference patterns
983
+ extracted from previous personal behaviour.
984
+ References
985
+ Baldini, G., Sportiello, L., Chiaramello, M. and Mahieu, V.: 2018, Regulated applications for the road transportation
986
+ infrastructure: The case study of the smart tachograph in the european union, International Journal of Critical
987
+ Infrastructure Protection 21, 3–21.
988
+ Blei, D. M., Ng, A. Y. and Jordan, M. I.: 2003, Latent dirichlet allocation, Journal of machine Learning research
989
+ 3(Jan), 993–1022.
990
+ Campello, R. J. G. B., Moulavi, D. and Sander, J.: 2013, Density-based clustering based on hierarchical density
991
+ estimates, in J. Pei, V. S. Tseng, L. Cao, H. Motoda and G. Xu (eds), Advances in Knowledge Discovery and Data
992
+ Mining, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 160–172.
993
+ De Koninck, P., vanden Broucke, S. and De Weerdt, J.: 2018, act2vec, trace2vec, log2vec, and model2vec: Repre-
994
+ sentation learning for business processes, in M. Weske, M. Montali, I. Weber and J. vom Brocke (eds), Business
995
+ Process Management, Springer International Publishing, Cham, pp. 305–321.
996
+ Fernandez-Olivares, J. and Perez, R.: 2020, Driver activity recognition by means of temporal htn planning, Proceed-
997
+ ings of the International Conference on Automated Planning and Scheduling 30(1), 375–383.
998
+ URL: https://ojs.aaai.org/index.php/ICAPS/article/view/6683
999
+ Fraley, C. and Raftery, A. E.: 2002, Model-based clustering, discriminant analysis, and density estimation, Journal of
1000
+ the American statistical Association 97(458), 611–631.
1001
+ Freedman, R. G. and Zilberstein, S.: 2017, Integration of planning with recognition for responsive interaction using
1002
+ classical planners, Thirty-First AAAI Conference on Artificial Intelligence.
1003
+ Ghallab, M., Nau, D. and Traverso, P.: 2016, Automated Planning and Acting, 1st edn, Cambridge University Press,
1004
+ USA.
1005
+ Goel, A.: 2018, Legal aspects in road transport optimization in europe, Transportation research part E: logistics and
1006
+ transportation review 114, 144–162.
1007
+ Goel, A. and Irnich, S.: 2017, An exact method for vehicle routing and truck driver scheduling problems, Transporta-
1008
+ tion Science 51(2), 737–754.
1009
+ Goel, A. and Vidal, T.: 2013, Hours of service regulations in road freight transport: An optimization-based interna-
1010
+ tional assessment, Transportation science 48(3), 391–412.
1011
+ Knuth, D. E.: 1968, Semantics of context-free languages, Mathematical systems theory 2(2), 127–145.
1012
+ Le, Q. and Mikolov, T.: 2014, Distributed representations of sentences and documents, in E. P. Xing and T. Jebara
1013
+ (eds), Proceedings of the 31st International Conference on Machine Learning, Vol. 32 of Proceedings of Machine
1014
+ Learning Research, PMLR, Bejing, China, pp. 1188–1196.
1015
+ URL: https://proceedings.mlr.press/v32/le14.html
1016
+ Mbiydzenyuy, G.: 2015, Arrival times with hours of service regulations for truck drivers-tracks and gaps from current
1017
+ research, 2015 IEEE 18th International Conference on Intelligent Transportation Systems, pp. 2631–2636.
1018
+ Meyer, C. M.: 2011, European Legislation on Driving and Working Hours in Road Transportation, in C. M. Meyer
1019
+ (ed.), Vehicle Routing under Consideration of Driving and Working Hours: A Distributed Decision Making Per-
1020
+ spective, Gabler, Wiesbaden, pp. 9–24.
1021
+ URL: https://doi.org/10.1007/978-3-8349-6732-9_2
1022
1026
+ Mimura, M. and Tanaka, H.: 2018, Leaving all proxy server logs to paragraph vector, Journal of Information Process-
1027
+ ing 26, 804–812.
1028
+ Omelianenko, S., Kondratenko, Y., Kondratenko, G. and Sidenko, I.: 2019, Advanced system of planning and opti-
1029
+ mization of cargo delivery and its iot application, 2019 3rd International Conference on Advanced Information and
1030
+ Communications Technologies (AICT), IEEE, pp. 302–307.
1031
+ Tagami, Y., Kobayashi, H., Ono, S. and Tajima, A.: 2015, Modeling user activities on the web using paragraph vector,
1032
+ Proceedings of the 24th International Conference on World Wide Web, pp. 125–126.
1033
+ Vellido-Expósito, I., Fernández-Olivares, J., Pérez, R. and Castillo, L.: 2022, Analyzing driver behavior compli-
1034
+ ance under hos regulations, Proceedings of the 8th International Conference on Vehicle Technology and Intelligent
1035
+ Transport Systems - VEHITS, INSTICC, SciTePress, pp. 463–470.
1036
+ Zhou, T. and Zhang, J.: 2019, Analysis of commercial truck drivers’ potentially dangerous driving behaviors based on
1037
+ 11-month digital tachograph data and multilevel modeling approach, Accident Analysis & Prevention 132, 105256.
1038
6tE4T4oBgHgl3EQfcQy_/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
79E4T4oBgHgl3EQf2g20/content/2301.05299v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c96752c05f34fd83ed72891e4fd3b21d456a98e64058dc8ad7bf96145b8e702e
3
+ size 16460806
79E4T4oBgHgl3EQf2g20/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:95fb6d2192335b8ffe4515b7cf4f06a8993aac36273b39e7422e30338e87f824
3
+ size 161435
7dA0T4oBgHgl3EQfOf8M/content/2301.02160v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:503990481c44647aaed46638d5f6d6451ce4a1d3426b2a5e3bde4befc5509c8e
3
+ size 3880557
7dA0T4oBgHgl3EQfOf8M/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:21d1ae044e7c1e1b858bf4905b18122683b55b7ada041f5b271e982dec3b7954
3
+ size 1966125
7dA0T4oBgHgl3EQfOf8M/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e491e7bb704006b9ad1269f4aed20c79b6fafd9a3a54e742eb379dec838fae10
3
+ size 82373
8tE1T4oBgHgl3EQfngT9/content/2301.03311v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5df75189cdc333e6ccbe6e8ffa0a8b6556525450254f86e40eed594ae7e5161c
3
+ size 1149380
8tE1T4oBgHgl3EQfngT9/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:444dd7dc03aff167d98bd321dbe9a9197c3f75125a95794e516edb02e8c4fb2b
3
+ size 8192045
8tE1T4oBgHgl3EQfngT9/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bc519b6af55a0bf2582c051cb007c63021f00d249399a60deff32c396f14b614
3
+ size 263207
B9FKT4oBgHgl3EQfXi52/content/tmp_files/2301.11795v1.pdf.txt ADDED
@@ -0,0 +1,4110 @@
1
+ arXiv:2301.11795v1 [math.AP] 27 Jan 2023
2
+ Higher regularity for weak solutions
3
+ to degenerate parabolic problems
4
+ Andrea Gentile - Antonia Passarelli di Napoli∗
5
+ Dipartimento di Matematica e Applicazioni “R. Caccioppoli”
6
+ Università di Napoli “Federico II”, via Cintia - 80126 Napoli
7
+ e-mail: andrea.gentile@unina.it,antpassa@unina.it
8
+ January 30, 2023
9
+ Abstract
10
+ In this paper, we study the regularity of weak solutions to the following strongly degen-
11
+ erate parabolic equation
12
+ u_t − div( (|Du| − 1)_+^{p−1} Du/|Du| ) = f   in Ω_T,
21
+ where Ω is a bounded domain in Rn for n ≥ 2, p ≥ 2 and ( · )+ stands for the positive
22
+ part. We prove the higher differentiability of a nonlinear function of the spatial gradient
23
+ of the weak solutions, assuming only that f ∈ L²_loc(Ω_T). This allows us to establish the
25
+ higher integrability of the spatial gradient under the same minimal requirement on the
26
+ datum f.
27
+ Key words. Widely degenerate problems. Second order regularity. Higher integrability.
28
+ AMS Classification. 35B45, 35B65, 35D30, 35K10, 35K65
29
+ 1 Introduction
31
+ In this paper, we study the regularity properties of weak solutions u : ΩT → R to the following
32
+ parabolic equation
33
+ u_t − div( (|Du| − 1)_+^{p−1} Du/|Du| ) = f   in Ω_T = Ω × (0, T),   (1.1)
43
+ which appears in gas filtration problems taking into account the initial pressure gradient. For a precise
44
+ description of this motivation we refer to [1] and [3, Section 1.1].
45
+ The main feature of this equation is that it possesses a wide degeneracy, coming from the fact that
46
+ its modulus of ellipticity vanishes at all points where |Du| ≤ 1 and hence its principal part behaves
47
+ like a p-Laplacian operator only at infinity.
48
+ In this paper we address two interrelated aspects of the regularity theory for solutions to parabolic
49
+ problems, namely the higher differentiability and the higher integrability of the weak solutions to
50
+ (1.1), with the main aim of weakening the assumption on the datum f with respect to the available
51
+ literature.
52
+ ∗Aknowledgments. The work of the authors is supported by GNAMPA (Gruppo Nazionale per l’Analisi
53
+ Matematica, la Probabilità e le loro Applicazioni) of INdAM (Istituto Nazionale di Alta Matematica). The
54
+ authors have also been supported by the Università degli Studi di Napoli “Federico II” through the project
55
+ FRA-000022-ALTRI-CDA-752021-FRA-PASSARELLI.
56
+ 1
57
+
58
+ 2
59
+ These questions have been explored in the case of non-degenerate parabolic problems with quadratic
60
+ growth by Campanato in [9], by Duzaar et al. in [13] in the case of superquadratic growth, while Scheven in
61
+ [17] faced the subquadratic growth case. In the above mentioned papers, the problem has been addressed
62
+ either for homogeneous equations or for sufficiently regular data. It is worth mentioning
63
+ that the higher integrability of the gradient of the solution is achieved through an interpolation
64
+ argument, once its higher differentiability is established.
65
+ This strategy has proved successful also for degenerate equations as in (1.1). Indeed the higher
66
+ integrability of the spatial gradient of weak solutions to equation (1.1) has been proven in [3], under
67
+ suitable assumptions on the datum f in the scale of Sobolev spaces.
68
+ We’d like to recall that a common feature for nonlinear problems with growth rate p > 2 is that the
69
+ higher differentiability is proven for a nonlinear expression of the gradient which takes into account
70
+ the growth of the principal part of the equation.
71
+ Indeed, already for the non degenerate p-Laplace equation, the higher differentiability refers to the
72
+ function V_p(Du) = (1 + |Du|²)^{(p−2)/4} Du. In case of widely degenerate problems, this phenomenon
76
+ persists, and higher differentiability results, both for the elliptic and the parabolic problems, hold true
77
+ for the function H_{p/2}(Du) = (|Du| − 1)_+^{p/2} Du/|Du|. It is worth noticing that, as can be expected, this
84
+ function of the gradient gives no information on the second order regularity of the solutions in the set
85
+ where the equation degenerates. Actually, since every 1-Lipschitz continuous function is a solution to
86
+ the elliptic equation
87
+ div(H_{p−1}(Du)) = 0,
88
+ where H_{p−1}(Du) = (|Du| − 1)_+^{p−1} Du/|Du|, no more than Lipschitz regularity can be expected.
92
+ Moreover, it is well known that in case of degenerate problems (already for the degenerate p-Laplace
93
+ equation, with p > 2) a Sobolev regularity is required for the datum f in order to get the higher
94
+ differentiability of the solutions (see, for example [8] for elliptic and [3] for parabolic equations).
95
+ Actually, the sharp assumption for the datum in the elliptic setting has been determined in [8] as a
96
+ fractional Sobolev regularity suitably related to the growth exponent p and the dimension n.
97
+ The main aim of this paper is to show that without assuming any kind of Sobolev regularity for the
98
+ datum, but assuming only f ∈ L2, we are still able to obtain higher differentiability for the weak
99
+ solutions but outside a set larger than the degeneracy set of the problem. It is worth mentioning that,
100
+ while for the p-Laplace equation the degeneracy appears for p > 2, here, even in case p = 2, under
101
+ a L2 integrability assumption on the datum f, the local W 2,2 regularity of the solutions cannot be
102
+ obtained.
103
+ Actually, we shall prove the following
104
+ Theorem 1.1. Let n ≥ 2, p ≥ 2 and f ∈ L²_loc(Ω_T). Moreover, let us assume that
+ u ∈ C⁰(0, T; L²(Ω)) ∩ L^p_loc(0, T; W^{1,p}_loc(Ω))
+ is a weak solution to (1.1). Then, for any δ ∈ (0, 1), we have
+ G_δ((|Du| − 1 − δ)_+) ∈ L²_loc(0, T; W^{1,2}_loc(Ω)),
+ where
+ G_δ(t) := ∫₀ᵗ s (s + δ)^{(p−2)/2} / √(1 + δ + s²) ds,   for every t ≥ 0.
+ Moreover the following estimate
+ ∫_{Q_{R/16}} |D[G_δ((|Du| − δ − 1)_+)]|² dz ≤ (c(n, p)/(R² δ²)) [ ∫_{Q_R} (|Du|^p + 1) dz + (1/δ^p) ∫_{Q_R} |f|² dz ],   (1.2)
+ holds for any R > 0 such that Q_R = Q_R(z₀) ⋐ Ω_T.
161
+
162
+ 3
163
+ As already mentioned, the weak solutions of (1.1) are not twice differentiable, and hence it is not
164
+ possible in general to differentiate the equation to estimate the second derivative of the solutions. We
165
+ overcome this difficulty by introducing a suitable family of approximating problems whose solutions
166
+ are regular enough by the standard theory ([11]). The major effort in the proof of the previous Theorem
167
+ is to establish suitable estimates for the solutions of the regularized problems that are uniform with
168
+ respect to the approximation parameter. Next, we take advantage of these uniform estimates
169
+ in a comparison argument aimed at bounding the difference quotient of a suitable nonlinear
170
+ function of the gradient of the solution that vanishes in the set { |Du| ≤ 1 + δ }, with δ > 0.
171
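For concreteness, the difference quotient mentioned here is the standard finite difference operator; we fix the notation as commonly used in the higher differentiability literature (stated as an assumption rather than quoted from the paper's own preliminaries):

\[
\tau_{s,h} F(x,t) := F(x + h e_s, t) - F(x,t),
\qquad
\Delta_{s,h} F(x,t) := \frac{\tau_{s,h} F(x,t)}{h},
\]

where $e_s$ is the $s$-th vector of the canonical basis of $\mathbb{R}^n$ and $h$ is small enough that the translation stays inside the domain; uniform-in-$h$ $L^2$ bounds on $\Delta_{s,h}$ of the nonlinear function of the gradient then yield the weak differentiability claimed in Theorem 1.1.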
+ Roughly speaking, due to the weakness of our assumption on the datum, we only get the higher
172
+ differentiability of a nonlinear function of the gradient of the solutions that vanishes in a set which is
173
+ larger with respect to that of the degeneracy of the problem. This is quite predictable, since the same
174
+ kind of phenomenon occurs in the setting of widely degenerate elliptic problems (see, for example
175
+ [10]).
In any case, as a consequence of the higher differentiability result in Theorem 1.1, we establish a higher integrability result for the spatial gradient of the solutions to equation (1.1), namely the following

Theorem 1.2. Under the assumptions of Theorem 1.1, we have
$$Du \in L^{p+\frac4n}_{loc}(\Omega_T),$$
with the following estimate
$$\int_{Q_{\frac\rho2}} |Du|^{p+\frac4n}\,dz \;\le\; \frac{c(n,p)}{\rho^{\frac{2(n+2)}{n}}} \left( \int_{Q_{2\rho}} \big(1+|Du|^p+|f|^2\big)\,dz \right)^{\frac2n+1}, \qquad (1.3)$$
for every parabolic cylinder $Q_{2\rho}(z_0) \Subset \Omega_T$, with a constant $c=c(n,p)$.
The proof of the previous theorem consists in an interpolation argument aimed at establishing an estimate for the $L^{p+\frac4n}$ norm of the gradient of the solutions to the approximating problems that is preserved in the passage to the limit.
We conclude by mentioning that the elliptic version of our equation naturally arises in optimal transport problems with congestion effects, and the regularity properties of its weak solutions have been widely investigated (see e.g. [2, 4, 6, 8]). Moreover, we would like to stress that, for the sake of clarity, we confine ourselves to equation (1.1), but we believe that our techniques apply as well to a general class of equations with a widely degenerate structure.
2 Notations and preliminaries

In this paper we shall denote by $C$ or $c$ a general positive constant that may vary on different occasions. Relevant dependencies on parameters will be properly stressed using parentheses or subscripts. The norm we use on $\mathbb{R}^n$ will be the standard Euclidean one and it will be denoted by $|\cdot|$. In particular, for vectors $\xi,\eta\in\mathbb{R}^n$, we write $\langle\xi,\eta\rangle$ for the usual inner product and $|\xi| := \langle\xi,\xi\rangle^{\frac12}$ for the corresponding Euclidean norm.
For points in space-time, we will use abbreviations like $z=(x,t)$ or $z_0=(x_0,t_0)$, for spatial variables $x,x_0\in\mathbb{R}^n$ and times $t,t_0\in\mathbb{R}$. We also denote by $B(x_0,\rho) = B_\rho(x_0) = \{\,x\in\mathbb{R}^n : |x-x_0|<\rho\,\}$ the open ball with radius $\rho>0$ and center $x_0\in\mathbb{R}^n$; when not important, or clear from the context, we shall omit to indicate the center, denoting $B_\rho \equiv B(x_0,\rho)$. Unless otherwise stated, different balls in the same context will have the same center. Moreover, we use the notation
$$Q_\rho(z_0) := B_\rho(x_0)\times\big(t_0-\rho^2,\,t_0\big), \qquad z_0=(x_0,t_0)\in\mathbb{R}^n\times\mathbb{R},\quad \rho>0,$$
for the backward parabolic cylinder with vertex $(x_0,t_0)$ and width $\rho$. We shall sometimes omit the dependence on the vertex when all the cylinders occurring share the same vertex. Finally, for a cylinder $Q = A\times(t_1,t_2)$, where $A\subset\mathbb{R}^n$ and $t_1<t_2$, we denote by
$$\partial_{par}Q := \big(A\times\{t_1\}\big)\cup\big(\partial A\times[t_1,t_2]\big)$$
the usual parabolic boundary of $Q$, which is nothing but the standard topological boundary without the upper cap $A\times\{t_2\}$.
We now recall some tools that will be useful to prove our results.
For the auxiliary function $H_\lambda:\mathbb{R}^n\to\mathbb{R}^n$ defined as
$$H_\lambda(\xi) := \begin{cases} (|\xi|-1)_+^{\lambda}\,\dfrac{\xi}{|\xi|} & \text{if } \xi\in\mathbb{R}^n\setminus\{0\},\\[4pt] 0 & \text{if } \xi=0, \end{cases} \qquad (2.1)$$
where $\lambda>0$ is a parameter, we record the following estimates (see [7, Lemma 4.1]):

Lemma 2.1. If $2\le p<\infty$, then for every $\xi,\eta\in\mathbb{R}^n$ it holds that
$$\langle H_{p-1}(\xi)-H_{p-1}(\eta),\,\xi-\eta\rangle \;\ge\; \frac{4}{p^2}\,\big|H_{\frac p2}(\xi)-H_{\frac p2}(\eta)\big|^2,$$
$$\big|H_{p-1}(\xi)-H_{p-1}(\eta)\big| \;\le\; (p-1)\Big( \big|H_{\frac p2}(\xi)\big|^{\frac{p-2}{p}} + \big|H_{\frac p2}(\eta)\big|^{\frac{p-2}{p}} \Big)\,\big|H_{\frac p2}(\xi)-H_{\frac p2}(\eta)\big|.$$
We also record the following estimates (see [4, Lemma 2.8]):

Lemma 2.2. Let $\xi,\eta\in\mathbb{R}^k$ with $|\xi|>1$. Then we have
$$\big|H_{p-1}(\xi)-H_{p-1}(\eta)\big| \;\le\; c(p)\,\frac{\big((|\xi|-1)+(|\eta|-1)_+\big)^{p-1}}{|\xi|-1}\,|\xi-\eta|$$
and
$$\langle H_{p-1}(\eta)-H_{p-1}(\xi),\,\eta-\xi\rangle \;\ge\; \frac{\min\{1,p-1\}}{2^{p+1}}\,\frac{(|\xi|-1)^p}{|\xi|\,\big(|\xi|+|\eta|\big)}\,|\eta-\xi|^2.$$
Definition 2.3. With the use of (2.1), a function $u\in C^0\big(0,T;L^2(\Omega)\big)\cap L^p\big(0,T;W^{1,p}(\Omega)\big)$ is a weak solution of equation (1.1) if
$$\int_{\Omega_T} \big( u\,\partial_t\varphi - \langle H_{p-1}(Du),D\varphi\rangle \big)\,dz = -\int_{\Omega_T} f\varphi\,dz \qquad (2.2)$$
for every $\varphi\in C^\infty_0(\Omega_T)$.
In the following, we shall also use the well known auxiliary function $V_p:\mathbb{R}^n\to\mathbb{R}^n$ defined as
$$V_p(\xi) := \big(1+|\xi|^2\big)^{\frac{p-2}{4}}\xi,$$
where $p\ge2$. We have the following result.

Lemma 2.4. For every $\xi,\eta\in\mathbb{R}^n$ there hold
$$\frac{1}{c_1(p)}\,\big|V_p(\xi)-V_p(\eta)\big|^2 \;\le\; \big(1+|\xi|^2+|\eta|^2\big)^{\frac{p-2}{2}}|\xi-\eta|^2 \;\le\; c_1(p)\,\Big\langle \big(1+|\xi|^2\big)^{\frac{p-2}{2}}\xi - \big(1+|\eta|^2\big)^{\frac{p-2}{2}}\eta,\; \xi-\eta \Big\rangle.$$
We refer to [16, Chapter 12] or to [15, Lemma 9.2] for a proof of these fundamental inequalities.
For further needs, we also record the following interpolation inequality, whose proof can be found in [12, Proposition 3.1].

Lemma 2.5. Assume that the function $v: Q_r(z_0)\cup\partial_{par}Q_r(z_0)\to\mathbb{R}$ satisfies
$$v\in L^\infty\big(t_0-r^2,t_0;L^q(B_r(x_0))\big)\cap L^p\big(t_0-r^2,t_0;W^{1,p}_0(B_r(x_0))\big)$$
for some exponents $1\le p,q<\infty$. Then the estimate
$$\int_{Q_r(z_0)} |v|^{p+\frac{pq}{n}}\,dz \;\le\; c\left( \sup_{s\in(t_0-r^2,t_0)} \int_{B_r(x_0)} |v(x,s)|^q\,dx \right)^{\frac pn}\int_{Q_r(z_0)} |Dv|^p\,dz$$
holds true for a positive constant $c$ depending at most on $n$, $p$ and $q$.
2.1 Difference quotients

We recall here the definition and some elementary properties of the difference quotients (see, for example, [15, Chapter 8]).

Definition 2.6. For every function $F:\mathbb{R}^n\to\mathbb{R}^N$ the finite difference operator in the direction $x_s$ is defined by
$$\tau_{s,h}F(x) = F(x+he_s)-F(x),$$
where $h\in\mathbb{R}$, $e_s$ is the unit vector in the direction $x_s$ and $s\in\{1,\dots,n\}$.
The difference quotient of $F$ with respect to $x_s$ is defined for $h\in\mathbb{R}\setminus\{0\}$ as
$$\Delta_{s,h}F(x) = \frac{\tau_{s,h}F(x)}{h}.$$
We shall omit the index $s$ when it is not necessary, and simply write $\tau_hF(x) = F(x+h)-F(x)$ and
$$|\Delta_hF(x)| = \frac{|\tau_hF(x)|}{|h|}$$
for $h\in\mathbb{R}^n$.
Proposition 2.7. Let $F\in W^{1,p}(\Omega)$, with $p\ge1$, and let us set
$$\Omega_{|h|} := \{\,x\in\Omega : \operatorname{dist}(x,\partial\Omega)>|h|\,\}.$$
Then:
(a) $\Delta_hF\in W^{1,p}\big(\Omega_{|h|}\big)$ and
$$D_i(\Delta_hF) = \Delta_h(D_iF), \qquad \text{for every } i\in\{1,\dots,n\}.$$
(b) If at least one of the functions $F$ or $G$ has support contained in $\Omega_{|h|}$, then
$$\int_\Omega F\,\Delta_hG\,dx = -\int_\Omega G\,\Delta_{-h}F\,dx.$$
(c) We have
$$\Delta_h(FG)(x) = F(x+he_s)\,\Delta_hG(x) + G(x)\,\Delta_hF(x).$$
The next result about the finite difference operator is a kind of integral version of the Lagrange mean value theorem (see [15, Lemma 8.1]).

Lemma 2.8. If $0<\rho<R$, $|h|<\frac{R-\rho}{2}$, $1<p<+\infty$, and $F\in W^{1,p}\big(B_R,\mathbb{R}^N\big)$, then
$$\int_{B_\rho} |\tau_hF(x)|^p\,dx \;\le\; c_p(n)\,|h|^p\int_{B_R} |DF(x)|^p\,dx.$$
Moreover
$$\int_{B_\rho} |F(x+he_s)|^p\,dx \;\le\; \int_{B_R} |F(x)|^p\,dx.$$
We conclude this section with the following fundamental result, whose proof can be found in [15, Lemma 8.2]:

Lemma 2.9. Let $F:\mathbb{R}^n\to\mathbb{R}^N$, $F\in L^p\big(B_R,\mathbb{R}^N\big)$ with $1<p<+\infty$. Suppose that there exist $\rho\in(0,R)$ and a constant $M>0$ such that
$$\sum_{s=1}^n \int_{B_\rho} |\tau_{s,h}F(x)|^p\,dx \;\le\; M^p|h|^p$$
for every $h$ with $|h|<\frac{R-\rho}{2}$. Then $F\in W^{1,p}\big(B_\rho,\mathbb{R}^N\big)$ and
$$\|DF\|_{L^p(B_\rho)}\le M.$$
Moreover
$$\Delta_{s,h}F \to D_sF \qquad \text{strongly in } L^p_{loc}(B_R), \text{ as } h\to0,$$
for each $s\in\{1,\dots,n\}$.
2.2 Some auxiliary functions and related algebraic inequalities

In this section we introduce some auxiliary functions and list some of their properties, which will be used in what follows.
For any $k>1$ and for $s\in[0,+\infty)$, let us consider the function
$$g_k(s) = \frac{s^2}{k+s^2}, \qquad (2.3)$$
for which we record the following

Lemma 2.10. Let $k>1$, and let $g_k$ be the function defined by (2.3). Then for every $A,B\ge0$ the following Young-type inequality
$$A\cdot B\,\big[s\cdot g_k'\big((s-k)_+\big)\big] \;\le\; 2\sqrt{2}\,k\,\Big( \alpha A^2 g_k\big((s-k)_+\big) + \alpha\sigma A^2 + c_\alpha B^2 \Big) \qquad (2.4)$$
holds for all parameters $\alpha,\sigma>0$, with a constant $c_\alpha$ independent of $\sigma$. Moreover, there exists a constant $c_k>0$, depending on $k$, such that
$$s\,g_k'\big((s^2-k)_+\big) \;\le\; c_k, \qquad \forall s\ge0. \qquad (2.5)$$
Proof. Since
$$g_k'(s) = \frac{2ks}{(k+s^2)^2}, \qquad (2.6)$$
both conclusions trivially hold for $s\le\sqrt k$. Now assume that $s>\sqrt k$ and note that Young's inequality implies
$$A\cdot B\,\big[s\cdot g_k'\big((s-k)_+\big)\big] = A\cdot B\cdot s\cdot g_k'\big((s-k)_+\big)\,\frac{\big[\sigma+(s-k)_+\big]^{\frac12}}{\big[\sigma+(s-k)_+\big]^{\frac12}}$$
$$\le\; \alpha A^2\,s\,g_k'\big((s-k)_+\big)\big[\sigma+(s-k)_+\big] + c_\alpha\,\frac{B^2\,s\,g_k'\big((s-k)_+\big)}{\sigma+(s-k)_+}$$
$$=\; \alpha A^2\,s\,g_k'\big((s-k)_+\big)(s-k)_+ + \alpha\sigma A^2\,s\,g_k'\big((s-k)_+\big) + c_\alpha\,\frac{B^2\,s\,g_k'\big((s-k)_+\big)}{\sigma+(s-k)_+}$$
$$\le\; \alpha A^2\,\frac{2ks\,(s-k)_+^2}{\big[k+(s-k)_+^2\big]^2} + \alpha\sigma A^2\,\frac{2ks\,(s-k)_+}{\big[k+(s-k)_+^2\big]^2} + c_\alpha B^2\,\frac{2ks}{\big[k+(s-k)_+^2\big]^2}\cdot\frac{(s-k)_+}{\sigma+(s-k)_+}, \qquad (2.7)$$
where we used the explicit expression (2.6) of $g_k'$. Recalling (2.3) and since $\frac{t}{k+t^2}\le1$, from (2.7) we deduce
$$A\cdot B\,\big[s\cdot g_k'\big((s-k)_+\big)\big] \;\le\; \alpha A^2\,\frac{2ks}{k+(s-k)_+^2}\,g_k\big((s-k)_+\big) + \alpha\sigma A^2\,\frac{2ks}{k+(s-k)_+^2} + c_\alpha B^2\,\frac{2ks}{k+(s-k)_+^2}. \qquad (2.8)$$
Setting $h(s) = \dfrac{s}{k+(s-k)_+^2}$, we can easily check that
$$h(k)=1, \qquad \lim_{s\to+\infty}h(s)=0, \qquad \max_{s\in[k,+\infty)}h(s) = h\big(\sqrt{k^2+k}\,\big) = \frac12\left(1+\sqrt{1+\frac1k}\right) < \sqrt2,$$
and so
$$\frac{2ks}{k+(s-k)_+^2} \;\le\; 2\sqrt2\,k \qquad \forall s>k.$$
Inserting this in (2.8), we get (2.4).
In order to prove (2.5), let us notice that, recalling (2.6), we have
$$s\,g_k'\big((s^2-k)_+\big) = \frac{2ks\,(s^2-k)_+}{\big[k+(s^2-k)_+^2\big]^2}.$$
So, since the function $s\,g_k'\big((s^2-k)_+\big)$ is continuous on $\big\{\,s\ge0 : s^2>k\,\big\} = \big(\sqrt k,+\infty\big)$ and
$$\lim_{s\to+\infty} \frac{2ks\,(s^2-k)_+}{\big[k+(s^2-k)_+^2\big]^2} = 0,$$
there exists a constant $c_k>0$ such that
$$s\,g_k'\big((s^2-k)_+\big) \le c_k \qquad \text{for every } s\ge0,$$
which is the conclusion.
For any $\delta>0$, let us define
$$G_\delta(t) := \int_0^t \frac{s\,(s+\delta)^{\frac{p-2}{2}}}{\sqrt{1+\delta+s^2}}\,ds, \qquad \text{for } t\ge0, \qquad (2.9)$$
and observe that
$$G_\delta'(t) = \frac{t\,(t+\delta)^{\frac{p-2}{2}}}{\sqrt{1+\delta+t^2}}. \qquad (2.10)$$
The next lemma relates the function $G_\delta(|\xi|)$ to $H_{\frac p2}(\xi)$.

Lemma 2.11. Let $G_\delta$ be the function defined by (2.9) and let $H_{\frac p2}$ be the one defined in (2.1) with $\lambda=\frac p2$. Then we have
$$\Big|G_\delta\big((|\xi|-\delta-1)_+\big) - G_\delta\big((|\eta|-\delta-1)_+\big)\Big|^2 \;\le\; c_p\,\big|H_{\frac p2}(\xi)-H_{\frac p2}(\eta)\big|^2 \qquad (2.11)$$
for any $\xi,\eta\in\mathbb{R}^n$.
Proof. If $|\xi|<1+\delta$ and $|\eta|<1+\delta$ there is nothing to prove. So we will assume that $|\xi|>1+\delta$, and without loss of generality we may suppose that $|\eta|\le|\xi|$. Since $G_\delta$ is increasing, we have
$$\Big|G_\delta(|\xi|-1-\delta) - G_\delta\big((|\eta|-1-\delta)_+\big)\Big| = G_\delta(|\xi|-1-\delta) - G_\delta\big((|\eta|-1-\delta)_+\big) = \int_{(|\eta|-1-\delta)_+}^{|\xi|-1-\delta} \frac{s\,(s+\delta)^{\frac{p-2}{2}}}{\sqrt{1+\delta+s^2}}\,ds$$
$$\le\; \int_{(|\eta|-1-\delta)_+}^{|\xi|-1-\delta} (s+\delta)^{\frac{p-2}{2}}\,ds \;=\; \frac2p\left( (|\xi|-1)^{\frac p2} - \big((|\eta|-\delta-1)_+ + \delta\big)^{\frac p2} \right).$$
Now, it can be easily checked that
$$(|\xi|-1)^{\frac p2} - \big((|\eta|-\delta-1)_+ + \delta\big)^{\frac p2} = \begin{cases} (|\xi|-1)^{\frac p2} - \delta^{\frac p2} & \text{if } |\xi|>\delta+1 \text{ and } |\eta|\le\delta+1,\\[4pt] (|\xi|-1)^{\frac p2} - (|\eta|-1)^{\frac p2} & \text{if } |\xi|>\delta+1 \text{ and } |\eta|>\delta+1. \end{cases}$$
In the first case, we have
$$\Big|(|\xi|-1)^{\frac p2} - \delta^{\frac p2}\Big| = (|\xi|-1)^{\frac p2} - \delta^{\frac p2} \;\le\; (|\xi|-1)^{\frac p2} - (|\eta|-1)_+^{\frac p2} = \Big|H_{\frac p2}(\xi)\Big| - \Big|H_{\frac p2}(\eta)\Big| \;\le\; \Big|H_{\frac p2}(\eta)-H_{\frac p2}(\xi)\Big|,$$
while, in the second,
$$(|\xi|-1)^{\frac p2} - \big((|\eta|-\delta-1)_+ + \delta\big)^{\frac p2} = \Big|H_{\frac p2}(\xi)\Big| - \Big|H_{\frac p2}(\eta)\Big| \;\le\; \Big|H_{\frac p2}(\eta)-H_{\frac p2}(\xi)\Big|.$$
Therefore,
$$\Big|G_\delta\big((|\xi|-\delta-1)_+\big) - G_\delta\big((|\eta|-\delta-1)_+\big)\Big|^2 \;\le\; c_p\,\Big|H_{\frac p2}(\xi)-H_{\frac p2}(\eta)\Big|^2$$
for every $\xi,\eta\in\mathbb{R}^n$, which is (2.11).
Arguing as in [14, Lemma 2.1], we prove the following.

Lemma 2.12. Let $0<\delta\le1$ and $p\ge2$. Then the inequalities
$$c_{p,\delta}\,(t+\delta)^{\frac p2} - \tilde c_{p,\delta} \;\le\; G_\delta(t) \;\le\; \frac2p\,(t+\delta)^{\frac p2}$$
hold with constants $\tilde c_{p,\delta}$ and $c_{p,\delta}<\frac2p$ depending on $p$ and $\delta$.
Proof. If $p=2$, one can easily calculate
$$G_\delta(t) = \int_0^t \frac{s}{\sqrt{1+\delta+s^2}}\,ds = \Big[\sqrt{1+\delta+s^2}\,\Big]_0^t = \sqrt{1+\delta+t^2}-\sqrt{1+\delta},$$
from which it immediately follows that
$$\frac12(t+\delta) - \frac12\big(\sqrt{1+\delta}+\delta\big) \;\le\; G_\delta(t) \;\le\; t+\delta.$$
Let $p>2$. The right inequality is a simple consequence of the trivial bound $\frac{s}{\sqrt{1+\delta+s^2}}<1$. For the left inequality we start by observing that
$$\sqrt{1+\delta+s^2}\le\sqrt{1+\delta}+s \quad\Longrightarrow\quad G_\delta(t)\ge\int_0^t \frac{s\,(s+\delta)^{\frac{p-2}{2}}}{\sqrt{1+\delta}+s}\,ds.$$
Now we calculate the integral in the previous formula. By the change of variable $r=\sqrt{1+\delta}+s$, we get
$$\int_0^t \frac{s\,(s+\delta)^{\frac{p-2}{2}}}{\sqrt{1+\delta}+s}\,ds = \int_{\sqrt{1+\delta}}^{t+\sqrt{1+\delta}} \frac{\big(r-\sqrt{1+\delta}\big)\big(r-\sqrt{1+\delta}+\delta\big)^{\frac{p-2}{2}}}{r}\,dr$$
$$= \int_{\sqrt{1+\delta}}^{t+\sqrt{1+\delta}} \big(r-\sqrt{1+\delta}+\delta\big)^{\frac{p-2}{2}}\,dr - \sqrt{1+\delta}\int_{\sqrt{1+\delta}}^{t+\sqrt{1+\delta}} \frac{\big(r-\sqrt{1+\delta}+\delta\big)^{\frac{p-2}{2}}}{r}\,dr$$
$$\ge\; \frac2p\Big[\big(r-\sqrt{1+\delta}+\delta\big)^{\frac p2}\Big]_{\sqrt{1+\delta}}^{t+\sqrt{1+\delta}} - \sqrt{1+\delta}\int_{\sqrt{1+\delta}}^{t+\sqrt{1+\delta}} \big(r-\sqrt{1+\delta}+\delta\big)^{\frac p2-2}\,dr,$$
since $0<\delta\le1$ implies $\delta\le\sqrt{1+\delta}$ and therefore $r-\sqrt{1+\delta}+\delta\le r$. Calculating the last integral in the previous formula, we get
$$\int_0^t \frac{s\,(s+\delta)^{\frac{p-2}{2}}}{\sqrt{1+\delta}+s}\,ds \;\ge\; \frac2p\Big[\big(r-\sqrt{1+\delta}+\delta\big)^{\frac p2}\Big]_{\sqrt{1+\delta}}^{t+\sqrt{1+\delta}} - \frac{2\sqrt{1+\delta}}{p-2}\Big[\big(r-\sqrt{1+\delta}+\delta\big)^{\frac p2-1}\Big]_{\sqrt{1+\delta}}^{t+\sqrt{1+\delta}}$$
$$= \frac2p\Big((t+\delta)^{\frac p2}-\delta^{\frac p2}\Big) - \frac{2\sqrt{1+\delta}}{p-2}\Big((t+\delta)^{\frac p2-1}-\delta^{\frac p2-1}\Big)$$
$$= \frac2p(t+\delta)^{\frac p2} - \frac{2\sqrt{1+\delta}}{p-2}(t+\delta)^{\frac p2-1} + \frac{2\sqrt{1+\delta}}{p-2}\,\delta^{\frac p2-1} - \frac2p\,\delta^{\frac p2}.$$
Therefore the lemma will be proven if there exists a constant $c_{p,\delta}<\frac2p$ such that
$$c_{p,\delta}(t+\delta)^{\frac p2} \;\le\; \frac2p(t+\delta)^{\frac p2} - \frac{2\sqrt{1+\delta}}{p-2}(t+\delta)^{\frac p2-1} + \frac{2\sqrt{1+\delta}}{p-2}\,\delta^{\frac p2-1} - \frac2p\,\delta^{\frac p2},$$
which, setting
$$h(t) = \frac{2\sqrt{1+\delta}}{p-2}(t+\delta)^{\frac p2-1} + \Big(c_{p,\delta}-\frac2p\Big)(t+\delta)^{\frac p2},$$
is equivalent to proving that there exists $c_{p,\delta}$ such that
$$h(t) \;\le\; \frac{2\sqrt{1+\delta}}{p-2}\,\delta^{\frac p2-1} - \frac2p\,\delta^{\frac p2}.$$
It is easy to check that $h(t)$ attains its maximum at $t+\delta = \dfrac{2\sqrt{1+\delta}}{2-pc_{p,\delta}}$, and so
$$h(t) \;\le\; h\left(\frac{2\sqrt{1+\delta}}{2-pc_{p,\delta}}-\delta\right) = \big(2\sqrt{1+\delta}\big)^{\frac p2}\left(\frac{1}{2-pc_{p,\delta}}\right)^{\frac{p-2}{2}}\frac{2}{p(p-2)}.$$
Therefore, to complete the proof it is enough to solve the equation
$$\big(2\sqrt{1+\delta}\big)^{\frac p2}\left(\frac{1}{2-pc_{p,\delta}}\right)^{\frac{p-2}{2}}\frac{2}{p(p-2)} = \frac{2\sqrt{1+\delta}}{p-2}\,\delta^{\frac p2-1} - \frac2p\,\delta^{\frac p2},$$
which is equivalent to
$$\frac{1}{2-pc_{p,\delta}} = \left(\frac{\delta}{2\sqrt{1+\delta}}\right)^{\frac{p}{p-2}}\left(\frac{p\big(\sqrt{1+\delta}-\delta\big)}{\delta}+2\right)^{\frac{2}{p-2}}$$
that, for $0<\delta<1$, admits a unique solution $c_{p,\delta}<\frac2p$.
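Again purely as an illustration of ours, the two bounds of Lemma 2.12 (the upper one, and the explicit lower bound produced in the proof for $p>2$ before it is absorbed into $c_{p,\delta}(t+\delta)^{p/2}-\tilde c_{p,\delta}$) can be verified by quadrature for sample values of $p$, $\delta$ and $t$:

```python
import numpy as np

def G_delta(t, p, d, n=100_000):
    """Trapezoidal approximation of G_delta(t) from (2.9)."""
    s = np.linspace(0.0, t, n)
    f = s * (s + d) ** ((p - 2.0) / 2.0) / np.sqrt(1.0 + d + s**2)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s)))

for p in [2.5, 3.0, 4.0]:
    for d in [0.1, 0.5, 1.0]:
        for t in [0.5, 1.0, 5.0, 20.0]:
            g = G_delta(t, p, d)
            upper = (2.0 / p) * (t + d) ** (p / 2.0)
            # explicit lower bound derived in the proof for p > 2
            lower = (upper
                     - 2.0 * np.sqrt(1.0 + d) / (p - 2.0) * (t + d) ** (p / 2.0 - 1.0)
                     + 2.0 * np.sqrt(1.0 + d) / (p - 2.0) * d ** (p / 2.0 - 1.0)
                     - (2.0 / p) * d ** (p / 2.0))
            assert lower - 1e-6 <= g <= upper + 1e-9
```

For $p=2$ the primitive is explicit, $G_\delta(t)=\sqrt{1+\delta+t^2}-\sqrt{1+\delta}$, which also gives a convenient exact value to validate the quadrature against.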
+ For ε > 0, we introduce the sequence of operators
1140
+ Aε(ξ) := (|ξ| − 1)p−1
1141
+ +
1142
+ ξ
1143
+ |ξ| + ε
1144
+
1145
+ 1 + |ξ|2� p−2
1146
+ 2 ξ
1147
+ and by
1148
+ uε ∈ C0 �
1149
+ t0 − R2, t0; L2 (BR)
1150
+
1151
+ ∩ Lp �
1152
+ t0 − R2, t0; u + W 1,p
1153
+ 0
1154
+ (BR)
1155
+
1156
+ we denote the unique solution to the corresponding problems
1157
+
1158
+
1159
+
1160
+
1161
+ t − div (Aε (Duε)) = f ε
1162
+ in QR (z0)
1163
+ uε = u
1164
+ in ∂parQR (z0)
1165
+ (3.1)
1166
+ where QR (z0) ⋐ ΩT with R < 1, f ε = f ∗ ρε with ρε the usual sequence of mollifiers. One can easily
1167
+ check that the operator Aε satisfies p-growth and p-ellipticity assumptions with constants depending
1168
+
1169
+ 10
1170
+ on ε.
1171
+ Therefore, by the results in [13], we have
1172
+ Vp (Duε) ∈ L2
1173
+ loc
1174
+
1175
+ 0, T ; W 1,2
1176
+ loc (BR (x0) , Rn)
1177
+
1178
+ and
1179
+ |Duε| ∈ L
1180
+ p+ 4
1181
+ n
1182
+ loc
1183
+ (QR)
1184
+ and, by the definition of Vp(ξ), this yields
1185
+ DVp (Duε) ≈
1186
+
1187
+ 1 + |Duε|2� p−2
1188
+ 4 D2uε ∈ L2
1189
+ loc
1190
+
1191
+ QR; Rn×n�
1192
+ =⇒
1193
+ ��D2uε�� ∈ L2
1194
+ loc (QR)
1195
+ (3.2)
1196
+ By virtue of [3, Theorem 1.1], we also have H p
1197
+ 2 (Duε) ∈ L2
1198
+ loc
1199
+
1200
+ 0, T ; W 1,2
1201
+ loc (Ω, Rn)
1202
+
1203
+ and, by the definition
1204
+ of H p
1205
+ 2 (ξ), it follows
1206
+ ���DH p
1207
+ 2 (Du)
1208
+ ��� ≤ cp (|Duε| − 1)
1209
+ p−2
1210
+ 2
1211
+ +
1212
+ |D2uε| ∈ L2
1213
+ loc
1214
+
1215
+ QR; Rn×n�
1216
+ .
1217
+ (3.3)
1218
+ 3.1
1219
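To illustrate (again, a sketch of ours rather than anything used in the paper) why the regularized problems (3.1) are uniquely solvable, one can check numerically that $A_\varepsilon$ is a monotone operator, being the sum of the monotone map $H_{p-1}$ and the uniformly elliptic $\varepsilon$-perturbation; the values of $p$ and $\varepsilon$ below are arbitrary test choices.

```python
import numpy as np

def A_eps(xi, p, eps):
    """A_eps(xi) = (|xi|-1)_+^{p-1} xi/|xi| + eps (1+|xi|^2)^{(p-2)/2} xi."""
    norm = np.linalg.norm(xi)
    degenerate = 0.0 if norm == 0.0 else max(norm - 1.0, 0.0) ** (p - 1.0) / norm
    return degenerate * xi + eps * (1.0 + norm**2) ** ((p - 2.0) / 2.0) * xi

rng = np.random.default_rng(1)
p, eps = 3.0, 0.05
for _ in range(1000):
    xi = rng.uniform(-3.0, 3.0, size=3)
    eta = rng.uniform(-3.0, 3.0, size=3)
    mono = np.dot(A_eps(xi, p, eps) - A_eps(eta, p, eps), xi - eta)
    assert mono >= -1e-12  # monotonicity of the regularized operator
```

Note that for $\varepsilon=0$ the operator degenerates on the whole ball $\{|\xi|\le1\}$, which is exactly the difficulty the regularization is designed to bypass.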
+ Uniform a priori estimates
1220
+ The first step in the proof of Theorem 1.1 is the following estimate for solutions to the regularized
1221
+ problem (3.1).
1222
+ Lemma 3.1. Let uε ∈ C0 �
1223
+ t0 − R2, t0; L2 (BR)
1224
+
1225
+ ∩ Lp �
1226
+ t0 − R2, t0; u + W 1,p
1227
+ 0
1228
+ (BR)
1229
+
1230
+ be the unique solu-
1231
+ tion to (3.1). Then the following estimate
1232
+ sup
1233
+ τ∈(t0−4ρ2,t0)
1234
+ ˆ
1235
+
1236
+
1237
+ |Duε(x, τ)|2 − 1 − δ
1238
+
1239
+ + dx
1240
+ +
1241
+ ˆ
1242
+
1243
+ ��D
1244
+
1245
+
1246
+
1247
+ (|Duε| − δ − 1)+
1248
+ ����2 dz
1249
+
1250
+ c
1251
+ ρ2
1252
+ �ˆ
1253
+ Q2ρ
1254
+ (1 + |Duε|p) dz + δ2−p
1255
+ ˆ
1256
+ Q2ρ
1257
+ |f ε|2 dz
1258
+
1259
+ (3.4)
1260
+ holds for any ε ∈ (0, 1] and for every Qρ ⋐ Q2ρ ⋐ QR, with a constant c = c(n, p) independent of ε.
1261
+ Proof. The weak formulation of (3.1) reads as
1262
+ ˆ
1263
+ QR
1264
+ (uε · ∂tϕ − ⟨Aε (Duε) , Dϕ⟩) dz = −
1265
+ ˆ
1266
+ QR
1267
+ f ε · ϕ dz
1268
+ for any test function ϕ ∈ C∞
1269
+ 0 (QR).
1270
+ Recalling the notation used in (2.2), and replacing ϕ with
1271
+ ∆−hϕ = τ−hϕ
1272
+ h
1273
+ for a sufficiently small h ∈ R \ { 0 }, by virtue of the properties of difference quotients,
1274
+ we have
1275
+ ˆ
1276
+ QR
1277
+
1278
+ ∆huε · ∂tϕ − ⟨∆hHp−1 (Duε) , Dϕ⟩ − ε
1279
+
1280
+ ∆h
1281
+ ��
1282
+ 1 + |Duε|2� p−2
1283
+ 2 Duε
1284
+
1285
+ , Dϕ
1286
+ ��
1287
+ dz
1288
+ =
1289
+
1290
+ ˆ
1291
+ QR
1292
+ f ε · ∆−hϕ dz.
1293
+ (3.5)
1294
+ Arguing as in [13, Lemma 5.1], from (3.5) we get
1295
+ ˆ
1296
+ QR
1297
+ ∂t∆huε · ϕ dz +
1298
+ ˆ
1299
+ QR
1300
+ ⟨∆hHp−1 (Duε) , Dϕ⟩ dz
1301
+
1302
+ ˆ
1303
+ QR
1304
+
1305
+ ∆h
1306
+ ��
1307
+ 1 + |Duε|2� p−2
1308
+ 2 Duε
1309
+
1310
+ , Dϕ
1311
+
1312
+ dz =
1313
+ ˆ
1314
+ QR
1315
+ f ε · ∆−hϕ dz.
1316
+ For Φ ∈ W 1,∞
1317
+ 0
1318
+ (QR) non negative and g ∈ W 1,∞ (R) non negative and non decreasing, we choose
1319
+ ϕ = Φ · ∆huε · g
1320
+
1321
+ |∆huε|2�
1322
+ in previous identity, thus getting
1323
+ ˆ
1324
+ QR
1325
+ ∂t (∆huε) ∆huε · g
1326
+
1327
+ |∆huε|2�
1328
+ Φ dz
1329
+
1330
+ 11
1331
+ +
1332
+ ˆ
1333
+ QR
1334
+
1335
+ ∆hHp−1 (Duε) , D
1336
+
1337
+ Φ∆huεg
1338
+
1339
+ |∆huε|2���
1340
+ dz
1341
+
1342
+ ˆ
1343
+ QR
1344
+
1345
+ ∆h
1346
+ ��
1347
+ 1 + |Duε|2� p−2
1348
+ 2 Duε
1349
+
1350
+ , D
1351
+
1352
+ Φ∆hug
1353
+
1354
+ |∆huε|2���
1355
+ dz
1356
+ =
1357
+ ˆ
1358
+ QR
1359
+ f ε · ∆−h
1360
+
1361
+ Φ∆huε · g
1362
+
1363
+ |∆huε|2��
1364
+ dz,
1365
+ i.e.
1366
+ ˆ
1367
+ QR
1368
+ ∂t (∆huε) ∆huε · g
1369
+
1370
+ |∆huε|2�
1371
+ Φ dz
1372
+ +
1373
+ ˆ
1374
+ QR
1375
+ Φ
1376
+
1377
+ ∆hHp−1 (Duε) , ∆hDuε · g
1378
+
1379
+ |∆huε|2��
1380
+ dz
1381
+
1382
+ ˆ
1383
+ QR
1384
+ Φ
1385
+
1386
+ ∆h
1387
+ ��
1388
+ 1 + |Duε|2� p−2
1389
+ 2 Duε
1390
+
1391
+ , ∆hDuε · g
1392
+
1393
+ |∆huε|2��
1394
+ dz
1395
+ +2
1396
+ ˆ
1397
+ QR
1398
+ Φ
1399
+
1400
+ ∆hHp−1 (Duε) , |∆huε|2 ∆hDuε · g′ �
1401
+ |∆huε|2��
1402
+ dz
1403
+ +2ε
1404
+ ˆ
1405
+ QR
1406
+ Φ
1407
+
1408
+ ∆h
1409
+ ��
1410
+ 1 + |Duε|2� p−2
1411
+ 2 Duε
1412
+
1413
+ , |∆huε|2 ∆hDuε · g′ �
1414
+ |∆huε|2��
1415
+ dz
1416
+ =
1417
+
1418
+ ˆ
1419
+ QR
1420
+
1421
+ ∆hHp−1 (Duε) , DΦ · ∆huε · g
1422
+
1423
+ |∆huε|2��
1424
+ dz
1425
+ −ε
1426
+ ˆ
1427
+ QR
1428
+
1429
+ ∆h
1430
+ ��
1431
+ 1 + |Duε|2� p−2
1432
+ 2 Duε
1433
+
1434
+ , DΦ · ∆huε · g
1435
+
1436
+ |∆huε|2��
1437
+ dz
1438
+ +
1439
+ ˆ
1440
+ QR
1441
+ f ε · ∆−h
1442
+
1443
+ Φ∆huε · g
1444
+
1445
+ |∆huε|2��
1446
+ dz,
1447
+ (3.6)
1448
+ that we rewrite as follows
1449
+ Jh,1 + Jh,2 + Jh,3 + Jh,4 + Jh,5 = −Jh,6 − Jh,7 + Jh,8.
1450
+ Arguing as in [5],the first integral in equation (3.6) can be expressed as follows
1451
+ Jh,1
1452
+ =
1453
+ ˆ
1454
+ QR
1455
+ ∂t (∆huε) ∆huε · g
1456
+
1457
+ |∆huε|2�
1458
+ Φ dz = 1
1459
+ 2
1460
+ ˆ
1461
+ QR
1462
+ ∂t
1463
+
1464
+ |∆huε|2�
1465
+ · g
1466
+
1467
+ |∆huε|2�
1468
+ Φ dz
1469
+ =
1470
+ 1
1471
+ 2
1472
+ ˆ
1473
+ QR
1474
+ ∂t
1475
+ �ˆ |∆huε|2
1476
+ 0
1477
+ g(s) ds
1478
+
1479
+ Φ dz = −1
1480
+ 2
1481
+ ˆ
1482
+ QR
1483
+ �ˆ |∆huε|2
1484
+ 0
1485
+ g(s) ds
1486
+
1487
+ ∂tΦ dz.
1488
+ Using Lemma 2.2, since Φ, g are non negative, we have
1489
+ Jh,2 ≥
1490
+ ˆ
1491
+ QR
1492
+ Φ · g
1493
+
1494
+ |∆huε|2�
1495
+ |∆hDuε|2
1496
+ (|Duε| − 1)p
1497
+ |Duε| (|Duε| + |Duε(x + h)|) dz.
1498
+ The right inequality in the assertion of Lemma 2.4 yields
1499
+ Jh,3 ≥ εcp
1500
+ ˆ
1501
+ QR
1502
+ Φ · g
1503
+
1504
+ |∆huε|2�
1505
+ |∆hVp (Duε)|2 dz
1506
+ Moreover, again by Lemmas 2.2 and 2.4 and the fact that g′(s) ≥ 0, we infer
1507
+ Jh,4 + Jh,5 ≥ 0.
1508
+ Therefore (3.6) implies
1509
+ −1
1510
+ 2
1511
+ ˆ
1512
+ QR
1513
+ �ˆ |∆huε|2
1514
+ 0
1515
+ g(s) ds
1516
+
1517
+ ∂tΦ dz
1518
+ +
1519
+ ˆ
1520
+ QR
1521
+ Φ · g
1522
+
1523
+ |∆huε|2�
1524
+ |∆hDuε|2
1525
+ (|Duε| − 1)p
1526
+ |Duε| (|Duε| + |Duε(x + h)|) dz
1527
+
1528
+ 12
1529
+ +cpε
1530
+ ˆ
1531
+ QR
1532
+ Φ · g
1533
+
1534
+ |∆huε|2�
1535
+ |∆hVp (Duε)|2 dz
1536
+
1537
+ ˆ
1538
+ QR
1539
+ |DΦ| |∆hHp−1 (Duε)| |∆huε| · g
1540
+
1541
+ |∆huε|2�
1542
+ dz
1543
+
1544
+ ˆ
1545
+ QR
1546
+ |DΦ|
1547
+ ����∆h
1548
+ ��
1549
+ 1 + |Duε|2� p−2
1550
+ 2 Duε
1551
+ ����� |∆huε| · g
1552
+
1553
+ |∆huε|2�
1554
+ dz
1555
+ +
1556
+ ˆ
1557
+ QR
1558
+ |f ε|
1559
+ ���∆−h
1560
+
1561
+ Φ∆huε · g
1562
+
1563
+ |∆huε|2����� dz.
1564
+ (3.7)
1565
+ Now let us consider a parabolic cylinder Qρ (z0) ⋐ Q2ρ (z0) ⋐ QR (z0) with ρ < 2ρ < R and t0 > 0.
1566
+ For a fixed time τ ∈
1567
+
1568
+ t0 − 4ρ2, t0
1569
+
1570
+ and θ ∈ (0, t0 − τ), we choose Φ(x, t) = η2(x)χ(t)˜χ(t) with η ∈
1571
+ C∞
1572
+ 0 (B2ρ (x0)), 0 ≤ η ≤ 1, χ ∈ W 1,∞ ([0, T ]) with ∂tχ ≥ 0 and ˜χ a Lipschitz continuous function
1573
+ defined, for 0 < τ < τ + θ < T , as follows
1574
+ ˜χ(t) =
1575
+
1576
+
1577
+
1578
+
1579
+
1580
+
1581
+
1582
+
1583
+
1584
+
1585
+
1586
+
1587
+
1588
+ 1
1589
+ if
1590
+ t ≤ τ
1591
+ 1 − t − τ
1592
+ θ
1593
+ if
1594
+ τ < t ≤ τ + θ
1595
+ 0
1596
+ if
1597
+ τ + θ < t ≤ T
1598
+ so that (3.7) yields
1599
+ Ih,1 + Ih,2 + Ih,3
1600
+ :=
1601
+ 1
1602
+ 2
1603
+ ˆ
1604
+ B2ρ
1605
+ η2χ(τ)
1606
+ �ˆ |∆huε(x,τ)|2
1607
+ 0
1608
+ g(s) ds
1609
+
1610
+ dx
1611
+ +cp
1612
+ ˆ
1613
+ Qτ η2χ(t) · g
1614
+
1615
+ |∆huε|2�
1616
+ |∆hDuε|2
1617
+ (|Duε| − 1)p
1618
+ |Duε| (|Duε| + |Duε(x + h)|) dz
1619
+ +cpε
1620
+ ˆ
1621
+ Qτ η2χ(t)g
1622
+
1623
+ |∆huε|2�
1624
+ |∆hVp (Duε)|2 dz
1625
+
1626
+ 2
1627
+ ˆ
1628
+ Qτ ηχ(t) |Dη| |∆hHp−1 (Duε)| |∆huε| · g
1629
+
1630
+ |∆huε|2�
1631
+ dz
1632
+ +2ε
1633
+ ˆ
1634
+ Qτ ηχ(t) |Dη|
1635
+ ����∆h
1636
+ ��
1637
+ 1 + |Duε|2� p−2
1638
+ 2 Duε
1639
+ ����� |∆huε| · g
1640
+
1641
+ |∆huε|2�
1642
+ dz
1643
+ +
1644
+ ˆ
1645
+ Qτ χ(t) |f ε|
1646
+ ���∆−h
1647
+
1648
+ η2∆huε · g
1649
+
1650
+ |∆huε|2����� dz
1651
+ +1
1652
+ 2
1653
+ ˆ
1654
+ Qτ η2∂tχ(t)
1655
+ �ˆ |∆huε|2
1656
+ 0
1657
+ g(s) ds
1658
+
1659
+ dz
1660
+ =:
1661
+ Ih,4 + Ih,5 + Ih,6 + Ih,7,
1662
+ (3.8)
1663
+ where we used the notation Qτ = B2ρ (x0) ×
1664
+
1665
+ t0 − 4ρ2, τ
1666
+
1667
+ .
1668
+ Since g ∈ W 1,∞ ([0, ∞)), by (3.2), by the last assertion of Lemma 2.9 and by Fatou’s Lemma, we have
1669
+ lim inf
1670
+ h→0 (Ih,1 + Ih,2 + Ih,3)
1671
+
1672
+ 1
1673
+ 2
1674
+ ˆ
1675
+ B2ρ
1676
+ η2χ(τ)
1677
+ �ˆ |Duε(x,τ)|2
1678
+ 0
1679
+ g(s) ds
1680
+
1681
+ dx
1682
+ +cp
1683
+ ˆ
1684
+ Qτ η2χ(t) · g
1685
+
1686
+ |Duε|2� ��D2uε��2 (|Duε| − 1)p
1687
+ |Duε|2
1688
+ dz
1689
+ +cpε
1690
+ ˆ
1691
+ Qτ η2χ(t)g
1692
+
1693
+ |Duε|2�
1694
+ |DVp (Duε)|2 dz.
1695
+ (3.9)
1696
+ and
1697
+ lim
1698
+ h→0 Ih,7 = 1
1699
+ 2
1700
+ ˆ
1701
+ Qτ η2∂tχ(t)
1702
+ �ˆ |Duε|2
1703
+ 0
1704
+ g(s) ds
1705
+
1706
+ dz.
1707
+ (3.10)
1708
+ Now let us observe that
1709
+ |DHp−1 (Duε)| ≤ cp (|Duε| − 1)p−2
1710
+ +
1711
+ ��D2u�
1712
+ (3.11)
1713
+
1714
+ 13
1715
+ and, using Hölder’s inequality with exponents
1716
+
1717
+ 2(p−1)
1718
+ p−2 , 2(p−1)
1719
+ p
1720
+
1721
+ , we have
1722
+ ˆ
1723
+ BR
1724
+ |DHp−1 (Duε)|
1725
+ p
1726
+ p−1 dx
1727
+
1728
+ cp
1729
+ ˆ
1730
+ BR
1731
+
1732
+ (|Duε| − 1)p−2
1733
+ +
1734
+ ��D2u�
1735
+
1736
+ p
1737
+ p−1 dx
1738
+
1739
+ cp
1740
+ �ˆ
1741
+ BR
1742
+ (|Duε| − 1)p
1743
+ + dx
1744
+
1745
+ p−2
1746
+ 2(p−1)
1747
+ ·
1748
+ �ˆ
1749
+ BR
1750
+
1751
+ (|Duε| − 1)
1752
+ p−2
1753
+ 2
1754
+ +
1755
+ ��D2u�
1756
+ �2
1757
+ dx
1758
+
1759
+ p
1760
+ 2(p−1)
1761
+ ,
1762
+ and since, by (3.3), the right hand side of previous inequality is finite again by Lemma 2.9, we have
1763
+ ∆hHp−1 (Duε) → DHp−1 (Duε)
1764
+ strongly in
1765
+ L2 �
1766
+ 0, T ; L
1767
+ p
1768
+ p−1 (BR)
1769
+
1770
+ as h → 0,
1771
+ which, since ∆huε → Duε strongly in L2 (0, T ; Lp (BR)) as h → 0, implies
1772
+ lim
1773
+ h→0 Ih,4 = 2
1774
+ ˆ
1775
+ Qτ ηχ(t) |Dη| |DHp−1 (Duε)| |Duε| g
1776
+
1777
+ |Duε|2�
1778
+ dz.
1779
+ (3.12)
1780
Using similar arguments, we can check that
$$\lim_{h\to0}I_{h,5} = 2\varepsilon\int_{Q_\tau}\eta\,\chi(t)\,|D\eta|\,\Big|D\Big(\big(1+|Du_\varepsilon|^2\big)^{\frac{p-2}{2}}Du_\varepsilon\Big)\Big|\,|Du_\varepsilon|\,g\big(|Du_\varepsilon|^2\big)\,dz. \qquad (3.13)$$
Now, by Proposition 2.7(c), it holds that
$$\Big|\Delta_{-h}\Big(\eta^2\,\Delta_hu_\varepsilon\,g\big(|\Delta_hu_\varepsilon|^2\big)\Big)\Big| \;\le\; c\,\|D\eta\|_\infty\,|\Delta_hu_\varepsilon|\,\Big|g\big(|\Delta_hu_\varepsilon|^2\big)\Big| + c\,|\Delta_{-h}(\Delta_hu_\varepsilon)|\,\Big|g\big(|\Delta_hu_\varepsilon|^2\big)\Big| + c\,|\Delta_hu_\varepsilon|^2\,\Big|g'\big(|\Delta_hu_\varepsilon|^2\big)\Big|\,|\Delta_hDu_\varepsilon|,$$
and choosing $g$ such that
$$s\,g'(s^2)\le M \qquad (3.14)$$
for a positive constant $M$, we have
$$\Big|\Delta_{-h}\Big(\eta^2\,\Delta_hu_\varepsilon\,g\big(|\Delta_hu_\varepsilon|^2\big)\Big)\Big| \;\le\; c\,\|D\eta\|_\infty\,|\Delta_hu_\varepsilon|\,\Big|g\big(|\Delta_hu_\varepsilon|^2\big)\Big| + c\,|\Delta_{-h}(\Delta_hu_\varepsilon)|\,\Big|g\big(|\Delta_hu_\varepsilon|^2\big)\Big| + c\,M\,|\Delta_hu_\varepsilon|\,|\Delta_{-h}Du_\varepsilon|. \qquad (3.15)$$
Since $\Delta_hu_\varepsilon\to Du_\varepsilon$, $\Delta_{-h}(\Delta_hu_\varepsilon)\to D^2u_\varepsilon$ and $\Delta_{-h}Du_\varepsilon\to D^2u_\varepsilon$ strongly in $L^2\big(0,T;L^2_{loc}(\Omega)\big)$ as $h\to0$, and $f^\varepsilon\in C^\infty(\Omega_T)$, thanks to (3.15) we have
$$\lim_{h\to0}I_{h,6} = \int_{Q_\tau}\chi(t)\,|f^\varepsilon|\,\Big|D\Big(\eta^2\,Du_\varepsilon\,g\big(|Du_\varepsilon|^2\big)\Big)\Big|\,dz. \qquad (3.16)$$
So, collecting (3.9), (3.10), (3.12), (3.13) and (3.16), we can pass to the limit as $h\to0$ in (3.8), thus getting
$$\frac12\int_{B_{2\rho}}\eta^2\chi(\tau)\left(\int_0^{|Du_\varepsilon(x,\tau)|^2}g(s)\,ds\right)dx + c_p\int_{Q_\tau}\eta^2\chi(t)\,g\big(|Du_\varepsilon|^2\big)\,\frac{(|Du_\varepsilon|-1)^p}{|Du_\varepsilon|^2}\,\big|D^2u_\varepsilon\big|^2\,dz$$
$$+\,c_p\varepsilon\int_{Q_\tau}\eta^2\chi(t)\,g\big(|Du_\varepsilon|^2\big)\,|DV_p(Du_\varepsilon)|^2\,dz$$
$$\le\; 2\int_{Q_\tau}\eta\,\chi(t)\,|D\eta|\,|DH_{p-1}(Du_\varepsilon)|\,|Du_\varepsilon|\,g\big(|Du_\varepsilon|^2\big)\,dz + 2\varepsilon\int_{Q_\tau}\eta\,\chi(t)\,|D\eta|\,\Big|D\Big(\big(1+|Du_\varepsilon|^2\big)^{\frac{p-2}{2}}Du_\varepsilon\Big)\Big|\,|Du_\varepsilon|\,g\big(|Du_\varepsilon|^2\big)\,dz$$
$$+\int_{Q_\tau}\chi(t)\,|f^\varepsilon|\,\Big|D\Big(\eta^2\,Du_\varepsilon\,g\big(|Du_\varepsilon|^2\big)\Big)\Big|\,dz + \frac12\int_{Q_\tau}\eta^2\,\partial_t\chi(t)\left(\int_0^{|Du_\varepsilon|^2}g(s)\,ds\right)dz$$
$$=: \tilde I_1+\tilde I_2+\tilde I_3+\tilde I_4, \qquad (3.17)$$
for every $g\in W^{1,\infty}(0,+\infty)$ such that (3.14) holds true. Now, by (3.11) and by Young's inequality, we have
$$\tilde I_1+\tilde I_2 \;\le\; c_p\int_{Q_\tau}\eta\,\chi(t)\,|D\eta|\,(|Du_\varepsilon|-1)_+^{p-2}\,\big|D^2u_\varepsilon\big|\,|Du_\varepsilon|\,g\big(|Du_\varepsilon|^2\big)\,dz$$
$$+\,c_p\,\varepsilon\int_{Q_\tau}\eta\,\chi(t)\,|D\eta|\,\big(1+|Du_\varepsilon|^2\big)^{\frac{p-1}{2}}\big|D^2u_\varepsilon\big|\,g\big(|Du_\varepsilon|^2\big)\,dz$$
$$\le\; \sigma\int_{Q_\tau}\eta^2\chi(t)\,\frac{(|Du_\varepsilon|-1)_+^p}{|Du_\varepsilon|^2}\,\big|D^2u_\varepsilon\big|^2\,g\big(|Du_\varepsilon|^2\big)\,dz + \sigma\varepsilon\int_{Q_\tau}\eta^2\chi(t)\,\big(1+|Du_\varepsilon|^2\big)^{\frac{p-2}{2}}\big|D^2u_\varepsilon\big|^2\,g\big(|Du_\varepsilon|^2\big)\,dz$$
$$+\,c_\sigma\int_{Q_\tau}\chi(t)\,|D\eta|^2\,(|Du_\varepsilon|-1)_+^{p-4}\,|Du_\varepsilon|^4\,g\big(|Du_\varepsilon|^2\big)\,dz + c_{p,\sigma}\,\varepsilon\int_{Q_\tau}\chi(t)\,|D\eta|^2\,\big(1+|Du_\varepsilon|^2\big)^{\frac p2}\,g\big(|Du_\varepsilon|^2\big)\,dz$$
$$\le\; \sigma\int_{Q_\tau}\eta^2\chi(t)\,\frac{(|Du_\varepsilon|-1)_+^p}{|Du_\varepsilon|^2}\,\big|D^2u_\varepsilon\big|^2\,g\big(|Du_\varepsilon|^2\big)\,dz + \sigma\varepsilon\int_{Q_\tau}\eta^2\chi(t)\,|DV_p(Du_\varepsilon)|^2\,g\big(|Du_\varepsilon|^2\big)\,dz$$
$$+\,c_{\sigma,p}\,\|D\eta\|_{L^\infty}^2\,\|g\|_{L^\infty}\int_{Q_\tau}\chi(t)\,\big(1+|Du_\varepsilon|\big)^p\,dz, \qquad (3.18)$$
+ Now, using Young’s Inequality, we estimate the term ˜I3, as follows
1993
+ ˜I3
1994
+
1995
+ c
1996
+ ˆ
1997
+ Qτ χ(t) |f ε| η |Dη| |Duε| · g
1998
+
1999
+ |Duε|2�
2000
+ dz
2001
+ +c
2002
+ ˆ
2003
+ Qτ χ(t) |f ε| η2 ��D2uε�� · g
2004
+
2005
+ |Duε|2�
2006
+ dz
2007
+ +c
2008
+ ˆ
2009
+ Qτ χ(t) |f ε| η2 |Duε|2 ��D2uε�� · g′ �
2010
+ |Duε|2�
2011
+ dz
2012
+
2013
+ c ∥Dη∥∞ ∥g∥L∞
2014
+ ˆ
2015
+ Qτ ηχ(t) |f ε|2 dz
2016
+ +c ∥Dη∥∞ ∥g∥L∞
2017
+ ˆ
2018
+ Qτ ηχ(t) |Duε|2 dz
2019
+ +c
2020
+ ˆ
2021
+ Qτ η2χ(t) |f ε|
2022
+ ��D2uε�� · g
2023
+
2024
+ |Duε|2�
2025
+ dz
2026
+ +c
2027
+ ˆ
2028
+ Qτ η2χ(t) |f ε| |Duε|2 ��D2uε�� · g′ �
2029
+ |Duε|2�
2030
+ dz.
2031
+ (3.19)
Plugging (3.18) and (3.19) into (3.17), we get
\[
\begin{split}
\frac12&\int_{B_{2\rho}}\eta^2\chi(\tau)\left(\int_0^{|Du^\varepsilon(x,\tau)|^2}g(s)\,ds\right)dx
+c_p\int_{Q_\tau}\eta^2\chi(t)\; g\big(|Du^\varepsilon|^2\big)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\,dz\\
&+c_p\,\varepsilon\int_{Q_\tau}\eta^2\chi(t)\,g\big(|Du^\varepsilon|^2\big)\,|DV_p(Du^\varepsilon)|^2\,dz\\
\le\;&\sigma\int_{Q_\tau}\eta^2\chi(t)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\; g\big(|Du^\varepsilon|^2\big)\,dz
+\sigma\,\varepsilon\int_{Q_\tau}\eta^2\chi(t)\,|DV_p(Du^\varepsilon)|^2\; g\big(|Du^\varepsilon|^2\big)\,dz\\
&+c_{p,\sigma}\,\|D\eta\|_{L^\infty}\,\|g\|_{L^\infty}\int_{Q_\tau}\eta\,\chi(t)\,|f^\varepsilon|^2\,dz
+c_{p,\sigma}\,\|D\eta\|_{L^\infty}\,\|g\|_{L^\infty}\int_{Q_\tau}\eta\,\chi(t)\,(1+|Du^\varepsilon|)^p\,dz\\
&+c\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|\,|D^2u^\varepsilon|\; g\big(|Du^\varepsilon|^2\big)\,dz
+c\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|\,|Du^\varepsilon|^2\,|D^2u^\varepsilon|\; g'\big(|Du^\varepsilon|^2\big)\,dz\\
&+\frac12\int_{Q_\tau}\eta^2\,\partial_t\chi(t)\left(\int_0^{|Du^\varepsilon|^2}g(s)\,ds\right)dz,
\end{split}
\]
which, for a sufficiently small σ, gives
\[
\begin{split}
\frac12&\int_{B_{2\rho}}\eta^2\chi(\tau)\left(\int_0^{|Du^\varepsilon(x,\tau)|^2}g(s)\,ds\right)dx
+c_p\int_{Q_\tau}\eta^2\chi(t)\; g\big(|Du^\varepsilon|^2\big)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\,dz\\
&+c_p\,\varepsilon\int_{Q_\tau}\eta^2\chi(t)\,g\big(|Du^\varepsilon|^2\big)\,|DV_p(Du^\varepsilon)|^2\,dz\\
\le\;&c_p\,\|D\eta\|_{L^\infty}\,\|g\|_{L^\infty}\int_{Q_\tau}\eta\,\chi(t)\,|f^\varepsilon|^2\,dz
+c_p\,\|D\eta\|_{L^\infty}\,\|g\|_{L^\infty}\int_{Q_\tau}\eta\,\chi(t)\,(1+|Du^\varepsilon|)^p\,dz\\
&+c\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|\,|D^2u^\varepsilon|\; g\big(|Du^\varepsilon|^2\big)\,dz
+c\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|\,|Du^\varepsilon|^2\,|D^2u^\varepsilon|\; g'\big(|Du^\varepsilon|^2\big)\,dz\\
&+\frac12\int_{Q_\tau}\eta^2\,\partial_t\chi(t)\left(\int_0^{|Du^\varepsilon|^2}g(s)\,ds\right)dz,
\end{split}
\]
that, neglecting the third integral in the left-hand side, implies
\[
\begin{split}
\frac12&\int_{B_{2\rho}}\eta^2\chi(\tau)\left(\int_0^{|Du^\varepsilon(x,\tau)|^2}g(s)\,ds\right)dx
+c_p\int_{Q_\tau}\eta^2\chi(t)\; g\big(|Du^\varepsilon|^2\big)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\,dz\\
\le\;&c_p\,\|D\eta\|_{L^\infty}\,\|g\|_{L^\infty}\int_{Q_\tau}\eta\,\chi(t)\,|f^\varepsilon|^2\,dz
+c_p\,\|D\eta\|_{L^\infty}\,\|g\|_{L^\infty}\int_{Q_\tau}\eta\,\chi(t)\,(1+|Du^\varepsilon|)^p\,dz\\
&+c\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|\,|D^2u^\varepsilon|\; g\big(|Du^\varepsilon|^2\big)\,dz
+c\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|\,|Du^\varepsilon|^2\,|D^2u^\varepsilon|\; g'\big(|Du^\varepsilon|^2\big)\,dz\\
&+\frac12\int_{Q_\tau}\eta^2\,\partial_t\chi(t)\left(\int_0^{|Du^\varepsilon|^2}g(s)\,ds\right)dz.
\end{split}\tag{3.20}
\]
Now, for δ ∈ (0, 1), recalling the notation in (2.3), we choose
\[
g(s)=g_{1+\delta}\big((s-1-\delta)_+\big),
\qquad\text{that is,}\qquad
g(s)=\frac{(s-1-\delta)_+^2}{1+\delta+(s-1-\delta)_+^2},
\]
which is legitimate, since g ∈ W^{1,∞}([0, +∞)).
Moreover, with this choice we have g(s) ∈ [0, 1] for every s ≥ 0 and, thanks to (2.5), there exists a constant c_δ > 0 such that
\[
s\,g'\big(s^2\big)\le c_\delta\qquad\text{for every } s\ge 0,
\]
so that (3.14) holds. Therefore, since g(s) vanishes on the set where s ≤ 1 + δ and g(s) ≤ 1 for every s, (3.20) becomes
\[
\begin{split}
\frac12&\int_{B_{2\rho}}\eta^2\chi(\tau)\left(\int_0^{|Du^\varepsilon(x,\tau)|^2}g(s)\,ds\right)dx
+c_p\int_{Q_\tau}\eta^2\chi(t)\; g\big(|Du^\varepsilon|^2\big)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\,dz\\
\le\;&c\int_{Q_\tau\cap\{|Du^\varepsilon|^2>1+\delta\}}\eta^2\chi(t)\,|f^\varepsilon|\,|D^2u^\varepsilon|\,
\frac{(|Du^\varepsilon|-1)_+^{\frac p2}}{|Du^\varepsilon|}\cdot\frac{|Du^\varepsilon|}{(|Du^\varepsilon|-1)_+^{\frac p2}}\; g\big(|Du^\varepsilon|^2\big)\,dz\\
&+c\int_{Q_\tau\cap\{|Du^\varepsilon|^2>1+\delta\}}\eta^2\chi(t)\,|f^\varepsilon|\,|Du^\varepsilon|^2\,
\frac{(|Du^\varepsilon|-1)_+^{\frac p2}}{|Du^\varepsilon|}\cdot\frac{|Du^\varepsilon|}{(|Du^\varepsilon|-1)_+^{\frac p2}}\,|D^2u^\varepsilon|\; g'\big(|Du^\varepsilon|^2\big)\,dz\\
&+c_p\,\|D\eta\|_{L^\infty}\,\|\chi\|_{L^\infty}\int_{Q_{2\rho}}\big(1+|Du^\varepsilon|^p+|f^\varepsilon|^2\big)\,dz
+\int_{Q_\tau}\eta^2\,\partial_t\chi(t)\left(\int_0^{|Du^\varepsilon|^2}g(s)\,ds\right)dz\\
\le\;&\frac{c_p}{\delta^{\frac p2}}\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|\,|D^2u^\varepsilon|\,
\frac{(|Du^\varepsilon|-1)_+^{\frac p2}}{|Du^\varepsilon|}\; g\big(|Du^\varepsilon|^2\big)\,dz\\
&+\frac{c_p}{\delta^{\frac p2}}\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|\,|Du^\varepsilon|^2\,
\frac{(|Du^\varepsilon|-1)_+^{\frac p2}}{|Du^\varepsilon|}\,|D^2u^\varepsilon|\; g'\big(|Du^\varepsilon|^2\big)\,dz\\
&+c_p\,\|D\eta\|_{L^\infty}\,\|\chi\|_{L^\infty}\int_{Q_{2\rho}}\big(1+|Du^\varepsilon|^p+|f^\varepsilon|^2\big)\,dz
+\int_{Q_\tau}\eta^2\,\partial_t\chi(t)\left(\int_0^{|Du^\varepsilon|^2}g(s)\,ds\right)dz,
\end{split}
\]
where we used that
\[
\sup_{x\in(\sqrt{1+\delta},\,+\infty)}\frac{x}{(x-1)^{\frac p2}}
=\frac{\sqrt{1+\delta}}{\big(\sqrt{1+\delta}-1\big)^{\frac p2}}
=\frac{\sqrt{1+\delta}\,\big(\sqrt{1+\delta}+1\big)^{\frac p2}}{\delta^{\frac p2}}
\le\frac{c_p}{\delta^{\frac p2}},
\]
since δ < 1. Using Young's inequality in the first integral on the right-hand side, the previous estimate yields
\[
\begin{split}
\frac12&\int_{B_{2\rho}}\eta^2\chi(\tau)\left(\int_0^{|Du^\varepsilon(x,\tau)|^2}g(s)\,ds\right)dx
+c_p\int_{Q_\tau}\eta^2\chi(t)\; g\big(|Du^\varepsilon|^2\big)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\,dz\\
\le\;&\frac{c_p(\beta)}{\delta^{p}}\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|^2\; g\big(|Du^\varepsilon|^2\big)\,dz
+\beta\int_{Q_\tau}\eta^2\chi(t)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\; g\big(|Du^\varepsilon|^2\big)\,dz\\
&+\frac{c_p}{\delta^{\frac p2}}\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|\,|Du^\varepsilon|^2\,
\frac{(|Du^\varepsilon|-1)_+^{\frac p2}}{|Du^\varepsilon|}\,|D^2u^\varepsilon|\; g'\big(|Du^\varepsilon|^2\big)\,dz\\
&+c_p\,\|D\eta\|_{L^\infty}\,\|\chi\|_{L^\infty}\int_{Q_{2\rho}}\big(1+|Du^\varepsilon|^p+|f^\varepsilon|^2\big)\,dz
+\int_{Q_\tau}\eta^2\,\partial_t\chi(t)\left(\int_0^{|Du^\varepsilon|^2}g(s)\,ds\right)dz.
\end{split}
\]
Choosing β sufficiently small, reabsorbing the second integral on the right-hand side into the left-hand side and using that g(s) ≤ 1, we get
\[
\begin{split}
&\int_{B_{2\rho}}\eta^2\chi(\tau)\left(\int_0^{|Du^\varepsilon(x,\tau)|^2}g(s)\,ds\right)dx
+\int_{Q_\tau}\eta^2\chi(t)\; g\big(|Du^\varepsilon|^2\big)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\,dz\\
\le\;&\frac{c\,c_p}{\delta^{\frac p2}}\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|\,|Du^\varepsilon|^2\,
\frac{(|Du^\varepsilon|-1)_+^{\frac p2}}{|Du^\varepsilon|}\,|D^2u^\varepsilon|\; g'\big(|Du^\varepsilon|^2\big)\,dz
+\int_{Q_\tau}\eta^2\,\partial_t\chi(t)\left(\int_0^{|Du^\varepsilon|^2}g(s)\,ds\right)dz\\
&+c\,\|D\eta\|^2_{L^\infty}\,\|\chi\|_{L^\infty}\int_{Q_\tau}(1+|Du^\varepsilon|)^p\,dz
+c\,\|\chi\|_{L^\infty}\Big(\frac{c_p}{\delta^p}+\|D\eta\|_{L^\infty}\Big)\int_{Q_\tau}|f^\varepsilon|^2\,dz.
\end{split}\tag{3.21}
\]
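As a side check, the elementary bound used above, namely sup_{x>√(1+δ)} x/(x−1)^{p/2} = √(1+δ)(√(1+δ)+1)^{p/2}/δ^{p/2}, can be verified numerically. The sketch below (with illustrative values δ = 0.5 and p = 3, not taken from the paper) compares a grid maximum with the closed form; it relies on the fact that x ↦ x/(x−1)^{p/2} is decreasing for x > 1 when p ≥ 2.

```python
import math

# Illustrative parameters (not from the paper): any delta in (0,1), p >= 2.
delta, p = 0.5, 3.0

def h(x):
    # The function whose supremum over (sqrt(1+delta), +inf) is needed.
    return x / (x - 1.0) ** (p / 2.0)

a = math.sqrt(1.0 + delta)

# Since (sqrt(1+delta)-1)(sqrt(1+delta)+1) = delta, the value at the left
# endpoint equals the claimed closed form.
closed_form = a * (a + 1.0) ** (p / 2.0) / delta ** (p / 2.0)

# h is decreasing on (1, +inf) for p >= 2, so a grid maximum should not
# exceed the endpoint value.
grid_max = max(h(a + 1e-9 + 0.001 * k) for k in range(200000))

print(closed_form, grid_max)
assert abs(h(a + 1e-9) - closed_form) < 1e-4
assert grid_max <= closed_form + 1e-6
```

The identity (√(1+δ)−1)(√(1+δ)+1) = δ is what turns the endpoint value into the δ^{−p/2} form, and the bound c_p/δ^{p/2} then follows because √(1+δ)(√(1+δ)+1)^{p/2} is bounded by a constant depending only on p when δ < 1.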
We now estimate the first integral on the right-hand side of the previous inequality with the use of (2.4), applied with
\[
s=|Du^\varepsilon|^2,\qquad
A=\frac{(|Du^\varepsilon|-1)_+^{\frac p2}}{|Du^\varepsilon|}\,|D^2u^\varepsilon|,\qquad
B=\frac{c_p}{\delta^{\frac p2}}\,|f^\varepsilon|,\qquad
k=1+\delta,
\]
thus getting
\[
\begin{split}
\frac{c_p}{\delta^{\frac p2}}&\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|\,|Du^\varepsilon|^2\,
\frac{(|Du^\varepsilon|-1)_+^{\frac p2}}{|Du^\varepsilon|}\,|D^2u^\varepsilon|\; g'\big(|Du^\varepsilon|^2\big)\,dz\\
\le\;&2\alpha\int_{Q_\tau}\eta^2\chi(t)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\; g\big(|Du^\varepsilon|^2\big)\,dz
+2\alpha\sigma\int_{Q_\tau}\eta^2\chi(t)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\,dz\\
&+\frac{c_{\alpha,p}}{\delta^{p}}\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|^2\,dz,
\end{split}
\]
with constants c, c_α both independent of σ, and where we used that δ < 1. By virtue of (3.3), taking the limit as σ → 0 in the previous inequality, we have
\[
\begin{split}
\frac{c_p}{\delta^{\frac p2}}&\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|\,|Du^\varepsilon|^2\,
\frac{(|Du^\varepsilon|-1)_+^{\frac p2}}{|Du^\varepsilon|}\,|D^2u^\varepsilon|\; g'\big(|Du^\varepsilon|^2\big)\,dz\\
\le\;&2\alpha\int_{Q_\tau}\eta^2\chi(t)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\; g\big(|Du^\varepsilon|^2\big)\,dz
+\frac{c_{\alpha,p}}{\delta^{p}}\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|^2\,dz.
\end{split}\tag{3.22}
\]
Inserting (3.22) in (3.21), we find
\[
\begin{split}
&\int_{B_{2\rho}}\eta^2\chi(\tau)\left(\int_0^{|Du^\varepsilon(x,\tau)|^2}g(s)\,ds\right)dx
+\int_{Q_\tau}\eta^2\chi(t)\; g\big(|Du^\varepsilon|^2\big)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\,dz\\
\le\;&2\alpha\int_{Q_\tau}\eta^2\chi(t)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\; g\big(|Du^\varepsilon|^2\big)\,dz
+\frac{c_{\alpha,p}}{\delta^{p}}\int_{Q_\tau}\eta^2\chi(t)\,|f^\varepsilon|^2\,dz\\
&+\int_{Q_\tau}\eta^2\,\partial_t\chi(t)\left(\int_0^{|Du^\varepsilon|^2}g(s)\,ds\right)dz
+c\,\|D\eta\|^2_{L^\infty}\,\|\chi\|_{L^\infty}\int_{Q_\tau}(1+|Du^\varepsilon|)^p\,dz\\
&+c\,\|\chi\|_{L^\infty}\Big(\frac{c_p}{\delta^p}+\|D\eta\|_{L^\infty}\Big)\int_{Q_\tau}|f^\varepsilon|^2\,dz.
\end{split}
\]
Choosing α = 1/4, we can reabsorb the first integral on the right-hand side into the left-hand side, thus obtaining
\[
\begin{split}
&\int_{B_{2\rho}}\eta^2\chi(\tau)\left(\int_0^{|Du^\varepsilon(x,\tau)|^2}g(s)\,ds\right)dx
+\int_{Q_\tau}\eta^2\chi(t)\; g\big(|Du^\varepsilon|^2\big)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\,dz\\
\le\;&c\,\|D\eta\|^2_{L^\infty}\,\|\chi\|_{L^\infty}\int_{Q_\tau}(1+|Du^\varepsilon|)^p\,dz
+\frac{c}{\delta^p}\,\|\chi\|_{L^\infty}\big(1+\|D\eta\|_{L^\infty}\big)\int_{Q_\tau}|f^\varepsilon|^2\,dz\\
&+c\int_{Q_\tau}\eta^2\,\partial_t\chi(t)\left(\int_0^{|Du^\varepsilon|^2}g(s)\,ds\right)dz.
\end{split}\tag{3.23}
\]
By the definition of g, we have
\[
\int_0^\zeta g(s)\,ds=
\begin{cases}
0 & \text{if }\ 0<\zeta\le 1+\delta,\\[6pt]
\displaystyle\int_{1+\delta}^{\zeta}\frac{(s-1-\delta)^2}{1+\delta+(s-1-\delta)^2}\,ds & \text{if }\ \zeta>1+\delta,
\end{cases}
\]
and so it is easy to check that
\[
\int_0^\zeta g(s)\,ds=
\begin{cases}
0 & \text{if }\ 0<\zeta\le 1+\delta,\\[6pt]
\displaystyle \zeta-1-\delta-\sqrt{1+\delta}\,\arctan\left(\frac{\zeta-1-\delta}{\sqrt{1+\delta}}\right) & \text{if }\ \zeta>1+\delta,
\end{cases}
\]
that is,
\[
\int_0^\zeta g(s)\,ds=(\zeta-1-\delta)_+-\sqrt{1+\delta}\,\arctan\left(\frac{(\zeta-1-\delta)_+}{\sqrt{1+\delta}}\right).
\]
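The closed-form primitive can be double-checked numerically: differentiating F(ζ) = (ζ−1−δ)₊ − √(1+δ) arctan((ζ−1−δ)₊/√(1+δ)) by finite differences should reproduce g(ζ), and F should vanish at the truncation level. A minimal sketch with the illustrative value δ = 0.25:

```python
import math

delta = 0.25

def g(s):
    t = max(s - 1.0 - delta, 0.0)
    return t * t / (1.0 + delta + t * t)

def F(z):
    # Claimed primitive of g, vanishing on (0, 1+delta].
    t = max(z - 1.0 - delta, 0.0)
    return t - math.sqrt(1.0 + delta) * math.atan(t / math.sqrt(1.0 + delta))

# F(1+delta) = 0, and F' = g away from the kink:
assert F(1.0 + delta) == 0.0
for z in [1.5, 2.0, 3.7, 10.0, 100.0]:
    h = 1e-6
    deriv = (F(z + h) - F(z - h)) / (2.0 * h)
    assert abs(deriv - g(z)) < 1e-6
```

The check mirrors the algebra behind the display: F′(ζ) = 1 − (1+δ)/(1+δ+(ζ−1−δ)²) = g(ζ) for ζ > 1+δ.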
Therefore, by the previous equality and the properties of χ and η, (3.23) implies
\[
\begin{split}
&\int_{B_{2\rho}}\eta^2\chi(\tau)\,\big(|Du^\varepsilon(x,\tau)|^2-1-\delta\big)_+\,dx
+\int_{Q_\tau}\eta^2\chi(t)\; g\big(|Du^\varepsilon|^2\big)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\,dz\\
\le\;&c\,\|D\eta\|^2_{L^\infty}\,\|\chi\|_{L^\infty}\int_{Q_\tau}(1+|Du^\varepsilon|)^p\,dz
+\frac{c}{\delta^p}\,\|\chi\|_{L^\infty}\big(1+\|D\eta\|_{L^\infty}\big)\int_{Q_\tau}|f^\varepsilon|^2\,dz\\
&+c\int_{Q_\tau}\eta^2\,\partial_t\chi(t)\,\big(|Du^\varepsilon|^2-1-\delta\big)_+\,dz
+c\,\|\partial_t\chi\|_{L^\infty}|Q_\tau|+c\,\|\chi\|_{L^\infty}|B_R|,
\end{split}\tag{3.24}
\]
which holds for almost every τ ∈ (t₀ − 4ρ², t₀).
+ We now choose a cut-off function η ∈ C∞ (B2ρ (x0)) with η ≡ 1 on Bρ (x0) such that 0 ≤ η ≤ 1 and
2731
+
2732
+ 19
2733
+ |Dη| ≤ c
2734
+ ρ. For the cut-off function in time, we choose χ ∈ W 1,∞ �
2735
+ t0 − R2, t0, [0, 1]
2736
+
2737
+ such that χ ≡ 0
2738
+ on
2739
+
2740
+ t0 − R2, t0 − 4ρ2�
2741
+ , χ ≡ 1 on
2742
+
2743
+ t0 − ρ2, t0
2744
+
2745
+ and ∂tχ ≤ c
2746
+ ρ2 on
2747
+
2748
+ t0 − 4ρ2, t0 − ρ2�
2749
+ . With these choices,
2750
+ (3.24) gives
\[
\begin{split}
&\sup_{\tau\in(t_0-4\rho^2,\,t_0)}\int_{B_\rho}\chi(\tau)\,\big(|Du^\varepsilon(x,\tau)|^2-1-\delta\big)_+\,dx
+\int_{Q_\rho}g\big(|Du^\varepsilon|^2\big)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\,dz\\
\le\;&\frac{c}{\rho^2}\int_{Q_{2\rho}}\big(1+|Du^\varepsilon|^p\big)\,dz
+\frac{c}{\rho^2\delta^p}\int_{Q_{2\rho}}|f^\varepsilon|^2\,dz
+c\,\frac{|Q_{2\rho}|}{\rho^2}+c\,|B_{2\rho}|,
\end{split}
\]
and since ρ < 2ρ < R < 1 and Q_{2ρ} = B_{2ρ} × (t₀ − 4ρ², t₀), we have
\[
\begin{split}
&\sup_{\tau\in(t_0-4\rho^2,\,t_0)}\int_{B_\rho}\big(|Du^\varepsilon(x,\tau)|^2-1-\delta\big)_+\,dx
+\int_{Q_\rho}g\big(|Du^\varepsilon|^2\big)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2\,dz\\
\le\;&\frac{c}{\rho^2}\int_{Q_{2\rho}}\big(1+|Du^\varepsilon|^p\big)\,dz
+\frac{c}{\rho^2\delta^p}\int_{Q_{2\rho}}|f^\varepsilon|^2\,dz.
\end{split}\tag{3.25}
\]
Now, with G_δ(t) defined at (2.9) and recalling (2.10), we have
\[
\begin{split}
\Big|D\,G_\delta\big((|Du^\varepsilon|-\delta-1)_+\big)\Big|^2
&\le\frac{(|Du^\varepsilon|-\delta-1)_+^2}{1+\delta+(|Du^\varepsilon|-\delta-1)_+^2}\,\big((|Du^\varepsilon|-\delta-1)_++\delta\big)^{p-2}\,|D^2u^\varepsilon|^2\\
&=g\big(|Du^\varepsilon|\big)\,\big((|Du^\varepsilon|-\delta-1)_++\delta\big)^{p-2}\,|D^2u^\varepsilon|^2.
\end{split}
\]
Since g(s) is nondecreasing, we have g(s) ≤ g(s²), and therefore
\[
\Big|D\,G_\delta\big((|Du^\varepsilon|-\delta-1)_+\big)\Big|^2
\le g\big(|Du^\varepsilon|^2\big)\,(|Du^\varepsilon|-1)_+^{p-2}\,|D^2u^\varepsilon|^2
\le\frac{c_p}{\delta^2}\; g\big(|Du^\varepsilon|^2\big)\,\frac{(|Du^\varepsilon|-1)_+^{p}}{|Du^\varepsilon|^{2}}\,|D^2u^\varepsilon|^2,
\tag{3.26}
\]
where we also used that g(s) = 0 for 0 < s ≤ 1 + δ. Using (3.26) in the left-hand side of (3.25), we obtain
\[
\begin{split}
&\sup_{\tau\in(t_0-4\rho^2,\,t_0)}\int_{B_\rho}\big(|Du^\varepsilon(x,\tau)|^2-1-\delta\big)_+\,dx
+\int_{Q_\rho}\Big|D\,G_\delta\big((|Du^\varepsilon|-\delta-1)_+\big)\Big|^2\,dz\\
\le\;&\frac{c}{\rho^2\delta^2}\left(\int_{Q_{2\rho}}\big(1+|Du^\varepsilon|^p\big)\,dz+\frac{1}{\delta^p}\int_{Q_{2\rho}}|f^\varepsilon|^2\,dz\right),
\end{split}
\]
which is (3.4).
Combining Lemma 3.1 and Lemma 2.8, we have the following.

Corollary 3.2. Let u^ε ∈ C⁰(t₀ − R², t₀; L²(B_R)) ∩ L^p(t₀ − R², t₀; u + W^{1,p}_0(B_R)) be the unique solution to (3.1). Then the estimate
\[
\int_{Q_{\frac\rho2}}\Big|\tau_h\,G_\delta\big((|Du^\varepsilon|-\delta-1)_+\big)\Big|^2\,dz
\le\frac{c\,|h|^2}{\rho^2\delta^2}\left(\int_{Q_{2\rho}}\big(1+|Du^\varepsilon|^p\big)\,dz+\frac{1}{\delta^p}\int_{Q_{2\rho}}|f^\varepsilon|^2\,dz\right)
\tag{3.27}
\]
holds for |h| < ρ/4, for any parabolic cylinder Q_{2ρ} ⋐ Q_R(z₀).
4  Proof of Theorem 1.1

This section is devoted to the proof of Theorem 1.1, which will be divided into two steps.
In the first one we shall establish an estimate that allows us to measure the L²-distance between H_{p/2}(Du) and H_{p/2}(Du^ε) in terms of the L²-distance between f and f^ε.
In the second one, we conclude by combining this comparison estimate with the one obtained for the difference quotient of the solution to the regularized problem at (3.27).
Proof of Theorem 1.1. Step 1: the comparison estimate.
We formally proceed by testing equations (1.1) and (3.1) with the map φ = k(t)(u^ε − u), where k ∈ W^{1,∞}(ℝ) is chosen such that
\[
k(t)=
\begin{cases}
1 & \text{if }\ t\le t_2,\\[4pt]
-\dfrac{1}{\omega}\,(t-t_2-\omega) & \text{if }\ t_2<t<t_2+\omega,\\[4pt]
0 & \text{if }\ t\ge t_2+\omega,
\end{cases}
\]
with t₀ − R² < t₂ < t₂ + ω < t₀, and then letting ω → 0. We observe that, at this stage, it is important that u^ε and u agree on the parabolic boundary ∂_par Q_R(z₀).
Proceeding in a standard way (see for example [13]), for almost every t₂ ∈ (t₀ − R², t₀), we find
\[
\begin{split}
\frac12&\int_{B_R(x_0)}|u^\varepsilon(x,t_2)-u(x,t_2)|^2\,dx
+\int_{Q_{R,t_2}}\big\langle H_{p-1}(Du^\varepsilon)-H_{p-1}(Du),\,Du^\varepsilon-Du\big\rangle\,dz\\
&+\varepsilon\int_{Q_{R,t_2}}\Big\langle\big(1+|Du^\varepsilon|^2\big)^{\frac{p-2}{2}}Du^\varepsilon,\,Du^\varepsilon-Du\Big\rangle\,dz
=\int_{Q_{R,t_2}}(f-f^\varepsilon)\,(u^\varepsilon-u)\,dz,
\end{split}\tag{4.1}
\]
where we used the abbreviation Q_{R,t₂} = B_R(x₀) × (t₀ − R², t₂).
Using Lemma 2.1, the Cauchy–Schwarz inequality as well as Young's inequality, from (4.1) we infer
\[
\begin{split}
\lambda_p&\sup_{t\in(t_0-R^2,t_0)}\|u^\varepsilon(\cdot,t)-u(\cdot,t)\|^2_{L^2(B_R(x_0))}
+\lambda_p\int_{Q_R}\Big|H_{\frac p2}(Du^\varepsilon)-H_{\frac p2}(Du)\Big|^2\,dz
+\varepsilon\int_{Q_R(z_0)}|Du^\varepsilon|^p\,dz\\
\le\;&\int_{Q_R}|f-f^\varepsilon|\,|u^\varepsilon-u|\,dz
+\varepsilon\int_{Q_R}|Du^\varepsilon|^{p-1}\,|Du|\,dz\\
\le\;&\int_{Q_R}|f-f^\varepsilon|\,|u^\varepsilon-u|\,dz
+\varepsilon\,c_p\int_{Q_R}|Du|^p\,dz
+\frac{\varepsilon}{2}\int_{Q_R}|Du^\varepsilon|^p\,dz,
\end{split}\tag{4.2}
\]
where we set λ_p = min{1/2, 4/p²}. Reabsorbing the last integral on the right-hand side of (4.2) into the left-hand side, we arrive at
\[
\begin{split}
&\sup_{t\in(t_0-R^2,t_0)}\|u^\varepsilon(\cdot,t)-u(\cdot,t)\|^2_{L^2(B_R(x_0))}
+\int_{Q_R}\Big|H_{\frac p2}(Du^\varepsilon)-H_{\frac p2}(Du)\Big|^2\,dz
+\frac{\varepsilon}{2\lambda_p}\int_{Q_R}|Du^\varepsilon|^p\,dz\\
\le\;&\varepsilon\,c_p\int_{Q_R}|Du|^p\,dz
+c_p\int_{Q_R}|f-f^\varepsilon|\,|u^\varepsilon-u|\,dz.
\end{split}\tag{4.3}
\]
Using in turn Hölder's inequality and Lemma 2.5, we get
\[
\begin{split}
\tilde I:=\int_{Q_R}|f-f^\varepsilon|\,|u^\varepsilon-u|\,dz
\le\;&C(R,n,p)\,\|f-f^\varepsilon\|_{L^2(Q_R)}\left(\int_{Q_R}|u^\varepsilon-u|^{p+\frac{2p}{n}}\,dz\right)^{\frac{n}{p(n+2)}}\\
\le\;&c(n,p,R)\,\|f-f^\varepsilon\|_{L^2(Q_R)}\left(\int_{Q_R}|Du^\varepsilon-Du|^{p}\,dz\right)^{\frac{n}{p(n+2)}}\\
&\cdot\left(\sup_{t\in(t_0-R^2,t_0)}\|u^\varepsilon(\cdot,t)-u(\cdot,t)\|^2_{L^2(B_R(x_0))}\right)^{\frac{1}{n+2}}.
\end{split}\tag{4.4}
\]
Now, let us notice that
\[
\begin{split}
\int_{Q_R}|Du^\varepsilon-Du|^p\,dz
\le\;&c_p\int_{Q_R\cap\{|Du^\varepsilon|\ge1\}}\big(|Du^\varepsilon|-1+1\big)^p\,dz
+c_p\int_{Q_R\cap\{|Du^\varepsilon|<1\}}|Du^\varepsilon|^p\,dz
+c_p\int_{Q_R}|Du|^p\,dz\\
\le\;&c_p\int_{Q_R}(|Du^\varepsilon|-1)_+^p\,dz
+c_p\int_{Q_R}\big(|Du|^p+1\big)\,dz\\
\le\;&c_p\int_{Q_R}\Big|H_{\frac p2}(Du^\varepsilon)-H_{\frac p2}(Du)+H_{\frac p2}(Du)\Big|^2\,dz
+c_p\int_{Q_R}\big(|Du|^p+1\big)\,dz\\
\le\;&c_p\int_{Q_R}\Big|H_{\frac p2}(Du^\varepsilon)-H_{\frac p2}(Du)\Big|^2\,dz
+c_p\int_{Q_R}\big(|Du|^p+1\big)\,dz.
\end{split}\tag{4.5}
\]
Inserting (4.5) in (4.4), we get
\[
\begin{split}
\tilde I\le\;&c(n,p,R)\,\|f-f^\varepsilon\|_{L^2(Q_R(z_0))}
\left(\int_{Q_R}\Big|H_{\frac p2}(Du^\varepsilon)-H_{\frac p2}(Du)\Big|^2\,dz+\int_{Q_R}\big(|Du|^p+1\big)\,dz\right)^{\frac{n}{p(n+2)}}\\
&\cdot\left(\sup_{t\in(t_0-R^2,t_0)}\|u^\varepsilon(\cdot,t)-u(\cdot,t)\|^2_{L^2(B_R(x_0))}\right)^{\frac{1}{n+2}}\\
\le\;&c(n,p,R)\,\|f-f^\varepsilon\|_{L^2(Q_R)}
\left(\int_{Q_R}\Big|H_{\frac p2}(Du^\varepsilon)-H_{\frac p2}(Du)\Big|^2\,dz\right)^{\frac{n}{p(n+2)}}
\left(\sup_{t\in(t_0-R^2,t_0)}\|u^\varepsilon(\cdot,t)-u(\cdot,t)\|^2_{L^2(B_R(x_0))}\right)^{\frac{1}{n+2}}\\
&+c(n,p,R)\,\|f-f^\varepsilon\|_{L^2(Q_R)}
\left(\int_{Q_R}\big(|Du|^p+1\big)\,dz\right)^{\frac{n}{p(n+2)}}
\left(\sup_{t\in(t_0-R^2,t_0)}\|u^\varepsilon(\cdot,t)-u(\cdot,t)\|^2_{L^2(B_R(x_0))}\right)^{\frac{1}{n+2}},
\end{split}
\]
and, by Young's inequality, we get
\[
\begin{split}
\tilde I\le\;&\beta\int_{Q_R}\Big|H_{\frac p2}(Du^\varepsilon)-H_{\frac p2}(Du)\Big|^2\,dz
+\beta\sup_{t\in(t_0-R^2,t_0)}\|u^\varepsilon(\cdot,t)-u(\cdot,t)\|^2_{L^2(B_R(x_0))}\\
&+c(n,p,R,\beta)\,\|f-f^\varepsilon\|_{L^2(Q_R)}^{\frac{n+2}{n+1}}\left(\int_{Q_R}\big(|Du|^p+1\big)\,dz\right)^{\frac{n}{p(n+1)}}
+c(n,p,R,\beta)\,\|f-f^\varepsilon\|_{L^2(Q_R)}^{\frac{p(n+2)}{n(p-1)+p}}.
\end{split}\tag{4.6}
\]
Inserting (4.6) in (4.3), we obtain
\[
\begin{split}
&\sup_{t\in(t_0-R^2,t_0)}\|u^\varepsilon(\cdot,t)-u(\cdot,t)\|^2_{L^2(B_R(x_0))}
+\int_{Q_R}\Big|H_{\frac p2}(Du^\varepsilon)-H_{\frac p2}(Du)\Big|^2\,dz
+\frac{\varepsilon}{2\lambda_p}\int_{Q_R}|Du^\varepsilon|^p\,dz\\
\le\;&\beta\int_{Q_R}\Big|H_{\frac p2}(Du^\varepsilon)-H_{\frac p2}(Du)\Big|^2\,dz
+\beta\sup_{t\in(t_0-R^2,t_0)}\|u^\varepsilon(\cdot,t)-u(\cdot,t)\|^2_{L^2(B_R(x_0))}\\
&+c(n,p,R,\beta)\,\|f-f^\varepsilon\|_{L^2(Q_R)}^{\frac{n+2}{n+1}}\left(\int_{Q_R}\big(|Du|^p+1\big)\,dz\right)^{\frac{n}{p(n+1)}}
+c(n,p,R,\beta)\,\|f-f^\varepsilon\|_{L^2(Q_R)}^{\frac{p(n+2)}{n(p-1)+p}}
+\varepsilon\,c_p\int_{Q_R}|Du|^p\,dz.
\end{split}\tag{4.7}
\]
Choosing β = 1/2 and neglecting the third non-negative term on the left-hand side of (4.7), we get
\[
\begin{split}
&\sup_{t\in(t_0-R^2,t_0)}\|u^\varepsilon(\cdot,t)-u(\cdot,t)\|^2_{L^2(B_R(x_0))}
+\int_{Q_R}\Big|H_{\frac p2}(Du^\varepsilon)-H_{\frac p2}(Du)\Big|^2\,dz\\
\le\;&c(n,p,R)\,\|f-f^\varepsilon\|_{L^2(Q_R)}^{\frac{n+2}{n+1}}\left(\int_{Q_R}\big(|Du|^p+1\big)\,dz\right)^{\frac{n}{p(n+1)}}
+c(n,p,R)\,\|f-f^\varepsilon\|_{L^2(Q_R)}^{\frac{p(n+2)}{n(p-1)+p}}
+\varepsilon\,c_p\int_{Q_R}|Du|^p\,dz.
\end{split}\tag{4.8}
\]
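The exponent bookkeeping leading from (4.4) to the two powers of ∥f − f^ε∥ in (4.6)–(4.8) can be checked with exact arithmetic: the pair ((n+2)/(n+1), n+2) is Hölder-conjugate, and so is the pair obtained when the remaining product is split to produce the exponent p(n+2)/(n(p−1)+p). A small sketch using Python's `fractions` module, with symbolic n, p replaced by sample integer values (an illustration, not a proof):

```python
from fractions import Fraction as Fr

checked = 0
for n in range(1, 10):
    for p in range(2, 10):
        n_, p_ = Fr(n), Fr(p)
        # First Young splitting: (n+2)/(n+1) and n+2 are conjugate exponents.
        r1, r1c = (n_ + 2) / (n_ + 1), n_ + 2
        assert 1 / r1 + 1 / r1c == 1
        # Second splitting: ||f - f^eps||^{(n+2)/(n+1)} is raised to
        # r2 = [p(n+2)/(n(p-1)+p)] / [(n+2)/(n+1)], conjugate to r2c = p(n+1)/n.
        r2 = (p_ * (n_ + 2) / (n_ * (p_ - 1) + p_)) / ((n_ + 2) / (n_ + 1))
        r2c = p_ * (n_ + 1) / n_
        assert 1 / r2 + 1 / r2c == 1
        checked += 1
print(checked)  # 72 exponent pairs verified
```

In particular, r2 simplifies to p(n+1)/(np+p−n), and 1/r2 + 1/r2c = (np+p−n+n)/(p(n+1)) = 1, which is exactly why the integral of (|Du|^p + 1) ends up with the exponent n/(p(n+1)).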
For further needs, we also record that, combining (4.5) and (4.8), we have
\[
\begin{split}
\int_{Q_R}|Du^\varepsilon|^p\,dz
\le\;&c(n,p,R)\,\|f-f^\varepsilon\|_{L^2(Q_R)}^{\frac{n+2}{n+1}}\left(\int_{Q_R}\big(|Du|^p+1\big)\,dz\right)^{\frac{n}{p(n+1)}}
+c(n,p,R)\,\|f-f^\varepsilon\|_{L^2(Q_R)}^{\frac{p(n+2)}{n(p-1)+p}}\\
&+\varepsilon\,c_p\int_{Q_R}|Du|^p\,dz
+c_p\int_{Q_R}\big(|Du|^p+1\big)\,dz.
\end{split}\tag{4.9}
\]
Step 2: The conclusion.
Let us fix ρ > 0 such that Q_{2ρ} ⊂ Q_R. We start by observing that
\[
\begin{split}
\int_{Q_{\frac\rho2}}\Big|\tau_h\,G_\delta\big((|Du|-\delta-1)_+\big)\Big|^2\,dz
\le\;&c\int_{Q_{\frac\rho2}}\Big|\tau_h\,G_\delta\big((|Du^\varepsilon|-\delta-1)_+\big)\Big|^2\,dz\\
&+c\int_{Q_\rho}\Big|G_\delta\big((|Du^\varepsilon|-\delta-1)_+\big)-G_\delta\big((|Du|-\delta-1)_+\big)\Big|^2\,dz.
\end{split}
\]
We estimate the right-hand side of the previous inequality using (3.27) and (2.11), as follows:
\[
\begin{split}
\int_{Q_{\frac\rho2}}\Big|\tau_h\,G_\delta\big((|Du|-\delta-1)_+\big)\Big|^2\,dz
\le\;&\frac{c\,|h|^2}{\rho^2}\left(\int_{Q_{2\rho}}\big(1+|Du^\varepsilon|^p\big)\,dz+\delta^{2-p}\int_{Q_{2\rho}}|f^\varepsilon|^2\,dz\right)\\
&+c_p\int_{Q_{2\rho}}\Big|H_{\frac p2}(Du^\varepsilon)-H_{\frac p2}(Du)\Big|^2\,dz,
\end{split}
\]
that, thanks to (4.8), implies
\[
\begin{split}
\int_{Q_{\frac\rho2}}\Big|\tau_h\,G_\delta\big((|Du|-\delta-1)_+\big)\Big|^2\,dz
\le\;&\frac{c\,|h|^2}{\rho^2}\left(\int_{Q_{2\rho}}\big(1+|Du^\varepsilon|^p\big)\,dz+\delta^{2-p}\int_{Q_{2\rho}}|f^\varepsilon|^2\,dz\right)\\
&+c(n,p,R)\,\|f-f^\varepsilon\|_{L^2(Q_R)}^{\frac{n+2}{n+1}}\left(\int_{Q_R}\big(|Du|^p+1\big)\,dz\right)^{\frac{n}{p(n+1)}}\\
&+c(n,p,R)\,\|f-f^\varepsilon\|_{L^2(Q_R)}^{\frac{p(n+2)}{n(p-1)+p}}
+\varepsilon\,c_p\int_{Q_R}|Du|^p\,dz.
\end{split}\tag{4.10}
\]
Now, using (4.9), we get
\[
\begin{split}
\int_{Q_{2\rho}}\big(1+|Du^\varepsilon|^p\big)\,dz
\le\;&c(n,p,R)\,\|f-f^\varepsilon\|_{L^2(Q_R)}^{\frac{n+2}{n+1}}\left(\int_{Q_R}\big(|Du|^p+1\big)\,dz\right)^{\frac{n}{p(n+1)}}
+c(n,p,R)\,\|f-f^\varepsilon\|_{L^2(Q_R)}^{\frac{p(n+2)}{n(p-1)+p}}\\
&+\varepsilon\,c_p\int_{Q_R}|Du|^p\,dz
+c_p\int_{Q_R}\big(|Du|^p+1\big)\,dz,
\end{split}
\]
which, combined with (4.10), implies
\[
\begin{split}
\int_{Q_{\frac\rho2}}\Big|\tau_h\,G_\delta\big((|Du|-\delta-1)_+\big)\Big|^2\,dz
\le\;&\frac{c(n,p)\,|h|^2}{\rho^2}\Bigg(c(R)\,\|f-f^\varepsilon\|_{L^2(Q_R)}^{\frac{n+2}{n+1}}\left(\int_{Q_R}\big(|Du|^p+1\big)\,dz\right)^{\frac{n}{p(n+1)}}
+c(R)\,\|f-f^\varepsilon\|_{L^2(Q_R)}^{\frac{p(n+2)}{n(p-1)+p}}\\
&+\varepsilon\int_{Q_R}|Du|^p\,dz
+\int_{Q_R}\big(|Du|^p+1\big)\,dz
+\delta^{2-p}\int_{Q_R}|f^\varepsilon|^2\,dz\Bigg).
\end{split}
\]
Taking the limit as ε → 0, and since f^ε → f strongly in L²(Q_R), we obtain
\[
\int_{Q_{\frac\rho2}}\Big|\tau_h\,G_\delta\big((|Du|-\delta-1)_+\big)\Big|^2\,dz
\le\frac{c(n,p)\,|h|^2}{\rho^2}\left(\int_{Q_R}\big(|Du|^p+1\big)\,dz+\delta^{2-p}\int_{Q_R}|f|^2\,dz\right),
\]
and thanks to Lemma 2.9, we have G_δ((|Du| − δ − 1)₊) ∈ L²(t₀ − ρ², t₀; W^{1,2}(B_ρ)) with the following estimate:
\[
\int_{Q_{\frac\rho2}}\Big|D\,G_\delta\big((|Du|-\delta-1)_+\big)\Big|^2\,dz
\le\frac{c(n,p)}{\rho^2}\left(\int_{Q_R}\big(|Du|^p+1\big)\,dz+\delta^{2-p}\int_{Q_R}|f|^2\,dz\right).
\]
Since the previous estimate holds true for any ρ > 0 such that 4ρ < R, we may choose ρ = R/8, thus getting (1.2).
+ 5
3600
+ Proof of Theorem 1.2
3601
+ The higher differentiability result of Theorem 1.1 allows us to argue as in [13, Lemma 5.3] and [17,
3602
+ Lemma 3.2] to obtain the proof of Theorem 1.2.
3603
+ Proof of Theorem 1.2. We start observing that
3604
+ ���D
3605
+ ��
3606
+
3607
+
3608
+ (|Duε| − 1 − δ)+
3609
+ �� 4
3610
+ np + 1����
3611
+
3612
+ 24
3613
+
3614
+ c
3615
+ ��Gδ
3616
+
3617
+ (|Duε| − 1 − δ)+
3618
+ ���
3619
+ 4
3620
+ np ��D
3621
+
3622
+
3623
+
3624
+ (|Duε| − 1 − δ)+
3625
+ ���� ,
3626
+ (5.1)
3627
+ where c ≡ c(n, p) > 0 and Gδ(t) is the function defined at (2.9).
3628
+ With the notation we used in the previous sections, for B2ρ (x0) ⋐ BR (x0), let ϕ ∈ C∞
3629
+ 0 (Bρ (x0)) and
3630
+ χ ∈ W 1,∞ ((0, T )) be two non-negative cut-off functions with χ(0) = 0 and ∂tχ ≥ 0. Now, we fix a
3631
+ time t0 ∈ (0, T ) and apply the Sobolev embedding theorem on the time slices Σt := Bρ(x0) × {t} for
3632
+ almost every t ∈ (0, t0), to infer that
3633
+ ˆ
3634
+ Σt
3635
+ ϕ2 ��
3636
+
3637
+
3638
+ (|Duε| − 1 − δ)+
3639
+ �� 4
3640
+ np + 1�2
3641
+ dx
3642
+
3643
+ c
3644
+ �ˆ
3645
+ Σt
3646
+ ���D
3647
+
3648
+ ϕ
3649
+
3650
+
3651
+
3652
+ (|Duε| − 1 − δ)+
3653
+ �� 4
3654
+ np + 1����
3655
+ 2n
3656
+ n+2 dx
3657
+ � n+2
3658
+ n
3659
+
3660
+ c
3661
+ �ˆ
3662
+ Σt
3663
+ ���ϕ D
3664
+ ��
3665
+
3666
+
3667
+ (|Duε| − 1 − δ)+
3668
+ �� 4
3669
+ np + 1����
3670
+ 2n
3671
+ n+2 dx
3672
+ � n+2
3673
+ n
3674
+ +c
3675
+ �ˆ
3676
+ Σt
3677
+ ���
3678
+ ��Gδ
3679
+
3680
+ (|Duε| − 1 − δ)+
3681
+ ���
3682
+ 4
3683
+ np + 1 Dϕ
3684
+ ���
3685
+ 2n
3686
+ n+2 dx
3687
+ � n+2
3688
+ n
3689
+ =:
3690
+ c I1(t) + c I2(t),
3691
+ where, in the second to last line, we have applied Minkowski’s and Young’s inequalities one after the
3692
+ other. We estimate I1(t) and I2(t) separately. Let us first consider I1(t). Using (5.1), Lemma 2.12
3693
+ and Hölder’s inequality with exponents
3694
+ �n + 2
3695
+ n
3696
+ , n + 2
3697
+ 2
3698
+
3699
+ , we deduce
3700
+ I1(t)
3701
+
3702
+ c
3703
+ �ˆ
3704
+ Σt
3705
+ ϕ
3706
+ 2n
3707
+ n+2
3708
+
3709
+ (|Duε| − 1)
3710
+ 2
3711
+ n
3712
+ +
3713
+ ��DGδ
3714
+
3715
+ (|Duε| − 1 − δ)+
3716
+ ���
3717
+ � 2n
3718
+ n+2 dx
3719
+ � n+2
3720
+ n
3721
+
3722
+ c
3723
+ ˆ
3724
+ Σt
3725
+ ϕ2 ��DGδ
3726
+
3727
+ (|Duε| − 1 − δ)+
3728
+ ���2 dx
3729
+ �ˆ
3730
+ supp(ϕ)
3731
+ (|Duε| − 1)2
3732
+ + dx
3733
+ � 2
3734
+ n
3735
+
3736
+ c
3737
+ ˆ
3738
+ Σt
3739
+ ϕ2 ��DGδ
3740
+
3741
+ (|Duε| − 1 − δ)+
3742
+ ���2 dx
3743
+ �ˆ
3744
+ supp(ϕ)
3745
+ |Duε|2 dx
3746
+ � 2
3747
+ n
3748
+ .
3749
+ We now turn our attention to I2(t). Lemma 2.12 and Hölder’s inequality yield
3750
+ I2(t)
3751
+
3752
+ c
3753
+ �ˆ
3754
+ Σt
3755
+ (|Duε| − 1)
3756
+ np + 4
3757
+ n+2
3758
+ +
3759
+ |Dϕ|
3760
+ 2n
3761
+ n+2 dx
3762
+ � n+2
3763
+ n
3764
+
3765
+ c
3766
+ �ˆ
3767
+ Σt
3768
+
3769
+ |Dϕ|2 |Duε|p�
3770
+ n
3771
+ n+2 |Du|
3772
+ 4
3773
+ n+2 dx
3774
+ � n+2
3775
+ n
3776
+
3777
+ c
3778
+ ˆ
3779
+ Σt
3780
+ |Dϕ|2 |Duε|p dx
3781
+ �ˆ
3782
+ supp(ϕ)
3783
+ |Duε|2 dx
3784
+ � 2
3785
+ n
3786
+ .
3787
+ Putting together the last three estimates, using Lemma 2.12 in the left hand side, and integrating
3788
+ with respect to time, we obtain
3789
+ ˆ
3790
+ Qt0
3791
+ χϕ2 (|Duε| − 1)
3792
+ p + 4
3793
+ n
3794
+ +
3795
+ dz
3796
+
3797
+ c
3798
+ ˆ t0
3799
+ 0
3800
+ χ
3801
+ �ˆ
3802
+ supp(ϕ)
3803
+ |Duε(x, t)|2 dx
3804
+ � 2
3805
+ n
3806
+ ·
3807
+ ·
3808
+ �ˆ
3809
+ Σt
3810
+
3811
+ ϕ2 ��DGδ
3812
+
3813
+ (|Duε| − 1 − δ)+
3814
+ ���2 + |Dϕ|2 |Du|p�
3815
+ dx
3816
+
3817
+ dt
3818
+
3819
+ c
3820
+ ˆ
3821
+ Qt0
3822
+ χ
3823
+
3824
+ ϕ2 ��DGδ
3825
+
3826
+ (|Duε| − 1 − δ)+
3827
+ ���2 + |Dϕ|2 |Du|p�
3828
+ dz
3829
+
3830
+ 25
3831
+ ·
3832
+
3833
+ sup
3834
+ 0<t<t0, χ(t)̸=0
3835
+ ˆ
3836
+ supp(ϕ)
3837
+ |Duε(x, t)|2 dx
3838
+ � 2
3839
+ n
3840
+ ,
3841
+ (5.2)
3842
+ where we have used the abbreviation Qt0 := Bρ (x0) × (0, t0).
3843
+ Now we choose χ ∈ W 1,∞ ((0, T )) such that χ ≡ 0 on
3844
+
3845
+ 0, t0 − ρ2�
3846
+ , χ ≡ 1 on
3847
+
3848
+ t0 −
3849
+ �ρ
3850
+ 2
3851
+ �2
3852
+ , T
3853
+
3854
+ and
3855
+ ∂tχ ≥ 0. For ϕ ∈ C∞
3856
+ 0 (Bρ (x0)), we assume that ϕ ≡ 1 on B ρ
3857
+ 2 (x0), 0 ≤ ϕ ≤ 1 and |Dϕ| ≤ C
3858
+ ρ .
3859
+ With these choices (5.2) turns into
3860
+ ˆ
3861
+ Q ρ
3862
+ 2
3863
+ (|Duε| − 1)
3864
+ p + 4
3865
+ n
3866
+ +
3867
+ dz
3868
+
3869
+ c(n, p)
3870
+ ˆ
3871
+
3872
+ ���DGδ (|Duε| − 1 − δ)+)
3873
+ ��2 + ρ−2 |Duε|p�
3874
+ dz
3875
+ ·
3876
+
3877
+ sup
3878
+ t0−ρ2<t<t0
3879
+ ˆ
3880
+ Bρ(x0)
3881
+ |Duε(x, t)|2 dx
3882
+ � 2
3883
+ n
3884
+ .
3885
+ (5.3)
3886
+ We now use (3.4), in order to estimate the first and second integral on the right-hand side of (5.3),
3887
+ thus getting
3888
+ ˆ
3889
+ Q ρ
3890
+ 2
3891
+ (|Duε| − 1)
3892
+ p + 4
3893
+ n
3894
+ +
3895
+ dz ≤
3896
+ c
3897
+ ρ
3898
+ 2(n+2)
3899
+ n
3900
+ �ˆ
3901
+ Q2ρ
3902
+ (1 + |Duε|p) dz + δ2−p
3903
+ ˆ
3904
+ Q2ρ
3905
+ |f ε|2 dz
3906
+ � 2
3907
+ n +1
3908
+ .
3909
+ Now we use (4.9) to deduce that
3910
+ ˆ
3911
+ Q ρ
3912
+ 2
3913
+ (|Duε| − 1)
3914
+ p + 4
3915
+ n
3916
+ +
3917
+ dz
3918
+
3919
+ c (n, p)
3920
+ ρ
3921
+ 2(n+2)
3922
+ n
3923
+
3924
+ c(ρ) ∥f − f ε∥
3925
+ n+2
3926
+ n+1
3927
+ L2(Q2ρ(z0)) ·
3928
+ �ˆ
3929
+ Q2ρ
3930
+ (|Du|p + 1) dz
3931
+
3932
+ n
3933
+ p(n+1) 
3934
+
3935
+ 2
3936
+ n +1
3937
+ +c (n, p)
3938
+ ρ
3939
+ 2(n+2)
3940
+ n
3941
+
3942
+ c(ρ) ∥f − f ε∥
3943
+ p(n+2)
3944
+ n(p−1)+p
3945
+ L2(Q2ρ) + ε cp
3946
+ ˆ
3947
+ Q2ρ
3948
+ |Du|p dz
3949
+ � 2
3950
+ n +1
3951
+ +c (n, p)
3952
+ ρ
3953
+ 2(n+2)
3954
+ n
3955
+ �ˆ
3956
+ Q2ρ
3957
+ (|Du|p + 1) dz + δ2−p
3958
+ ˆ
3959
+ Q2ρ
3960
+ |f ε|2 dz
3961
+ � 2
3962
+ n +1
3963
+ .
3964
+ (5.4)
3965
+ Let us observe that estimate (4.8) in particular implies that
3966
+ ˆ
3967
+ QR
3968
+ ���H p
3969
+ 2 (Duε) − H p
3970
+ 2 (Du)
3971
+ ���
3972
+ 2
3973
+ dz
3974
+
3975
+ c (n, p, R) ∥f − f ε∥
3976
+ n+2
3977
+ n+1
3978
+ L2(QR) ·
3979
+ �ˆ
3980
+ QR
3981
+ (|Du|p + 1) dz
3982
+
3983
+ n
3984
+ p(n+1)
3985
+ +c (n, p, R) ∥f − f ε∥
3986
+ p(n+2)
3987
+ n(p−1)+p
3988
+ L2(QR)
3989
+ + ε cp
3990
+ ˆ
3991
+ QR
3992
+ |Du|p dz.
3993
+ By the strong convergence of f ε → f in L2 (QR), passing to the limit as ε → 0, from previous estimate
3994
+ we deduce
3995
+ lim
3996
+ ε→0
3997
+ ˆ
3998
+ QR
3999
+ ���H p
4000
+ 2 (Duε) − H p
4001
+ 2 (Du)
4002
+ ���
4003
+ 2
4004
+ dz = 0
4005
+ that is H p
4006
+ 2 (Duε) → H p
4007
+ 2 (Du), strongly in L2 (QR) . Therefore, up to a not relabelled subsequence, we
4008
+ also have H p
4009
+ 2 (Duε) → H p
4010
+ 2 (Du), a.e. in QR (z0) and so
4011
+ (|Duε| − 1)+ → (|Du| − 1)+
4012
+ a.e. in QR (z0)
By Fatou's Lemma, taking the limit as ε → 0 on both sides of (5.4), we get
\[
\int_{Q_{\frac\rho2}}(|Du|-1)_+^{\,p+\frac4n}\,dz
\le\liminf_{\varepsilon\to0}\int_{Q_{\frac\rho2}}(|Du^\varepsilon|-1)_+^{\,p+\frac4n}\,dz
\le\frac{c(n,p)}{\rho^{\frac{2(n+2)}{n}}}\left(\int_{Q_{2\rho}}\big(|Du|^p+1\big)\,dz+\delta^{2-p}\int_{Q_{2\rho}}|f|^2\,dz\right)^{\frac2n+1},
\]
which holds for any δ ∈ (0, 1), so we can fix δ = 1/2 to get the conclusion (1.3).