diff --git "a/GtFIT4oBgHgl3EQfWyuf/content/tmp_files/2301.11241v1.pdf.txt" "b/GtFIT4oBgHgl3EQfWyuf/content/tmp_files/2301.11241v1.pdf.txt" new file mode 100644--- /dev/null +++ "b/GtFIT4oBgHgl3EQfWyuf/content/tmp_files/2301.11241v1.pdf.txt" @@ -0,0 +1,3956 @@ +On the Convergence of No-Regret Learning Dynamics in +Time-Varying Games +Ioannis Anagnostides1, Ioannis Panageas2, Gabriele Farina3, and Tuomas Sandholm4 +1,3,4Carnegie Mellon University +2University of California Irvine +4Strategy Robot, Inc. +4Optimized Markets, Inc. +4Strategic Machine, Inc. +{ianagnos,gfarina,sandholm}@cs.cmu.edu, and ipanagea@ics.uci.edu +January 27, 2023 +Abstract +Most of the literature on learning in games has focused on the restrictive setting where +the underlying repeated game does not change over time. +Much less is known about the +convergence of no-regret learning algorithms in dynamic multiagent settings. In this paper, +we characterize the convergence of optimistic gradient descent (OGD) in time-varying games by +drawing a strong connection with dynamic regret. Our framework yields sharp convergence +bounds for the equilibrium gap of OGD in zero-sum games parameterized on the minimal first-order +variation of the Nash equilibria and the second-order variation of the payoff matrices, subsuming +known results for static games. Furthermore, we establish improved second-order variation +bounds under strong convexity-concavity, as long as each game is repeated multiple times. Our +results also apply to time-varying general-sum multi-player games via a bilinear formulation of +correlated equilibria, which has novel implications for meta-learning and for obtaining refined +variation-dependent regret bounds, addressing questions left open in prior papers. Finally, we +leverage our framework to also provide new insights on dynamic regret guarantees in static games. +arXiv:2301.11241v1 [cs.LG] 26 Jan 2023 + +1 +Introduction +Most of the classical results in the literate on learning in games—exemplified by, among others, +the work of Hart and Mas-Colell [HM00], Foster and Vohra [FV97], and Freund and Schapire +[FS99]—rest on the assumption that the underlying repeated game remains invariant throughout the +learning process. Yet, in many learning environments that is unrealistic [Duv+22; Zha+22; Car+19; +MS21]. One such class is settings where the underlying game is actually changing, such as routing +problems on the internet [MO11], online advertising auctions [LST16], and dynamic mechanism +design [Pap+22; DMZ21]. Another such class consists of settings in which many similar games need +to be solved [Har+22]. For example, one may want to solve variations of a game for the purpose +of sensitivity analysis with respect to the modeling assumptions used to construct the game model. +Another example is solving multiple versions of a game any one of which might be faced in the future. +Despite the considerable interest in such dynamic multiagent environments, much less is known +about the convergence of no-regret learning algorithms in time-varying games. No-regret dynamics +are natural learning algorithms that have desirable convergence properties in static settings. Also, +the state-of-the-art algorithms for finding minimax equilibria in two-player zero-sum games are based +on advanced forms of no-regret dynamics [FKS21; BS19a]. Indeed, all the superhuman milestones +in poker have used them in the equilibrium-finding module of their architectures [Bow+15; BS17; +BS19b]. 
+In this paper, we seek to fill this knowledge gap by understanding properties of no-regret dy- +namics in time-varying games. In particular, we primarily investigate the convergence of optimistic +gradient descent (OGD) [Chi+12; RS13] in time-varying games. Unlike traditional no-regret learning +algorithms, such as (online) gradient descent, OGD has been recently shown to exhibit last-iterate +convergence in static (two-player) zero-sum games [Das+18; GPD20; COZ22; GTG22]. For the more +challenging scenario where the underlying game can vary in every round, a fundamental question +arises: Under what conditions on the sequence of games does OGD (with high probability) approximate +the sequence of Nash equilibria? +1.1 +Our Results +In this paper, we build a new framework that enables us to characterize the convergence of OGD +in time-varying games. Specifically, our first contribution is to identify natural variation measures +on the sequence of games whose sublinear growth guarantees that almost all iterates of OGD are +(approximate) Nash equilibria in time-varying (two-player) zero-sum games (Corollary 3.6). More +precisely, in Theorem 3.5 we derive a sharp non-asymptotic characterization of the equilibrium gap of +OGD as a function of the variation measures we identify: the minimal first-order variation of the Nash +equilibria and the second-order variation of the payoff matrices. It is a compelling property, in light of +the multiplicity of Nash equilibria, that the variation of the Nash equilibria is measured in terms of the +most favorable—i.e., one that minimizes the variation—such sequence. Additionally, we show that +our convergence bounds can be further improved by considering a variation measure that depends on +the deviation of approximate Nash equilibria of the games, a measure that could be arbitrarily smaller +than the one based on (even the least varying) sequence of exact Nash equilibria (Proposition 3.3). +From a technical standpoint, our analysis revolves around a new connection we draw between the +convergence of OGD in time-varying games and dynamic regret. In particular, the first key observation +is that dynamic regret is always nonnegative under any sequence of Nash equilibria (Property 3.2). +By combining that property with a dynamic RVU bound—in the sense of Syrgkanis et al. [Syr+15]— +1 + +that we derive (Lemmas 3.1 and A.1), we obtain in Theorem 3.4 a variation-dependent bound for +the second-order path length of OGD in time-varying games. In turn, this leads to our main result, +Theorem 3.5, discussed above. As such, we extend the regret-based framework of Anagnostides et al. +[Ana+22b] from static to time-varying games. In the special case of static games, our result reduces +to a tight T −1/2 rate. It is worth stressing that Property 3.2 is in fact more general, being intricately +tied to the admission of a minimax theorem (Property A.3), and applies even under a sequence of +approximate Nash equilibria—with slackness that gently degrades with the approximation thereof. +Moreover, for strongly convex-concave time-varying games, we obtain a refined second-order +variation bound on the sequence of Nash equilibria, as long as each game is repeated multiple +times (Theorem 3.8); this is inspired by an improved second-order bound for dynamic regret under +analogous conditions due to Zhang et al. [Zha+17]. 
As a byproduct of our techniques, we point out +that any no-regret learners are approaching a Nash equilibrium under strong convexity-concavity +(Proposition 3.9). Those results apply even in non-strongly convex-concave settings by suitably +trading-off the magnitude of a regularizer that makes the game strongly convex-concave. This offers +significant gains in the meta-learning setting as well, wherein each game is repeated multiple times. +Next, we extend our results to time-varying general-sum multi-player games via a bilinear formu- +lation of correlated equilibria. As such, we recover similar convergence bounds parameterized on the +variation of the correlated equilibria (Theorem 3.12). To illustrate the power of our framework, we +immediately recover natural and algorithm-independent similarity measures for the meta-learning +setting (Proposition A.13) even in general games (Corollary A.22), thereby addressing an open +question of Harris et al. [Har+22]. Our techniques also imply new per-player regret bounds in +zero-sum and general-sum games (Corollaries 3.7 and A.23), the latter addressing a question left +open by Zhang et al. [Zha+22]. We further parameterize the convergence of (vanilla) gradient descent +in time-varying potential games in terms of the deviation of the potential functions (Theorem 3.10). +Finally, building on our techniques in time-varying games, we investigate the best dynamic-regret +guarantees possible in static games. Although this is a basic question, it has apparently eluded +prior research. We first show that instances of optimistic mirror descent guarantee O( +√ +T) dynamic +per-player regret (Proposition 3.13), matching the known rate of (online) gradient descent but for +the significantly weaker notion of external regret. We further point out that O(log T) dynamic +regret is attainable, but in a stronger two-point feedback model. In stark contrast, even obtaining +sublinear dynamic regret for each player is precluded in general-sum games (Proposition 3.15). This +motivates studying a relaxation of dynamic regret that constrains the number of switches in the +comparator, for which we derive accelerates rates in general games (Theorem 3.16) by leveraging the +techniques of Syrgkanis et al. [Syr+15] in conjunction with our dynamic RVU bound (Lemma 3.1). +1.2 +Further Related Work +Even in static (two-player) zero-sum games, the pointwise convergence of no-regret learning algo- +rithms is a tenuous affair. Indeed, traditional learning dynamics within the no-regret framework, such +as (online) mirror descent, may even diverge away from the equilibrium; e.g., see [SAF02; MPP18; +Vla+20; GVM21]. Notwithstanding, the empirical frequency of no-regret learners is well-known +to approach the set of Nash equilibria in zero-sum games [FS99], and the set of coarse correlated +equilibria in general-sum games [HM00]—a standard relaxation of the Nash equilibrium [MV78; +Aum74]. Unfortunately, those classical results are of little use beyond static games, thereby offering +a crucial impetus for investigating iterate-convergence in games with a time-varying component—a +ubiquitous theme in many practical scenarios of interest [DMZ21; Pap+22; Ven21; Gar17; Van10; +2 + +RG22; PKB22; YH15; RJW21]. +Indeed, there has been a considerable effort endeavoring to extend the scope of traditional +game-theoretic results to the time-varying setting, approached from a variety of different stand- +points [LST16; Zha+22; Car+19; MO11; MS21; Duv+22]. 
In particular, our techniques in Section 3.1 +share similarities with the ones used by Zhang et al. [Zha+22], but our primary focus is very different: +Zhang et al. [Zha+22] were mainly interested in obtaining variation-dependent regret bounds, while +our results revolve around iterate-convergence to Nash equilibria. We stress again that minimizing +regret and approaching Nash equilibria are two inherently distinct problems, although connections +have emerged [Ana+22b], and are further cultivated in this paper. +Another closely related direction is on meta-learning in games [Har+22], wherein each game +can be repeated for multiple iterations. Such considerations are motivated in part by a number of +use-cases in which many “similar” games—or multiple game variations—ought to be solved [BS16], +such as Poker with different stack-sizes. While the meta-learning problem is a special case of our +general setting, our results are strong enough to have new implications for meta-learning in games, +even though the algorithms considered herein are not tailored to operate in that setting. +Finally, although our focus is on the convergence of OGD in time-varying games, some of our +results—namely, the ones formalized in Appendix A.1.7—can be viewed as part of an ongoing effort +to characterize the class of variational inequalities (VIs) that are amenable to efficient algorithms; +see [DDJ21; CZ22; Azi+20; BMW21; CP04; DL15; MV21; MRS20; Nou+19; Son+20; YKH20; +Das22], and references therein. We also highlight that the techniques used to establish last-iterate +convergence even in monotone (time-invariant) settings are particularly involved [GPD20; COZ22; +GTG22]; the simplicity of our framework, therefore, in the more challenging time-varying regime is +a compelling aspect of this paper. +2 +Preliminaries +Notation +We let N := {1, 2, . . . , } be the set of natural numbers. For a number p ∈ N, we let +[[p]] := {1, . . . , p}. For a vector w ∈ Rd, we use ∥w∥2 to represent its Euclidean norm; we also +overload that notation so that ∥ · ∥2 denotes the spectral norm when the argument is a matrix. +For a two-player zero-sum game, we denote by X ⊆ Rdx and Y ⊆ Rdy the strategy sets of the two +players—namely, Player x and Player y, respectively—where dx, dy ∈ N represent the corresponding +dimensions. It is assumed that X and Y are nonempty convex and compact sets. For example, in +the special case where X := ∆dx and Y := ∆dy—each set corresponds to a probability simplex—the +game is said to be in normal form. Further, we denote by DX the ℓ2-diameter of X, and by ∥X∥2 +the maximum ℓ2-norm attained by a point in X. We will always assume that the strategy sets +remain invariant, while the payoff matrix can change in each round. For notational convenience, we +will denote by z := (x, y) the concatenation of x and y, and by Z := X × Y the Cartesian product +of X and Y. In general n−player games, we instead use subscripts indexed by i ∈ [[n]] to specify +quantities related to a player. Superscripts are typically reserved to identify the time index. Finally, +to simplify the exposition, we use the O(·) notation to suppress time-independent parameters of +the problem; precise statements are given in Appendix A. +Dynamic regret +We operate in the usual online learning setting under full-feedback. Namely, at +every time t ∈ N the learner decides on a strategy x(t) ∈ X, and then observes a utility x �→ ⟨x, u(t) +x ⟩, +for u(t) +x ∈ Rdx. 
Following Daskalakis, Deckelbaum, and Kim [DDK11], we will insist on allowing only +3 + +O(1) previous utilities to be stored; this will preclude trivial exploration protocols when learning in +games. +A strong performance benchmark in this online setting is dynamic regret, defined for a time +horizon T ∈ N as follows: +DReg(T) +x (s(T) +x ) := +T +� +t=1 +⟨x(t,⋆) − x(t), u(t) +x ⟩, +(1) +where s(T) +x +:= (x(1,⋆), . . . , x(T,⋆)) ∈ X T is the sequence of comparators; setting x(1,⋆) = x(2,⋆) = · · · = +x(T,⋆) in (1) we recover the standard notion of (external) regret (denoted simply by Reg(T) +x ), which is +commonly used to establish convergence of the time-average strategies in static two-player zero-sum +games [FS99]. On the other hand, the more general notion of dynamic regret, introduced in (1), +has been extensively used in more dynamic environments; e.g., [Zha+20; Zha+17; Jad+15; Ces+12; +HS09]. We also let DReg(T) +x +:= maxs(T ) +x +∈X T DReg(T) +x (s(T) +x ). While ensuring o(T) dynamic regret is +clearly hopeless in a truly adversarial environment, Section 3.4 reveals that non-trivial guarantees +are possible when learning in zero-sum games. +Optimistic gradient descent +Optimistic gradient descent (OGD) [Chi+12; RS13] is a no-regret +algorithm defined with the following update rule: +x(t) := ΠX +� +ˆx(t) + ηm(t) +x +� +, +ˆx(t+1) := ΠX +� +ˆx(t) + ηu(t) +x +� +. +(OGD) +Here, η > 0 is the learning rate; ˆx(1) := arg minˆx∈X ∥ˆx∥2 +2 represents the initialization of OGD; +m(t) +x +∈ Rdx is the prediction vector at time t, and it is set as m(t) +x +:= u(t−1) +x +when t ≥ 2, and +m(1) +x +:= 0dx; and finally, ΠX (·) represents the Euclidean projection to the set X, which is well- +defined, and can be further computed efficiently for structured sets, such as the probability simplex. +For our purposes, we will posit access to a projection oracle for the set X, in which case the update +rule (OGD) is efficiently implementable. +In a multi-player n-player game, each Player i ∈ [[n]] is associated with a utility function +ui :× +n +i=1 Xi → R. We recall the following fundamental definition [Nas50]. +Definition 2.1 (Approximate Nash equilibrium). A joint strategy profile (x⋆ +1, . . . , x⋆ +n) ∈× +n +i=1 Xi is +an ϵ-approximate Nash equilibrium (NE), for an ϵ ≥ 0, if for any Player i ∈ [[n]] and any possible +deviation x′ +i ∈ Xi, +ui(x⋆ +1, . . . , x⋆ +i , . . . , x⋆ +n) ≥ ui(x⋆ +1, . . . , x′ +i, . . . , x⋆ +n) − ϵ. +3 +Convergence in Time-Varying Games +In this section, we formalize our results regarding convergence in time-varying games. We organize +this section as follows: First, in Section 3.1, we build the foundations of our framework by studying +the convergence of OGD in time-varying bilinear saddle-point problems, culminating in the non- +asymptotic characterization of Theorem 3.5; Section 3.2 formalizes our improvements under strong +convexity-concavity; we then extend our results (in Section 3.3) to time-varying multi-player general- +sum and potential games; and finally, Section 3.4 concerns dynamic regret guarantees in static games. +4 + +3.1 +Bilinear Saddle-Point Problems +We first study an online learning setting wherein two players interact in a sequence of time- +varying bilinear saddle-point problems. More precisely, we assume that in every repetition t ∈ [[T]] +the players select a pair of strategies (x(t), y(t)) ∈ X × Y. 
Then, Player x receives the utility +u(t) +x +:= −A(t)y(t) ∈ Rdx, where A(t) ∈ Rdx×dy represents the payoff matrix at the t-th repetition; +similarly, Player y receives the utility u(t) +y +:= (A(t))⊤x(t) ∈ Rdy. The proofs of this subsection are +included in Appendix A.1. +Dynamic RVU bound +The first key ingredient that we need is the property of regret bounded by +variation in utilities (RVU), in the sense of Syrgkanis et al. [Syr+15], but with respect to dynamic +regret; such a bound is established below. +Lemma 3.1 (RVU bound for dynamic regret). Consider any sequence of utilities (u(1) +x , . . . , u(T) +x ) +up to time T ∈ N. The dynamic regret (1) of OGD with respect to any sequence of comparators +(x(1,⋆), . . . , x(T,⋆)) ∈ X T can be bounded by +D2 +X +2η + DX +η +T−1 +� +t=1 +∥x(t+1,⋆) − x(t,⋆)∥2+η +T +� +t=1 +∥u(t) +x − m(t) +x ∥2 +2 +− 1 +2η +T +� +t=1 +� +∥x(t) − ˆx(t)∥2 +2 + ∥x(t) − ˆx(t+1)∥2 +2 +� +. +(2) +In the special case of external regret—x(1,⋆) = x(2,⋆) = · · · = x(T,⋆)—(2) recovers the bound for +OGD of Syrgkanis et al. [Syr+15]. The key takeaway from Lemma 3.1 is that the overhead of dynamic +regret in (2) grows with the first-order variation of the sequence of comparators. In Lemma A.1 +we also articulate an extension of Lemma 3.1 for the more general optimistic mirror descent (OMD) +algorithm under a certain class of Bregman divergences. +Having established Lemma 3.1, we next point out a crucial property: by selecting a sequence of +Nash equilibria (recall Definition 2.1) as the comparators, the sum of the players’ dynamic regrets +is always nonnegative: +Property 3.2. Suppose that Z ∋ z(t,⋆) = (x(t,⋆), y(t,⋆)) is an ϵ(t)-approximate Nash equilibrium of +the t-th game. Then, for s(T) +x += (x(t,⋆))1≤t≤T and s(T) +y += (y(t,⋆))1≤t≤T , +DReg(T) +x (s(T) +x ) + DReg(T) +y +(s(T) +y +) ≥ −2 +T +� +t=1 +ϵ(t). +In particular, if ϵ(t) = 0 for all t ∈ [[T]], we have +DReg(T) +x (s(T) +x ) + DReg(T) +y +(s(T) +y +) ≥ 0. +(3) +In fact, as we show in Property A.3, Property 3.2 applies even in certain (time-varying) nonconvex- +nonconcave min-max optimization problems, and it is a consequence of the minimax theorem; +Property 3.2 also holds for time-varying variational inequalities (VIs) that satisfy the so-called MVI +property (see Remark A.4). For comparison, it is evident that under a static sequence of two-player +zero-sum games, it holds that Reg(T) +x ++ Reg(T) +y +≥ 0. +5 + +Next, let us introduce some natural measures of the games’ variation. First, the first-order +variation of the Nash equilibria is defined for T ≥ 2 as +V(T) +NE := +inf +z(t,⋆)∈Z(t,⋆),∀t∈[[T]] +T−1 +� +t=1 +∥z(t+1,⋆) − z(t,⋆)∥2, +(4) +where Z(t,⋆) is the (nonempty) set of Nash equilibria of the t-th game. We recall that there can be a +multiplicity of Nash equilibria [van91]; as such, a compelling feature of the variation measure (4) is +that it depends on the most favorable sequence of Nash equilibria—one that minimizes the first-order +variation. +It is also important to point out the well-known fact that Nash equilibria can change abruptly +even under a “small” perturbation in the payoff matrix (see Example A.5), which is a caveat of the +variation (4). 
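Before refining this variation measure, we note that the dynamics studied in this subsection are straightforward to simulate. The following minimal sketch is included purely for illustration; the helper names (project_simplex, nash_gap, run_ogd), the drift model, and all numerical choices are ours and are not part of the formal development. It runs (OGD) in self-play over a slowly drifting sequence of payoff matrices with simplex strategy sets, using the learning rate 1/(4L) employed in the guarantees below, and records the per-round Nash equilibrium gap.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (standard sorting-based routine)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    return np.maximum(v - (css[rho] - 1.0) / (rho + 1.0), 0.0)

def nash_gap(A, x, y):
    """Equilibrium (duality) gap of (x, y) in the zero-sum game with payoff matrix A."""
    return float(np.max(A.T @ x) - np.min(A @ y))

def run_ogd(payoffs, eta):
    """Both players run OGD in self-play over the given sequence of payoff matrices."""
    dx, dy = payoffs[0].shape
    x_hat, y_hat = np.ones(dx) / dx, np.ones(dy) / dy   # \hat{x}^{(1)}, \hat{y}^{(1)}: uniform
    m_x, m_y = np.zeros(dx), np.zeros(dy)               # predictions m^{(1)} := 0
    gaps = []
    for A in payoffs:
        x = project_simplex(x_hat + eta * m_x)          # x^{(t)}
        y = project_simplex(y_hat + eta * m_y)          # y^{(t)}
        u_x, u_y = -A @ y, A.T @ x                      # observed utilities
        x_hat = project_simplex(x_hat + eta * u_x)      # \hat{x}^{(t+1)}
        y_hat = project_simplex(y_hat + eta * u_y)      # \hat{y}^{(t+1)}
        m_x, m_y = u_x, u_y                             # next round's predictions
        gaps.append(nash_gap(A, x, y))
    return gaps

# A slowly drifting sequence of 3x3 zero-sum games.
rng = np.random.default_rng(0)
A0, D = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
payoffs = [A0 + 1e-3 * t * D for t in range(2000)]
L = max(np.linalg.norm(A, 2) for A in payoffs)
gaps = run_ogd(payoffs, eta=1.0 / (4.0 * L))
print(np.mean(gaps[:100]), np.mean(gaps[-100:]))
```

The drift magnitude 10^-3 is an arbitrary choice; increasing it inflates the variation measures introduced in this subsection, and one should accordingly expect larger per-round gaps. We now return to the caveat noted above regarding abrupt changes of exact Nash equilibria.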
To address this, and in accordance with Property 3.2, we consider a more favorable +variation measure, defined as +V(T) +ϵ−NE := inf +�T−1 +� +t=1 +∥z(t+1,⋆) − z(t,⋆)∥2 + C +T +� +t=1 +ϵ(t) +� +, +for a sufficiently large parameter C > 0; the infimum above is subject to ϵ(t) ∈ R≥0 and z(t,⋆) ∈ Z(t,⋆) +ϵ(t) +for all t ∈ [[T]], where we denote by Z(t,⋆) +ϵ(t) +the set of ϵ(t)-approximate NE. It is evident that +V(T) +ϵ−NE ≤ V(T) +NE since one can take ϵ(1) = · · · = ϵ(T) = 0; in fact, V(T) +ϵ−NE can be arbitrarily smaller: +Proposition 3.3. For any T ≥ 4, there is a sequence of T games such that V(T) +NE ≥ T +2 while +V(T) +ϵ−NE ≤ δ, for any δ > 0. +Moreover, we also introduce a quantity that captures the variation of the payoff matrices: +V(T) +A +:= +T−1 +� +t=1 +∥A(t+1) − A(t)∥2 +2, +(5) +where we recall that here ∥ · ∥2 denotes the spectral norm. Unlike (4), the variation measure (5) +depends on the second-order variation (of the payoff matrices), which could translate to a lower-order +impact compared to (4) (see, e.g., Corollary A.8). We stress that while our convergence bounds +will be parameterized based on (4) and (5), the underlying algorithm—namely OGD—will remain +oblivious to those variation measures. +We are ready now to establish a refined bound on the second-order path-length of OGD in +time-varying zero-sum games. +Theorem 3.4 (Detailed version in Theorem A.6). Suppose that both players employ OGD with learning +rate η ≤ +1 +4L in a time-varying bilinear saddle-point problem, where L := maxt∈[[T]] ∥A(t)∥2. Then, for +any T ∈ N, the second-order path length �T +t=1 +� +∥z(t) − ˆz(t)∥2 +2 + ∥z(t) − ˆz(t+1)∥2 +2 +� +can be bounded by +O +� +1 + V(T) +ϵ−NE + V(T) +A +� +. +(6) +6 + +It is worth noting that when the deviation of the payoff matrices is controlled by the deviation of +the players’ strategies, in the sense that �T−1 +t=1 ∥A(t+1)−A(t)∥2 +2 ≤ W 2 �T−1 +t=1 ∥z(t+1)−z(t)∥2 +2 for some +parameter W ∈ R>0, the variation measure V(T) +A +in (6)—and in the subsequent convergence bounds— +can be entirely eliminated; see Corollary A.8. The same, in fact, applies under an improved prediction +mechanism (Remark A.12), but that prediction is not implementable in our online learning setting. +Armed with Theorem 3.4, we are ready to establish Theorem 3.5. The key observation is that the +Nash equilibrium gap at the t-th game can be bounded in terms of the quantity ∥z(t)− ˆz(t)∥2+∥z(t)− +ˆz(t+1)∥2, which in turn allows us to use (6) to bound the cumulative (squared) Nash equilibrium +gaps across the sequence of games; the aforeclaimed property was established in [Ana+22b] for +static games (Claim A.9), but readily extends to our setting as well, and in fact applies to any +member of OMD under a smooth regularizer. Below, we use the notation EqGap(t)(z(t)) ∈ R≥0 to +represent the Nash equilibrium gap of the joint strategy profile z(t) ∈ Z at the t-th game. +Theorem 3.5 (Main result; detailed version in Theorem A.10). Suppose that both players em- +ploy OGD with learning rate η = +1 +4L in a time-varying bilinear saddle-point problem, where L := +maxt∈[[T]] ∥A(t)∥2. Then, +T +� +t=1 +� +EqGap(t)(z(t)) +�2 += O +� +1 + V(T) +ϵ−NE + V(T) +A +� +, +(7) +where (z(t))1≤t≤T is the sequence of joint strategy profiles produced by OGD. +We next state some immediate consequences of this result. (Item 2 below follows from (7) by +Jensen’s inequality.) +Corollary 3.6. In the setting of Theorem 3.5, +1. 
If at least a δ-fraction of the iterates of OGD have at least ϵ > 0 Nash equilibrium gap, then +ϵ2δ ≤ O +� +1 +T +� +V(T) +ϵ−NE + V(T) +A ++ 1 +�� +; +2. The average Nash equilibrium gap of OGD is bounded as O +�� +1 +T +� +V(T) +ϵ−NE + V(T) +A ++ 1 +�� +. +In particular, in terms of asymptotic implications, if limT→+∞ +V(T ) +ϵ−NE +T +, limT→+∞ +V(T ) +A +T += 0, then +(i) for any ϵ > 0 the fraction of iterates of OGD with at least an ϵ Nash equilibrium gap converges to +0; and (ii) the average Nash equilibrium gap of the iterates of OGD converges to 0. +In the special case where V(T) +ϵ−NE, V(T) +A += O(1), Theorem 3.5 recovers the T −1/2 rate of OGD in +static bilinear saddle-point problems. It is also worth pointing out that Theorem 3.5 readily extends +to more general time-varying variational inequality problems as well (Remark A.4). +We also state below another interesting consequence of Theorem 3.4, which bounds each player’s +individual regret parameterized based on the variation measures. +Corollary 3.7 (Detailed version in Corollary A.11). In the setup of Theorem 3.4, it holds that +Reg(T) +x , Reg(T) +y += O +� +1 +η + η(V(T) +NE + V(T) +A ) +� +. +7 + +The O(·) notation here is considered in the regime η ≪ 1. Hence, selecting optimally the learning +rate gives an O( +� +V(T) +NE + V(T) +A ) bound on the individual regret of each player; while that optimal +value depends on the variation measures, which are not known to the learners, there are techniques +that would allow bypassing this [Zha+22]. Corollary 3.7 can also be readily parameterized in +terms of the improved variation measure V(T) +ϵ−NE. Finally, in Appendix A.1.7 we highlight certain +implications of our framework on solving (static) general VIs. +Meta-Learning +Our results also have immediate applications in the meta-learning setting [Har+22]. +More precisely, meta-learning in games is a special case of time-varying games which consists of a +sequence of H ∈ N separate games, each of which is repeated for m ∈ N consecutive rounds, so that +T := m × H. The central goal in meta-learning is to obtain convergence bounds parameterized by +the similarity of the games; identifying suitable similarity metrics is a central question in that line +of work. +In this context, we highlight that Theorem 3.5 readily provides a meta-learning guarantee +parameterized by the following notion of similarity between the Nash equilibria: +inf +z(h,⋆)∈Z(h,⋆),∀h∈[[H]] +H−1 +� +h=1 +∥z(h+1,⋆) − z(h,⋆)∥2, +(8) +where Z(h,⋆) is the set of Nash equilibria of the h-th game in the meta-learning sequence,1 as well +as the similarity of the payoff matrices—corresponding to the term V(T) +A +in (7). In fact, under a +suitable prediction—the one used by Harris et al. [Har+22]—the dependence on V(T) +A +can be entirely +removed; see Proposition A.13 for our formal result. A compelling aspect of our meta-learning +guarantee is that the considered algorithm is oblivious to the boundaries of the meta-learning. We +further provide some novel results on meta-learning in general-sum games in Section 3.3. +3.2 +Strongly Convex-Concave Games +In this subsection, we show that under additional structure we can significantly improve the variation +measures established in Theorem 3.4. More precisely, we first assume that each objective function +f(x, y) is µ-strongly convex with respect to x and µ-strongly concave with respect to y. 
Our second +assumption is that each game is played for multiple rounds m ∈ N, instead of only a single round; +this is akin to the meta-learning setting. The key insight is that, as long as m is large enough, +m = Ω(1/µ), those two assumptions suffice to obtain a second-order variation bound in terms of +the sequence of Nash equilibria, +S(H) +NE := +H−1 +� +h=1 +∥z(h+1,⋆) − z(h,⋆)∥2 +2, +(9) +where z(h,⋆) is a Nash equilibrium of the h-th game. This significantly refines the result of Theo- +rem 3.4, and is inspired by the improved dynamic regret bounds obtained by Zhang et al. [Zha+17]. +Below we sketch the key ideas of the improvement; the proofs are included in Appendix A.2. +In this setting, it is assumed that Player x obtains the utility u(t) +x := −∇xf(t)(x(t), y(t)) at every +time t ∈ [[T]], while its regret will be denoted by Reg(T) +L,y; similar notation applies for Player y. The +1In accordance to Theorem 3.5, (8) can be refined using a sequence of approximate Nash equilibria. +8 + +first observation is that, focusing on a single (static) game, under strong convexity-concavity the +sum of the players’ regrets are strongly nonnegative (Lemma A.15): +Reg(m) +L,x(x⋆) + Reg(m) +L,y (y⋆) ≥ µ +2 +m +� +t=1 +∥z(t) − z⋆∥2 +2, +(10) +for any Nash equilibrium z⋆ ∈ Z of the game. In turn, this can be cast in terms of dynamic regret +over the sequence of the h games (Lemma A.16). Next, combining those dynamic-regret lower +bounds with a suitable RVU-type property leads to a refined second-order path length bound as +long as that m = Ω(1/µ), which in turn leads to our main result below. Before we present its +statement, let us introduce the following measure of variation of the gradients: +V(H) +∇f := +H−1 +� +h=1 +max +z∈Z ∥F (h+1)(z) − F (h)(z)∥2 +2, +(11) +where let F : z := (x, y) �→ (∇xf(x, y), −∇yf(x, y)). This variation measure is analogous to V(T) +A +we introduced in (5) for time-varying bilinear saddle-point problems. +Theorem 3.8 (Detailed version in Theorem A.18). Let f(h) : X × Y be a µ-strongly convex-concave +and L-smooth function, for h ∈ [[H]]. Suppose that both players employ OGD with learning rate +η = min +� +1 +8L, 1 +2µ +� +for T repetitions, where T = m × H and m ≥ +2 +ηµ. Then, +T +� +t=1 +� +EqGap(t)(z(t)) +�2 += O(1 + S(H) +NE + V(H) +∇f ), +where S(H) +NE and V(H) +∇f are defined in (9) and (11). +Our techniques also imply improved regret bounds in this setting, as we formalize in Corol- +lary A.19. +There is another immediate but important implication of (10): any no-regret algorithm in a +(static) strongly convex-concave setting ought to be approaching the Nash equilibrium; in contrast, +this property is spectacularly false in (general) monotone settings [MPP18]. +Proposition 3.9. Let f : X × Y → R be a µ-strongly convex-concave function. If players incur +regrets such that Reg(T) +L,x + Reg(T) +L,y ≤ CT 1−ω, for some parameters C > 0 and ω ∈ (0, 1], then for any +ϵ > 0 and T > +� +2C +µϵ2 +�1/ω +there is a pair of strategies z(t) ∈ Z such that ∥z(t) − z⋆∥2 ≤ ϵ, where z⋆ +is a Nash equilibrium. +The insights of this subsection are also of interest in general monotone settings by incorpo- +rating a strongly convex regularizer; tuning its magnitude allows us to trade-off between a better +approximation and the benefits of strong convexity-concavity revealed in this subsection. +3.3 +General-Sum Multi-player Games +Next, we turn our attention to general-sum multi-player games. 
For simplicity, in this subsection +we posit that the game is represented in normal form, so that each Player i ∈ [[n]] has a finite set of +available actions Ai, and Xi := ∆(Ai). The proofs of this subsection are included in Appendix A.3. +9 + +Potential Games +First, we study the convergence of (online) gradient descent (GD) in time- +varying potential games (see Definition A.20 for the formal description).2 In our time-varying +setup, it is assumed that each round t ∈ [[T]] corresponds to a different potential game described +with a potential function Φ(t). We further let d : (Φ, Φ′) �→ maxz∈×n +i=1 Xi (Φ(z) − Φ′(z)), so that +V(T) +Φ +:= �T−1 +t=1 d(Φ(t), Φ(t+1)); we emphasize the fact that d(·, ·) is not symmetric. Analogously +to Theorem 3.5, we use EqGap(t)(z(t)) ∈ R≥0 to represent the NE gap of the joint strategy profile +z(t) := (x(t) +1 , . . . , x(t) +n ) at the t-th game. +Theorem 3.10. Suppose that each player employs (online) GD with a sufficiently small learning +rate. Then, +T +� +t=1 +� +EqGap(t)(z(t)) +�2 += O(Φmax + V(T) +Φ ), +where Φmax is such that |Φ(t)(·)| ≤ Φmax for any t ∈ [[T]]. +We refer to Appendix B for some illustrative experiments. +General games +Unfortunately, unlike the settings considered thus far, computing Nash equilibria +in general games is computationally hard [DGP08; CDT09]. Instead, learning algorithms are known +to converge to relaxations of the Nash equilibrium, known as (coarse) correlated equilibria. For +our purposes, we will employ a bilinear formulation of (coarse) correlated equilibria, which dates +back to the seminal work of Hart and Schmeidler [HS89]. This will allow us to translate the results +of Section 3.1 to general multi-player games. +Specifically, correlated equilibria3 can be expressed via a game between the n players and a medi- +ator. Intuitively, the mediator is endeavoring to identify a correlated strategy µ ∈ Ξ := ∆ +�× +n +i=1 Ai +� +for which no player has an incentive to deviate from the recommendation. In contrast, the players +are trying to optimally deviate so as to maximize their own utility. More precisely, there exist +matrices A1, . . . , An, with each matrix Ai depending solely on the utility of Player i, for which the +bilinear problem can be expressed as +min +µ∈Ξ +max +(¯x1,...,¯xn)∈×n +i=1 ¯ +Xi +n +� +i=1 +µ⊤Ai ¯xi, +(12) +where ¯ +Xi := conv(Xi, 0); incorporating the 0 vector will be useful for our purposes. This zero-sum +game has the property that there exists a strategy µ⋆ ∈ Ξ such that max¯xi∈ ¯ +Xi(µ⋆)⊤Ai ¯xi ≤ 0, for +any Player i ∈ [[n]], which corresponds to a correlated equilibrium. +Before we proceed, it is important to note that the learning paradigm considered here deviates +from the traditional one in that there is an additional learning agent, resulting in a less decentralized +protocol. Yet, the dynamics induced by solving (12) via online algorithms remain uncoupled [HM00], +in the sense that each player obtains feedback—corresponding to the deviation benefit—that depends +solely on its own utility. +Now in the time-varying setting, the matrices A1, . . . , An that capture the players’ utilities can +change in each repetition. Crucially, we show that the structure of the induced bilinear problem (12) +2Unlike two-player zero-sum games, gradient descent is known to approach Nash equilibria in potential games. 
+3The following bilinear formulation applies to coarse correlated equilibria as well (with different payoff matrices), +but we will focus solely on the stronger variant (CE) for the sake of exposition. +10 + +is such that there is a sequence of correlated equilibria that guarantee nonnegative dynamic regret; +this refines Property 3.2 in that only one player’s strategies suffice to guarantee nonnegativity, even +if the strategies of the other player remain invariant. Below, we denote by DReg(T) +µ +the dynamic +regret of the min player in (12), and by Reg(T) +i +the regret of each Player i up to time T ∈ N, so +that the regret of the max player in (12) can be expressed as �n +i=1 Reg(T) +i +. +Property 3.11. Suppose that Ξ ∋ µ(t,⋆) is a correlated equilibrium of the game at any time t ∈ [[T]]. +Then, +DReg(T) +µ (µ(1,⋆), . . . , µ(T,⋆)) + +n +� +i=1 +Reg(T) +i +≥ 0. +As a result, this enables us to apply Theorem 3.5 parameterized on (i) the variation of the CE +V(T) +CE := +inf +µ(t,⋆)∈Ξ(t,⋆),∀t∈[[T]] +T−1 +� +t=1 +∥µ(t+1,⋆) − µ(t,⋆)∥2, +where Ξ(t,⋆) denotes the set of CE of the t-th game, and (ii) the variation in the players’ utilities +V(T) +A +≤ �n +i=1 +�T−1 +t=1 ∥A(t+1) +i +− A(t) +i ∥2 +2; below, we denote by CeGap(t)(µ(t)) the CE gap of µ(t) ∈ Ξ +at the t-th game. +Theorem 3.12. Suppose that each player employs OGD in (12) with a suitable learning rate. Then, +T +� +t=1 +� +CeGap(t)(µ(t)) +�2 += O(1 + V(T) +CE + V(T) +A ). +There are further interesting implications of our framework that are worth highlighting. First, +we obtain meta-learning guarantees for general games that depend on the (algorithm-independent) +similarity of the correlated equilibria (Corollary A.22); that was left as an open question by Harris +et al. [Har+22], where instead algorithm-dependent similarity metrics were derived. Further, by +applying Corollary 3.7, we derive natural variation-dependent per-player regret bounds in general +games (Corollary A.23), addressing a question left by Zhang et al. [Zha+22]; we suspect that +obtaining such results—parameterized on the variation of the CE—are not possible without the +presence of the additional player. +3.4 +Dynamic Regret Bounds in Static Games +Finally, in this subsection we switch gears by investigating dynamic regret guarantees when learning +in static games. The proofs of this subsection are included in Appendix A.4. +First, we point out that while traditional no-regret learning algorithms guarantee O( +√ +T) external +regret, instances of OMD—a generalization of OGD; see (13) in Appendix A—in fact guarantee O( +√ +T) +dynamic regret in two-player zero-sum games, which is a much stronger performance measure: +Proposition 3.13. Suppose that both players in a (static) two-player zero-sum game employ OMD +with a smooth regularizer. Then, DReg(T) +x , DReg(T) +y += O( +√ +T). +11 + +In proof, the dynamic regret for each player under OMD with a smooth regularizer can be +bounded by the first-order path length of that player’s strategies, which in turn can be bounded +by O( +√ +T) given that the second-order path length is O(1) (Theorem 3.4). +In fact, Theo- +rem 3.4 readily extends Proposition 3.13 to time-varying zero-sum games as well, implying that +DReg(T) +x , DReg(T) +y += O +�√ +T(1 + V(T) +ϵ−NE + V(T) +A ) +� +. +A question that arises from Proposition 3.13 is whether the O( +√ +T) guarantee for dynamic +regret of OMD can be improved in the online learning setting. 
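Before addressing that question, we provide a short illustrative sketch of the quantity at hand; as before, the helper names and numerical choices are ours and the code is not part of the formal development. With linear utilities, the supremum in (1) is attained separately in every round by a best response to the observed utility vector, so each player's dynamic regret in a static zero-sum game can be tracked exactly during OGD self-play as follows.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the simplex, as in the sketch of Section 3.1.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    return np.maximum(v - (css[rho] - 1.0) / (rho + 1.0), 0.0)

def dynamic_regret_ogd(A, T, eta):
    """OGD self-play on the static zero-sum game A; returns each player's dynamic regret,
    accumulated against the per-round best response (the optimal comparator sequence)."""
    dx, dy = A.shape
    x_hat, y_hat = np.ones(dx) / dx, np.ones(dy) / dy
    m_x, m_y = np.zeros(dx), np.zeros(dy)
    dreg_x = dreg_y = 0.0
    for _ in range(T):
        x = project_simplex(x_hat + eta * m_x)
        y = project_simplex(y_hat + eta * m_y)
        u_x, u_y = -A @ y, A.T @ x
        dreg_x += np.max(u_x) - u_x @ x   # best comparator is a pure strategy
        dreg_y += np.max(u_y) - u_y @ y
        x_hat = project_simplex(x_hat + eta * u_x)
        y_hat = project_simplex(y_hat + eta * u_y)
        m_x, m_y = u_x, u_y
    return dreg_x, dreg_y

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
eta = 1.0 / (4.0 * np.linalg.norm(A, 2))
for T in (1000, 4000, 16000):
    print(T, dynamic_regret_ogd(A, T, eta))
```

If the O(√T) rate of Proposition 3.13 is tight on a given instance, quadrupling T should roughly double the reported values; no attempt is made here to tune the instance or the learning rate. We now return to the question of improving this rate.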
Below, we point out a significant +improvement to O(log T), but under a stronger two-point feedback model; namely, we posit that in +every round each player can select an additional auxiliary strategy, and each player then gets to +additionally observe the utility corresponding to the auxiliary strategies. Notably, this is akin to +how the extra-gradient method works [Hsi+19] (also cf. [RS13, Section 4.2] for multi-point feedback +models in the bandit setting). +Observation 3.14. Under two-point feedback, there exist learning algorithms that guarantee +DReg(T) +x , DReg(T) +y += O(log T) in two-player zero-sum games. +In particular, it suffices for each player to employ OMD, but with the twist that the first strategy +in each round is the time-average of OMD; the auxiliary strategy is the standard output of OMD. +Then, the dynamic regret of each player will grow as O +��T +t=1 +1 +t +� += O(log T) since the duality +gap of the average strategies is decreasing with a rate of T −1 [RS13]. It is an interesting question +whether Observation 3.14 can be improved to O(1). +General-sum games +In contrast, no (efficient) sublinear dynamic-regret guarantees are possible +in general games: +Proposition 3.15. Unless PPAD ⊆ P, any polynomial-time algorithm incurs �n +i=1 DReg(T) +i += Ω(T), +even if n = 2, where Ω(·) here hides polynomial factors. +Indeed, this follows since computing a Nash equilibrium to (1/poly) accuracy in two-player +games is PPAD-hard [CDT09]. In fact, Proposition 3.15 applies beyond the online learning setting. +This motivates considering a relaxation of dynamic regret, wherein the sequence of comparators is +subject to the constraint �T−1 +t=1 1{x(t+1,⋆) ̸= x(t,⋆)} ≤ K − 1, for some parameter K ∈ N; this will +be referred to as K-DReg(T) +x . Naturally, external regret coincides with K-DReg(T) +x +under K = 1. +In this context, we employ Lemma 3.1 to bound K-DReg(T) under OGD: +Theorem 3.16 (Detailed version in Theorem A.24). Suppose that all n players employ OGD in an +L-smooth game. Then, for any K ∈ N, +1. �n +i=1 K-DReg(T) +i += O(K√nL); +2. K-DReg(T) +i += O(K3/4T 1/4n1/4√ +L), for i ∈ [[n]]. +One question that arises here is whether the per-player bound of O(K3/4T 1/4) (Item 2) can +be improved to ˜O(K), where ˜O(·) hides logarithmic factors. The main challenge is that, even for +K = 1, all known methods that obtain ˜O(1) [DFG21; PSS21; Ana+22a; Far+22] rely on non- +smooth regularizers that violate the preconditions of Lemma A.1—our dynamic RVU bound that +generalizes Lemma 3.1 beyond (squared) Euclidean regularization. It would also be interesting to give +12 + +a natural game-theoretic interpretation to the limit point of no-regret learners with K-DReg = o(T), +even for a fixed K ∈ N; for K = 1, it corresponds to the fundamental coarse correlated equilibrium. +At a superficial level, it seems to be related to the variant considered by Harrow, Natarajan, and +Wu [HNW16]. +4 +Conclusions and Future Work +In this paper, we developed a new framework for characterizing iterate-convergence of no-regret +learning algorithms—primarily optimistic gradient descent (OGD)—in time-varying games. There +are many promising avenues for future research. Besides closing the obvious gaps we highlighted +in Section 3.4, it is important to characterize the behavior of no-regret learning algorithms in +other fundamental multiagent settings, such as Stackelberg (security) games [Bal+15]. 
Moreover, +our results operate in the full-feedback model where each player receives feedback on all possible +actions. Extending the scope of our framework to capture partial-feedback models as well is another +interesting direction for future work. +Acknowledgements +We are grateful to Vince Conitzer and Caspar Oesterheld for helpful feedback. This material is based +on work supported by the National Science Foundation under grants IIS-1901403 and CCF-1733556 +and by the ARO under award W911NF2210266. +References +[Ana+22a] +Ioannis Anagnostides, Gabriele Farina, Christian Kroer, Chung-Wei Lee, Haipeng Luo, +and Tuomas Sandholm. “Uncoupled Learning Dynamics with O(log T) Swap Regret in +Multiplayer Games”. In: NeurIPS 2022. 2022. +[Ana+22b] +Ioannis Anagnostides, Ioannis Panageas, Gabriele Farina, and Tuomas Sandholm. “On +Last-Iterate Convergence Beyond Zero-Sum Games”. In: International Conference on +Machine Learning, ICML 2022. Vol. 162. Proceedings of Machine Learning Research. +PMLR, 2022, pp. 536–581. +[Aum74] +Robert Aumann. “Subjectivity and Correlation in Randomized Strategies”. In: Journal +of Mathematical Economics 1 (1974), pp. 67–96. +[Azi+20] +Wa¨ıss Azizian, Ioannis Mitliagkas, Simon Lacoste-Julien, and Gauthier Gidel. “A +Tight and Unified Analysis of Gradient-Based Methods for a Whole Spectrum of +Differentiable Games”. In: The 23rd International Conference on Artificial Intelligence +and Statistics, AISTATS 2020. Vol. 108. Proceedings of Machine Learning Research. +PMLR, 2020, pp. 2863–2873. +[Bal+15] +Maria-Florina Balcan, Avrim Blum, Nika Haghtalab, and Ariel D. Procaccia. “Commit- +ment Without Regrets: Online Learning in Stackelberg Security Games”. In: Proceedings +of the Sixteenth ACM Conference on Economics and Computation, EC ’15. ACM, 2015, +pp. 61–78. +13 + +[BMW21] +Heinz H. Bauschke, Walaa M. Moursi, and Xianfu Wang. “Generalized monotone +operators and their averaged resolvents”. In: Math. Program. 189.1 (2021), pp. 55–74. +[Bow+15] +Michael Bowling, Neil Burch, Michael Johanson, and Oskari Tammelin. “Heads-up +Limit Hold’em Poker is Solved”. In: Science 347.6218 (Jan. 2015). +[BS16] +Noam Brown and Tuomas Sandholm. “Strategy-Based Warm Starting for Regret +Minimization in Games”. In: AAAI Conference on Artificial Intelligence (AAAI). 2016. +[BS17] +Noam Brown and Tuomas Sandholm. “Superhuman AI for heads-up no-limit poker: +Libratus beats top professionals”. In: Science (Dec. 2017), eaao1733. +[BS19a] +Noam Brown and Tuomas Sandholm. “Solving imperfect-information games via dis- +counted regret minimization”. In: AAAI Conference on Artificial Intelligence (AAAI). +2019. +[BS19b] +Noam Brown and Tuomas Sandholm. “Superhuman AI for multiplayer poker”. In: +Science 365.6456 (2019), pp. 885–890. +[Car+19] +Adrian Rivera Cardoso, Jacob D. Abernethy, He Wang, and Huan Xu. “Competing +Against Nash Equilibria in Adversarially Changing Zero-Sum Games”. In: Proceedings +of the 36th International Conference on Machine Learning, ICML 2019. Vol. 97. +Proceedings of Machine Learning Research. PMLR, 2019, pp. 921–930. +[CDT09] +Xi Chen, Xiaotie Deng, and Shang-Hua Teng. “Settling the Complexity of Computing +Two-Player Nash Equilibria”. In: Journal of the ACM (2009). +[Ces+12] +Nicol`o Cesa-Bianchi, Pierre Gaillard, G´abor Lugosi, and Gilles Stoltz. “Mirror Descent +Meets Fixed Share (and feels no regret)”. In: Advances in Neural Information Processing +Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. +2012, pp. 989–997. 
+[Chi+12] +Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong +Jin, and Shenghuo Zhu. “Online optimization with gradual variations”. In: Conference +on Learning Theory. 2012, pp. 6–1. +[COZ22] +Yang Cai, Argyris Oikonomou, and Weiqiang Zheng. “Tight Last-Iterate Convergence +of the Extragradient Method for Constrained Monotone Variational Inequalities”. In: +CoRR abs/2204.09228 (2022). +[CP04] +Patrick L. Combettes and Teemu Pennanen. “Proximal Methods for Cohypomonotone +Operators”. In: SIAM J. Control. Optim. (2004), pp. 731–742. +[CZ22] +Yang Cai and Weiqiang Zheng. “Accelerated Single-Call Methods for Constrained +Min-Max Optimization”. In: CoRR abs/2210.03096 (2022). +[Das+18] +Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, and Haoyang Zeng. “Training +GANs with Optimism”. In: 6th International Conference on Learning Representations, +ICLR 2018. OpenReview.net, 2018. +[Das22] +Constantinos Daskalakis. Non-Concave Games: A Challenge for Game Theory’s Next +100 Years. 2022. +14 + +[DDJ21] +Jelena Diakonikolas, Constantinos Daskalakis, and Michael I. Jordan. “Efficient Meth- +ods for Structured Nonconvex-Nonconcave Min-Max Optimization”. In: The 24th +International Conference on Artificial Intelligence and Statistics, AISTATS 2021. +Vol. 130. Proceedings of Machine Learning Research. PMLR, 2021, pp. 2746–2754. +[DDK11] +Constantinos Daskalakis, Alan Deckelbaum, and Anthony Kim. “Near-optimal no- +regret algorithms for zero-sum games”. In: Annual ACM-SIAM Symposium on Discrete +Algorithms (SODA). 2011. +[DFG21] +Constantinos Daskalakis, Maxwell Fishelson, and Noah Golowich. “Near-Optimal No- +Regret Learning in General Games”. In: Advances in Neural Information Processing +Systems 34: Annual Conference on Neural Information Processing Systems 2021, +NeurIPS 2021. 2021, pp. 27604–27616. +[DGP08] +Constantinos Daskalakis, Paul Goldberg, and Christos Papadimitriou. “The Complexity +of Computing a Nash Equilibrium”. In: SIAM Journal on Computing (2008). +[DL15] +Cong D. Dang and Guanghui Lan. “On the convergence properties of non-Euclidean +extragradient methods for variational inequalities with generalized monotone operators”. +In: Comput. Optim. Appl. (2015), pp. 277–310. +[DMZ21] +Yuan Deng, Vahab Mirrokni, and Song Zuo. “Non-Clairvoyant Dynamic Mechanism +Design with Budget Constraints and Beyond”. In: Proceedings of the 22nd ACM +Conference on Economics and Computation. EC ’21. New York, NY, USA: Association +for Computing Machinery, 2021, p. 369. +[Duv+22] +Benoit Duvocelle, Panayotis Mertikopoulos, Mathias Staudigl, and Dries Vermeulen. +“Multiagent online learning in time-varying games”. In: Mathematics of Operations +Research (2022). +[Far+22] +Gabriele Farina, Ioannis Anagnostides, Haipeng Luo, Chung-Wei Lee, Christian Kroer, +and Tuomas Sandholm. “Near-Optimal No-Regret Learning for General Convex Games”. +In: NeurIPS 2022. 2022. +[FKS21] +Gabriele Farina, Christian Kroer, and Tuomas Sandholm. “Faster Game Solving +via Predictive Blackwell Approachability: Connecting Regret Matching and Mirror +Descent”. In: Proceedings of the AAAI Conference on Artificial Intelligence. 2021. +[FS99] +Yoav Freund and Robert Schapire. “Adaptive game playing using multiplicative +weights”. In: Games and Economic Behavior 29 (1999), pp. 79–103. +[FV97] +Dean Foster and Rakesh Vohra. “Calibrated Learning and Correlated Equilibrium”. +In: Games and Economic Behavior 21 (1997), pp. 40–55. +[Gar17] +Daniel F. Garrett. 
“Dynamic mechanism design: Dynamic arrivals and changing values”. +In: Games and Economic Behavior 104 (2017), pp. 595–612. +[GPD20] +Noah Golowich, Sarath Pattathil, and Constantinos Daskalakis. “Tight last-iterate +convergence rates for no-regret learning in multi-player games”. In: Advances in +Neural Information Processing Systems 33: Annual Conference on Neural Information +Processing Systems 2020, NeurIPS 2020. 2020. +[GTG22] +Eduard Gorbunov, Adrien Taylor, and Gauthier Gidel. Last-Iterate Convergence of +Optimistic Gradient Method for Monotone Variational Inequalities. 2022. +15 + +[GVM21] +Angeliki Giannou, Emmanouil-Vasileios Vlatakis-Gkaragkounis, and Panayotis Mer- +tikopoulos. “Survival of the strictest: Stable and unstable equilibria under regularized +learning with partial information”. In: Conference on Learning Theory, COLT 2021. +Vol. 134. Proceedings of Machine Learning Research. PMLR, 2021, pp. 2147–2148. +[Har+22] +Keegan Harris, Ioannis Anagnostides, Gabriele Farina, Mikhail Khodak, Zhiwei Steven +Wu, and Tuomas Sandholm. “Meta-Learning in Games”. In: CoRR abs/2209.14110 +(2022). +[HM00] +Sergiu Hart and Andreu Mas-Colell. “A Simple Adaptive Procedure Leading to Corre- +lated Equilibrium”. In: Econometrica 68 (2000), pp. 1127–1150. +[HNW16] +Aram W. Harrow, Anand Natarajan, and Xiaodi Wu. “Tight SoS-Degree Bounds for +Approximate Nash Equilibria”. In: 31st Conference on Computational Complexity, +CCC 2016. Ed. by Ran Raz. Vol. 50. LIPIcs. Schloss Dagstuhl - Leibniz-Zentrum f¨ur +Informatik, 2016, 22:1–22:25. +[HS09] +Elad Hazan and C. Seshadhri. “Efficient Learning Algorithms for Changing Envi- +ronments”. In: Proceedings of the 26th Annual International Conference on Machine +Learning. ICML ’09. Association for Computing Machinery, 2009, pp. 393–400. +[HS89] +Sergiu Hart and David Schmeidler. “Existence of Correlated Equilibria”. In: Mathe- +matics of Operations Research 14.1 (1989), pp. 18–25. +[Hsi+19] +Yu-Guan Hsieh, Franck Iutzeler, J´erome Malick, and Panayotis Mertikopoulos. “On +the convergence of single-call stochastic extra-gradient methods”. In: Advances in +Neural Information Processing Systems 32: Annual Conference on Neural Information +Processing Systems 2019, NeurIPS 2019. 2019, pp. 6936–6946. +[Jad+15] +Ali Jadbabaie, Alexander Rakhlin, Shahin Shahrampour, and Karthik Sridharan. +“Online Optimization : Competing with Dynamic Comparators”. In: Proceedings of +the Eighteenth International Conference on Artificial Intelligence and Statistics. 2015, +pp. 398–406. +[LST16] +Thodoris Lykouris, Vasilis Syrgkanis, and ´Eva Tardos. “Learning and Efficiency in +Games with Dynamic Population”. In: Proceedings of the Twenty-Seventh Annual +ACM-SIAM Symposium on Discrete Algorithms, SODA 2016. SIAM, 2016, pp. 120–129. +[MO11] +Ishai Menache and Asuman E. Ozdaglar. Network Games: Theory, Models, and Dynam- +ics. Synthesis Lectures on Communication Networks. Morgan & Claypool Publishers, +2011. +[MPP18] +Panayotis Mertikopoulos, Christos H. Papadimitriou, and Georgios Piliouras. “Cycles +in Adversarial Regularized Learning”. In: Proceedings of the Twenty-Ninth Annual +ACM-SIAM Symposium on Discrete Algorithms, SODA 2018. SIAM, 2018, pp. 2703– +2717. +[MRS20] +Eric Mazumdar, Lillian J. Ratliff, and S. Shankar Sastry. “On Gradient-Based Learning +in Continuous Games”. In: SIAM J. Math. Data Sci. (2020), pp. 103–131. +[MS21] +Panayotis Mertikopoulos and Mathias Staudigl. “Equilibrium Tracking and Convergence +in Dynamic Games”. 
In: 2021 60th IEEE Conference on Decision and Control (CDC). +IEEE, 2021, pp. 930–935. +16 + +[MV21] +Oren Mangoubi and Nisheeth K. Vishnoi. “Greedy adversarial equilibrium: an efficient +alternative to nonconvex-nonconcave min-max optimization”. In: STOC ’21: 53rd +Annual ACM SIGACT Symposium on Theory of Computing, 2021. ACM, 2021, pp. 896– +909. +[MV78] +H. Moulin and J.-P. Vial. “Strategically zero-sum games: The class of games whose +completely mixed equilibria cannot be improved upon”. In: International Journal of +Game Theory 7.3-4 (1978), pp. 201–221. +[Nas50] +John Nash. “Equilibrium points in N-person games”. In: Proceedings of the National +Academy of Sciences 36 (1950), pp. 48–49. +[Nou+19] +Maher Nouiehed, Maziar Sanjabi, Tianjian Huang, Jason D. Lee, and Meisam Raza- +viyayn. “Solving a Class of Non-Convex Min-Max Games Using Iterative First Order +Methods”. In: Advances in Neural Information Processing Systems 32: Annual Confer- +ence on Neural Information Processing Systems 2019. 2019, pp. 14905–14916. +[Pap+22] +Christos H. Papadimitriou, George Pierrakos, Alexandros Psomas, and Aviad Rubin- +stein. “On the complexity of dynamic mechanism design”. In: Games and Economic +Behavior 134 (2022), pp. 399–427. +[PKB22] +Jorge I. Poveda, Miroslav Krstic, and Tamer Basar. “Fixed-Time Seeking and Tracking +of Time-Varying Nash Equilibria in Noncooperative Games”. In: American Control +Conference, ACC 2022, Atlanta, GA, USA, June 8-10, 2022. IEEE, 2022, pp. 794–799. +[PSS21] +Georgios Piliouras, Ryann Sim, and Stratis Skoulakis. “Optimal No-Regret Learning in +General Games: Bounded Regret with Unbounded Step-Sizes via Clairvoyant MWU”. +In: arXiv preprint arXiv:2111.14737 (2021). +[RG22] +Aitazaz Ali Raja and Sergio Grammatico. “Payoff Distribution in Robust Coalitional +Games on Time-Varying Networks”. In: IEEE Trans. Control. Netw. Syst. 9.1 (2022), +pp. 511–520. +[RJW21] +Jad Rahme, Samy Jelassi, and S. Matthew Weinberg. “Auction Learning as a Two- +Player Game”. In: 9th International Conference on Learning Representations, ICLR +2021. 2021. +[RS13] +Alexander Rakhlin and Karthik Sridharan. “Optimization, learning, and games with +predictable sequences”. In: Advances in Neural Information Processing Systems. 2013, +pp. 3066–3074. +[SAF02] +Yuzuru Sato, Eizo Akiyama, and J. Doyne Farmer. “Chaos in learning a simple +two-person game”. In: Proceedings of the National Academy of Sciences 99.7 (2002), +pp. 4748–4751. +[Sha12] +Shai Shalev-Shwartz. “Online Learning and Online Convex Optimization”. In: Founda- +tions and Trends in Machine Learning 4 (2012). +[Son+20] +Chaobing Song, Zhengyuan Zhou, Yichao Zhou, Yong Jiang, and Yi Ma. “Optimistic +Dual Extrapolation for Coherent Non-monotone Variational Inequalities”. In: Ad- +vances in Neural Information Processing Systems 33: Annual Conference on Neural +Information Processing Systems 2020. 2020. +17 + +[Syr+15] +Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, and Robert E Schapire. “Fast conver- +gence of regularized learning in games”. In: Advances in Neural Information Processing +Systems. 2015, pp. 2989–2997. +[Van10] +Ngo Van Long. A survey of dynamic games in economics. Vol. 1. World Scientific, +2010. +[van91] +Eric van Damme. Stability and perfection of Nash equilibria. Vol. 339. Springer, 1991. +[Ven21] +Xavier Venel. “Regularity of dynamic opinion games”. In: Games and Economic +Behavior 126 (2021), pp. 305–334. 
+[Vla+20] +Emmanouil-Vasileios Vlatakis-Gkaragkounis, Lampros Flokas, Thanasis Lianeas, Panay- +otis Mertikopoulos, and Georgios Piliouras. “No-Regret Learning and Mixed Nash +Equilibria: They Do Not Mix”. In: Advances in Neural Information Processing Systems +33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020. +2020. +[YH15] +Maojiao Ye and Guoqiang Hu. “Distributed Seeking of Time-Varying Nash Equilibrium +for Non-Cooperative Games”. In: IEEE Trans. Autom. Control. 60.11 (2015), pp. 3000– +3005. +[YKH20] +Junchi Yang, Negar Kiyavash, and Niao He. “Global Convergence and Variance-Reduced +Optimization for a Class of Nonconvex-Nonconcave Minimax Problems”. In: CoRR +abs/2002.09621 (2020). +[YM22] +Yuepeng Yang and Cong Ma. “O(T −1) Convergence of Optimistic-Follow-the-Regularized- +Leader in Two-Player Zero-Sum Markov Games”. In: arXiv preprint arXiv:2209.12430 +(2022). +[Zha+17] +Lijun Zhang, Tianbao Yang, Jinfeng Yi, Rong Jin, and Zhi-Hua Zhou. “Improved +Dynamic Regret for Non-degenerate Functions”. In: Advances in Neural Information +Processing Systems 30: Annual Conference on Neural Information Processing Systems +2017. 2017, pp. 732–741. +[Zha+20] +Peng Zhao, Yu-Jie Zhang, Lijun Zhang, and Zhi-Hua Zhou. “Dynamic Regret of Convex +and Smooth Functions”. In: Advances in Neural Information Processing Systems 33: +Annual Conference on Neural Information Processing Systems 2020. 2020. +[Zha+22] +Mengxiao Zhang, Peng Zhao, Haipeng Luo, and Zhi-Hua Zhou. “No-Regret Learning +in Time-Varying Zero-Sum Games”. In: International Conference on Machine Learn- +ing, ICML 2022. Vol. 162. Proceedings of Machine Learning Research. PMLR, 2022, +pp. 26772–26808. +[Zin03] +Martin Zinkevich. “Online Convex Programming and Generalized Infinitesimal Gradient +Ascent”. In: International Conference on Machine Learning (ICML). Washington, DC, +USA, 2003, pp. 928–936. +18 + +A +Omitted Proofs +In this section, we provide the proofs from Section 3. +A.1 +Proofs from Section 3.1 +First, we start with the proof of Lemma 3.1 from Section 3.1. Before we proceed, it will be useful to +express the update rule (OGD) in the following equivalent form: +x(t) := arg max +x∈X +� +Ψ(t) +x (x) := ⟨x, m(t) +x ⟩ − 1 +ηBφx(x ∥ ˆx(t)) +� +, +ˆx(t+1) := arg max +ˆx∈X +� +ˆΨ(t) +x (ˆx) := ⟨ˆx, u(t) +x ⟩ − 1 +ηBφx(ˆx ∥ ˆx(t)) +� +. +(13) +Here, Bφx(· ∥ ·) denotes the Bregman divergence induced by the (squared) Euclidean regularizer +φx : x �→ 1 +2∥x∥2 +2; namely, Bφx(x ∥ x′) := φ(x)−φ(x′)−⟨∇φ(x′), x−x′⟩ = 1 +2∥x−x′∥2 +2, for x, x′ ∈ X. +The update rule (13) for general Bregman divergences will be referred to as optimistic mirror descent +(OMD). +A.1.1 +Dynamic RVU Bounds +We now show Lemma 3.1, the statement of which is recalled below for the convenience of the reader. +Then, in Lemma A.1 we provide an extension of Lemma 3.1 to a broader class of regularizers. +Lemma 3.1 (RVU bound for dynamic regret). Consider any sequence of utilities (u(1) +x , . . . , u(T) +x ) +up to time T ∈ N. The dynamic regret (1) of OGD with respect to any sequence of comparators +(x(1,⋆), . . . , x(T,⋆)) ∈ X T can be bounded by +D2 +X +2η + DX +η +T−1 +� +t=1 +∥x(t+1,⋆) − x(t,⋆)∥2 + η +T +� +t=1 +∥u(t) +x − m(t) +x ∥2 +2 − 1 +2η +T +� +t=1 +� +∥x(t) − ˆx(t)∥2 +2 + ∥x(t) − ˆx(t+1)∥2 +2 +� +. +Proof. 
First, by (1/η)-strong convexity of the function Ψ(t) +x +(defined in (13)) for any time t ∈ [[T]], +we have that +⟨x(t), m(t) +x ⟩ − 1 +2η∥x(t) − ˆx(t)∥2 +2 − ⟨ˆx(t+1), m(t) +x ⟩ + 1 +2η∥ˆx(t+1) − ˆx(t)∥2 +2 ≥ 1 +2η∥x(t) − ˆx(t+1)∥2 +2, +(14) +where we used [Sha12, Lemma 2.8, p. 135]. Similarly, by (1/η)-strong convexity of the function ˆΨ(t) +x +(defined in (13)) for any time t ∈ [[T]], we have that for any comparator x(t,⋆) ∈ X, +⟨ˆx(t+1), u(t) +x ⟩ − 1 +2η∥ˆx(t+1) − ˆx(t)∥2 +2 − ⟨x(t,⋆), u(t) +x ⟩ + 1 +2η∥x(t,⋆) − ˆx(t)∥2 +2 ≥ 1 +2η∥ˆx(t+1) − x(t,⋆)∥2 +2. (15) +Thus, adding (14) and (15), +⟨x(t,⋆) − ˆx(t+1), u(t) +x ⟩ + ⟨ˆx(t+1) − x(t), m(t) +x ⟩ ≤ 1 +2η +� +∥ˆx(t) − x(t,⋆)∥2 +2 − ∥ˆx(t+1) − x(t,⋆)∥2 +2 +� +− 1 +2η +� +∥x(t) − ˆx(t)∥2 +2 + ∥x(t) − ˆx(t+1)∥2 +2 +� +. +(16) +19 + +We further see that +⟨x(t,⋆) − x(t), u(t) +x ⟩ = ⟨x(t) − ˆx(t+1), m(t) +x − u(t) +x ⟩ + ⟨x(t,⋆) − ˆx(t+1), u(t) +x ⟩ + ⟨ˆx(t+1) − x(t), m(t) +x ⟩. (17) +Now the first term on the right-hand side can be upper bounded using the fact that, by (14) and (15), +⟨x(t) − ˆx(t+1), m(t) +x − u(t) +x ⟩ ≥ 1 +η∥ˆx(t+1) − x(t)∥2 +2 =⇒ ∥ˆx(t+1) − x(t)∥2 ≤ η∥m(t) +x − u(t) +x ∥2, +by Cauchy-Schwarz, in turn implying that ⟨x(t) − ˆx(t+1), m(t) +x − u(t) +x ⟩ ≤ η∥m(t) +x − u(t) +x ∥2 +2. Thus, the +proof follows by combining this bound with (16) and (17), along with the fact that +T +� +t=1 +� +∥ˆx(t) − x(t,⋆)∥2 +2 − ∥ˆx(t+1) − x(t,⋆)∥2 +2 +� +≤ ∥ˆx(1) − x(1,⋆)∥2 +2 ++ +T−1 +� +t=1 +� +∥ˆx(t+1) − x(t+1,⋆)∥2 +2 − ∥ˆx(t+1) − x(t,⋆)∥2 +2 +� +≤ D2 +X + 2DX +T−1 +� +t=1 +∥x(t+1,⋆) − x(t,⋆)∥2, +where the last bound follows since +∥ˆx(t+1) − x(t+1,⋆)∥2 +2 − ∥ˆx(t+1) − x(t,⋆)∥2 +2 ≤ 2DX +���∥ˆx(t+1) − x(t+1,⋆)∥2 − ∥ˆx(t+1) − x(t,⋆)∥2 +��� +≤ 2DX ∥x(t+1,⋆) − x(t,⋆)∥2, +where we recall that DX denotes the ℓ2-diameter of X. +As an aside, we remark that assuming that m(t) +x := 0 and ∥u(t) +x ∥2 ≤ 1 for any t ∈ [[T]], Lemma 3.1 +implies that dynamic regret can be upper bounded by O +�� +(1 + �T−1 +t=1 ∥x(t+1,⋆) − x(t,⋆)∥2)T +� +, +for any (bounded)—potentially adversarially selected—sequence of utilities (u(1) +x , . . . , u(T) +x ), for +η := +� +D2 +X +2T + DX +�T −1 +t=1 ∥x(t+1,⋆)−x(t,⋆)∥2 +T +, which is a well-known result in online optimization [Zin03]; +while that requires setting the learning rate based on the first-order variation of the (optimal) +comparators, there are standard techniques that would allow bypassing that assumption. +Next, we provide an extension of Lemma 3.1 to the more general OMD algorithm under a broad +class of regularizers. +Lemma A.1 (Extension of Lemma 3.1 beyond Euclidean regularization). Consider a 1-strongly +convex continuously differentiable regularizer φ with respect to a norm ∥·∥ such that (i) ∥∇φ(x)∥∗ ≤ G +for any x, and (ii) Bφx(x ∥ x′) ≤ L∥x − x′∥ for any x, x′. Then, for any sequence of utilities +(u(1) +x , . . . , u(T) +x ) up to time T ∈ N the dynamic regret (1) of OMD with respect to any sequence of +comparators (x(1,⋆), . . . , x(T,⋆)) ∈ X T can be bounded as +Bφx(x(1,⋆) ∥ ˆx(1)) +η ++ L + 2G +η +T−1 +� +t=1 +∥x(t+1,⋆) − x(t,⋆)∥+η +T +� +t=1 +∥u(t) +x − m(t) +x ∥2 +∗ +− 1 +2η +T +� +t=1 +� +∥x(t) − ˆx(t)∥2 + ∥x(t) − ˆx(t+1)∥2� +. +20 + +The proof is analogous to that of Lemma 3.1, and relies on the well-known three-point identity +for the Bregman divergence: +Bφx(x ∥ x′) = Bφx(x ∥ x′′) + Bφx(x′′ ∥ x′) − ⟨x − x′′, ∇φ(x′) − ∇φ(x′′)⟩. 
+(18) +In particular, along with the assumptions of Lemma A.1 imposed on the regularizer φx, (18) implies +that the term �T−1 +t=1 +� +Bφx(x(t+1,⋆) ∥ ˆx(t+1)) − Bφx(x(t,⋆) ∥ ˆx(t+1)) +� +is equal to +T−1 +� +t=1 +� +Bφx(x(t+1,⋆) ∥ x(t,⋆)) − ⟨x(t+1,⋆) − x(t,⋆), ∇φ(ˆx(t+1)) − ∇φ(x(t,⋆))⟩ +� +≤ (L + 2G) +T−1 +� +t=1 +∥x(t+1,⋆) − x(t,⋆)∥, +since Bφx(x(t+1,⋆) ∥ x(t,⋆)) ≤ L∥x(t+1,⋆) − x(t,⋆)∥ (by assumption) and +⟨x(t+1,⋆) − x(t,⋆), ∇φ(ˆx(t+1)) − ∇φ(x(t,⋆))⟩≤ ∥x(t+1,⋆) − x(t,⋆)∥∥∇φ(ˆx(t+1)) − ∇φ(x(t,⋆))∥∗ +(19) +≤ +� +∥∇φ(ˆx(t+1))∥∗+ ∥∇φ(x(t,⋆))∥∗ +� +∥x(t+1,⋆) − x(t,⋆)∥ +(20) +≤ 2G∥x(t+1,⋆) − x(t,⋆)∥, +(21) +where (19) follows from the Cauchy-Schwarz inequality; (20) uses the triangle inequality for the +dual norm ∥ · ∥∗; and (21) follows from the assumption of Lemma A.1 that ∥∇φ(·)∥∗ ≤ G. The rest +of the proof of Lemma A.1 is analogous to Lemma 3.1, and it is therefore omitted. An important +question is whether Lemma A.1 can be extended under any regularizer; as we explain in Section 3.4, +this is the main obstacle to improving Theorem 3.16. +A.1.2 +Nonnegativity of Dynamic Regret +We next proceed with the proof of Property 3.2. To provide additional intuition, we first prove the +following special case; the proof of Property 3.2 is then analogous. +Property A.2 (Special case of Property 3.2). Suppose that Z ∋ z(t,⋆) = (x(t,⋆), y(t,⋆))) is a +Nash equilibrium of the t-th game, for any time t ∈ [[T]]. Then, for s(T) +x += (x(t,⋆))1≤t≤T and +s(T) +y += (y(t,⋆))1≤t≤T , +DReg(T) +x (s(T) +x ) + DReg(T) +y +(s(T) +y +) ≥ 0. +Proof. Let v(t) := ⟨x(t,⋆), A(t)y(t,⋆)⟩ be the value of the t-th game, for some t ∈ [[T]]. Then, we +have that v(t) = ⟨x(t,⋆), A(t)y(t,⋆)⟩ ≤ ⟨x, A(t)y(t,⋆)⟩ for any x ∈ X, since x(t,⋆) is a best response to +y(t,⋆); similarly, v(t) = ⟨x(t,⋆), A(t)y(t,⋆)⟩ ≥ ⟨x(t,⋆), A(t)y⟩ for any y ∈ Y. Hence, ⟨x(t), A(t)y(t,⋆)⟩ − +⟨x(t,⋆), A(t)y(t)⟩ ≥ 0, or equivalently, ⟨x(t,⋆), u(t) +x ⟩ + ⟨y(t,⋆), u(t) +y ⟩ ≥ 0. But given that the game is +zero-sum, it holds that ⟨x(t), u(t) +x ⟩ + ⟨y(t), u(t) +y ⟩ = 0, so the last inequality can be in turn cast as +⟨x(t,⋆), u(t) +x ⟩ − ⟨x(t), u(t) +x ⟩ + ⟨y(t,⋆), u(t) +y ⟩ − ⟨y(t), u(t) +y ⟩ ≥ 0, +21 + +for any t ∈ [[T]]. As a result, summing over all t ∈ [[T]] we have shown that +DReg(T) +x (x(1,⋆), . . . , x(T,⋆))+ DReg(T) +y +(y(1,⋆), . . . , y(T,⋆)) += +T +� +t=1 +⟨x(t,⋆), u(t) +x ⟩ − ⟨x(t), u(t) +x ⟩ + ⟨y(t,⋆), u(t) +y ⟩ − ⟨y(t), u(t) +y ⟩ ≥ 0. +Property 3.2. Suppose that Z ∋ z(t,⋆) = (x(t,⋆), y(t,⋆)) is an ϵ(t)-approximate Nash equilibrium of +the t-th game. Then, for s(T) +x += (x(t,⋆))1≤t≤T and s(T) +y += (y(t,⋆))1≤t≤T , +DReg(T) +x (s(T) +x ) + DReg(T) +y +(s(T) +y +) ≥ −2 +T +� +t=1 +ϵ(t). +Proof. Given that (x(t,⋆), y(t,⋆)) ∈ Z is an ϵ(t)-approximate Nash equilibrium of the t-th game, it +follows that ⟨x(t,⋆), A(t)y(t,⋆)⟩ ≤ ⟨x(t), A(t)y(t,⋆)⟩+ϵ(t) +x and ⟨x(t,⋆), A(t)y(t,⋆)⟩ ≥ ⟨x(t,⋆), A(t)y(t)⟩−ϵ(t) +y , +for some ϵ(t) +x , ϵ(t) +y +≤ ϵ(t). Thus, we have that ⟨x(t), A(t)y(t,⋆)⟩ ≥ ⟨x(t,⋆), A(t)y(t)⟩ − ϵ(t) +x − ϵ(t) +y , or +equivalently, ⟨x(t,⋆), u(t) +x ⟩ + ⟨y(t,⋆), u(t) +y ⟩ ≥ −ϵ(t) +x − ϵ(t) +y +≥ −2ϵ(t). As a result, +⟨x(t,⋆), u(t) +x ⟩ − ⟨x(t), u(t) +x ⟩ + ⟨y(t,⋆), u(t) +y ⟩ − ⟨y(t), u(t) +y ⟩ ≥ −2ϵ(t), +(22) +for any t ∈ [[T]], and the statement follows by summing (22) over all t ∈ [[T]]. +In fact, as we show below (in Property A.3), Property A.2 is a more general consequence of the +minimax theorem. In particular, for a nonlinear online learning problem, we define dynamic regret +with respect to a sequence of comparators (x(1,⋆), . . . 
, x(T,⋆)) ∈ X T as follows: +DReg(T) +x (x(1,⋆), . . . , x(T,⋆)) := +T +� +t=1 +� +u(t) +x (x(t,⋆)) − u(t) +x (x(t)) +� +, +(23) +where u(1) +x , . . . , u(T) +x +: x �→ R are the continuous utility functions observed by the learner, which +could be in general nonconcave, and (x(t))1≤t≤T is the sequence of strategies produced by the +learner; (23) generalizes the notion of dynamic regret (1) in online linear optimization, that is, when +u(t) +x : x �→ ⟨x, u(t) +x ⟩, where u(t) +x ∈ Rdx, for any time t ∈ [[T]]. +Property A.3. Suppose that f(t) : X × Y → R is a continuous function such that for any t ∈ [[T]], +min +x∈X max +y∈Y f(t)(x, y) = max +y∈Y min +x∈X f(t)(x, y). +Let also x(t,⋆) ∈ arg minx∈X maxy∈Y f(t)(x, y) and y(t,⋆) ∈ arg maxy∈Y minx∈X f(t)(x, y), for any +t ∈ [[T]]. Then, for s(T) +x += (x(t,⋆))1≤t≤T and s(T) +y += (y(t,⋆))1≤t≤T , +DReg(T) +x (s(T) +x ) + DReg(T) +y +(s(T) +y +) ≥ 0. +22 + +Proof. By definition of dynamic regret (23), it suffices to show that f(t)(x(t), y(t,⋆)) ≥ f(t)(x(t,⋆), y(t)), +for any time t ∈ [[T]]. Indeed, +f(t)(x(t), y(t,⋆)) ≥ min +x∈X f(t)(x, y(t,⋆)) +(24) += max +y∈Y min +x∈X f(t)(x, y) +(25) += min +x∈X max +y∈Y f(t)(x, y) +(26) += max +y∈Y f(t)(x(t,⋆), y) +(27) +≥ f(t)(x(t,⋆), y(t)), +(28) +where (24) and (28) are obvious; (25) and (27) follow from the definition of y(t,⋆) ∈ Y and x(t,⋆) ∈ X, +respectively; and (26) holds by assumption. This concludes the proof. +Remark A.4 (MVI property). Property (3) can also be generalized beyond time-varying bilinear +saddle-point problems to more general time-varying variational inequality (VI) problems as follows. +Let F (t) : Z → Z be the (single-valued) operator of the VI problem at time t. F (t) is said to +satisfy the MVI property if there exists a point z(t,⋆) ∈ Z such that ⟨z − z(t,⋆), F (t)(z)⟩ ≥ 0 for +any z ∈ Z. For example, in the special case of a bilinear saddle-point problem, we have that +F : z := (x, y) �→ (Ay, −A⊤x), and the MVI property is satisfied by virtue of Von Neumann’s +minimax theorem. It is direct to see that Property A.2 applies to any time-varying VI with respect +to the sequence (z(t,⋆))1≤t≤T as long as every operator in the sequence (F (1), . . . , F (T)) satisfies +the MVI property. (Even more broadly, it suffices if almost all operators in the sequence satisfy +the MVI property—in that their fraction converges to 1 as T → +∞.) This observation enables +extending Theorem 3.5 beyond time-varying bilinear saddle-point problems. +A.1.3 +Variation of the Nash Equilibria +In our next example, we point out that an arbitrarily small change in the entries of the payoff +matrix can lead to a substantial deviation in the Nash equilibrium. +Example A.5. Consider a 2 × 2 (two-player) zero-sum game, where X := ∆2, Y := ∆2, described by +the payoff matrix +A := +�2δ +0 +0 +δ +� +, +(29) +for some δ > 0. Then, it is easy to see that the unique Nash equilibrium of this game is such that +x⋆, y⋆ := ( 1 +3, 2 +3) ∈ ∆2. Suppose now that the original payoff matrix (29) is perturbed to a new +matrix +A′ := +�δ +0 +0 +2δ +� +. +(30) +The new (unique) Nash equilibrium now reads x⋆, y⋆ := ( 2 +3, 1 +3) ∈ ∆2. We conclude that an arbitrarily +small deviation in the entries of the payoff matrix can lead to a non-trivial change in the Nash +equilibrium. +Next, we leverage the simple observation of the example above to establish Proposition 3.3, the +statement of which is recalled below. +23 + +Proposition 3.3. 
For any T ≥ 4, there is a sequence of T games such that V(T) +NE ≥ T +2 while +V(T) +ϵ−NE ≤ δ, for any δ > 0. +Proof. We consider a sequence of T games such that X, Y := ∆2, and +A(t) = +� +A +if t +mod 2 = 1, +A′ +if t +mod 2 = 0, +where A, A′ are the payoff matrices defined in (29) and (30), and are parameterized by δ > 0 +(Example A.5). Then, the exact Nash equilibria read +x(t,⋆), y(t,⋆) = +� +( 1 +3, 2 +3) +if t +mod 2 = 1, +( 2 +3, 1 +3) +if t +mod 2 = 0. +As a result, it follows that V(T) +NE := �T−1 +t=1 ∥z(t+1,⋆) − z(t,⋆)∥2 = 2 +3(T − 1) ≥ T +2 , for T ≥ 4. In contrast, +it is clear that V(T) +ϵ−NE ≤ CδT, which follows by simply considering the sequence of strategies wherein +both players are always selecting actions uniformly at random; we recall that C > 0 here is the +value that parameterizes V(T) +ϵ−NE. Thus, taking δ := +δ′ +CT , for some arbitrarily small δ′ > 0, concludes +the proof. +A.1.4 +Main Result +Next, we proceed with the proof of our main result, Theorem 3.5. The key ingredient is Theorem 3.4, +which bounds the second-order path length of OGD in terms of the considered variation measures. +We first give the precise statement of Theorem 3.4, and we then proceed with its proof. +Theorem A.6 (Detailed version of Theorem 3.4). Suppose that both players employ OGD with +learning rate η ≤ +1 +4L in a time-varying bilinear saddle-point problem, where L := maxt∈[[T]] ∥A(t)∥2. +Then, for any time horizon T ∈ N, +T +� +t=1 +� +∥z(t) − ˆz(t)∥2 +2 + ∥z(t) − ˆz(t+1)∥2 +2 +� +≤ 2D2 +Z + 4η2L2∥Z∥2 +2 + 4DZV(T) +ϵ−NE + 8η2∥Z∥2 +2V(T) +A . +Proof of Theorem 3.4. First, for any t ≥ 2 we have that ∥u(t) +x − m(t) +x ∥2 +2 is equal to +∥A(t)y(t) − A(t−1)y(t−1)∥2 +2 ≤ 2∥A(t)(y(t) − y(t−1))∥2 +2 + 2∥(A(t) − A(t−1))y(t−1)∥2 +2 +(31) +≤ 2∥A(t)∥2 +2∥y(t) − y(t−1)∥2 +2 + 2∥A(t) − A(t−1)∥2 +2∥y(t−1)∥2 +2 +(32) +≤ 2L2∥y(t) − y(t−1)∥2 +2 + 2∥Y∥2 +2∥A(t) − A(t−1)∥2 +2, +(33) +where (31) uses the triangle inequality for the norm ∥ · ∥2 along with the inequality 2ab ≤ a2 + b2 for +any a, b ∈ R; (32) follows from the definition of the operator norm; and (33) uses the assumption +that ∥A(t)∥2 ≤ L and ∥y∥2 ≤ ∥Y∥2 for any y ∈ Y. A similar derivaiton shows that for t ≥ 2, +∥u(t) +y − m(t) +y ∥2 +2 ≤ 2L2∥x(t) − x(t−1)∥2 +2 + 2∥X∥2 +2∥A(t) − A(t−1)∥2 +2. +(34) +Further, for t = 1 we have that ∥u(1) +x +− m(1) +x ∥2 = ∥u(1) +x ∥2 = ∥ − A(1)y(1)∥2 ≤ L∥Y∥2, and +∥u(1) +y +− m(1) +y ∥2 = ∥u(1) +y ∥2 = ∥(A(1))⊤x(1)∥2 ≤ L∥X∥2. Next, we will use the following simple +corollary, which follows similarly to Lemma 3.1. +24 + +Corollary A.7. For any sequence s(T) +z +:= (z(t,⋆))1≤t≤T , the dynamic regret DReg(T) +z +(s(T) +z +) := +DReg(T) +x (s(T) +x ) + DReg(T) +y +(s(T) +y +) can be bounded by +D2 +Z +2η + DZ +η +T−1 +� +t=1 +∥z(t+1,⋆) −z(t,⋆)∥2 +η +T +� +t=1 +∥u(t) +z −m(t) +z ∥2 +2 − 1 +2η +T +� +t=1 +� +∥z(t) − ˆz(t)∥2 +2 + ∥z(t) − ˆz(t+1)∥2 +2 +� +, +where m(t) +z +:= (m(t) +x , m(t) +y ) and u(t) +z +:= (u(t) +x , u(t) +y ) for any t ∈ [[T]]. +As a result, combining (34) and (33) with Corollary A.7 applied for the dynamic regret +of both players with respect to the sequence of comparators ((x(t,⋆), y(t,⋆)))1≤t≤T yields that +DReg(T) +x (x(1,⋆), . . . , x(T,⋆)) + DReg(T) +y +(y(1,⋆), . . . 
, y(T,⋆)) is upper bounded by +D2 +Z +2η + ηL2∥Z∥2 +2 + DZ +η +T−1 +� +t=1 +∥z(t+1,⋆) − z(t,⋆)∥2+2η∥Z∥2 +2V(T) +A +− 1 +4η +T +� +t=1 +� +∥z(t) − ˆz(t)∥2 +2 + ∥z(t) − ˆz(t+1)∥2 +2 +� +, +where we used the fact that +2ηL2 +T +� +t=2 +∥z(t) − z(t−1)∥2 +2 − 1 +4η +T +� +t=1 +� +∥z(t) − ˆz(t)∥2 +2 + ∥z(t) − ˆz(t+1)∥2 +2 +� +≤ +� +2ηL2 − 1 +8η +� +T +� +t=2 +∥z(t) − z(t−1)∥2 +2 ≤ 0, +for η ≤ +1 +4L. Finally, using the fact that DReg(T) +x (x(1,⋆), . . . , x(T,⋆)) + DReg(T) +y +(y(1,⋆), . . . , y(T,⋆)) ≥ +−2 �T +t=1 ϵ(t) for a suitable sequence of ϵ(t)-approximate Nash equilibria (Property 3.2)—one that +attains the variation measure V(T) +ϵ−NE—yields that +0 ≤ D2 +Z +2η + ηL2∥Z∥2 +2 + DZ +η V(T) +ϵ−NE + 2η∥Z∥2 +2V(T) +A +− 1 +4η +T +� +t=1 +� +∥z(t) − ˆz(t)∥2 +2 + ∥z(t) − ˆz(t+1)∥2 +2 +� +, +where it suffices if the parameter C of V(T) +ϵ−NE is such that 2 ≤ DZ +η C. Thus, rearranging the last +displayed inequality concludes the proof. +Next, we refine this theorem in time-varying games in which the deviation of the payoff matrices +is bounded by the deviation of the players’ strategies, in the following formal sense. +Corollary A.8. Suppose that both players employ OGD with learning rate η ≤ min +� +1 +4L, +1 +8W∥Z∥ +� +in a +time-varying bilinear saddle-point problem, where L := maxt∈[[T]] ∥A(t)∥2 and V(T) +A +≤ W 2 �T−1 +t=1 ∥z(t+1)− +z(t)∥2 +2, for some parameter W ∈ R>0. Then, for any time horizon T ∈ N, +T +� +t=1 +� +∥z(t) − ˆz(t)∥2 +2 + ∥ˆz(t) − ˆz(t+1)∥2 +2 +� +≤ 4D2 +Z + 8η2L2∥Z∥2 +2 + 8DZV(T) +NE , +where V(T) +NE is defined in (4). +25 + +Proof. Following the proof of Theorem 3.4, we have that for any η ≤ +1 +4L, +0 ≤ D2 +Z +2η + ηL2∥Z∥2 +2 + DZ +η V(T) +NE + 2η∥Z∥2 +2V(T) +A +− 1 +4η +T +� +t=1 +� +∥z(t) − ˆz(t)∥2 +2 + ∥z(t) − ˆz(t+1)∥2 +2 +� +. +Further, for η ≤ +1 +8W∥Z∥2 , +2η∥Z∥2 +2V(T) +A +− 1 +8η +T +� +t=1 +� +∥z(t) − ˆz(t)∥2 +2 + ∥z(t) − ˆz(t+1)∥2 +2 +� +≤ +� +2η∥Z∥2 +2W 2 − +1 +16η +� T−1 +� +t=1 +∥z(t+1) − z(t)∥2 +2 ≤ 0. +Thus, we have shown that +0 ≤ D2 +Z +2η + ηL2∥Z∥2 +2 + DZ +η V(T) +NE − 1 +8η +T +� +t=1 +� +∥z(t) − ˆz(t)∥2 +2 + ∥z(t) − ˆz(t+1)∥2 +2 +� +, +and rearranging concludes the proof. +Thus, in such time-varying games it is the first-order variation term, V(T) +NE , that will drive our +convergence bounds. +Now before proving Theorem 3.5, we state the connection between the equilibrium gap and the +deviation of the players’ strategies +� +∥z(t) − ˆz(t)∥2 + ∥z(t) − ˆz(t+1)∥2 +� +. In particular, the following +claim can be extracted by [Ana+22b, Claim A.14]. (We caution that we use a slightly different +indexing for the secondary sequence (ˆx(t) +i ) in the definition of OMD (13) compared to [Ana+22b].) +Claim A.9. Suppose that the sequences (x(t) +i )1≤t≤T and (ˆx(t) +i )1≤t≤T+1 are produced by OMD under +a G-smooth regularizer 1-strongly convex with respect to a norm ∥ · ∥. Then, for any time t ∈ [[T]] +and any xi ∈ Xi, +⟨x(t) +i , u(t) +i ⟩ ≥ ⟨xi, u(t) +i ⟩ − G +η ∥ˆx(t+1) +i +− ˆx(t) +i ∥ − ∥u(t) +i ∥∗∥x(t) +i +− ˆx(t+1) +i +∥. +We are now ready to prove Theorem 3.5, the precise version of which is stated below. +Theorem A.10 (Detailed version of Theorem 3.5). Suppose that both players employ OGD with +learning rate η = +1 +4L in a time-varying bilinear saddle-point problem, where L := maxt∈[[T]] ∥A(t)∥2. +Then, +T +� +t=1 +� +EqGap(t)(z(t)) +�2 +≤ 2L2(4 + ∥Z∥2)2 � +2D2 +Z + 4η2L2∥Z∥2 +2 + 4DZV(T) +ϵ−NE + 8η2∥Z∥2 +2V(T) +A +� +, +where (z(t))1≤t≤T is the sequence of joint strategy profiles produced by OGD. +26 + +Proof. Let us first fix a time t ∈ [[T]]. 
For convenience, we denote by BR(t) +x (x(t)) := maxx∈X {⟨x, u(t) +x ⟩}− +⟨x(t), u(t) +x ⟩, the best response gap of Player’s x strategy x(t) ∈ X, and similarly for BR(t) +y (y)(t). By +definition, it holds that EqGap(t) := max{BR(t) +x (x(t)), BR(t) +y (y(t))}. By Claim A.9, we have that +BR(t) +x (x(t)) ≤ 1 +η∥ˆx(t+1) − ˆx(t)∥2 + ∥u(t) +x ∥2∥x(t) − ˆx(t+1)∥2 +(35) +≤ 4L∥ˆx(t+1) − ˆx(t)∥2 + L∥Y∥2∥x(t) − ˆx(t+1)∥2 +(36) +≤ L (4 + ∥Z∥2) +� +∥x(t) − ˆx(t)∥2 + ∥x(t) − ˆx(t+1)∥2 +� +, +(37) +where (35) follows from Claim A.9 for G = 1 (since the squared Euclidean regularizer φx : x �→ 1 +2∥x∥2 +2) +is 1-smooth; (36) uses the fact that η := +1 +4L and ∥u(t) +x ∥2 = ∥ − A(t)y(t)∥2 ≤ L∥Y∥; and (37) follows +from the triangle inequality. A similar derivation shows that +BR(t) +y (y(t)) ≤ L(4 + ∥Z∥) +� +∥y(t) − ˆy(t)∥2 + ∥y(t) − ˆy(t+1)∥2 +� +. +(38) +Thus, +T +� +t=1 +� +EqGap(t)(z(t)) +�2 += +T +� +t=1 +� +max{BR(t) +x (x(t)), BR(t) +y (y(t))} +�2 +≤ +T +� +t=1 +�� +BR(t) +x (x(t)) +�2 ++ +� +BR(t) +y (y(t)) +�2� +≤ 2L2(4 + ∥Z∥2)2 +T +� +t=1 +� +∥z(t) − ˆz(t)∥2 +2 + ∥z(t) − ˆz(t+1)∥2 +2 +� +, +(39) +where the last bound uses (37) and (38). Combining (39) with Theorem A.6 concludes the proof. +A.1.5 +Variation-Dependent Regret Bounds +Here we state an important implication of Theorem 3.5 for deriving variation-dependent regret +bounds in time-varying bilinear saddle-point problems; cf. [Zha+22]. +Corollary A.11 (Detailed version of Corollary 3.7). In the setup of Theorem 3.4, it holds that +Reg(T) +x +≤ D2 +X +η ++ 8ηL2D2 +Z + ηL2∥Y∥2 +2 + 16η3L4∥Z∥2 +2 + 16ηL2DZV(T) +NE + (2η∥Y∥2 +2 + 32η3L2∥Z∥2 +2)V(T) +A , +and +Reg(T) +y +≤ D2 +Y +η ++ 8ηL2D2 +Z + ηL2∥X∥2 +2 + 16η3L4∥Z∥2 +2 + 16ηL2DZV(T) +NE + (2η∥X∥2 +2 + 32η3L2∥Z∥2 +2)V(T) +A . +Proof. First, applying Lemma 3.1 under x(1,⋆) = · · · = x(T,⋆), we have +Reg(T) +x +≤ D2 +X +η ++ ηL2∥Y∥2 +2 + 2ηL2 +T +� +t=2 +∥y(t) − y(t−1)∥2 +2 + 2η∥Y∥2 +2 +T +� +t=2 +∥A(t) − A(t−1)∥2 +2, +(40) +27 + +and similarly, +Reg(T) +y +≤ D2 +Y +η ++ ηL2∥X∥2 +2 + 2ηL2 +T +� +t=2 +∥x(t) − x(t−1)∥2 +2 + 2η∥X∥2 +2 +T +� +t=2 +∥A(t) − A(t−1)∥2 +2. +(41) +Now, by Theorem 3.4 we have +T +� +t=1 +� +∥z(t) − ˆz(t)∥2 +2 + ∥z(t) − ˆz(t+1)∥2 +2 +� +≤ 2D2 +Z + 4η2L2∥Z∥2 +2 + 4DZV(T) +NE + 8η2∥Z∥2 +2V(T) +A . +(42) +Further, +T +� +t=1 +� +∥z(t) − ˆz(t)∥2 +2 + ∥z(t) − ˆz(t+1)∥2 +2 +� +≥ +T +� +t=1 +� +∥x(t) − ˆx(t)∥2 +2 + ∥x(t) − ˆx(t+1)∥2 +2 +� +≥ 1 +2 +T +� +t=2 +∥x(t) − x(t−1)∥2 +2. +Combining this bound with (42) and (41) gives the claimed regret bound on Reg(T) +y +, and a similar +derivation also gives the claimed bound on Reg(T) +x . +A.1.6 +Meta-Learning +We next provide the implication of Theorem 3.5 in the meta-learning setting. We first make a +remark regarding the effect of the prediction of OGD to Theorem 3.5, and how that relates to an +assumption present in [Har+22]. +Remark A.12 (Improved predictions). Throughout Section 3.1, we have considered the standard +prediction m(t) +x +:= u(t−1) +x += −A(t−1)y(t−1) for t ≥ 2, and similarly for Player y. It is easy to see +that using the predictions +m(t) +x := −A(t)y(t−1) and m(t) +y +:= (A(t))⊤x(t−1) +(43) +for t ≥ 1 (where z(0) := ˆz(1)) entirely removes the dependency on V(T) +A +on all our convergence +bounds. While such a prediction cannot be implemented in the standard online learning model, +there are settings in which we might know the sequence of matrices in advance; the meta-learning +setting offers such examples, and indeed, Harris et al. [Har+22] use the improved prediction of (43). 
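To make the two prediction choices concrete, we sketch below a single OGD step for Player x over the probability simplex, in which case the update (13) reduces to a Euclidean projection. We stress that this is only an illustrative Python/NumPy sketch under our own conventions: the helpers project_simplex and ogd_step_x, and the way the opponent's strategies are passed in, are hypothetical and are not part of any implementation referenced in this paper.

import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex (sort-and-threshold rule).
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, v.size + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def ogd_step_x(x_hat, A_t, A_prev, y_t, y_prev, eta, improved=True):
    # Prediction m_x^(t): the improved choice of (43) uses the current payoff matrix,
    # m_x^(t) = -A^(t) y^(t-1), whereas the standard choice uses the previously observed
    # utility, m_x^(t) = u_x^(t-1) = -A^(t-1) y^(t-1).
    m_x = -(A_t if improved else A_prev) @ y_prev
    x_t = project_simplex(x_hat + eta * m_x)           # primary iterate x^(t) in (13)
    u_x = -A_t @ y_t                                    # utility observed once y^(t) is played
    x_hat_next = project_simplex(x_hat + eta * u_x)     # secondary iterate for round t + 1
    return x_t, x_hat_next

Player y's step is symmetric, with (A^(t))^T x^(t-1) in place of -A^(t) y^(t-1).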
+Proposition A.13 (Meta-learning). Suppose that both players employ OGD with learning rate η = +1 +4L, +where L := maxh∈[[H]] ∥A(h)∥2, and the prediction of (43) in a meta-learning bilinear saddle-point +problem with H ∈ N games, each repeated for m ∈ N consecutive iterations. Then, for an average +game, +� +P +Hϵ2 + P ′V(H) +NE +Hϵ2 +� +(44) +iterations suffice to reach an ϵ-approximate Nash equilibrium, where P := 4L2(4 + ∥Z∥2)2D2 +Z, +P ′ := 8L2(4 + ∥Z∥)2DZ, and +V(H) +NE := +inf +z(h,⋆)∈Z(h,⋆),∀h∈[[H]] +H−1 +� +h=1 +∥z(h+1,⋆) − z(h,⋆)∥2. +28 + +The proof is a direct application of Theorem A.10, where we remark that the term depending +on V(T) +A +and the term 4η2L2∥Z∥2 +2 from Theorem A.10 are eliminated because of the improved +prediction of Remark A.12. The first term in the iteration complexity bound (44) vanishes in +the meta-learning regime—as the number of games increases H ≫ 1—while the second term is +proportional to V(H) +NE +H , a natural similarity measure; (44) always recovers the m−1/2 rate, but offers +significant gains if the games as similar, in the sense that V(H) +NE +H +≪ 1. It is worth noting that, unlike +the similarity measure derived in [Har+22], V(H) +NE +H +depends on the order of the games. We further +remark that Proposition A.13 can be readily extended even if each game in the meta-learning +sequence is not repeated for the same number of iterations. +A.1.7 +General Variational Inequalities +Although our main focus in this paper is on the convergence of learning algorithms in time-varying +games, our techniques could also be of interest for solving (static) general variational inequality +(VI) problems. +In particular, let F : Z → Z be a single-valued operator. Solving general VIs is well-known to +be computationally intractable, and so instead focus has been on identifying broad subclasses that +elude those intractability barriers (see our overview in Section 1.2). Our framework in Section 3.1 +motivates introducing the following measure of complexity for a VI problem: +C(F) := +inf +z(1,⋆),...,z(T,⋆)∈Z +T−1 +� +t=1 +∥z(t+1,⋆) − z(t,⋆)∥2, +(45) +subject to +DReg(T) +z +(z(1,⋆), . . . , z(T,⋆)) ≥ 0 ⇐⇒ +T +� +t=1 +⟨z(t) − z(t,⋆), F(z(t))⟩ ≥ 0. +(46) +In words, (45) expresses the infimum first-order variation that a sequence of comparators must have +in order to guarantee nonnegative dynamic regret (46); it is evident that (46) always admits a feasible +sequence, namely s(T) +z +:= (z(t))1≤t≤T . We note that, in accordance to our results in Section 3.1, one +can also consider an approximate version of the complexity measure (45), which could behave much +more favorably (recall Proposition 3.3). +Now in a (static) bilinear saddle-point problem, it holds that C(F) = 0 given that there exists a +static comparator that guarantees nonnegativity of the dynamic regret. More broadly, our techniques +imply O(poly(1/ϵ)) iteration-complexity bounds for any VI problem such that C(F) ≤ CT 1−ω, for a +time-independent parameter C > 0 and ω ∈ (0, 1]: +Proposition A.14. Consider a variational inequality problem described with the operator F : Z → Z +such that F is L-Lipschitz continuous, in the sense that ∥F(z) − F(z′)∥2 ≤ L∥z − z′∥2, and +C(F) ≤ CT 1−ω for C > 0 and ω ∈ (0, 1]. Then, OGD with learning rate η = +1 +4L reaches an ϵ-strong +solution z⋆ ∈ Z in O(ϵ−2/ω) iterations; that is, ⟨z − z⋆, F(z⋆)⟩ ≥ −ϵ for any z ∈ Z. 
+It is worth comparing (45) with another natural complexity measure, namely infz⋆∈Z +�T +t=1⟨z(t)− +z⋆, F(z(t))⟩; the latter measures how negative (external) regret can be, and has already proven +useful in certain settings that go bilinear saddle-point problems [YM22], although unlike (45), it +29 + +does not appear to be useful in characterizing time-varying bilinear saddle-point problems. In this +context, O(poly(1/ϵ)) iteration-complexity bounds can also be established whenever +• infz⋆∈Z +�T +t=1⟨z(t) − z⋆, F(z(t))⟩ ≥ −CT 1−ω for a time-invariant C > 0, or +• infz⋆∈Z +�T +t=1⟨z(t) − z⋆, F(z(t))⟩ ≥ −C �T−1 +t=1 ∥z(t+1) − z(t)∥2 +2, for a sufficiently small C > 0. +Following [YM22], identifying VIs that satisfy those relaxed conditions but not the MVI property +is an interesting direction. In particular, it is important to understand if those relaxations can shed +led light into the convergence properties of OGD in Shapley’s two-player zero-sum stochastic games. +A.2 +Proofs from Section 3.2 +In this subsection, we provide the proofs from Section 3.2, leading to our main result in Theorem 3.8. +Let us first introduce some additional notation. We let f(t) : X × Y → R be a continuously +differentiable function for any t ∈ [[T]]. We recall that in Section 3.2 it is assumed that the objective +function changes after m ∈ N (consecutive) repetitions, which is akin to the meta-learning setting. +Analogously to our setup for bilinear saddle-point problems (Section 3.1), it is assumed that Player +x is endeavoring to minimizing the objective function, while Player y is trying to maximize it. We +will denote by Reg(T) +L,x(x⋆) := �T +t=1⟨x(t) − x⋆, −u(t) +x ⟩ and Reg(T) +L,y(y⋆) := �T +t=1⟨y⋆ − y(t), u(t) +y ⟩, where +u(t) +x +:= −∇xf(x(t), y(t)) and u(t) +y +:= ∇yf(x(t), y(t)) for any t ∈ [[T]]; similar notation is used for +DReg(T) +L,x, DReg(T) +L,y. +Furthermore, we let s(T) +z += ((x(t,⋆), y(t,⋆)))1≤t≤T , so that x(t,⋆) = x(h,⋆) and y(t,⋆) = y(h,⋆) for +any t ∈ [[T]] such that ⌊(t − 1)/m⌋ = h ∈ [[H]]. The first important step in our analysis is that, +following the proof of Lemma 3.1, +DReg(T) +L,x(s(T) +x ) ≤ 1 +2η +H +� +h=1 +� +∥ˆx(h,1) − x(h,⋆)∥2 +2 − ∥ˆx(h,m+1) − x(h,⋆)∥2 +2 +� ++ η +T +� +t=1 +∥u(t) +x − m(t) +x ∥2 +2 +− 1 +2η +T +� +t=1 +� +∥x(t) − ˆx(t)∥2 +2 + ∥x(t) − ˆx(t+1)∥2 +2 +� +, +(47) +where ˆx(h,k) := ˆx((h−1)×m)+k) for any (h, k) ∈ [[H]] × [[m]], ˆx(h,m+1) := ˆx(h+1,1) for h ∈ [[H − 1]], and +ˆx(H,m+1) := ˆx(T+1). Similarly, +DReg(T) +L,y(s(T) +y +) ≤ 1 +2η +H +� +h=1 +� +∥ˆy(h,1) − y(h,⋆)∥2 +2 − ∥ˆy(h,m+1) − y(h,⋆)∥2 +2 +� ++ η +T +� +t=1 +∥u(t) +y − m(t) +y ∥2 +2 +− 1 +2η +T +� +t=1 +� +∥y(t) − ˆy(t)∥2 +2 + ∥y(t) − ˆy(t+1)∥2 +2 +� +. +(48) +Next, we will use the following key observation, which lower bounds the sum of the players’ +(external) regrets under strong convexity-concavity. +Lemma A.15. Suppose that f : X × Y → R is a µ-strongly convex-concave function with respect to +∥ · ∥2. Then, for any Nash equilibrium z⋆ = (x⋆, y⋆) ∈ Z, +Reg(m) +L,x(x⋆) + Reg(m) +L,y (y⋆) ≥ µ +2 +m +� +t=1 +∥z(t) − z⋆∥2 +2. +30 + +Proof. First, by µ-strong convexity of f(x, ·), we have that for any time t ∈ [[m]], +⟨x(t) − x⋆, ∇xf(x(t), y(t))⟩ ≥ f(x(t), y(t)) − f(x⋆, y(t)) + µ +2 ∥x(t) − x⋆∥2 +2. +(49) +Similarly, by µ-strong concavity of f(·, y), we have that for any time t ∈ [[m]], +⟨y⋆ − y(t), ∇yf(x(t), y(t))⟩ ≥ f(x(t), y⋆) − f(x(t), y(t)) + µ +2 ∥y(t) − y⋆∥2 +2. +(50) +Further, for any Nash equilibrium (x⋆, y⋆) ∈ Z it holds that f(x(t), y⋆) ≥ f(x(t), y(t)) ≥ f(x⋆, y(t)). 
+Combining this fact with (49) and (50) and summing over all t ∈ [[m]] gives the statement. +In turn, this readily implies the following lower bound for the dynamic regret. +Lemma A.16. Suppose that f(h) : X × Y → R is a µ-strongly convex-concave function with respect +to ∥ · ∥2, for any h ∈ [[H]]. Consider a sequence s(T) +z += ((x(t,⋆), y(t,⋆)))1≤t≤T , so that x(t,⋆) = x(h,⋆) +and y(t,⋆) = y(h,⋆) for any t ∈ [[T]] such that ⌊(t − 1)/m⌋ = h ∈ [[H]]. If (x(h,⋆), y(h,⋆)) ∈ Z is a Nash +equilibrium of f(h), +DReg(T) +L,x(s(T) +x ) + DReg(T) +L,y(s(T) +y +) ≥ µ +2 +H +� +h=1 +m +� +k=1 +∥z(h,k) − z(h,⋆)∥2 +2, +where z(h,k) := z((h−1)×m)+k) for any (h, k) ∈ [[H]] × [[m]]. +We next combine this with the following monotonicity property of OGD: If z⋆ is a Nash equilibrium, +∥ˆz(t) − z⋆∥2 is a decreasing function in t [Har+22, Proposition C.10]. This leads to the following +refinement of Lemma A.16. +Lemma A.17. Under the assumptions of Lemma A.16, if η ≤ +1 +2µ, +DReg(T) +L,x(s(T) +x ) + DReg(T) +L,y(s(T) +y +) + 1 +4η +T +� +t=1 +∥z(t) − ˆz(t+1)∥2 +2 ≥ µm +4 +H +� +h=1 +∥ˆz(h,m+1) − z(h,⋆)∥2 +2. +Proof. By Lemma A.16, +DReg(T) +L,x(s(T) +x ) + DReg(T) +L,y(s(T) +y +) + 1 +4η +T +� +t=1 +∥z(t) − ˆz(t+1)∥2 +2 ≥ µ +2 +H +� +h=1 +m +� +k=1 +∥z(h,k) − z(h,⋆)∥2 +2 ++ 1 +4η +T +� +t=1 +∥z(t) − ˆz(t+1)∥2 +2 +≥ µ +4 +H +� +h=1 +m +� +k=1 +∥ˆz(h,k+1) − z(h,⋆)∥2 +2 +(51) +≥ µm +4 +H +� +h=1 +∥ˆz(h,m+1) − z(h,⋆)∥2 +2, +(52) +where (51) uses that +1 +4η ≥ µ +2 along with Young’s inequality and triangle inequality, and (52) follows +from [Har+22, Proposition C.10]. +31 + +Armed with this important lemma, we are ready to establish our main result (Theorem 3.8), the +detailed version of which is given below. We first point out that a function f : X × Y → R is said +to be L-smooth if ∥F(z) − F(z′)∥2 ≤ L∥z − z′∥2, where F(z) := (∇xf(x, y), −∇yf(x, y)). +Theorem A.18 (Detailed version of Theorem 3.8). Let f(h) : X × Y be a µ-strongly convex-concave +and L-smooth function, for h ∈ [[H]]. Suppose that both players employ OGD with learning rate +η = min +� +1 +8L, 1 +2µ +� +for T repetitions, where T = m × H and m ≥ +2 +ηµ. Then, +T +� +t=1 +� +∥z(t) − ˆz(t)∥2 +2 + ∥z(t) − ˆz(t+1)∥2 +2 +� +≤ 4D2 +Z + 8η2∥F(z(1))∥2 +2 + 8S(H) +NE + 16η2V(H) +∇f , +where S(H) +NE and V(H) +∇f are defined in (9) and (11). Thus, �T +t=1 +� +EqGap(t)(z(t)) +�2 += O(1 + S(H) +NE + +V(H) +∇f ). +Proof. Combining Lemma A.17 with (47) and (48), +0 ≤ 1 +2η +H +� +h=1 +� +∥ˆz(h,1) − z(h,⋆)∥2 +2 − 2 + ηµm +2 +∥ˆz(h,m+1) − z(h,⋆)∥2 +2 +� ++η +T +� +t=1 +∥u(t) +z − m(t) +z ∥2 +2 − 1 +4η +T +� +t=1 +� +∥z(t) − ˆz(t)∥2 +2 + ∥z(t) − ˆz(t+1)∥2 +2 +� +, +for a sequence of Nash equilibria (z(h,⋆))1≤h≤H, where we used the notation u(t) +z +:= (u(t) +x , u(t) +y ) and +m(t) +z +:= (m(t) +x , m(t) +y ). Now we bound the first term of the right-hand side above as +1 +2η +H +� +h=1 +� +∥ˆz(h,1) − z(h,⋆)∥2 +2 − 2∥ˆz(h,m+1) − z(h,⋆)∥2 +2 +� +≤ +1 +2η∥ˆz(1,1) − z(1,⋆)∥2 +2 + 1 +2η +H−1 +� +h=1 +� +∥ˆz(h+1,1) − z(h+1,⋆)∥2 +2 − 2∥ˆz(h+1,1) − z(h,⋆)∥2 +2 +� +, +where we used the fact that m ≥ +2 +ηµ and ˆz(h,m+1) = ˆz(h+1,1), for h ∈ [[H − 1]]. Hence, continuing +from above, +1 +2η +H +� +h=1 +� +∥ˆz(h,1) − z(h,⋆)∥2 +2 − 2∥ˆz(h,m+1) − z(h,⋆)∥2 +2 +� +≤ 1 +2η∥ˆz(1,1)−z(1,⋆)∥2 +2+ 1 +η +H−1 +� +h=1 +∥z(h+1,⋆)−z(h,⋆)∥2 +2, +since ∥ˆz(h+1,1) − z(h+1,⋆)∥2 +2 ≤ 2∥ˆz(h+1,1) − z(h,⋆)∥2 +2 + 2∥z(h,⋆) − z(h+1,⋆)∥2 +2, by the triangle inequality +and Young’s inequality. 
Moreover, for t ≥ 2, +∥u(t) +z −u(t−1) +z +∥2 +2 = ∥F (t)(z(t))−F (t−1)(z(t−1))∥2 +2 ≤ 2L2∥z(t)−z(t−1)∥2 +2+2∥F (t)(z(t−1))−F (t−1)(z(t−1))∥2 +2, +by L-smoothness. As a result, +T−1 +� +t=1 +∥u(t+1) +z +− u(t) +z ∥2 +2 ≤ 2L2 +T−1 +� +t=1 +∥z(t) − z(t−1)∥2 +2 + 2V(T) +∇f , +and the claimed bound on the second-order path length follows. Finally, the second claim of the +theorem follows from Claim A.9 using convexity-concavity, analogously to Theorem 3.5. +32 + +We point out that the improved prediction mechanism described in Remark A.12 can also be used +in this setting as well, resulting in the elimination of the variation measure (11) from Theorem A.18. +We conclude this subsection by pointing out an improved variation-dependent regret bound, which +follows directly from Theorem A.18 (cf. Corollary A.11). +Corollary A.19. In the setup of Theorem A.18, +Reg(T) +L,x, Reg(T) +L,y ≤ D2 +Z +η ++ 4ηL2D2 +Z + 32η3L2∥F(z(1)∥2 +2 + 32ηL2S(H) +NE + (64η3L2 + 2η)V(H) +∇f . +Thus, setting the learning rate optimally implies that Reg(T) +L,x, Reg(T) +L,y = O +�� +S(H) +NE + V(H) +∇f +� +. +A.3 +Proofs from Section 3.3 +In this subsection, we provide the proofs from Section 3.3. +A.3.1 +Potential Games +We first characterize the behavior of GD in time-varying potential games. Below we give the formal +definition of an unweighted potential game, represented in normal form. +Definition A.20 (Potential game). A game admits a potential if there exists a function Φ : +× +n +i=1 Xi → R such that for any Player i ∈ [[n]], any joint strategy profile x−i ∈×i′̸=i Xi′, and any +pair of strategies xi, x′ +i ∈ Xi, +Φ(xi, x−i) − Φ(x′ +i, x−i) = ui(xi, x−i) − ui(x′ +i, x−i). +We also recall that GD is equivalent to OGD under the prediction m(t) +x += 0 for all t. The key +ingredient in the proof of Theorem 3.10 is the following key bound on the second-order path length +of the dynamics. +Proposition A.21. Suppose that each player employs GD with a sufficiently small learning rate +η > 0 and initialization (x(1) +1 , . . . , x(1) +n ) ∈× +n +i=1 Xi. Then, +1 +2η +T +� +t=1 +n +� +i=1 +∥x(t+1) +i +− x(t) +i ∥2 +2 ≤ +T +� +t=1 +� +Φ(t)(x(t+1) +1 +, . . . , x(t+1) +n +) − Φ(t)(x(t) +1 , . . . , x(t) +n ) +� +. +(53) +This bound can be derived from [Ana+22b, Theorem 4.3]. We note that if Φ(1) = Φ(2) = · · · = +Φ(T), the right-hand side of (53) telescops, thereby implying that the second-order path-length is +bounded. More generally, the right-hand side of (53) can be upper bounded by +2Φmax + +T−1 +� +t=1 +� +Φ(t)(x(t+1) +1 +, . . . , x(t+1) +n +) − Φ(t+1)(x(t+1) +1 +, . . . , x(t+1) +n +) +� +≤ 2Φmax + V(T) +Φ , +(54) +where Φmax is an upper bound on |Φ(t)(·)| for any t ∈ [[T]], and V(T) +Φ +is the variation measure of the +potential functions we introduced in Section 3.3. Furthermore, we know that the Nash equilibrium +gap in the t-th potential game can be bounded in terms of �n +i=1 ∥x(t+1) +i +− x(t) +i ∥2 (Claim A.9). As a +result, combining this property with Proposition A.21 and (54) establishes Theorem 3.10. +33 + +A.3.2 +General-Sum Games +We next turn out attention to general-sum multi-player games using the bilinear formulation +presented in Section 3.3. To establish Property 3.11, let us first define the regret of any Player +i ∈ [[n]] as +Reg(T) +i +(¯x⋆ +i ) := +T +� +t=1 +⟨¯x⋆ +i − ¯x(t) +i , (A(t) +i )⊤µ(t)⟩, +where ¯x⋆ +i ∈ ¯Xi, so that �n +i=1 Reg(T) +i +is easily seen to be equal to the regret of the maximizing +player in (12). 
Further, the dynamic regret of the mediator—the minimizing player in (12)—can be +expressed as +DReg(T) +µ (µ(1,⋆), . . . , µ(T,⋆)) := +T +� +t=1 +⟨µ(t) − µ(t,⋆), +n +� +i=1 +A(t) +i ¯x(t) +i ⟩. +Property 3.11. Suppose that Ξ ∋ µ(t,⋆) is a correlated equilibrium of the game at any time t ∈ [[T]]. +Then, +DReg(T) +µ (µ(1,⋆), . . . , µ(T,⋆)) + +n +� +i=1 +Reg(T) +i +≥ 0. +Proof. We have that +DReg(T) +µ ++ +n +� +i=1 +Reg(T) +i +(¯x⋆ +i ) = +n +� +i=1 +T +� +t=1 +⟨¯x⋆ +i , (A(t) +i )⊤µ(t)⟩ − +T +� +t=1 +⟨µ(t,⋆), +n +� +i=1 +A(t) +i ¯x(t) +i ⟩. +Now for any correlated equilibrium µ(t,⋆) of the t-th game we have that ⟨µ(t,⋆), A(t) +i ¯x(t) +i ⟩ ≤ 0 for any +Player i ∈ [[n]], ¯xi ∈ ¯ +Xi, and time t ∈ [[T]], which in turn implies that − �T +t=1⟨µ(t,⋆), �n +i=1 A(t) +i ¯x(t) +i ⟩ ≥ +0. Moreover, �n +i=1 max¯x⋆ +i ∈ ¯ +Xi +�T +t=1⟨¯x⋆ +i , (A(t) +i )⊤µ(t)⟩ ≥ 0 given that, by definition, 0 ∈ ¯ +Xi. This +concludes the proof. +Next, we provide the main implication of Theorem 3.12 in the meta-learning setting, which +is similar to the meta-learning guarantee of Proposition A.13 we established earlier in two-player +zero-sum games. Below, we denote by Ξ(h,⋆) the set of correlated equilibria of the h-th game in the +meta-learning sequence. +Corollary A.22 (Meta-learning general-sum). Suppose that each player employ OGD in (12) with a +suitable learning rate η > 0 and the prediction of (43) in a meta-learning general-sum problem with +H ∈ N games, each repeated for m ∈ N consecutive iterations. Then, for an average game, +O +� +1 +ϵ2H + V(H) +CE +ϵ2H +� +(55) +iterations suffice to reach an ϵ-approximate correlated equilibrium, where +V(H) +CE := +inf +µ(h,⋆)∈Ξ(h,⋆) ∥µ(h+1,⋆) − µ(h,⋆)∥2. +34 + +In particular, in the meta-learning regime, H ≫ 1, the iteration-complexity bound (55) is +dominated by the (algorithm-independent) similarity metric of the correlated equilibria +V(H) +CE +H . +Corollary A.22 establishes significant gains when V(H) +CE +H +≪ 1. +Finally, we conclude this subsection by providing a variation-dependent regret bound in general- +sum multi-player games. To do so, we combine Corollary 3.7 with Theorem 3.12, leading to the +following guarantee. +Corollary A.23 (Regret in general-sum games). In the setup of Theorem 3.12, +Reg(T) +µ , Reg(T) +i += O +�1 +η + η +� +V(T) +CE + V(T) +A +�� +, +for any Player i ∈ [[n]]. +In particular, if one selects optimally the learning rate, Corollary A.23 implies that the individual +regret of each player is bounded by O +�� +V(T) +CE + V(T) +A +� +. We note again that there are techniques +that would allow (nearly) recovering such regret guarantees without having to know the variation +measures in advance [Zha+22]. +A.4 +Proofs from Section 3.4 +Finally, in this subsection we present the proofs omitted from Section 3.4. We begin with Proposi- +tion 3.13, the statement of which is recalled below. We first recall that a regularizer φx, 1-strongly +convex with respect to a norm ∥ · ∥, is said to be G-smooth if ∥∇φx(x) − ∇φx(x′)∥∗ ≤ G∥x − x′∥, +for all x, x′. +Proposition 3.13. Suppose that both players in a (static) two-player zero-sum game employ OMD +with a smooth regularizer. Then, DReg(T) +x , DReg(T) +y += O( +√ +T). +Proof. 
First, using Claim A.9, it follows that the dynamic regret DReg(T) +x +of Player x up to time T +can be bounded as +T +� +t=1 +� +max +x(t,⋆)∈X +� +⟨x(t,⋆), u(t) +x ⟩ +� +− ⟨x(t), u(t) +x ⟩ +� +≤ +T +� +t=1 +� �G +η + ∥u(t) +x ∥∗ +� +∥x(t) − ˆx(t+1)∥ + G +η ∥x(t) − ˆx(t)∥ +� +, +(56) +where G > 0 is the smoothness parameter of the regularizer, and η > 0 is the learning rate. We further +know that �T +t=1 +� +∥x(t) − ˆx(t)∥2 + ∥x(t) − ˆx(t+1)���2� += O(1) for any instance of OMD in a two-player +zero-sum game [Ana+22b], which in turn implies that �T +t=1 +� +∥x(t) − ˆx(t)∥ + ∥x(t) − ˆx(t+1)∥ +� += +O( +√ +T) by Cauchy-Schwarz. Thus, combining with (56) we have shown that DReg(T) +x += O( +√ +T). +Similar reasoning yields that DReg(T) +y += O( +√ +T), concluding the proof. +In contrast, we next show that such a result is precluded in general-sum games. In particular, +we note that the following computational-hardness result holds beyond the online learning setting. +35 + +It should be stressed that without imposing computational or memory restrictions there are trivial +online algorithms that guarantee even O(1) dynamic regret by first exploring the payoff matrices +and then computing a Nash equilibrium; we suspect that under the memory limitations imposed in +our work, as in [DDK11], there could be unconditional information-theoretic lower bounds, but that +is left for future work. +Proposition 3.15. Unless PPAD ⊆ P, any polynomial-time algorithm incurs �n +i=1 DReg(T) +i += Ω(T), +even if n = 2, where Ω(·) here hides polynomial factors. +Proof. We will use the fact that computing a Nash equilibrium in two-player (normal-form) games +to a sufficiently small accuracy ϵ := 1/poly is PPAD-hard [CDT09]. Indeed, suppose that there exist +polynomial-time algorithms that always guarantee that �n +i=1 DReg(T) +i +≤ ϵT, where n := 2. Then, +this implies that there exists a time t ∈ [[T]] such that +max +x(t,⋆) +1 +∈X1 +⟨x(t,⋆) +1 +, u(t) +1 ⟩ − ⟨x(t) +1 , u(t) +1 ⟩ + +max +x(t,⋆) +2 +∈X2 +⟨x(t,⋆) +2 +, u(t) +2 ⟩ − ⟨x(t) +2 , u(t) +2 ⟩ ≤ ϵ, +which in turn implies that (x(t) +1 , x(t) +2 ) is an ϵ-approximate Nash equilibrium. Further, such a time +t ∈ [[T]] can be identified in polynomial time. But this would imply that PPAD ⊆ P, concluding the +proof. +Finally, we provide the proof of Theorem 3.16, the detailed version of which is provided below. +Theorem A.24 (Detailed version of Theorem 3.16). Consider an n-player game such that ∥∇xiui(z)− +∇xiui(z′)∥2 ≤ L∥z − z′∥2, where z, z′ ∈ × +n +i=1 Xi, for any Player i ∈ [[n]]. Then, if all players +employ OGD with learning rate η > 0 it holds that +1. �n +i=1 K-DReg(T) +i += O(K√nL) for η = Θ +� +1 +L√n +� +; +2. K-DReg(T) +i += O(K3/4T 1/4n1/4√ +L), for any Player i ∈ [[n]], for η = Θ +� +K1/4 +n1/4L1/2 +� +. +Proof. First, applying Lemma 3.1 subject to the constraint that �T−1 +t=1 1{x(t+1,⋆) ̸= x(t,⋆)} ≤ K − 1 +gives that for any Player i ∈ [[n]], +K-DReg(T) +i +≤ D2 +Xi +2η (2K − 1) + η∥u(1) +i ∥2 +2 + η +T−1 +� +t=1 +∥u(t+1) +i +− u(t) +i ∥2 +2 − 1 +4η +T−1 +� +t=1 +∥x(t+1) +i +− x(t) +i ∥2 +2. +(57) +Further, by L-smoothness we have that +∥u(t+1) +i +− u(t) +i ∥2 +2 = ∥∇xiui(z(t+1)) − ∇xiui(z(t))∥2 +2 ≤ L2 +n +� +i=1 +∥x(t+1) +i +− x(t) +i ∥2 +2, +for any t ∈ [[T − 1]], where (x(t) +1 , . . . , x(t) +n ) = z(t) ∈× +n +i=1 Xi is the joint strategy profile at time +t. 
Thus, summing (57) over all i ∈ [[n]] and taking η ≤ +1 +2L√n implies that �n +i=1 K-DReg(T) +i +≤ +2K−1 +2η +�n +i=1 D2 +Xi + η �n +i=1 ∥u(1) +i ∥2 +2, yielding the first part of the statement. The second part follows +directly from (57) using the stability property of OGD: ∥x(t+1) +i +− x(t) +i ∥2 = O(η), for any time +t ∈ [[T − 1]]. +36 + +B +Experimental Examples +Finally, although the focus of this paper is theoretical, in this section we provide some illustrative +experimental examples. In particular, Appendix B.1 contains experiments on time-varying potential +games, while Appendix B.2 focuses on time-varying (two-player) zero-sum games. For simplicity, we +will be assuming that each game is represented in normal form. +B.1 +Time-Varying Potential Games +Here we consider time-varying 2-player identical-interest games. We point out that such games +are potential games (recall Definition A.20), and as such they are indeed amenable to our theory +in Section 3.3. +In our first experiment, we first sampled two matrices A, P ∈ Rdx×dy, where dx = dy = 1000. +Then, we defined each payoff matrix as A(t) := A(t−1) + Pt−α for t ≥ 1, where A(0) := A. Here, +α > 0 is a parameter that controls the variation of the payoff matrices. In this time-varying setup, +we let each player employ (online) GD with learning rate η := 0.1. The results obtained under +different random initializations of matrices A and P are illustrated in Figure 1. +Next, we operate in the same time-varying setup but each player is now employing multiplicative +weights update (MWU), instead of gradient descent, with η := 0.1. As shown in Figure 2, while the +cumulative equilibrium gap is much larger compared to using GD (Figure 1), the dynamics still +appear to be approaching equilibria, although our theory does not cover MWU. We suspect that +theoretical results such as Theorem 3.10 should hold for MWU as well, but that has been left for +future work. +In our third experiment for identical-interest games, we again first sampled two matrices +A, P ∈ Rdx×dy, where dx = dy = 1000. Then, we defined A(t) := A(t−1) + ϵP for t ≥ 1, where +A(0) := A. Here, ϵ > 0 is the parameter intended to capture the variation of the payoff matrices. +The results obtained under different random initializations of A and P are illustrated in Figure 3. +As an aside, it is worth pointing out that this particular setting can be thought of as a game in +which the variation in the payoff matrices is controlled by another learning agent. In particular, our +theoretical results are helpful for characterizing the convergence properties of two-timescale learning +algorithms, in which the deviation of the game is controlled by a player constrained to be updating +its strategies with a much smaller learning rate. +B.2 +Time-Varying Zero-Sum Games +We next conduct experiments on time-varying bilinear saddle-point problems when players are +employing OGD. Such problems have been studied extensively in Section 3.1 from a theoretical +standpoint. +First, we sampled two matrices A, P ∈ Rdx×dy, where dx = dy = 10; here we consider lower- +dimensional payoff matrices compared to the experiments in Appendix B.1 for convenience in the +graphical illustrations. Then, we defined each payoff matrix as A(t) := A(t−1) + Pt−α for t ≥ 1, +where A(1) := A. The results obtained under different random initializations are illustrated in +Figure 4. 
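For concreteness, we include a minimal Python/NumPy sketch of this setup below. It is only meant to illustrate the experimental pipeline: the helper names (project_simplex, equilibrium_gap, run_experiment), the Gaussian initialization, the convention m^(1) := 0 for the first prediction, and the indexing of the first payoff matrix are our own illustrative choices, and the sketch is not the code used to produce the figures.

import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex.
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, v.size + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def equilibrium_gap(A, x, y):
    # Best-response (Nash) gap of (x, y) in the zero-sum game with payoff matrix A.
    value = x @ A @ y
    return max(value - np.min(A @ y), np.max(A.T @ x) - value)

def run_experiment(d=10, T=1000, eta=0.01, alpha=1.0, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((d, d))      # initial payoff matrix, perturbed every round
    P = rng.standard_normal((d, d))
    x_hat = np.ones(d) / d
    y_hat = np.ones(d) / d
    u_x, u_y = np.zeros(d), np.zeros(d)  # predictions m^(1) := 0 at t = 1
    gaps = []
    for t in range(1, T + 1):
        A = A + P * t ** (-alpha)                   # A^(t) := A^(t-1) + P t^(-alpha)
        x = project_simplex(x_hat + eta * u_x)      # OGD primary iterates (prediction = last utility)
        y = project_simplex(y_hat + eta * u_y)
        u_x, u_y = -A @ y, A.T @ x                  # utilities observed at round t
        x_hat = project_simplex(x_hat + eta * u_x)  # OGD secondary iterates
        y_hat = project_simplex(y_hat + eta * u_y)
        gaps.append(equilibrium_gap(A, x, y))
    return gaps

gaps = run_experiment(alpha=1.0)
print(sum(g ** 2 for g in gaps))  # cumulative squared equilibrium gap after T rounds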
[Figure 1 plots omitted: each row shows Σ_{τ=1}^{t} (EG^{(τ)})², EG^{(t)}, and max(Reg_x^{(t)}, Reg_y^{(t)}) as functions of the iteration t, with one curve per value of α.]

Figure 1: The equilibrium gap and the players’ regrets in 2-player time-varying identical-interest games when both players are employing (online) GD with learning rate η := 0.1 for T := 200 iterations. Each row corresponds to a different random initialization of the matrices A, P ∈ R^{d_x×d_y}, which in turn induces a different time-varying game. Further, each figure contains trajectories corresponding to three different values of α ∈ {0.1, 0.2, 0.5}, but under the same initialization of A and P. As expected, smaller values of α generally increase the equilibrium gap since the variation of the games is more significant. Nevertheless, for all games we observe that the players are gradually approaching equilibria.

[Figure 2 plots omitted: same layout and quantities as in Figure 1.]

Figure 2: The equilibrium gap and the players’ regrets in 2-player time-varying identical-interest games when both players are employing MWU with learning rate η := 0.1 for T := 200 iterations. Each row corresponds to a different random initialization of the matrices A, P ∈ R^{d_x×d_y}, which in turn induces a different time-varying game. Further, each figure contains trajectories corresponding to three different values of α ∈ {0.1, 0.2, 0.5}, but under the same initialization of A and P. The MWU dynamics still appear to be approaching equilibria, although the cumulative equilibrium gap is much larger compared to GD (Figure 1).
[Figure 3 plots omitted: each row shows Σ_{τ=1}^{t} (EG^{(τ)})², EG^{(t)}, and max(Reg_x^{(t)}, Reg_y^{(t)}) as functions of the iteration t, with one curve per value of ϵ.]

Figure 3: The equilibrium gap and the players’ regrets in 2-player time-varying identical-interest games when both players are employing (online) GD with learning rate η := 0.1 for T := 500 iterations. Each row corresponds to a different random initialization of the matrices A, P ∈ R^{d_x×d_y}, which in turn induces a different time-varying game. Further, each figure contains trajectories from three different values of ϵ ∈ {0.1, 0.01, 0.001}, but under the same initialization of A and P. As expected, larger values of ϵ generally increase the equilibrium gap since the variation of the games is more significant. Yet, even for the largest value ϵ = 0.1, the dynamics still appear to be approaching Nash equilibria.

[Figure 4 plots omitted: each row shows Σ_{τ=1}^{t} (EG^{(τ)})², EG^{(t)}, and max(Reg_x^{(t)}, Reg_y^{(t)}) as functions of the iteration t, with one curve per value of α.]

Figure 4: The equilibrium gap and the players’ regrets in 2-player time-varying zero-sum games when both players are employing OGD with learning rate η := 0.01 and T := 1000 iterations. Each row corresponds to a different random initialization of the matrices A, P ∈ R^{d_x×d_y}, which in turn induces a different time-varying game. Further, each figure contains trajectories from three different values of α ∈ {0.7, 1, 2}, but under the same initialization of A and P. The OGD dynamics appear to be approaching equilibria, albeit with a much slower rate compared to the ones observed earlier for potential games (Figure 1).