diff --git "a/KNE1T4oBgHgl3EQfYgRo/content/tmp_files/2301.03139v1.pdf.txt" "b/KNE1T4oBgHgl3EQfYgRo/content/tmp_files/2301.03139v1.pdf.txt" new file mode 100644--- /dev/null +++ "b/KNE1T4oBgHgl3EQfYgRo/content/tmp_files/2301.03139v1.pdf.txt" @@ -0,0 +1,2425 @@ +arXiv:2301.03139v1 [math.OC] 9 Jan 2023 +A Newton-CG based augmented Lagrangian method for finding a +second-order stationary point of nonconvex equality constrained +optimization with complexity guarantees +Chuan He∗ +Zhaosong Lu∗ +Ting Kei Pong† +April 10, 2022 (Revised: September 22, 2022; December 31, 2022) +Abstract +In this paper we consider finding a second-order stationary point (SOSP) of nonconvex equality con- +strained optimization when a nearly feasible point is known. In particular, we first propose a new Newton- +CG method for finding an approximate SOSP of unconstrained optimization and show that it enjoys a +substantially better complexity than the Newton-CG method [56]. We then propose a Newton-CG based +augmented Lagrangian (AL) method for finding an approximate SOSP of nonconvex equality constrained +optimization, in which the proposed Newton-CG method is used as a subproblem solver. We show that +under a generalized linear independence constraint qualification (GLICQ), our AL method enjoys a total +inner iteration complexity of �O(ǫ−7/2) and an operation complexity of �O(ǫ−7/2 min{n, ǫ−3/4}) for finding +an (ǫ, √ǫ)-SOSP of nonconvex equality constrained optimization with high probability, which are signif- +icantly better than the ones achieved by the proximal AL method [60]. Besides, we show that it has +a total inner iteration complexity of �O(ǫ−11/2) and an operation complexity of �O(ǫ−11/2 min{n, ǫ−5/4}) +when the GLICQ does not hold. To the best of our knowledge, all the complexity results obtained in +this paper are new for finding an approximate SOSP of nonconvex equality constrained optimization +with high probability. Preliminary numerical results also demonstrate the superiority of our proposed +methods over the ones in [56, 60]. +Keywords: Nonconvex equality constrained optimization, second-order stationary point, augmented Lagrangian +method, Newton-conjugate gradient method, iteration complexity, operation complexity +Mathematics Subject Classification: 49M15, 68Q25, 90C06, 90C26, 90C30, 90C60 +1 +Introduction +In this paper we consider nonconvex equality constrained optimization problem +min +x∈Rn f(x) +s. t. c(x) = 0, +(1) +where f : Rn → R and c : Rn → Rm are twice continuously differentiable, and we assume that problem (1) +has at least one optimal solution. Since (1) is a nonconvex optimization problem, it may have many local but +non-global minimizers and finding its global minimizer is generally NP-hard. A first-order stationary point +(FOSP) of it is usually found in practice instead. Nevertheless, a mere FOSP may sometimes not suit our +needs and a second-order stationary point (SOSP) needs to be sought. For example, in the context of linear +semidefinite programming (SDP), a powerful approach to solving it is by solving an equivalent nonconvex +∗Department +of +Industrial +and +Systems +Engineering, +University +of +Minnesota, +USA +(email: +he000233@umn.edu, +zhaosong@umn.edu). The work of the second author was partially supported by NSF Award IIS-2211491. +†Department of Applied Mathematics, the Hong Kong Polytechnic University, Hong Kong, People’s Republic of China +(email: tk.pong@polyu.edu.hk). 
The work of this author was partially supported by a Research Scheme of the Research Grants +Council of Hong Kong SAR, China (Project No. T22-504/21R). +1 + +equality constrained optimization problem [17, 18]. It was shown in [18, 15] that under some mild conditions +an SOSP of the latter problem can yield an optimal solution of the linear SDP, while a mere FOSP generally +cannot. It is therefore important to find an SOSP of problem (1). +In recent years, numerous methods with complexity guarantees have been developed for finding an ap- +proximate SOSP of several types of nonconvex optimization. For example, cubic regularized Newton methods +[52, 25, 1, 22], accelerated gradient methods [23, 24], trust-region methods [34, 35, 50], quadratic regulariza- +tion method [12], second-order line-search method [57], and Newton-conjugate gradient (Newton-CG) method +[56] were developed for nonconvex unconstrained optimization. In addition, interior-point method [8] and +log-barrier method [54] were proposed for nonconvex optimization with sign constraints. The interior-point +method [8] was also generalized in [38] to solve nonconvex optimization with sign constraints and additional +linear equality constraints. Furthermore, a projected gradient descent method with random perturbations +was proposed in [47] for nonconvex optimization with linear inequality constraints. Iteration complexity was +established for these methods for finding an approximate SOSP. Besides, operation complexity measured +by the amount of fundamental operations such as gradient evaluations and matrix-vector products was also +studied in [1, 23, 34, 41, 24, 57, 22, 56]. +Several methods including trust-region methods [21, 33], sequential quadratic programming method [14], +two-phase method [9, 30, 32] and augmented Lagrangian (AL) type methods [4, 10, 58, 60] were proposed +for finding an SOSP of problem (1). However, only a few of them have complexity guarantees for finding +an approximate SOSP of (1). In particular, the inexact AL method [58] has a worst-case complexity in +terms of the number of calls to a second-order oracle. Yet its operation complexity, measured by the amount +of fundamental operations such as gradient evaluations and Hessian-vector products, is unknown. To the +best of our knowledge, the proximal AL method in [60] appears to be the only existing method that enjoys +a worst-case complexity for finding an approximate SOSP of (1) in terms of fundamental operations. In +this method, given an iterate xk and a multiplier estimate λk at the kth iteration, the next iterate xk+1 is +obtained by finding an approximate stochastic SOSP of the proximal AL subproblem: +min +x∈Rn L(x, λk; ρ) + β∥x − xk∥2/2 +for some suitable positive ρ and β using a Newton-CG method proposed in [56], where L is the AL function +of (1) defined as +L(x, λ; ρ) := f(x) + λT c(x) + ρ∥c(x)∥2/2. +Then the multiplier estimate is updated using the classical scheme, i.e., λk+1 = λk + ρc(xk+1) (e.g., see +[39, 55]). The authors of [60] studied the worst-case complexity of their proximal AL method including: (i) +total inner iteration complexity, which measures the total number of iterations of the Newton-CG method [56] +performed in their method; (ii) operation complexity, which measures the total number of gradient evaluations +and matrix-vector products involving the Hessian of the AL function that are evaluated in their method. 
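For concreteness, the sketch below (a Python illustration of ours, not code from [60]; the oracles f, grad_f, hess_f_vec, c, jac_c and hess_ci_vec are assumed to be supplied by the user) shows how the AL function L(x, λ; ρ), its gradient, and a product of its Hessian with a vector can be assembled from problem data without ever forming the Hessian explicitly, which is exactly the operation counted in (ii).

import numpy as np

# Hypothetical problem oracles (assumed given): f(x), grad_f(x), hess_f_vec(x, v),
# c(x) -> R^m, jac_c(x) -> (m, n) Jacobian, and hess_ci_vec(x, i, v) = grad^2 c_i(x) v.

def aug_lagrangian(x, lam, rho, f, c):
    cx = c(x)
    return f(x) + lam @ cx + 0.5 * rho * (cx @ cx)

def aug_lagrangian_grad(x, lam, rho, grad_f, c, jac_c):
    cx, J = c(x), jac_c(x)                       # J has shape (m, n)
    return grad_f(x) + J.T @ (lam + rho * cx)    # grad_x L(x, lam; rho)

def aug_lagrangian_hess_vec(x, lam, rho, v, hess_f_vec, c, jac_c, hess_ci_vec):
    # (grad^2_x L) v = grad^2 f(x) v + sum_i (lam_i + rho c_i(x)) grad^2 c_i(x) v
    #                  + rho * J^T (J v), using only Hessian-vector products.
    cx, J = c(x), jac_c(x)
    w = lam + rho * cx
    Hv = hess_f_vec(x, v) + rho * (J.T @ (J @ v))
    for i in range(len(cx)):
        Hv += w[i] * hess_ci_vec(x, i, v)
    return Hv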
+Under some suitable assumptions, including that a generalized linear independence constraint qualification +(GLICQ) holds at all iterates, it was established in [60] that their proximal AL method enjoys a total inner +iteration complexity of �O(ǫ−11/2) and an operation complexity of �O(ǫ−11/2 min{n, ǫ−3/4}) for finding an +(ǫ, √ǫ)-SOSP of problem (1) with high probability.1 Yet, there is a big gap between these complexities and the +iteration complexity of �O(ǫ−3/2) and the operation complexity of �O(ǫ−3/2 min{n, ǫ−1/4}) that are achieved +by the methods in [1, 24, 57, 56] for finding an (ǫ, √ǫ)-SOSP of nonconvex unconstrained optimization with +high probability, which is a special case of (1) with c ≡ 0. Also, there is a lack of complexity guarantees for +this proximal AL method when the GLICQ does not hold. It shall be mentioned that Newton-CG based AL +methods were also developed for efficiently solving various convex optimization problems (e.g., see [61, 62]), +though their complexities remain unknown. +In this paper we propose a Newton-CG based AL method for finding an approximate SOSP of problem (1) +with high probability, and study its worst-case complexity with and without the assumption of a GLICQ. In +1In fact, a total inner iteration complexity of �O(ǫ−7) and an operation complexity of �O(ǫ−7 min{n, ǫ−1}) were established +in [60] for finding an (ǫ, ǫ)-SOSP of problem (1) with high probability; see [60, Theorem 4(ii), Corollary 3(ii), Theorem 5]. +Nonetheless, they can be modified to obtain the aforementioned complexity for finding an (ǫ, √ǫ)-SOSP of (1) with high +probability. +2 + +particular, we show that this method enjoys a total inner iteration complexity of �O(ǫ−7/2) and an operation +complexity of �O(ǫ−7/2 min{n, ǫ−3/4}) for finding a stochastic (ǫ, √ǫ)-SOSP of (1) under the GLICQ, which +are significantly better than the aforementioned ones achieved by the proximal AL method in [60]. Besides, +when the GLICQ does not hold, we show that it has a total inner iteration complexity of �O(ǫ−11/2) and +an operation complexity of �O(ǫ−11/2 min{n, ǫ−5/4}) for finding a stochastic (ǫ, √ǫ)-SOSP of (1), which fills +the research gap in this topic. Specifically, our AL method (Algorithm 2) proceeds in the following manner. +Instead of directly solving problem (1), it solves a perturbed problem of (1) with c replaced by its perturbed +counterpart ˜c constructed by using a nearly feasible point of (1) (see (25) for details). At the kth iteration, +an approximate stochastic SOSP xk+1 of the AL subproblem of this perturbed problem is found by our +newly proposed Newton-CG method (Algorithm 1) for a penalty parameter ρk and a truncated Lagrangian +multiplier λk, which results from projecting onto a Euclidean ball the standard multiplier estimate ˜λk +obtained by the classical scheme ˜λk = λk−1 + ρk˜c(xk).2 The penalty parameter ρk+1 is then updated by the +following practical scheme (e.g., see [7, Section 4.2]): +ρk+1 = +� rρk +if ∥˜c(xk+1)∥ > α∥˜c(xk)∥, +ρk +otherwise +for some r > 1 and α ∈ (0, 1). It shall be mentioned that in contrast with the classical AL method, our +method has two distinct features: (i) the values of the AL function along the iterates are bounded from above; +(ii) the multiplier estimates associated with the AL subproblems are bounded. 
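To illustrate the outer-iteration mechanics just described, one multiplier/penalty update of our scheme can be sketched as follows (Python pseudocode of ours, with placeholder names such as ctilde and Lambda; the inner AL subproblem solve by Algorithm 1 is abstracted away, and the k = 0 case, in which ρ is always increased, is omitted).

import numpy as np

def outer_update(lam, rho, x_new, c_old_norm, ctilde, Lambda, r=10.0, alpha=0.25):
    """One multiplier/penalty update of the AL scheme described above.

    lam        : current truncated multiplier (lies in the ball of radius Lambda)
    rho        : current penalty parameter
    x_new      : approximate SOSP of the AL subproblem returned by the inner solver
    c_old_norm : ||ctilde(x_k)|| from the previous outer iteration
    ctilde     : perturbed constraint map x -> c(x) - c(z_eps1)
    """
    c_new = ctilde(x_new)
    lam_tilde = lam + rho * c_new                     # classical multiplier estimate
    norm_lt = np.linalg.norm(lam_tilde)
    # safeguarded (truncated) multiplier: projection of lam_tilde onto B_Lambda
    lam_trunc = lam_tilde if norm_lt <= Lambda else (Lambda / norm_lt) * lam_tilde
    # practical penalty update: increase rho only if feasibility did not improve enough
    rho_new = r * rho if np.linalg.norm(c_new) > alpha * c_old_norm else rho
    return lam_trunc, lam_tilde, rho_new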
In addition, to solve the AL +subproblems with better complexity guarantees, we propose a variant of the Newton-CG method in [56] for +finding an approximate stochastic SOSP of unconstrained optimization, whose complexity has significantly +less dependence on the Lipschitz constant of the Hessian of the objective than that of the Newton-CG method +in [56], while improving or retaining the same order of dependence on tolerance parameter. Given that such +a Lipschitz constant is typically large for the AL subproblems, our Newton-CG method (Algorithm 1) is a +much more favorable subproblem solver than the Newton-CG method in [56] that is used in the proximal +AL method in [60] from theoretical complexity perspective. +The main contributions of this paper are summarized below. +• We propose a new Newton-CG method for finding an approximate SOSP of unconstrained optimization +and show that it enjoys an iteration and operation complexity with a quadratic dependence on the +Lipschitz constant of the Hessian of the objective that improves the cubic dependence achieved by the +Newton-CG method in [56], while improving or retaining the same order of dependence on tolerance +parameter. In addition, our complexity results are established under the assumption that the Hessian +of the objective is Lipschitz continuous in a convex neighborhood of a level set of the objective. This +assumption is weaker than the one commonly imposed for the Newton-CG method in [56] and some +other methods (e.g., [12, 35]) that the Hessian of the objective is Lipschitz continuous in a convex set +containing this neighborhood and also all the trial points arising in the line search or trust region steps +of the methods (see Section 3 for more detailed discussion). +• We propose a Newton-CG based AL method for finding an approximate SOSP of nonconvex equality +constrained optimization (1) with high probability, and study its worst-case complexity with and +without the assumption of a GLICQ. Prior to our work, there was no complexity study on finding +an approximate SOSP of problem (1) without imposing a GLICQ. Besides, under the GLICQ and +some other suitable assumptions, we show that our method enjoys a total inner iteration complexity +of �O(ǫ−7/2) and an operation complexity of �O(ǫ−7/2 min{n, ǫ−3/4}) for finding an (ǫ, √ǫ)-SOSP of (1) +with high probability, which are significantly better than the respective complexity of �O(ǫ−11/2) and +�O(ǫ−11/2 min{n, ǫ−3/4}) achieved by the proximal AL method in [60]. To the best of our knowledge, all +the complexity results obtained in this paper are new for finding an approximate SOSP of nonconvex +equality constrained optimization with high probability. +2The λk obtained by projecting ˜λk onto a compact set is also called a safeguarded Lagrangian multiplier in the relevant +literature [11, 42, 13], which has been shown to enjoy many practical and theoretical advantages (see [11] for discussions). +3 + +For ease of comparison, we summarize in Table 1 the total inner iteration and operation complexity of +our AL method and the proximal AL method in [60] for finding a stochastic (ǫ, √ǫ)-SOSP of problem (1) +with or without assuming GLICQ. +Table 1: Total inner iteration and operation complexity of finding a stochastic (ǫ, √ǫ)-SOSP of (1). 
+Method +GLICQ +Total inner iteration complexity +Operation complexity +Proximal AL method [60] +✓ +�O(ǫ−11/2) +�O(ǫ−11/2 min{n, ǫ−3/4}) +Proximal AL method [60] +✗ +unknown +unknown +Our AL method +✓ +�O(ǫ−7/2) +�O(ǫ−7/2 min{n, ǫ−3/4}) +Our AL method +✗ +�O(ǫ−11/2) +�O(ǫ−11/2 min{n, ǫ−5/4}) +It shall be mentioned that there are many works other than [60] studying complexity of AL methods for +nonconvex constrained optimization. However, they aim to find an approximate FOSP rather than SOSP +of the problem (e.g., see [40, 37, 13, 51, 45]). +Since our main focus is on the complexity of finding an +approximate SOSP by AL methods, we do not include them in the above table for comparison. +The rest of this paper is organized as follows. In Section 2, we introduce some notation and optimality +conditions. In Section 3, we propose a Newton-CG method for unconstrained optimization and study its +worst-case complexity. +In Section 4, we propose a Newton-CG based AL method for (1) and study its +worst-case complexity. We present numerical results and the proof of the main results in Sections 5 and 6, +respectively. In Section 7, we discuss some future research directions. +2 +Notation and preliminaries +Throughout this paper, we let Rn denote the n-dimensional Euclidean space. We use ∥ · ∥ to denote the +Euclidean norm of a vector or the spectral norm of a matrix. For a real symmetric matrix H, we use λmin(H) +to denote its minimum eigenvalue. The Euclidean ball centered at the origin with radius R ≥ 0 is denoted +by BR := {x : ∥x∥ ≤ R}, and we use ΠBR(v) to denote the Euclidean projection of a vector v onto BR. For +a given finite set A, we let | A | denote its cardinality. For any s ∈ R, we let sgn(s) be 1 if s ≥ 0 and let it +be −1 otherwise. In addition, �O(·) represents O(·) with logarithmic terms omitted. +Suppose that x∗ is a local minimizer of problem (1) and the linear independence constraint qualification +holds at x∗, i.e., ∇c(x∗) := [∇c1(x∗) ∇c2(x∗) · · · ∇cm(x∗)] has full column rank. Then there exists a +Lagrangian multiplier λ∗ ∈ Rm such that +∇f(x∗) + ∇c(x∗)λ∗ = 0, +(2) +dT +� +∇2f(x∗) + +m +� +i=1 +λ∗ +i ∇2ci(x∗) +� +d ≥ 0, +∀d ∈ C(x∗), +(3) +where C(·) is defined as +C(x) := {d ∈ Rn : ∇c(x)T d = 0}. +(4) +The relations (2) and (3) are respectively known as the first- and second-order optimality conditions for (1) +in the literature (e.g., see [53]). Note that it is in general impossible to find a point that exactly satisfies (2) +and (3). Thus, we are instead interested in finding a point that satisfies their approximate counterparts. In +particular, we introduce the following definitions of an approximate first-order stationary point (FOSP) and +second-order stationary point (SOSP), which are similar to those considered in [4, 10, 60]. The rationality +of them can be justified by the study of the sequential optimality conditions for constrained optimization +[3, 4]. +Definition 2.1 (ǫ1-first-order stationary point). Let ǫ1 > 0. We say that x ∈ Rn is an ǫ1-first-order +stationary point (ǫ1-FOSP) of problem (1) if it, together with some λ ∈ Rm, satisfies +∥∇f(x) + ∇c(x)λ∥ ≤ ǫ1, +∥c(x)∥ ≤ ǫ1. +(5) +4 + +Definition 2.2 ((ǫ1, ǫ2)-second-order stationary point). Let ǫ1, ǫ2 > 0. We say that x ∈ Rn is an (ǫ1, ǫ2)- +second-order stationary point ((ǫ1, ǫ2)-SOSP) of problem (1) if it, together with some λ ∈ Rm, satisfies (5) +and additionally +dT +� +∇2f(x) + +m +� +i=1 +λi∇2ci(x) +� +d ≥ −ǫ2∥d∥2, +∀d ∈ C(x), +(6) +where C(·) is defined as in (4). 
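Given a candidate pair (x, λ), conditions (5) and (6) can be verified numerically. A minimal Python sketch of ours (assuming oracle access to f, c and their first and second derivatives) restricts the Hessian of the Lagrangian to an orthonormal basis Z of C(x); since every d ∈ C(x) can be written as d = Zu with ∥d∥ = ∥u∥, condition (6) is equivalent to the smallest eigenvalue of the reduced matrix being at least −ǫ2.

import numpy as np
from scipy.linalg import null_space

def is_approx_sosp(x, lam, eps1, eps2, grad_f, c, jac_c, hess_f, hess_ci):
    """Check whether (x, lam) satisfies (5) and (6) for given tolerances eps1, eps2."""
    g = grad_f(x) + jac_c(x).T @ lam                  # stationarity residual in (5)
    if np.linalg.norm(g) > eps1 or np.linalg.norm(c(x)) > eps1:
        return False
    H = hess_f(x) + sum(lam[i] * hess_ci(x, i) for i in range(len(lam)))
    Z = null_space(jac_c(x))                          # orthonormal basis of C(x) = null(grad c(x)^T)
    if Z.size == 0:                                   # C(x) = {0}: (6) holds trivially
        return True
    return np.linalg.eigvalsh(Z.T @ H @ Z).min() >= -eps2   # curvature condition (6)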
+3 +A Newton-CG method for unconstrained optimization +In this section we propose a variant of Newton-CG method [56, Algorithm 3] for finding an approximate +SOSP of a class of unconstrained optimization problems, which will be used as a subproblem solver for the +AL method proposed in the next section. In particular, we consider an unconstrained optimization problem +min +x∈Rn F(x), +(7) +where the function F satisfies the following assumptions. +Assumption 3.1. (a) The level set LF (u0) := {x : F(x) ≤ F(u0)} is compact for some u0 ∈ Rn. +(b) The function F is twice Lipschitz continuously differentiable in a convex open neighborhood, denoted by +Ω, of LF (u0), that is, there exists LF +H > 0 such that +∥∇2F(x) − ∇2F(y)∥ ≤ LF +H∥x − y∥, +∀x, y ∈ Ω. +(8) +By Assumption 3.1, there exist Flow ∈ R, U F +g > 0 and U F +H > 0 such that +F(x) ≥ Flow, +∥∇F(x)∥ ≤ U F +g , +∥∇2F(x)∥ ≤ U F +H, +∀x ∈ LF(u0). +(9) +Recently, a Newton-CG method [56, Algorithm 3] was developed to find an approximate stochastic SOSP +of problem (7), which is not only easy to implement but also enjoys a nice feature that the main computation +consists only of gradient evaluations and Hessian-vector products associated with the function F. Under +the assumption that ∇2F is Lipschitz continuous in a convex open set containing LF (u0) and also all the +trial points arising in the line search steps of this method (see [56, Assumption 2]), it was established in [56, +Theorem 4, Corollary 2] that the iteration and operation complexity of this method for finding a stochastic +(ǫg, ǫH)-SOSP of (7) (namely, a point x satisfying ∥∇F(x)∥ ≤ ǫg deterministically and λmin(∇2F(x)) ≥ −ǫH +with high probability) are +O((LF +H)3 max{ǫ−3 +g ǫ3 +H, ǫ−3 +H }) +and +�O((LF +H)3 max{ǫ−3 +g ǫ3 +H, ǫ−3 +H } min{n, (U F +H/ǫH)1/2}), +(10) +respectively, where ǫg, ǫH ∈ (0, 1) are prescribed tolerances. +Yet, this assumption can be hard to check +because these trial points are unknown before the method terminates and moreover the distance between +the origin and them depends on the tolerance ǫH in O(ǫ−1 +H ) (see [56, Lemma 3]). +In addition, as seen +from (10), iteration and operation complexity of the Newton-CG method in [56] depend cubically on LF +H. +Notice that LF +H can sometimes be very large. For example, the AL subproblems arising in Algorithm 2 have +LF +H = O(ǫ−2 +1 ) or O(ǫ−1 +1 ), where ǫ1 ∈ (0, 1) is a prescribed tolerance for problem (1) (see Section 4). The +cubic dependence on LF +H makes such a Newton-CG method not appealing as an AL subproblem solver from +theoretical complexity perspective. +In the rest of this section, we propose a variant of the Newton-CG method [56, Algorithm 3] and show +that under Assumption 3.1, it enjoys an iteration and operation complexity of +O((LF +H)2 max{ǫ−2 +g ǫH, ǫ−3 +H }) +and +�O((LF +H)2 max{ǫ−2 +g ǫH, ǫ−3 +H } min{n, (U F +H/ǫH)1/2}), +(11) +for finding a stochastic (ǫg, ǫH)-SOSP of problem (7), respectively. These complexities are substantially +superior to those in (10) achieved by the Newton-CG method in [56]. +Indeed, the complexities in (11) +depend quadratically on LF +H, while those in (10) depend cubically on LF +H. In addition, it can be verified that +they improve or retain the order of dependence on ǫg and ǫH given in (10). 
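Note that both complexity bounds above count only gradient evaluations and Hessian-vector products; in particular, the second-order test λmin(∇2F(x)) ≥ −ǫH can be carried out matrix-free. The following Python sketch is our own illustration of this point (it is not the randomized Lanczos oracle analyzed below; hess_vec is an assumed Hessian-vector product oracle), using an off-the-shelf Lanczos-type eigensolver in which each iteration costs one matrix-vector product.

import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def min_eig_estimate(hess_vec, n, tol=1e-6):
    """Estimate lambda_min of grad^2 F(x) using only Hessian-vector products.

    hess_vec : callable v -> grad^2 F(x) v  (matrix-free Hessian oracle)
    n        : problem dimension (n >= 2 assumed)
    """
    H = LinearOperator((n, n), matvec=hess_vec)
    # 'SA' requests the smallest algebraic eigenvalue
    vals = eigsh(H, k=1, which='SA', tol=tol, return_eigenvectors=False)
    return vals[0]

# Usage: accept x as approximately second-order stationary if
# min_eig_estimate(hv, n) >= -eps_H (up to the solver tolerance).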
+5 + +3.1 +Main components of a Newton-CG method +In this subsection we briefly discuss two main components of the Newton-CG method in [56], which will be +used to propose a variant of this method for finding an approximate stochastic SOSP of problem (7) in the +next subsection. +The first main component of the Newton-CG method in [56] is a capped CG method [56, Algorithm 1], +which is a modified CG method, for solving a possibly indefinite linear system +(H + 2εI)d = −g, +(12) +where 0 ̸= g ∈ Rn, ε > 0, and H ∈ Rn×n is a symmetric matrix. This capped CG method terminates within a +finite number of iterations. It outputs either an approximate solution d to (12) such that ∥(H +2εI)d+g∥ ≤ +�ζ∥g∥ and dT Hd ≥ −ε∥d∥2 for some �ζ ∈ (0, 1) or a sufficiently negative curvature direction d of H with +dT Hd < −ε∥d∥2. The second main component of the Newton-CG method in [56] is a minimum eigenvalue +oracle that either produces a sufficiently negative curvature direction v of H with ∥v∥ = 1 and vT Hv ≤ −ε/2 +or certifies that λmin(H) ≥ −ε holds with high probability. For ease of reference, we present these two +components in Algorithms 3 and 4 in Appendices A and B, respectively. +Algorithm 1 A Newton-CG method for problem (7) +Input: Tolerances ǫg, ǫH ∈ (0, 1), backtracking ratio θ ∈ (0, 1), starting point u0, CG-accuracy parameter ζ ∈ (0, 1), line- +search parameter η ∈ (0, 1), probability parameter δ ∈ (0, 1). +Set x0 = u0; +for t = 0, 1, 2, . . . do +if ∥∇F (xt)∥ > ǫg then +Call Algorithm 3 with H = ∇2F (xt), ε = ǫH, g = ∇F (xt), accuracy parameter ζ, and U = 0 to obtain outputs d, +d type; +if d type=NC then +dt ← − sgn(dT ∇F (xt))|dT ∇2F (xt)d| +∥d∥3 +d; +(13) +else {d type=SOL} +dt ← d; +(14) +end if +Go to Line Search; +else +Call Algorithm 4 with H = ∇2F (xt), ε = ǫH, and probability parameter δ; +if Algorithm 4 certifies that λmin(∇2F (xt)) ≥ −ǫH then +Output xt and terminate; +else {Sufficiently negative curvature direction v returned by Algorithm 4} +Set d type=NC and +dt ← − sgn(vT ∇F (xt))|vT ∇2F (xt)v|v; +(15) +Go to Line Search; +end if +end if +Line Search: +if d type=SOL then +Find αt = θjt, where jt is the smallest nonnegative integer j such that +F (xt + θjdt) < F (xt) − ηǫHθ2j∥dt∥2; +(16) +else {d type=NC} +Find αt = θjt, where jt is the smallest nonnegative integer j such that +F (xt + θjdt) < F (xt) − ηθ2j∥dt∥3/2; +(17) +end if +xt+1 = xt + αtdt; +end for +3.2 +A Newton-CG method for problem (7) +In this subsection we propose a Newton-CG method in Algorithm 1, which is a variant of the Newton-CG +method [56, Algorithm 3], for finding an approximate stochastic SOSP of problem (7). +6 + +Our Newton-CG method (Algorithm 1) follows the same framework as [56, Algorithm 3]. In particular, +at each iteration, if the gradient of F at the current iterate is not desirably small, then the capped CG +method (Algorithm 3) is called to solve a damped Newton system for obtaining a descent direction and a +subsequent line search along this direction results in a sufficient reduction on F. Otherwise, the current +iterate is already an approximate first-order stationary point of (7), and the minimum eigenvalue oracle +(Algorithm 4) is then called, which either produces a sufficiently negative curvature direction for F and a +subsequent line search along this direction results in a sufficient reduction on F, or certifies that the current +iterate is an approximate SOSP of (7) with high probability and terminates the algorithm. More details +about this framework can be found in [56]. 
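To fix ideas, the backtracking rules (16) and (17) used in the line-search step of Algorithm 1 can be implemented as follows (a simplified Python sketch of ours, not the authors' Matlab code; F is assumed to be a callable returning objective values, and d is the search direction produced by Algorithm 3 or 4 together with its type).

import numpy as np

def line_search(F, x, d, d_type, eps_H, theta=0.8, eta=0.2, max_backtracks=100):
    """Backtracking line search of Algorithm 1: quadratic criterion (16) for 'SOL'
    directions and cubic criterion (17) for 'NC' directions."""
    Fx, nd = F(x), np.linalg.norm(d)
    for j in range(max_backtracks):
        step = theta ** j
        if d_type == 'SOL':
            accept = F(x + step * d) < Fx - eta * eps_H * step**2 * nd**2      # (16)
        else:  # d_type == 'NC'
            accept = F(x + step * d) < Fx - 0.5 * eta * step**2 * nd**3        # (17)
        if accept:
            return x + step * d, step
    raise RuntimeError("backtracking failed (should not occur under Assumption 3.1)")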
+Despite sharing the same framework, our Newton-CG method and [56, Algorithm 3] use different line +search criteria. Indeed, our Newton-CG method uses a hybrid line search criterion adopted from [59], which +is a combination of the quadratic descent criterion (16) and the cubic descent criterion (17). Specifically, it +uses the quadratic descent criterion (16) when the search direction is of type ‘SOL’. On the other hand, it +uses the cubic descent criterion (17) when the search direction is of type ‘NC’.3 In contrast, the Newton-CG +method in [56] always uses a cubic descent criterion regardless of the type of search directions. As observed +from Theorem 3.1 below, our Newton-CG method achieves an iteration and operation complexity given in +(11), which are superior to those in (10) achieved by [56, Algorithm 3] in terms of the order dependence +on LF +H, while improving or retaining the order of dependence on ǫg and ǫH as given in (10). Consequently, +our Newton-CG method is more appealing than [56, Algorithm 3] as an AL subproblem solver for the AL +method proposed in Section 4 from theoretical complexity perspective. +The following theorem states the iteration and operation complexity of Algorithm 1, whose proof is +deferred to Section 6.1. +Theorem 3.1. Suppose that Assumption 3.1 holds. Let +T1 := +� Fhi − Flow +min{csol, cnc} max{ǫ−2 +g ǫH, ǫ−3 +H } +� ++ +�Fhi − Flow +cnc +ǫ−3 +H +� ++ 1, T2 := +�Fhi − Flow +cnc +ǫ−3 +H +� ++ 1, +(18) +where Fhi = F(u0), Flow is given in (9), and +csol := η min + + + + + + + +4 +4 + ζ + +� +(4 + ζ)2 + 8LF +H + + +2 +, +�min{6(1 − η), 2}θ +LF +H +�2 + + + + + +, +(19) +cnc := η +16 min +� +1, +�min{3(1 − η), 1}θ +LF +H +�2� +. +(20) +Then the following statements hold. +(i) The total number of calls of Algorithm 4 in Algorithm 1 is at most T2. +(ii) The total number of calls of Algorithm 3 in Algorithm 1 is at most T1. +(iii) (iteration complexity) Algorithm 1 terminates in at most T1 + T2 iterations with +T1 + T2 = O((Fhi − Flow)(LF +H)2 max{ǫ−2 +g ǫH, ǫ−3 +H }). +(21) +Also, its output xt satisfies ∥∇F(xt)∥ ≤ ǫg deterministically and λmin(∇2F(xt)) ≥ −ǫH with probability +at least 1 − δ for some 0 ≤ t ≤ T1 + T2. +(iv) (operation complexity) Algorithm 1 requires at most +�O((Fhi − Flow)(LF +H)2 max{ǫ−2 +g ǫH, ǫ−3 +H } min{n, (U F +H/ǫH)1/2}) +matrix-vector products, where U F +H is given in (9). +3SOL and NC stand for “approximate solution” and “negative curvature”, respectively. +7 + +4 +A Newton-CG based AL method for problem (1) +In this section we propose a Newton-CG based AL method for finding a stochastic (ǫ1, ǫ2)-SOSP of problem +(1) for any prescribed tolerances ǫ1, ǫ2 ∈ (0, 1). Before proceeding, we make some additional assumptions on +problem (1). +Assumption 4.1. (a) An ǫ1/2-approximately feasible point zǫ1 of problem (1), namely satisfying ∥c(zǫ1)∥ ≤ +ǫ1/2, is known. +(b) There exist constants fhi, flow and γ > 0, independent of ǫ1 and ǫ2, such that +f(zǫ1) ≤ fhi, +(22) +f(x) + γ∥c(x)∥2/2 ≥ flow, +∀x ∈ Rn, +(23) +where zǫ1 is given in (a). +(c) There exist some δf, δc > 0 such that the set +S(δf, δc) := {x : f(x) ≤ fhi + δf, ∥c(x)∥ ≤ 1 + δc} +(24) +is compact with fhi given above. Also, ∇2f and ∇2ci, i = 1, 2, . . ., m, are Lipschitz continuous in a +convex open neighborhood, denoted by Ω(δf, δc), of S(δf, δc). +We now make some remarks on Assumption 4.1. +Remark 4.1. +(i) A very similar assumption as Assumption 4.1(a) was considered in [31, 37, 49, 60]. 
By +imposing Assumption 4.1(a), we restrict our study on problem (1) for which an ǫ1/2-approximately fea- +sible point zǫ1 can be found by an inexpensive procedure. One example of such problem instances arises +when there exists v0 such that {x : ∥c(x)∥ ≤ ∥c(v0)∥} is compact, ∇2ci, 1 ≤ i ≤ m, is Lipschitz contin- +uous on a convex neighborhood of this set, and the LICQ holds on this set. Indeed, for this instance, a +point zǫ1 satisfying ∥c(zǫ1)∥ ≤ ǫ1/2 can be computed by applying our Newton-CG method (Algorithm 1) +to the problem minx∈Rn ∥c(x)∥2. As seen from Theorem 3.1, the resulting iteration and operation com- +plexity of Algorithm 1 for finding such zǫ1 are respectively O(ǫ−3/2 +1 +) and �O(ǫ−3/2 +1 +min{n, ǫ−1/4 +1 +}), which +are negligible compared with those of our AL method (see Theorems 4.4 and 4.5 below). As another +example, when the standard error bound condition ∥c(x)∥2 = O(∥∇(∥c(x)∥2)∥ν) holds on a level set +of ∥c(x)∥ for some ν > 0, one can find the above zǫ1 by applying a gradient method to the problem +minx∈Rn ∥c(x)∥2 (e.g., see [46, 58]). In addition, the Newton-CG based AL method (Algorithm 2) pro- +posed below is a second-order method with the aim to find a second-order stationary point. It is more +expensive than a first-order method in general. To make best use of such an AL method in practice, +it is natural to run a first-order method in advance to obtain an ǫ1/2-first-order stationary point zǫ1 +and then run the AL method using zǫ1 as an ǫ1/2-approximately feasible point. Therefore, Assump- +tion 4.1(a) is met in practice, provided that an ǫ1/2-first-order stationary point of (1) can be found by +a first-order method. +(ii) Assumption 4.1(b) is mild. In particular, the assumption in (22) holds if f(x) ≤ fhi holds for all x with +∥c(x)∥ ≤ 1, which is imposed in [60, Assumption 3]. It also holds if problem (1) has a known feasible +point, which is often imposed for designing AL methods for nonconvex constrained optimization (e.g., +see [49, 31, 48, 37]). Besides, the assumption in (23) implies that the quadratic penalty function is +bounded below when the associated penalty parameter is sufficiently large, which is typically used in the +study of quadratic penalty and AL methods for solving problem (1) (e.g., see [40, 37, 60, 43]). Clearly, +when infx∈Rn f(x) > −∞, one can see that (23) holds for any γ > 0. In general, one possible approach +to identifying γ is to apply the techniques on infeasibility detection developed in the literature (e.g., +[20, 19, 6]) to check the infeasibility of the level set {x : f(x)+γ∥c(x)∥2/2 ≤ ˜flow} for some sufficiently +small ˜flow. Note that this level set being infeasible for some ˜flow implies that (23) holds for the given +γ and flow = ˜flow. +8 + +(iii) Assumption 4.1(c) is not too restrictive. Indeed, the set S(δf, δc) is compact if f or f(·)+γ∥c(·)∥2/2 is +level-bounded. The latter level-boundedness assumption is commonly imposed for studying AL methods +(e.g., see [37, 60]), which is stronger than our assumption. +We next propose a Newton-CG based AL method in Algorithm 2 for finding a stochastic (ǫ1, ǫ2)-SOSP +of problem (1) under Assumption 4.1. Instead of solving (1) directly, this method solves the perturbed +problem: +min +x∈Rn f(x) +s. t. ˜c(x) := c(x) − c(zǫ1) = 0, +(25) +where zǫ1 is given in Assumption 4.1(a). 
Specifically, at the kth iteration, this method applies the Newton- +CG method (Algorithm 1) to find an approximate stochastic SOSP xk+1 of the AL subproblem associated +with (25): +min +x∈Rn +��L(x, λk, ρk) := f(x) + (λk)T ˜c(x) + ρk∥˜c(x)∥2/2 +� +(26) +such that �L(xk+1, λk; ρk) is below a threshold (see (27) and (28)), where λk is a truncated Lagrangian +multiplier, i.e., the one that results from projecting the standard multiplier estimate ˜λk onto an Euclidean +ball (see step 6 of Algorithm 2). The standard multiplier estimate ˜λk+1 is then updated by the classical +scheme described in step 4 of Algorithm 2. Finally, the penalty parameter ρk+1 is adaptively updated based +on the improvement on constraint violation (see step 7 of Algorithm 2). Such a practical update scheme is +often adopted in the literature (e.g., see [7, 2, 31]). +We would like to point out that the truncated Lagrangian multiplier sequence {λk} is used in the AL +subproblems of Algorithm 2 and is bounded, while the standard Lagrangian multiplier sequence {˜λk} is used +in those of the classical AL methods and can be unbounded. Therefore, Algorithm 2 can be viewed as a +safeguarded AL method. Truncated Lagrangian multipliers have been used in the literature for designing +some AL methods [2, 11, 42, 13], and will play a crucial role in the subsequent complexity analysis of +Algorithm 2. +Algorithm 2 A Newton-CG based AL method for problem (1) +Let γ be given in Assumption 4.1. +Input: ǫ1, ǫ2 ∈ (0, 1), Λ > 0, x0 ∈ Rn, λ0 ∈ BΛ, ρ0 > 2γ, α ∈ (0, 1), r > 1, δ ∈ (0, 1), and zǫ1 given in Assumption 4.1. +1: Set k = 0. +2: Set τ g +k = max{ǫ1, rk log ǫ1/ log 2} and τ H +k = max{ǫ2, rk log ǫ2/ log 2}. +3: Call Algorithm 1 with ǫg += τ g +k, ǫH += τ H +k +and u0 += xk +init to find an approximate solution xk+1 to +minx∈Rn �L(x, λk; ρk) such that +�L(xk+1, λk; ρk) ≤ f(zǫ1), ∥∇x�L(xk+1, λk; ρk)∥ ≤ τ g +k , +(27) +λmin(∇2 +xx�L(xk+1, λk; ρk)) ≥ −τ H +k with probability at least 1 − δ, +(28) +where +xk +init = +� +zǫ1 +if �L(xk, λk; ρk) > f(zǫ1), +xk +otherwise, +for k ≥ 0. +(29) +4: Set ˜λk+1 = λk + ρk˜c(xk+1). +5: If τ g +k ≤ ǫ1, τ H +k ≤ ǫ2 and ∥c(xk+1)∥ ≤ ǫ1, then output (xk+1, ˜λk+1) and terminate. +6: Set λk+1 = ΠBΛ(˜λk+1). +7: If k = 0 or ∥˜c(xk+1)∥ > α∥˜c(xk)∥, set ρk+1 = rρk. Otherwise, set ρk+1 = ρk. +8: Set k ← k + 1, and go to step 2. +Remark 4.2. +(i) Notice that the starting point x0 +init of Algorithm 2 can be different from zǫ1 and it may be +rather infeasible, though zǫ1 is a nearly feasible point of (1). Besides, zǫ1 is used to ensure convergence +of Algorithm 2. Specifically, if the algorithm runs into a “poorly infeasible point” xk, namely satisfying +�L(xk, λk; ρk) > f(zǫ1), it will be superseded by zǫ1 (see (29)), which prevents the iterates {xk} from +converging to an infeasible point. Yet, xk may be rather infeasible when k is not large. Thus, Algorithm +2 substantially differs from a funneling or two-phase type algorithm, in which a nearly feasible point +9 + +is found in Phase 1, and then approximate stationarity is sought while near feasibility is maintained +throughout Phase 2 (e.g., see [9, 16, 26, 27, 28, 29, 30, 36]). +(ii) The choice of ρ0 in Algorithm 2 is mainly for the simplicity of complexity analysis. Yet, it may be overly +large and lead to highly ill-conditioned AL subproblems in practice. 
To make Algorithm 2 practically +more efficient, one can possibly modify it by choosing a relatively small initial penalty parameter, then +solving the subsequent AL subproblems by a first-order method until an ǫ1-first-order stationary point +ˆx of (1) along with a Lagrangian multiplier ˆλ is found, and finally performing the steps described in +Algorithm 2 but with x0 = ˆx and λ0 = ΠBΛ(ˆλ). +Before analyzing the complexity of Algorithm 2, we first argue that it is well-defined if ρ0 is suitably +chosen. Specifically, we will show that when ρ0 is sufficiently large, one can apply the Newton-CG method +(Algorithm 1) to the AL subproblem minx∈Rn �L(x, λk; ρk) with xk +init as the initial point to find an xk+1 +satisfying (27) and (28). To this end, we start by noting from (22), (25), (26) and (29) that +�L(xk +init, λk; ρk) ≤ max{�L(zǫ1, λk; ρk), f(zǫ1)} = f(zǫ1) ≤ fhi. +(30) +Based on the above observation, we show in the next lemma that when ρ0 is sufficiently large, �L(·, λk; ρk) is +bounded below and its certain level set is bounded, whose proof is deferred to Section 6.2. +Lemma 4.1. Suppose that Assumption 4.1 holds. Let (λk, ρk) be generated at the kth iteration of Algorithm 2 +for some k ≥ 0, and S(δf, δc) and xk +init be defined in (24) and (29), respectively, and let fhi, flow, δf and δc +be given in Assumption 4.1. Suppose that ρ0 is sufficiently large such that δf,1 ≤ δf and δc,1 ≤ δc, where +δf,1 := Λ2/(2ρ0) +and +δc,1 := +� +2(fhi − flow + γ) +ρ0 − 2γ ++ +Λ2 +(ρ0 − 2γ)2 + +Λ +ρ0 − 2γ . +(31) +Then the following statements hold. +(i) {x : �L(x, λk; ρk) ≤ �L(xk +init, λk; ρk)} ⊆ S(δf, δc). +(ii) infx∈Rn �L(x, λk; ρk) ≥ flow − γ − Λδc. +Using Lemma 4.1, we can verify that the Newton-CG method (Algorithm 1), starting with u0 = xk +init, +is capable of finding an approximate solution xk+1 of the AL subproblem minx∈Rn �L(x, λk; ρk) satisfying +(27) and (28). +Indeed, let F(·) = �L(·, λk; ρk) and u0 = xk +init. +By these and Lemma 4.1, one can see +that {x : F(x) ≤ F(u0)} ⊆ S(δf, δc). +It then follows from this and Assumption 4.1(c) that the level +set {x : F(x) ≤ F(u0)} is compact and ∇2F is Lipschitz continuous on a convex open neighborhood of +{x : F(x) ≤ F(u0)}. Thus, such F and u0 satisfy Assumption 3.1. Based on this and the discussion in +Section 3, one can conclude that Algorithm 1, starting with u0 = xk +init, is applicable to the AL subproblem +minx∈Rn �L(x, λk; ρk). Moreover, it follows from Theorem 3.1 that this algorithm with (ǫg, ǫH) = (τ g +k , τ H +k ) can +produce a point xk+1 satisfying (28) and also the second relation in (27). In addition, since this algorithm is +descent and its starting point is xk +init, its output xk+1 must satisfy �L(xk+1, λk; ρk) ≤ �L(xk +init, λk; ρk), which +along with (30) implies that �L(xk+1, λk; ρk) ≤ f(zǫ1) and thus xk+1 also satisfies the first relation in (27). +The above discussion leads to the following conclusion concerning the well-definedness of Algorithm 2. +Theorem 4.1. Under the same settings as in Lemma 4.1, the Newton-CG method (Algorithm 1) applied to +the AL subproblem minx∈Rn �L(x, λk; ρk) with u0 = xk +init finds a point xk+1 satisfying (27) and (28). +The following theorem characterizes the output of Algorithm 2. Its proof is deferred to Section 6.2. +Theorem 4.2. Suppose that Assumption 4.1 holds and that ρ0 is sufficiently large such that δf,1 ≤ δf and +δc,1 ≤ δc, where δf,1 and δc,1 are defined in (31). 
If Algorithm 2 terminates at some iteration k, then xk+1 +is a deterministic ǫ1-FOSP of problem (1), and moreover, it is an (ǫ1, ǫ2)-SOSP of (1) with probability at +least 1 − δ. +10 + +Remark 4.3. As seen from this theorem, the output of Algorithm 2 is a stochastic (ǫ1, ǫ2)-SOSP of prob- +lem (1). Nevertheless, one can easily modify Algorithm 2 to seek some other approximate solutions. For +example, if one is only interested in finding an ǫ1-FOSP of (1), one can remove the condition (28) from +Algorithm 2. In addition, if one aims to find a deterministic (ǫ1, ǫ2)-SOSP of (1), one can replace the condi- +tion (28) and Algorithm 1 by λmin(∇2 +xx�L(xk+1, λk; ρk)) ≥ −τ H +k and a deterministic counterpart, respectively. +The purpose of imposing high probability in the condition (28) is to enable us to derive operation complexity +of Algorithm 2 measured by the number of matrix-vector products. +In the rest of this section, we study the worst-case complexity of Algorithm 2. Since our method has +two nested loops, particularly, outer loops executed by the AL method and inner loops executed by the +Newton-CG method for solving the AL subproblems, we consider the following measures of complexity for +Algorithm 2. +• Outer iteration complexity, which measures the number of outer iterations of Algorithm 2; +• Total inner iteration complexity, which measures the total number of iterations of the Newton-CG +method that are performed in Algorithm 2; +• Operation complexity, which measures the total number of matrix-vector products involving the Hessian +of the augmented Lagrangian function that are evaluated in Algorithm 2. +4.1 +Outer iteration complexity of Algorithm 2 +In this subsection we establish outer iteration complexity of Algorithm 2. For notational convenience, we +rewrite (τ g +k , τ H +k ) arising in Algorithm 2 as +(τ g +k , τ H +k ) = (max{ǫ1, ωk +1}, max{ǫ2, ωk +2}) with (ω1, ω2) := (rlog ǫ1/ log 2, rlog ǫ2/ log 2), +(32) +where ǫ1, ǫ2 and r are the input parameters of Algorithm 2. Since r > 1 and ǫ1, ǫ2 ∈ (0, 1), it is not hard to +verify that ω1, ω2 ∈ (0, 1). Also, we introduce the following quantity that will be used frequently later: +Kǫ1 := +� +min{k ≥ 0 : ωk +1 ≤ ǫ1} +� += ⌈log ǫ1/ log ω1⌉ . +(33) +In view of (32), (33) and the fact that +log ǫ1/ log ω1 = log ǫ2/ log ω2 = log 2/ log r, +(34) +we see that (τ g +k , τ H +k ) = (ǫ1, ǫ2) for all k ≥ Kǫ1. This along with the termination criterion of Algorithm 2 +implies that it runs for at least Kǫ1 iterations and terminates once ∥c(xk+1)∥ ≤ ǫ1 for some k ≥ Kǫ1. As +a result, to establish outer iteration complexity of Algorithm 2, it suffices to bound such k. The resulting +outer iteration complexity of Algorithm 2 is presented below, whose proof is deferred to Section 6.2. +Theorem 4.3. Suppose that Assumption 4.1 holds and that ρ0 is sufficiently large such that δf,1 ≤ δf and +δc,1 ≤ δc, where δf,1 and δc,1 are defined in (31). Let +ρǫ1 := max +� +8(fhi − flow + γ)ǫ−2 +1 ++ 4Λǫ−1 +1 ++ 2γ, 2ρ0 +� +, +(35) +Kǫ1 := inf{k ≥ Kǫ1 : ∥c(xk+1)∥ ≤ ǫ1}, +(36) +where Kǫ1 is defined in (33), and γ, fhi and flow are given in Assumption 4.1. Then Kǫ1 is finite, and +Algorithm 2 terminates at iteration Kǫ1 with +Kǫ1 ≤ +�log(ρǫ1ρ−1 +0 ) +log r ++ 1 +� ����� +log(ǫ1(2δc,1)−1) +log α +���� + 2 +� ++ 1. +(37) +Moreover, ρk ≤ rρǫ1 holds for 0 ≤ k ≤ Kǫ1 +Remark 4.4 (Upper bounds for Kǫ1 and {ρk}). 
As observed from Theorem 4.3, the number of outer +iterations of Algorithm 2 for finding a stochastic (ǫ1, ǫ2)-SOSP of problem (1) is Kǫ1 + 1, which is at most +of O(| log ǫ1|2). In addition, the penalty parameters {ρk} generated in this algorithm are at most of O(ǫ−2 +1 ). +11 + +4.2 +Total inner iteration and operation complexity of Algorithm 2 +We present the total inner iteration and operation complexity of Algorithm 2 for finding a stochastic (ǫ1, ǫ2)- +SOSP of (1), whose proof is deferred to Section 6.2. +Theorem 4.4. Suppose that Assumption 4.1 holds and that ρ0 is sufficiently large such that δf,1 ≤ δf and +δc,1 ≤ δc, where δf,1 and δc,1 are defined in (31). Then the following statements hold. +(i) The total number of iterations of Algorithm 1 performed in Algorithm 2 is at most �O(ǫ−4 +1 +max{ǫ−2 +1 ǫ2, ǫ−3 +2 }). +If c is further assumed to be affine, then it is at most �O(max{ǫ−2 +1 ǫ2, ǫ−3 +2 }). +(ii) The total number of matrix-vector products performed by Algorithm 1 in Algorithm 2 is at most +�O(ǫ−4 +1 +max{ǫ−2 +1 ǫ2, ǫ−3 +2 } min{n, ǫ−1 +1 ǫ−1/2 +2 +}). +If c is further assumed to be affine, then it is at most +�O(max{ǫ−2 +1 ǫ2, ǫ−3 +2 } min{n, ǫ−1 +1 ǫ−1/2 +2 +}). +Remark 4.5. +(i) Note that the above complexity results of Algorithm 2 are established without assuming +any constraint qualification (CQ). In contrast, similar complexity results are obtained in [60] for a +proximal AL method under a generalized LICQ condition. To the best of our knowledge, our work +provides the first study on complexity for finding a stochastic SOSP of (1) without CQ. +(ii) Letting (ǫ1, ǫ2) = (ǫ, √ǫ) for some ǫ ∈ (0, 1), we see that Algorithm 2 achieves a total inner iteration +complexity of �O(ǫ−11/2) and an operation complexity of �O(ǫ−11/2 min{n, ǫ−5/4}) for finding a stochastic +(ǫ, √ǫ)-SOSP of problem (1) without constraint qualification. +4.3 +Enhanced complexity of Algorithm 2 under constraint qualification +In this subsection we study complexity of Algorithm 2 under one additional assumption that a generalized +linear independence constraint qualification (GLICQ) holds for problem (1), which is introduced below. In +particular, under GLICQ we will obtain an enhanced total inner iteration and operation complexity for +Algorithm 2, which are significantly better than the ones in Theorem 4.4 when problem (1) has nonlinear +constraints. Moreover, when (ǫ1, ǫ2) = (ǫ, √ǫ) for some ǫ ∈ (0, 1), our enhanced complexity bounds are also +better than those obtained in [60] for a proximal AL method. We now introduce the GLICQ assumption for +problem (1). +Assumption 4.2 (GLICQ). ∇c(x) has full column rank for all x ∈ S(δf, δc), where S(δf, δc) is as in (24). +Remark 4.6. A related yet different GLICQ is imposed in [60, Assumption 2(ii)] for problem (1), which +assumes that ∇c(x) has full column rank for all x in a level set of f(·) + γ∥c(·)∥2/2. It is not hard to verify +that this assumption is generally stronger than the above GLICQ assumption. +The following theorem shows that under Assumption 4.2, the total inner iteration and operation com- +plexity results presented in Theorem 4.4 can be significantly improved, whose proof is deferred to Section +6.2. +Theorem 4.5. Suppose that Assumptions 4.1 and 4.2 hold and that ρ0 is sufficiently large such that δf,1 ≤ δf +and δc,1 ≤ δc, where δf,1 and δc,1 are defined in (31). Then the following statements hold. +(i) The total number of iterations of Algorithm 1 performed in Algorithm 2 is at most �O(ǫ−2 +1 +max{ǫ−2 +1 ǫ2, ǫ−3 +2 }). 
+If c is further assumed to be affine, then it is at most �O(max{ǫ−2 +1 ǫ2, ǫ−3 +2 }). +(ii) The total number of matrix-vector products performed by Algorithm 1 in Algorithm 2 is at most +�O(ǫ−2 +1 +max{ǫ−2 +1 ǫ2, ǫ−3 +2 } min{n, ǫ−1/2 +1 +ǫ−1/2 +2 +}). If c is further assumed to be affine, then it is at most +�O(max{ǫ−2 +1 ǫ2, ǫ−3 +2 } min{n, ǫ−1/2 +1 +ǫ−1/2 +2 +}). +Remark 4.7. +(i) As seen from Theorem 4.5, when problem (1) has nonlinear constraints, under GLICQ +and some other suitable assumptions, Algorithm 2 achieves significantly better complexity bounds than +the ones in Theorem 4.4 without constraint qualification. +12 + +(ii) Letting (ǫ1, ǫ2) = (ǫ, √ǫ) for some ǫ ∈ (0, 1), we see that when problem (1) has nonlinear constraints, +under GLICQ and some other suitable assumptions, Algorithm 2 achieves a total inner iteration com- +plexity of �O(ǫ−7/2) and an operation complexity of �O(ǫ−7/2 min{n, ǫ−3/4}). They are vastly better than +the total inner iteration complexity of �O(ǫ−11/2) and the operation complexity of �O(ǫ−11/2 min{n, ǫ−3/4}) +that are achieved by a proximal AL method in [60] for finding a stochastic (ǫ, √ǫ)-SOSP of (1) yet under +a generally stronger GLICQ. +5 +Numerical results +We conduct some preliminary experiments to test the performance of our proposed methods (Algorithms 1 +and 2), and compare them with the Newton-CG method in [56] and the proximal AL method in [60], +respectively. All the algorithms are coded in Matlab and all the computations are performed on a desktop +with a 3.79 GHz AMD 3900XT 12-Core processor and 32 GB of RAM. +5.1 +Regularized robust regression +In this subsection we consider the regularized robust regression problem +min +x∈Rn +m +� +i=1 +φ(aT +i x − bi) + µ∥x∥4 +4, +(38) +where φ(t) = t2/(1 + t2), ∥x∥p = (�n +i=1 |xi|p)1/p for any p ≥ 1, and µ > 0. +For each triple (n, m, µ), we randomly generate 10 instances of problem (38). In particular, we first +randomly generate ai, 1 ≤ i ≤ m, with all the entries independently chosen from the standard normal +distribution. We then randomly generate ¯bi according to the standard normal distribution and set bi = 2m¯bi +for i = 1, . . . , m. +Our aim is to find a (10−5, 10−5/2)-SOSP of (38) for the above instances by Algorithm 1 and the Newton- +CG method in [56] and compare their performance. For a fair comparison, we use a minimum eigenvalue +oracle that returns a deterministic output for them so that they both certainly output an approximate +second-order stationary point. Specifically, we use the Matlab subroutine [v,λ] = eigs(H,1,’smallestreal’) as +the minimum eigenvalue oracle to find the minimum eigenvalue λ and its associated unit eigenvector v of a +real symmetric matrix H. Also, for both methods, we choose the all-ones vector as the initial point, and set +θ = 0.8, ζ = 0.5, and η = 0.2. +The computational results of Algorithm 1 and the Newton-CG method in [56] for the instances randomly +generated above are presented in Table 2. In detail, the value of n, m, and µ is listed in the first three +columns, respectively. For each triple (n, m, µ), the average CPU time (in seconds), the average number +of iterations, and the average final objective value over 10 random instances are given in the rest of the +columns. One can observe that both methods output an approximate solution with a similar objective value, +while our Algorithm 1 substantially outperforms the Newton-CG method in [56] in terms of CPU time. 
This +is consistent with our theoretical finding that Algorithm 1 achieves a better iteration complexity than the +Newton-CG method in [56] in terms of dependence on the Lipschitz constant of the Hessian for finding an +approximate SOSP. +5.2 +Spherically constrained regularized robust regression +In subsection we consider the spherically constrained regularized robust regression problem +min +x∈Rn +m +� +i=1 +φ(aT +i x − bi) + µ∥x∥4 +4 +s. t. +∥x∥2 +2 = 1, +(39) +where φ(t) = t2/(1 + t2), ∥x∥p = (�n +i=1 |xi|p)1/p for any p ≥ 1, and µ > 0 is a tuning parameter. For +each triple (n, m, µ), we randomly generate 10 instances of problem (39) in the same manner as described +in Subsection 5.1. +13 + +Objective value +Iterations +CPU time (seconds) +n +m +µ +Algorithm 1 +Newton-CG +Algorithm 1 +Newton-CG +Algorithm 1 +Newton-CG +100 +10 +1 +5.9 +5.9 +85.7 +116.3 +1.4 +1.6 +100 +50 +1 +45.9 +45.9 +82.6 +158.2 +1.0 +2.7 +100 +90 +1 +84.8 +84.8 +102.2 +224.7 +2.0 +4.2 +500 +50 +5 +42.2 +42.5 +173.1 +344.7 +44.2 +72.2 +500 +250 +5 +243.0 +242.9 +145.5 +362.4 +41.9 +95.0 +500 +450 +5 +442.2 +442.2 +163.7 +425.2 +47.6 +138.3 +1000 +100 +10 +90.1 +90.4 +162.5 +361.0 +110.8 +259.0 +1000 +500 +10 +491.1 +491.2 +158.3 +475.4 +129.1 +558.4 +1000 +900 +10 +891.1 +891.1 +193.5 +300.7 +187.0 +298.5 +Table 2: Numerical results for problem (38) +Objective value +Feasibility violation (×10−4) +Total inner iterations +CPU time (seconds) +n +m +µ +Algorithm 2 +Prox-AL +Algorithm 2 +Prox-AL +Algorithm 2 +Prox-AL +Algorithm 2 +Prox-AL +100 +10 +1 +7.1 +7.1 +0.18 +0.27 +40.9 +97.3 +0.73 +2.2 +100 +50 +1 +46.6 +46.6 +0.21 +0.30 +37.0 +86.3 +0.78 +1.7 +100 +90 +1 +87.0 +87.0 +0.12 +0.40 +39.5 +68.6 +1.1 +1.9 +500 +50 +5 +44.4 +44.4 +0.40 +0.68 +59.0 +343.4 +11.4 +134.9 +500 +250 +5 +244.3 +244.3 +0.37 +0.47 +59.0 +543.3 +11.7 +178.2 +500 +450 +5 +444.0 +444.0 +0.27 +0.53 +66.7 +634.1 +17.1 +158.2 +1000 +100 +10 +92.8 +92.8 +0.28 +0.42 +95.0 +2054.6 +46.3 +1516.8 +1000 +500 +10 +491.9 +491.9 +0.22 +0.72 +68.3 +756.2 +39.5 +558.6 +1000 +900 +10 +893.4 +893.4 +0.19 +0.37 +81.8 +1281.4 +57.7 +1099.6 +Table 3: Numerical results for problem (39) +Our aim is to find a (10−4, 10−2)-SOSP of (39) for the above instances by Algorithm 2 and the proximal +AL method [60, Algorithm 3] and compare their performance. For a fair comparison, we use a minimum eigen- +value oracle that returns a deterministic output for them so that they both certainly output an approximate +second-order stationary point. Specifically, we use the Matlab subroutine [v,λ] = eigs(H,1,’smallestreal’) as the +minimum eigenvalue oracle to find the minimum eigenvalue λ and its associated unit eigenvector v of a real +symmetric matrix H. In addition, for both methods, we choose the initial point as z0 = (1/√n, . . . , 1/√n)T , +the initial Lagrangian multiplier as λ0 = 0, and the other parameters as +• Λ = 100, ρ0 = 10, α = 0.25, and r = 10 for Algorithm 2; +• η = 1, q = 10 and T0 = 2 for the proximal AL method ([60]). +The computational results of Algorithm 2 and the proximal AL method in [60] (abbreviated as Prox-AL) +for solving problem (39) for the instances randomly generated above are presented in Table 3. In detail, +the value of n, m, and µ is listed in the first three columns, respectively. For each triple (n, m, µ), the +average CPU time (in seconds), the average total number of inner iterations, the average final objective +value, and the average final feasibility violation over 10 random instances are given in the rest columns. 
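For reference, the random instance generation and the objective and constraint evaluations of (38) and (39) can be reproduced as follows (a Python sketch mirroring the description above; the authors' experiments were carried out in Matlab).

import numpy as np

def generate_instance(n, m, seed=0):
    """Random instance of (38)/(39): a_i ~ N(0, I), b_i = 2*m*bbar_i with bbar_i ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n))          # rows of A are a_i^T
    b = 2 * m * rng.standard_normal(m)
    return A, b

def objective(x, A, b, mu):
    """f(x) = sum_i phi(a_i^T x - b_i) + mu * ||x||_4^4 with phi(t) = t^2 / (1 + t^2)."""
    t = A @ x - b
    return np.sum(t**2 / (1.0 + t**2)) + mu * np.sum(x**4)

def constraint(x):
    """Equality constraint of (39): c(x) = ||x||_2^2 - 1."""
    return np.array([x @ x - 1.0])

# Example: A, b = generate_instance(100, 10); val = objective(np.ones(100), A, b, 1.0)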
+One can observe that both methods output an approximate solution of similar quality in terms of objective +value and feasibility violation, while our Algorithm 2 vastly outperforms the proximal AL method in [60] +in terms of CPU time. This corroborates our theoretical finding that Algorithm 2 achieves a significantly +better operation complexity than the proximal AL method in [60] for finding an approximate SOSP. +6 +Proof of the main results +We provide proofs of our main results in Sections 3 and 4, including Theorem 3.1, Lemma 4.1, and Theorems +4.2, 4.3, 4.4 and 4.5. +6.1 +Proof of the main results in Section 3 +In this subsection we first establish several technical lemmas and then use them to prove Theorem 3.1. +14 + +One can observe from Assumption 3.1(b) that for all x and y ∈ Ω, +∥∇F(y) − ∇F(x) − ∇2F(x)(y − x)∥ ≤ LF +H∥y − x∥2/2, +(40) +F(y) ≤ F(x) + ∇F(x)T (y − x) + (y − x)T ∇2F(x)(y − x)/2 + LF +H∥y − x∥3/6. +(41) +The next lemma provides useful properties of the output of Algorithm 3, whose proof is similar to the +ones in [56, Lemma 3] and [54, Lemma 7] and thus omitted here. +Lemma 6.1. Suppose that Assumption 3.1 holds and the direction dt results from the output d of Algorithm +3 with a type specified in d type at some iteration t of Algorithm 1. Then the following statements hold. +(i) If d type=SOL, then dt satisfies +ǫH∥dt∥2 ≤ (dt)T � +∇2F(xt) + 2ǫHI +� +dt, +(42) +∥dt∥ ≤ 1.1ǫ−1 +H ∥∇F(xt)∥, +(43) +(dt)T ∇F(xt) = −(dt)T � +∇2F(xt) + 2ǫHI +� +dt, +(44) +∥(∇2F(xt) + 2ǫHI)dt + ∇F(xt)∥ ≤ ǫHζ∥dt∥/2. +(45) +(ii) If d type=NC, then dt satisfies (dt)T ∇F(xt) ≤ 0 and +(dt)T ∇2F(xt)dt/∥dt∥2 = −∥dt∥ ≤ −ǫH. +(46) +The next lemma shows that when the search direction dt in Algorithm 1 is of type ‘SOL’, the line search +step results in a sufficient reduction on F. +Lemma 6.2. Suppose that Assumption 3.1 holds and the direction dt results from the output d of Algorithm 3 +with d type=SOL at some iteration t of Algorithm 1. Let U F +g and csol be given in (9) and (19), respectively. +Then the following statements hold. +(i) The step length αt is well-defined, and moreover, +αt ≥ min +� +1, +� +min{6(1 − η), 2} +1.1LF +HU F +g +θǫH +� +. +(47) +(ii) The next iterate xt+1 = xt + αtdt satisfies +F(xt) − F(xt+1) ≥ csol min{∥∇F(xt+1)∥2ǫ−1 +H , ǫ3 +H}. +(48) +Proof. One can observe that F is descent along the iterates (whenever well-defined) generated by Algorithm 1, +which together with x0 = u0 implies that F(xt) ≤ F(u0) and hence ∥∇F(xt)∥ ≤ U F +g due to (9). In addition, +since dt results from the output d of Algorithm 3 with d type=SOL, one can see that ∥∇F(xt)∥ > ǫg and +(42)-(45) hold for dt. Moreover, by ∥∇F(xt)∥ > ǫg and (45), one can conclude that dt ̸= 0. +We first prove statement (i). If (16) holds for j = 0, then αt = 1, which clearly implies that (47) holds. +We now suppose that (16) fails for j = 0. Claim that for all j ≥ 0 that violate (16), it holds that +θ2j ≥ min{6(1 − η), 2}ǫH(LF +H)−1∥dt∥−1. +(49) +Indeed, suppose that (16) is violated by some j ≥ 0. We now show that (49) holds for such j by considering +two separate cases below. +Case 1) F(xt + θjdt) > F(xt). Let φ(α) = F(xt + αdt). Then φ(θj) > φ(0). Also, since dt ̸= 0, by (42) +and (44), one has +φ′(0) = ∇F(xt)T dt = −(dt)T (∇2F(xt) + 2ǫHI)dt ≤ −ǫH∥dt∥2 < 0. +15 + +Using these, we can observe that there exists a local minimizer α∗ ∈ (0, θj) of φ such that φ′(α∗) = +∇F(xt + α∗dt)T dt = 0 and φ(α∗) < φ(0), which implies that F(xt + α∗dt) < F(xt) ≤ F(u0). Hence, (40) +holds for x = xt and y = xt + α∗dt. 
Using this, 0 < α∗ < θj ≤ 1 and ∇F(xt + α∗dt)T dt = 0, we obtain +(α∗)2LF +H +2 +∥dt∥3 (40) +≥ ∥dt∥∥∇F(xt + α∗dt) − ∇F(xt) − α∗∇2F(xt)dt∥ +≥ (dt)T (∇F(xt + α∗dt) − ∇F(xt) − α∗∇2F(xt)dt) = −(dt)T ∇F(xt) − α∗(dt)T ∇2F(xt)dt +(44) += (1 − α∗)(dt)T (∇2F(xt) + 2ǫHI)dt + 2α∗ǫH∥dt∥2 +(42) +≥ (1 + α∗)ǫH∥dt∥2 ≥ ǫH∥dt∥2, +which along with dt ̸= 0 implies that (α∗)2 ≥ 2ǫH(LF +H)−1∥dt∥−1. Using this and θj > α∗, we conclude that +(49) holds in this case. +Case 2) F(xt + θjdt) ≤ F(xt). This together with F(xt) ≤ F(u0) implies that (41) holds for x = xt and +y = xt + θjdt. Then, because j violates (16), we obtain +−ηǫHθ2j∥dt∥2 ≤ F(xt + θjdt) − F(xt) +(41) +≤ θj∇F(xt)T dt + θ2j +2 (dt)T ∇2F(xt)dt + LF +H +6 θ3j∥dt∥3 +(44) += −θj(dt)T (∇2F(xt) + 2ǫHI)dt + θ2j +2 (dt)T ∇2F(xt)dt + LF +H +6 θ3j∥dt∥3 += −θj +� +1 − θj +2 +� +(dt)T (∇2F(xt) + 2ǫHI)dt − θ2jǫH∥dt∥2 + LF +H +6 θ3j∥dt∥3 +(42) +≤ −θj +� +1 − θj +2 +� +ǫH∥dt∥2 − θ2jǫH∥dt∥2 + LF +H +6 θ3j∥dt∥3 ≤ −θjǫH∥dt∥2 + LF +H +6 θ3j∥dt∥3. +(50) +Recall that dt ̸= 0. Dividing both sides of (50) by LF +Hθj∥dt∥3/6 and using η, θ ∈ (0, 1), we obtain that +θ2j ≥ 6(1 − θjη)ǫH(LF +H)−1∥dt∥−1 ≥ 6(1 − η)ǫH(LF +H)−1∥dt∥−1. +Hence, (49) also holds in this case. +Combining the above two cases, we conclude that (49) holds for any j ≥ 0 that violates (16). By this +and θ ∈ (0, 1), one can see that all j ≥ 0 that violate (16) must be bounded above. It then follows that the +step length αt associated with (16) is well-defined. We next prove (47). Observe from the definition of jt in +Algorithm 1 that j = jt − 1 violates (16) and hence (49) holds for j = jt − 1. Then, by (49) with j = jt − 1 +and αt = θjt, one has +αt = θjt ≥ +� +min{6(1 − η), 2}ǫH(LF +H)−1 θ∥dt∥−1/2, +(51) +which, along with (43) and ∥∇F(xt)∥ ≤ U F +g , implies (47). This proves statement (i). +We next prove statement (ii) by considering two separate cases below. +Case 1) αt = 1. By this, one knows that (16) holds for j = 0. It then follows that F(xt + dt) ≤ F(xt) ≤ +F(u0), which implies that (40) holds for x = xt and y = xt + dt. By this and (45), one has +∥∇F(xt+1)∥ = ∥∇F(xt + dt)∥ +≤ +∥∇F(xt + dt) − ∇F(xt) − ∇2F(xt)dt∥ ++∥(∇2F(xt) + 2ǫHI)dt + ∇F(xt)∥ + 2ǫH∥dt∥ +≤ +LF +H +2 ∥dt∥2 + 4+ζ +2 ǫH∥dt∥, +where the last inequality follows from (40) and (45). Solving the above inequality for ∥dt∥ and using the fact +that ∥dt∥ > 0, we obtain that +∥dt∥ +≥ +−(4+ζ)ǫH+√ +(4+ζ)2ǫ2 +H+8LF +H∥∇F (xt+1)∥ +2LF +H +≥ +−(4+ζ)ǫH+√ +(4+ζ)2ǫ2 +H+8LF +Hǫ2 +H +2LF +H +min{∥∇F(xt+1)∥/ǫ2 +H, 1} += +4 +4+ζ+√ +(4+ζ)2+8LF +H +min{∥∇F(xt+1)∥/ǫH, ǫH}, +where the second inequality follows from the inequality −a + +√ +a2 + bs ≥ (−a + +√ +a2 + b) min{s, 1} for all +a, b, s ≥ 0, which can be verified by performing a rationalization to the terms −a+ +√ +a2 + b and −a+ +√ +a2 + bs, +respectively. By this, αt = 1, (16) and (19), one can see that (48) holds. +16 + +Case 2) αt < 1. It then follows that j = 0 violates (16) and hence (49) holds for j = 0. Now, letting +j = 0 in (49), we obtain that ∥dt∥ ≥ min{6(1 − η), 2}ǫH/LF +H, which together with (16) and (51) implies that +F(xt) − F(xt+1) ≥ ηǫHθ2jt∥dt∥2 ≥ η min{6(1 − η), 2}ǫ2 +H +LF +H +θ2∥dt∥ ≥ η +�min{6(1 − η), 2}θ +LF +H +�2 +ǫ3 +H. +By this and (19), one can see that (48) also holds in this case. +The following lemma shows that when the search direction dt in Algorithm 1 is of type ‘NC’, the line +search step results in a sufficient reduction on F as well. +Lemma 6.3. 
Suppose that Assumption 3.1 holds and the direction dt results from either the output d of +Algorithm 3 with d type=NC or the output v of Algorithm 4 at some iteration t of Algorithm 1. Let cnc be +defined in (20). Then the following statements hold. +(i) The step length αt is well-defined, and αt ≥ min{1, θ/LF +H, 3(1 − η)θ/LF +H}. +(ii) The next iterate xt+1 = xt + αtdt satisfies F(xt) − F(xt+1) ≥ cncǫ3 +H. +Proof. Observe that F is descent along the iterates (whenever well-defined) generated by Algorithm 1. Using +this and x0 = u0, we have F(xt) ≤ F(u0). By the assumption on dt, one can see from Algorithm 1 that dt is +a negative curvature direction given in (13) or (15). Also, notice that the vector v returned from Algorithm 4 +satisfies ∥v∥ = 1. By these, Lemma 6.1(ii), (13) and (15), one can observe that +∇F(xt)T dt ≤ 0, +(dt)T ∇2F(xt)dt = −∥dt∥3 < 0. +(52) +We first prove statement (i). If (17) holds for j = 0, then αt = 1, which clearly implies that αt ≥ +min{1, θ/LF +H, 3(1 − η)θ/LF +H}. We now suppose that (17) fails for j = 0. Claim that for all j ≥ 0 that violate +(17), it holds that +θj ≥ min{1/LF +H, 3(1 − η)/LF +H}. +(53) +Indeed, suppose that (17) is violated by some j ≥ 0. We now show that (53) holds for such j by considering +two separate cases below. +Case 1) F(xt + θjdt) > F(xt). Let φ(α) = F(xt + αdt). Then φ(θj) > φ(0). Also, by (52), one has +φ′(0) = ∇F(xt)T dt ≤ 0, +φ′′(0) = (dt)T ∇2F(xt)dt < 0. +Using these, we can observe that there exists a local minimizer α∗ ∈ (0, θj) of φ such that φ(α∗) < φ(0), +namely, F(xt + α∗dt) < F(xt). By the second-order optimality condition of φ at α∗, one has φ′′(α∗) = +(dt)T ∇2F(xt + α∗dt)dt ≥ 0. Since F(xt + α∗dt) < F(xt) ≤ F(u0), it follows that (8) holds for x = xt and +y = xt + α∗dt. Using this, the second relation in (52) and (dt)T ∇2F(xt + α∗dt)dt ≥ 0, we obtain that in (52) +and (dt)T ∇2F(xt + α∗dt)dt ≥ 0, we obtain that +LF +Hα∗∥dt∥3 +(8) +≥ +∥dt∥2∥∇2F(xt + α∗dt) − ∇2F(xt)∥ ≥ (dt)T (∇2F(xt + α∗dt) − ∇2F(xt))dt +≥ +−(dt)T ∇2F(xt)dt = ∥dt∥3. +(54) +Recall from (52) that dt ̸= 0. It then follows from (54) that α∗ ≥ 1/LF +H, which along with θj > α∗ implies +that θj > 1/LF +H. Hence, (53) holds in this case. +Case 2) F(xt + θjdt) ≤ F(xt). It follows from this and F(xt) ≤ F(u0) that (41) holds for x = xt and +y = xt + θjdt. By this and the fact that j violates (17), one has +− η +2θ2j∥dt∥3 +≤ +F(xt + θjdt) − F(xt) +(41) +≤ θj∇F(xt)T dt + θ2j +2 (dt)T ∇2F(xt)dt + LF +H +6 θ3j∥dt∥3 +(52) +≤ +− θ2j +2 ∥dt∥3 + LF +H +6 θ3j∥dt∥3, +which together with dt ̸= 0 implies that θj ≥ 3(1 − η)/LF +H. Hence, (53) also holds in this case. +17 + +Combining the above two cases, we conclude that (53) holds for any j ≥ 0 that violates (17). By this +and θ ∈ (0, 1), one can see that all j ≥ 0 that violate (17) must be bounded above. It then follows that the +step length αt associated with (17) is well-defined. We next derive a lower bound for αt. Notice from the +definition of jt in Algorithm 1 that j = jt − 1 violates (17) and hence (53) holds for j = jt − 1. Then, by +(53) with j = jt − 1 and αt = θjt, one has αt = θjt ≥ min{θ/LF +H, 3(1 − η)θ/LF +H}, which immediately yields +αt ≥ min{1, θ/LF +H, 3(1 − η)θ/LF +H} as desired. +We next prove statement (ii) by considering two separate cases below. +Case 1) dt results from the output d of Algorithm 3 with d type=NC. It then follows from (46) that +∥dt∥ ≥ ǫH. This together with (17) and statement (i) implies that statement (ii) holds. +Case 2) dt results from the output v of Algorithm 4. 
+Notice from Algorithm 4 that ∥v∥ = 1 and +vT ∇2F(xt)v ≤ −ǫH/2, which along with (15) yields ∥dt∥ ≥ ǫH/2. By this, (17) and statement (i), one can +see that statement (ii) again holds. +Proof of Theorem 3.1. For notational convenience, we let {xt}t∈T denote all the iterates generated by Algo- +rithm 1, where T is a set of consecutive nonnegative integers starting from 0. Notice that F is descent along +the iterates generated by Algorithm 1, which together with x0 = u0 implies that xt ∈ {x : F(x) ≤ F(u0)}. +It then follows from (9) that ∥∇2F(xt)∥ ≤ U F +H holds for all t ∈ T. +(i) Suppose for contradiction that the total number of calls of Algorithm 4 in Algorithm 1 is more than T2. +Notice from Algorithm 1 and Lemma 6.3(ii) that each of these calls, except the last one, returns a sufficiently +negative curvature direction, and each of them results in a reduction on F of at least cncǫ3 +H. Hence, +T2cncǫ3 +H ≤ +� +t∈T +[F(xt) − F(xt+1)] ≤ F(x0) − Flow = Fhi − Flow, +which contradicts the definition of T2 given in (18). Hence, statement (i) of Theorem 3.1 holds. +(ii) Suppose for contradiction that the total number of calls of Algorithm 3 in Algorithm 1 is more +than T1. +Observe that if Algorithm 3 is called at some iteration t and generates the next iterate xt+1 +satisfying ∥∇F(xt+1)∥ ≤ ǫg, then Algorithm 4 must be called at the next iteration t + 1. In view of this +and statement (i) of Theorem 3.1, we see that the total number of such iterations t is at most T2. Hence, +the total number of iterations t of Algorithm 1 at which Algorithm 3 is called and generates the next iterate +xt+1 satisfying ∥∇F(xt+1)∥ > ǫg is at least T1 �� T2 + 1. Moreover, for each of such iterations t, we observe +from Lemmas 6.2(ii) and 6.3(ii) that F(xt) − F(xt+1) ≥ min{csol, cnc} min{ǫ2 +gǫ−1 +H , ǫ3 +H}. It then follows that +(T1 − T2 + 1) min{csol, cnc} min{ǫ2 +gǫ−1 +H , ǫ3 +H} ≤ +� +t∈T +[F(xt) − F(xt+1)] ≤ Fhi − Flow, +which contradicts the definition of T1 and T2 given in (18). Hence, statement (ii) of Theorem 3.1 holds. +(iii) Notice that either Algorithm 3 or 4 is called at each iteration of Algorithm 1. It follows from this +and statements (i) and (ii) of Theorem 3.1 that the total number of iterations of Algorithm 1 is at most +T1 +T2. In addition, the relation (21) follows from (19), (20) and (18). One can also observe that the output +xt of Algorithm 1 satisfies ∥∇F(xt)∥ ≤ ǫg deterministically and λmin(∇2F(xt)) ≥ −ǫH with probability at +least 1 − δ for some 0 ≤ t ≤ T1 + T2, where the latter part is due to Algorithm 4. This completes the proof +of statement (ii) of Theorem 3.1. +(iv) By Theorem A.1 with (H, ε) = (∇2F(xt), ǫH) and the fact that ∥∇2F(xt)∥ ≤ U F +H, one can observe +that the number of Hessian-vector products required by each call of Algorithm 3 with input U = 0 is at +most �O(min{n, (U F +H/ǫH)1/2}). In addition, by Theorem B.1 with (H, ε) = (∇2F(xt), ǫH), ∥∇2F(xt)∥ ≤ U F +H, +and the fact that each iteration of the Lanczos method requires only one matrix-vector product, one can +observe that the number of Hessian-vector products required by each call of Algorithm 4 is also at most +�O(min{n, (U F +H/ǫH)1/2}). +Based on these observations and statement (iii) of Theorem 3.1, we see that +statement (iv) of this theorem holds. +18 + +6.2 +Proof of the main results in Section 4 +Recall from Assumption 4.1(a) that ∥c(zǫ1)∥ ≤ ǫ1/2 < 1. By virtue of this, (23) and the definition of ˜c in +(25), we obtain that +f(x) + γ∥˜c(x)∥2 ≥ f(x) + γ∥c(x)∥2/2 − γ∥c(zǫ1)∥2 ≥ flow − γ, +∀x ∈ Rn . 
+(55) +We now prove the following auxiliary lemma that will be used frequently later. +Lemma 6.4. Suppose that Assumption 4.1 holds. Let γ, fhi and flow be given in Assumption 4.1. Assume +that ρ > 2γ, λ ∈ Rm, and x ∈ Rn satisfy +�L(x, λ; ρ) ≤ fhi, +(56) +where �L is defined in (26). Then the following statements hold. +(i) f(x) ≤ fhi + ∥λ∥2/(2ρ). +(ii) ∥˜c(x)∥ ≤ +� +2(fhi − flow + γ)/(ρ − 2γ) + ∥λ∥2/(ρ − 2γ)2 + ∥λ∥/(ρ − 2γ). +(iii) If ρ ≥ ∥λ∥2/(2˜δf) for some ˜δf > 0, then f(x) ≤ fhi + ˜δf. +(iv) If +ρ ≥ 2(fhi − flow + γ)˜δ−2 +c ++ 2∥λ∥˜δ−1 +c ++ 2γ +(57) +for some ˜δc > 0, then ∥˜c(x)∥ ≤ ˜δc. +Proof. (i) It follows from (56) and the definition of �L in (26) that +fhi ≥ f(x) + λT ˜c(x) + ρ +2∥˜c(x)∥2 = f(x) + ρ +2 +���˜c(x) + λ +ρ +��� +2 +− ∥λ∥2 +2ρ +≥ f(x) − ∥λ∥2 +2ρ . +Hence, statement (i) holds. +(ii) In view of (55) and (56), one has +fhi +(56) +≥ f(x) + λT ˜c(x) + ρ +2∥˜c(x)∥2 = f(x) + γ∥˜c(x)∥2 + ρ−2γ +2 +���˜c(x) + +λ +ρ−2γ +��� +2 +− +∥λ∥2 +2(ρ−2γ) +(55) +≥ flow − γ + ρ−2γ +2 +���˜c(x) + +λ +ρ−2γ +��� +2 +− +∥λ∥2 +2(ρ−2γ). +It then follows that +���˜c(x) + +λ +ρ−2γ +��� ≤ +� +2(fhi−flow+γ) +ρ−2γ ++ +∥λ∥2 +(ρ−2γ)2 , which implies that statement (ii) holds. +(iii) Statement (iii) immediately follows from statement (i) and ρ ≥ ∥λ∥2/(2˜δf). +(iv) Suppose that (57) holds. Multiplying both sides of (57) by ˜δ2 +c and rearranging the terms, we have +(ρ − 2γ)˜δ2 +c − 2∥λ∥˜δc − 2(fhi − flow + γ) ≥ 0. +Recall that ρ > 2γ and ˜δc > 0. Solving this inequality for ˜δc yields +˜δc ≥ +� +2(fhi − flow + γ)/(ρ − 2γ) + ∥λ∥2/(ρ − 2γ)2 + ∥λ∥/(ρ − 2γ), +which along with statement (ii) implies that ∥˜c(x)∥ ≤ ˜δc. Hence, statement (iv) holds. +Proof of Lemma 4.1. (i) Let x be any point such that �L(x, λk; ρk) ≤ �L(xk +init, λk; ρk). It then follows from +(30) that �L(x, λk; ρk) ≤ fhi. By this, ∥λk∥ ≤ Λ, ρk ≥ ρ0 > 2γ, δf,1 ≤ δf, δc,1 ≤ δc, and Lemma 6.4 with +(λ, ρ) = (λk, ρk), one has +f(x) ≤ fhi + ∥λk∥2/(2ρk) ≤ fhi + Λ2/(2ρ0) = fhi + δf,1 ≤ fhi + δf, +∥˜c(x)∥ ≤ +� +2(fhi−flow+γ) +ρk−2γ ++ +∥λk∥2 +(ρk−2γ)2 + +∥λk∥ +ρk−2γ ≤ +� +2(fhi−flow+γ) +ρ0−2γ ++ +Λ2 +(ρ0−2γ)2 + +Λ +ρ0−2γ = δc,1 ≤ δc. +(58) +Also, recall from the definition of ˜c in (25) and ∥c(zǫ1)∥ ≤ 1 that ∥c(x)∥ ≤ 1 + ∥˜c(x)∥. This together with +the above inequalities and (24) implies x ∈ S(δf, δc). Hence, statement (i) of Lemma 4.1 holds. +19 + +(ii) Note that inf +x∈Rn �L(x, λk; ρk) = inf +x∈Rn{�L(x, λk; ρk) : �L(x, λk; ρk) ≤ �L(xk +init, λk; ρk)}. Consequently, to +prove statement (ii) of Lemma 4.1, it suffices to show that +inf +x∈Rn{�L(x, λk; ρk) : �L(x, λk; ρk) ≤ �L(xk +init, ��k; ρk)} ≥ flow − γ − Λδc. +(59) +To this end, let x be any point satisfying �L(x, λk; ρk) ≤ �L(xk +init, λk; ρk). +We then know from (58) that +∥˜c(x)∥ ≤ δc. By this, ∥λk∥ ≤ Λ, ρk > 2γ, and (55), one has +�L(x, λk; ρk) = f(x) + γ∥˜c(x)∥2 + (λk)T ˜c(x) + ρk−2γ +2 +∥˜c(x)∥2 +≥ f(x) + γ∥˜c(x)∥2 − Λ∥˜c(x)∥ ≥ flow − γ − Λδc, +and hence (59) holds as desired. +Proof of Theorem 4.2. Suppose that Algorithm 2 terminates at some iteration k, that is, τg +k ≤ ǫ1, τ H +k ≤ ǫ2, +and ∥c(xk+1)∥ ≤ ǫ1 hold. Then, by τg +k ≤ ǫ1, ˜λk+1 = λk + ρk˜c(xk+1), ∇˜c = ∇c and the second relation in +(27), one has +∥∇f(xk+1) + ∇c(xk+1)˜λk+1∥ = ∥∇f(xk+1) + ∇˜c(xk+1)(λk + ρk˜c(xk+1))∥ += ∥∇x�L(xk+1, λk; ρk)∥ ≤ τ g +k ≤ ǫ1. +Hence, (xk+1, ˜λk+1) satisfies the first relation in (5). 
In addition, by (28) and τ H +k ≤ ǫ2, one can show that +λmin(∇2 +xx�L(xk+1, λk; ρk)) ≥ −ǫ2 with probability at least 1 − δ, which leads to dT ∇2 +xx�L(xk+1, λk; ρk)d ≥ +−ǫ2∥d∥2 for all d ∈ Rn with probability at least 1 − δ. Using this, ˜λk+1 = λk + ρk˜c(xk+1), ∇˜c = ∇c, and +∇2˜ci = ∇2ci for 1 ≤ i ≤ m, we see that with probability at least 1 − δ, it holds that +dT +� +∇2f(xk+1) + +m +� +i=1 +˜λk+1 +i +∇2ci(xk+1) + ρk∇c(xk+1)∇c(xk+1)T +� +d ≥ −ǫ2∥d∥2 ∀d ∈ Rn, +which implies dT (∇2f(xk+1) + �m +i=1 ˜λk+1 +i +∇2ci(xk+1))d ≥ −ǫ2∥d∥2 for all d ∈ C(xk+1), where C(·) is defined +in (4). Hence, (xk+1, ˜λk+1) satisfies (6) with probability at least 1−δ. Combining these with ∥c(xk+1)∥ ≤ ǫ1, +we conclude that xk+1 is a deterministic ǫ1-FOSP of (1) and an (ǫ1, ǫ2)-SOSP of (1) with probability at least +1 − δ. Hence, Theorem 4.2 holds. +Proof of Theorem 4.3. It follows from (35) that ρǫ1 ≥ 2ρ0. By this, one has +Kǫ1 +(33) += +⌈log ǫ1/ log ω1⌉ +(32) += +⌈log 2/ log r⌉ ≤ log(ρǫ1ρ−1 +0 )/ log r + 1. +(60) +Notice that {ρk} is either unchanged or increased by a ratio r as k increases. By this fact and (60), we see +that +max +0≤k≤Kǫ1 +ρk ≤ rKǫ1 ρ0 +(60) +≤ r +log(ρǫ1 ρ−1 +0 +) +log r ++1ρ0 = rρǫ1. +(61) +In addition, notice that ρk > 2γ and ∥λk∥ ≤ Λ. Using these, (22), the first relation in (27), and Lemma +6.4(ii) with (x, λ, ρ) = (xk+1, λk, ρk), we obtain that +∥˜c(xk+1)∥ ≤ +� +2(fhi−flow+γ) +ρk−2γ ++ +∥λk∥2 +(ρk−2γ)2 + +∥λk∥ +ρk−2γ ≤ +� +2(fhi−flow+γ) +ρk−2γ ++ +Λ2 +(ρk−2γ)2 + +Λ +ρk−2γ . +(62) +Also, we observe from ∥c(zǫ1)∥ ≤ ǫ1/2 and the definition of ˜c in (25) that +∥c(xk+1)∥ ≤ ∥˜c(xk+1)∥ + ∥c(zǫ1)∥ ≤ ∥˜c(xk+1)∥ + ǫ1/2. +(63) +We now prove that Kǫ1 is finite. Suppose for contradiction that Kǫ1 is infinite. It then follows from this +and (36) that ∥c(xk+1)∥ > ǫ1 for all k ≥ Kǫ1, which along with (63) implies that ∥˜c(xk+1)∥ > ǫ1/2 for all +k ≥ Kǫ1. It then follows that ∥˜c(xk+1)∥ > α∥˜c(xk)∥ must hold for infinitely many k’s. Using this and the +update scheme on {ρk}, we deduce that ρk+1 = rρk holds for infinitely many k’s, which together with the +20 + +monotonicity of {ρk} implies that ρk → ∞ as k → ∞. By this and (62), one can see that ∥˜c(xk+1)∥ → 0 +as k → ∞, which contradicts the fact that ∥˜c(xk+1)∥ > ǫ1/2 holds for all k ≥ Kǫ1. Hence, Kǫ1 is finite. +In addition, notice from (32), (33) and (34) that (τ g +k , τ H +k ) = (ǫ1, ǫ2) for all k ≥ Kǫ1. This along with the +termination criterion of Algorithm 2 and the definition of Kǫ1 implies that Algorithm 2 must terminate at +iteration Kǫ1. +We next show that (37) and ρk ≤ rρǫ1 hold for 0 ≤ k ≤ Kǫ1 by considering two separate cases below. +Case 1) ∥c(xKǫ1 +1)∥ ≤ ǫ1. By this and (36), one can see that Kǫ1 = Kǫ1, which together with (60) and +(61) implies that (37) and ρk ≤ rρǫ1 hold for 0 ≤ k ≤ Kǫ1. +Case 2) ∥c(xKǫ1 +1)∥ > ǫ1. By this and (36), one can observe that Kǫ1 > Kǫ1 and also ∥c(xk+1)∥ > ǫ1 +for all Kǫ1 ≤ k ≤ Kǫ1 − 1, which together with (63) implies +∥˜c(xk+1)∥ > ǫ1/2, +∀Kǫ1 ≤ k ≤ Kǫ1 − 1. +(64) +It then follows from ∥λk∥ ≤ Λ, (22), the first relation in (27), and Lemma 6.4(iv) with (x, λ, ρ, ˜δc) = +(xk+1, λk, ρk, ǫ1/2) that +ρk < 8(fhi − flow + γ)ǫ−2 +1 ++ 4∥λk∥ǫ−1 +1 ++ 2γ +≤ 8(fhi − flow + γ)ǫ−2 +1 ++ 4Λǫ−1 +1 ++ 2γ +(35) +≤ ρǫ1, +∀Kǫ1 ≤ k ≤ Kǫ1 − 1. +(65) +Combining this relation, (61), and the fact ρKǫ1 ≤ rρKǫ1 −1, we conclude that ρk ≤ rρǫ1 holds for 0 ≤ k ≤ Kǫ1. +It remains to show that (37) holds. To this end, let +K = {k : ρk+1 = rρk, Kǫ1 ≤ k ≤ Kǫ1 − 2}. 
+It follows from (65) and the update scheme of ρk that +r| K |ρKǫ1 = +max +Kǫ1 ≤k≤Kǫ1 −1 +{ρk} ≤ ρǫ1, +which together with ρKǫ1 ≥ ρ0 implies that +| K | ≤ log(ρǫ1ρ−1 +Kǫ1 )/ log r ≤ log(ρǫ1ρ−1 +0 )/ log r. +(66) +Let {k1, k2, . . . , k| K |} denote all the elements of K arranged in ascending order, and let k0 = Kǫ1 and +k| K |+1 = Kǫ1 − 1. We next derive an upper bound for kj+1 − kj for j = 0, 1, . . ., | K |. By the definition of +K, one can observe that ρk = ρk′ for kj < k, k′ ≤ kj+1. Using this and the update scheme of ρk, we deduce +that +∥˜c(xk+1)∥ ≤ α∥˜c(xk)∥, +∀kj < k < kj+1. +(67) +On the other hand, by (31), (62) and ρk ≥ ρ0, one has ∥˜c(xk+1)∥ ≤ δc,1 for 0 ≤ k ≤ Kǫ1. By this and (64), +one can see that +ǫ1/2 < ∥˜c(xk+1)∥ ≤ δc,1, +∀Kǫ1 ≤ k ≤ Kǫ1 − 1. +(68) +Now, note that either kj+1 − kj = 1 or kj+1 − kj > 1. In the latter case, we can apply (67) with k = +kj+1 − 1, . . . , kj + 1 together with (68) to deduce that +ǫ1/2 < ∥˜c(xkj+1)∥ ≤ α∥˜c(xkj+1−1)∥ ≤ · · · ≤ αkj+1−kj−1∥˜c(xkj+1)∥ ≤ αkj+1−kj−1δc,1, +∀j = 0, 1, . . . , | K |. +Combining these two cases, we have +kj+1 − kj ≤ | log(ǫ1(2δc,1)−1))/ log α| + 1, +∀j = 0, 1, . . ., | K |. +(69) +Summing up these inequalities, and using (60), (66), k0 = Kǫ1 and k| K |+1 = Kǫ1 − 1, we have +Kǫ1 = 1 + k| K |+1 = 1 + k0 + �| K | +j=0(kj+1 − kj) +(69) +≤ 1 + Kǫ1 + (| K | + 1) +���� log(ǫ1(2δc,1)−1) +log α +��� + 1 +� +≤ 2 + log(ρǫ1 ρ−1 +0 +) +log r ++ +� log(ρǫ1 ρ−1 +0 +) +log r ++ 1 +� ���� log(ǫ1(2δc,1)−1) +log α +��� + 1 +� += 1 + +� log(ρǫ1 ρ−1 +0 +) +log r ++ 1 +� ���� log(ǫ1(2δc,1)−1) +log α +��� + 2 +� +, +where the second inequality is due to (60) and (66). Hence, (37) also holds in this case. +21 + +We next prove Theorem 4.4. Before proceeding, we introduce some notation that will be used shortly. +Let Lk,H denote the Lipschitz constant of ∇2 +xx�L(x, λk; ρk) on the convex open neighborhood Ω(δf, δc) of +S(δf, δc), where S(δf, δc) is defined in (24), and let Uk,H = supx∈S(δf ,δc) ∥∇2 +xx�L(x, λk; ρk)∥. Notice from (25) +and (26) that +∇2 +xx�L(x, λk; ρk) = ∇2f(x) + +m +� +i=1 +λk +i ∇2ci(x) + ρk +� +∇c(x)∇c(x)T + +m +� +i=1 +˜ci(x)∇2ci(x) +� +. +(70) +By this, ∥λk∥ ≤ Λ, the definition of ˜c, and the Lipschitz continuity of ∇2f and ∇2ci’s (see Assumption 4.1(c)), +one can observe that there exist some constants L1, L2, U1 and U2, depending only on f, c, Λ, δf and δc, +such that +Lk,H ≤ L1 + ρkL2, +Uk,H ≤ U1 + ρkU2. +(71) +Proof of Theorem 4.4. Let Tk and Nk denote the number of iterations and matrix-vector products performed +by Algorithm 1 at the outer iteration k of Algorithm 2, respectively. It then follows from Theorem 4.3 that +the total number of iterations and matrix-vector products performed by Algorithm 1 in Algorithm 2 are +�Kǫ1 +k=0 Tk and �Kǫ1 +k=0 Nk, respectively. In addition, notice from (35) and Theorem 4.3 that ρǫ1 = O(ǫ−2 +1 ) and +ρk ≤ rρǫ1, which yield ρk = O(ǫ−2 +1 ). +We first claim that (τ g +k )2/τ H +k +≥ min{ǫ2 +1/ǫ2, ǫ3 +2} holds for any k ≥ 0. Indeed, let ¯t = log ǫ1/ log ω1 and +ψ(t) = max{ǫ1, ωt +1}2/ max{ǫ2, ωt +2} for all t ∈ R. It then follows from (34) that ω¯t +1 = ǫ1 and ω¯t +2 = ǫ2. By this +and ω1, ω2 ∈ (0, 1), one can observe that ψ(t) = (ω2 +1/ω2)t if t ≤ ¯t and ψ(t) = ǫ2 +1/ǫ2 otherwise. This along +with ǫ2 ∈ (0, 1) implies that +min +t∈[0,∞) ψ(t) = min{ψ(0), ψ(¯t)} = min{1, ǫ2 +1/ǫ2} ≥ min{ǫ2 +1/ǫ2, ǫ3 +2}, +which together with (32) yields (τ g +k )2/τ H +k = ψ(k) ≥ min{ǫ2 +1/ǫ2, ǫ3 +2} for all k ≥ 0. 
+(i) From Lemma 4.1(i) and the definitions of Ω(δf, δc) and Lk,H, we see that Lk,H is a Lipschitz constant +of ∇2 +xx�L(x, λk; ρk) on a convex open neighborhood of {x : �L(x, λk; ρk) ≤ �L(xk +init, λk; ρk)}. Also, recall from +Lemma 4.1(ii) that infx∈Rn �L(x, λk; ρk) ≥ flow − γ − Λδc. By these, �L(xk +init, λk; ρk) ≤ fhi (see (30)) and +Theorem 3.1(iii) with (Fhi, Flow, LF +H, ǫg, ǫH) = (�L(xk +init, λk; ρk), flow − γ − Λδc, Lk,H, τ g +k , τ H +k ), one has +Tk += +O((fhi − flow + γ + Λδc)L2 +k,H max{(τ g +k )−2τ H +k , (τ H +k )−3}) +(71) += +O(ρ2 +k max{(τ g +k )−2τ H +k , (τ H +k )−3}) = O(ǫ−4 +1 +max{ǫ−2 +1 ǫ2, ǫ−3 +2 }), +(72) +where the last equality is from (τ g +k )2/τ H +k ≥ min{ǫ2 +1/ǫ2, ǫ3 +2}, τ H +k ≥ ǫ2, and ρk = O(ǫ−2 +1 ). +Next, if c(x) = Ax − b for some A ∈ Rm×n and b ∈ Rm, then ∇c(x) = AT and ∇2ci(x) = 0 for +1 ≤ i ≤ m. By these and (70), one has Lk,H = O(1). Using this and similar arguments as for (72), we obtain +that Tk = O(max{ǫ−2 +1 ǫ2, ǫ−3 +2 }). By this, (72) and Kǫ1 = O(| log ǫ1|2) (see Remark 4.4), we conclude that +statement (i) of Theorem 4.4 holds. +(ii) In view of Lemma 4.1(i) and the definition of Uk,H, one can see that +Uk,H ≥ sup +x∈Rn{∥∇2 +xx�L(x, λk; ρk)∥ : �L(x, λk; ρk) ≤ �L(xk +init, λk; ρk)}. +Using this, �L(xk +init, λk; ρk) ≤ fhi and Theorem 3.1(iv) with (Fhi, Flow, LF +H, U F +H, ǫg, ǫH) = (�L(xk +init, λk; ρk), flow− +γ − Λδc, Lk,H, Uk,H, τ g +k , τ H +k ), we obtain that +Nk += +�O((fhi − flow + γ + Λδc)L2 +k,H max{(τ g +k )−2τ H +k , (τ H +k )−3} min{n, (Uk,H/τ H +k )1/2}) +(71) += +�O(ρ2 +k max{(τ g +k )−2τ H +k , (τ H +k )−3} min{n, (ρk/τ H +k )1/2}) += +�O(ǫ−4 +1 +max{ǫ−2 +1 ǫ2, ǫ−3 +2 } min{n, ǫ−1 +1 ǫ−1/2 +2 +}), +(73) +where the last equality is from (τ g +k )2/τ H +k ≥ min{ǫ2 +1/ǫ2, ǫ3 +2}, τ H +k ≥ ǫ2, and ρk = O(ǫ−2 +1 ). +On the other hand, if c is assumed to be affine, it follows from the above discussion that Lk,H = O(1). Us- +ing this, Uk,H ≤ U1+ρkU2, and similar arguments as for (73), we obtain that Nk = �O(max{ǫ−2 +1 ǫ2, ǫ−3 +2 } min{n, ǫ−1 +1 ǫ−1/2 +2 +}). +By this, (73) and Kǫ1 = O(| log ǫ1|2) (see Remark 4.4), we conclude that statement (ii) of Theorem 4.4 +holds. +22 + +Next, we provide a proof of Theorem 4.5. To proceed, we first observe from Assumptions 4.1(c) and 4.2 +that there exist U f +g > 0, U c +g > 0 and σ > 0 such that +∥∇f(x)∥ ≤ U f +g , +∥∇c(x)∥ ≤ U c +g, +λmin(∇c(x)T ∇c(x)) ≥ σ2, +∀x ∈ S(δf, δc). +(74) +We next establish several technical lemmas that will be used shortly. +Lemma 6.5. Suppose that Assumptions 4.1 and 4.2 hold and that ρ0 is sufficiently large such that δf,1 ≤ δf +and δc,1 ≤ δc, where δf,1 and δc,1 are defined in (31). Let {(xk, λk, ρk)} be generated by Algorithm 2. Suppose +that +ρk ≥ max{Λ2(2δf)−1, 2(fhi − flow + γ)δ−2 +c ++ 2Λδ−1 +c ++ 2γ, 2(U f +g + U c +gΛ + 1)(σǫ1)−1} +(75) +for some k ≥ 0, where γ, fhi, flow, δf and δc are given in Assumption 4.1, and U f +g , U c +g and σ are given in +(74). Then it holds that ∥c(xk+1)∥ ≤ ǫ1. +Proof. By (75) and ∥λk∥ ≤ Λ (see step 6 of Algorithm 2), one can see that +ρk ≥ max{∥λk∥2(2δf)−1, 2(fhi − flow + γ)δ−2 +c ++ 2∥λk∥δ−1 +c ++ 2γ}. +Using this, (22), the first relation in (27), and Lemma 6.4(iii) and (iv) with (x, λ, ρ, ˜δf, ˜δc) = (xk+1, λk, ρk, δf, δc), +we obtain that f(xk+1) ≤ fhi + δf and ∥˜c(xk+1)∥ ≤ δc. In addition, recall from ∥c(zǫ1)∥ ≤ 1 and the defini- +tion of ˜c in (25) that ∥c(xk+1)∥ ≤ 1 + ∥˜c(xk+1)∥. These together with (24) show that xk+1 ∈ S(δf, δc). 
It +then follows from (74) that ∥∇f(xk+1)∥ ≤ U f +g , ∥∇c(xk+1)∥ ≤ U c +g, and λmin(∇c(xk+1)T ∇c(xk+1)) ≥ σ2. By +∥∇f(xk+1)∥ ≤ U f +g , ∥∇c(xk+1)∥ ≤ U c +g, τ g +k ≤ 1, ∥λk∥ ≤ Λ, (25) and (27), one has +ρk∥∇c(xk+1)˜c(xk+1)∥ ≤ ∥∇f(xk+1) + ∇c(xk+1)λk∥ + ∥∇x�L(xk+1, λk; ρk)∥ +(27) +≤ ∥∇f(xk+1)∥ + ∥∇c(xk+1)∥∥λk∥ + τ g +k ≤ U f +g + U c +gΛ + 1. +(76) +In addition, note that λmin(∇c(xk+1)T ∇c(xk+1)) ≥ σ2 implies that ∇c(xk+1)T ∇c(xk+1) is invertible. Using +this fact and (76), we obtain +∥˜c(xk+1)∥ ≤ ∥(∇c(xk+1)T ∇c(xk+1))−1∇c(xk+1)T ∥∥∇c(xk+1)˜c(xk+1)∥ += λmin(∇c(xk+1)T ∇c(xk+1))− 1 +2 ∥∇c(xk+1)˜c(xk+1)∥ +(76) +≤ (U f +g + U c +gΛ + 1)/(σρk). +(77) +We also observe from (75) that ρk ≥ 2(U f +g +U c +gΛ+1)(σǫ1)−1, which along with (77) proves ∥˜c(xk+1)∥ ≤ ǫ1/2. +Combining this with the definition of ˜c in (25) and ∥c(zǫ1)∥ ≤ ǫ1/2, we conclude that ∥c(xk+1)∥ ≤ ǫ1 holds +as desired. +The next lemma provides a stronger upper bound for {ρk} than the one in Theorem 4.3. +Lemma 6.6. Suppose that Assumptions 4.1 and 4.2 hold and that ρ0 is sufficiently large such that δf,1 ≤ δf +and δc,1 ≤ δc, where δf,1 and δc,1 are defined in (31). Let {ρk} be generated by Algorithm 2 and +˜ρǫ1 := max{Λ2(2δf)−1, 2(fhi − flow + γ)δ−2 +c ++ 2Λδ−1 +c ++ 2γ, 2(U f +g + U c +gΛ + 1)(σǫ1)−1, 2ρ0}, +(78) +where γ, fhi, flow, δf and δc are given in Assumption 4.1, and U f +g , U c +g and σ are given in (74). Then +ρk ≤ r˜ρǫ1 holds for 0 ≤ k ≤ Kǫ1, where Kǫ1 is defined in (36). +Proof. It follows from (78) that ˜ρǫ1 ≥ 2ρ0. +By this and similar arguments as for (60), one has Kǫ1 ≤ +log(˜ρǫ1ρ−1 +0 )/ log r + 1, where Kǫ1 is defined in (33). Using this, the update scheme for {ρk}, and similar +arguments as for (61), we obtain +max +0≤k≤Kǫ1 +ρk ≤ r˜ρǫ1. +(79) +If ∥c(xKǫ1 +1)∥ ≤ ǫ1, it follows from (36) that Kǫ1 = Kǫ1, which together with (79) implies that ρk ≤ r˜ρǫ1 +holds for 0 ≤ k ≤ Kǫ1. On the other hand, if ∥c(xKǫ1 +1)∥ > ǫ1, it follows from (36) that ∥c(xk+1)∥ > ǫ1 for +Kǫ1 ≤ k ≤ Kǫ1 − 1. This together with Lemma 6.5 and (78) implies that for all Kǫ1 ≤ k ≤ Kǫ1 − 1, +ρk < max{Λ2(2δf)−1, 2(fhi − flow + γ)δ−2 +c ++ 2Λδ−1 +c ++ 2γ, 2(U f +g + U c +gΛ + 1)(σǫ1)−1} +(78) +≤ ˜ρǫ1. +By this, (79), and ρKǫ1 ≤ rρKǫ1 −1, we also see that ρk ≤ r˜ρǫ1 holds for 0 ≤ k ≤ Kǫ1. +23 + +Proof of Theorem 4.5. Notice from (78) and Lemma 6.6 that ˜ρǫ1 = O(ǫ−1 +1 ) and ρk ≤ r˜ρǫ1, which yield +ρk = O(ǫ−1 +1 ). The conclusion of Theorem 4.5 then follows from this and the same arguments as for the proof +of Theorem 4.4 with ρk = O(ǫ−2 +1 ) replaced by ρk = O(ǫ−1 +1 ). +7 +Future work +There are several possible future studies on this work. First, it would be interesting to extend our AL method +to seek an approximate SOSP of nonconvex optimization with inequality or more general constraints. Indeed, +for nonconvex optimization with inequality constraints, one can reformulate it as an equality constrained +problem using squared slack variables (e.g., see [7]). It can be shown that an SOSP of the latter problem +induces a weak SOSP of the original problem and also linear independence constraint qualification holds for +the latter problem if it holds for the original problem. As a result, it is promising to find an approximate +weak SOSP of an inequality constrained problem by applying our AL method to the equivalent equality +constrained problem. Second, it is worth studying whether the enhanced complexity results in Section 4.3 +can be derived under weaker constraint qualification (e.g., see [5]). 
Third, the development of our AL method +is based on a strong assumption that a nearly feasible solution of the problem is known. It would make the +method applicable to a broader class of problems if such an assumption could be removed by modifying the +method possibly through the use of infeasibility detection techniques (e.g., see [19]). Lastly, more numerical +studies would be helpful to further improve our AL method from a practical perspective. +References +[1] N. Agarwal, Z. Allen-Zhu, B. Bullins, E. Hazan, and T. Ma, Finding approximate local minima +faster than gradient descent, in Proceedings of the 49th Annual ACM SIGACT Symposium on Theory +of Computing, 2017, pp. 1195–1199. +[2] R. Andreani, E. G. Birgin, J. M. Mart´ınez, and M. L. Schuverdt, On augmented Lagrangian +methods with general lower-level constraints, SIAM J. Optim., 18 (2008), pp. 1286–1309. +[3] R. Andreani, G. Haeser, and J. M. Mart´ınez, On sequential optimality conditions for smooth +constrained optimization, Optim., 60 (2011), pp. 627–641. +[4] R. Andreani, G. Haeser, A. Ramos, and P. J. Silva, A second-order sequential optimality condi- +tion associated to the convergence of optimization algorithms, IMA J. Numer. Anal., 37 (2017), pp. 1902– +1929. +[5] R. Andreani, G. Haeser, M. L. Schuverdt, and P. J. Silva, Two new weak constraint qualifica- +tions and applications, SIAM J. Optim., 22 (2012), pp. 1109–1135. +[6] P. Armand and N. N. Tran, An augmented Lagrangian method for equality constrained optimization +with rapid infeasibility detection capabilities, J. Optim. Theory Appl., 181 (2019), pp. 197–215. +[7] D. P. Bertsekas, Nonlinear Programming, Athena Scientific, 1999. +[8] W. Bian, X. Chen, and Y. Ye, Complexity analysis of interior point algorithms for non-Lipschitz +and nonconvex minimization, Math. Program., 149 (2015), pp. 301–327. +[9] E. G. Birgin, J. Gardenghi, J. M. Mart´ınez, S. A. Santos, and P. L. Toint, Evaluation +complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models, +SIAM J. Optim., 26 (2016), pp. 951–967. +[10] E. G. Birgin, G. Haeser, and A. Ramos, Augmented Lagrangians with constrained subproblems +and convergence to second-order stationary points, Comput. Optim. Appl., 69 (2018), pp. 51–75. +[11] E. G. Birgin and J. M. Mart´ınez, Practical Augmented Lagrangian Methods for Constrained Opti- +mization, SIAM, 2014. +24 + +[12] E. G. Birgin and J. M. Mart´ınez, The use of quadratic regularization with a cubic descent condition +for unconstrained optimization, SIAM J. Optim., 27 (2017), pp. 1049–1074. +[13] E. G. Birgin and J. M. Mart´ınez, Complexity and performance of an augmented Lagrangian algo- +rithm, Optim. Methods and Softw., 35 (2020), pp. 885–920. +[14] J. F. Bonnans and G. Launay, Sequential quadratic programming with penalization of the displace- +ment, SIAM J. Optim., 5 (1995), pp. 792–812. +[15] N. Boumal, V. Voroninski, and A. S. Bandeira, The non-convex Burer-Monteiro approach works +on smooth semidefinite programs, in Advances in Neural information Processing Systems, vol. 29, 2016, +pp. 2757–2765. +[16] L. F. Bueno and J. M. Mart´ınez, On the complexity of an inexact restoration method for constrained +optimization, SIAM J. Optim., 30 (2020), pp. 80–101. +[17] S. Burer and R. D. C. Monteiro, A nonlinear programming algorithm for solving semidefinite +programs via low-rank factorization, Math. Program., 95 (2003), pp. 329–357. +[18] S. Burer and R. D. C. 
Monteiro, Local minima and convergence in low-rank semidefinite program- +ming, Math. Program., 103 (2005), pp. 427–444. +[19] J. V. Burke, F. E. Curtis, and H. Wang, A sequential quadratic optimization algorithm with rapid +infeasibility detection, SIAM J. Optim., 24 (2014), pp. 839–872. +[20] R. H. Byrd, F. E. Curtis, and J. Nocedal, Infeasibility detection and SQP methods for nonlinear +optimization, SIAM J. Optim., 20 (2010), pp. 2281–2299. +[21] R. H. Byrd, R. B. Schnabel, and G. A. Shultz, A trust region algorithm for nonlinearly constrained +optimization, SIAM J. Numer. Anal., 24 (1987), pp. 1152–1170. +[22] Y. Carmon and J. C. Duchi, Gradient descent finds the cubic-regularized nonconvex Newton step, +SIAM J. Optim., 29 (2019), pp. 2146–2178. +[23] Y. Carmon, J. C. Duchi, O. Hinder, and A. Sidford, “Convex until proven guilty”: Dimension- +free acceleration of gradient descent on non-convex functions, in International Conference on Machine +Learning, PMLR, 2017, pp. 654–663. +[24] Y. Carmon, J. C. Duchi, O. Hinder, and A. Sidford, Accelerated methods for nonconvex opti- +mization, SIAM J. Optim., 28 (2018), pp. 1751–1772. +[25] C. Cartis, N. I. Gould, and P. L. Toint, Adaptive cubic regularisation methods for unconstrained +optimization. Part II: worst-case function-and derivative-evaluation complexity, Math. Program., 130 +(2011), pp. 295–319. +[26] C. Cartis, N. I. Gould, and P. L. Toint, On the evaluation complexity of cubic regularization +methods for potentially rank-deficient nonlinear least-squares problems and its relevance to constrained +nonlinear optimization, SIAM J. Optim., 23 (2013), pp. 1553–1574. +[27] C. Cartis, N. I. Gould, and P. L. Toint, On the complexity of finding first-order critical points in +constrained nonlinear optimization, Math. Program., 144 (2014), pp. 93–106. +[28] C. Cartis, N. I. Gould, and P. L. Toint, On the evaluation complexity of constrained nonlin- +ear least-squares and general constrained nonlinear optimization using second-order methods, SIAM J. +Numer. Anal., 53 (2015), pp. 836–851. +[29] C. Cartis, N. I. Gould, and P. L. Toint, Evaluation complexity bounds for smooth constrained +nonlinear optimization using scaled KKT conditions, high-order models and the criticality measure χ, in +Approximation and Optimization, Springer, 2019, pp. 5–26. +25 + +[30] C. Cartis, N. I. Gould, and P. L. Toint, Optimality of orders one to three and beyond: char- +acterization and evaluation complexity in constrained nonconvex optimization, J. Complex., 53 (2019), +pp. 68–94. +[31] X. Chen, L. Guo, Z. Lu, and J. J. Ye, An augmented Lagrangian method for non-Lipschitz non- +convex programming, SIAM J. Numer. Anal., 55 (2017), pp. 168–193. +[32] D. Cifuentes and A. Moitra, Polynomial time guarantees for the Burer-Monteiro method, arXiv +preprint arXiv:1912.01745, (2019). +[33] T. F. Coleman, J. Liu, and W. Yuan, A new trust-region algorithm for equality constrained opti- +mization, Comput. Optim. Appl., 21 (2002), pp. 177–199. +[34] F. E. Curtis, D. P. Robinson, C. W. Royer, and S. J. Wright, Trust-region Newton-CG +with strong second-order complexity guarantees for nonconvex optimization, SIAM J Optim., 31 (2021), +pp. 518–544. +[35] F. E. Curtis, D. P. Robinson, and M. Samadi, A trust region algorithm with a worst-case iteration +complexity of O(ǫ−3/2) for nonconvex optimization, Math. Program., 162 (2017), pp. 1–32. +[36] F. E. Curtis, D. P. Robinson, and M. 
Samadi, Complexity analysis of a trust funnel algorithm for +equality constrained optimization, SIAM J. Optim., 28 (2018), pp. 1533–1563. +[37] G. N. Grapiglia and Y. Yuan, On the complexity of an augmented Lagrangian method for nonconvex +optimization, IMA J. Numer. Anal., 41 (2021), pp. 1508–1530. +[38] G. Haeser, H. Liu, and Y. Ye, Optimality condition and complexity analysis for linearly-constrained +optimization without differentiability on the boundary, Math. Program., (2019), pp. 1–37. +[39] M. R. Hestenes, Multiplier and gradient methods, J. Optim. Theory Appl., 4 (1969), pp. 303–320. +[40] M. Hong, D. Hajinezhad, and M.-M. Zhao, Prox-PDA: The proximal primal-dual algorithm for fast +distributed nonconvex optimization and learning over networks, in International Conference on Machine +Learning, PMLR, 2017, pp. 1529–1538. +[41] C. Jin, R. Ge, P. Netrapalli, S. M. Kakade, and M. I. Jordan, How to escape saddle points +efficiently, in International Conference on Machine Learning, PMLR, 2017, pp. 1724–1732. +[42] C. Kanzow and D. Steck, An example comparing the standard and safeguarded augmented La- +grangian methods, Oper. Res. Lett., 45 (2017), pp. 598–603. +[43] W. Kong, J. G. Melo, and R. D. C. Monteiro, Complexity of a quadratic penalty accelerated +inexact proximal point method for solving linearly constrained nonconvex composite programs, SIAM J. +Optim., 29 (2019), pp. 2566–2593. +[44] J. Kuczy´nski and H. Wo´zniakowski, Estimating the largest eigenvalue by the power and Lanczos +algorithms with a random start, SIAM J. Matrix Anal. Appl., 13 (1992), pp. 1094–1122. +[45] Z. Li, P.-Y. Chen, S. Liu, S. Lu, and Y. Xu, Rate-improved inexact augmented lagrangian method for +constrained nonconvex optimization, in International Conference on Artificial Intelligence and Statistics, +PMLR, 2021, pp. 2170–2178. +[46] S. Lu, A single-loop gradient descent and perturbed ascent algorithm for nonconvex functional con- +strained optimization, in International Conference on Machine Learning, PMLR, 2022, pp. 14315–14357. +[47] S. Lu, M. Razaviyayn, B. Yang, K. Huang, and M. Hong, Finding second-order stationary +points efficiently in smooth nonconvex linearly constrained optimization problems, Advances in Neural +Information Processing Systems, 33 (2020), pp. 2811–2822. +26 + +[48] Z. Lu and X. Li, Sparse recovery via partial regularization: models, theory, and algorithms, Math. +Oper. Res., 43 (2018), pp. 1290–1316. +[49] Z. Lu and Y. Zhang, An augmented Lagrangian approach for sparse principal component analysis, +Math. Program., 135 (2012), pp. 149–193. +[50] J. M. Mart´ınez and M. Raydan, Cubic-regularization counterpart of a variable-norm trust-region +method for unconstrained minimization, J. Glob. Optim., 68 (2017), pp. 367–385. +[51] J. G. Melo, R. D. Monteiro, and W. Kong, Iteration-complexity of an inner accelerated inexact +proximal augmented Lagrangian method based on the classical Lagrangian function and a full Lagrange +multiplier update, arXiv preprint arXiv:2008.00562, (2020). +[52] Y. Nesterov and B. T. Polyak, Cubic regularization of Newton method and its global performance, +Math. Program., 108 (2006), pp. 177–205. +[53] J. Nocedal and S. J. Wright, Numerical Optimization, Springer, 2nd ed., 2006. +[54] M. O’Neill and S. J. Wright, A log-barrier Newton-CG method for bound constrained optimization +with complexity guarantees, IMA J. Numer. Anal., 41 (2021), pp. 84–121. +[55] R. T. Rockafellar, Lagrange multipliers and optimality, SIAM review, 35 (1993), pp. 183–238. +[56] C. W. 
Royer, M. O’Neill, and S. J. Wright, A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization, Math. Program., 180 (2020), pp. 451–488.
[57] C. W. Royer and S. J. Wright, Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization, SIAM J. Optim., 28 (2018), pp. 1448–1477.
[58] M. F. Sahin, A. Eftekhari, A. Alacaoglu, F. Latorre, and V. Cevher, An inexact augmented Lagrangian framework for nonconvex optimization with nonlinear constraints, Advances in Neural Information Processing Systems, 32 (2019).
[59] Y. Xie and S. J. Wright, Complexity of projected Newton methods for bound-constrained optimization, arXiv preprint arXiv:2103.15989, (2021).
[60] Y. Xie and S. J. Wright, Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints, J. Sci. Comput., 86 (2021), pp. 1–30.
[61] L. Yang, D. Sun, and K. C. Toh, SDPNAL+: A majorized semismooth Newton-CG augmented Lagrangian method for semidefinite programming with nonnegative constraints, Math. Program. Comput., 7 (2015), pp. 331–366.
[62] X. Zhao, D. Sun, and K. C. Toh, A Newton-CG augmented Lagrangian method for semidefinite programming, SIAM J. Optim., 20 (2010), pp. 1737–1765.

Appendix

A  A capped conjugate gradient method

In this part we present the capped CG method proposed in [56, Algorithm 1] for finding either an approximate solution to the linear system (12) or a sufficiently negative curvature direction of the associated matrix H, which has been briefly discussed in Section 3.1. Its details can be found in [56, Section 3.1].

Algorithm 3 A capped conjugate gradient method
Inputs: symmetric matrix H ∈ R^{n×n}, vector g ≠ 0, damping parameter ε ∈ (0, 1), desired relative accuracy ζ ∈ (0, 1).
Optional input: scalar U ≥ 0 (set to 0 if not provided).
Outputs: d type, d.
Secondary outputs: final values of U, κ, ζ̂, τ, and T.
Set H̄ := H + 2εI, κ := (U + 2ε)/ε, ζ̂ := ζ/(3κ), τ := √κ/(√κ + 1), T := 4κ^4/(1 − √τ)^2, y^0 ← 0, r^0 ← g, p^0 ← −g, j ← 0.
if (p^0)^T H̄ p^0 < ε∥p^0∥^2 then
    Set d ← p^0 and terminate with d type = NC;
else if ∥Hp^0∥ > U∥p^0∥ then
    Set U ← ∥Hp^0∥/∥p^0∥ and update κ, ζ̂, τ, T accordingly;
end if
while TRUE do
    α_j ← (r^j)^T r^j / (p^j)^T H̄ p^j;   {Begin Standard CG Operations}
    y^{j+1} ← y^j + α_j p^j;
    r^{j+1} ← r^j + α_j H̄ p^j;
    β_{j+1} ← ∥r^{j+1}∥^2 / ∥r^j∥^2;
    p^{j+1} ← −r^{j+1} + β_{j+1} p^j;   {End Standard CG Operations}
    j ← j + 1;
    if ∥Hp^j∥ > U∥p^j∥ then
        Set U ← ∥Hp^j∥/∥p^j∥ and update κ, ζ̂, τ, T accordingly;
    end if
    if ∥Hy^j∥ > U∥y^j∥ then
        Set U ← ∥Hy^j∥/∥y^j∥ and update κ, ζ̂, τ, T accordingly;
    end if
    if ∥Hr^j∥ > U∥r^j∥ then
        Set U ← ∥Hr^j∥/∥r^j∥ and update κ, ζ̂, τ, T accordingly;
    end if
    if (y^j)^T H̄ y^j < ε∥y^j∥^2 then
        Set d ← y^j and terminate with d type = NC;
    else if ∥r^j∥ ≤ ζ̂∥r^0∥ then
        Set d ← y^j and terminate with d type = SOL;
    else if (p^j)^T H̄ p^j < ε∥p^j∥^2 then
        Set d ← p^j and terminate with d type = NC;
    else if ∥r^j∥ > √T τ^{j/2} ∥r^0∥ then
        Compute α_j, y^{j+1} as in the main loop above;
        Find i ∈ {0, . . . , j − 1} such that (y^{j+1} − y^i)^T H̄ (y^{j+1} − y^i) < ε∥y^{j+1} − y^i∥^2;
        Set d ← y^{j+1} − y^i and terminate with d type = NC;
    end if
end while
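For readers who prefer a concrete reference, a minimal NumPy sketch of the listing above is given next. It is only an illustration, not the implementation used in our experiments: the function name capped_cg, the dense-matrix interface, and the max_iter safeguard are our own choices and are not part of [56]; within Algorithm 1 the products H @ v would instead be computed via Hessian-vector products.

import numpy as np

def capped_cg(H, g, eps, zeta, U=0.0, max_iter=None):
    # Returns ('SOL', d) with d an approximate solution of (H + 2*eps*I) d = -g,
    # or ('NC', d) with d a direction whose curvature for H + 2*eps*I is below eps.
    n = g.size
    Hbar = lambda v: H @ v + 2.0 * eps * v

    def params(U):
        kappa = (U + 2.0 * eps) / eps
        zeta_hat = zeta / (3.0 * kappa)
        tau = np.sqrt(kappa) / (np.sqrt(kappa) + 1.0)
        T = 4.0 * kappa**4 / (1.0 - np.sqrt(tau))**2
        return kappa, zeta_hat, tau, T

    y, r, p, j = np.zeros(n), g.astype(float), -g.astype(float), 0
    ys = [y.copy()]                          # iterate history, used by the final NC test
    if p @ Hbar(p) < eps * (p @ p):
        return 'NC', p
    U = max(U, np.linalg.norm(H @ p) / np.linalg.norm(p))
    kappa, zeta_hat, tau, T = params(U)
    max_iter = max_iter if max_iter is not None else 10 * n
    for _ in range(max_iter):
        alpha = (r @ r) / (p @ Hbar(p))      # standard CG operations
        y = y + alpha * p
        r_new = r + alpha * Hbar(p)
        beta = (r_new @ r_new) / (r @ r)
        p, r = -r_new + beta * p, r_new
        j += 1
        ys.append(y.copy())
        for v in (p, y, r):                  # maintain the running curvature bound U
            nv = np.linalg.norm(v)
            if nv > 0 and np.linalg.norm(H @ v) > U * nv:
                U = np.linalg.norm(H @ v) / nv
                kappa, zeta_hat, tau, T = params(U)
        if y @ Hbar(y) < eps * (y @ y):
            return 'NC', y
        if np.linalg.norm(r) <= zeta_hat * np.linalg.norm(g):
            return 'SOL', y
        if p @ Hbar(p) < eps * (p @ p):
            return 'NC', p
        if np.linalg.norm(r) > np.sqrt(T) * tau**(j / 2) * np.linalg.norm(g):
            alpha = (r @ r) / (p @ Hbar(p))
            y_next = y + alpha * p
            for yi in ys[:-1]:               # such an index exists in exact arithmetic
                d = y_next - yi
                if d @ Hbar(d) < eps * (d @ d):
                    return 'NC', d
    raise RuntimeError('capped CG did not terminate within max_iter')

Note that the running estimate U (and hence κ, ζ̂, τ, T) is updated exactly as in the listing, so the early-termination tests can be monitored without knowing ∥H∥ in advance.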
The following theorem presents the iteration complexity of Algorithm 3.

Theorem A.1 (iteration complexity of Algorithm 3). Consider applying Algorithm 3 with input U = 0 to the linear system (12) with g ≠ 0, ε > 0, and H being an n × n symmetric matrix. Then the number of iterations of Algorithm 3 is Õ(min{n, √(∥H∥/ε)}).

Proof. From [56, Lemma 1], we know that the number of iterations of Algorithm 3 is bounded by min{n, J(U, ε, ζ)}, where J(U, ε, ζ) is the smallest integer J such that √T τ^{J/2} ≤ ζ̂, with U, ζ̂, T and τ being the values returned by Algorithm 3. In addition, it was shown in [56, Section 3.1] that
    J(U, ε, ζ) ≤ ⌈(√κ + 1/2) ln(144(√κ + 1)^2 κ^6 / ζ^2)⌉,
where κ = O(U/ε) is an output of Algorithm 3. It follows that J(U, ε, ζ) = Õ(√(U/ε)). Notice from Algorithm 3 that the output U satisfies U ≤ ∥H∥. Combining these observations, we obtain the conclusion as desired.

B  A randomized Lanczos based minimum eigenvalue oracle

In this part we present the randomized Lanczos method proposed in [56, Section 3.2], which can be used as a minimum eigenvalue oracle for Algorithm 1. As briefly discussed in Section 3.1, this oracle outputs either a sufficiently negative curvature direction of H or a certificate that H is nearly positive semidefinite with high probability. More detailed motivation and explanation of it can be found in [56, Section 3.2].

Algorithm 4 A randomized Lanczos based minimum eigenvalue oracle
Input: symmetric matrix H ∈ R^{n×n}, tolerance ε > 0, and probability parameter δ ∈ (0, 1).
Output: a sufficiently negative curvature direction v satisfying v^T Hv ≤ −ε/2 and ∥v∥ = 1; or a certificate that λ_min(H) ≥ −ε with probability at least 1 − δ.
Apply the Lanczos method [44] to estimate λ_min(H) starting with a random vector uniformly generated on the unit sphere, and run it for at most
    N(ε, δ) := min{ n, 1 + ⌈(1/2) ln(2.75 n/δ^2) √(∥H∥/ε)⌉ }        (80)
iterations. If a unit vector v with v^T Hv ≤ −ε/2 is found at some iteration, terminate immediately and return v.

The following theorem justifies that Algorithm 4 is a suitable minimum eigenvalue oracle for Algorithm 1. Its proof is identical to that of [56, Lemma 2] and is thus omitted.

Theorem B.1 (iteration complexity of Algorithm 4). Consider Algorithm 4 with tolerance ε > 0, probability parameter δ ∈ (0, 1), and symmetric matrix H ∈ R^{n×n} as its input. Then it either finds a sufficiently negative curvature direction v satisfying v^T Hv ≤ −ε/2 and ∥v∥ = 1 or certifies that λ_min(H) ≥ −ε holds with probability at least 1 − δ in at most N(ε, δ) iterations, where N(ε, δ) is defined in (80).

Notice that ∥H∥ is required in Algorithm 4. In general, computing ∥H∥ may not be cheap when n is large. Nevertheless, ∥H∥ can be efficiently estimated via a randomization scheme with high confidence (e.g., see the discussion in [56, Appendix B3]).
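For illustration, a minimal NumPy sketch of Algorithm 4 is given below; it is not the implementation used in our experiments. The function name min_eig_oracle, the dense-matrix interface, the full reorthogonalization, and the fallback computation of ∥H∥ are our own choices; in practice one would supply Hessian-vector products and an estimate (or upper bound) of ∥H∥ obtained, for instance, by the randomization scheme mentioned above.

import numpy as np

def min_eig_oracle(H, eps, delta, M=None, rng=None):
    # Returns ('NC', v) with v^T H v <= -eps/2 and ||v|| = 1, or ('PSD', None)
    # as a certificate that lambda_min(H) >= -eps with probability at least 1 - delta.
    n = H.shape[0]
    rng = np.random.default_rng() if rng is None else rng
    M = np.linalg.norm(H, 2) if M is None else M          # ||H|| or an upper bound on it
    # iteration cap N(eps, delta) from (80)
    N = min(n, 1 + int(np.ceil(0.5 * np.log(2.75 * n / delta**2) * np.sqrt(M / eps))))
    b0 = rng.standard_normal(n)
    Q = np.empty((n, N))
    Q[:, 0] = b0 / np.linalg.norm(b0)                     # random unit start vector
    alphas, betas = [], []
    for k in range(N):
        w = H @ Q[:, k]
        alphas.append(Q[:, k] @ w)
        Tk = np.diag(np.array(alphas))                    # current tridiagonal matrix
        if betas:
            Tk = Tk + np.diag(np.array(betas), 1) + np.diag(np.array(betas), -1)
        vals, vecs = np.linalg.eigh(Tk)
        v = Q[:, : k + 1] @ vecs[:, 0]                    # smallest Ritz pair
        v = v / np.linalg.norm(v)
        if v @ (H @ v) <= -eps / 2:
            return 'NC', v                                # sufficient negative curvature found
        if k == N - 1:
            break
        w = w - alphas[k] * Q[:, k]
        if k > 0:
            w = w - betas[k - 1] * Q[:, k - 1]
        w = w - Q[:, : k + 1] @ (Q[:, : k + 1].T @ w)     # full reorthogonalization
        beta = np.linalg.norm(w)
        if beta < 1e-12:                                  # invariant subspace, no NC detected
            break
        betas.append(beta)
        Q[:, k + 1] = w / beta
    return 'PSD', None                                    # certify lambda_min(H) >= -eps w.h.p.

Each iteration of this sketch requires a single product with H, which is precisely the quantity counted in the operation complexity results of Theorem 3.1(iv).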