diff --git a/0NFIT4oBgHgl3EQf2ysh/content/tmp_files/2301.11378v1.pdf.txt b/0NFIT4oBgHgl3EQf2ysh/content/tmp_files/2301.11378v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..b5dc5ea8981266edcc40b4b12420bf1dd0dfa72b
--- /dev/null
+++ b/0NFIT4oBgHgl3EQf2ysh/content/tmp_files/2301.11378v1.pdf.txt
@@ -0,0 +1,1298 @@

MG-GNN: Multigrid Graph Neural Networks for Learning Multilevel Domain Decomposition Methods

Ali Taghibakhshi 1, Nicolas Nytko 2, Tareq Uz Zaman 3, Scott MacLachlan 4, Luke N. Olson 2, Matt West 1

Abstract

Domain decomposition methods (DDMs) are popular solvers for discretized systems of partial differential equations (PDEs), with one-level and multilevel variants. These solvers rely on several algorithmic and mathematical parameters, prescribing overlap, subdomain boundary conditions, and other properties of the DDM. While some work has been done on optimizing these parameters, it has mostly focused on the one-level setting or special cases such as structured-grid discretizations with regular subdomain construction. In this paper, we propose multigrid graph neural networks (MG-GNN), a novel GNN architecture for learning optimized parameters in two-level DDMs. We train MG-GNN using a new unsupervised loss function, enabling effective training on small problems that yields robust performance on unstructured grids that are orders of magnitude larger than those in the training set. We show that MG-GNN outperforms popular hierarchical graph network architectures for this optimization and that our proposed loss function is critical to achieving this improved performance.

1. Introduction

Among numerical methods for solving the systems of equations obtained from discretization of partial differential equations (PDEs), domain decomposition methods (DDMs) are a popular approach (Toselli & Widlund, 2005; Quarteroni & Valli, 1999; Dolean et al., 2015). They have been extensively studied and applied to elliptic boundary value problems, but are also considered for time-dependent problems.

1Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign 2Department of Computer Science, University of Illinois at Urbana-Champaign 3Scientific Computing Program, Memorial University of Newfoundland 4Department of Mathematics and Statistics, Memorial University of Newfoundland. Correspondence to: Ali Taghibakhshi .

Schwarz methods are among the simplest and most popular types of DDM, and map well to MPI-style parallelism, with both one-level and multilevel variants. One-level methods decompose the global problem into multiple subproblems (subdomains), which are obtained either by discretizing the same PDE over a physical subdomain or by projection onto a discrete basis, using subproblem solutions to form a preconditioner for the global problem. Classical Schwarz methods generally consider Dirichlet or Neumann boundary conditions between the subdomains, while Optimized Schwarz methods (OSM) (Gander et al., 2000) consider a combination of Dirichlet and Neumann boundary conditions, known as Robin-type boundary conditions, to improve the convergence of the method. Restricted additive Schwarz (RAS) methods (Cai & Sarkis, 1999) are a common form of Schwarz methods, and optimized versions of one-level RAS have been studied theoretically by St-Cyr et al. (2007).
Two-level methods extend one-level approaches by adding a (global) coarse-grid correction step to the preconditioner, generally improving performance but at an added cost.

In recent years, there has been a growing focus on using machine learning (ML) methods to learn optimized parameters for iterative PDE solvers, including DDM and algebraic multigrid (AMG). In Greenfeld et al. (2019), convolutional neural networks (CNNs) are used to learn the interpolation operator in AMG on structured problems, and in a following study (Luz et al., 2020), graph neural networks (GNNs) are used to extend the results to arbitrary unstructured grids. In a different fashion, reinforcement learning methods along with GNNs are used to learn coarse-grid selection in reduction-based AMG in Taghibakhshi et al. (2021). As mentioned in Heinlein et al. (2021), when combining ML methods with DDM, approaches can be categorized into two main families, namely using ML within a classical DDM framework to obtain improved convergence, and using deep neural networks as the main solver or discretization module for DDMs. In a recent study (Taghibakhshi et al., 2022), GNNs are used to learn interface conditions in optimized Schwarz DDMs that can be applied to many subdomain problems, but their study is limited to one-level solvers. Two-level domain decomposition methods often converge significantly faster than one-level methods since they include coarse-grid correction, but obtaining optimized multilevel DDMs for general unstructured problems with arbitrary subdomains remains an open challenge.

Graph neural networks (GNNs) extend learning-based methods and convolution operators to unstructured data. Similar to structured problems, such as computer vision tasks, many graph-based problems require information sharing beyond just a limited local neighborhood in a given graph. However, unlike in CNNs, where deep networks with residual skip connections are often used to achieve long-range information passing, GNNs suffer dramatically from depth limitations. Stacking too many GNN layers results in oversmoothing, which is due to the close relation of graph convolution operators to Laplacian smoothing (Li et al., 2018; Oono & Suzuki, 2019). Oversmoothing essentially results in indistinguishable node representations after too many GNN layers, due to information aggregation over a large local neighborhood. Inspired by the U-net architecture in CNNs (Ronneberger et al., 2015), graph U-nets (Gao & Ji, 2019) were introduced as a remedy for long-range information sharing in graphs without using too many GNN layers. Similar to their CNN counterparts, graph U-net architectures apply down-sampling layers (pooling) to aggregate node information into a coarser representation of the problem with fewer nodes. This is followed by up-sampling layers (unpooling, with the same number of layers as pooling) to reconstruct finer representations of the problem and allow information to flow back to the finer levels from the coarser ones.
As mentioned in Ke et al. (2017), U-net and graph U-net architectures suffer from a handful of problems and non-optimalities. In these architectures, scale and abstraction are combined, meaning that earlier, finer layers cannot access the information of the coarse layers. In other words, initial layers learn deep features based only on a local neighborhood, without considering the larger picture of the problem. Moreover, finer levels do not benefit from information updates until the information flow reaches the coarsest level and flows back to the finer levels. That is, the information flow has to complete a full (graph) U-net cycle to update the finest-level information, which could potentially require multiple convolutional layers, leading to oversmoothing in the case of graph U-nets. More recently, similar hierarchical GNN architectures have been utilized for solving PDEs, such as those proposed by Fortunato et al. (2022) and Li et al. (2020). In each case, the architecture is similar to a U-net in terms of information flow (from the finest to coarsest graph and back), and there is no cross-scale information sharing, making them prone to the aforementioned U-net-type problems.

To fully unlock the ability of GNNs to learn optimal DDM operators, and to mitigate the shortcomings of graph U-nets mentioned above, we introduce here a novel GNN architecture, multigrid graph neural networks (MG-GNN). MG-GNN information flow is parallel at all scales, meaning every MG-GNN layer processes information from both coarse and fine scale graphs. We employ this MG-GNN architecture to advance DDM-based solvers by developing a learning-based approach for two-level optimized Schwarz methods. Specifically, we learn the Robin-type subdomain boundary conditions needed in OSM as well as the overall coarse-to-fine interpolation operator. We also develop a novel loss function essential for achieving superior performance compared to previous two-level optimized RAS. The summary of contributions of this paper is as follows:

• Introduce a multigrid graph neural network (MG-GNN) architecture that outperforms existing hierarchical GNN architectures and scales linearly with problem size;
• Improve the loss function with theoretical guarantees essential for training two-level Schwarz methods;
• Enforce scalability, leading to effective training on small problems and generalization to problems that are orders of magnitude larger; and
• Outperform classical two-level RAS, both as a stationary algorithm and as a preconditioner for the flexible generalized minimum residual (FGMRES) iteration.

2. Background

In this section, we review one- and two-level DDMs. Let Ω be an open set in $\mathbb{R}^2$, and consider the Poisson equation

$$-\Delta \Phi = f, \qquad (1)$$

where Δ is the Laplace operator and f(x, y) and Φ(x, y) are real-valued functions. Alongside (1), we consider inhomogeneous Dirichlet conditions on the boundary of Ω, ∂Ω, and use a piecewise linear finite-element (FE) discretization on arbitrary triangulations of Ω. In the linear FE discretization, every node in the obtained graph corresponds to a degree of freedom (DoF) in the discretization, and the set of all nodes is denoted by D. The set D is decomposed into S non-overlapping subdomains $\{D_1^0, D_2^0, \ldots, D_S^0\}$ (where the superscript in the notation indicates the amount of overlap; hence, the superscript zero for the non-overlapping decomposition). The union of the subdomains covers the set of all DoFs, $D = \cup_i D_i^0$, so that each node in D is contained in exactly one $D_i^0$. Denote the restriction operator for discrete DoFs onto those in $D_i^0$ by $R_i^0$ and the corresponding extension from $D_i^0$ to D by $(R_i^0)^T$.
Following the FE discretization of the problem, we obtain a linear system to solve, $Ax = b$, where A is the global stiffness matrix. For every $D_i^0$, we obtain the subdomain stiffness matrix as $A_i^0 = R_i^0 A (R_i^0)^T$. In the OSM setting, alternative definitions to this Galerkin projection for $A_i^0$ are possible, as noted below. To obtain the coarse-level representation of the problem, let $P \in \mathbb{R}^{|D| \times S}$ be the piecewise-constant interpolation operator that assigns every node in $D_i^0$ to the i-th coarse node. The coarse-level operator is then obtained as $A_C = P^T A P$.

The restricted additive Schwarz method (RAS) (Cai & Sarkis, 1999) is an important extension to the Schwarz methodology for the case of overlapping subdomains, where some nodes in D belong to more than one subdomain. Denoting the overlap amount by $\delta \in \mathbb{N}$, we define the subdomains $D_i^\delta$ for δ > 0 by recursion, as $D_i^\delta = D_i^{\delta-1} \cup \{j \mid a_{kj} \neq 0 \text{ for some } k \in D_i^{\delta-1}\}$. For the coarse-grid interpolation operator, P, each of the overlapping nodes is now associated with multiple columns of P, which is typically chosen as a partition of unity, with rows of P having equal nonzero weights (which can be interpreted as the probability of assigning a fine node to a given subdomain). The conventional two-level RAS preconditioner is then defined by considering the fine-level operator, $M_{\text{RAS}}$, and the coarse-level correction operator, $C_{\text{2-RAS}}$, given by

$$M_{\text{RAS}} = \sum_{i=1}^{S} (\tilde{R}_i^\delta)^T (A_i^\delta)^{-1} R_i^\delta, \qquad (2)$$

$$C_{\text{2-RAS}} = P (P^T A P)^{-1} P^T, \qquad (3)$$

where $A_i^\delta = R_i^\delta A (R_i^\delta)^T$. The operator $R_i^\delta$ denotes restriction from DoFs in D to those in $D_i^\delta$, while $\tilde{R}_i^\delta$ is a modified restriction from D to $D_i^\delta$ that takes nonzero values only for DoFs in $D_i^0$. The two-level RAS preconditioner is given as $M_{\text{2-RAS}} = C_{\text{2-RAS}} + M_{\text{RAS}} - C_{\text{2-RAS}} A M_{\text{RAS}}$, with the property that $I - M_{\text{2-RAS}} A = (I - C_{\text{2-RAS}} A)(I - M_{\text{RAS}} A)$.

In the case of optimized Schwarz, the subdomain systems (the fine-level $A_i^\delta$) are modified by imposing a Robin boundary condition between subdomains, writing $\tilde{A}_i^\delta = A_i^\delta + L_i$, where $L_i$ is the term resulting from the Robin-type condition

$$\alpha u + \vec{n} \cdot \nabla u = g(x), \qquad (4)$$

where g denotes inhomogeneous data and $\vec{n}$ is the outward unit normal to the boundary. The fine-level operator for optimized Schwarz is then given by

$$M_{\text{ORAS}} = \sum_{i=1}^{S} (\tilde{R}_i^\delta)^T (\tilde{A}_i^\delta)^{-1} R_i^\delta, \qquad (5)$$

where the choice of weight, α, in the subdomain Robin boundary condition is a parameter for optimization. Similarly, the method can be improved by optimizing the choice of the coarse-level interpolation operator, P, but this has not been fully explored in the OSM literature. As with RAS, we define the two-level ORAS preconditioner as $M_{\text{2-ORAS}} = C_{\text{2-RAS}} + M_{\text{ORAS}} - C_{\text{2-RAS}} A M_{\text{ORAS}}$.

The work of Taghibakhshi et al. (2022) suggests a method to learn $L_i$ for one-level ORAS. Here, we learn both $L_i$ and P for two-level methods since, as later shown in Figure 7, the two-level methods are significantly more robust. Furthermore, as we show in Section 5.2, while learning both ingredients improves the performance, learning the interpolation operator, P, is significantly more important than learning the $L_i$ in order to obtain a two-level solver that outperforms classical two-level RAS.
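To make the operators in (2)-(5) concrete, the following is a minimal SciPy sketch of assembling the classical two-level RAS error propagation operator $T = (I - C_{\text{2-RAS}}A)(I - M_{\text{RAS}}A)$; the function and variable names (`subdomains`, `owned`, etc.) are illustrative and not taken from the paper's code, and an ORAS variant would add the Robin term $L_i$ to each subdomain matrix before inversion.

```python
# Minimal sketch of T = (I - C_{2-RAS} A)(I - M_RAS A).  Assumes: A is a sparse CSR
# stiffness matrix, `subdomains` lists the overlapping index sets D_i^delta, `owned`
# lists the non-overlapping sets D_i^0, and P is the (n x S) sparse piecewise-constant
# interpolation matrix.  Dense subdomain inverses are used only because the blocks are small.
import numpy as np
import scipy.sparse as sp

def two_level_ras_error_propagation(A, subdomains, owned, P):
    n = A.shape[0]
    I = sp.eye(n, format="csr")
    M = sp.csr_matrix((n, n))
    for D_i, D_i0 in zip(subdomains, owned):
        R = I[D_i, :]                                  # restriction to the overlapping subdomain
        keep = np.isin(D_i, D_i0).astype(float)
        R_tilde = sp.diags(keep) @ R                   # modified restriction: nonzero only on owned DoFs
        A_i = (R @ A @ R.T).toarray()                  # subdomain stiffness matrix; ORAS would use A_i + L_i
        M = M + R_tilde.T @ sp.csr_matrix(np.linalg.inv(A_i)) @ R
    A_c = (P.T @ A @ P).toarray()                      # coarse operator P^T A P
    C = P @ sp.csr_matrix(np.linalg.inv(A_c)) @ P.T    # coarse-grid correction
    return (I - C @ A) @ (I - M @ A)
```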
3. Multigrid graph neural network

The multigrid neural architecture (Ke et al., 2017) is an architecture for CNNs that extracts higher-level information in an image more efficiently through cross-scale information sharing, in contrast to other CNN architectures, such as U-nets, where abstraction is combined with scale. That is, in one multigrid layer, information is passed between different scales of the problem, removing the necessity of using deep CNNs or having multilevel U-net architectures. Inspired by Ke et al. (2017), we develop a multigrid architecture for GNNs, enabling cross-scale message (information) passing without making the GNN deeper; we call our architecture multigrid GNN, or MG-GNN. Figure 1 shows one layer of the MG-GNN with two levels (a fine and a coarse level).

The input data to one layer of an MG-GNN consists of L different graphs, from fine to coarse, denoted by $G^{(\ell)} = (X^{(\ell)}, A^{(\ell)})$, where $A^{(\ell)} \in \mathbb{R}^{n_\ell \times n_\ell}$ and $X^{(\ell)} \in \mathbb{R}^{n_\ell \times d}$ are the adjacency and node feature matrices, respectively, and $n_\ell$ and d are the number of nodes and the node feature dimension of the ℓ-th graph, for $\ell \in \{0, 1, \ldots, L-1\}$, with ℓ = 0 denoting the finest level. If the input graph does not have multiple levels, we obtain the coarser levels recursively by considering a node assignment matrix (clustering operator) $R^{(\ell)} \in \mathbb{R}^{n_{\ell+1} \times n_\ell}$, for $\ell \in \{0, 1, \ldots, L-2\}$:

$$X^{(\ell+1)} = R^{(\ell)} X^{(\ell)}, \qquad (6)$$

$$A^{(\ell+1)} = R^{(\ell)} A^{(\ell)} (R^{(\ell)})^T. \qquad (7)$$

We note that, in general, the assignment matrix $R^{(\ell)}$ could be any pooling/clustering operator, such as k-means clustering, learnable pooling, etc. We denote by $R^{(\ell \to k)}$ the assignment matrix from graph level ℓ to level k (with ℓ < k), which is constructed as $R^{(\ell \to k)} = \prod_{j=\ell}^{k-1} R^{(j)}$ (down-sampling). To complement this terminology, we also define $R^{(k \to \ell)} = (R^{(\ell \to k)})^T$ for ℓ > k (up-sampling), and for the case of ℓ = k, the assignment matrix is simply the identity matrix of dimension $n_\ell$.

Figure 1. One layer of MG-GNN. $c_i$ and $c_i'$ denote the feature dimensions of the different levels before and after passing through an MG-GNN layer, respectively.

The mathematical formalism of the m-th layer of the MG-GNN with L levels is as follows: given all graph feature matrices $X_m^{(\ell)}$, for $\ell \in \{0, 1, \ldots, L-1\}$,

$$\dot{X}^{\ell \to k} = F^{\ell \to k}(X_m^{(\ell)}, X_m^{(k)}, R^{(\ell \to k)}), \qquad (8)$$

$$\tilde{X}_m^{(\ell)} = [\dot{X}^{0 \to \ell} \,\|\, \dot{X}^{1 \to \ell} \,\|\, \ldots \,\|\, \dot{X}^{L-1 \to \ell}], \qquad (9)$$

$$X_{m+1}^{(\ell)} = \mathrm{GNN}^{(\ell)}(\tilde{X}_m^{(\ell)}, A^{(\ell)}), \qquad (10)$$

where ∥ denotes concatenation, and $\mathrm{GNN}^{(\ell)}$ and $F^{\ell \to \ell}$ could be any homogeneous and heterogeneous GNNs, respectively. For the case of ℓ ≠ k, we consider $F^{\ell \to k}$ to be a heterogeneous message passing scheme between levels ℓ and k, which is defined as follows. Consider any node v in $G^{(\ell)}$ and denote the row in $X_m^{(\ell)}$ corresponding to the feature vector of node v by $x_v$. Then, $F^{\ell \to k}(X_m^{(\ell)}, X_m^{(k)}, R^{(\ell \to k)})$ is given by

$$m_v = g^{\ell \to k}\Big(\square_{\omega \in N(v)}\, f^{\ell \to k}(x_v, x_\omega, e_{v\omega}),\; x_v\Big), \qquad (11)$$

where $e_{v\omega}$ is the feature vector of the edge (if any) connecting v and ω, □ is any permutation-invariant operator such as sum, max, min, etc., and $f^{\ell \to k}$ and $g^{\ell \to k}$ are learnable multilayer perceptrons (MLPs). See Figure 2 for a visualization of up-sampling and down-sampling in MG-GNN.
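As an illustration of Eqs. (8)-(11), the following is a minimal PyTorch sketch of one two-level MG-GNN layer with sum aggregation, using a dense assignment matrix R to define the cross-level neighborhoods; the module and argument names are illustrative, and the paper's implementation uses TAGConv (which also consumes the in-level adjacency matrices) for the in-level convolutions rather than the plain MLPs that stand in for them here.

```python
# One two-level MG-GNN layer: cross-level messages (Eq. (11)) followed by in-level
# updates on the concatenated features (Eqs. (9)-(10)).  R is a (n_coarse x n_fine)
# binary assignment matrix; sum aggregation is realized by multiplication with R / R^T.
import torch
import torch.nn as nn

def mlp(d_in, d_out, hidden=128):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(), nn.Linear(hidden, d_out))

class TwoLevelMGGNNLayer(nn.Module):
    def __init__(self, d_fine, d_coarse, d_out):
        super().__init__()
        self.f_fc = mlp(d_fine + d_coarse, d_out)       # f^{0->1}: fine -> coarse messages
        self.f_cf = mlp(d_coarse + d_fine, d_out)       # f^{1->0}: coarse -> fine messages
        self.gnn_fine = mlp(d_fine + d_out, d_out)      # stands in for GNN^{(0)} (TAGConv in the paper)
        self.gnn_coarse = mlp(d_coarse + d_out, d_out)  # stands in for GNN^{(1)}

    def forward(self, x_fine, x_coarse, R):
        # x_fine: (n_f, d_fine), x_coarse: (n_c, d_coarse), R: (n_c, n_f)
        msg_up = R @ self.f_fc(torch.cat([x_fine, R.t() @ x_coarse], dim=1))    # fine -> coarse
        msg_down = R.t() @ self.f_cf(torch.cat([x_coarse, R @ x_fine], dim=1))  # coarse -> fine
        x_fine_new = self.gnn_fine(torch.cat([x_fine, msg_down], dim=1))
        x_coarse_new = self.gnn_coarse(torch.cat([x_coarse, msg_up], dim=1))
        return x_fine_new, x_coarse_new
```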
In this study, we consider a two-level MG-GNN (see Figure 1) and, for the clustering, we use a k-means-based clustering algorithm (better known as Lloyd's algorithm), which has O(n) time complexity and guarantees that every node of a connected graph will be assigned to a subdomain (Bell, 2008; Lloyd, 1982). As mentioned earlier, the MG-GNN architecture could alternatively use any pooling/clustering method, such as DiffPool (Ying et al., 2018), top-K pooling (Gao & Ji, 2019), ASAP (Ranjan et al., 2020), or SAGPool (Lee et al., 2019), to name but a few. However, for the case of this paper, since RAS (and therefore ORAS) necessitates that every node in the fine grid be assigned to a subdomain, we do not consider the aforementioned pooling (clustering) methods.

Figure 2. Upsampling and downsampling in MG-GNN.

4. Optimization problem and loss function

In this section, we denote the ℓ2 norm of a matrix or vector by ∥·∥ and the spectral radius of a matrix T by ρ(T). Our objective is to minimize the asymptotic convergence factor of the two-level ORAS method, defined as minimizing ρ(T), where $T = I - M_{\text{2-ORAS}} A = (I - C_{\text{2-RAS}} A)(I - M_{\text{ORAS}} A)$ is the error propagation operator of the method. Since T is not necessarily symmetric, ρ(T) is formally defined as the extremal eigenvalue of $T^T T$. As discussed in Wang et al. (2019), the numerical unsuitability of backpropagation through an eigendecomposition makes it infeasible to directly minimize ρ(T). To this end, Luz et al. (2020) relax the spectral radius to the Frobenius norm (which is an upper bound for it) and minimize that instead. However, for the case of optimizing one-level DDM methods, the work in Taghibakhshi et al. (2022) highlights that the Frobenius norm is not a "tight" upper bound for ρ(T), and considers minimizing a relaxation of ρ(T) inspired by Gelfand's formula: for all $K \in \mathbb{N}$, $\rho(T) \le \|T^K\|^{1/K} = \sup_{x:\|x\|=1} (\|T^K x\|)^{1/K}$. We present a modified version of the loss function introduced by Taghibakhshi et al. (2022) and, in Section 5.3, we show the necessity of this modification for improving on the two-level RAS results.

Consider the discretized problem with DoF set D of size n, decomposed into S subdomains $D_1^\delta, D_2^\delta, \ldots, D_S^\delta$ with overlap δ. The GNN takes D, its decomposition, and sparsity patterns for the interface values and the interpolation operator as inputs, and its outputs are the learned interface values and interpolation operator (see Appendix B for more discussion on the inputs and outputs of the network):

$$P^{(\theta)}, L_1^{(\theta)}, L_2^{(\theta)}, \ldots, L_S^{(\theta)} \leftarrow \psi^{(\theta)}(D), \qquad (12)$$

where $\psi^{(\theta)}$ denotes the GNN and θ represents its learnable parameters.

We obtain the modified two-level ORAS (optimized restricted additive Schwarz) operator by using the learned coarse-grid correction operator, $C_{\text{2-ORAS}}^{(\theta)} = P^{(\theta)} \big((P^{(\theta)})^T A P^{(\theta)}\big)^{-1} (P^{(\theta)})^T$, and the fine-grid operator $M_{\text{ORAS}}^{(\theta)}$ from (12). The associated two-level error propagation operator is then given by $T^{(\theta)} = (I - C_{\text{2-ORAS}}^{(\theta)} A)(I - M_{\text{ORAS}}^{(\theta)} A)$.

In order to obtain an approximate measure of $\rho(T^{(\theta)})$ while avoiding an eigendecomposition of the error propagation matrix, similar to Taghibakhshi et al. (2022), we use stochastic sampling of $\|(T^{(\theta)})^K\|$, generated by the sample set $X \in \mathbb{R}^{n \times m}$ for some $m \in \mathbb{N}$, given as
$$X = [x_1, x_2, \ldots, x_m], \quad x_j \sim \mathbb{R}^n \text{ uniformly}, \; \|x_j\| = 1 \text{ for all } j, \qquad (13)$$

where each $x_j$ is sampled uniformly at random on the unit sphere in $\mathbb{R}^n$ using the method introduced in Box (1958). We then define

$$Y_K^{(\theta)} = \Big[ \big\| (T^{(\theta)})^K x_1 \big\|, \; \big\| (T^{(\theta)})^K x_2 \big\|, \; \ldots, \; \big\| (T^{(\theta)})^K x_m \big\| \Big]. \qquad (14)$$

Note that $\|(T^{(\theta)})^K x_j\|$ is a lower bound for $\|(T^{(\theta)})^K\|$. Taghibakhshi et al. (2022) use $\mathcal{L}^{(\theta)} = \max(Y_K^{(\theta)})$ as a practical loss function. However, for large values of K, this loss function suffers from vanishing gradients. Moreover, as we show in Section 5.3, employing this loss function results in inferior performance of the learned method in comparison to two-level RAS. To overcome these issues, we define $Z_k^{(\theta)} = \max\big((Y_k^{(\theta)})^{1/k}\big)$ for $1 \le k \le K$ to arrive at a new loss function,

$$\mathcal{L}^{(\theta)} = \langle \mathrm{softmax}(Z^{(\theta)}), Z^{(\theta)} \rangle + \gamma\, \mathrm{tr}\big((P^{(\theta)})^T A P^{(\theta)}\big), \qquad (15)$$

where $Z^{(\theta)} = \big[Z_1^{(\theta)}, Z_2^{(\theta)}, \ldots, Z_K^{(\theta)}\big]$, γ > 0 is an adjustable constant, and tr(M) is the trace of the matrix M. Adding the term $\mathrm{tr}((P^{(\theta)})^T A P^{(\theta)})$ is inspired by the energy-minimization principles used to obtain optimal interpolation operators in the theoretical analysis of multilevel solvers (Xu, 1992; Wan et al., 1999). In Section 5.3, we show the significance of this term in the overall performance of our model.
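A minimal PyTorch sketch of the loss in (15) is given below; it assumes a differentiable callable `apply_T` that applies $T^{(\theta)}$ to each column of a batch of vectors, and dense tensors for $P^{(\theta)}$ and A. All names are illustrative and do not come from the paper's code.

```python
# Sketch of L(theta) = <softmax(Z), Z> + gamma * tr(P^T A P), Eqs. (13)-(15).
import torch

def mgg_loss(apply_T, P, A, n, K=10, m=100, gamma=0.01):
    X = torch.randn(n, m)                        # uniform directions via normalized Gaussians (Box, 1958)
    X = X / X.norm(dim=0, keepdim=True)
    Z = []
    for k in range(1, K + 1):
        X = apply_T(X)                           # columns of X now hold T^k x_j
        Y_k = X.norm(dim=0)                      # ||T^k x_j|| for each sample j
        Z.append(Y_k.max() ** (1.0 / k))         # Z_k = max_j ||T^k x_j||^(1/k)
    Z = torch.stack(Z)
    spectral_term = torch.dot(torch.softmax(Z, dim=0), Z)   # <softmax(Z), Z>
    energy_term = torch.trace(P.t() @ A @ P)                 # tr(P^T A P)
    return spectral_term + gamma * energy_term
```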
Nevertheless, for the first part of the new loss function (15), we prove that it converges to the spectral radius of the error propagation matrix in a suitable limit. First, we include two lemmas.

Lemma 1. For $x, y \in \mathbb{R}$ with $0 \le y \le x$ and any $K \in \mathbb{N}$, $x^{1/K} - y^{1/K} \le (x - y)^{1/K}$.

Proof. See Lemma 3 from Taghibakhshi et al. (2022).

Lemma 2. For any nonzero square matrix $T \in \mathbb{R}^{n \times n}$, $k \in \mathbb{N}$, $\epsilon, \xi > 0$, and $0 < \delta < 1$, there exists $M \in \mathbb{N}$ such that for any $m \ge M$, if we choose $x_1, x_2, \ldots, x_m$ uniformly at random from $\{x \in \mathbb{R}^n \mid \|x\| = 1\}$ and set $Z = \max\{\|T^k x_1\|^{1/k}, \|T^k x_2\|^{1/k}, \ldots, \|T^k x_m\|^{1/k}\}$, then, with probability at least $1 - \delta$, the following hold:

$$0 \le \|T^k\|^{1/k} - Z \le \epsilon, \qquad (16)$$

$$\rho(T) - \xi \le Z. \qquad (17)$$

Proof. The left side of the first inequality follows from the definition of the matrix norm: for any $1 \le i \le m$, $\|T^k x_i\| \le \sup_{\|x\|=1} \|T^k x\| = \|T^k\|$, and taking the k-th root of both sides gives the result. For the right side of the first inequality, consider the point $x^* \in \{x \in \mathbb{R}^n \mid \|x\| = 1\}$ such that $\|T^k x^*\| = \sup_{\|x\|=1} \|T^k x\|$ (such a point exists since $\mathbb{R}^n$ is finite dimensional). Let S be the total volume of the surface of the n-dimensional unit sphere around the origin, and denote by $\tilde{S}$ the volume of this surface within distance $\tilde{\epsilon}$ of the point $x^*$ in the $\ell_2$ measure, for $\tilde{\epsilon} = \epsilon / \|T^k\|^{1/k}$. Let $m \ge M_1 > \log(\delta) / \log(1 - \tilde{S}/S)$; then, since $0 < \delta < 1$, we have

$$P(\|x^* - x_i\| > \tilde{\epsilon} \;\; \forall i) = \Big(1 - \frac{\tilde{S}}{S}\Big)^m \le \delta. \qquad (18)$$

Therefore, with probability at least $1 - \delta$, there is one $x_i$ within the $\tilde{\epsilon}$ neighborhood of $x^*$ on the unit sphere. Without loss of generality, let $x_1$ be that point. Using Lemma 1 and the reverse triangle inequality, we have

$$\|T^k x^*\|^{1/k} - \|T^k x_1\|^{1/k} \le \big|\, \|T^k x^*\| - \|T^k x_1\| \,\big|^{1/k} \le \|T^k\|^{1/k} \|x^* - x_1\|^{1/k} \le \|T^k\|^{1/k}\, \tilde{\epsilon} = \epsilon, \qquad (19)$$

which finishes the proof of the right side of the first inequality. For the second inequality, since $\rho(T) \le \|T^k\|^{1/k}$, choose $M_2$ such that, with probability $1 - \delta$, (16) holds for $\epsilon = \|T^k\|^{1/k} - \rho(T) + \xi > 0$. Rearranging (16) then yields (17) for any $m \ge M = \max\{M_1, M_2\}$.

We next state the main result on optimality.

Theorem 3. For any nonzero matrix T, $\epsilon > 0$, and $\delta < 1$, there exist $M, K \in \mathbb{N}$ such that for any m > M, if one chooses m points $x_j$ uniformly at random from $\{x \in \mathbb{R}^n \mid \|x\| = 1\}$ and defines $Z_k = \max\{\|T^k x_1\|^{1/k}, \|T^k x_2\|^{1/k}, \ldots, \|T^k x_m\|^{1/k}\}$, then $Z = (Z_1, Z_2, \ldots, Z_K)$ satisfies

$$P\big( |\langle \mathrm{softmax}(Z), Z \rangle - \rho(T)| \le \epsilon \big) > 1 - \delta. \qquad (20)$$

Proof. Since $\rho(T) \le \|T^k\|^{1/k}$ for any k and $\lim_{k \to \infty} \|T^k\|^{1/k} = \rho(T)$, for any α > 0 there exists $K^* \in \mathbb{N}$ such that for any $k > K^*$, $0 \le \|T^k\|^{1/k} - \rho(T) < \alpha$. Take $0 < \alpha < \min\Big\{\tfrac{\epsilon}{2},\; \log\Big(\tfrac{e^{-\epsilon}(\epsilon + \rho(T))}{\tfrac{\epsilon}{2} + \rho(T)}\Big)\Big\}$, let $u = \max\big\{\max_{1 \le k \le K^*}\{\|T^k\|^{1/k}\} + \alpha,\; \rho(T) + 2\alpha\big\}$ and $\tilde{\delta} = 1 - (1 - \delta)^{1/K}$, and take $K > \max\Big\{\tfrac{K^*\big(u e^u - (\rho(T) + \alpha) e^{\rho(T) + \alpha}\big)}{e^{\rho(T) - \epsilon}\big(\epsilon + \rho(T) - (\rho(T) + \alpha) e^{\alpha + \epsilon}\big)},\; K^*\Big\}$. Note that, by the choice of α, we have $\rho(T) + \alpha < \rho(T) + \tfrac{\epsilon}{2}$ and $e^{\alpha + \epsilon} < \tfrac{\rho(T) + \epsilon}{\rho(T) + \epsilon/2}$, which (along with the choice of u) guarantees a positive K. By Lemma 2, for any $1 \le i \le K^*$ and $K^* < j \le K$, there exist $n_i, n_j \in \mathbb{N}$ such that

$$P(\rho(T) - \epsilon \le Z_i \le u) > 1 - \tilde{\delta} \quad \text{for } m > n_i, \qquad (21)$$

$$P(\rho(T) - \epsilon \le Z_j \le \rho(T) + \alpha) > 1 - \tilde{\delta} \quad \text{for } m > n_j. \qquad (22)$$

For any $1 \le k \le K$, take $n_k$ independent points on the unit sphere so that the above inequalities are satisfied for all k. Note that this can be achieved by taking $M = \sum_{k=1}^{K} n_k$. Since the points satisfying (21) and (22) are chosen independently, for any m > M, with probability at least $\prod_{k=1}^{K} (1 - \tilde{\delta}) = 1 - \delta$, we have $\rho(T) - \epsilon \le Z_k$ for all $1 \le k \le K$. Consequently,

$$-\epsilon = \frac{(\rho(T) - \epsilon) \sum_{i=1}^{K} e^{Z_i}}{\sum_{i=1}^{K} e^{Z_i}} - \rho(T) \le \frac{\sum_{i=1}^{K} Z_i e^{Z_i}}{\sum_{i=1}^{K} e^{Z_i}} - \rho(T) \qquad (23)$$

$$= \langle \mathrm{softmax}(Z), Z \rangle - \rho(T) \qquad (24)$$

$$\le \frac{u K^* e^u + (K - K^*)(\rho(T) + \alpha) e^{\rho(T) + \alpha}}{K e^{\rho(T) - \epsilon}} - \rho(T) \qquad (25)$$

$$= \frac{K^*\big(u e^u - (\rho(T) + \alpha) e^{\rho(T) + \alpha}\big)}{K e^{\rho(T) - \epsilon}} + (\rho(T) + \alpha) e^{\alpha + \epsilon} - \rho(T) \le \epsilon, \qquad (26)$$

where the last inequality is obtained by the choice of K.

In addition to these properties of the loss function, we now show that obtaining the learned parameters using our MG-GNN architecture scales linearly with the problem size.

Theorem 4. The time complexity of obtaining the optimized interface values and interpolation operator using our MG-GNN is O(n), where n is the number of nodes in the grid.

Proof. Every in-level or cross-level graph convolution of the MG-GNN has linear complexity. This is immediate when the graph convolution is a message passing scheme, due to the sparsity of finite-element triangulations. For the case that the graph convolution is a TAGConv layer, we have $y = \sum_{\ell=1}^{L} G_\ell x_\ell + b \mathbf{1}_n$, where $x_\ell \in \mathbb{R}^n$ are the node features, L is the node feature dimension, b is a learnable bias, and $G_\ell \in \mathbb{R}^{n \times n}$ is the graph filter. In TAGConv layers, the graph filter is given as $G_\ell = \sum_{j=0}^{J} g_{\ell,j} M^j$, where M is the adjacency matrix, J is a constant, and $g_{\ell,j}$ are the filter polynomial coefficients. In other words, the graph filter is a polynomial in the adjacency matrix M of the graph. Moreover, the matrix M is sparse, so applying $M^j$ has O(n) computational cost, resulting in O(n) time complexity for the full TAGConv layer. Finally, for both the interface-value head and the interpolation head of the network, the cost of computing the edge features and the feature networks is O(n), resulting in an overall O(n) cost for the MG-GNN.

5. Experiments

5.1. Training

We train each model on 1000 grids of sizes ranging from 800 to 1000 nodes. Each grid is generated as a random convex polygon, with PyGMSH (Schlömer, 2021) used to mesh its interior.
The subdomains are generated using Lloyd clustering on the graph (Bell, 2008), the subdomain overlap is set to one, and the weights of the edges along the boundary determine the interface value operators, $L_i^{(\theta)}$. As shown in the interpolation head of the network in Appendix B, Figure 11, the weights of the edges connecting the coarse and fine grids determine the interpolation operator. In our case, the edges between the coarse and fine grids connect every fine node to the coarse node corresponding to its own subdomain and to those of its neighboring subdomains. Alternatively, every fine node could connect only to the coarse node corresponding to its subdomain but, as we discuss in Section 5.3, this significantly impacts the performance of the model. Moreover, each row of the interpolation operator, $P^{(\theta)}$, is scaled to have a sum of one, as would be the case for classical interpolation operators. Figure 3 shows several example training grids.

The model is trained for 20 epochs with a batch size of 10 using the ADAM optimizer (Kingma & Ba, 2014) with a fixed learning rate of $5 \times 10^{-4}$. For a full discussion of the model architecture, see Appendix B and Figure 11. For the loss function parameters introduced in Section 4, we use K = 10 iterations and m = 100 samples. We developed our code¹ using PyAMG (Bell et al., 2022), NetworkX (Hagberg et al., 2008), and PyTorch Geometric (Fey & Lenssen, 2019). All training is executed on an i9 MacBook Pro CPU with 8 cores. In the training procedure, we aim to minimize the convergence factor of the stationary algorithm and, as described in Section 4, we develop a loss function to achieve this goal by numerically minimizing the spectral radius of the error propagation matrix. In practice, optimized RAS methods are often used as preconditioners for Krylov methods such as FGMRES; as shown in Appendix A, the models trained with this procedure also outperform the other baselines when used as preconditioners for FGMRES. Directly training to minimize FGMRES iterations would require using FGMRES in the training loop and backpropagation through sparse-sparse matrix multiplication (Nytko et al., 2022), which is left for future studies.

¹All code and data for this paper are at https://github.com/JRD971000/Code-Multilevel-MLORAS/ (MIT licensed).

Figure 3. Training grid examples with about 1k nodes.

Figure 4. Test grid example with about 7.4k nodes.

We evaluate the model on test grids that are generated in the same fashion as the training grids but are larger in size, ranging from 800 to 60k DoFs. An example of a test grid is shown in Figure 4.

5.2. Interface values and interpolation operator

As mentioned in Section 4, to optimize two-level RAS, one can optimize the parameters in the interface conditions (2) and/or the interpolation operator (3). For one-level RAS, on the other hand, there is no interpolation operator (since there is no coarse grid), leaving only the interface values to optimize, as was explored in Taghibakhshi et al. (2022) and St-Cyr et al. (2007). To compare the importance of these two ingredients in the two-level RAS optimization, we compare three different models.
Each of these models is trained as described in Section 5.1; however, one of the models (labeled "interface") is trained by learning only the interface values (ignoring the interpolation head of the network) and using classical RAS interpolation to construct $T^{(\theta)}$. Another model, which we label "interpolation", learns only the interpolation operator weights and uses zeros for the interface matrices $L_i^{(\theta)}$ to construct $T^{(\theta)}$. The third model uses both training heads (see Figure 11), learning both the interface values and the interpolation operator. We compare the performance of these models with classical RAS in Figure 5 as a stationary algorithm, and in Figure 9 as a preconditioner for a Krylov method, FGMRES.

Figure 5. Effect of learning interface values, the interpolation operator, or both on stationary iterations.

The results show that learning the interpolation operator is more important in optimizing two-level RAS. Intuitively, the coarse-grid correction process in (3) plays an important role in scaling performance to large problems, due to its global coupling of the discrete DoFs. The interpolation operator is critical in achieving effective coarse-grid correction. On the other hand, the interface values are local modifications to the subdomains (see (2)) that cannot, by themselves, make up for a poor coarse-grid correction process. Learning both operators clearly results in the best performance in Figures 5 and 9, where the interpolation operator can be adapted to best complement the effects of the learned interface values.

5.3. Loss function and sparsity variants

We first compare five variants of our method with the RAS baseline, as shown in Figures 6 and 9. The main model is trained as described in Section 5.1 with the loss function from Section 4. All but one of the variants differ only in their loss function and share the rest of the details. The variant labeled "Max loss" is trained with the loss function from Taghibakhshi et al. (2022). The variant labeled "Max+Trace loss" is trained with the loss function from Taghibakhshi et al. (2022) plus the $\gamma\,\mathrm{tr}(P^T A P)$ term. Similarly, the variant labeled "Softmax loss" is trained by removing the $\gamma\,\mathrm{tr}(P^T A P)$ part from the loss function in (15). For the last variant, we restrict the sparsity of the interpolation operator to that obtained by connecting every fine node only to its corresponding coarse node, labeled "DDM standard sparsity", and train using the loss function from (15).

Figure 6. Effect of every ingredient in the model on average stationary iterations. All three variants outperforming the RAS baseline utilize modifications introduced in this paper (see (15)) compared to the "Max loss" from Taghibakhshi et al. (2022).

As shown in Figure 6, the learned operator using this last variant achieves worse performance than the baseline RAS. This is partly because, for this variant, the constraint on unit row sums of the interpolation operator effectively removes most of the learned values, since many rows of the interpolation operator have only one nonzero entry in this sparsity pattern.
To show the effectiveness of the coarse-grid correction and the learned operators, we also compare two-level RAS and our two-level learned RAS (MLORAS 2-level) with one-level RAS and the one-level optimized RAS from Taghibakhshi et al. (2022) in Figures 7 and 10.

5.4. Comparison to Graph U-net and number of layers

In this section, the performance of the graph U-net and MG-GNN with different numbers of layers is studied. Figures 8 and 10 show the performance of each of the models as stationary iterations and as preconditioners for FGMRES, respectively. For a fair comparison, the MG-GNN and graph U-nets that share the same number of layers also have the same number of trainable parameters. As shown here, the best performance is achieved with 4 layers of MG-GNN, and MG-GNN strictly outperforms the graph U-net architecture with the same number of layers.

Figure 7. Comparison of stationary iterations of 2-level methods with 1-level methods from Taghibakhshi et al. (2022).

Figure 8. Average stationary iterations of graph U-net and MG-GNN with different numbers of layers on the test set.

6. Conclusion

In this study, we proposed a novel graph neural network architecture, which we call multigrid graph neural network (MG-GNN), to learn two-level optimized restricted additive Schwarz (optimized RAS or ORAS) preconditioners. This new MG-GNN ensures cross-scale information sharing at every layer, eliminating the need to use multiple graph convolutions for long-range information passing, which was a shortcoming of prior graph network architectures. Moreover, MG-GNN scales linearly with problem size, enabling its use for large graph problems. We also introduce a novel unsupervised loss function, which is essential to obtaining improved results compared to classical two-level RAS. We train our method using relatively small graphs, but we test it on graphs that are orders of magnitude larger than those in the training set, and we show that our method consistently outperforms the classical approach, both as a stationary algorithm and as an FGMRES preconditioner.

References
Bell, N., Olson, L. N., and Schroder, J. PyAMG: Algebraic multigrid solvers in Python. Journal of Open Source Software, 7(72):4142, 2022. doi: 10.21105/joss.04142. URL https://doi.org/10.21105/joss.04142. (code is MIT licensed).

Bell, W. N. Algebraic multigrid for discrete differential forms. PhD thesis, University of Illinois at Urbana-Champaign, 2008.

Box, G. E. A note on the generation of random normal deviates. Ann. Math. Statist., 29:610-611, 1958.

Cai, X.-C. and Sarkis, M. A restricted additive Schwarz preconditioner for general sparse linear systems. SIAM Journal on Scientific Computing, 21(2):792-797, 1999.

Dolean, V., Jolivet, P., and Nataf, F. An introduction to domain decomposition methods. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2015. ISBN 978-1-611974-05-8. doi: 10.1137/1.9781611974065.ch1.

Du, J., Zhang, S., Wu, G., Moura, J. M., and Kar, S. Topology adaptive graph convolutional networks. arXiv preprint arXiv:1710.10370, 2017.

Fey, M. and Lenssen, J. E. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019. (code is MIT licensed).

Fortunato, M., Pfaff, T., Wirnsberger, P., Pritzel, A., and Battaglia, P. Multiscale MeshGraphNets. arXiv preprint arXiv:2210.00612, 2022.

Gander, M., Halpern, L., and Nataf, F. Optimized Schwarz methods. In Proceedings of the 12th International Conference on Domain Decomposition, pp. 15-27. ddm.org, 2000.

Gao, H. and Ji, S. Graph U-Nets. In ICML, pp. 2083-2092. PMLR, 2019.

Greenfeld, D., Galun, M., Basri, R., Yavneh, I., and Kimmel, R. Learning to optimize multigrid PDE solvers. In International Conference on Machine Learning, pp. 2415-2423. PMLR, 2019.

Hagberg, A., Swart, P., and S Chult, D. Exploring network structure, dynamics, and function using NetworkX. Technical report, Los Alamos National Lab. (LANL), Los Alamos, NM (United States), 2008. (code is BSD licensed).

Heinlein, A., Klawonn, A., Lanser, M., and Weber, J. Combining machine learning and domain decomposition methods for the solution of partial differential equations—a review. GAMM-Mitteilungen, 44(1):e202100001, 2021.

Ke, T.-W., Maire, M., and Yu, S. X. Multigrid neural architectures. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6665-6673, 2017.

Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Lee, J., Lee, I., and Kang, J. Self-attention graph pooling. In ICML, pp. 3734-3743. PMLR, 2019.

Li, Q., Han, Z., and Wu, X.-M. Deeper insights into graph convolutional networks for semi-supervised learning. In AAAI, 2018.

Li, Z., Kovachki, N., Azizzadenesheli, K., Liu, B., Stuart, A., Bhattacharya, K., and Anandkumar, A. Multipole graph neural operator for parametric partial differential equations. Advances in Neural Information Processing Systems, 33:6755-6766, 2020.

Lloyd, S. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137, 1982.

Luz, I., Galun, M., Maron, H., Basri, R., and Yavneh, I. Learning algebraic multigrid using graph neural networks. In International Conference on Machine Learning, pp. 6489-6499. PMLR, 2020.

Nytko, N., Taghibakhshi, A., Zaman, T. U., MacLachlan, S., Olson, L. N., and West, M. Optimized sparse matrix operations for reverse mode automatic differentiation. arXiv preprint arXiv:2212.05159, 2022.

Oono, K. and Suzuki, T. Graph neural networks exponentially lose expressive power for node classification. arXiv preprint arXiv:1905.10947, 2019.

Quarteroni, A. and Valli, A. Domain decomposition methods for partial differential equations. Numerical Mathematics and Scientific Computation. The Clarendon Press, Oxford University Press, New York, 1999. ISBN 0-19-850178-1. Oxford Science Publications.

Ranjan, E., Sanyal, S., and Talukdar, P. ASAP: Adaptive structure aware pooling for learning hierarchical graph representations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 5470-5477, 2020.

Ronneberger, O., Fischer, P., and Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer Assisted Intervention, pp. 234-241. Springer, 2015.

Schlömer, N. pygmsh: A Python frontend for Gmsh, 2021. URL https://github.com/nschloe/pygmsh. (GPL-3.0 License).
St-Cyr, A., Gander, M. J., and Thomas, S. J. Optimized multiplicative, additive, and restricted additive Schwarz preconditioning. SIAM Journal on Scientific Computing, 29(6):2402-2425, 2007.

Taghibakhshi, A., MacLachlan, S., Olson, L., and West, M. Optimization-based algebraic multigrid coarsening using reinforcement learning. Advances in Neural Information Processing Systems, 34, 2021.

Taghibakhshi, A., Nytko, N., Zaman, T., MacLachlan, S., Olson, L., and West, M. Learning interface conditions in domain decomposition solvers. arXiv preprint arXiv:2205.09833, 2022.

Toselli, A. and Widlund, O. Domain decomposition methods—algorithms and theory, volume 34 of Springer Series in Computational Mathematics. Springer-Verlag, Berlin, 2005. ISBN 3-540-20696-5. doi: 10.1007/b137868.

Wan, W. L., Chan, T. F., and Smith, B. An energy-minimizing interpolation for robust multigrid methods. SIAM Journal on Scientific Computing, 21(4):1632-1649, 1999.

Wang, W., Dang, Z., Hu, Y., Fua, P., and Salzmann, M. Backpropagation-friendly eigendecomposition. Advances in Neural Information Processing Systems, 32, 2019.

Xu, J. Iterative methods by space decomposition and subspace correction. SIAM Review, 34(4):581-613, 1992.

Ying, Z., You, J., Morris, C., Ren, X., Hamilton, W., and Leskovec, J. Hierarchical graph representation learning with differentiable pooling. Advances in Neural Information Processing Systems, 31, 2018.

A. FGMRES plots

In Section 5, in Figures 5 to 8, the performance of the methods was evaluated by considering the convergence of stationary iterations. Here, we present another possible evaluation criterion, assessing the number of iterations to convergence for the preconditioned systems using FGMRES, a standard Krylov method. The following figures are analogous to those provided in the main paper, and demonstrate that our method also achieves superior results compared to the other methods, and that the MG-GNN architecture outperforms graph U-nets.

Figure 9. Left: Effect of learning interface values, the interpolation operator, or both on FGMRES iterations. All three variants outperforming the RAS baseline utilize modifications introduced in this paper (15) compared to the "Max loss" from Taghibakhshi et al. (2022). Right: Effect of every ingredient in the model on average FGMRES iterations.

Figure 10. Left: Comparison of FGMRES iterations of 2-level methods with 1-level methods from Taghibakhshi et al. (2022). Right: Average FGMRES iterations of graph U-net and MG-GNN with different numbers of layers on the test set.
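As a usage sketch, a learned two-level preconditioner can be wrapped as a linear operator and passed to PyAMG's FGMRES routine; `apply_M2` below is an assumed callable applying $M_{\text{2-ORAS}}^{(\theta)}$ to a vector, and keyword details may differ across PyAMG versions, so this is not the paper's exact evaluation code.

```python
# Counting FGMRES iterations for a preconditioned solve, as reported in Figures 9-10.
import pyamg
from scipy.sparse.linalg import LinearOperator

def fgmres_iterations(A, b, apply_M2, maxiter=500):
    n = A.shape[0]
    M = LinearOperator((n, n), matvec=apply_M2)   # learned two-level preconditioner
    residuals = []
    x, flag = pyamg.krylov.fgmres(A, b, M=M, maxiter=maxiter, residuals=residuals)
    return x, len(residuals)                       # iterations to convergence
```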
B. Model architecture

Inputs and outputs: The model takes any unstructured grid as its input, which consists of the node features, edge features, and adjacency matrices of both the fine and coarse grids. Every node on the fine level has a binary feature indicating whether it lies on the boundary of a subdomain. Fine-level edge features are obtained from the discretization of the underlying PDE, A, and the adjacency matrix of the fine level is simply the sparsity pattern of A. Similar attributes for the coarse level are obtained as described in Section 3, Equations (6) and (7), and Lloyd aggregation is used for obtaining subdomains throughout. The outputs of the model are the learned interface values and the interpolation operator.

We use node and edge preprocessing (3 fully connected layers of dimension 128, followed by ReLU activations, in the node and edge feature spaces, respectively) followed by 4 layers of MG-GNN. For $\mathrm{GNN}^{(\ell)}$ in (10) and $F^{\ell \to \ell}$ in (8), we use a TAGConv layer (Du et al., 2017) and, for $F^{\ell \to k}$ with ℓ ≠ k, we use a heterogeneous message passing GNN as shown in Equation (11). Specifically, we choose summation as the permutation-invariant operator in (11) and, for the MLPs, we use two fully connected layers of size 128 with ReLU nonlinearity for $f^{\ell \to k}$, and $g^{\ell \to k}(x, y) = x$.

Following the MG-GNN layers, the network splits into two heads, each having a stack layer (which concatenates the features of the nodes on each side of every edge) and an edge feature post-processing block (see Figure 12 for details). The edge weights between the coarse and fine levels are the learned interpolation operator weights, and the edge values along the subdomain boundaries in the fine level are the learned interface values. The upper head of the network has a masking block at the end, which masks the edge values that are not along the boundary, hence outputting only the learned interface values. The overall GNN architecture for learning the interpolation operator and the interface values is shown in Figure 11.

Figure 11. GNN architecture used in this study.

Figure 12. Edge feature post-processing block.
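As a small sketch of the interpolation head's final step (the row scaling described in Section 5.1), the learned edge values over the fixed fine-to-coarse sparsity pattern can be assembled into a row-scaled $P^{(\theta)}$ as follows; the tensor names are illustrative and not taken from the paper's code.

```python
# Assemble P^(theta) from the interpolation head's edge outputs and scale each row to sum to one.
import torch

def assemble_interpolation(rows, cols, vals, n_fine, n_coarse):
    # rows/cols: LongTensors of fine/coarse endpoints of the interpolation edges; vals: learned weights.
    row_sums = torch.zeros(n_fine).scatter_add_(0, rows, vals)
    vals = vals / row_sums[rows]                              # enforce unit row sums
    P = torch.sparse_coo_tensor(torch.stack([rows, cols]), vals, (n_fine, n_coarse))
    return P.coalesce()
```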