J Nonlinear Sci https://doi.org/10.1007/s00332-018-9468-8 An MBO Scheme for Minimizing the Graph Ohta–Kawasaki Functional Yves van Gennip Received: 13 November 2017 / Accepted: 15 May 2018 © The Author(s) 2018

Abstract We study a graph-based version of the Ohta–Kawasaki functional, which was originally introduced in a continuum setting to model pattern formation in diblock copolymer melts and has been studied extensively as a paradigmatic example of a variational model for pattern formation. Graph-based problems inspired by partial differential equations (PDEs) and variational methods have been the subject of many recent papers in the mathematical literature, because of their applications in areas such as image processing and data classification. This paper extends the area of PDE-inspired graph-based problems to pattern-forming models, while continuing in the tradition of recent papers in the field. We introduce a mass-conserving Merriman–Bence–Osher (MBO) scheme for minimizing the graph Ohta–Kawasaki functional with a mass constraint. We present three main results: (1) the Lyapunov functionals associated with this MBO scheme $\Gamma$-converge to the Ohta–Kawasaki functional (which includes the standard graph-based MBO scheme and total variation as a special case); (2) there is a class of graphs on which the Ohta–Kawasaki MBO scheme corresponds to a standard MBO scheme on a transformed graph and for which generalized comparison principles hold; (3) this MBO scheme allows for the numerical computation of (approximate) minimizers of the graph Ohta–Kawasaki functional with a mass constraint.

Communicated by Dr. Mason A. Porter and Dr. Andrea L. Bertozzi.

Electronic supplementary material The online version of this article (https://doi.org/10.1007/s00332-018-9468-8) contains supplementary material, which is available to authorized users.
Yves van Gennip (y.vangennip@nottingham.ac.uk), University of Nottingham, Nottingham, UK

Keywords PDEs on graphs · Variational methods · Ohta–Kawasaki functional · Merriman–Bence–Osher scheme · Threshold dynamics · $\Gamma$-convergence

Mathematics Subject Classification 05C82 · 34A33 · 34B45 · 35A15 · 35B36 · 49N99

1 Introduction

In this paper, we study the minimization problem
\[
\min_u\ \mathrm{TV}(u) + \gamma \|u - A(u)\|_{\mathcal H^{-1}}^2
\]
on undirected graphs. Here $\mathrm{TV}$ and $\|\cdot\|_{\mathcal H^{-1}}$ are graph-based analogues of the continuum total variation seminorm and continuum $H^{-1}$ Sobolev norm, respectively, $\gamma \ge 0$, and $u$ is allowed to vary over the set of node functions with prescribed average mass $A(u)$. These concepts will be made precise later in the paper, culminating in formulation (34) of the minimization problem. The main contributions of this paper are the introduction of the graph Ohta–Kawasaki functional into the literature, the development of an algorithm to produce (approximate) minimizers, and the study of that algorithm, which leads to, among other results, further insight into the connection between the graph Merriman–Bence–Osher (MBO) method and the graph total variation, following on from initial investigations in van Gennip et al. (2014).

There are various reasons to study this minimization problem. First of all, it is the graph-based analogue of the continuum Ohta–Kawasaki variational model (Ohta and Kawasaki 1986; Kawasaki et al. 1988). This model was originally introduced as a model for pattern formation in diblock copolymer systems and has become a paradigmatic example of a variational model which exhibits pattern formation. It spawned a large mathematical literature which explores its properties analytically and computationally. A complete literature overview for this area is outside the scope of this paper. For a brief overview of the continuum Ohta–Kawasaki model, see Section S1 in Supplementary Materials. (We use the prefix "S" to indicate a reference to Supplementary Materials.)
For a sample of mathematical papers on this topic, see for example (Ren and Wei 2000; Choksi and Ren 2003; van Gennip et al. 2009; Choksi et al. 2009; Le 2010; Choksi et al. 2011; Glasner 2017) and other references mentioned in Section S1. The problem studied in this paper thus follows in the footsteps of a rich mathematical heritage, but at the same time, being the graph analogue of the continuum functional, connects with the recent interest in discrete PDE-inspired problems.

Recently, there has been a growing enthusiasm in the mathematical literature for graph-based variational methods and graph-based dynamics which mimic continuum-based variational methods and partial differential equations (PDEs), respectively. This is partly driven by novel applications of such methods in data science and image analysis (Ta et al. 2011; Elmoataz et al. 2012; Bertozzi and Flenner 2012; Merkurjev et al. 2013; Hu et al. 2013; Garcia-Cardona et al. 2014; Calatroni et al. 2017; Bosch et al. 2016; Merkurjev et al. 2017; Elmoataz et al. 2017) and partly by theoretical interest in the new connections between graph theory and PDEs (van Gennip and Bertozzi 2012; van Gennip et al. 2014; Trillos and Slepčev 2016). Broadly speaking, these studies fall into one (or more) of three categories: papers connecting graph problems with continuum problems, for example through a limiting process (van Gennip and Bertozzi 2012; Trillos and Slepčev 2016; Trillos et al. 2016); papers adapting a PDE approach to a graph context in order to tackle a graph problem, such as graph clustering and classification (Bertozzi and Flenner 2016; Bresson et al. 2014; Merkurjev et al. 2016), maximum cut computations (Keetch and van Gennip in prep), and bipartite matching (Caracciolo et al.
2014; Caracciolo and Sicuro 2015); and papers studying the graph analogue of a PDE or variational problem that has interesting properties in the continuum, to explore what (potentially similar) properties are present in the graph-based version of the problem (van Gennip et al. 2014; Luo and Bertozzi 2017; Elmoataz and Buyssens 2017). This paper mostly falls in the latter category.

The study of the graph-based Ohta–Kawasaki model is also of interest because it connects with graph methods, concepts, and questions that have recently attracted attention, such as the graph MBO method (also known as threshold dynamics), graph curvature, and the question how these concepts relate to each other. The MBO scheme was originally introduced (in a continuum setting) to approximate motion by mean curvature (Merriman et al. 1992, 1993, 1994). It is an iterative scheme, which alternates between a short-time diffusion step and a threshold step. Not only have these dynamics been proven to converge to motion by mean curvature (Evans 1993; Barles and Georgelin 1995; Swartz and Yip 2017), but they have been a very useful basis for numerical schemes as well, both in the continuum and on graphs. Without aiming for completeness, we mention some of the papers that investigate or use the MBO scheme: (Mascarenhas 1992; Ruuth 1998a, b; Chambolle and Novaga 2006; Esedoḡlu et al. 2008, 2010; Hu et al. 2013; Merkurjev et al. 2013, 2014; Hu et al. 2015; Esedoḡlu and Otto 2015).

In this paper, we study two different MBO schemes, (OKMBO) and (mcOKMBO). The former is an extension of the standard graph MBO scheme of van Gennip et al. (2014) in the sense that it replaces the diffusion step in the scheme with a step whose dynamics are related to the Ohta–Kawasaki model and reduce to diffusion in the special case when $\gamma = 0$ (for details, see Sect. 5.1). The latter uses the same dynamics as the former in the first step, but incorporates mass conservation in the threshold step.
The (mcOKMBO) scheme produces approximate graph Ohta–Kawasaki minimizers and is the one we use in our simulations, which are presented in Sect. 7 and Section S9 of Supplementary Materials. The scheme (OKMBO) is of interest both as a precursor to (mcOKMBO) and as an extension of the standard graph MBO scheme. In van Gennip et al. (2014), it was conjectured that the standard graph MBO scheme is related to graph mean curvature flow and minimizers of the graph total variation functional. This paper furthers the study of that conjecture (but does not provide a definitive answer): in Sect. 5.2 it is shown that the Lyapunov functionals associated with the (OKMBO) scheme $\Gamma$-converge to the graph Ohta–Kawasaki functional (which reduces to the total variation functional in the case when $\gamma = 0$). Moreover, in Sect. 6 we introduce a special class of graphs, $\mathcal C_\gamma$, dependent on $\gamma$. For graphs from this class the (OKMBO) scheme can be interpreted as the standard graph MBO scheme on a transformed graph. For such graphs we extend existing elliptic and parabolic comparison principles for the graph Laplacian and graph diffusion equation to our new Ohta–Kawasaki operator and dynamics (Lemmas 6.13 and 6.15).

A significant role in the analysis presented in this paper is played by the equilibrium measures associated to a given node subset (Bendito et al. 2000b, 2003), especially in the construction of the aforementioned class $\mathcal C_\gamma$. In Sect. 3 we study these equilibrium measures and the role they play in constructing Green's functions for the graph Dirichlet and Poisson problems. The Poisson problem, in particular, is an important ingredient in the definition of the graph $\mathcal H^{-1}$ norm and the graph Ohta–Kawasaki functional as they are introduced in Sect. 4. Both the equilibrium measures and the Ohta–Kawasaki functional itself are related to the graph curvature, which was introduced in van Gennip et al. (2014), as is shown in Lemma 3.6 and Corollary 4.12, respectively.
The structure of the paper is as follows. In Sect. 2 we define our general setting. Section 3 introduces the equilibrium measures from Bendito et al. (2003) into the paper (the terminology is derived from potential theory; see, e.g. Simon 2007 and references therein) and uses them to study the Dirichlet and Poisson problems on graphs, generalizing some results from Bendito et al. (2003). In Sect. 4 we define the $\mathcal H^{-1}$ inner product and norm and use those to construct the object at the centre of our paper: the (sharp interface) Ohta–Kawasaki functional on graphs, $F_0$. We also briefly consider $F_\varepsilon$, a diffuse interface version of the Ohta–Kawasaki functional, and its relationship with $F_0$. Moreover, in this section we start using tools from spectral analysis to study $F_0$. These tools will be one of the main ingredients in the remainder of the paper. In Sect. 5 the algorithms (OKMBO) and (mcOKMBO) are introduced and analysed. It is shown that both these algorithms have an associated Lyapunov functional (which extends a result from van Gennip et al. 2014) and that these functionals $\Gamma$-converge to $F_0$ in the limit when $\tau$ (the time parameter associated with the first step in the MBO iteration) goes to zero. We introduce the class $\mathcal C_\gamma$ in Sect. 6 and prove that the Ohta–Kawasaki dynamics (i.e. the dynamics used in the first steps of both (OKMBO) and (mcOKMBO)) on graphs from this class correspond to diffusion on a transformed graph. We also prove comparison principles for these graphs. In Sect. 7 we then use (mcOKMBO) to numerically construct (approximate) minimizers of $F_0$, before ending with a discussion of potential future research directions in Sect. 8. Supplementary Materials accompany this paper, which contain further background information, results, examples, numerical simulations, and deferred proofs.

2 Setup

In this paper we consider graphs $G \in \mathcal G$, where $\mathcal G$ is the set consisting of all finite, simple, connected, undirected, edge-weighted graphs $(V, E, \omega)$ with $n := |V| \ge 2$ nodes.
Here $E \subset V \times V$ and $\omega: E \to (0, \infty)$. Because $G \in \mathcal G$ is undirected, we identify $(i,j) \in E$ with $(j,i) \in E$. If we want to consider an unweighted graph, we view it as a weighted graph with $\omega = 1$ on $E$. By 'simple' we mean here 'without multiple edges between the same pair of vertices and without self-loops'.

Assume $G \in \mathcal G$ is given. Let $\mathcal V$ be the set of node functions $u: V \to \mathbb R$ and $\mathcal E$ the set of skew-symmetric edge functions $\varphi: E \to \mathbb R$. For $i \in V$, $u \in \mathcal V$, we write $u_i := u(i)$ and for $(i,j) \in E$, $\varphi \in \mathcal E$ we write $\varphi_{ij} := \varphi((i,j))$. To simplify notation, we extend each $\varphi \in \mathcal E$ to a function $\varphi: V \times V \to \mathbb R$ (without changing notation) by setting $\varphi_{ij} = 0$ if $(i,j) \notin E$. The condition that $\varphi$ is skew-symmetric means that, for all $i,j \in V$, $\varphi_{ij} = -\varphi_{ji}$. Similarly, for the edge weights we write $\omega_{ij} := \omega((i,j))$ and we extend $\omega$ (without changing notation) to a function $\omega: V \times V \to [0, \infty)$ by setting $\omega_{ij} = 0$ if and only if $(i,j) \notin E$. Because $G \in \mathcal G$ is undirected, we have for all $i,j \in V$, $\omega_{ij} = \omega_{ji}$.

The degree of node $i \in V$ is $d_i := \sum_{j \in V} \omega_{ij}$ and the minimum and maximum degrees of the graph are defined as $d_- := \min_{1 \le i \le n} d_i$ and $d_+ := \max_{1 \le i \le n} d_i$, respectively. Because $G \in \mathcal G$ is connected and $n \ge 2$, there are no isolated nodes and thus $d_-, d_+ > 0$. For a node $i \in V$, we denote the set of its neighbours by
\[
N(i) := \{ j \in V : \omega_{ij} > 0 \}. \tag{1}
\]
For simplicity of notation, we will assume that the nodes of a given graph $G \in \mathcal G$ are labelled such that $V = \{1, \dots, n\}$. For definiteness and to avoid confusion we specify that we consider $0 \notin \mathbb N$, i.e. $\mathbb N = \{1, 2, 3, \dots\}$, and when using the subset notation $A \subset B$ we allow for the possibility that $A = B$. The characteristic function (or indicator function) $\chi_S$ of a node set $S \subset V$ is defined by $(\chi_S)_i := 1$ if $i \in S$ and $(\chi_S)_i := 0$ otherwise. If $S = \{i\}$, we can use the Kronecker delta to write
\[
(\chi_{\{i\}})_j = \delta_{ij} := \begin{cases} 1, & \text{if } i = j, \\ 0, & \text{otherwise.} \end{cases}
\]
As justified in earlier work (Hein et al. 2007; van Gennip and Bertozzi 2012; van Gennip et al.
2014), we introduce the following inner products,
\[
\langle u, v \rangle_{\mathcal V} := \sum_{i \in V} u_i v_i d_i^r, \qquad \langle \varphi, \psi \rangle_{\mathcal E} := \frac12 \sum_{i,j \in V} \varphi_{ij} \psi_{ij} \omega_{ij}^{2q-1},
\]
for parameters $q \in [1/2, 1]$ and $r \in [0, 1]$. We define the gradient $\nabla: \mathcal V \to \mathcal E$ by, for all $i,j \in V$,
\[
(\nabla u)_{ij} := \begin{cases} \omega_{ij}^{1-q} (u_j - u_i), & \text{if } \omega_{ij} > 0, \\ 0, & \text{otherwise.} \end{cases}
\]
In the literature, the condition of skew-symmetry (i.e. $\varphi_{ij} = -\varphi_{ji}$) is often, but not always, included in definitions of the edge function space. We follow that convention, but it does not hinder or help us, except for simplifying a few expressions, such as that of the divergence below. When beneficial for the readability, we will also write $\delta_{i,j}$ for $\delta_{ij}$. Note that the powers $2q-1$ and $1-q$ in the $\mathcal E$ inner product and in the gradient are zero for the admissible choices $q = \frac12$ and $q = 1$, respectively. In these cases we define $\omega_{ij}^0 = 0$ whenever $\omega_{ij} = 0$, so as not to make the $\mathcal E$ inner product (or the gradient) nonlocal on the graph.

Note that $\langle \cdot, \cdot \rangle_{\mathcal V}$ is indeed an inner product on $\mathcal V$ if $G$ has no isolated nodes (i.e. if $d_i > 0$ for all $i \in V$), as is the case for $G \in \mathcal G$. Furthermore, $\langle \cdot, \cdot \rangle_{\mathcal E}$ is an inner product on $\mathcal E$ (since functions in $\mathcal E$ are either only defined on $E$ or are required to be zero on $(V \times V) \setminus E$, depending on whether we consider them as edge functions or as extended edge functions, as explained above). Using these building blocks, we define the divergence as the adjoint of the gradient and the (graph) Laplacian as the divergence of the gradient, leading to, for all $i \in V$,
\[
(\operatorname{div} \varphi)_i := d_i^{-r} \sum_{j \in V} \omega_{ij}^q \varphi_{ji}, \qquad (\Delta u)_i := (\operatorname{div}(\nabla u))_i = d_i^{-r} \sum_{j \in V} \omega_{ij} (u_i - u_j),
\]
as well as the following norms:
\[
\|u\|_{\mathcal V} := \sqrt{\langle u, u \rangle_{\mathcal V}}, \qquad \|\varphi\|_{\mathcal E} := \sqrt{\langle \varphi, \varphi \rangle_{\mathcal E}},
\]
\[
\|u\|_{\mathcal V,\infty} := \max\{ |u_i| : i \in V \}, \qquad \|\varphi\|_{\mathcal E,\infty} := \max\{ |\varphi_{ij}| : i,j \in V \}.
\]
Note that we indeed have, for all $u \in \mathcal V$ and all $\psi \in \mathcal E$,
\[
\langle \nabla u, \psi \rangle_{\mathcal E} = \langle u, \operatorname{div} \psi \rangle_{\mathcal V}. \tag{2}
\]
In van Gennip et al. (2014, Lemma 2.2) it is proven that, for all $u \in \mathcal V$,
\[
d_-^{r/2} \|u\|_{\mathcal V,\infty} \le \|u\|_{\mathcal V} \le \sqrt{\mathrm{vol}(V)}\, \|u\|_{\mathcal V,\infty}. \tag{3}
\]
For a function $u \in \mathcal V$, we define its support as $\mathrm{supp}(u) := \{ i \in V : u_i \neq 0 \}$.
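To make the discrete calculus above concrete, the following sketch implements the gradient, divergence, Laplacian, and the two inner products for the parameter choices $q = 1$, $r = 0$, and checks the adjointness relation (2) numerically. The small weight matrix, the node functions, and the 0-based node labels are illustrative assumptions for this example, not data from the paper.

```python
import numpy as np

# An arbitrary small weighted, undirected, connected example graph:
# symmetric weight matrix omega, with omega[i, j] = 0 meaning "no edge".
omega = np.array([[0.0, 1.0, 0.5, 0.0],
                  [1.0, 0.0, 0.0, 2.0],
                  [0.5, 0.0, 0.0, 1.0],
                  [0.0, 2.0, 1.0, 0.0]])
q, r = 1.0, 0.0
d = omega.sum(axis=1)          # degrees d_i = sum_j omega_ij
on_edge = omega > 0

def grad(u):
    # (grad u)_ij = omega_ij^{1-q} (u_j - u_i) on edges, 0 otherwise
    return np.where(on_edge, omega**(1 - q) * (u[None, :] - u[:, None]), 0.0)

def div(phi):
    # (div phi)_i = d_i^{-r} sum_j omega_ij^q phi_ji
    W = np.where(on_edge, omega**q, 0.0)
    return d**(-r) * (W * phi.T).sum(axis=1)

def laplacian(u):
    # (Delta u)_i = d_i^{-r} sum_j omega_ij (u_i - u_j)
    return div(grad(u))

def ip_V(u, v):
    return np.sum(u * v * d**r)

def ip_E(phi, psi):
    W = np.where(on_edge, omega**(2 * q - 1), 0.0)
    return 0.5 * np.sum(phi * psi * W)

u = np.array([1.0, -2.0, 0.5, 3.0])
psi = grad(np.array([0.3, 1.0, -1.0, 2.0]))       # a skew-symmetric edge function
adjoint_gap = ip_E(grad(u), psi) - ip_V(u, div(psi))  # should vanish, cf. (2)
print(laplacian(u), adjoint_gap)
```

Setting $r = 1$ in the same code produces the random walk Laplacian mentioned in Remark 2.1 instead of the combinatorial one.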
The mass of a function $u \in \mathcal V$ is $M(u) := \langle u, \chi_V \rangle_{\mathcal V} = \sum_{i \in V} d_i^r u_i$, and the volume of a node set $S \subset V$ is $\mathrm{vol}(S) := M(\chi_S) = \|\chi_S\|_{\mathcal V}^2 = \sum_{i \in S} d_i^r$. Note that, if $r = 0$, then $\mathrm{vol}(S) = |S|$, where $|S|$ denotes the number of elements in $S$. Using (2), we find the useful property that, for all $u \in \mathcal V$,
\[
M(\Delta u) = \langle \Delta u, \chi_V \rangle_{\mathcal V} = \langle \nabla u, \nabla \chi_V \rangle_{\mathcal E} = 0. \tag{4}
\]
For $u \in \mathcal V$, define the average mass function of $u$ as $A(u) := \frac{M(u)}{\mathrm{vol}(V)} \chi_V$. Note in particular that
\[
M(u - A(u)) = 0. \tag{5}
\]
We also define the Dirichlet energy of a function $u \in \mathcal V$,
\[
\frac12 \|\nabla u\|_{\mathcal E}^2 = \frac14 \sum_{i,j \in V} \omega_{ij} (u_i - u_j)^2, \tag{6}
\]
and the total variation of $u \in \mathcal V$,
\[
\mathrm{TV}(u) := \max\left\{ \langle \operatorname{div} \varphi, u \rangle_{\mathcal V} : \varphi \in \mathcal E,\ \|\varphi\|_{\mathcal E,\infty} \le 1 \right\} = \frac12 \sum_{i,j \in V} \omega_{ij}^q |u_i - u_j|.
\]
Note that for the divergence we have used the assumption that $\varphi$ is skew-symmetric.

Remark 2.1 We have introduced two parameters, $q \in [1/2, 1]$ and $r \in [0, 1]$, in our definitions so far. As we will see later in this paper, the choice $q = 1$ is the natural one for our purposes. In those cases where we do not require $q = 1$, however, we do keep the parameter $q$ unspecified, because there are papers in the literature in which the choice $q = 1/2$ is made, such as Gilboa and Osher (2009). One reason for the choice $q = 1/2$ is that in that case $\omega_{ij}$ appears in the graph gradient, graph divergence, and graph total variation with the same power ($1/2$), allowing one to think of $\omega_{ij}$ as analogous to a reciprocal distance. The parameter $r$ is the more interesting one of the two, as the choices $r = 0$ and $r = 1$ lead to two different graph Laplacians that appear in the spectral graph theory literature under the names combinatorial (or unnormalized) graph Laplacian and random walk (or normalized, or non-symmetrically normalized) graph Laplacian, respectively.
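As a quick numerical sanity check of these definitions, the sketch below evaluates the mass, average, Dirichlet energy, and total variation on an indicator function and verifies (5). The example graph and the choices $q = 1$, $r = 0$ (so that $\langle \cdot,\cdot \rangle_{\mathcal V}$ is the plain Euclidean inner product and $\mathrm{vol}(V) = |V|$) are illustrative assumptions.

```python
import numpy as np

omega = np.array([[0.0, 1.0, 0.5, 0.0],
                  [1.0, 0.0, 0.0, 2.0],
                  [0.5, 0.0, 0.0, 1.0],
                  [0.0, 2.0, 1.0, 0.0]])
q, r = 1.0, 0.0
d = omega.sum(axis=1)

def mass(u):
    # M(u) = sum_i d_i^r u_i
    return np.sum(d**r * u)

vol_V = mass(np.ones(4))                 # vol(V) = sum_i d_i^r (= |V| for r = 0)

def average(u):
    # A(u) = (M(u) / vol(V)) chi_V
    return mass(u) / vol_V * np.ones_like(u)

def dirichlet_energy(u):
    # (1/2)||grad u||_E^2 = (1/4) sum_{i,j} omega_ij (u_i - u_j)^2, cf. (6)
    return 0.25 * np.sum(omega * (u[:, None] - u[None, :])**2)

def tv(u):
    # TV(u) = (1/2) sum_{i,j} omega_ij^q |u_i - u_j|
    W = np.where(omega > 0, omega**q, 0.0)
    return 0.5 * np.sum(W * np.abs(u[:, None] - u[None, :]))

u = np.array([1.0, 0.0, 1.0, 0.0])       # chi_S for S = {0, 2}
print(mass(u), tv(u), dirichlet_energy(u), mass(u - average(u)))
```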
Many of the results in this paper hold for all $r \in [0, 1]$, and we will clearly indicate whether and when further assumptions on $r$ are required. We note that, besides the graph Laplacian, also the mass of a function depends on $r$, whereas the total variation of a function does not depend on $r$, but does depend on $q$. The Dirichlet energy depends on neither parameter. Unless we explicitly mention any further restrictions on $q$ or $r$, only the conditions $q \in [1/2, 1]$ and $r \in [0, 1]$ are implicitly assumed.

Given a graph $G = (V, E, \omega) \in \mathcal G$, we define the following useful subsets of $\mathcal V$: the subset of node functions with a given mass $M \in \mathbb R$,
\[
\mathcal V_M := \{ u \in \mathcal V : M(u) = M \}; \tag{7}
\]
the subset of nonnegative node functions, $\mathcal V_+ := \{ u \in \mathcal V : \forall i \in V\ u_i \ge 0 \}$; the subset of $\{0,1\}$-valued binary node functions,
\[
\mathcal V^b := \{ u \in \mathcal V : \forall i \in V\ u_i \in \{0,1\} \}; \tag{8}
\]
the subset of $\{0,1\}$-valued binary node functions with a given mass $M \ge 0$, $\mathcal V_M^b := \mathcal V_M \cap \mathcal V^b$; the subset of $[0,1]$-valued node functions,
\[
\mathcal K := \{ u \in \mathcal V : \forall i \in V\ u_i \in [0,1] \}; \tag{9}
\]
and the subset of $[0,1]$-valued node functions with a given mass $M \ge 0$,
\[
\mathcal K_M := \mathcal V_M \cap \mathcal K. \tag{10}
\]
The space of zero mass node functions, $\mathcal V_0$, will play an important role, as it is the space of admissible 'right-hand side' functions in the Poisson problem (17). Note that every $u \in \mathcal V^b$ is of the form $u = \chi_S$ for some $S \subset V$.

Observe that for $M > \mathrm{vol}(V)$, $\mathcal V_M^b = \emptyset$. In fact, for a given finite graph there are only finitely many $M \in [0, \mathrm{vol}(V)]$ such that $\mathcal V_M^b \neq \emptyset$. For a given graph, we define the (finite) set of admissible masses as
\[
\mathcal M := \{ M \in [0, \mathrm{vol}(V)] : \mathcal V_M^b \neq \emptyset \}. \tag{11}
\]
In Lemma S6.5 of Supplementary Materials we construct $\mathcal M$ for the example of a star graph.

3 Dirichlet and Poisson Equations

3.1 A Comparison Principle

Lemma 3.1 (Comparison principle I) Let $G = (V, E, \omega) \in \mathcal G$, let $V'$ be a proper subset of $V$, and let $u, v \in \mathcal V$ be such that, for all $i \in V'$, $(\Delta u)_i \ge (\Delta v)_i$ and, for all $i \in V \setminus V'$, $u_i \ge v_i$. Then, for all $i \in V$, $u_i \ge v_i$.
Proof The result follows as a special case of the comparison principle for uniformly elliptic partial differential equations on graphs with Dirichlet boundary conditions in Manfredi et al. (2015, Theorem 1). For completeness (and future use in the proof of Lemma 6.13) we provide the proof of this special case here. In particular, we will prove that if $w \in \mathcal V$ is such that, for all $i \in V'$, $(\Delta w)_i \ge 0$, and, for all $i \in V \setminus V'$, $w_i \ge 0$, then for all $i \in V$, $w_i \ge 0$. Applying this to $w = u - v$ gives the desired result.

If $V' = \emptyset$, the result follows trivially. In what follows we assume that $V' \neq \emptyset$. Define the set $U := \{ i \in V : w_i = \min_{j \in V} w_j \}$. Note that $U \neq \emptyset$. For a proof by contradiction, assume $\min_{j \in V} w_j < 0$; then $U \subset V'$. By assumption $V' \neq V$, hence $\emptyset \neq V \setminus V' \subset V \setminus U$. Let $i^* \in V \setminus U$. Since $G$ is connected, there is a path⁶ from $U$ to $i^*$. Fix such a path and let $k^*$ be the first node along this path such that $k^* \in V \setminus U$ and let $j^* \in U$ be the node immediately preceding $k^*$ in the path. Then, for all $k \in V$, $(\nabla w)_{k j^*} \le 0$, and $(\nabla w)_{k^* j^*} = \omega_{k^* j^*}^{1-q} (w_{j^*} - w_{k^*}) < 0$. Thus
\[
d_{j^*}^r (\Delta w)_{j^*} = \sum_{k \in V} \omega_{j^* k}^q (\nabla w)_{k j^*} < 0.
\]
Since $j^* \in V'$, this contradicts one of the assumptions on $w$, hence $\min_{i \in V} w_i \ge 0$ and the result is proven. □

We will see a generalization of Lemma 3.1 as well as another comparison principle in Sect. 6.2, but their proofs require some groundwork which is interesting in its own right as well. That is the topic of Sect. 6.1.

⁶ By a path from $U$ to $i^*$ we mean a finite sequence of nodes $\{i_j\}_{j=1}^k$, such that $i_1 \in U$, $i_k = i^*$, and, for all $j \in \{1, \dots, k-1\}$, $(i_j, i_{j+1}) \in E$.

3.2 Equilibrium Measures

Let $G = (V, E, \omega) \in \mathcal G$. Given a proper subset $S \subset V$, consider the equation
\[
\begin{cases} (\Delta \nu)_i = 1, & \text{if } i \in S, \\ \nu_i = 0, & \text{if } i \in V \setminus S. \end{cases} \tag{12}
\]
We recall some properties that are proven in Bendito et al. (2003, Section 2).

Lemma 3.2 Let $G = (V, E, \omega) \in \mathcal G$. The following results and properties hold:
1.
The Laplacian is positive semidefinite on $\mathcal V$ and positive definite on $\mathcal V_0$.
2. The Laplacian satisfies a maximum principle: for all $u \in \mathcal V_+$, $\max_{i \in V} (\Delta u)_i = \max_{i \in \mathrm{supp}(u)} (\Delta u)_i$.
3. For each proper subset $S \subset V$, (12) has a unique solution in $\mathcal V$. If $\nu^S$ is this solution, then $\nu^S \in \mathcal V_+$ and $\mathrm{supp}(\nu^S) = S$.
4. If $R \subset S$ are both proper subsets of $V$ and $\nu^S, \nu^R \in \mathcal V$ are the corresponding solutions of (12), then $\nu^S \ge \nu^R$.

Proof These properties are proven to hold in Bendito et al. (2003, Section 2) for $r = 0$; in Section S10.1 of Supplementary Materials, we give our own proofs for the general case in detail. □

Using property 3 in Lemma 3.2, we can now define the concept of the equilibrium measure of a node subset $S$.

Definition 3.3 Let $G = (V, E, \omega) \in \mathcal G$. For any proper subset $S \subset V$, the equilibrium measure for $S$, $\nu^S$, is the unique function in $\mathcal V$ which satisfies, for all $i \in V$, the equation in (12).

In Lemmas S6.4 and S6.5 in Supplementary Materials, we construct equilibrium measures on a bipartite graph and a star graph, respectively.

3.3 Graph Curvature

We recall the concept of graph curvature, which was introduced in van Gennip et al. (2014, Section 3).

Definition 3.4 Let $G \in \mathcal G$ and $S \subset V$. Then we define the graph curvature of the set $S$ by, for all $i \in V$,
\[
(\kappa_S^{q,r})_i := d_i^{-r} \begin{cases} \sum_{j \in V \setminus S} \omega_{ij}^q, & \text{if } i \in S, \\ -\sum_{j \in S} \omega_{ij}^q, & \text{if } i \in V \setminus S. \end{cases}
\]
(The subset $S$ is proper if $S \neq V$. Note that, by (4), the equation $\Delta u = f$ on $V$ can only have a solution $u$ if $f$ has zero mass. If $S = V$, this necessary zero mass solvability condition is not satisfied by Eq. (12).)

We are mainly interested in the case $q = 1$ in this paper and in any given situation, if there are any restrictions on $r \in [0, 1]$, they will be clear from the context. Hence, for notational simplicity, we will write $\kappa_S := \kappa_S^{1,r}$. For future use, we also define
\[
\kappa_S^+ := \max_{i \in V} (\kappa_S)_i. \tag{13}
\]
The following lemma collects some useful properties of the graph curvature.
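Since for $r = 0$ the operator $\Delta$ restricted to a proper subset $S$ is an invertible submatrix of the combinatorial Laplacian, the equilibrium measure of Definition 3.3 can be computed by solving the linear system (12) on $S$. The sketch below does this and also checks the monotonicity property 4 of Lemma 3.2; the graph, the subsets, and the 0-based node labels are arbitrary example choices.

```python
import numpy as np

omega = np.array([[0.0, 1.0, 0.5, 0.0],
                  [1.0, 0.0, 0.0, 2.0],
                  [0.5, 0.0, 0.0, 1.0],
                  [0.0, 2.0, 1.0, 0.0]])
n = 4
L = np.diag(omega.sum(axis=1)) - omega   # combinatorial Laplacian (r = 0)

def equilibrium_measure(S):
    """Solve (12): (Delta nu)_i = 1 on S, nu_i = 0 on V \\ S, for proper S."""
    S = sorted(S)
    nu = np.zeros(n)
    nu[S] = np.linalg.solve(L[np.ix_(S, S)], np.ones(len(S)))
    return nu

nu_S = equilibrium_measure({0, 1, 2})
nu_R = equilibrium_measure({0, 1})       # R is a subset of S
print(nu_S, nu_R)
```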
Lemma 3.5 Let $G \in \mathcal G$, $S \subset V$, and let $\kappa_S^{q,r}$ and $\kappa_S$ be the graph curvatures from Definition 3.4. Then
\[
\mathrm{TV}(\chi_S) = \langle \kappa_S^{q,r}, \chi_S \rangle_{\mathcal V} \tag{14}
\]
and
\[
\Delta \chi_S = \kappa_S. \tag{15}
\]
Moreover, if $\kappa_S^+$ is as in (13), then $\kappa_S^+ = \max_{i \in S} (\kappa_S)_i$.

Proof The properties in (14) and (15) are proven in van Gennip et al. (2014, Section 3) and can be checked by a direct computation. Note that the latter requires $q = 1$. The property for $\kappa_S^+$ follows from the fact that $\kappa_S$ is nonnegative on $S$ and nonpositive on $V \setminus S$. □

We can use Lemma 3.1 to connect the equilibrium measures from (12) with the graph curvature.

Lemma 3.6 Assume $G = (V, E, \omega) \in \mathcal G$ and let $S$ be a proper subset of $V$. Let $\nu^S$ be the equilibrium measure for $S$ from (12) and let $\kappa_S$ be the graph curvature of $S$ (for $q = 1$) and $\kappa_S^+$ its maximum value, as in Definition 3.4 and (13). Then, for all $i \in S$, $\nu_i^S \ge (\kappa_S^+)^{-1}$.

Proof Define $x := \left( \max_{i \in S} (\kappa_S)_i \right)^{-1} = (\kappa_S^+)^{-1}$. Since $G$ is connected and $S$ is a proper subset of $V$, $\max_{i \in S} (\kappa_S)_i > 0$, and hence $x$ is well-defined. Using (15), we compute $\Delta(x \chi_S) = x \kappa_S \le 1$ on $V$ (and in particular on $S$). Hence, for $i \in S$, $(\Delta(x \chi_S))_i \le 1 = (\Delta \nu^S)_i$. Furthermore, for $i \in V \setminus S$, $x (\chi_S)_i = 0 = \nu_i^S$. Thus, by Lemma 3.1, for all $i \in S$, $x = x (\chi_S)_i \le \nu_i^S$. □

We illustrate Lemma 3.6 with bipartite and star graph examples in Remark S6.6 in Supplementary Materials.

3.4 Green's Functions

Next we use the equilibrium measures to construct Green's functions for Dirichlet and Poisson problems, following the discussion in Bendito et al. (2003, Section 3); see also Bendito et al. (2000a) and Chung and Yau (2000). In this section, all the results assume the context of a given graph $G \in \mathcal G$. In this section and in some selected places later in the paper, we will also denote Green's functions by the symbol $G$. It will always be very clear from the context whether $G$ denotes a graph or a Green's function in any given situation.

Definition 3.7 For a given subset $S \subset V$, we denote by $\mathcal V(S)$ the set of all real-valued node functions whose domain is $S$.
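The curvature identities (14) and (15) and the lower bound of Lemma 3.6 can be checked numerically. The sketch below uses $q = 1$ and $r = 0$, so that $\kappa_S = \Delta \chi_S$ and $\langle \cdot,\cdot \rangle_{\mathcal V}$ is the Euclidean inner product; the graph and the set $S$ are arbitrary example choices.

```python
import numpy as np

omega = np.array([[0.0, 1.0, 0.5, 0.0],
                  [1.0, 0.0, 0.0, 2.0],
                  [0.5, 0.0, 0.0, 1.0],
                  [0.0, 2.0, 1.0, 0.0]])
n = 4
L = np.diag(omega.sum(axis=1)) - omega   # combinatorial Laplacian (r = 0)

S = [0, 1, 2]
chi = np.zeros(n)
chi[S] = 1.0

kappa = L @ chi                          # by (15), kappa_S = Delta chi_S for q = 1
kappa_plus = kappa.max()                 # kappa_S^+ from (13)

# TV(chi_S) = (1/2) sum_{i,j} omega_ij |chi_i - chi_j|, to compare with (14)
tv_chi = 0.5 * np.sum(omega * np.abs(chi[:, None] - chi[None, :]))

# equilibrium measure nu^S from (12), for the bound of Lemma 3.6
nu = np.zeros(n)
nu[S] = np.linalg.solve(L[np.ix_(S, S)], np.ones(len(S)))

print(kappa, kappa_plus, tv_chi, nu)
```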
Note that $\mathcal V(V) = \mathcal V$.

Given a nonempty, proper subset $S \subset V$ and a function $f \in \mathcal V(S)$, the (semihomogeneous) Dirichlet problem is to find $u \in \mathcal V$ such that, for all $i \in V$,
\[
\begin{cases} (\Delta u)_i = f_i, & \text{if } i \in S, \\ u_i = 0, & \text{if } i \in V \setminus S. \end{cases} \tag{16}
\]
Given $k \in V$ and $f \in \mathcal V_0$, the Poisson problem is to find $u \in \mathcal V$ such that
\[
\begin{cases} \Delta u = f, \\ u_k = 0. \end{cases} \tag{17}
\]
Remark 3.8 Note that a general Dirichlet problem which prescribes $u = g$ on $V \setminus S$, for some $g \in \mathcal V(V \setminus S)$, can be transformed into a semihomogeneous problem by considering the function $u - \tilde g$, where, for all $i \in S$, $\tilde g_i = 0$ and for all $i \in V \setminus S$, $\tilde g_i = g_i$.

Lemma 3.9 Let $S \subset V$ be a nonempty, proper subset, and $f \in \mathcal V(S)$. Then the Dirichlet problem (16) has at most one solution. Similarly, given $k \in V$ and $f \in \mathcal V_0$, the Poisson problem (17) has at most one solution.

Proof Given two solutions $u$ and $v$ to the Dirichlet problem, we have $\Delta(u - v) = 0$ on $S$ and $u - v = 0$ on $V \setminus S$. Since the graph is connected, this has as unique solution $u - v = 0$ on $V$ (see the uniqueness proof in point 3 of Lemma 3.2, which uses the comparison principle of Lemma 3.1). A similar argument proves the result for the Poisson problem. □

Next we will show that solutions to both the Dirichlet and Poisson problem exist, by explicitly constructing them using Green's functions.

Definition 3.10 Let $S$ be a nonempty, proper subset of $V$. The function $G: V \times S \to \mathbb R$ is a Green's function for the Dirichlet equation, (16), if, for all $f \in \mathcal V(S)$, the function $u \in \mathcal V$ which is defined by, for all $i \in V$,
\[
u_i := \sum_{j \in S} d_j^r G_{ij} f_j, \tag{18}
\]
satisfies (16). Let $k \in V$. The function $G: V \times V \to \mathbb R$ is a Green's function for the Poisson equation, (17), if, for all $f \in \mathcal V_0$, (17) is satisfied by the function $u \in \mathcal V$ which is defined by, for all $i \in V$,
\[
u_i := \sum_{j \in V} d_j^r G_{ij} f_j = \langle G_{i\cdot}, f \rangle_{\mathcal V}, \tag{19}
\]
where, for all $i \in V$, $G_{i\cdot}: V \to \mathbb R$. We can rewrite the Green's function for the Dirichlet equation in terms of the $\mathcal V$ inner product as well, if we extend either $G_{i\cdot}$ or $f$ to be zero on $V \setminus S$ and extend the other function in any desired way to all of $V$.
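A Poisson problem (17) can also be solved directly, without Green's functions: since $f$ has zero mass and the graph is connected, the $k$-th equation of $\Delta u = f$ is redundant and can be replaced by the pin $u_k = 0$. The sketch below does this for $r = 0$; the graph, $f$, and the choice $k = 0$ (0-based labels) are illustrative assumptions.

```python
import numpy as np

omega = np.array([[0.0, 1.0, 0.5, 0.0],
                  [1.0, 0.0, 0.0, 2.0],
                  [0.5, 0.0, 0.0, 1.0],
                  [0.0, 2.0, 1.0, 0.0]])
L = np.diag(omega.sum(axis=1)) - omega   # combinatorial Laplacian (r = 0)
k = 0

f = np.array([1.0, -1.0, 2.0, -2.0])     # zero-mass right-hand side (for r = 0)
assert np.isclose(f.sum(), 0.0)

# Replace the k-th equation of L u = f by the pin u_k = 0.
A = L.copy()
b = f.copy()
A[k] = 0.0
A[k, k] = 1.0
b[k] = 0.0
u = np.linalg.solve(A, b)
print(u)
```

Because the rows of $L$ sum to zero over any column and $f$ has zero mass, the discarded $k$-th equation holds automatically for the computed $u$.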
In either case, for fixed $j \in S$ (Dirichlet) or fixed $j \in V$ (Poisson), we define $G^j: V \to \mathbb R$ by, for all $i \in V$,
\[
G_i^j := G_{ij}. \tag{20}
\]
Lemma 3.11 Let $S$ be a nonempty, proper subset of $V$ and let $G: V \times S \to \mathbb R$. Then $G$ is a Green's function for the Dirichlet equation, (16), if and only if, for all $i \in V$ and for all $j \in S$,
\[
\begin{cases} (\Delta G^j)_i = d_j^{-r} \delta_{ij}, & \text{if } i \in S, \\ G_i^j = 0, & \text{if } i \in V \setminus S. \end{cases} \tag{21}
\]
Let $k \in V$ and $G: V \times V \to \mathbb R$. Then $G$ is a Green's function for the Poisson equation, (17), if and only if there is a $q \in \mathcal V$ which satisfies
\[
M(q) = -1 \tag{22}
\]
and there is a $C \in \mathbb R$, such that $G$ satisfies, for all $i, j \in V$,
\[
\begin{cases} (\Delta G^j)_i = d_j^{-r} \delta_{ij} + q_i, \\ G_k^j = C. \end{cases} \tag{23}
\]
Proof For the Dirichlet case, let $u$ be given by (18); then, for all $i \in S$, $(\Delta u)_i = \sum_{j \in S} d_j^r (\Delta G^j)_i f_j$. If the function $G$ is a Green's function, then, for all $f \in \mathcal V(S)$ and for all $i \in S$, $(\Delta u)_i = f_i$. In particular, if we apply this to $f = \chi_{\{j\}}$ for $j \in S$, we find, for all $i, j \in S$, $d_j^r (\Delta G^j)_i = \delta_{ij}$. Moreover, for all $f \in \mathcal V(S)$ and for all $i \in V \setminus S$, $u_i = 0$. Applying this again to $f = \chi_{\{j\}}$ for $j \in S$, we find for all $i \in V \setminus S$ that $d_j^r G_i^j = 0$. Hence, for all $i \in V \setminus S$ and for all $j \in S$, $G_i^j = 0$. This gives us (21). Next assume $G$ satisfies (21). By substituting $G$ into (18), we find that $u$ satisfies (16) and thus $G$ is a Green's function.

Now we consider the Poisson case and we let $u$ be given by (19). Let $q$ satisfy (22). If $G$ is a Green's function, then, for all $f \in \mathcal V_0$, $\Delta u = f$. Let $l_1, l_2 \in V$ and apply $\Delta u = f$ to $f = d_{l_1}^{-r} \chi_{\{l_1\}} - d_{l_2}^{-r} \chi_{\{l_2\}}$. It follows that, for all $i \in V$, $(\Delta G^{l_1})_i - (\Delta G^{l_2})_i = d_{l_1}^{-r} (\chi_{\{l_1\}})_i - d_{l_2}^{-r} (\chi_{\{l_2\}})_i$. In particular, if $l_1 \neq i \neq l_2$, the right-hand side in this equality is zero and thus, for all $i \in V$, $j \mapsto (\Delta G^j)_i$ is constant on $V \setminus \{i\}$. In other words, there is a $q \in \mathcal V$, such that, for all $i \in V$ and for all $j \in V \setminus \{i\}$, $(\Delta G^j)_i = q_i$. Next let $l \in V$ and apply $\Delta u = f$ to the function $f = \chi_{\{l\}} - A(\chi_{\{l\}})$. We compute that
\[
(\Delta u)_i = d_l^r (\Delta G^l)_i - \frac{d_l^r\, d_i^r}{\mathrm{vol}(V)} (\Delta G^i)_i - \frac{d_l^r}{\mathrm{vol}(V)}\, q_i \left( \mathrm{vol}(V) - d_i^r \right).
\]
Hence, if $l = i$, $\Delta u = f$ reduces to
\[
d_i^r (\Delta G^i)_i \left( 1 - \frac{d_i^r}{\mathrm{vol}(V)} \right) - d_i^r q_i \left( 1 - \frac{d_i^r}{\mathrm{vol}(V)} \right) = 1 - \frac{d_i^r}{\mathrm{vol}(V)}.
\]
We solve this for $(\Delta G^i)_i$ (note that $d_i^r < \mathrm{vol}(V)$, since $n \ge 2$) to find $(\Delta G^i)_i = d_i^{-r} + q_i$. If $l \in V \setminus \{i\}$, $\Delta u = f$ reduces to
\[
d_l^r (\Delta G^l)_i - \frac{d_l^r\, d_i^r}{\mathrm{vol}(V)} (\Delta G^i)_i - d_l^r q_i \left( 1 - \frac{d_i^r}{\mathrm{vol}(V)} \right) = -\frac{d_l^r}{\mathrm{vol}(V)}.
\]
Using the expression for $(\Delta G^i)_i$ that we found above, we solve for $(\Delta G^l)_i$ to find $(\Delta G^l)_i = q_i$.

Combining the above, we find, for all $i, j \in V$, $(\Delta G^j)_i = d_j^{-r} \delta_{ij} + q_i$. Now we compute, for each $j \in V$,
\[
0 = \langle \Delta G^j, \chi_V \rangle_{\mathcal V} = \langle d_j^{-r} \chi_{\{j\}} + q, \chi_V \rangle_{\mathcal V} = 1 + \langle q, \chi_V \rangle_{\mathcal V} = 1 + M(q),
\]
thus $M(q) = -1$. The 'boundary condition' $u_k = 0$ for a fixed $k \in V$ in (17) holds for all $f \in \mathcal V_0$. Applying this again for $f = d_{l_1}^{-r} \chi_{\{l_1\}} - d_{l_2}^{-r} \chi_{\{l_2\}}$, we find $G_k^{l_1} - G_k^{l_2} = 0$. Hence there is a constant $C \in \mathbb R$ such that, for all $j \in V$, $G_k^j = C$. This gives us (23).

Next assume $G$ satisfies (23). By substituting $G$ into (19) we find that $u$ satisfies (17). In particular, remember that $f \in \mathcal V_0$. Thus, since $q_i$ does not depend on $j$, we have $\sum_{j \in V} d_j^r q_i f_j = q_i M(f) = 0$ and moreover $u_k = C M(f) = 0$. Thus $G$ is a Green's function. □

Remark 3.12 Any choice of $q$ in (23) consistent with (22) will lead to a valid Green's function for the Poisson equation and hence to the same (and only) solution $u$ of the Poisson problem (17) via (19). We make the following convenient choice: for all $i \in V$,
\[
q_i = -d_k^{-r} \delta_{ik}. \tag{24}
\]
In Lemma 3.16, we will see that this choice of $q$ leads to a symmetric Green's function. Also any choice of $C \in \mathbb R$ in (23) will lead to a valid Green's function for the Poisson equation. A function $\tilde G$ satisfies (23) with $C = \tilde C \in \mathbb R$ if and only if $\tilde G - \tilde C$ satisfies (23) with $C = 0$. Hence in Lemma 3.14, we will give a Green's function for the Poisson equation for the choice
\[
C = 0. \tag{25}
\]
Corollary 3.13 For a given nonempty, proper subset $S \subset V$, if there is a solution to (21), it is unique. Moreover, for given $k \in V$, $q \in \mathcal V_{-1}$, and $C \in \mathbb R$, if there is a solution to (23), it is unique.
Proof Let $j \in S$ (or $j \in V$). If $G^j$ and $H^j$ both satisfy (21) [or (23)], then $G^j - H^j$ satisfies a Dirichlet (or Poisson) problem of the form (16) [or (17)]. Hence by a similar argument as in the proof of Lemma 3.9, $G^j - H^j = 0$. □

For the following lemma, recall the definition of equilibrium measure from Definition 3.3.

Lemma 3.14 Let $S$ be a nonempty, proper subset of $V$. The function $G: V \times S \to \mathbb R$, defined by, for all $i \in V$ and all $j \in S$,
\[
G_{ij} = \frac{\nu_i^S - \nu_i^{S \setminus \{j\}}}{M(\nu^S) - M(\nu^{S \setminus \{j\}})}, \tag{26}
\]
is the Green's function for the Dirichlet equation, satisfying (21). Let $k \in V$. The function $G: V \times V \to \mathbb R$, defined by, for all $i, j \in V$,
\[
G_{ij} = \frac{1}{\mathrm{vol}(V)} \left( \nu_i^{V \setminus \{k\}} + \nu_k^{V \setminus \{j\}} - \nu_i^{V \setminus \{j\}} \right), \tag{27}
\]
is the Green's function for the Poisson equation, satisfying (23) with (24) and (25).

Proof This can be checked via direct computations. We provide the details in Section S10.2 of Supplementary Materials. □

Remark 3.15 Let $G$ be the Green's function from (27) for the Poisson equation. As shown in Lemma 3.14, $G$ satisfies (23) with (24) and (25). Now let us try to find another Green's function satisfying (23) with (25) and with a different choice of $q$. Fix $k \in V$ and define $\tilde q \in \mathcal V$ by, for all $i \in V$, $\tilde q_i := q_i + d_k^{-r} \delta_{ik}$. Then, by (22), $M(\tilde q) = 0$. Hence, using (19) with the Green's function $G$, we find a function $v \in \mathcal V$ which satisfies $\Delta v = \tilde q$ and $v_k = 0$. Hence, for all $i, j \in V$,
\[
\begin{cases} \left( \Delta (G^j + v) \right)_i = d_j^{-r} \delta_{ij} - d_k^{-r} \delta_{ik} + \tilde q_i = d_j^{-r} \delta_{ij} + q_i, \\ (G^j + v)_k = 0. \end{cases}
\]
So $G + v$ is the new Green's function we are looking for.

Lemma 3.16 Let $S$ be a nonempty, proper subset of $V$. If $G: V \times S \to \mathbb R$ is the Green's function for the Dirichlet equation satisfying (21), then $G$ is symmetric on $S \times S$, i.e. for all $i, j \in S$, $G_{ij} = G_{ji}$. Let $k \in V$. If $G: V \times V \to \mathbb R$ is the Green's function for the Poisson equation satisfying (23) with (24) (and any choice of $C \in \mathbb R$), then $G$ is symmetric, i.e. for all $i, j \in V$, $G_{ij} = G_{ji}$.
Proof Let $G : V \times S \to \mathbb R$ be the Green's function for the Dirichlet equation, satisfying (21). Let $u \in \mathcal V$ be such that $u = 0$ on $V\setminus S$. Let $i \in V$; then

$$\langle \Delta G^i, u\rangle_{\mathcal V} = \sum_{j,k\in V} \omega_{jk}\left(G_j^i - G_k^i\right)u_j = \sum_{j\in S}\sum_{k\in V} \omega_{jk}\left(G_j^i - G_k^i\right)u_j = \sum_{j\in S} d_j^r\, d_i^{-r}\delta_{ji}\, u_j = u_i,$$

where the third equality follows from $d_i^{-r}\delta_{ji} = (\Delta G^i)_j = d_j^{-r}\sum_{k\in V}\omega_{jk}(G_j^i - G_k^i)$.

Now let $i, j \in S$ and use the equality above with $u = G^j$ to deduce

$$G_{ij} = G_i^j = \langle \Delta G^i, G^j\rangle_{\mathcal V} = \langle G^i, \Delta G^j\rangle_{\mathcal V} = G_j^i = G_{ji}.$$

Next we consider the Poisson case with Green's function $G : V \times V \to \mathbb R$, satisfying (23) with (24) and (25). Let $k \in V$ and $u \in \mathcal V$ with $u_k = 0$. Then, similar to the computation above, for $i \in V$, we find

$$\langle \Delta G^i, u\rangle_{\mathcal V} = \sum_{j\in V} (\Delta G^i)_j\, u_j\, d_j^r = \sum_{j\in V}\left(d_i^{-r}\delta_{ji} + q_j\right)u_j\, d_j^r = u_i - \sum_{j\in V} d_k^{-r}\delta_{jk}\, u_j\, d_j^r = u_i - u_k = u_i,$$

where we used (24). If we use the identity above with $u = G^j$, we obtain, for $i, j \in V$,

$$G_{ij} = G_i^j = \langle \Delta G^i, G^j\rangle_{\mathcal V} = \langle G^i, \Delta G^j\rangle_{\mathcal V} = G_j^i = G_{ji},$$

where we have applied that $G_k^j = G_k^i = 0$.

Finally, if $\tilde G : V \times V \to \mathbb R$ satisfies (23) with (24) and with $C \neq 0$, then $\tilde G = G + C$ and hence $\tilde G$ is also symmetric. □

The symmetry and support of the Green's functions are discussed in some more detail in Remarks S5.3 and S5.4 in Supplementary Materials. Section S2 of Supplementary Materials gives a random walk interpretation for the Green's function for the Poisson equation.

4 The Graph Ohta–Kawasaki Functional

4.1 A Negative Graph Sobolev Norm and Ohta–Kawasaki

In analogy with the negative $H^{-1}$ Sobolev norm (and underlying inner product) in the continuum (see for example Evans 2002; Adams and Fournier 2003; Brezis 1999), we introduce the graph $H^{-1}$ inner product and norm.

Definition 4.1 The $H^{-1}$ inner product of $u, v \in \mathcal V_0$ is given by

$$\langle u, v\rangle_{H^{-1}} := \langle \nabla\varphi, \nabla\psi\rangle_{\mathcal E},$$

where $\varphi, \psi \in \mathcal V$ are any functions such that $\Delta\varphi = u$ and $\Delta\psi = v$ hold on $V$.
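Definition 4.1 can be illustrated numerically. The following sketch assumes $r = 0$, a small connected weighted graph of our choosing, and the convention $\langle\phi,\psi\rangle_{\mathcal E} = \frac12\sum_{i,j}\omega_{ij}\phi_{ij}\psi_{ij}$; it solves the two Poisson equations with the Moore–Penrose pseudoinverse (one admissible choice of $\varphi$ and $\psi$) and checks the equivalent expressions for the inner product.

```python
import numpy as np

# Numerical sketch of the graph H^{-1} inner product (Definition 4.1),
# assuming r = 0 and an illustrative connected weighted graph.
W = np.array([[0., 2., 0., 1.],
              [2., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
n = W.shape[0]
Delta = np.diag(W.sum(axis=1)) - W
rng = np.random.default_rng(0)
u = rng.standard_normal(n); u -= u.mean()   # zero mass (r = 0)
v = rng.standard_normal(n); v -= v.mean()
phi = np.linalg.pinv(Delta) @ u             # one solution of Delta phi = u
psi = np.linalg.pinv(Delta) @ v             # one solution of Delta psi = v
# <grad phi, grad psi>_E = (1/2) sum_ij w_ij (phi_i - phi_j)(psi_i - psi_j)
ip_E = 0.5 * np.einsum('ij,ij->', W,
                       np.subtract.outer(phi, phi) * np.subtract.outer(psi, psi))
assert np.isclose(ip_E, u @ psi)            # equals <u, psi>_V
assert np.isclose(ip_E, phi @ v)            # equals <phi, v>_V
print("H^{-1} inner product checks passed")
```

Adding any constant to `phi` or `psi` leaves `ip_E` unchanged, which mirrors the uniqueness-up-to-constants discussed in the remarks that follow.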
Remark 4.2 The zero mass conditions on $u$ and $v$ in Definition 4.1 are necessary and sufficient conditions for the solutions $\varphi$ and $\psi$ of the Poisson equations above to exist, as we have seen in Sect. 3.4. These solutions are unique up to an additive constant. Note that the choice of this constant does not influence the value of the inner product.

Remark 4.3 It is useful to realize we can rewrite the inner product from Definition 4.1 as

$$\langle u,v\rangle_{H^{-1}} = \langle \Delta\varphi, \psi\rangle_{\mathcal V} = \langle u, \psi\rangle_{\mathcal V} \quad\text{or}\quad \langle u,v\rangle_{H^{-1}} = \langle \varphi, \Delta\psi\rangle_{\mathcal V} = \langle \varphi, v\rangle_{\mathcal V}. \quad (28)$$

Remark 4.4 Note that for a connected graph the expression in Definition 4.1 indeed defines an inner product on $\mathcal V_0$, as $\langle u, u\rangle_{H^{-1}} = 0$ implies that $(\nabla\varphi)_{ij} = 0$ for all $i, j \in V$ for which $\omega_{ij} > 0$. Hence, by connectivity, $\varphi$ is constant on $V$ and thus $u = \Delta\varphi = 0$ on $V$.

The $H^{-1}$ inner product then also gives us the $H^{-1}$ norm:

$$\|u\|_{H^{-1}}^2 := \langle u, u\rangle_{H^{-1}} = \|\nabla\varphi\|_{\mathcal E}^2 = \langle u, \varphi\rangle_{\mathcal V}.$$

Let $k \in V$. By (5), if $u \in \mathcal V$, then $u - \mathcal A(u) \in \mathcal V_0$, and hence there exists a unique solution to the Poisson problem

$$\Delta\varphi = u - \mathcal A(u), \qquad \varphi_k = 0, \quad (29)$$

which can be expressed using the Green's function from (27). We say that this solution $\varphi$ solves (29) for $u$. Because the kernel of $\Delta$ contains only the constant functions, the solution $\varphi$ for any other choice of $k$ will only differ by an additive constant. Hence the norm

$$\|u - \mathcal A(u)\|_{H^{-1}}^2 = \|\nabla\varphi\|_{\mathcal E}^2 = \frac12\sum_{i,j\in V}\omega_{ij}(\varphi_i - \varphi_j)^2$$

is independent of the choice of $k$. Note also that this norm in general does depend on $r$, since $\varphi$ does. Contrast this with the Dirichlet energy in (6), which is independent of $r$. The norm does not depend on $q$.

Using the Green's function expansion from (19) for $\varphi$, with $G$ being the Green's function for the Poisson equation from (27), we can also write

$$\|u - \mathcal A(u)\|_{H^{-1}}^2 = \langle u - \mathcal A(u), \varphi\rangle_{\mathcal V} = \sum_{i,j\in V}\left(u - \mathcal A(u)\right)_i d_i^r\, G_{ij}\, d_j^r \left(u - \mathcal A(u)\right)_j.$$

Note that this expression seems to depend on the choice of $k$, via $G$, but by the discussion above we know in fact that it does not depend on $k$. This can also be seen as follows.
A different choice for $k$ leads to an additive constant change in the function $G$, which leaves the norm unchanged, since $\sum_{i\in V} d_i^r\left(u - \mathcal A(u)\right)_i = 0$.

Let $W : \mathbb R \to \mathbb R$ be the double-well potential defined by, for all $x \in \mathbb R$,

$$W(x) := x^2(x-1)^2. \quad (30)$$

Note that $W$ has wells of equal depth located at $x = 0$ and $x = 1$.

Definition 4.5 For $\varepsilon > 0$, $\gamma \ge 0$ and $u \in \mathcal V$, we now define both the (epsilon) Ohta–Kawasaki functional (or diffuse interface Ohta–Kawasaki functional)

$$F_\varepsilon(u) := \frac12\|\nabla u\|_{\mathcal E}^2 + \frac{1}{\varepsilon}\sum_{i\in V} W(u_i) + \frac{\gamma}{2}\|u - \mathcal A(u)\|_{H^{-1}}^2, \quad (31)$$

and the limit Ohta–Kawasaki functional (or sharp interface Ohta–Kawasaki functional)

$$F_0(u) := \mathrm{TV}(u) + \frac{\gamma}{2}\|u - \mathcal A(u)\|_{H^{-1}}^2. \quad (32)$$

The nomenclature and notation is justified by the fact that $F_0$ [with its domain restricted to $\mathcal V^b$, see (8)] is the $\Gamma$-limit of $F_\varepsilon$ for $\varepsilon \to 0$ (this is shown by a straightforward adaptation of the results and proofs in van Gennip and Bertozzi (2012, Section 3); see Section S3 in Supplementary Materials). There are two minimization problems of interest here:

$$\min_{u\in\mathcal V_M} F_\varepsilon(u), \quad (33)$$

$$\min_{u\in\mathcal V_M^b} F_0(u), \quad (34)$$

for a given $M \in \mathbb R$ for the first problem and a given $M \in \mathbb M$ for the second. In this paper we will mostly be concerned with the second problem (34). In Lemma S6.7 in Supplementary Materials, we describe a useful symmetry of $F_0$ for the star graph example.

4.2 Ohta–Kawasaki in Spectral Form

Because of the role the graph Laplacian plays in the Ohta–Kawasaki energies, it is useful to consider its spectrum. As is well known (see for example Chung 1997; von Luxburg 2007; van Gennip et al. 2014), for any $r \in [0,1]$, the eigenvalues of $\Delta$, which we will denote by

$$0 = \lambda_0 \le \lambda_1 \le \dots \le \lambda_{n-1}, \quad (35)$$

are real and nonnegative. The (algebraic and geometric) multiplicity of $0$ as eigenvalue is equal to the number of connected components of the graph, and the corresponding eigenspace is spanned by the indicator functions of those components. If $G \in \mathcal G$, then $G$ is connected, and thus, for all $m \in \{1,\dots,n-1\}$, $\lambda_m > 0$.
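The spectral facts just quoted are easy to verify numerically. A minimal sketch, assuming $r = 0$ (so that $\Delta = D - W$ is symmetric) and an illustrative graph with two connected components:

```python
import numpy as np

# Illustration of the quoted spectral facts for r = 0: the eigenvalues of
# Delta are real and nonnegative, and the multiplicity of 0 equals the
# number of connected components. The graph is an illustrative assumption.
W = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (3, 4)]:    # two components: {0,1,2} and {3,4}
    W[i, j] = W[j, i] = 1.0
Delta = np.diag(W.sum(axis=1)) - W
lam, phi = np.linalg.eigh(Delta)         # Delta is symmetric when r = 0
assert np.all(lam > -1e-12)              # nonnegative spectrum
assert np.sum(np.abs(lam) < 1e-12) == 2  # multiplicity of 0 = #components
# Kernel vectors are constant on each component, i.e. the kernel is spanned
# by the component indicator functions.
for v in phi[:, np.abs(lam) < 1e-12].T:
    assert np.isclose(v[0], v[1]) and np.isclose(v[1], v[2])
    assert np.isclose(v[3], v[4])
print("spectrum checks passed")
```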
We consider a set of corresponding $\mathcal V$-orthonormal eigenfunctions $\varphi^m \in \mathcal V$, i.e. for all $m, l \in \{0,\dots,n-1\}$,

$$\Delta\varphi^m = \lambda_m \varphi^m, \quad\text{and}\quad \langle\varphi^m,\varphi^l\rangle_{\mathcal V} = \delta_{ml}, \quad (36)$$

where $\delta_{ml}$ denotes the Kronecker delta. Note that, since $\Delta$ and $\langle\cdot,\cdot\rangle_{\mathcal V}$ depend on $r$, but not on $q$, so do the eigenvalues $\lambda_m$ and the eigenfunctions $\varphi^m$. For definiteness we choose

$$\varphi^0 := (\mathrm{vol}(V))^{-1/2}\chi_V. \quad (37)$$

(As opposed to the equally valid choice $\varphi^0 = -(\mathrm{vol}(V))^{-1/2}\chi_V$.)

The eigenfunctions form a $\mathcal V$-orthonormal basis for $\mathcal V$; hence, for any $u \in \mathcal V$, we have

$$u = \sum_{m=0}^{n-1} a_m \varphi^m, \quad\text{where}\quad a_m := \langle u, \varphi^m\rangle_{\mathcal V}. \quad (38)$$

As an example, Laplacian eigenvalues and eigenfunctions for the star graph are given in Lemma S6.8 in Supplementary Materials. The following result will be useful later.

Lemma 4.6 Let $G = (V, E, \omega) \in \mathcal G$, $u \in \mathcal V$, and let $\{\varphi^m\}_{m=0}^{n-1}$ be $\mathcal V$-orthonormal Laplacian eigenfunctions as in (36). Then

$$\sum_{m=0}^{n-1}\langle u, \varphi^m\rangle_{\mathcal V}^2 = \mathcal M(u^2).$$

Proof Let $j \in V$ and define $f^j \in \mathcal V$ by, for all $i \in V$, $f_i^j := d_i^{-r}\delta_{ij}$, where $\delta_{ij}$ denotes the Kronecker delta. Using the expansion in (38), we find, for all $i \in V$,

$$f_i^j = \sum_{m=0}^{n-1}\langle f^j, \varphi^m\rangle_{\mathcal V}\varphi_i^m = \sum_{m=0}^{n-1}\sum_{k\in V} d_k^{-r}\delta_{kj}\, d_k^r\, \varphi_k^m \varphi_i^m = \sum_{m=0}^{n-1}\varphi_j^m\varphi_i^m.$$

Hence

$$\sum_{m=0}^{n-1}\langle u,\varphi^m\rangle_{\mathcal V}^2 = \sum_{m=0}^{n-1}\sum_{i,j\in V} u_i u_j\, d_i^r d_j^r\, \varphi_i^m\varphi_j^m = \sum_{i,j\in V} u_i u_j\, d_i^r d_j^r\, f_i^j = \langle u^2, \chi_V\rangle_{\mathcal V} = \mathcal M(u^2). \;\Box$$

Lemma 4.7 Let $u \in \mathcal V$, $k \in V$; then $\varphi$ satisfies $\Delta\varphi = u - \mathcal A(u)$ if and only if

$$\varphi = \mathcal A(\varphi) + \sum_{m=1}^{n-1}\lambda_m^{-1}\langle u,\varphi^m\rangle_{\mathcal V}\varphi^m. \quad (39)$$

Proof Let $\varphi$ satisfy $\Delta\varphi = u - \mathcal A(u)$. Using expansions as in (38) for $\varphi$ and $u - \mathcal A(u)$, we have

$$\sum_{m=0}^{n-1} a_m\varphi^m = \varphi, \qquad \Delta\varphi = u - \mathcal A(u) = \sum_{m=0}^{n-1} b_m\varphi^m,$$

where, for all $m \in \{0,\dots,n-1\}$, $a_m := \langle\varphi,\varphi^m\rangle_{\mathcal V}$ and $b_m := \langle u - \mathcal A(u), \varphi^m\rangle_{\mathcal V}$. Hence $\sum_{m=0}^{n-1} a_m\lambda_m\varphi^m = \sum_{m=0}^{n-1} b_m\varphi^m$ and therefore, for any $l \in \{0,\dots,n-1\}$,

$$a_l\lambda_l = \sum_{m=0}^{n-1} a_m\lambda_m\langle\varphi^m,\varphi^l\rangle_{\mathcal V} = \sum_{m=0}^{n-1} b_m\langle\varphi^m,\varphi^l\rangle_{\mathcal V} = b_l.$$

In particular, if $m \ge 1$, then $a_m = \lambda_m^{-1}b_m$. Because $\lambda_0 = 0$, the identity above does not constrain $a_0$.
Because, for $m \ge 1$, $\langle\varphi^m,\varphi^0\rangle_{\mathcal V} = 0$, it follows that, for $m \in \{1,\dots,n-1\}$,

$$b_m = \langle u - \mathcal A(u), \varphi^m\rangle_{\mathcal V} = \langle u,\varphi^m\rangle_{\mathcal V} - \frac{\mathcal M(u)}{\mathrm{vol}(V)}\langle\chi_V,\varphi^m\rangle_{\mathcal V} = \langle u,\varphi^m\rangle_{\mathcal V} - \frac{\mathcal M(u)}{\mathrm{vol}(V)}(\mathrm{vol}(V))^{1/2}\langle\varphi^0,\varphi^m\rangle_{\mathcal V} = \langle u,\varphi^m\rangle_{\mathcal V}, \quad (40)$$

and therefore, for all $m \in \{1,\dots,n-1\}$, $a_m = \lambda_m^{-1}\langle u,\varphi^m\rangle_{\mathcal V}$. Furthermore

$$a_0 = \langle\varphi,\varphi^0\rangle_{\mathcal V} = (\mathrm{vol}(V))^{-1/2}\langle\varphi,\chi_V\rangle_{\mathcal V} = (\mathrm{vol}(V))^{-1/2}\mathcal M(\varphi).$$

Substituting these expressions for $a_0$ and $a_m$ into the expansion of $\varphi$, we find that $\varphi$ is as in (39). Conversely, if $\varphi$ is as in (39), a direct computation shows that $\Delta\varphi = u - \mathcal A(u)$. □

Remark 4.8 From Lemma 4.7 we see that we can write $\varphi - \mathcal A(\varphi) = \Delta^\dagger(u - \mathcal A(u))$, where $\Delta^\dagger$ is the Moore–Penrose pseudoinverse of $\Delta$ (Dresden 1920; Bjerhammar 1951; Penrose 1955).

Lemma 4.9 Let $q \in [1/2, 1]$, $S \subset V$, and let $\kappa_S^{q,r}$, $\kappa_S$ be the graph curvatures from Definition 3.4; then

$$\mathrm{TV}(\chi_S) = \sum_{m=1}^{n-1}\langle\kappa_S^{q,r},\varphi^m\rangle_{\mathcal V}\langle\chi_S,\varphi^m\rangle_{\mathcal V}.$$

Furthermore, if $q = 1$, then

$$\mathrm{TV}(\chi_S) = \sum_{m=1}^{n-1}\lambda_m\langle\chi_S,\varphi^m\rangle_{\mathcal V}^2 \quad (41)$$

$$= \sum_{m=1}^{n-1}\lambda_m^{-1}\langle\kappa_S,\varphi^m\rangle_{\mathcal V}^2. \quad (42)$$

Proof Using an expansion as in (38) for $\chi_S$ together with (14), we find

$$\mathrm{TV}(\chi_S) = \Big\langle\kappa_S^{q,r}, \sum_{m=0}^{n-1}\langle\chi_S,\varphi^m\rangle_{\mathcal V}\varphi^m\Big\rangle_{\mathcal V} = \sum_{m=0}^{n-1}\langle\kappa_S^{q,r},\varphi^m\rangle_{\mathcal V}\langle\chi_S,\varphi^m\rangle_{\mathcal V} = \sum_{m=1}^{n-1}\langle\kappa_S^{q,r},\varphi^m\rangle_{\mathcal V}\langle\chi_S,\varphi^m\rangle_{\mathcal V},$$

where the last equality follows from

$$\langle\kappa_S^{q,r},\varphi^0\rangle_{\mathcal V} = (\mathrm{vol}(V))^{-1/2}\Big(\sum_{i\in S}\sum_{j\in S^c}\omega_{ij}^q - \sum_{i\in S^c}\sum_{j\in S}\omega_{ij}^q\Big) = 0.$$

Moreover, we use (15) to find $\langle\chi_S,\lambda_m\varphi^m\rangle_{\mathcal V} = \langle\chi_S,\Delta\varphi^m\rangle_{\mathcal V} = \langle\Delta\chi_S,\varphi^m\rangle_{\mathcal V} = \langle\kappa_S,\varphi^m\rangle_{\mathcal V}$; hence, for $m \ge 1$,

$$\langle\chi_S,\varphi^m\rangle_{\mathcal V} = \lambda_m^{-1}\langle\kappa_S,\varphi^m\rangle_{\mathcal V}. \quad (43)$$

If $q = 1$, such that $\kappa_S^{q,r} = \kappa_S$, then (41) and (42) follow. □

Lemma 4.10 Let $q \in [1/2, 1]$, $S \subset V$, and let $\kappa_S$ be the graph curvature (with $q = 1$) from Definition 3.4; then

$$\|\chi_S - \mathcal A(\chi_S)\|_{H^{-1}}^2 = \sum_{m=1}^{n-1}\lambda_m^{-1}\langle\chi_S,\varphi^m\rangle_{\mathcal V}^2 = \sum_{m=1}^{n-1}\lambda_m^{-3}\langle\kappa_S,\varphi^m\rangle_{\mathcal V}^2.$$

Proof Let $k \in V$ and let $\varphi \in \mathcal V$ solve $\Delta\varphi = \chi_S - \mathcal A(\chi_S)$, with $\varphi_k = 0$. Using Lemma 4.7, we have $\varphi - \mathcal A(\varphi) = \sum_{m=1}^{n-1}\lambda_m^{-1}\langle\chi_S,\varphi^m\rangle_{\mathcal V}\varphi^m$.
Because $\langle\mathcal A(\varphi)\chi_V, \chi_S - \mathcal A(\chi_S)\rangle_{\mathcal V} = 0$, we have

$$\|\chi_S - \mathcal A(\chi_S)\|_{H^{-1}}^2 = \langle\varphi - \mathcal A(\varphi), \chi_S - \mathcal A(\chi_S)\rangle_{\mathcal V} = \sum_{m=1}^{n-1}\lambda_m^{-1}\langle\chi_S,\varphi^m\rangle_{\mathcal V}\langle\varphi^m, \chi_S - \mathcal A(\chi_S)\rangle_{\mathcal V}.$$

As in (40) (with $u$ replaced by $\chi_S$), we have, for $m \ge 1$, $\langle\varphi^m, \mathcal A(\chi_S)\chi_V\rangle_{\mathcal V} = 0$, and thus

$$\|\chi_S - \mathcal A(\chi_S)\|_{H^{-1}}^2 = \sum_{m=1}^{n-1}\lambda_m^{-1}\langle\chi_S,\varphi^m\rangle_{\mathcal V}^2.$$

We use (43) to write $\langle\chi_S,\varphi^m\rangle_{\mathcal V}^2 = \lambda_m^{-2}\langle\kappa_S,\varphi^m\rangle_{\mathcal V}^2$, and therefore $\|\chi_S - \mathcal A(\chi_S)\|_{H^{-1}}^2 = \sum_{m=1}^{n-1}\lambda_m^{-3}\langle\kappa_S,\varphi^m\rangle_{\mathcal V}^2$. □

Remark 4.11 Note that $\|\chi_S - \mathcal A(\chi_S)\|_{H^{-1}}$ is independent of $q$, and thus the results from Lemma 4.10 hold for all $q \in [1/2, 1]$. However, the formulation involving the graph curvature relies on (43) and thus on the identity (15), which holds for $\kappa_S$ only, not for any $\kappa_S^{q,r}$. If $q \neq 1$ this leads to the somewhat unnatural situation of using $\kappa_S$ (which corresponds to the case $q = 1$) in a situation where $q \neq 1$. Hence the curvature formulation in Lemma 4.10 is more natural, in this sense, when $q = 1$.

Corollary 4.12 Let $q = 1$, $S \subset V$, and let $F_0$ be the limit Ohta–Kawasaki functional from (32); then

$$F_0(\chi_S) = \sum_{m=1}^{n-1}\Big(\lambda_m + \frac{\gamma}{2}\lambda_m^{-1}\Big)\langle\chi_S,\varphi^m\rangle_{\mathcal V}^2 = \sum_{m=1}^{n-1}\Big(\lambda_m^{-1} + \frac{\gamma}{2}\lambda_m^{-3}\Big)\langle\kappa_S,\varphi^m\rangle_{\mathcal V}^2. \quad (44)$$

Proof This follows directly from the definition in (32) and Lemmas 4.9 and 4.10. □

Lemma S6.9 in Supplementary Materials explicitly computes $F_0(\chi_S)$ for the unweighted star graph example, which allows us, in Corollary S6.10, to solve the binary minimization problem (34) for this example graph. Remarks S6.11 and S6.12 provide further discussion on these results.

5 Graph MBO Schemes

5.1 The Graph Ohta–Kawasaki MBO Scheme

One way in which we can attempt to solve the $F_\varepsilon$ minimization problem in (33) (and thus approximately the $F_0$ minimization problem in (34) in the $\Gamma$-convergence sense of Section S3 in Supplementary Materials) is via a gradient flow.
In Section S4 in Supplementary Materials we derive gradient flows with respect to the $\mathcal V$ inner product (which, if $r = 0$ and each $u \in \mathcal V$ is identified with a vector in $\mathbb R^n$, is just the Euclidean inner product on $\mathbb R^n$) and with respect to the $H^{-1}$ inner product, which lead to graph Allen–Cahn and graph Cahn–Hilliard type systems of equations, respectively. In our simulations later in the paper, however, we do not use these gradient flows, but we use the MBO approximation.

Heuristically, graph MBO type schemes [originally introduced in the continuum setting in Merriman et al. (1992) and Merriman et al. (1993)] can be seen as approximations to graph Allen–Cahn type equations [as in (S1)], obtained by replacing the double-well potential term in that equation by a hard thresholding step. This leads to the algorithm (OKMBO). In the algorithm we have used the set $\mathcal V_\infty$, which we define to be the set of all functions $u : [0,\infty) \times V \to \mathbb R$ which are continuously differentiable in their first argument (which we will typically denote by $t$). For such functions, we will use the notation $u_i(t) := u(t, i)$. We note that where before $u$ and $\varphi$ denoted functions in $\mathcal V$, here these same symbols are used to denote functions in $\mathcal V_\infty$.

For reasons that are explored in Remark S5.1 in Supplementary Materials, in the algorithm we use a variant of (29): for given $u \in \mathcal V$, if $\varphi \in \mathcal V$ satisfies

$$\Delta\varphi = u - \mathcal A(u), \qquad \mathcal M(\varphi) = 0, \quad (45)$$

we say $\varphi$ solves (45) for $u$.

If $\varphi \in \mathcal V$ solves (45) for a given $u \in \mathcal V$ and $\tilde\varphi \in \mathcal V$ solves (29) for the same $u$ and a given $k \in V$, then $\Delta(\varphi - \tilde\varphi) = 0$; hence there exists a $C \in \mathbb R$ such that $\varphi = \tilde\varphi + C\chi_V$. Because $\tilde\varphi_k = 0$, we have $C = \varphi_k$. In particular, because (29) has a unique solution, so does (45).

For a given $\gamma \ge 0$, we define the operator $L : \mathcal V \to \mathcal V$ as follows. For $u \in \mathcal V$, let

$$Lu := \Delta u + \gamma\varphi, \quad (46)$$

where $\varphi \in \mathcal V$ is the solution to (45).
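The operator $L$ can be assembled explicitly in the case $r = 0$, where $\varphi = \Delta^\dagger(u - \mathcal A(u))$ with $\Delta^\dagger$ the Moore–Penrose pseudoinverse and $\mathcal M(\varphi) = 0$ holds automatically, because the range of $\Delta^\dagger$ is orthogonal to the constants. The following sketch (with an illustrative connected graph and $\gamma$) builds $L$ as a matrix and checks that it annihilates constants and that its eigenvalues are consistent with the spectral formula for $L$ given later in Lemma 5.8.

```python
import numpy as np

# Sketch of the operator L from (46), assuming r = 0 and an illustrative
# connected path graph; gamma is an illustrative parameter value.
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
n, gamma = W.shape[0], 0.7
Delta = np.diag(W.sum(axis=1)) - W
# L u = Delta u + gamma * pinv(Delta) (u - A(u)); with r = 0, A(u) = mean(u).
L = Delta + gamma * np.linalg.pinv(Delta) @ (np.eye(n) - np.ones((n, n)) / n)

assert np.allclose(L @ np.ones(n), 0.0)    # L(c chi_V) = 0
lam = np.linalg.eigvalsh(Delta)            # ascending, lam[0] = 0
Lam = np.linalg.eigvalsh(L)
assert np.isclose(Lam[0], 0.0)             # Lambda_0 = 0
# For m >= 1: Lambda_m = lambda_m + gamma / lambda_m (up to reordering).
assert np.allclose(np.sort(Lam[1:]), np.sort(lam[1:] + gamma / lam[1:]))
print("operator checks passed")
```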
Algorithm (OKMBO): The graph Ohta–Kawasaki Merriman–Bence–Osher algorithm

Data: An initial node subset $S^0 \subset V$, a parameter $\gamma \ge 0$, a parameter $r \in [0,1]$, a time step $\tau > 0$, and the number of iterations $N \in \mathbb N \cup \{\infty\}$.

Output: A sequence of node sets $\{S^k\}_{k=1}^N$, which is the (OKMBO) evolution of $S^0$.

for $k = 1$ to $N$, do

ODE step. Compute $u \in \mathcal V_\infty$ by solving

$$\frac{\partial u}{\partial t} = -Lu \;\text{ on } (0,\infty)\times V, \qquad u(0) = u_0 \;\text{ on } V, \quad (47)$$

where $u_0 = \chi_{S^{k-1}}$ and $L$ is as in (46), with $\varphi \in \mathcal V_\infty$ being such that, for all $t \in [0,\infty)$, $\varphi(t)$ solves (45) for $u(t)$.

Threshold step. Define the subset $S^k \subset V$ to be

$$S^k := \left\{i \in V : u_i(\tau) \ge \tfrac12\right\}.$$

end for

Remark 5.1 Since $L$, as defined in (46), is a continuous linear operator from $\mathcal V$ to $\mathcal V$ [see (55)], by standard ODE theory (Hale 2009, Chapter 1; Coddington and Levinson 1984, Chapter 1) there exists a unique, continuously differentiable-in-$t$ solution $u$ of (47) on $(0,\infty)\times V$. In the threshold step of (OKMBO), however, we only require $u(\tau)$; hence it suffices to compute the solution $u$ on $(0,\tau]$. By standard ODE arguments (Hale 2009, Chapter III.4) we can write (and interpret) the solution of (47) as an exponential function: $u(t) = e^{-tL}u_0$.

In Supplementary Materials, Remark S5.1 and Lemma S5.2 address the relationship between solutions of (29) and (45). The next lemma will come in handy later in the paper.

Lemma 5.2 Let $G = (V, E, \omega) \in \mathcal G$, $\gamma \ge 0$, and $u \in \mathcal V$; then the function

$$[0,\infty) \to \mathbb R, \qquad t \mapsto \langle e^{-tL}u, u\rangle_{\mathcal V} \quad (48)$$

is decreasing. Moreover, if $u$ is not constant on $V$, then the function in (48) is strictly decreasing. Furthermore, for all $t > 0$,

$$\frac{d}{dt}\mathcal M\left(e^{-tL}u\right) = 0.$$

Proof Using the expansion in (38) for $u$, we have

$$\langle e^{-tL}u, u\rangle_{\mathcal V} = \Big\langle \sum_{m=0}^{n-1} e^{-t\Lambda_m}\langle u,\varphi^m\rangle_{\mathcal V}\varphi^m, \sum_{l=0}^{n-1}\langle u,\varphi^l\rangle_{\mathcal V}\varphi^l\Big\rangle_{\mathcal V} = \sum_{m,l=0}^{n-1} e^{-t\Lambda_m}\langle u,\varphi^m\rangle_{\mathcal V}\langle u,\varphi^l\rangle_{\mathcal V}\delta_{ml} = \sum_{m=0}^{n-1} e^{-t\Lambda_m}\langle u,\varphi^m\rangle_{\mathcal V}^2. \quad (49)$$

Since, for each $m \in \{0,\dots,n-1\}$, the function $t \mapsto e^{-t\Lambda_m}$ is decreasing, the function in (48) is decreasing.
Moreover, for each $m \in \{1,\dots,n-1\}$, the function $t \mapsto e^{-t\Lambda_m}$ is strictly decreasing; thus the function in (48) is strictly decreasing unless, for all $m \in \{1,\dots,n-1\}$, $\langle u,\varphi^m\rangle_{\mathcal V} = 0$. Assume that for all $m \in \{1,\dots,n-1\}$, $\langle u,\varphi^m\rangle_{\mathcal V} = 0$. Then, by the expansion in (38) and the expression in (37), we have $u = \langle u,\varphi^0\rangle_{\mathcal V}\varphi^0 = \mathrm{vol}(V)^{-1}\mathcal M(u)\chi_V$. Hence $u$ is constant. Thus, if $u$ is not constant, then the function in (48) is strictly decreasing.

The proof of the mass conservation property follows very closely the proof of (van Gennip et al. 2014, Lemma 2.6(a)). Using (4) and (45), we find

$$\frac{d}{dt}\mathcal M(u(t)) = \mathcal M\Big(\frac{\partial u(t)}{\partial t}\Big) = -\mathcal M(\Delta u(t)) - \gamma\mathcal M(\varphi) = 0. \;\Box$$

The following lemma introduces a Lyapunov functional for the (OKMBO) scheme.

Lemma 5.3 Let $G = (V, E, \omega) \in \mathcal G$, $\gamma \ge 0$, and $\tau > 0$. Define $J_\tau : \mathcal V \to \mathbb R$ by

$$J_\tau(u) := \frac{1}{\tau}\langle\chi_V - u, e^{-\tau L}u\rangle_{\mathcal V}. \quad (50)$$

Then the functional $J_\tau$ is strictly concave and Fréchet differentiable, with directional derivative at $u \in \mathcal V$ in the direction $v \in \mathcal V$ given by

$$dJ_\tau^u(v) := \frac{1}{\tau}\langle\chi_V - 2e^{-\tau L}u, v\rangle_{\mathcal V}. \quad (51)$$

Furthermore, if $S^0 \subset V$ and $\{S^k\}_{k=1}^N$ is a sequence generated by (OKMBO), then for all $k \in \{1,\dots,N\}$,

$$\chi_{S^k} \in \operatorname*{argmin}_{v\in\mathcal K} dJ_\tau^{\chi_{S^{k-1}}}(v), \quad (52)$$

where $\mathcal K$ is as defined in (9). Moreover, $J_\tau$ is a Lyapunov functional for the (OKMBO) scheme in the sense that, for all $k \in \{1,\dots,N\}$, $J_\tau(\chi_{S^k}) \le J_\tau(\chi_{S^{k-1}})$, with equality if and only if $S^k = S^{k-1}$.

Proof This follows immediately from the proofs of (van Gennip et al. 2014, Lemma 4.5, Proposition 4.6) [which in turn were based on the continuum case established in Esedoğlu and Otto (2015)], as replacing $\Delta$ in those proofs by $L$ does not invalidate any of the statements. It is useful, however, to reproduce the proof here, especially with an eye to incorporating a mass constraint into the (OKMBO) scheme in Sect. 5.3.

First let $u, v \in \mathcal V$ and $s \in \mathbb R$; then we compute

$$\frac{d}{ds}J_\tau(u + sv)\Big|_{s=0} = \frac{1}{\tau}\left[\langle\chi_V - u, e^{-\tau L}v\rangle_{\mathcal V} - \langle v, e^{-\tau L}u\rangle_{\mathcal V}\right] = \frac{1}{\tau}\langle\chi_V - 2e^{-\tau L}u, v\rangle_{\mathcal V},$$

where we used that $e^{-\tau L}$ is a self-adjoint operator and $e^{-\tau L}\chi_V = \chi_V$.
Moreover, if V V v ∈ V\{0}, then d J (u + sv) −τ L =−2 v, e v < 0, ds s=0 where the inequality follows for example from the spectral expansion in (49). Hence J is strictly concave. k−1 To construct a minimizer v for the linear functional dJ over K,weset v = 1 τ i −τ L whenever 1 − 2 e χ k−1 ≤ 0 and v = 0 for those i ∈ V for which 1 − −τ L 10 k N k 2 e χ > 0. The sequence {S } generated in this way by setting S = k−1 i k=1 {i ∈ V : v = 1} corresponds exactly to the sequence generated by (OKMBO). k−1 Finally we note that, since J is strictly concave and dJ is linear, we have, if τ τ χ k+1 = χ k , then S S χ χ χ k k k S S S J χ k+1 − J χ k < dJ χ k+1 − χ k = dJ χ k+1 − dJ χ k ≤ 0, τ τ τ τ τ S S S S S S where the last inequality follows because of (52). Clearly, if χ k+1 = χ k , then S S J χ k+1 − J χ k = 0. τ τ S S Remark 5.4 It is worth elaborating brieﬂy on the underlying reason why (52)isthe right minimization problem to consider in the setting of Lemma 5.3. As is standard in sequential linear programming the minimization of J over K is attempted by approximating J by its linearization, χ χ χ k−1 k−1 k−1 S S S J (u) ≈ J χ k + dJ u − χ k = J χ k + dJ (u) − dJ χ k , τ τ τ τ τ τ S S S S and minimizing this linear approximation over all admissible u ∈ K. We can use Lemma 5.3 to prove that the (OKMBO) scheme converges in a ﬁnite number of steps to stationary state in sense of the following corollary. 0 k N Corollary 5.5 Let G = (V , E,ω) ∈ G, γ ≥ 0, and τ> 0.If S ⊂ V and {S } is k=1 a sequence generated by (OKMBO), then there is a K ≥ 0 such that, for all k ≥ K, k K S = S . Proof If N ∈ N the statement is trivially true, so now assume N =∞. Because |V | < ∞, there are only ﬁnitely many different possible subsets of V , hence there 10 −τ L Note that the arbitrary choice for those i for which 1 −2 e χ = 0 introduces non-uniqueness k−1 into the minimization problem (52). 123 J Nonlinear Sci K k exists K , k ∈ N such that k > K and S = S . 
Hence the set $\{l \in \mathbb N : S^K = S^{K+l}\}$ is not empty, and thus $\bar l := \min\{l \in \mathbb N : S^K = S^{K+l}\} \ge 1$ (remember that we use the convention $0 \notin \mathbb N$). If $\bar l \ge 2$, then by Lemma 5.3 we know that $J_\tau(\chi_{S^{K+\bar l}}) < J_\tau(\chi_{S^{K+\bar l - 1}}) < \dots < J_\tau(\chi_{S^K}) = J_\tau(\chi_{S^{K+\bar l}})$. This is a contradiction; hence $\bar l = 1$ and thus $S^K = S^{K+1}$. Because equation (47) has a unique solution (as noted in Remark 5.1), we have, for all $k \ge K$, $S^k = S^K$. □

Remark 5.6 For given $\tau > 0$, the minimization problem

$$u \in \operatorname*{argmin}_{v\in\mathcal K} J_\tau(v) \quad (53)$$

has a solution $u \in \mathcal V$, because $J_\tau$ is strictly concave and $\mathcal K$ is compact and convex. This solution is not unique; for example, if $\tilde u = \chi_V - u$, then, since $e^{-\tau L}$ is self-adjoint, we have

$$J_\tau(u) = \frac{1}{\tau}\langle\tilde u, e^{-\tau L}(\chi_V - \tilde u)\rangle_{\mathcal V} = \frac{1}{\tau}\langle\chi_V - \tilde u, e^{-\tau L}\tilde u\rangle_{\mathcal V} = J_\tau(\tilde u).$$

Lemma 5.3 shows that $J_\tau$ does not increase in value along a sequence $\{S^k\}_{k=1}^N$ of sets generated by the (OKMBO) algorithm, but this does not guarantee that (OKMBO) converges to the solution of the minimization problem in (53). In fact, we see in Lemma S5.10 and Remark S5.11 in Supplementary Materials that for every $S^0 \subset V$ there is a value $\tau_\rho(S^0)$ such that $S^1 = S^0$ if $\tau < \tau_\rho(S^0)$. Hence, unless $S^0$ happens to be a solution to (53), if $\tau < \tau_\rho(S^0)$ the (OKMBO) algorithm will not converge to a solution. This observation and related issues concerning the minimization of $J_\tau$ will become important in Sect. 5.2; see for example Remarks 5.12 and 5.16.

We end this section with a look at the spectrum of $L$, which plays a role in our further study of (OKMBO). More information is given in Section S5.2 of Supplementary Materials. Moreover, in Section S5.3 we use this information about the spectrum to derive pinning and spreading bounds on the parameter $\tau$ in (OKMBO), along similar lines as corresponding results in van Gennip et al. (2014).

Remark 5.7 Remembering from Remark 4.8 the Moore–Penrose pseudoinverse of $\Delta$, which we denote by $\Delta^\dagger$, we see that the condition $\mathcal M(\varphi) = 0$ in (45) allows us to write $\varphi = \Delta^\dagger(u - \mathcal A(u))$.
In particular, if $\varphi$ satisfies (45), then

$$\varphi = \sum_{m=1}^{n-1}\lambda_m^{-1}\langle u,\varphi^m\rangle_{\mathcal V}\varphi^m, \quad (54)$$

where the $\lambda_m$ and $\varphi^m$ are the eigenvalues of $\Delta$ and corresponding eigenfunctions, respectively, as in (35), (36). Hence, if we expand $u$ as in (38) and $L$ is the operator defined in (46), then

$$L(u) = \sum_{m=1}^{n-1}\left(\lambda_m + \gamma\lambda_m^{-1}\right)\langle u,\varphi^m\rangle_{\mathcal V}\varphi^m. \quad (55)$$

In particular, $L : \mathcal V \to \mathcal V$ is a continuous, linear, bounded, self-adjoint operator, and for every $c \in \mathbb R$, $L(c\chi_V) = 0$. If, given a $u_0 \in \mathcal V$, $u \in \mathcal V_\infty$ solves (47), then we have that $u(t) = e^{-tL}u_0$. Note that the operator $e^{-tL}$ is self-adjoint, because $L$ is self-adjoint.

Lemma 5.8 Let $G = (V, E, \omega) \in \mathcal G$, $\gamma \ge 0$, and let $L : \mathcal V \to \mathcal V$ be the operator defined in (46); then $L$ has $n$ eigenvalues $\Lambda_m$ ($m \in \{0,\dots,n-1\}$), given by

$$\Lambda_m = \begin{cases} 0, & \text{if } m = 0,\\ \lambda_m + \gamma\lambda_m^{-1}, & \text{if } m \ge 1,\end{cases} \quad (56)$$

where the $\lambda_m$ are the eigenvalues of $\Delta$ as in (35). The set $\{\varphi^m\}_{m=0}^{n-1}$ from (36) is a set of corresponding eigenfunctions. In particular, $L$ is positive semidefinite.

Proof This follows from (55) and the fact that $\lambda_0 = 0$ and, for all $m \ge 1$, $\lambda_m > 0$. □

In the remainder of this paper we use the notation $\lambda_m$ for the eigenvalues of $\Delta$ and $\Lambda_m$ for the eigenvalues of $L$, with corresponding eigenfunctions $\varphi^m$, as in (35), (36), and Lemma 5.8. Using an expansion as in (38) and the eigenfunctions and eigenvalues as in Lemma 5.8, we can write the solution to (47) as

$$u(t) = \sum_{m=0}^{n-1} e^{-t\Lambda_m}\langle u_0,\varphi^m\rangle_{\mathcal V}\varphi^m. \quad (57)$$

5.2 $\Gamma$-Convergence of the Lyapunov Functional

In this section we prove that the functionals $\tilde J_\tau : \mathcal K \to \mathbb R$, defined by

$$\tilde J_\tau(u) := \frac{1}{\tau}\langle\chi_V - u, e^{-\tau L}u\rangle_{\mathcal V}, \quad (58)$$

for $\tau > 0$, $\Gamma$-converge to $\tilde F_0 : \mathcal K \to \overline{\mathbb R}$ as $\tau \to 0$, where $\tilde F_0$ is defined by

$$\tilde F_0(u) := \begin{cases} F_0(u), & \text{if } u \in \mathcal K \cap \mathcal V^b,\\ +\infty, & \text{otherwise},\end{cases} \quad (59)$$

where $F_0$ is as in (32) with $q = 1$. We use the notation $\overline{\mathbb R} := \mathbb R \cup \{-\infty, +\infty\}$ for the extended real line. Remember that the set $\mathcal K$ was defined in (9) as the subset of all $[0,1]$-valued functions in $\mathcal V$. We note that $\tilde J_\tau = J_\tau|_{\mathcal K}$, where $J_\tau$ is the Lyapunov functional from Lemma 5.3.
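The (OKMBO) iteration and its Lyapunov property can be illustrated numerically using the spectral representation (57) for the ODE step. The following sketch assumes $r = 0$, an illustrative two-cluster graph, and illustrative values of $\gamma$ and $\tau$; the helper names are ours.

```python
import numpy as np

# Runnable sketch of (OKMBO) for r = 0, using (57) for the ODE step.
# Graph, gamma and tau are illustrative assumptions.
W = np.array([[0., 1., 1., 0., 0., 0.],
              [1., 0., 1., 0., 0., 0.],
              [1., 1., 0., 1., 0., 0.],
              [0., 0., 1., 0., 1., 1.],
              [0., 0., 0., 1., 0., 1.],
              [0., 0., 0., 1., 1., 0.]])
n, gamma, tau = W.shape[0], 0.5, 0.4
Delta = np.diag(W.sum(axis=1)) - W
L = Delta + gamma * np.linalg.pinv(Delta)       # L from (46) with r = 0
Lam, phi = np.linalg.eigh(L)                    # eigenpairs of L

def heat(u, t):
    """e^{-tL} u via the eigenpairs of L, as in (57)."""
    return phi @ (np.exp(-t * Lam) * (phi.T @ u))

def J(u, t):
    """Lyapunov functional (1/t) <chi_V - u, e^{-tL} u> from (50), r = 0."""
    return (np.ones(n) - u) @ heat(u, t) / t

u = np.zeros(n); u[[0, 1, 2]] = 1.0             # initial set S^0 = {0, 1, 2}
# The ODE step conserves mass (Lemma 5.2):
assert np.isclose(heat(u, tau).sum(), u.sum())
values = [J(u, tau)]
for _ in range(20):                             # (OKMBO) iterations
    u = (heat(u, tau) >= 0.5).astype(float)     # threshold step
    values.append(J(u, tau))
# J_tau is nonincreasing along the iterates (Lemma 5.3); by Corollary 5.5
# the scheme reaches a stationary state in finitely many steps.
assert all(a >= b - 1e-12 for a, b in zip(values, values[1:]))
print("MBO checks passed")
```

Note that the threshold step does not conserve mass, which is the motivation for the mass-conserving variant developed in Sect. 5.3.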
Compare the results in this section with the continuum results in (Esedoğlu and Otto 2015, Appendix). (Note that $\mathcal K \cap \mathcal V^b = \mathcal V^b$. We included $\mathcal K$ explicitly in the intersection here to emphasize that the domain of $\tilde F_0$ is $\mathcal K$, not $\mathcal V$.)

In this section we will encounter different variants of the functional $J_\tau$, such as $\tilde J_\tau$, $\hat J_\tau$, and $\bar J_\tau$, and similar variants of $F_0$. The differences between these functionals are the domains on which they are defined and the parts of their domains on which they take finite values: $\tilde J_\tau$ is defined on all of $\mathcal K$, while $\hat J_\tau$ and $\bar J_\tau$ (which will be defined later in this section) incorporate a mass constraint and a relaxed mass constraint in their domains, respectively. For technical reasons, we thought it prudent to distinguish these functionals through their notation, but intuitively they can be thought of as the same functional with different types of constraints (or lack thereof).

For sequences in $\mathcal V$ we use convergence in the $\mathcal V$-norm, i.e. if $\{u_k\}_{k\in\mathbb N} \subset \mathcal V$, then we say $u_k \to u$ as $k \to \infty$ if $\|u_k - u\|_{\mathcal V} \to 0$ as $k \to \infty$. Note, however, that all norms on the finite-dimensional space $\mathcal V$ induce equivalent topologies, so different norms can be used without this affecting the convergence results in this section.

Lemma 5.9 Let $G = (V, E, \omega) \in \mathcal G$. Let $\{u_k\}_{k\in\mathbb N} \subset \mathcal V$ and $u \in \mathcal V\setminus\mathcal V^b$ be such that $u_k \to u$ as $k \to \infty$. Then there exist an $i \in V$, an $\eta > 0$, and a $K > 0$ such that for all $k \ge K$ we have

$$(u_k)_i \in \mathbb R\setminus\left([-\eta,\eta]\cup[1-\eta,1+\eta]\right).$$

Proof Because $u \in \mathcal V\setminus\mathcal V^b$, there is an $i \in V$ such that $u_i \notin \{0,1\}$. Since $G \in \mathcal G$, we know that $d_i > 0$. Thus, since $u_k \to u$ as $k \to \infty$, we know that for every $\hat\eta > 0$ there exists a $K(\hat\eta) > 0$ such that for all $k \ge K(\hat\eta)$ we have $|(u_k)_i - u_i| < \hat\eta$. Define $\eta := \frac12\min\{|u_i|, |u_i - 1|\} > 0$. Then, for all $k \ge K(\eta)$, we have $|(u_k)_i| \ge |u_i| - |(u_k)_i - u_i| > |u_i| - \eta \ge \eta$ and similarly $|(u_k)_i - 1| > \eta$. □

Theorem 5.10 ($\Gamma$-convergence) Let $G = (V, E, \omega) \in \mathcal G$, $q = 1$, and $\gamma \ge 0$. Let $\{\tau_k\}_{k\in\mathbb N}$ be a sequence of positive real numbers such that $\tau_k \to 0$ as $k \to \infty$.
Let $u \in \mathcal K$. Then the following lower bound and upper bound hold:

(LB) for every sequence $\{u_k\}_{k\in\mathbb N} \subset \mathcal K$ such that $u_k \to u$ as $k \to \infty$, $\tilde F_0(u) \le \liminf_{k\to\infty}\tilde J_{\tau_k}(u_k)$, and

(UB) there exists a sequence $\{u_k\}_{k\in\mathbb N} \subset \mathcal K$ such that $u_k \to u$ as $k \to \infty$ and $\limsup_{k\to\infty}\tilde J_{\tau_k}(u_k) \le \tilde F_0(u)$.

Proof With $J_\tau$ the Lyapunov functional from Lemma 5.3, we compute, for $\tau > 0$ and $u \in \mathcal V$,

$$\tau J_\tau(u) = \langle\chi_V - u, e^{-\tau L}u\rangle_{\mathcal V} = \mathcal M\left(e^{-\tau L}u\right) - \langle u, e^{-\tau L}u\rangle_{\mathcal V} = \mathcal M(u) - \langle u, e^{-\tau L}u\rangle_{\mathcal V},$$

where we used the mass conservation property from Lemma 5.2. Using the expansion in (38) and Lemma 4.6, we find

$$J_\tau(u) = \frac{1}{\tau}\mathcal M(u) - \frac{1}{\tau}\sum_{m=0}^{n-1}\langle u,\varphi^m\rangle_{\mathcal V}^2 - \sum_{m=0}^{n-1}\frac{e^{-\tau\Lambda_m} - 1}{\tau}\langle u,\varphi^m\rangle_{\mathcal V}^2 = -\sum_{m=0}^{n-1}\frac{e^{-\tau\Lambda_m} - 1}{\tau}\langle u,\varphi^m\rangle_{\mathcal V}^2 + \frac{1}{\tau}\left(\mathcal M(u) - \mathcal M(u^2)\right).$$

Now we prove (LB). Let $\{\tau_k\}_{k\in\mathbb N}$ and $\{u_k\}_{k\in\mathbb N} \subset \mathcal K$ be as stated in the theorem. Then, for all $m \in \{0,\dots,n-1\}$, we have that $-\lim_{k\to\infty}\frac{e^{-\tau_k\Lambda_m} - 1}{\tau_k} = \Lambda_m$ and $\lim_{k\to\infty}\langle u_k,\varphi^m\rangle_{\mathcal V}^2 = \langle u,\varphi^m\rangle_{\mathcal V}^2$, hence

$$-\lim_{k\to\infty}\sum_{m=0}^{n-1}\frac{e^{-\tau_k\Lambda_m} - 1}{\tau_k}\langle u_k,\varphi^m\rangle_{\mathcal V}^2 = \sum_{m=0}^{n-1}\Lambda_m\langle u,\varphi^m\rangle_{\mathcal V}^2 \ge 0. \quad (60)$$

Moreover, if $u \in \mathcal K \cap \mathcal V^b$, then, combining the above with (44) (remember that $q = 1$) and Lemma 5.8, we find

$$-\lim_{k\to\infty}\sum_{m=0}^{n-1}\frac{e^{-\tau_k\Lambda_m} - 1}{\tau_k}\langle u_k,\varphi^m\rangle_{\mathcal V}^2 = F_0(u). \quad (61)$$

Furthermore, since, for every $k \in \mathbb N$, $u_k \in \mathcal K$, we have that, for all $i \in V$, $(u_k)_i^2 \le (u_k)_i$, and thus $\mathcal M(u_k) - \mathcal M(u_k^2) \ge 0$. Hence

$$\liminf_{k\to\infty}\tilde J_{\tau_k}(u_k) \ge -\liminf_{k\to\infty}\sum_{m=0}^{n-1}\frac{e^{-\tau_k\Lambda_m} - 1}{\tau_k}\langle u_k,\varphi^m\rangle_{\mathcal V}^2 = F_0(u).$$

Assume now that $u \in \mathcal K\setminus\mathcal V^b$ instead; then by Lemma 5.9 it follows that there are an $i \in V$ and an $\eta > 0$ such that, for all $k$ large enough, $(u_k)_i \in (\eta, 1-\eta)$. Thus, for all $k$ large enough,

$$\mathcal M(u_k) - \mathcal M(u_k^2) \ge d_i^r (u_k)_i\left(1 - (u_k)_i\right) > d_i^r\eta^2 > 0.$$

Combining this with (60) we deduce

$$\liminf_{k\to\infty}\tilde J_{\tau_k}(u_k) \ge \liminf_{k\to\infty}\frac{1}{\tau_k}\left(\mathcal M(u_k) - \mathcal M(u_k^2)\right) = +\infty = \tilde F_0(u),$$

which completes the proof of (LB).

To prove (UB), first we note that, if $u \in \mathcal K\setminus\mathcal V^b$, then $\tilde F_0(u) = +\infty$ and the upper bound inequality is trivially satisfied. If instead $u \in \mathcal K \cap \mathcal V^b$, then we define a so-called recovery sequence as follows: for all $k \in \mathbb N$, $u_k := u$.
We trivially have that $u_k \to u$ as $k \to \infty$. Moreover, since $u^2 = u$, we find, for all $k \in \mathbb N$, $\mathcal M(u_k) - \mathcal M(u_k^2) = 0$. Finally we find

$$\limsup_{k\to\infty}\tilde J_{\tau_k}(u_k) = -\lim_{k\to\infty}\sum_{m=0}^{n-1}\frac{e^{-\tau_k\Lambda_m} - 1}{\tau_k}\langle u,\varphi^m\rangle_{\mathcal V}^2 = F_0(u),$$

where we used a similar calculation as in (61). □

Theorem 5.11 (Equi-coercivity) Let $G = (V, E, \omega) \in \mathcal G$ and $\gamma \ge 0$. Let $\{\tau_k\}_{k\in\mathbb N}$ be a sequence of positive real numbers such that $\tau_k \to 0$ as $k \to \infty$, and let $\{u_k\}_{k\in\mathbb N} \subset \mathcal K$ be a sequence for which there exists a $C > 0$ such that, for all $k \in \mathbb N$, $\tilde J_{\tau_k}(u_k) \le C$. Then there is a subsequence $\{u_{k_l}\}_{l\in\mathbb N} \subset \{u_k\}_{k\in\mathbb N}$ and a $u \in \mathcal V^b$ such that $u_{k_l} \to u$ as $l \to \infty$.

Proof Since, for all $k \in \mathbb N$, we have $u_k \in \mathcal K$, it follows that, for all $k \in \mathbb N$, $0 \le \|u_k\|_2 \le \sqrt n$, where $\|\cdot\|_2$ denotes the usual Euclidean norm on $\mathbb R^n$ pulled back to $\mathcal V$ via the natural identification of each function in $\mathcal V$ with one and only one vector in $\mathbb R^n$ (thus, it is the norm $\|\cdot\|_{\mathcal V}$ if $r = 0$). By the Bolzano–Weierstrass theorem it follows that there is a subsequence $\{u_{k_l}\}_{l\in\mathbb N} \subset \{u_k\}_{k\in\mathbb N}$ and a $u \in \mathcal V$ such that $u_{k_l} \to u$ with respect to the norm $\|\cdot\|_2$ as $l \to \infty$. Because the $\mathcal V$-norm is topologically equivalent to the $\|\cdot\|_2$ norm (explicitly, $d_-^{r/2}\|\cdot\|_2 \le \|\cdot\|_{\mathcal V} \le d_+^{r/2}\|\cdot\|_2$), we also have $u_{k_l} \to u$ as $l \to \infty$. Moreover, since convergence with respect to $\|\cdot\|_2$ implies convergence of each component $(u_{k_l})_i$ ($i \in V$) in $\mathbb R$, we have $u \in \mathcal K$.

Next we compute

$$\tau_{k_l}\tilde J_{\tau_{k_l}}(u_{k_l}) = \langle\chi_V - u_{k_l}, e^{-\tau_{k_l}L}u_{k_l}\rangle_{\mathcal V} = \langle\chi_V, u_{k_l}\rangle_{\mathcal V} - \langle u_{k_l}, e^{-\tau_{k_l}L}u_{k_l}\rangle_{\mathcal V} \ge \langle\chi_V - u_{k_l}, u_{k_l}\rangle_{\mathcal V}, \quad (62)$$

where we used both the mass conservation property $\langle\chi_V, e^{-\tau_{k_l}L}u_{k_l}\rangle_{\mathcal V} = \langle\chi_V, u_{k_l}\rangle_{\mathcal V}$ and the inequality $\langle u_{k_l}, e^{-\tau_{k_l}L}u_{k_l}\rangle_{\mathcal V} \le \langle u_{k_l}, u_{k_l}\rangle_{\mathcal V}$ from Lemma 5.2. Thus, for all $l \in \mathbb N$, we have

$$0 \le \langle\chi_V - u_{k_l}, u_{k_l}\rangle_{\mathcal V} \le C\tau_{k_l}. \quad (63)$$

Assume that $u \in \mathcal K\setminus\mathcal V^b$; then there is an $i \in V$ such that $0 < u_i < 1$. Hence, by Lemma 5.9, there is a $\delta > 0$ such that for all $l$ large enough, $\delta < (u_{k_l})_i < 1 - \delta$, and thus

$$\langle\chi_V - u_{k_l}, u_{k_l}\rangle_{\mathcal V} \ge d_i^r\left(1 - (u_{k_l})_i\right)(u_{k_l})_i \ge d_i^r\delta^2. \quad (64)$$
Let $l$ be large enough such that $C\tau_{k_l} < d_i^r\delta^2$ and large enough such that (64) holds. Then we have arrived at a contradiction with (63), and thus we conclude that $u \in \mathcal V^b$. □

Remark 5.12 The computation in (62) shows that, for all $\tau > 0$ and for all $u \in \mathcal K$, we have $\tau\tilde J_\tau(u) \ge \langle\chi_V - u, u\rangle_{\mathcal V} \ge 0$. Moreover, we have $\tilde J_\tau(0) = \tilde J_\tau(\chi_V) = 0$. Furthermore, since each term of the sum in the inner product is nonnegative, we have $\langle\chi_V - u, u\rangle_{\mathcal V} = 0$ if and only if $u = 0$ or $u = \chi_V$. Hence we also have $\tilde J_\tau(u) = 0$ if and only if $u = 0$ or $u = \chi_V$. The minimization of $\tilde J_\tau$ over $\mathcal K$ is thus not a very interesting problem. Therefore we now extend our $\Gamma$-convergence and equi-coercivity results from above to incorporate a mass constraint.

As an aside, note that Lemma S5.10 and Remark S5.11 in Supplementary Materials guarantee that for $\tau$ large enough and $S^0$ such that $\mathrm{vol}(S^0) \neq \frac12\mathrm{vol}(V)$, the (OKMBO) algorithm converges in at most one step to the minimizer $\emptyset$ or the minimizer $V$.

Let $M \in \mathbb M$, where $\mathbb M$ is the set of admissible masses as defined in (11). Remember from (10) that $\mathcal K_M$ is the set of $[0,1]$-valued functions in $\mathcal V$ with mass equal to $M$. For
τ k 0 k→∞ Furthermore, if {v } ⊂ K is a sequence for which there exists a C > 0 such k k∈N M that, for all k ∈ N, J (v ) ≤ C, then there is a subsequence {v } ⊂{v } and τ k k l∈N k k∈N a v ∈ K ∩ V such that v → v as l →∞. M k Proof We note that any converging sequence in K with limit u is also a converging ˜ ˜ sequence in K with limit u. Moreover, on K we have J = J and F = F . Hence M τ τ 0 0 k k (LB) follows directly from (LB) in Theorem 5.10. For (UB) we note that if we deﬁne, for all k ∈ N, u := u, then trivially the mass constraint on u is satisﬁed for all k ∈ N and the result follows by a proof analogous to that of (UB) in Theorem 5.10. Finally, for the equi-coercivity result, we ﬁrst note that by Theorem 5.11 we imme- diately get the existence of a subsequence {v } ⊂{v } which converges to k l∈N k k∈N some v ∈ K. Since the functional M is continuous with respect to V-convergence, we conclude that in fact v ∈ K . Remark 5.14 Note that for τ> 0, M ∈ M, and u ∈ K ,wehave n−1 τ L −τ m 2 τ J (u) = M(u) − u, e u = M − e u,φ τ V m=0 n−1 −τ m 2 = M 1 − − e u,φ vol (V ) m=1 Hence ﬁnding the minimizer of J in K is equivalent to ﬁnding the maximizer of τ M n−1 −τ m 2 u → e u,φ in K . m=1 The following result shows that the -convergence and equi-coercivity results still hold, even if the mass conditions are not strictly satisﬁed along the sequence. 123 J Nonlinear Sci Corollary 5.15 Let G = (V , E,ω) ∈ G,q = 1, and γ ≥ 0 and let C ⊂ M be a set of admissible masses. For each k ∈ N,let C ⊂[0, ∞) be such that C = C and k k k∈N deﬁne, for all k ∈ N, K := {u ∈ K : M(u) ∈ C }. Let {τ } be a sequence of positive real numbers such that τ → 0 as k →∞. k k∈N k Deﬁne J : K → R by J (u), if u ∈ K , k M J (u) := +∞, otherwise. Furthermore, deﬁne F : K → R by F (u), if u ∈ K , 0 M F (u) := +∞, otherwise. 
Then the results of Theorem 5.13 hold with $J_{\tau_k}$ and $F_0$ replaced by $\bar J_{\tau_k}$ and $\bar F_0$, respectively, and with the sequences $\{u_k\}_{k\in\mathbb N}$ and $\{v_k\}_{k\in\mathbb N}$ in (LB), (UB), and the equi-coercivity result taken in $\mathcal K$ instead of $\mathcal K_M$, such that, for each $k\in\mathbb N$, $u_k,v_k\in\mathcal K^k$.

Proof The proof is a slightly tweaked version of the proof of Theorem 5.13. On $\mathcal K^k$ we have that $\bar J_{\tau_k}=\tilde J_{\tau_k}$, and on $\{u\in\mathcal K:\mathcal M(u)\in C\}$ we have that $\bar F_0=\tilde F_0$. Hence (LB) follows from (LB) in Theorem 5.10. For (UB) we note that, since $C_k\supset\bigcap_{k\in\mathbb N}C_k = C$ for each $k\in\mathbb N$, the recovery sequence defined by, for all $k\in\mathbb N$, $u_k:=u$, is admissible and the proof follows as in the proof of Theorem 5.10.

Finally, for the equi-coercivity result, we obtain a converging subsequence $\{v_{k_l}\}_{l\in\mathbb N}\subset\{v_k\}_{k\in\mathbb N}$ with limit $v\in\mathcal K$ by Theorem 5.11. By continuity of $\mathcal M$ it follows that $\mathcal M(v)\in\bigcap_{k\in\mathbb N}\overline{C_k}$, where $\overline{\,\cdot\,}$ denotes the topological closure in $[0,\infty)\subset\mathbb R$. Because $\mathcal M$ is a set of finite cardinality in $\mathbb R$, we know that $\bigcap_{k\in\mathbb N}C_k\subset C\subset\mathcal M$ is closed, hence $\mathcal M(v)\in\bigcap_{k\in\mathbb N}C_k\subset C$ and thus $v$ satisfies the mass constraint. $\square$

Remark 5.16 By a standard $\Gamma$-convergence result (Dal Maso 1993, Chapter 7; Braides 2002, Section 1.5) we conclude from Theorem 5.13 that (for fixed $M\in\mathcal M$) minimizers of $J_\tau$ converge (up to a subsequence) to a minimizer of $F_0$ (with $q=1$) when $\tau\to0$. By Lemma 5.3 we know that iterates of (OKMBO) solve (52) and decrease the value of $\tilde J_\tau$, for fixed $\tau>0$ (and thus of $J_\tau$). By Lemma S5.10 in Supplementary Materials, however, we know that when $\tau$ is sufficiently small, the (OKMBO) dynamics is pinned, in the sense that each iterate is equal to the initial condition. Hence, unless the initial condition is a minimizer of $J_\tau$, for small enough $\tau$ the (OKMBO) algorithm does not generate minimizers of $J_\tau$, and thus we cannot use Theorem 5.13 to conclude that solutions of (OKMBO) approximate minimizers of $F_0$ when $\tau\to0$.
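The spectral identity in Remark 5.14 is easy to check numerically. The following sketch (in Python with NumPy, whereas the implementation discussed in Sect. 7 uses MATLAB) verifies it in the simplest setting $r=0$ and $\gamma=0$, so that $L$ reduces to the graph Laplacian, the $\mathcal V$-inner product is the Euclidean one, and $\operatorname{vol}(V)=n$; the graph weights and the function $u$ are arbitrary test choices, not examples from this paper.

```python
import numpy as np

# Hedged sketch: r = 0 and gamma = 0, so L is the graph Laplacian, the
# V-inner product is the Euclidean one, vol(V) = n, and M(u) = sum_i u_i.
rng = np.random.default_rng(0)
n = 8
W = rng.uniform(0.5, 1.5, (n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
L = np.diag(W.sum(axis=1)) - W          # graph Laplacian

lam, Phi = np.linalg.eigh(L)            # lam[0] ~ 0; Phi has orthonormal columns
tau = 0.7
u = rng.uniform(0.0, 1.0, n)            # a [0,1]-valued node function in K
M = u.sum()                             # its mass

E = Phi @ np.diag(np.exp(-tau * lam)) @ Phi.T       # e^{-tau L}
J_direct = (np.ones(n) - u) @ (E @ u)               # <chi_V - u, e^{-tau L} u>

a = Phi.T @ u                                       # coefficients <u, phi^m>
J_spectral = M * (1 - M / n) - np.sum(np.exp(-tau * lam[1:]) * a[1:] ** 2)

assert abs(J_direct - J_spectral) < 1e-10
```

Since the first term depends only on the mass, minimizing $J_\tau$ over $\mathcal K_M$ indeed amounts to maximizing the spectral sum, as noted in Remark 5.14.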
As an interesting aside, and a possible angle for future work, we note that it is not uncommon in sequential linear programming for the constraints (such as the constraint that the domain of $J_\tau$ consists of $[0,1]$-valued functions only) to be an obstacle to convergence; compare, for example, the Zoutendijk method with the Topkis–Veinott method (Bazaraa et al. 1993, Chapter 10). An analogous relaxation of the constraints might be a worthwhile direction for alternative MBO type methods for minimization of functionals like $J_\tau$. We will not follow that route in this paper. Instead, in the next section, we will look at a variant of (OKMBO) which conserves mass in each iteration.

5.3 A Mass Conserving Graph Ohta–Kawasaki MBO Scheme

In Sect. 5.2 we saw that, for given $M\in\mathcal M$, any solution to
$$u\in\operatorname*{argmin}_{\tilde u\in\mathcal K_M} J_\tau(\tilde u), \qquad (65)$$
where $J_\tau$ is as in (50), is an approximate solution to the $F_0$ minimization problem in (34) (with $q=1$) in the $\Gamma$-convergence sense discussed in Remark 5.16.

We propose the (mcOKMBO) scheme described below to include the mass condition into the (OKMBO) scheme. As part of the algorithm we need a node relabelling function. For $u\in\mathcal V$, let $R_u:V\to\{1,\dots,n\}$ be a bijection such that, for all $i,j\in V$, $R_u(i)<R_u(j)$ only if $u_i\ge u_j$. Note that such a function need not be unique, as it is possible that $u_i=u_j$ while $i\ne j$. Given a relabelling function $R_u$, we define the relabelled version of $u$, denoted by $u^R\in\mathcal V$, by, for all $i\in V$,
$$u^R_i := u_{R_u^{-1}(i)}. \qquad (66)$$
In other words, $R_u$ relabels the nodes in $V$ with labels in $\{1,\dots,n\}$, such that in the new labelling we have $u^R_1\ge u^R_2\ge\cdots\ge u^R_n$.

Because this will be of importance later in the paper, we introduce the new set of almost binary functions with prescribed mass $M\ge0$:
$$\mathcal V^{ab}_M := \big\{u\in\mathcal K_M : \exists i\in V\ \forall j\in V\setminus\{i\}\quad u_j\in\{0,1\}\big\}.$$

In Sect. 5.2 we required various rescaled versions of $J_\tau$ defined on different domains, for technical reasons related to the $\Gamma$-convergence proof.
Any of those functionals could be substituted in (65) for $J_\tau$, as long as their domain contains $\mathcal K_M$. In the (mcOKMBO) algorithm we choose the initial function $v^0$ from $\mathcal V^{ab}_M$. As the algorithm enforces the mass condition in each iteration, it is not necessary for the initial condition to satisfy the mass condition (or even to be almost binary; it could be any function in $\mathcal K$) in order for the output functions $v^k$ to be in $\mathcal V^{ab}_M$, but for a cleaner presentation we assume it does (and is).

Algorithm (mcOKMBO): the mass conserving graph Ohta–Kawasaki Merriman–Bence–Osher algorithm

Data: a prescribed mass value $M\in\mathcal M$, an initial function $v^0\in\mathcal V^{ab}_M$, a parameter $r\in[0,1]$, a parameter $\gamma\ge0$, a time step $\tau>0$, and the number of iterations $N\in\mathbb N\cup\{\infty\}$.

Output: a sequence of functions $\{v^k\}_{k=1}^N\subset\mathcal V^{ab}_M$, which is the (mcOKMBO) evolution of $v^0$.

for $k=1$ to $N$, do

ODE step. Compute $u\in\mathcal V_\infty$ by solving (47), where $u_0=v^{k-1}$.

Mass conserving threshold step. Let $R_u$ be a relabelling function and $u^R$ the relabelled version of $u$ as in (66), and write $d^{R_u}_i := d_{R_u^{-1}(i)}$ for the relabelled degrees. Let $i^*$ be the unique $i\in\{0,1,\dots,n\}$ such that
$$\sum_{i'=1}^{i}\big(d^{R_u}_{i'}\big)^r \le M \quad\text{and}\quad \sum_{i'=1}^{i+1}\big(d^{R_u}_{i'}\big)^r > M.$$
Define $v^k\in\mathcal V$ by, for all $i\in V$ (in the relabelled ordering),
$$v^k_i := \begin{cases} 1, & \text{if } 1\le i\le i^*,\\ \big(d^{R_u}_{i^*+1}\big)^{-r}\Big(M-\sum_{i'=1}^{i^*}\big(d^{R_u}_{i'}\big)^r\Big), & \text{if } i=i^*+1,\\ 0, & \text{if } i^*+2\le i\le n.\end{cases}$$

We see that the ODE step in (mcOKMBO) is as the ODE step in (OKMBO), using the outcome of the previous iteration as initial condition. However, the threshold step is significantly different. In creating the function $v^k$, it assigns the available mass to the nodes $\{1,\dots,i^*\}$ on which $u$ has the highest value. Note that if $r=0$, there is exactly enough mass to assign the value 1 to each node in $\{1,\dots,i^*\}$, since we assumed that $M\in\mathcal M$ and each node contributes the same value to the mass via the factor $d_i^0=1$. In this case we see that $v^k_{i^*+1}=0$. However, if $r\in(0,1]$, this is not necessarily the case and it is possible to end up with a value in $(0,1)$ being assigned to $v^k_{i^*+1}$ (even if $v^{k-1}\in\mathcal V^b_M$).
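The mass conserving threshold step can be written down compactly. The following Python function is a sketch with our own naming, not the paper's MATLAB implementation; it covers general $r\in[0,1]$, and a stable descending sort plays the role of the relabelling function $R_u$.

```python
import numpy as np

def mc_threshold(u, d, r, M):
    """Mass conserving threshold step of (mcOKMBO) (illustrative sketch).

    u : diffused node values (output of the ODE step),
    d : node degrees, r : degree exponent, M : prescribed mass.
    Sets the nodes with the largest u-values to 1 until the next node would
    overshoot M (node i contributes mass d_i^r), gives the threshold node
    i*+1 the leftover mass, and sets all remaining nodes to 0.
    """
    order = np.argsort(-u, kind="stable")        # a relabelling R_u
    dr = d[order] ** r                           # relabelled node masses d_i^r
    csum = np.cumsum(dr)
    i_star = int(np.searchsorted(csum, M, side="right"))  # largest i with cumsum <= M
    v = np.zeros_like(u, dtype=float)
    v[order[:i_star]] = 1.0
    if i_star < len(u):
        used = csum[i_star - 1] if i_star > 0 else 0.0
        v[order[i_star]] = (M - used) / dr[i_star]        # value in [0, 1)
    return v
```

With unit node masses ($r=0$) and integer $M$ this reduces to setting the $M$ largest entries to one; in general the output is almost binary and satisfies $\mathcal M(v)=M$ exactly.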
Hence, in general, $v^k\in\mathcal V^{ab}_M$, but not necessarily $v^k\in\mathcal V^b_M$. Of course there is no issue in evaluating $F_0(v^k)$ for almost binary functions $v^k$, but strictly speaking an almost binary $v^N$ cannot serve as approximate solution to the $F_0$ minimization problem in (34), as it is not admissible. We can either accept that the qualifier "approximate" refers not only to approximate minimization, but also to the fact that $v^N$ is binary when restricted to $V\setminus\{i^*+1\}$, but not necessarily on all of $V$; or we can apply a final thresholding step to $v^N$ and set the value at node $i^*+1$ to either 0 or 1, depending on which choice leads to the lowest value of $F_0$ and/or the smallest deviation of the mass from the prescribed mass $M$. In the latter case, the function will be binary, but the adherence to the mass constraint will be "approximate". We emphasize again that this is not an issue when $r=0$ (or on a regular graph, i.e. a graph in which each node has the same degree). The case $r=0$ is the most interesting case, as the mass condition can be very restrictive when $r\in(0,1]$, especially on (weighted) graphs in which most nodes each have a different degree. When $r\in(0,1]$, our definition of (mcOKMBO) suggests the first interpretation of "approximate", i.e. we use $v^N$ as is and accept that its value at node $i^*+1$ may be in $(0,1)$. All our numerical examples in Sect. 7 (and Section S9 in Supplementary Materials) use $r=0$.

Note that the sequence $\{v^k\}_{k=1}^N$ generated by the (mcOKMBO) scheme is not necessarily unique, as the relabelling function $R_u$ in the mass conserving threshold step is not uniquely determined if there are two different nodes $i,j\in V$ such that $u_i=u_j$. This non-uniqueness of $R_u$ can lead to non-uniqueness in $v^k$ if exchanging the labels $R_u(i)$ and $R_u(j)$ of those nodes leads to a different 'threshold node' $i^*+1$. In the practice of our examples in Sect. 7 (and Section S9 in Supplementary Materials) we used the MATLAB function sort($\cdot$, 'descend') to order the nodes.

Lemma 5.18 shows that some of the important properties of (OKMBO) from Lemma 5.3 and Corollary 5.5 also hold for (mcOKMBO). First we state an intermediate lemma.

Lemma 5.17 Let $G=(V,E,\omega)\in\mathcal G$, $M\ge0$, and $z\in\mathcal V$. Consider the minimization problem
$$\min_{w\in\mathcal V}\ \sum_{l\in V} w_l z_l, \quad\text{subject to}\quad \sum_{l\in V} w_l = M \ \text{ and }\ \forall l\in V\quad 0\le w_l\le d_l^r. \qquad (67)$$
Let $w^*\in\mathcal V$ satisfy the constraints in (67). Then $w^*$ is a minimizer for (67) if and only if, for all $i,j\in V$: if $z_i<z_j$, then $w^*_i=d_i^r$ or $w^*_j=0$.

Proof See Section S10.3 in Supplementary Materials. $\square$

Lemma 5.18 Let $G=(V,E,\omega)\in\mathcal G$, $\gamma\ge0$, $\tau>0$, and $M\ge0$. Let $J_\tau:\mathcal V\to\mathbb R$ be as in (50), $v^0\in\mathcal V^{ab}_M$, and let $\{v^k\}_{k=1}^N\subset\mathcal V^{ab}_M$ be a sequence generated by (mcOKMBO). Then, for all $k\in\{1,\dots,N\}$,
$$v^k\in\operatorname*{argmin}_{v\in\mathcal K_M} dJ_\tau^{v^{k-1}}(v), \qquad (68)$$
where $dJ_\tau^{v^{k-1}}$ is given in (51). Moreover, for all $k\in\{1,\dots,N\}$, $J_\tau(v^k)\le J_\tau(v^{k-1})$, with equality if and only if $v^k=v^{k-1}$. Finally, there is a $K\ge0$ such that for all $k\ge K$, $v^k=v^K$.

Proof For all $i\in V$, define $w_i := d_i^r v_i$ and $z_i := \big(\chi_V-2e^{-\tau L}v^{k-1}\big)_i$. Then the minimization problem (68) turns into (67). Hence, by Lemma 5.17, $v^k$ is a solution of (68) if and only if $v^k$ satisfies the constraints in (68) and, for all $i,j\in V$: if $\big(e^{-\tau L}v^{k-1}\big)_i > \big(e^{-\tau L}v^{k-1}\big)_j$, then $v^k_i=1$ or $v^k_j=0$. It is easily checked that $v^k$ generated from $v^{k-1}$ by one iteration of the (mcOKMBO) algorithm satisfies these properties.

We note that (68) differs from (52) only in the set of admissible functions over which the minimization takes place. This difference does not necessitate any change in the proof of the second part of the lemma compared to the proof of the equivalent statements at the end of Lemma 5.3.

The final part of the lemma is trivially true if $N\in\mathbb N$. Now assume $N=\infty$. The proof is similar to that of Corollary 5.5. In the current case, however, our functions $v^k$ are not necessarily binary.
We note that for each $k$, there is at most one node $i(k)\in V$ at which $v^k$ can take a value in $(0,1)$. For fixed $k$ and $i(k)$, there are only finitely many different possible functions that $v^k\big|_{V\setminus\{i(k)\}}$ can be. Because $\mathcal M(v^k) = \sum_{i\in V\setminus\{i(k)\}} d_i^r v^k_i + d_{i(k)}^r v^k_{i(k)} = M$, this leads to finitely many possible values $v^k_{i(k)}$ can have. Since $i(k)$ can be only one of finitely many ($n$) nodes, there are only finitely many possible functions that $v^k$ can be. Hence the proof now follows as in Corollary 5.5. $\square$

Remark 5.19 Similar to what we saw in Remark 5.4 about (OKMBO), we note that (68) is a sequential linear programming approach to minimizing $J_\tau$ over $\mathcal K_M$; the linear approximation of $J_\tau$ over $\mathcal K_M$ is minimized instead.

Remark S8.2 in Supplementary Materials discusses the behaviour of (mcOKMBO) at small and large $\tau$. If $\tau$ is too small, pinning can occur, similar to, but for different reasons than, the pinning behaviour of (OKMBO) at small $\tau$.

6 Special Classes of Graphs

There are certain classes of graphs on which the dynamics of equation (47) can be directly related to graph diffusion equations, in a way which we will make precise in Sect. 6.1. The tools which we develop in that section will again be used in Sect. 6.2 to prove additional comparison principles.

6.1 Graph Transformation

Definition 6.1 Let $G=(V,E,\omega)\in\mathcal G$. For all $j\in V$, let $\nu^{V\setminus\{j\}}$ be the equilibrium measure which solves (12) for $S=V\setminus\{j\}$, and define the functions $f^j\in\mathcal V$ as
$$f^j := \nu^{V\setminus\{j\}} - \mathcal A\big(\nu^{V\setminus\{j\}}\big). \qquad (69)$$
Now we introduce the following classes of graphs:
1. $\mathcal C := \big\{G\in\mathcal G : \forall j\in V\ \forall i\in V\setminus\{j\}\quad f^j_i\ge0\big\}$,
2. $\mathcal C' := \big\{G\in\mathcal G : \forall j\in V\ \forall i\in V\setminus\{j\}\quad \omega_{ij}>0 \text{ or } f^j_i\ge0\big\}$,
3. $\mathcal C_\gamma := \big\{G\in\mathcal C' : \forall j\in V\ \forall i\in V\setminus\{j\}\quad \omega_{ij}=0 \text{ or } d_i^{-r}\omega_{ij}+\gamma\frac{d_j^r}{\operatorname{vol}(V)}f^j_i>0\big\}$, for $\gamma>0$. For $\gamma=0$, we define $\mathcal C_0:=\mathcal G$.

Remark 6.2 Let us have a closer look at the properties of graphs in $\mathcal C_\gamma$. Let $\gamma>0$.
If $G\in\mathcal C_\gamma$, then per definition $G\in\mathcal C'$. Let $i,j\in V$. If $\omega_{ij}=0$, then per definition of $\mathcal C'$, $f^j_i\ge0$ and thus $d_i^{-r}\omega_{ij}+\gamma\frac{d_j^r}{\operatorname{vol}(V)}f^j_i\ge0$. On the other hand, if $\omega_{ij}>0$, then per definition of $\mathcal C_\gamma$, $d_i^{-r}\omega_{ij}+\gamma\frac{d_j^r}{\operatorname{vol}(V)}f^j_i>0$.

The definition of $\mathcal C_0$ is purely for notational convenience, so that we do not always need to treat the case $\gamma=0$ separately. Note that if we were to substitute $\gamma=0$ directly into the definition of $\mathcal C_\gamma$, we would have $\mathcal C_0=\mathcal C'$ instead, which would be unnecessarily restrictive.

Lemma 6.3 Let the setting and notation be as in Definition 6.1. Then $\mathcal C\subset\mathcal C'$ and, for all $\gamma\ge0$, $\mathcal C\subset\mathcal C_\gamma$. Moreover, if $G\in\mathcal C'\setminus\mathcal C$, there is a $\gamma_*(G)>0$ such that, for all $\gamma\in[0,\gamma_*(G))$, $G\in\mathcal C_\gamma$.

Proof The first two inclusions stated in the lemma follow immediately from the definitions of the sets involved. If $\gamma=0$, then $G\in\mathcal C_\gamma$ in the final statement is trivially true. To prove it for $\gamma\ne0$, let $G\in\mathcal C'\setminus\mathcal C$ and let $j\in V$, $i\in V\setminus\{j\}$. If $f^j_i\ge0$, then $\omega_{ij}=0$ or, for all $\gamma>0$, $d_i^{-r}\omega_{ij}+\gamma\frac{d_j^r}{\operatorname{vol}(V)}f^j_i\ge d_i^{-r}\omega_{ij}>0$. If $f^j_i<0$ (and, by the assumption that $G\notin\mathcal C$, there are $j\in V$, $i\in V\setminus\{j\}$ for which this is the case), then by definition of $\mathcal C'$ we have $\omega_{ij}>0$. Define
$$\gamma_*(G) := \operatorname{vol}(V)\,\min\Big\{d_i^{-r}d_j^{-r}\,\omega_{ij}\,\big|f^j_i\big|^{-1} : j\in V,\ i\in V\setminus\{j\}\ \text{such that}\ f^j_i<0\Big\}$$
and let $\gamma\in(0,\gamma_*(G))$; then $d_i^{-r}\omega_{ij}+\gamma\frac{d_j^r}{\operatorname{vol}(V)}f^j_i > d_i^{-r}\omega_{ij}-\gamma_*(G)\frac{d_j^r}{\operatorname{vol}(V)}\big|f^j_i\big| \ge 0$. $\square$

Lemma S7.1 in Supplementary Materials shows $\mathcal C$ is not empty; in particular, unweighted star graphs with three or more nodes are in $\mathcal C$. Remark S7.2 shows that $\mathcal C'\setminus\mathcal C\ne\emptyset$. Lemma S7.3 and Remarks S7.4 and S7.6 give and illustrate different sufficient conditions for graphs to be in $\mathcal C$ or $\mathcal C'$, which are used in Corollary S7.5 to show that complete graphs are in $\mathcal C$.

The following lemma hints at the reason for our interest in the functions $f^j$ from (69).

Lemma 6.4 Let $G=(V,E,\omega)\in\mathcal G$. Let $j\in V$ and let $f^j\in\mathcal V$ be as in (69).
Then the function $\varphi^j\in\mathcal V$, defined by
$$\varphi^j := -\frac{d_j^r}{\operatorname{vol}(V)}\,f^j, \qquad (70)$$
solves (45) for $\chi_{\{j\}}$.

Proof From (69) and (12) it follows immediately that, for all $i\in V\setminus\{j\}$, $\big(\Delta f^j\big)_i = \big(\Delta\nu^{V\setminus\{j\}}\big)_i = 1$. Thus, for all $i\in V\setminus\{j\}$, $\big(\Delta\varphi^j\big)_i = -\frac{d_j^r}{\operatorname{vol}(V)} = \big(\chi_{\{j\}}\big)_i - \mathcal A\big(\chi_{\{j\}}\big)$. Moreover, by (5) we have $0 = \mathcal M\big(\Delta\varphi^j\big) = d_j^r\big(\Delta\varphi^j\big)_j + \sum_{i\in V\setminus\{j\}} d_i^r\big(\Delta\varphi^j\big)_i$, and thus
$$\big(\Delta\varphi^j\big)_j = -d_j^{-r}\sum_{i\in V\setminus\{j\}} d_i^r\big(\Delta\varphi^j\big)_i = d_j^{-r}\sum_{i\in V\setminus\{j\}} d_i^r\,\frac{d_j^r}{\operatorname{vol}(V)} = \frac{\operatorname{vol}(V)-d_j^r}{\operatorname{vol}(V)} = 1-\frac{d_j^r}{\operatorname{vol}(V)} = \big(\chi_{\{j\}}\big)_j - \mathcal A\big(\chi_{\{j\}}\big).$$
Finally, by (69), $\mathcal M(f^j)=0$, thus $\mathcal M(\varphi^j)=0$. $\square$

Corollary 6.5 Let $G=(V,E,\omega)\in\mathcal G$. Let $\lambda_m$ and $\phi^m$ be the eigenvalues and corresponding eigenfunctions of the graph Laplacian (with parameter $r$), as in (35), (36). Let $j\in V$. If $\varphi^j\in\mathcal V$ is as defined in (70), then, for all $i\in V$,
$$\varphi^j_i = \sum_{m=1}^{n-1}\lambda_m^{-1}\,d_j^r\,\phi^m_j\,\phi^m_i. \qquad (71)$$
In particular, if $f^j$ is as in (69) and $i\in V$, then $f^j_i\ge0$ if and only if $\sum_{m=1}^{n-1}\lambda_m^{-1}d_j^r\phi^m_j\phi^m_i\le0$.

Proof Let $j\in V$. By Lemma 6.4 we know that $\varphi^j$ solves (45) for $\chi_{\{j\}}$. Then by (54) we can write, for all $i\in V$,
$$\varphi^j_i = \sum_{m=1}^{n-1}\lambda_m^{-1}\big\langle\chi_{\{j\}},\phi^m\big\rangle_{\mathcal V}\,\phi^m_i = \sum_{m=1}^{n-1}\lambda_m^{-1}d_j^r\phi^m_j\phi^m_i,$$
where we used that
$$\big\langle\chi_{\{j\}},\phi^m\big\rangle_{\mathcal V} = \sum_{k\in V} d_k^r\,\delta_{jk}\,\phi^m_k = d_j^r\phi^m_j, \qquad (72)$$
where $\delta_{jk}$ is the Kronecker delta. The final statement follows from the definition of $\varphi^j$ in Lemma 6.4, which shows that, for all $i\in V$, $f^j_i\ge0$ if and only if $\varphi^j_i\le0$. $\square$

Corollary 6.6 Let $G=(V,E,\omega)\in\mathcal G$. For all $j\in V$, let $\varphi^j$ be as in (70), let $f^j$ be as in (69), and let $\nu^{V\setminus\{j\}}$ be the equilibrium measure for $V\setminus\{j\}$ as in (12). If $r=0$, then, for all $i,j\in V$, $\varphi^j_i=\varphi^i_j$, $f^j_i=f^i_j$, and
$$\nu^{V\setminus\{j\}}_i - \mathcal A\big(\nu^{V\setminus\{j\}}\big) = \nu^{V\setminus\{i\}}_j - \mathcal A\big(\nu^{V\setminus\{i\}}\big).$$
Proof This follows immediately from (71), (70), and (69). $\square$

Remark 6.7 The result of Corollary 6.5 is not only an ingredient in the proof of Theorem 6.9, but can also be useful when testing numerically whether or not a graph is in $\mathcal C$ or in $\mathcal C_\gamma$.

Lemma 6.8 Let $\gamma\ge0$ and let $G=(V,E,\omega)\in\mathcal C_\gamma$.
Let $L$ be as defined in (46). Let $\lambda_m$ and $\phi^m$ be the eigenvalues and corresponding eigenfunctions of the graph Laplacian (with parameter $r$), as in (35), (36), and define
$$\tilde\omega_{ij} := \begin{cases} -d_j^r\sum_{m=1}^{n-1}\Lambda_m\phi^m_i\phi^m_j, & \text{if } i\ne j,\\ 0, & \text{if } i=j,\end{cases} \qquad (73)$$
where $\Lambda_m$ is defined in (56). Then, for all $i,j\in V$, $\tilde\omega_{ij}\ge0$. Moreover, if $\omega_{ij}>0$, then $\tilde\omega_{ij}>0$. If, additionally, $G\in\mathcal C$, then $\tilde\omega_{ij}\ge d_i^{-r}\omega_{ij}$.

Proof Expanding $\chi_{\{j\}}$ as in (38) and using (72), we find, for $i,j\in V$,
$$\big(L\chi_{\{j\}}\big)_i = \sum_{m=1}^{n-1}\big\langle\chi_{\{j\}},\phi^m\big\rangle_{\mathcal V}\,\Lambda_m\,\phi^m_i = \sum_{m=1}^{n-1}\Lambda_m d_j^r\phi^m_j\phi^m_i. \qquad (74)$$
Note in particular that, if $i\ne j$, then $\tilde\omega_{ij} = -\big(L\chi_{\{j\}}\big)_i$. For $i,j\in V$ we also compute
$$\big(\Delta\chi_{\{j\}}\big)_i = d_i^{-r}\sum_{k\in V}\omega_{ik}\big(\delta_{ji}-\delta_{jk}\big) = d_i^{-r}\big(d_i\delta_{ji}-\omega_{ij}\big), \qquad (75)$$
hence, if $i\ne j$, then $\omega_{ij} = -d_i^r\big(\Delta\chi_{\{j\}}\big)_i$. Combining the above with (70), we find, for $i\ne j$,
$$\tilde\omega_{ij} = -\big(L\chi_{\{j\}}\big)_i = -\Big(\big(\Delta\chi_{\{j\}}\big)_i + \gamma\varphi^j_i\Big) = d_i^{-r}\omega_{ij} + \gamma\frac{d_j^r}{\operatorname{vol}(V)}f^j_i \ge 0, \qquad (76)$$
where the inequality follows since $G\in\mathcal C_\gamma$ (note that for $\gamma=0$ the inequality follows from the nonnegativity of $\omega$). Moreover, if $\omega_{ij}>0$, then, by definition of $\mathcal C_\gamma$, the inequality is strict, and thus $\tilde\omega_{ij}>0$. If additionally $G\in\mathcal C$, then, for $i\ne j$, $f^j_i\ge0$ and thus, by (76), $\tilde\omega_{ij}\ge d_i^{-r}\omega_{ij}$. $\square$

Lemma 6.8 suggests that, given a graph $G\in\mathcal C_\gamma$ with edge weights $\omega$, we can construct a new graph $\tilde G$ with edge weights $\tilde\omega$ as in (73), which are also nonnegative. The next theorem shows that, in fact, if $r=0$, then this new graph is in $\mathcal G$ and the graph Laplacian on $\tilde G$ is related to $L$.

Theorem 6.9 Let $\gamma\ge0$ and let $G=(V,E,\omega)\in\mathcal C_\gamma$. Let $L$ be as defined in (46). Let $\lambda_m$ and $\phi^m$ be the eigenvalues and corresponding eigenfunctions of the graph Laplacian (with parameter $r$), as in (35), (36). Assume $r=0$ and let $\tilde\omega$ be as in (73). Let $\tilde E$ contain an undirected edge $(i,j)$ between $i\in V$ and $j\in V$ if and only if $\tilde\omega_{ij}>0$. Then $\tilde G=(V,\tilde E,\tilde\omega)\in\mathcal G$. Let $\tilde\Delta$ be the graph Laplacian (with parameter $\tilde r$) on $\tilde G$. If $\tilde r=0$, then $\tilde\Delta = L$.
This property is used to prove connectedness of $\tilde G$ in Theorem 6.9, which is the reason why we did not define $\mathcal C_\gamma$ to simply be $\big\{G\in\mathcal G : \forall j\in V\ \forall i\in V\setminus\{j\}\quad d_i^{-r}\omega_{ij}+\gamma\frac{d_j^r}{\operatorname{vol}(V)}f^j_i\ge0\big\}$.

Proof In the following it is instructive to keep $r,\tilde r\in[0,1]$ as unspecified parameters in the proof and point out explicitly where the assumptions $r=0$ and $\tilde r=0$ are used. From the definition of $\tilde\omega_{ij}$ in (73) it follows directly that $\tilde G$ has no self-loops ($\tilde\omega_{ii}=0$). Moreover, using $r=0$ in (73), we see that $\tilde\omega_{ij}=\tilde\omega_{ji}$ and thus $\tilde G$ is undirected. Furthermore, by Lemma 6.8 we know that, for all $i,j\in V$, if $\omega_{ij}>0$, then $\tilde\omega_{ij}>0$. Thus $\tilde G$ is connected, because $G$ is connected. Hence $\tilde G\in\mathcal G$.

Repeating the computation from (75) for $\tilde\Delta$ instead of $\Delta$, we find, for $i,j\in V$,
$$\big(\tilde\Delta\chi_{\{j\}}\big)_i = \tilde d_i^{-\tilde r}\big(\tilde d_i\delta_{ji}-\tilde\omega_{ij}\big), \qquad (77)$$
where $\tilde d_i := \sum_{j\in V}\tilde\omega_{ij}$. Combining this with (74), we find that, if $j\in V$ and $i\in V\setminus\{j\}$, then
$$\big(\tilde\Delta\chi_{\{j\}}\big)_i = -\tilde d_i^{-\tilde r}\tilde\omega_{ij} = \tilde d_i^{-\tilde r}\,d_j^r\sum_{m=1}^{n-1}\Lambda_m\phi^m_i\phi^m_j = \tilde d_i^{-\tilde r}\big(L\chi_{\{j\}}\big)_i. \qquad (78)$$
Since we have $0 = \big\langle\phi^m,\chi_V\big\rangle_{\mathcal V} = \sum_{j\in V} d_j^r\phi^m_j$ for $m\in\{1,\dots,n-1\}$, it follows that, for all $i\in V$, $d_i^r\phi^m_i = -\sum_{j\in V\setminus\{i\}} d_j^r\phi^m_j$. Thus, for $i\in V$,
$$\tilde d_i = \sum_{j\in V\setminus\{i\}}\tilde\omega_{ij} = -\sum_{m=1}^{n-1}\Lambda_m\phi^m_i\sum_{j\in V\setminus\{i\}} d_j^r\phi^m_j = \sum_{m=1}^{n-1}\Lambda_m d_i^r\big(\phi^m_i\big)^2.$$
By (74) and (77) with $i=j$, we then have
$$\big(\tilde\Delta\chi_{\{i\}}\big)_i = \tilde d_i^{\,1-\tilde r} = \left(\sum_{m=1}^{n-1}\Lambda_m d_i^r\big(\phi^m_i\big)^2\right)^{1-\tilde r} = \Big(\big(L\chi_{\{i\}}\big)_i\Big)^{1-\tilde r}. \qquad (79)$$
Now we use $\tilde r=0$ in (78) and (79) to deduce that, for all $j\in V$, $\tilde\Delta\chi_{\{j\}} = L\chi_{\{j\}}$. Since $\{\chi_{\{i\}}\in\mathcal V : i\in V\}$ is a basis for the vector space $\mathcal V$, we conclude $\tilde\Delta = L$. $\square$

Remark 6.10 In the proof of Theorem 6.9, we can trace the roles that $r$ and $\tilde r$ play. We only used the assumption $r=0$ in order to deduce that $\tilde G$ is undirected. The assumption $\tilde r=0$ is necessary to obtain equality between $\tilde\Delta$ and $L$ in equations (78) and (79). These assumptions on $r$ and $\tilde r$ have a further interesting consequence. Since the graphs $G$ and $\tilde G$ have the same node set, both graphs also have the same associated set of node functions $\mathcal V$.
Moreover, since $r=\tilde r=0$, the $\mathcal V$-inner product is the same for both graphs. Hence we can view $\mathcal V$ corresponding to $G$ as the same inner product space as $\mathcal V$ corresponding to $\tilde G$. In this setting the operator equality $\tilde\Delta = L$ from Theorem 6.9 holds not only between operators on the vector space $\mathcal V$, but also between operators on the inner product space $\mathcal V$.

Lemma 6.11 Let $\gamma\ge0$, $q=1$, and let $G=(V,E,\omega)\in\mathcal C_\gamma$. Assume $r=0$. Let $\tilde\omega$ be as in (73) and $\tilde E$ as in Theorem 6.9. Let $\tilde r$ be the $r$-parameter corresponding to the graph $\tilde G=(V,\tilde E,\tilde\omega)$. Suppose $S\subset V$, $F_0$ is as in (32), for all $i\in V$, $\tilde d_i := \sum_{j\in V}\tilde\omega_{ij}$, and $\tilde\kappa_S$ is the graph curvature of $S$ as in Definition 3.4 corresponding to $\tilde\omega$. Then $F_0(\chi_S) = \sum_{i\in S}\sum_{j\in V\setminus S}\tilde\omega_{ij}$. Moreover, if $\tilde r=0$, then $F_0(\chi_S) = \sum_{i\in S}\big(\tilde\kappa_S\big)_i$.

Proof From Corollary 4.12 and (56) we find
$$F_0(\chi_S) = \sum_{m=1}^{n-1}\Lambda_m\big\langle\chi_S,\phi^m\big\rangle_{\mathcal V}^2 = \sum_{i,j\in V}(\chi_S)_i(\chi_S)_j\sum_{m=1}^{n-1}\Lambda_m d_i^r d_j^r\phi^m_i\phi^m_j = \sum_{i\in S}\tilde d_i - \sum_{i,j\in S}\tilde\omega_{ij},$$
where we used that $r=0$, together with (73) and the identity $\tilde d_i = \sum_{m=1}^{n-1}\Lambda_m d_i^r(\phi^m_i)^2$ from the proof of Theorem 6.9. Hence
$$F_0(\chi_S) = \sum_{i\in S}\sum_{j\in V}\tilde\omega_{ij} - \sum_{i\in S}\sum_{j\in S}\tilde\omega_{ij} = \sum_{i\in S}\sum_{j\in V\setminus S}\tilde\omega_{ij}.$$
Moreover, if $\tilde r=0$, then, for $i\in S$, $\big(\tilde\kappa_S\big)_i = \sum_{j\in V\setminus S}\tilde\omega_{ij}$, and the second claim follows. $\square$

Lemma S7.7 in Supplementary Materials gives upper and lower bounds on $\tilde\omega_{ij}-\omega_{ij}$ in terms of the Laplacian eigenvalues and eigenfunctions. Remarks S7.8 and S7.9 interpret these conditions in terms of the algebraic connectivity of the graph and use them to give some intuition about the (mcOKMBO) dynamics. Lemma S7.10 and Remarks S7.11 and S7.12 use the star graph to illustrate the results from this section.

6.2 More Comparison Principles

Theorem 6.9 tells us that, if $\gamma\ge0$ is such that $G\in\mathcal C_\gamma$ and if $r=0$, then the dynamics in (47) can be viewed as graph diffusion on a new graph with the same node set as the original graph $G$, but a different edge set and weights. We can use this to prove that properties of $\Delta$ also hold for $L$ on such graphs. Note that, when $\gamma=0$, $L=\Delta$, so this can be viewed as a generalization of results for $\Delta$ to $L$.
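The graph transformation of Theorem 6.9 is straightforward to set up numerically. The sketch below (Python/NumPy; the weighted graph and the value of $\gamma$ are arbitrary test choices) builds $\tilde\omega$ from the Laplacian eigenpairs via (73) with $r=0$ and checks the operator identity $\tilde\Delta = L$. Note that this identity is linear algebra and holds for the matrices constructed this way on any connected graph; what membership of $\mathcal C_\gamma$ adds, by Lemma 6.8, is that the new weights $\tilde\omega$ are nonnegative, so that $\tilde G$ is a bona fide graph in $\mathcal G$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, gamma = 6, 0.2
W = rng.uniform(0.5, 1.0, (n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
Delta = np.diag(W.sum(axis=1)) - W       # graph Laplacian, r = 0

lam, Phi = np.linalg.eigh(Delta)
Lam = np.concatenate(([0.0], lam[1:] + gamma / lam[1:]))   # Lambda_m as in (56)
L = Phi @ np.diag(Lam) @ Phi.T           # the operator L of (46), spectrally

# New weights (73) with r = 0: w~_ij = -sum_{m>=1} Lambda_m phi^m_i phi^m_j
W_tilde = -(Phi[:, 1:] * Lam[1:]) @ Phi[:, 1:].T
np.fill_diagonal(W_tilde, 0.0)

# If G is in C_gamma, Lemma 6.8 guarantees this minimum is nonnegative
print("min off-diagonal new weight:", W_tilde.min())

# Laplacian of the transformed graph (r~ = 0); Theorem 6.9: Delta~ = L
Delta_tilde = np.diag(W_tilde.sum(axis=1)) - W_tilde
assert np.allclose(Delta_tilde, L, atol=1e-10)
assert np.allclose(W_tilde, W_tilde.T)   # r = 0 makes the new graph undirected
```
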
In this section, we prove a generalization of Lemma 3.1 and a generalization of the comparison principle in van Gennip et al. (2014, Lemma 2.6(d)). In fact, despite the new graph construction in Theorem 6.9 requiring $r=0$ for symmetry reasons (see Remark 6.10), the crucial ingredient that allows these generalizations is that $G\in\mathcal C_\gamma$; the assumption on $r$ is not required. We will also see a counterexample illustrating that this generalization does not extend (at least not without further assumptions) to graphs that are not in $\mathcal C_\gamma$. Lemma 6.12 gives a result which we need to prove the comparison principles in Lemmas 6.13 and 6.15.

Lemma 6.12 Let $\gamma\ge0$, $G=(V,E,\omega)\in\mathcal C_\gamma$, $w\in\mathcal V$, and let $i^*\in V$ be such that $w_{i^*}=\min_{i\in V}w_i$. Let $\varphi\in\mathcal V$ solve (45) for $w$. Then $\varphi_{i^*}\le0$.

Proof Let $j\in V$ and let $\varphi^j\in\mathcal V$ be as in (70). Then, by Lemma 6.4, we have that $\Delta\varphi^j = \chi_{\{j\}}-\mathcal A\big(\chi_{\{j\}}\big)$ and $\mathcal M(\varphi^j)=0$. Furthermore, by Definition 6.1 and (70), it follows that, for all $i\in V\setminus\{j\}$, $\varphi^j_i\le0$. Because $w=\sum_{j\in V}w_j\chi_{\{j\}}$, we have $\mathcal A(w)=\sum_{j\in V}w_j\mathcal A\big(\chi_{\{j\}}\big)$ and thus $\Delta\varphi = \sum_{j\in V}w_j\big(\chi_{\{j\}}-\mathcal A(\chi_{\{j\}})\big)$. Since also $\mathcal M\big(\sum_{j\in V}w_j\varphi^j\big) = \sum_{j\in V}w_j\mathcal M\big(\varphi^j\big) = 0$, we find that $\varphi = \sum_{j\in V}w_j\varphi^j$. Hence $\varphi_{i^*} = \sum_{j\in V}w_j\varphi^j_{i^*} = w_{i^*}\varphi^{i^*}_{i^*} + \sum_{j\in V\setminus\{i^*\}}w_j\varphi^j_{i^*}$. For $j\ne i^*$, we know that $w_{i^*}\le w_j$ and $\varphi^j_{i^*}\le0$, hence $w_j\varphi^j_{i^*}\le w_{i^*}\varphi^j_{i^*}$. Therefore $\varphi_{i^*}\le w_{i^*}\varphi^{i^*}_{i^*} + \sum_{j\in V\setminus\{i^*\}}w_{i^*}\varphi^j_{i^*} = w_{i^*}\sum_{j\in V}\varphi^j_{i^*}$. If we define $\tilde\varphi := \sum_{j\in V}\varphi^j = \sum_{j\in V}(\chi_V)_j\varphi^j$, then by a similar argument as above, $\Delta\tilde\varphi = \chi_V-\mathcal A(\chi_V) = 0$ and $\mathcal M(\tilde\varphi)=0$. Thus $\tilde\varphi=0$ and we conclude that $\varphi_{i^*}\le0$. $\square$

Lemma 6.13 (Generalization of comparison principle I) Let $\gamma\ge0$, $G=(V,E,\omega)\in\mathcal C_\gamma$, and let $V'$ be a proper subset of $V$. Assume that $u,v\in\mathcal V$ are such that, for all $i\in V'$, $(Lu)_i\ge(Lv)_i$, and, for all $i\in V\setminus V'$, $u_i\ge v_i$. Then, for all $i\in V$, $u_i\ge v_i$.

Proof When $r=0$, we know that $L=\tilde\Delta$, where $\tilde\Delta$ is the graph Laplacian on the graph $\tilde G$, in the notation from Theorem 6.9. Because $G$ and $\tilde G$ have the same node set $V$, the result follows immediately by applying Lemma 3.1 to $\tilde\Delta$. We will, however, prove the generalization for any $r\in[0,1]$.

Let the situation and notation be as in the proof of Lemma 3.1, with the exception that now, for all $i\in V'$, $(Lw)_i\ge0$ (instead of $(\Delta w)_i\ge0$). Let $\varphi\in\mathcal V$ be such that $\Delta\varphi = w-\mathcal A(w)$ and $\mathcal M(\varphi)=0$. Proceed with the proof in the same way as the proof of Lemma 3.1, up to and including the assumption that $\min_{j\in V}w_j<0$ and the subsequent construction of the path from $U$ to $i$ and the special nodes $j^*$ and $k^*$ on this path. Then, as in that proof, we know that $(\Delta w)_{j^*}<0$. Moreover, since $w_{j^*}=\min_{i\in V}w_i$, we know by Lemma 6.12 that $\varphi_{j^*}\le0$. Hence, for all $\gamma\ge0$, $(Lw)_{j^*} = (\Delta w)_{j^*}+\gamma\varphi_{j^*}<0$. This contradicts the assumption that, for all $i\in V'$, $(Lw)_i\ge0$; hence $\min_{i\in V}w_i\ge0$ and the result is proven. $\square$

The following corollary of Lemma 6.12 will be useful in proving Lemma 6.15.

Corollary 6.14 Let $\gamma\ge0$ and $G=(V,E,\omega)\in\mathcal C_\gamma$. Assume that $u,\tilde u\in\mathcal V$ satisfy, for all $i\in V$, $u_i\le\tilde u_i$, and let there be an $i^*\in V$ such that $u_{i^*}=\tilde u_{i^*}$. Then $(Lu)_{i^*}\ge(L\tilde u)_{i^*}$.

Proof Define $w:=\tilde u-u$; then, for all $i\in V$, $w_i\ge0$ and $w_{i^*}=0$. We compute
$$\big(\Delta w\big)_{i^*} = d_{i^*}^{-r}\Big(d_{i^*}w_{i^*} - \sum_{j\in V}\omega_{i^*j}w_j\Big) = -d_{i^*}^{-r}\sum_{j\in V}\omega_{i^*j}w_j \le 0.$$
Let $\varphi\in\mathcal V$ solve (45) for $w$. Since $w_{i^*}=\min_{i\in V}w_i$, we have by Lemma 6.12 that $\varphi_{i^*}\le0$. Hence
$$\big(L\tilde u\big)_{i^*} - \big(Lu\big)_{i^*} = \big(Lw\big)_{i^*} = \big(\Delta w\big)_{i^*} + \gamma\varphi_{i^*} \le 0. \qquad\square$$

Lemma 6.15 (Comparison principle II) Let $\gamma\ge0$, $G=(V,E,\omega)\in\mathcal C_\gamma$, $u_0,v_0\in\mathcal V$, and let $u,v\in\mathcal V_\infty$ be solutions to (47), with initial conditions $u_0$ and $v_0$, respectively. If, for all $i\in V$, $(u_0)_i\le(v_0)_i$, then, for all $t\ge0$ and for all $i\in V$, $u_i(t)\le v_i(t)$.

Proof If $r=0$, we note that, by Theorem 6.9, $L$ can be rewritten as a graph Laplacian on a new graph $\tilde G$ with the same node set $V$. The result in van Gennip et al. (2014, Lemma 2.6(d)) shows the desired conclusion holds for graph Laplacians (i.e. when $\gamma=0$), and thus we can apply it to the graph Laplacian $\tilde\Delta$ on $\tilde G$ to obtain the result for $L$ on $G$. In the general case when $r\in[0,1]$, Corollary 6.14 tells us that $L$ satisfies the condition which is called $W_+$ in Szarski (1965, Section 4). Since, for a given initial condition, the solution to (47) is unique, the result now follows by applying Szarski (1965, Theorem 9.3 or Theorem 9.4). $\square$

Corollary 6.16 Let $\gamma\ge0$, $G=(V,E,\omega)\in\mathcal C_\gamma$, and let $w\in\mathcal V_\infty$ be a solution to (47) with initial condition $w_0\in\mathcal V$. Let $c_1,c_2\in\mathbb R$ be such that, for all $i\in V$, $c_1\le(w_0)_i\le c_2$. Then, for all $t\ge0$ and for all $i\in V$, $c_1\le w_i(t)\le c_2$. In particular, for all $t\ge0$, $\|w(t)\|_{\mathcal V,\infty}\le\|w_0\|_{\mathcal V,\infty}$.

Proof First note that $c_1$ and $c_2$ always exist, since $V$ is finite. If $u\in\mathcal V_\infty$ solves (47) with initial condition $u_0=c_1\chi_V\in\mathcal V$, then, for all $t\ge0$, $u(t)=c_1\chi_V$. Applying Lemma 6.15 with $v_0=w_0$ and $v=w$, we obtain that, for all $t\ge0$ and for all $i\in V$, $w_i(t)\ge c_1$. Similarly, if $v\in\mathcal V_\infty$ solves (47) with initial condition $v_0=c_2\chi_V\in\mathcal V$, then, for all $t\ge0$, $v(t)=c_2\chi_V$. Hence, Lemma 6.15 with $u_0=w_0$ and $u=w$ tells us that, for all $t\ge0$ and for all $i\in V$, $w_i(t)\le c_2$. The final statement follows by noting that, for all $i\in V$, $-\|w_0\|_{\mathcal V,\infty}\le(w_0)_i\le\|w_0\|_{\mathcal V,\infty}$. $\square$

Remark 6.17 Numerical simulations show that when $G\notin\mathcal C_\gamma$, the results from Corollary 6.16 do not necessarily hold for all $t>0$. For example, consider an unweighted 4-regular graph (in the notation of Section S9.1 in Supplementary Materials we take the graph $G_{\text{torus}}(900)$) with $r=0$ and $\gamma=0.7$. We compute $\min_{i,j\in V}\big(d_i^{-r}\omega_{ij}+\gamma\frac{d_j^r}{\operatorname{vol}(V)}f^j_i\big)\approx-0.1906$ in MATLAB using (70), (71), so the graph is not in $\mathcal C_{0.7}$. Computing $v(0.01)=e^{-0.01L}v^0$, where $v^0$ is a $\{0,1\}$-valued initial condition, we find $\min_{i\in V}v_i(0.01)\approx-0.0033<0$ and $\max_{i\in V}v_i(0.01)\approx1.0033>1$. Hence the conclusions of Corollary 6.16 do not hold in this case.
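A check in the spirit of Remark 6.17 can be reproduced in a few lines. The sketch below uses Python/NumPy rather than the MATLAB used in the paper, and a smaller $10\times10$ torus rather than $G_{\text{torus}}(900)$, so the printed numbers are not those quoted above. It uses that, for $r=0$, the columns of the pseudoinverse of $\Delta$ are exactly the functions $\varphi^j$ solving (45) for $\chi_{\{j\}}$, so by (76) the quantity from Definition 6.1 is $\omega_{ij}-\gamma\varphi^j_i$; it then contrasts the guaranteed bound-preservation of $e^{-\tau\Delta}$ ($\gamma=0$) with the behaviour of $e^{-\tau L}$.

```python
import numpy as np

# 10x10 periodic grid: an unweighted 4-regular graph; r = 0, gamma = 0.7.
# A scaled-down stand-in for the G_torus(900) example of Remark 6.17.
k = 10
n = k * k
W = np.zeros((n, n))
for x in range(k):
    for y in range(k):
        i = x * k + y
        for dx, dy in ((1, 0), (0, 1)):
            j = ((x + dx) % k) * k + (y + dy) % k
            W[i, j] = W[j, i] = 1.0
Delta = np.diag(W.sum(axis=1)) - W
gamma, tau = 0.7, 0.01

# For r = 0 the columns of pinv(Delta) solve (45) for chi_{j}; by (76) the
# C_gamma quantity for i != j is then omega_ij - gamma * phi^j_i.
pinvD = np.linalg.pinv(Delta)
Q = W - gamma * pinvD
np.fill_diagonal(Q, np.inf)               # only off-diagonal pairs matter
print("min off-diagonal quantity:", Q.min())  # negative => not in C_gamma

lam, U = np.linalg.eigh(Delta)
v0 = (np.arange(n) < n // 2).astype(float)     # a {0,1}-valued function

# gamma = 0: e^{-tau Delta} is a stochastic matrix, so bounds are preserved
v_diff = U @ (np.exp(-tau * lam) * (U.T @ v0))
assert v_diff.min() >= -1e-12 and v_diff.max() <= 1 + 1e-12

# gamma = 0.7: with the spectrum Lambda_m of L, bounds may be violated
Lam = np.concatenate(([0.0], lam[1:] + gamma / lam[1:]))
v_ok = U @ (np.exp(-tau * Lam) * (U.T @ v0))
print("range of e^{-tau L} v0:", v_ok.min(), v_ok.max())
```
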
We can use the result from Corollary 6.16 to prove a second pinning bound, in the vein of Lemma S5.10, for graphs in $\mathcal C_\gamma$; see Lemma S8.1 in Supplementary Materials.

The property of $L$ in Corollary 6.14 is sometimes also called quasimonotonicity; or, more properly, it can be seen as a consequence of quasimonotonicity in the sense of Volkmann (1972), Chaljub-Simon et al. (1992), and Herzog (2004). To be precise, in Remark 6.17 we choose $v^0$ based on the eigenvector corresponding to option (c) explained in Section S9.3 in Supplementary Materials, with $M=450$.

7 Numerical Implementations

We implement (mcOKMBO) (in MATLAB version 9.2.0.538062 (R2017a)) by computing the eigenvalues $\lambda_m$ with corresponding eigenfunctions $\phi^m$ and then using the spectral expansion from (57) to solve (47). This is similar in spirit to the spectral expansion methods used in, for example, Bertozzi and Flenner (2012) and Calatroni et al. (2017). However, in those papers an iterative method is used to deal with additional terms in the equation. Here, we can deal with the operator $L$ in (47) in one go. Note that in other applications of spectral expansion methods, such as those in Bertozzi and Flenner (2016), sometimes only a subset of the eigenvalues and corresponding eigenfunctions is used. When $n$ is very large, computation time can be saved, often without a great loss of accuracy, by using a truncated version of (38) which only uses the $K\ll n$ smallest eigenvalues with corresponding eigenfunctions. The examples we show in this paper (and Supplementary Materials) are small enough that such an approximation was not necessary, but it might be considered if the method is to be run on large graphs.

Figure 1 shows the initial conditions and final states for three runs of (mcOKMBO) on a two-moon graph $G_{\text{moons}}$, with different values for $\gamma$.
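The overall loop just described — spectral solution of (47) followed by the mass conserving threshold step — can be sketched compactly. The following self-contained Python sketch uses $r=0$ (so node masses are 1 and $M$ is an integer) on a small random weighted graph; graph and parameter values are test choices, not the two-moon example. Along the iterates it records $J_\tau(v^k)$, which by Lemma 5.18 must be non-increasing, and it stops when $v^k=v^{k-1}$, as in our experiments.

```python
import numpy as np

rng = np.random.default_rng(2)
n, gamma, tau, M = 20, 1.0, 0.5, 8      # arbitrary test parameters, r = 0
W = rng.uniform(0.1, 1.0, (n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
Delta = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(Delta)
Lam = np.concatenate(([0.0], lam[1:] + gamma / lam[1:]))   # spectrum of L

def solve_ode(v, t):
    # spectral solution of (47): u(t) = sum_m e^{-t Lambda_m} <v, phi^m> phi^m
    return U @ (np.exp(-t * Lam) * (U.T @ v))

def threshold(u):
    # mass conserving threshold, r = 0: the M largest values get 1
    v = np.zeros(n)
    v[np.argsort(-u, kind="stable")[:M]] = 1.0
    return v

def J(v):
    # Lyapunov functional J_tau(v) = <chi_V - v, e^{-tau L} v>
    return (1.0 - v) @ solve_ode(v, tau)

v = np.zeros(n)
v[rng.choice(n, M, replace=False)] = 1.0    # initial condition v^0
values = [J(v)]
for _ in range(100):
    v_new = threshold(solve_ode(v, tau))
    values.append(J(v_new))
    if np.array_equal(v_new, v):            # stationary iterate: stop
        break
    v = v_new

assert all(a >= b - 1e-10 for a, b in zip(values, values[1:]))
assert v.sum() == M                          # mass conserved in every iterate
```
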
Figure 2 shows the corresponding values of $J_\tau(v^k)$ and $F_0(v^k)$ as a function of the iteration number $k$. In each case, the algorithm was terminated when $v^k=v^{k-1}$, which is why in each plot in Fig. 2 the final two values are the same. As expected from Lemma 5.18, $J_\tau$ decreases along the iterates. By and large $F_0$ also decreases, although Fig. 2d shows this is not necessarily always the case; also note that the value at the final iterate is not guaranteed to be the minimum value among all iterates (although in our tests it always is close, if not equal, to that minimum; see the figures in Section S9 in Supplementary Materials). In Section S9 of Supplementary Materials we provide additional results obtained by running (mcOKMBO) on various different graphs, as well as in-depth discussions about those results and the choice of $\tau$, of the initial condition, and of the other parameters ($\gamma$, $q$, $r$, $M$, $N$) in the graph Ohta–Kawasaki model and the (mcOKMBO) algorithm.

8 Discussion and Future Work

In this paper we presented three main results: the Lyapunov functionals associated with the (mass conserving) Ohta–Kawasaki MBO schemes $\Gamma$-converge to the sharp interface Ohta–Kawasaki functional; there exists a class of graphs on which this MBO scheme can be interpreted as a standard graph MBO scheme on a transformed graph (and for which additional comparison principles hold); and the mass conserving Ohta–Kawasaki MBO scheme works well in practice when attempting to minimize the sharp interface graph Ohta–Kawasaki functional under a mass constraint. Along the way we have also further developed the theory of PDE inspired graph problems and added to the theoretical underpinnings of this field.
Fig. 1 Initial (left column) and final (right column) states of Algorithm (mcOKMBO) applied to $G_{\text{moons}}$ with $r=0$, $M=300$, $\tau=1$, for a different value of $\gamma$ in each row. The initial conditions are eigenfunction based in the sense of option (c) in Section S9.3 in Supplementary Materials. The values of $F_0$ at the final iterates are approximately 109.48 (top row), 230.48 (middle row), and 626.89 (bottom row). a Initial condition for $\gamma=0.1$, b final iterate ($k=21$) for $\gamma=0.1$, c initial condition for $\gamma=1$, d final iterate ($k=9$) for $\gamma=1$, e initial condition for $\gamma=10$, and f final iterate ($k=7$) for $\gamma=10$

Fig. 2 Plots of $J_1(v^k)$ (left column) and $F_0(v^k)$ (right column) for the applications of (mcOKMBO) corresponding to Fig. 1. a Plot of $J_1(v^k)$ for $\gamma=0.1$, b plot of $F_0(v^k)$ for $\gamma=0.1$, c plot of $J_1(v^k)$ for $\gamma=1$, d plot of $F_0(v^k)$ for $\gamma=1$, e plot of $J_1(v^k)$ for $\gamma=10$, and f plot of $F_0(v^k)$ for $\gamma=10$

For more specifics about the construction of $G_{\text{moons}}$ and the initial conditions, see Sections S9.1 and S9.3 in Supplementary Materials, respectively.

Future research on the graph Ohta–Kawasaki functional can mirror the research on the continuum Ohta–Kawasaki functional and attempt to prove the existence of certain structures in minimizers on certain graphs, analogous to structures such as lamellae and droplets in the continuum case. The numerical methods presented in this paper might also prove useful for simulations of minimizers of the continuum functional. The $\Gamma$-convergence results presented in this paper also fit in well with the ongoing programme, started in van Gennip et al. (2014), aimed at improving our understanding of how various PDE inspired graph-based processes, such as the graph MBO scheme, the graph Allen–Cahn equation, and graph mean curvature flow, are connected.
One of the initial hopes for the graph Ohta–Kawasaki functional when starting this research was that it might be helpful to detect particular structures in graphs (similar to how the graph Ginzburg–Landau functional can be used to detect cluster structures (Bertozzi and Flenner 2012) and to how the signless graph Ginzburg–Landau functional detects bipartite structures (Keetch and van Gennip, in prep.)). So far this line of research has not yielded concrete results, but it is worth keeping in mind as a potential application, if such a structure can be identified. We thank the anonymous referee of the first draft of this paper for the suggestion that the mass conserving MBO scheme can be useful for data clustering with prescribed cluster sizes. It would be interesting to pursue this idea in future research.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

Adams, R.A., Fournier, J.J.F.: Sobolev Spaces. Pure and Applied Mathematics, vol. 140. Academic Press, Oxford (2003)
Barles, G., Georgelin, C.: A simple proof of convergence for an approximation scheme for computing motions by mean curvature. SIAM J. Numer. Anal. 32(2), 484–500 (1995)
Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Algorithms, 2nd edn. Wiley, Hoboken (1993)
Bendito, E., Carmona, Á., Encinas, A.M.: Shortest paths in distance-regular graphs. Eur. J. Comb. 21, 153–166 (2000a)
Bendito, E., Carmona, Á., Encinas, A.M.: Solving boundary value problems on networks using equilibrium measures. J. Funct. Anal. 171(1), 155–176 (2000b)
Bendito, E., Carmona, Á., Encinas, A.M.: Solving Dirichlet and Poisson problems on graphs by means of equilibrium measures. Eur. J. Comb. 24(4), 365–375 (2003)
Bertozzi, A.L., Flenner, A.: Diffuse interface models on graphs for analysis of high dimensional data. Multiscale Model. Simul. 10(3), 1090–1118 (2012)
Bertozzi, A.L., Flenner, A.: Diffuse interface models on graphs for classification of high dimensional data. SIAM Rev. 58(2), 293–328 (2016)
Bjerhammar, A.: Application of calculus of matrices to method of least squares with special reference to geodetic calculations. Kungl. Tekniska Högskolans Handlingar, Transactions of the Royal Institute of Technology, Stockholm, Sweden (1951)
Bosch, J., Klamt, S., Stoll, M.: Generalizing diffuse interface methods on graphs: non-smooth potentials and hypergraphs (2016). Preprint arXiv:1611.06094
Braides, A.: Γ-Convergence for Beginners. Oxford Lecture Series in Mathematics and Its Applications, vol. 22, 1st edn. Oxford University Press, Oxford (2002)
Bresson, X., Hu, H., Laurent, T., Szlam, A., von Brecht, J.: An incremental reseeding strategy for clustering (2014). Preprint arXiv:1406.3837
Brezis, H.: Analyse Fonctionnelle—Théorie et Applications. Dunod, Paris (1999)
Calatroni, L., van Gennip, Y., Schönlieb, C.-B., Rowland, H.M., Flenner, A.: Graph clustering, variational image segmentation methods and Hough transform scale detection for object measurement in images. J. Math. Imaging Vis. 57(2), 269–291 (2017)
Caracciolo, S., Sicuro, G.: Scaling hypothesis for the Euclidean bipartite matching problem. II. Correlation functions. Phys. Rev. E 91, 062125 (2015)
Caracciolo, S., Lucibello, C., Parisi, G., Sicuro, G.: Scaling hypothesis for the Euclidean bipartite matching problem. Phys. Rev. E 90(1), 012118 (2014)
Chaljub-Simon, A., Lemmert, R., Schmidt, S., Volkmann, P.: Gewöhnliche Differentialgleichungen mit quasimonoton wachsenden rechten Seiten in geordneten Banachräumen, pp. 307–320. Birkhäuser, Basel (1992)
Chambolle, A., Novaga, M.: Convergence of an algorithm for the anisotropic and crystalline mean curvature flow. SIAM J. Math. Anal. 37(6), 1978–1987 (2006)
Choksi, R., Ren, X.: On the derivation of a density functional theory for microphase separation of diblock copolymers. J. Stat. Phys. 113(1/2), 151–176 (2003)
Choksi, R., Peletier, M.A., Williams, J.F.: On the phase diagram for microphase separation of diblock copolymers: an approach via a nonlocal Cahn–Hilliard functional. SIAM J. Appl. Math. 69(6), 1712–1738 (2009). MR 2496714
Choksi, R., Maras, M., Williams, J.F.: 2D phase diagram for minimizers of a Cahn–Hilliard functional with long-range interactions. SIAM J. Appl. Dyn. Syst. 10(4), 1344–1362 (2011). MR 2854591
Chung, F.R.K.: Spectral Graph Theory. CBMS Regional Conference Series in Mathematics, vol. 92. Published for the Conference Board of the Mathematical Sciences, Washington, DC, by the American Mathematical Society, Providence, Rhode Island (1997). MR 1421568 (97k:58183)
Chung, F.R.K., Yau, S.T.: Discrete Green's functions. J. Comb. Theory Ser. A 91, 191–214 (2000)
Coddington, E.A., Levinson, N.: Theory of Ordinary Differential Equations. Robert E. Krieger Publishing Company, Inc., Malabar (1984) (Originally published by McGraw-Hill, New York, 1955)
Dal Maso, G.: An Introduction to Γ-Convergence. Progress in Nonlinear Differential Equations and Their Applications, vol. 8, 1st edn. Birkhäuser, Boston (1993)
Dresden, A.: The fourteenth western meeting of the American Mathematical Society. Bull. Am. Math. Soc. 26(9), 385–396 (1920)
Elmoataz, A., Buyssens, P.: On the connection between tug-of-war games and nonlocal PDEs on graphs. C. R. Mécanique 345(3), 177–183 (2017)
Elmoataz, A., Desquesnes, X., Lézoray, O.: Non-local morphological PDEs and p-Laplacian equation on graphs with applications in image processing and machine learning. IEEE J. Sel. Top. Signal Process. 6(7), 764–779 (2012)
Elmoataz, A., Desquesnes, X., Toutain, M.: On the game p-Laplacian on weighted graphs with applications in image processing and data clustering. Eur. J. Appl. Math. 28, 1–27 (2017)
Esedoḡlu, S., Otto, F.: Threshold dynamics for networks with arbitrary surface tensions. Commun. Pure Appl. Math. 68(5), 808–864 (2015)
Esedoḡlu, S., Ruuth, S.J., Tsai, R.: Threshold dynamics for high order geometric motions. Interfaces Free Boundaries 10(3), 263–282 (2008)
Esedoḡlu, S., Ruuth, S., Tsai, R.: Diffusion generated motion using signed distance functions. J. Comput. Phys. 229(4), 1017–1042 (2010)
Evans, L.C.: Convergence of an algorithm for mean curvature motion. Indiana Univ. Math. J. 42(2), 533–557 (1993)
Evans, L.C.: Partial Differential Equations. Graduate Studies in Mathematics, vol. 19, 1st edn. American Mathematical Society, Providence (2002)
Garcia-Cardona, C., Merkurjev, E., Bertozzi, A.L., Flenner, A., Percus, A.G.: Multiclass data segmentation using diffuse interface methods on graphs. IEEE Trans. Pattern Anal. Mach. Intell. 36(8), 1600–1613 (2014)
Gilboa, G., Osher, S.: Nonlocal operators with applications to image processing. Multiscale Model. Simul. 7(3), 1005–1028 (2009)
Glasner, K.: Multilayered equilibria in a density functional model of copolymer-solvent mixtures. SIAM J. Math. Anal. 49(2), 1593–1620 (2017)
Hale, J.K.: Ordinary Differential Equations, 2nd edn. Dover Publications Inc., Mineola (2009)
Hein, M., Audibert, J.-Y., von Luxburg, U.: Graph Laplacians and their convergence on random neighborhood graphs. J. Mach. Learn. Res. 8, 1325–1368 (2007). MR 2332434 (2008h:60034)
Herzog, G.: A Characterization of Quasimonotone Increasing Functions (2004). http://www.mathematik.uni-karlsruhe.de/user/~Seminar_LV/lv19.pdf
Hu, H., Laurent, T., Porter, M.A., Bertozzi, A.L.: A method based on total variation for network modularity optimization using the MBO scheme. SIAM J. Appl. Math. 73(6), 2224–2246 (2013)
Hu, H., Sunu, J., Bertozzi, A.L.: Multi-class graph Mumford–Shah model for plume detection using the MBO scheme. In: Energy Minimization Methods in Computer Vision and Pattern Recognition, pp. 209–222 (2015)
Kawasaki, K., Ohta, T., Kohrogui, M.: Equilibrium morphology of block copolymer melts. 2. Macromolecules 21, 2972–2980 (1988)
Keetch, B., van Gennip, Y.: A Max-Cut approximation using a graph based MBO scheme. https://arxiv.org/abs/1711.02419
Le, N.Q.: On the convergence of the Ohta–Kawasaki equation to motion by nonlocal Mullins–Sekerka law. SIAM J. Math. Anal. 42(4), 1602–1638 (2010)
Luo, X., Bertozzi, A.L.: Convergence of the graph Allen–Cahn scheme. J. Stat. Phys. 167(3), 934–958 (2017)
Manfredi, J.J., Oberman, A.M., Sviridov, A.P.: Nonlinear elliptic partial differential equations and p-harmonic functions on graphs. Differ. Integral Equs. 28(1–2), 79–102 (2015)
Mascarenhas, P.: Diffusion generated motion by mean curvature. UCLA Department of Mathematics CAM report 92-33 (1992)
Merkurjev, E., Kostic, T., Bertozzi, A.: An MBO scheme on graphs for segmentation and image processing. SIAM J. Imaging Sci. 6(4), 1903–1930 (2013)
Merkurjev, E., Sunu, J., Bertozzi, A.L.: Graph MBO method for multiclass segmentation of hyperspectral stand-off detection video. In: 2014 IEEE International Conference on Image Processing (ICIP), pp. 689–693. IEEE (2014)
Merkurjev, E., Bertozzi, A.L., Chung, F.R.K.: A semi-supervised heat kernel PageRank MBO algorithm for data classification. Tech. report, University of California, Los Angeles, United States (2016)
Merkurjev, E., Bertozzi, A., Yan, X., Lerman, K.: Modified Cheeger and ratio cut methods using the Ginzburg–Landau functional for classification of high-dimensional data. Inverse Probl. 33(7), 074003 (2017)
Merriman, B., Bence, J.K., Osher, S.J.: Diffusion generated motion by mean curvature. UCLA Department of Mathematics CAM report 06-32 (1992)
Merriman, B., Bence, J.K., Osher, S.J.: Diffusion generated motion by mean curvature. In: AMS Selected Letters, Crystal Grower's Workshop, pp. 73–83 (1993)
Merriman, B., Bence, J.K., Osher, S.J.: Motion of multiple junctions: a level set approach. J. Comput. Phys. 112(2), 334–363 (1994)
Ohta, T., Kawasaki, K.: Equilibrium morphology of block copolymer melts. Macromolecules 19, 2621–2632 (1986)
Penrose, R.: A generalized inverse for matrices. Math. Proc. Camb. Philos. Soc. 51(3), 406–413 (1955)
Ren, X., Wei, J.: On the multiplicity of solutions of two nonlocal variational problems. SIAM J. Math. Anal. 31(4), 909–924 (2000)
Ruuth, S.J.: A diffusion-generated approach to multiphase motion. J. Comput. Phys. 145(1), 166–192 (1998a)
Ruuth, S.J.: Efficient algorithms for diffusion-generated motion by mean curvature. J. Comput. Phys. 144(2), 603–625 (1998b)
Simon, B.: Equilibrium measures and capacities in spectral theory. Inverse Probl. Imaging 1(4), 713–772 (2007)
Swartz, D., Yip, N.K.: Convergence of diffusion generated motion to motion by mean curvature (2017). Preprint arXiv:1703.06519
Szarski, J.: Differential Inequalities. Monografie Matematyczne, Tom 43. Państwowe Wydawnictwo Naukowe, Warsaw (1965). MR 0190409
Ta, V.-T., Elmoataz, A., Lézoray, O.: Nonlocal PDEs-based morphology on weighted graphs for image and data processing. IEEE Trans. Image Process. 20(6), 1504–1516 (2011)
Trillos, N.G., Slepčev, D.: Continuum limit of total variation on point clouds. Arch. Ration. Mech. Anal. 220, 193–241 (2016)
Trillos, N.G., Slepčev, D., von Brecht, J., Laurent, T., Bresson, X.: Consistency of Cheeger and ratio graph cuts. J. Mach. Learn. Res. 17(181), 1–46 (2016)
van Gennip, Y., Bertozzi, A.L.: Γ-Convergence of graph Ginzburg–Landau functionals. Adv. Differ. Equs. 17(11–12), 1115–1180 (2012)
van Gennip, Y., Guillen, N., Osting, B., Bertozzi, A.L.: Stability of monolayers and bilayers in a copolymer–homopolymer blend model. Interfaces Free Bound. 11(3), 331–373 (2009)
van Gennip, Y., Guillen, N., Osting, B., Bertozzi, A.L.: Mean curvature, threshold dynamics, and phase field theory on finite graphs. Milan J. Math. 82(1), 3–65 (2014)
Volkmann, P.: Gewöhnliche Differentialungleichungen mit quasimonoton wachsenden Funktionen in topologischen Vektorräumen. Math. Z. 127(2), 157–164 (1972)
von Luxburg, U.: A tutorial on spectral clustering. Stat. Comput. 17(4), 395–416 (2007)
Journal of Nonlinear Science – Springer Journals
Published: Jun 1, 2018