IMA Journal of Numerical Analysis
, Volume Advance Article – Jun 21, 2017

45 pages


- Publisher
- Oxford University Press
- Copyright
- © The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
- ISSN
- 0272-4979
- eISSN
- 1464-3642
- DOI
- 10.1093/imanum/drx029

Abstract

We study the numerical solution of boundary and initial value problems for differential equations posed on graphs or networks. The graphs of interest are quantum graphs, i.e., metric graphs endowed with a differential operator acting on functions defined on the graph’s edges with suitable side conditions. We describe and analyse the use of linear finite elements to discretize the spatial derivatives for a class of linear elliptic model problems. The solution of the discrete equations is discussed in detail in the context of a (nonoverlapping) domain decomposition approach. For model elliptic problems and a wide class of graphs, we show that a combination of Schur complement reduction and diagonally preconditioned conjugate gradients results in optimal complexity. For problems of parabolic type, we consider the use of exponential integrators based on Krylov subspace methods. Numerical results are given for both simple and complex graph topologies.

1. Introduction

Differential equations posed on networks, or graphs, play an important role in the modeling and simulation of a wide variety of complex phenomena (see Newman, 2010). An important example is the study of diffusion phenomena on networks, which in the simplest cases reduces to the solution of initial value problems for linear systems of ordinary differential equations (ODEs). Although these models may be adequate for studying simple situations, such as those where the relations between the constituents of the system (corresponding to graph vertices) can be modeled by a simple binary relation (connected or not connected), more sophisticated models are necessary when dealing with more complex situations. Metric and quantum graphs provide useful models for a variety of physical systems including conjugated molecules, quantum wires, photonic crystals, thin waveguides, carbon nanostructures, and so forth. We refer to Berkolaiko & Kuchment (2013) for details and references (see also Lagnese et al.
(1994) for related problems not considered in Berkolaiko & Kuchment (2013)). A metric graph is a graph in which each edge is endowed with an implicit metric structure. Often (but not always), its edges can be identified with intervals on the real line. In technical terms, a metric graph is an example of one-dimensional topological manifold, or one-dimensional simplicial complex. A quantum graph is a metric graph equipped with a differential operator (‘Hamiltonian’) and suitable vertex conditions (see the next section for more precise definitions). This differential operator acts on functions defined on the edges and vertices of the underlying metric graph. In physics and engineering applications, there is a strong interest in the spectral properties of the Hamiltonian, as well as on associated wave propagation, transport and diffusion phenomena. The literature on quantum graphs is vast; most papers deal with theoretical issues such as spectral theory, well posedness and so forth, or with physical applications. On the other hand, the literature devoted to computational issues is almost nonexistent, apart from a handful of references dealing with rather special situations (see, e.g., Pesenson, 2005; Nordenfelt, 2007; Herty et al., 2010; Hild & Leugering, 2012; Wybo et al., 2015; Zlotnik et al., 2015), and the numerical aspects are typically not the main focus. Here, we take a first step in the systematic study of numerical methods for the solution of differential problems involving quantum graphs. We discuss a simple spatial discretization using linear finite elements and techniques for the fast solution of the discrete equations, using simple elliptic and parabolic model problems to illustrate our approach. We focus on this type of discretization because it allows us to highlight interesting relations connecting the discretized Hamiltonian with the graph Laplacian of the underlying combinatorial graph. 
In addition to equations posed on highly structured graphs, which are of interest in physics, we also consider the case of complex graphs with nontrivial topologies, in view of potential applications in fields such as physiology, biology and engineering. Not surprisingly, we observe significant differences with the numerical solution of PDEs posed on more typical spatial domains.

The remainder of the article is organized as follows. The necessary background information on metric and quantum graphs is provided in Section 2. In Section 3, we introduce a simple linear finite element method and analyse its convergence for a model quantum graph. The useful notion of extended graph is introduced in Section 4; in this section, we also give a detailed description of the matrices obtained from the finite element method. The conditioning of the mass matrix is studied in Section 5. Section 6 contains some remarks on the spectrum of discretized quantum graphs. Solution algorithms for the discretized equations, including preconditioned conjugate gradient (PCG) methods and schemes for integrating time-dependent problems, are discussed in Section 7. Numerical experiments for some simple elliptic and parabolic model problems are presented in Section 8. Finally, in Section 9, we present our conclusions and a list of topics for future work.

2. Definitions and notations

We give in sequence the definitions, the notations and the assumptions that we will use in the following. We refer to Berkolaiko & Kuchment (2013) for a comprehensive introduction to the theory of quantum graphs.

Definition 2.1 A combinatorial graph $${\it{\Gamma}} = ({\scr V}, {\scr E})$$ is a collection of a finite number of vertices and of edges connecting pairs of distinct vertices.
We will denote by $${\scr V} = \left\lbrace {\mathtt{v}}_1, \dots , {\mathtt{v}}_N \right\rbrace $$ the set of vertices and by $${\scr E} = \left\lbrace e_j = ({\mathtt{v}}_i, {\mathtt{v}}_k) \right\rbrace_{j=1, \dots , M}$$ the set of edges. Thus, an edge can be identified with a pair of vertices. The graph is undirected if no orientation is assigned to the edges; in this case, we do not distinguish between $$({\mathtt{v}}_i,{\mathtt{v}}_j)$$ and $$({\mathtt{v}}_j,{\mathtt{v}}_i)$$. Otherwise, the graph is directed. For an undirected graph, we define for each vertex $${\mathtt{v}}_i$$ its degree$$d_{{\mathtt{v}}_i}$$ as the number of edges $$e_k$$ such that $$e_k = ({\mathtt{v}}_i , {\mathtt{v}}_j)$$. Since only a single edge (at most) is allowed between any two vertices, $$d_{{\mathtt{v}}_i}$$ is the number of vertices adjacent to $${\mathtt{v}}_i$$ (i.e., the number of ‘immediate neighbors’ of $${\mathtt{v}}_i$$ in $${\it{\Gamma}}$$). We restrict our attention to graphs with no self-loops: $$({\mathtt{v}}_i,{\mathtt{v}}_i)\notin {\scr E}$$, for all $$i$$. A graph is connected if from each vertex $${\mathtt{v}}_i$$ in $${\scr V}$$ there exists a path $$({\mathtt{v}}_i,{\mathtt{v}}_k), ({\mathtt{v}}_k,{\mathtt{v}}_j),\,\ldots$$ made by edges in $${\scr E}$$ connecting it to any of the other vertices. In this work, we only consider sparse graphs, which we define as those graphs with $$M=O(N)$$. We assume that the initial graph $${\it{\Gamma}}$$ is undirected, but we assign an arbitrarily chosen direction to each edge to define an incidence matrix of $${\it{\Gamma}}$$. This is a matrix $${\bf E} \in {\mathbb R}^{N \times M}$$ where each column corresponds to an edge and has only two nonzero entries corresponding to the two vertices identifying the edge. We will arbitrarily fix the first nonzero entry in the column to the value $$1$$ and the second nonzero entry to the value $$-1$$ (this is equivalent to assigning an orientation to the edges). 
We emphasize that the choice of the signs is irrelevant for the purposes of this exposition. The advantage of using $${\bf E}$$ will become more clear after the introduction of the concept of quantum graph. It is important to remark that $${\bf E}$$ has an interesting interpretation as a finite-dimensional operator mimicking the divergence operator on differentiable functions (see Arioli & Manzini, 2003, 2006 for a similar discussion related to mixed finite element problems). Finally, we recall that $${\bf E}$$ has rank $$N-1$$ in the case of a connected graph. The matrix $${\bf E}^{\rm T}$$ is also interpretable as the finite-dimensional equivalent of the gradient operator acting on differentiable functions. It is also important to observe that the classical (combinatorial) graph Laplacian coincides with the matrix $${\bf L}_{{\it{\Gamma}}} = {\bf E} {\bf E}^{\rm T}$$. Note that $${\bf L}_{{\it{\Gamma}}}$$ does not depend on the choice of orientation used for the edges, since $${\bf E} {\bf Q} ({\bf E} {\bf Q})^{\rm T} = {\bf E} {\bf E}^{\rm T}$$ for any $$M\times M$$ diagonal matrix $${\bf Q}$$ with entries $$\pm 1$$ on the main diagonal. The graph Laplacian is a symmetric positive semidefinite $$M$$-matrix. The multiplicity of $$0$$ as an eigenvalue of $${\bf L}_{{\it{\Gamma}}}$$ equals the number of connected components of $${\it{\Gamma}}$$; if $${\it{\Gamma}}$$ is connected, the null space of $${\bf L}_{{\it{\Gamma}}}$$ is one-dimensional and is spanned by the vector of all ones. Let the matrix $${\bf D}$$ be the diagonal of the matrix $${\bf L}_{{\it{\Gamma}}}$$. The diagonal entries of $${\bf D}$$ are just the degrees of the corresponding vertices. The matrix $${\bf Ad} = {\bf D} - {\bf L}_{{\it{\Gamma}}}$$ is the adjacency matrix of the graph, i.e., the matrix where the $$(i,j)$$ entry is either 1 or 0 according to whether $$({\mathtt{v}}_i,{\mathtt{v}}_j)\in {\scr E} $$. 
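The incidence-matrix construction and the identity $${\bf L}_{{\it{\Gamma}}} = {\bf E}{\bf E}^{\rm T} = {\bf D} - {\bf Ad}$$ are easy to check numerically. The following sketch is our own illustration (not code from the paper; all names are ours): it builds $${\bf E}$$ for a small 4-cycle and verifies the orientation invariance and the null space property stated above.

```python
import numpy as np

# Small undirected graph: a 4-cycle. Each edge gets an arbitrary orientation
# (+1 at the first vertex, -1 at the second), matching the text's convention.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
N, M = 4, len(edges)

E = np.zeros((N, M))
for j, (a, b) in enumerate(edges):
    E[a, j] = 1.0   # first vertex of edge j gets +1
    E[b, j] = -1.0  # second vertex gets -1

L = E @ E.T              # combinatorial graph Laplacian
D = np.diag(np.diag(L))  # degree matrix
Ad = D - L               # adjacency matrix

# L is independent of the chosen orientation:
# (E Q)(E Q)^T = E E^T for any diagonal Q with +-1 entries.
Q = np.diag([1, -1, 1, -1])
assert np.allclose((E @ Q) @ (E @ Q).T, L)

# For a connected graph, the null space of L is spanned by the all-ones vector.
assert np.allclose(L @ np.ones(N), 0.0)
```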
Definition 2.2 A connected graph $${\it{\Gamma}}$$ is said to be a metric graph if, to each edge $$e$$ is assigned a length $$\ell_e$$ such that $$0 < \ell_e < \infty$$, and each edge is assigned a coordinate $$x^e\in [0,\ell_e]$$, which increases in a specified (but otherwise arbitrary) direction along the edge. In general, the edges could be simple differentiable curves (i.e., no loops). However, to simplify the notation, we assume that each edge is a straight line joining the two vertices defining the edge. The direction used to assign coordinates to points on a given edge will be the same one used to define the incidence matrix. In our definition of a metric graph, we assume that all lengths are finite. We refer to Berkolaiko & Kuchment (2013) for discussions of infinite metric graphs with some edges having infinite length. We define the volume of a metric graph $${\it{\Gamma}}$$ as $${\mathbf{vol}} ({\it{\Gamma}}) = \sum_{e\in {\scr E}} \ell_e$$. In the following, we will discuss only finite graphs with all edge lengths finite, hence with $${\mathbf{vol}} ({\it{\Gamma}})< \infty$$. Note that we do not assume that metric graphs are embedded in $${\mathbb R}^n$$ for some $$n$$. With the structure described above, the metric graph $${\it{\Gamma}}$$ becomes a one-dimensional domain, where for each edge $$e$$ we have a variable $$x^e$$ representing locally the global coordinate variable $$x$$. As noted earlier, a sequence of contiguous vertices defines a path in $${\it{\Gamma}}$$ formed by $$\{ e_j \}_{j=1}^{\hat{M}}$$ and the associated path length is simply $$\sum \ell_{e_j}$$. We define the distance$$d({\mathtt{v}}_i,{\mathtt{v}}_j)$$ between two vertices $${\mathtt{v}}_i$$ and $${\mathtt{v}}_j$$ as the length of a shortest path in $${\it{\Gamma}}$$ between them. This notion of distance can be extended in a natural way to define the distance between any two points (possibly lying on different edges) in the one-dimensional simplicial complex. 
Endowed with this distance, $${\it{\Gamma}}$$ is readily seen to be a metric space. Next, we proceed to introduce function spaces on a metric graph $${\it{\Gamma}}$$ and linear differential operators defined on these spaces.

Definition 2.3 The space $$L^2({\it{\Gamma}}) = \bigoplus_e L^2(e)$$ is the space of all square-integrable measurable functions $$u$$ on the edges of $${\it{\Gamma}}$$, i.e.,

$$\Vert u \Vert^2_{L^2({\it{\Gamma}})} = \sum_{e\in{\scr E}} \Vert u|_e \Vert^2_{L^2(e)} < \infty.$$

The inner product in this space will be denoted by $$(u,v)_{L^2({\it{\Gamma}})} = \int_{{\it{\Gamma}}} u(x) v(x) \,{\rm d}x$$. The Sobolev space $$H^1({\it{\Gamma}}) = \bigoplus_e H^1(e)\cap C^0 ({\it{\Gamma}})$$ is the space of all continuous functions $$u$$ on $${\it{\Gamma}}$$, $$u\in C^0 ({\it{\Gamma}})$$, such that $$u|_e$$ belongs to $$H^1(e)$$ for each edge $$e$$, i.e.,

$$\Vert u \Vert^2_{H^1({\it{\Gamma}})} = \sum_{e\in{\scr E}} \Vert u|_e \Vert^2_{H^1(e)} < \infty.$$

We recall that

$$\Vert u \Vert^2_{H^1(e)} = \int_e \left(\frac{{\rm d}u}{{\rm d}x}\right)^{\!2} {\rm d}x + \int_e u^2 \,{\rm d}x$$

and therefore

$$\Vert u \Vert^2_{H^1({\it{\Gamma}})} = \int_{{\it{\Gamma}}} \left(\frac{{\rm d}u}{{\rm d}x}\right)^{\!2} {\rm d}x + \int_{{\it{\Gamma}}} u^2 \,{\rm d}x.$$

Because of the properties of Sobolev spaces of functions of one variable, the functions belonging to $$H^1(e)$$ are continuous (Brezis, 2010, Chapter 8). This justifies the restriction, in Definition 2.3, of membership in $$H^1({\it{\Gamma}})$$ to continuous functions. Moreover, the global continuity assumption automatically implies that the functions on all edges adjacent to a vertex $${\mathtt{v}}$$ assume the same value at $${\mathtt{v}}$$. The operators that we consider here are quite simple, but suitable for describing interesting dynamics on metric graphs. More specifically, besides the continuity of the functions on $${\it{\Gamma}}$$, we will use ‘Neumann–Kirchhoff’ conditions at the vertices: denoting by $${\scr E}_{\mathtt{v}}$$ the subset of $${\scr E}$$ comprising the edges incident to the vertex $${\mathtt{v}}$$, we impose the condition

$$\sum_{e\in{\scr E}_{\mathtt{v}}} \frac{{\rm d}u}{{\rm d}x}({\mathtt{v}}) = 0 \quad \forall\, {\mathtt{v}}\in{\scr V}. \qquad (2.1)$$

This corresponds to assuming Neumann conditions at all vertices. Strictly speaking, (2.1) requires that $$u|_e \in H^2(e)$$ for each edge $$e\in \mathscr{E}$$.
As discussed below (see Theorem 3.2), this condition is usually satisfied in our setting. Conditions (2.1) express the conservation of currents if the metric graph $${\it{\Gamma}}$$ is viewed as an electrical network, hence the name. Note that, to give a meaning to (2.1), we need to assume that the derivatives are taken in the directions away from the vertex, which we call the outgoing directions (see Berkolaiko & Kuchment, 2013). Moreover, we observe that (2.1) are the natural boundary conditions satisfied by the following one-dimensional Schrödinger-type operator:

$${\scr H}: u(x) \mapsto -\frac{{\rm d}^2 u}{{\rm d}x^2} + v(x)\,u(x). \qquad (2.2)$$

The function $$v(x)$$ in (2.2) is a potential; throughout the article, we assume that $$v\in L^{\infty}({\it{\Gamma}})$$. The operator (2.2) is defined for functions $$u\in L^2({\it{\Gamma}})$$ such that $$u|_e\in H^2(e)$$ for all $$e\in \scr E$$. We observe that $$u|_e\in H^2(e) \Rightarrow u|_e \in C^1(\bar{e})$$; thus, because of the regularity of $$u|_e$$, the Neumann–Kirchhoff conditions (2.1) are well defined in the classical sense. However, for our purposes, it is convenient to introduce a weak form of (2.2), which requires that $$u \in H^1({\it{\Gamma}})$$. Here, we follow the treatment in Berkolaiko & Kuchment (2013, p. 25). From (2.1) and (2.2), we have that the bilinear form $${\mathfrak{h}}$$ corresponding to the Hamiltonian $${\scr H}$$ is

$${\mathfrak{h}}: H^1({\it{\Gamma}}) \times H^1({\it{\Gamma}}) \longrightarrow {\mathbb R}, \qquad {\mathfrak{h}}(u,g) = \sum_{e\in{\scr E}} \left\{ \int_e \frac{{\rm d}u}{{\rm d}x} \frac{{\rm d}g}{{\rm d}x}\,{\rm d}x + \int_e u(x)\,g(x)\,v(x)\,{\rm d}x \right\}. \qquad (2.3)$$

The corresponding energy functional is given by

$$J[u] = \frac{1}{2} \sum_{e\in{\scr E}} \left\{ \int_e \left(\frac{{\rm d}u}{{\rm d}x}\right)^{\!2} + u(x)^2\, v(x) \,{\rm d}x \right\} \quad \forall\, u \in H^1({\it{\Gamma}}). \qquad (2.4)$$

Furthermore, if $$v(x)\ge v_0$$ for some constant $$v_0 > 0$$, then the symmetric bilinear form (2.3) is continuous and coercive on $$H^1({\it{\Gamma}})$$, with coercivity constant $$\gamma_0 = \min{(1,v_0)}$$ and continuity constant $$\gamma_1= \max{(1,\Vert v\Vert_\infty)}$$; thus $$J[\cdot]$$ is strictly convex and has a unique minimum in $$H^1({\it{\Gamma}})$$ (see Theorem 3.2 below).
Moreover, for $$f \in L^2({\it{\Gamma}})$$, the Euler equation for $$J[u]$$,

$${\mathfrak{h}}(u,g) = \int_{{\it{\Gamma}}} f(x)\,g(x)\,{\rm d}x \quad \forall\, g \in H^1({\it{\Gamma}}), \qquad (2.5)$$

has a unique solution (Lax–Milgram Theorem). We observe that even though here we only take into account Neumann conditions, it is possible (if required) to fix the values of the functions on a subset of the vertices, which then become Dirichlet boundary vertices. These play the same role as Dirichlet boundary conditions in the classical sense. More general conditions at the vertices can be imposed; we refer to Berkolaiko & Kuchment (2013) for a deeper discussion. Hereafter, we define a quantum graph as follows:

Definition 2.4 A quantum graph is a metric graph equipped with the Hamiltonian operator $${\scr H}$$ defined by the operator (2.2) subject to the conditions (2.1) at the vertices.

Although this definition is more restrictive than the one found, e.g., in Berkolaiko & Kuchment (2013), it is adequate for our purposes. Finally, among our goals is the analysis of parabolic problems on metric graphs. In this case, we assume that the functions we use also depend on a second variable $$t$$ representing time (see Raviart et al., 1983), i.e.,

$$u(x,t): {\it{\Gamma}} \times [0,T] \longrightarrow {\mathbb R},$$

and that they are elements of suitable Bochner spaces, specified below.

Definition 2.5 Let $$V$$ denote either $$L^2({\it{\Gamma}})$$ or $$H^1({\it{\Gamma}})$$. Let $$C^0\bigl([0,T];V\bigr)$$ be the space of functions $$u(x,t)$$ that are continuous in $$t$$ with values in $$V$$, i.e., for each fixed value $$t^*$$ of $$t$$ we have $$u(\cdot, t^*) \in V$$. This space is equipped with the norm

$$\Vert u \Vert_{C^0([0,T];V)} = \sup_{0\le t\le T} \Vert u(\cdot,t) \Vert_V.$$

Let $$L^2\bigl([0,T];V \bigr)$$ be the space of functions $$u(x,t)$$ that are square integrable in $$t$$ for the $${\rm d}t$$ measure with values in $$V$$, i.e., for each fixed value $$t^*$$ of $$t$$ we have $$u(\cdot ,t^*) \in V$$.
This space is equipped with the norm

$$\Vert u \Vert_{L^2([0,T];V)} = \left( \int_0^T \Vert u(\cdot,t) \Vert_V^2 \,{\rm d}t \right)^{\!1/2}$$

and scalar product

$$(u,g)_{L^2([0,T];V)} = \int_0^T \bigl(u(\cdot,t),\, g(\cdot,t)\bigr)_V \,{\rm d}t.$$

We note that all these definitions can easily be modified to deal with self-adjoint operators acting on spaces of complex-valued functions, as required, e.g., in quantum-mechanical applications.

3. Finite element approximation of quantum graphs

On each edge of the quantum graph, it is possible to use the classical one-dimensional finite element method. Let $$e$$ be a generic edge identified by two vertices, which we denote by $${\mathtt{v}}_a$$ and $${\mathtt{v}}_b$$. The first step is to subdivide the edge into $$n_e$$ intervals of length $$h_e$$, with $$n_e \ge 2$$. The points

$${\scr V}_e \cup \{{\mathtt{v}}_a^e\} \cup \{{\mathtt{v}}_b^e\}, \quad\text{where}\quad {\scr V}_e = \{x_j^e\}_{j=1}^{n_e-1},$$

form a chain linking vertex $${\mathtt{v}}_a^e$$ to vertex $${\mathtt{v}}_b^e$$ lying on the edge $$e$$. The internal points $$x_j^e$$ are said to be the nodes of the discretization. For each of the internal nodes, we denote by $$\left\lbrace \psi^e_j \right\rbrace_{j=1}^{n_e-1}$$ the standard hat basis functions

$$\psi_j^e(x^e) = \begin{cases} 1 - |x_j^e - x^e|/h_e & \text{if } x_{j-1}^e \le x^e \le x_{j+1}^e, \\ 0 & \text{otherwise,} \end{cases} \qquad (3.1)$$

where we have set $$x_0^e = {\mathtt{v}}_a^e$$ and $$x_{n_e}^e = {\mathtt{v}}_b^e$$. We also define the neighboring set $${\scr W}_{\mathtt{v}}$$ of a vertex $${\mathtt{v}}\in {\scr V}$$ as the union of all the sets $$[{\mathtt{v}}^e_a,x^e_1]$$ and $$[x^e_{n_e-1},{\mathtt{v}}^e_b]$$:

$${\scr W}_{\mathtt{v}} = \left\{ \bigcup_{\{e\in{\scr E}_{\mathtt{v}}\,:\,{\mathtt{v}}_a^e = {\mathtt{v}}\}} [{\mathtt{v}}, x_1^e] \right\} \cup \left\{ \bigcup_{\{e\in{\scr E}_{\mathtt{v}}\,:\,{\mathtt{v}}_b^e = {\mathtt{v}}\}} [x_{n_e-1}^e, {\mathtt{v}}] \right\}.$$

Note that $${\scr W}_{\mathtt{v}}$$ is itself a (star-shaped) metric graph because, as a subgraph, it inherits the metric properties of the original graph. We introduce the functions $$\phi_{\mathtt{v}}(x)$$ with support $${\text {supp}}(\phi_{\mathtt{v}}(x)) = {\scr W}_{\mathtt{v}}$$ such that

$$\phi_{\mathtt{v}}(x)\big|_{{\scr W}_{\mathtt{v}} \cap e} = \begin{cases} 1 - |x_{\mathtt{v}}^e - x^e|/h_e & \text{if } x^e \in {\scr W}_{\mathtt{v}} \cap e,\ e\in{\scr E}_{\mathtt{v}}, \\ 0 & \text{otherwise,} \end{cases} \qquad (3.2)$$

where $$x^e_{\mathtt{v}}$$ is either $$0$$ or $$\ell_e$$ depending on the direction of the edge and its parametrization. Fig. 1 describes a simple example.
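A minimal numerical sketch of the hat functions (3.1) on a single edge may be helpful. The code below is our own illustration (assuming a uniform subdivision; all names are ours): it checks that the hats, together with the restrictions to the edge of the two vertex functions $$\phi_{{\mathtt{v}}_a}$$ and $$\phi_{{\mathtt{v}}_b}$$ from (3.2), form a partition of unity on the edge.

```python
import numpy as np

# Hat basis on a single edge of length l_e subdivided into n_e intervals,
# following (3.1): psi_j equals 1 at node x_j and 0 at the adjacent nodes.
l_e, n_e = 1.0, 5
h_e = l_e / n_e
nodes = np.linspace(0.0, l_e, n_e + 1)  # x_0 = v_a, ..., x_{n_e} = v_b

def psi(j, x):
    """Hat function centred at internal node x_j (1 <= j <= n_e - 1)."""
    return np.maximum(0.0, 1.0 - np.abs(nodes[j] - x) / h_e)

x = np.linspace(0.0, l_e, 101)
# Restrictions of the two vertex functions phi_va, phi_vb to this edge,
# cf. (3.2): linear ramps supported on the first and last interval.
phi_a = np.maximum(0.0, 1.0 - x / h_e)
phi_b = np.maximum(0.0, 1.0 - (l_e - x) / h_e)

# Hats plus vertex ramps form a partition of unity on the edge.
total = phi_a + phi_b + sum(psi(j, x) for j in range(1, n_e))
assert np.allclose(total, 1.0)
```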
The functions $$\psi^e_j$$ are a basis for the finite-dimensional space

$$V_h^e = \left\{ w \in H_0^1(e)\,:\; w|_{[x_j^e, x_{j+1}^e]} \in P_1,\ j=0,1,\dots,n_e-1 \right\},$$

where $$P_1$$ is the space of linear functions.

Fig. 1. Example of $$\psi_j$$ at a vertex $${\mathtt{v}}$$.

Globally, we construct the space

$$V_h({\it{\Gamma}}) = \left( \bigoplus_{e\in{\scr E}} V_h^e \right) \oplus {\rm span}\,\{\phi_{\mathtt{v}}\}_{{\mathtt{v}}\in{\scr V}}$$

and we have $$V_h({\it{\Gamma}}) \subset C^0({\it{\Gamma}})$$. This is a finite-dimensional space of functions that belong to $$H^1({\it{\Gamma}})$$. Any function $$w_h \in V_h({\it{\Gamma}})$$ is then a linear combination of the $$\psi^e_j$$ and $$\phi_{\mathtt{v}}$$:

$$w_h(x) = \sum_{e\in{\scr E}} \sum_{j=1}^{n_e-1} \alpha_j^e \psi_j^e(x) + \sum_{{\mathtt{v}}\in{\scr V}} \beta_{\mathtt{v}} \phi_{\mathtt{v}}(x), \quad x\in{\it{\Gamma}}.$$

When we approximate the variational equation

$${\mathfrak{h}}(u,g) = \int_{{\it{\Gamma}}} f g \,{\rm d}x \quad \forall\, g \in H^1({\it{\Gamma}}), \qquad (3.3)$$

with $$f \in L^2({\it{\Gamma}})$$, on $$V_h$$ we test on all the $$\psi_k^e$$, $$\phi_{\mathtt{v}}$$ and, because $${\scr W}_{\mathtt{v}} \cap {\scr W}_{\mathtt{z}} = \emptyset$$ if $${\mathtt{v}} \neq {\mathtt{z}}$$, we have the following finite-dimensional (discrete) system of equations:

$$\begin{aligned} {\mathfrak{h}}_h(w_h,\psi_k^e) &= \sum_{e\in{\scr E}} \sum_{j=1}^{n_e-1} \alpha_j^e \left\{ \int_e \frac{{\rm d}\psi_j^e}{{\rm d}x} \frac{{\rm d}\psi_k^e}{{\rm d}x}\,{\rm d}x + \int_e \psi_j^e \psi_k^e v(x)\,{\rm d}x \right\} + \sum_{{\mathtt{v}}\in{\scr V}} \beta_{\mathtt{v}} \left\{ \int_{{\scr W}_{\mathtt{v}}} \frac{{\rm d}\phi_{\mathtt{v}}}{{\rm d}x} \frac{{\rm d}\psi_k^e}{{\rm d}x}\,{\rm d}x + \int_{{\scr W}_{\mathtt{v}}} \phi_{\mathtt{v}} \psi_k^e v(x)\,{\rm d}x \right\} = \int_{{\rm supp}(\psi_k^e)} f \psi_k^e \,{\rm d}x \quad \forall\, \psi_k^e; \\ {\mathfrak{h}}_h(w_h,\phi_{\mathtt{v}}) &= \sum_{e\in{\scr E}} \sum_{j=1}^{n_e-1} \alpha_j^e \left\{ \int_{{\scr W}_{\mathtt{v}}} \frac{{\rm d}\psi_j^e}{{\rm d}x} \frac{{\rm d}\phi_{\mathtt{v}}}{{\rm d}x}\,{\rm d}x + \int_{{\scr W}_{\mathtt{v}}} \psi_j^e \phi_{\mathtt{v}} v(x)\,{\rm d}x \right\} + \beta_{\mathtt{v}} \left\{ \int_{{\scr W}_{\mathtt{v}}} \left(\frac{{\rm d}\phi_{\mathtt{v}}}{{\rm d}x}\right)^{\!2} {\rm d}x + \int_{{\scr W}_{\mathtt{v}}} \phi_{\mathtt{v}}^2 v(x)\,{\rm d}x \right\} = \int_{{\scr W}_{\mathtt{v}}} f \phi_{\mathtt{v}} \,{\rm d}x \quad \forall\, \phi_{\mathtt{v}}. \end{aligned} \qquad (3.4)$$

Under the hypotheses of the previous section (continuity and coercivity of the Hamiltonian), the system (3.4) has a unique solution in $$V_h({\it{\Gamma}})$$. Henceforth, we will denote by $$u_h\in V_h({\it{\Gamma}})$$ the solution of the discrete variational equation (3.4). The following theorem gives the properties and the structure of the discretized Hamiltonian $${\bf H}$$ corresponding to the system (3.4).

Theorem 3.1 Let us denote by

$${\bf u}_{{\scr E}} = \begin{bmatrix} {\bf u}_1 \\ \vdots \\ {\bf u}_M \end{bmatrix}, \quad\text{where } {\bf u}_e = \begin{bmatrix} u_1^e \\ \vdots \\ u_{n_e-1}^e \end{bmatrix} \ (e=1,\dots,M),$$

and by $${\bf u}_{{\scr V}} = [u_1, \dots, u_N]^{\rm T}$$ the values that give

$$u_h(x) = \sum_{e\in{\scr E}} \sum_{j=1}^{n_e-1} u_j^e \psi_j^e(x) + \sum_{{\mathtt{v}}\in{\scr V}} u_{\mathtt{v}} \phi_{\mathtt{v}}(x), \quad x\in{\it{\Gamma}}.$$
Let $${\bf f}_{{\scr E}}$$ and $${\bf f}_{{\scr V}}$$ be the vectors

$${\bf f}_{{\scr E}} = \begin{bmatrix} {\bf f}_1 \\ \vdots \\ {\bf f}_M \end{bmatrix}, \quad\text{where } {\bf f}_e = \begin{bmatrix} f_1^e \\ \vdots \\ f_{n_e-1}^e \end{bmatrix}, \qquad {\bf f}_{{\scr V}} = \begin{bmatrix} f_1 \\ \vdots \\ f_N \end{bmatrix},$$

with $$ f_k^e = \int_{\text{supp}(\psi_k^e)} f \psi_k^e \,{\rm d}x $$ and $$f_{\mathtt{v}} = \int_{{\scr W}_{\mathtt{v}}}f \phi_{\mathtt{v}} \,{\rm d}x$$. Then, the linear system (3.4) can be written as

$${\bf H} \begin{bmatrix} {\bf u}_{{\scr E}} \\ {\bf u}_{{\scr V}} \end{bmatrix} = \begin{bmatrix} {\bf f}_{{\scr E}} \\ {\bf f}_{{\scr V}} \end{bmatrix},$$

where $${\bf H}$$ has the following structure:

$${\bf H} = \begin{bmatrix} {\bf H}_{11} & {\bf H}_{12} \\ {\bf H}_{12}^{\rm T} & {\bf H}_{22} \end{bmatrix} = \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf B}^{\rm T} & {\bf G} \end{bmatrix} + \begin{bmatrix} {\bf V} & {\bf C} \\ {\bf C}^{\rm T} & {\bf F} \end{bmatrix} = {\bf L} + {\bf M}_v. \qquad (3.5)$$

Moreover,

(i) $${\bf H}$$ is a symmetric positive definite matrix.

(ii) Both $${\bf A}$$ and $${\bf V}$$ are block diagonal matrices and each diagonal block (of dimension $$n_e-1$$) can be permuted into a symmetric tridiagonal nonsingular matrix corresponding to the edge $$e$$.

(iii) The entries of $${\bf A}$$ are given by

$$\int_e \frac{{\rm d}\psi_j^e}{{\rm d}x} \frac{{\rm d}\psi_k^e}{{\rm d}x}\,{\rm d}x = \begin{cases} 2/h_e & \text{if } j=k, \\ -1/h_e & \text{if } |j-k|=1, \\ 0 & \text{otherwise,} \end{cases} \qquad (3.6)$$

whereas the entries of $${\bf B}$$ are given by

$$\int_{{\scr W}_{\mathtt{v}}} \frac{{\rm d}\phi_{\mathtt{v}}}{{\rm d}x} \frac{{\rm d}\psi_k^e}{{\rm d}x}\,{\rm d}x = \begin{cases} -1/h_e & \text{if } e\in{\scr E}_{\mathtt{v}} \text{ and } {\scr W}_{\mathtt{v}} \cap {\rm supp}(\psi_k^e) \neq \emptyset, \\ 0 & \text{otherwise.} \end{cases} \qquad (3.7)$$

Moreover, $${\bf G}$$ is diagonal and its entries are given by

$$\int_{{\scr W}_{\mathtt{v}}} \left(\frac{{\rm d}\phi_{\mathtt{v}}}{{\rm d}x}\right)^{\!2} {\rm d}x = \sum_{e\in{\scr E}_{\mathtt{v}}} 1/h_e. \qquad (3.8)$$

(iv) The entries of $${\bf V}$$ are $$ \int_e \psi_j^e \psi_k^e v(x) \,{\rm d}x$$, the entries of $${\bf C}$$ are $$ \int_{{\scr W}_{\mathtt{v}}} \phi_{\mathtt{v}} \psi_k^e v(x) \,{\rm d}x$$, the entries of $${\bf F}$$ are $$ \int_{{\scr W}_{\mathtt{v}}}\phi_{\mathtt{v}}^2 v(x) \,{\rm d}x $$ and $${\bf F}$$ is diagonal. Moreover, we have

$$\left| \int_{{\scr W}_{\mathtt{v}} \cap e} \phi_{\mathtt{v}} \psi_k^e v(x)\,{\rm d}x \right| \le \chi h_e \quad\text{and}\quad \left| \int_{{\scr W}_{\mathtt{v}}} \phi_{\mathtt{v}}^2 v(x)\,{\rm d}x \right| \le \chi \sum_{e\in{\scr E}_{\mathtt{v}}} h_e, \qquad (3.9)$$

where $$\chi = \max_{\it{\Gamma}} | v(x) |$$.

Proof. (i) This follows from the coercivity and the continuity of the self-adjoint Hamiltonian. (ii) The nodes on each edge describe a chain path between two vertices; therefore, the block corresponding to the edge can be permuted into a tridiagonal matrix by ordering the internal nodes by increasing or decreasing distance from one of the vertices defining the edge.
Because the supports of the hat functions corresponding to internal nodes of distinct edges have empty intersections, we have a block diagonal structure for both $${\bf A}$$ and $${\bf V}$$. (iii) The values of the entries in $${\bf A}$$ can be easily computed, observing that the derivative of each hat function $$\psi_j^e$$ on the interval $$[x^e_j,x^e_{j+1}]$$ is a constant equal to $$\pm1/h_e$$, depending on the orientation we choose on the edge. Taking into account the restrictions of the $$\phi_{\mathtt{v}}$$ to the incident edges and the orientation chosen on the edge $$e$$, we have that on the interval $$[x^e_{\mathtt{v}}, x^e_1]$$ (or $$[x^e_{n_e-1},x^e_{\mathtt{v}}]$$) the derivatives of $$\psi^e_1$$ (respectively, $$\psi^e_{n_e-1}$$) and $$\phi_{\mathtt{v}}\vert_e$$ are constants of opposite sign. Hence, the integrals are equal to $$-1/h_e$$ for the nonzero entries in $${\bf B}$$. The expression (3.8) for the diagonal entries of $${\bf G}$$ is straightforward. (iv) The bounds (3.9) follow from $$0 \leq \psi_j^e \leq 1$$ and $$0 \leq \phi_{\mathtt{v}} \leq 1$$. This completes the proof. □

Next, we have the following error estimate.

Theorem 3.2 Let $$f\in L^2 ({\it{\Gamma}})$$ and $$v\in L^{\infty}({\it{\Gamma}})$$, with $$v(x)\ge v_0 >0$$ on $${\it{\Gamma}}$$. Then the approximate solution $$u_h$$ of (3.3) satisfies

$$\Vert u - u_h \Vert_{H^1({\it{\Gamma}})} \le \frac{\gamma_1}{\gamma_0}\, {\it{\Theta}}\, C \hat{h} \sum_{e\in{\scr E}} \Vert u|_e \Vert_{H^2(e)}, \qquad (3.10)$$

where $$\hat h = \max_e h_e$$, $$C$$ is a constant independent of $$u$$ and $$\hat h$$, and $${\it{\Theta}} = {{\mathbf{vol}}}({\it{\Gamma}}) = \sum_{e\in {\scr E}} \ell_e$$ is the volume of $${\it{\Gamma}}$$.

Proof. As a consequence of the assumptions, the bilinear form

$${\mathfrak{h}}(z,w) = \int_{{\it{\Gamma}}} \frac{{\rm d}z}{{\rm d}x} \frac{{\rm d}w}{{\rm d}x}\,{\rm d}x + \int_{{\it{\Gamma}}} v(x)\, z\, w \,{\rm d}x$$

on $$H^1({\it{\Gamma}})$$ is coercive and continuous, i.e.,

$$\begin{cases} \gamma_0 \Vert z \Vert^2_{H^1({\it{\Gamma}})} \le {\mathfrak{h}}(z,z), \\ |{\mathfrak{h}}(z,w)| \le \gamma_1 \Vert z \Vert_{H^1({\it{\Gamma}})} \Vert w \Vert_{H^1({\it{\Gamma}})}. \end{cases}$$

From these inequalities, we have that the energy norm $$||| z |||^2 ={\mathfrak{h}} (z,z)$$ and the $$H^1({\it{\Gamma}})$$ norm are equivalent.
From the fact that $$V_h({\it{\Gamma}}) \subset H^1({\it{\Gamma}})$$, it follows that $$z = u - u_h$$ minimizes the energy norm of $$z$$ in $$V_h({\it{\Gamma}})$$ and, thus,

$$\gamma_0 \Vert u - u_h \Vert^2_{H^1({\it{\Gamma}})} \le {\mathfrak{h}}(z,z) \le {\mathfrak{h}}(u - u_h^I,\, u - u_h^I) \le \gamma_1 \Vert u - u_h^I \Vert^2_{H^1({\it{\Gamma}})},$$

where $$u^I_h$$ is the interpolant of $$u$$ at the nodes and the vertices. We observe that under our assumptions, the solution to (3.3) satisfies $$u|_e \in H^2(e)$$ on each edge $$e\in \scr E$$ (see Brezis, 2010, Chapter 8). On each edge $$e=({\mathtt{v}}_a,{\mathtt{v}}_b)$$, the $$\psi^e_j \; (j=1,\dots ,n_e-1)$$ and the restrictions to $$e$$ of $$\phi_{{\mathtt{v}}_a}$$ and $$\phi_{{\mathtt{v}}_b}$$ form a basis for the finite element approximation of $$H^1(e)$$. Thus, we have the standard error estimate

$$\Vert u|_e - u_h|_e \Vert_{H^1(e)} \le \frac{\gamma_1}{\gamma_0}\, \ell_e\, C h_e\, \Vert u|_e \Vert_{H^2(e)} \quad \forall\, e\in{\scr E}, \qquad (3.11)$$

where $$C$$ is independent of $$u$$ and $$h_e$$ (see Raviart et al., 1983, Chapter 3.2; Brenner & Scott, 2002, Section 4.4). Letting now $$\hat h = \max_e h_e$$, the sum of all the local errors and of their upper bounds gives the global upper bound (3.10) for $$\Vert u - u_h \Vert_{H^1({\it{\Gamma}})}$$ (see Dupont & Scott, 1980, Theorem 7.1). □

3.1 Error estimates for the Neumann–Kirchhoff conditions

In general, unfortunately, the Neumann–Kirchhoff conditions cannot be exactly satisfied for any value of $$h>0$$; for the special case of a single edge (i.e., an interval), see the discussion in Raviart et al. (1983, p. 71). However, they are asymptotically satisfied as the discretization is refined. The following result provides an upper bound on how much the discrete solution $$u_h$$ can deviate from satisfying the Neumann–Kirchhoff conditions at a given vertex $${\mathtt{v}}$$ of $${\it{\Gamma}}$$.
Theorem 3.3 If $$f\in L^2({\it{\Gamma}})$$, then for any vertex $${\mathtt{v}}$$ of $${\it{\Gamma}}$$ with neighboring set $${\scr W}_{\mathtt{v}}$$ and degree $$d_{{\mathtt{v}}}$$, the finite element solution $$u_h$$ of (3.4) satisfies

$$\left| \sum_{e\in{\scr E}_{\mathtt{v}}} \frac{{\rm d}u_h}{{\rm d}x_e}({\mathtt{v}}) \right| \le 2\,{\mathbf{vol}}({\scr W}_{\mathtt{v}}) \bigl( \Vert f \Vert_{L^2({\it{\Gamma}})} + \chi \Vert u \Vert_{L^2({\it{\Gamma}})} \bigr) \le 2\, d_{\mathtt{v}} \hat{h} \bigl( \Vert f \Vert_{L^2({\it{\Gamma}})} + \chi \Vert u \Vert_{L^2({\it{\Gamma}})} \bigr), \qquad (3.12)$$

where $$\hat h = \max_e h_e$$ and $$\chi = \max_{\it{\Gamma}} | v(x) |$$.

Proof. The entry in the right-hand side corresponding to a $${\mathtt{v}} \in {\scr V}$$ is given by

$$f_{\mathtt{v}} = \int_{{\scr W}_{\mathtt{v}}} f \phi_{\mathtt{v}} \,{\rm d}x.$$

Taking into account that the supports of each $$\phi_{\mathtt{v}}|_e$$ have length $$h_e$$ and using the triangle and Cauchy–Schwarz inequalities, we get

$$|f_{\mathtt{v}}| \le {\mathbf{vol}}({\scr W}_{\mathtt{v}})\, \Vert f \Vert_{L^2({\it{\Gamma}})} \le \hat{h}\, d_{\mathtt{v}}\, \Vert f \Vert_{L^2({\it{\Gamma}})}. \qquad (3.13)$$

Taking into account that on $${\scr W}_{{\mathtt{v}}} \cap e$$ the function $$u_h(x)$$ is linear, and thus differentiable, we obtain from (3.7) and (3.8) that

$$\bigl[ {\bf B}^{\rm T} {\bf u}_{in} + {\bf G} {\bf u}_{{\scr V}} \bigr]_{\mathtt{v}} = \sum_{e\in{\scr E}_{\mathtt{v}}} \frac{{\rm d}u_h}{{\rm d}x_e}({\mathtt{v}}).$$

Similarly, from (3.1), we have that the entry corresponding to $${\mathtt{v}}$$ in $$[{\bf C}^{\rm T} {\bf u}_{in} + {\bf F} {\bf u}_{{\scr V}}]$$ satisfies

$$\bigl[ {\bf C}^{\rm T} {\bf u}_{in} + {\bf F} {\bf u}_{{\scr V}} \bigr]_{\mathtt{v}} = \int_{{\scr W}_{\mathtt{v}}} u_h \phi_{\mathtt{v}} v(x) \,{\rm d}x.$$

Therefore, from (3.13) we have that at each vertex $${\mathtt{v}}$$ and for $$\phi_{\mathtt{v}}$$,

$$\sum_{e\in{\scr E}_{\mathtt{v}}} \frac{{\rm d}u_h}{{\rm d}x_e}({\mathtt{v}}) = \int_{{\scr W}_{\mathtt{v}}} \bigl( f - u_h v(x) \bigr) \phi_{\mathtt{v}} \,{\rm d}x,$$

and thus from (3.9), we have

$$\left| \sum_{e\in{\scr E}_{\mathtt{v}}} \frac{{\rm d}u_h}{{\rm d}x_e}({\mathtt{v}}) \right| \le 2\,{\mathbf{vol}}({\scr W}_{\mathtt{v}}) \bigl( \Vert f \Vert_{L^2({\it{\Gamma}})} + \chi \Vert u \Vert_{L^2({\it{\Gamma}})} \bigr) \le 2\, d_{\mathtt{v}} \hat{h} \bigl( \Vert f \Vert_{L^2({\it{\Gamma}})} + \chi \Vert u \Vert_{L^2({\it{\Gamma}})} \bigr).$$

This completes the proof. □

Inequality (3.12) shows that, in general, the Neumann–Kirchhoff interface conditions are satisfied only in the limit as $$\hat h\searrow0$$.

Remark 3.4 Taking into account (3.13) and (3.12), for vertices $${\mathtt{v}}$$ with a large degree $$d_{{\mathtt{v}}} \gg 1$$ (so-called hubs), the Neumann–Kirchhoff conditions can be poorly satisfied even for a reasonably small value of $$h$$. This suggests that in these cases, an adaptive choice of the mesh points should be made.
In particular, the analysis suggests that on the edges having an end point corresponding to a hub, the mesh node $$x_1^e$$ (or $$x_{n_e-1}^e$$) in $${\scr W}_{{\mathtt{v}}}$$ should be chosen much closer to $$x^e_{\mathtt{v}}$$ than the distance between two consecutive nodes on $$e$$, i.e., $$|x_1^e - x^e_{\mathtt{v}} | = \tilde{h}_e$$ (or $$|x_{n_e-1}^e - x^e_{\mathtt{v}} | = \tilde{h}_e$$), where $$\tilde{h}_e$$ is such that $$\tilde{h}_e d_{{\mathtt{v}}} \leq \hat h$$.

4. Extended graphs

The discretization of a metric graph $${\it{\Gamma}}$$ leads naturally to a new graph $$\scr G$$, which we refer to as the extended graph, having as vertices the vertices of $${\it{\Gamma}}$$ plus all the discretization nodes $$x_j^e$$, and as edges all the intervals cut out by the discretization on each edge. Hereafter, given a set of matrices $$\left\lbrace {\bf Y}_k\right\rbrace_{k=1}^{K}$$, we will denote by $$\mathbf{blkdiag}\left (\lbrace {\bf Y}_k\rbrace_{k=1}^{K} \right )$$ the block diagonal matrix obtained using the $${\bf Y}_k$$ as diagonal blocks. We remark that we do not require that the blocks be square.

4.1 Construction of the coefficient matrices

It is possible to build the matrix $${\bf H}$$ using an extended incidence matrix $$\widetilde{{\bf E}}$$ obtained from the incidence matrix $${\bf E}$$ used to describe the original graph $${\it{\Gamma}}$$. In particular, as $${\bf H} = {\bf L} + {\bf M}$$, where $${\bf L}$$ is the stiffness matrix and $${\bf M}$$ is the mass matrix (or more generally the matrix $${\bf M}_v$$ describing the discretized potential $$v(x)$$; see (3.5)) on the extended graph $$\scr G$$, we will focus on the construction of $${\bf L}$$ and $${\bf M}$$ using $${\bf E}$$. Let us define the matrices

$${\bf E}^+ = \tfrac{1}{2}\bigl({\bf E} + |{\bf E}|\bigr) \quad\text{and}\quad {\bf E}^- = \tfrac{1}{2}\bigl({\bf E} - |{\bf E}|\bigr),$$

where $$|{\bf E}|$$ denotes the entry-wise absolute value of $${\bf E}$$. Note that $${\bf E} = {\bf E}^+ + {\bf E}^-$$.
Lemma 4.1 Let $${\bf E} \in {\mathbb R}^{N \times M}$$ be the incidence matrix describing a graph with $$N$$ vertices and $$M$$ edges without loops. We have

$${\bf E}^+ \bigl({\bf E}^+\bigr)^{\rm T} + {\bf E}^- \bigl({\bf E}^-\bigr)^{\rm T} = {\bf D}, \qquad {\bf E}^+ \bigl({\bf E}^-\bigr)^{\rm T} + {\bf E}^- \bigl({\bf E}^+\bigr)^{\rm T} = -{\bf Ad}, \qquad (4.1)$$

where $${\bf D}$$ and $${\bf Ad}$$ are the degree and adjacency matrix of the graph, respectively.

Proof. First note that the rows of both $${\bf E}^+$$ and $${\bf E}^-$$ are mutually orthogonal, since the splitting separates the incoming edges from the outgoing edges in $${\bf E}$$. Thus, both $${\bf E}^+ \bigl( {\bf E}^+ \bigr)^{\rm T}$$ and $${\bf E}^- \bigl({\bf E}^-\bigr)^{\rm T}$$ are diagonal matrices. Every row in each matrix corresponds to a vertex in the graph and has a number of nonzeros equal to the number of edges that, respectively, have that vertex as origin ($${\bf E}^+$$) or ending ($${\bf E}^-$$). Therefore, the sum of the two diagonal matrices gives the total number of edges incident to each vertex. The second relation is a straightforward consequence of $${\bf E} {\bf E}^{\rm T} = {\bf L}_{{\it{\Gamma}}} = {\bf D} -{\bf Ad}$$ and $${\bf E} = {\bf E}^+ + {\bf E}^-$$. □

Consider now the $$n_e - 1$$ nodes interior to the generic edge $$e$$. Because of their chain structure, to each edge we associate the $$(n_e - 1) \times n_e$$ matrix

$${\bf E}_e = \begin{bmatrix} -1 & 1 & & & \\ & -1 & 1 & & \\ & & \ddots & \ddots & \\ & & & -1 & 1 \end{bmatrix}.$$

Note that, strictly speaking, this is not an incidence matrix, because of the first and last columns. Next, we define the block diagonal matrix $$\bar{{\bf E}} = \mathbf{blkdiag}(\{{\bf E}_e\}_{e\in {\scr E}})$$ of size $$\widetilde{n} \times( \widetilde{n}+M)$$, where $$\widetilde{n} =\sum_{e\in {\scr E}} (n_e - 1)$$.
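Both the splitting $${\bf E} = {\bf E}^+ + {\bf E}^-$$ with the identities (4.1) and the chain blocks $${\bf E}_e$$ can be sketched in a few lines. The following is an illustration of ours on a small example graph of our own choosing, not code from the paper:

```python
import numpy as np

# A small loop-free graph with an arbitrary edge orientation.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
N, M = 4, len(edges)
E = np.zeros((N, M))
for j, (a, b) in enumerate(edges):
    E[a, j], E[b, j] = 1.0, -1.0

# Splitting into positive and negative parts: E = E+ + E-.
Eplus = 0.5 * (E + np.abs(E))
Eminus = 0.5 * (E - np.abs(E))

L = E @ E.T
D = np.diag(np.diag(L))
Ad = D - L

# Lemma 4.1: E+(E+)^T + E-(E-)^T = D and E+(E-)^T + E-(E+)^T = -Ad.
assert np.allclose(Eplus @ Eplus.T + Eminus @ Eminus.T, D)
assert np.allclose(Eplus @ Eminus.T + Eminus @ Eplus.T, -Ad)

def chain_matrix(n_e):
    """(n_e - 1) x n_e chain block E_e: rows [-1, 1] shifted along."""
    Ee = np.zeros((n_e - 1, n_e))
    for i in range(n_e - 1):
        Ee[i, i], Ee[i, i + 1] = -1.0, 1.0
    return Ee
```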
If we denote by $${\bf e}^{n_e}_1$$ and $${\bf e}^{n_e}_{n_e}$$, respectively, the first and the last column of the identity matrix $${\bf I}_{n_e}$$, $$e \in {\scr E}$$, then we can build the $$N\times (\widetilde{n} +M)$$ matrix $$\widehat{{\bf E}}$$ by expanding each column $${\bf E}^+_{j}$$ of $${\bf E}^+$$ into the $$N \times n_{e_j}$$ block

$$\widehat{{\bf E}}^+_j = {\bf E}^+_j \otimes \bigl({\bf e}^{n_{e_j}}_1\bigr)^{\rm T}, \quad (4.2)$$

and each column $${\bf E}^-_{j}$$ of $${\bf E}^-$$ into the $$N \times n_{e_j}$$ block

$$\widehat{{\bf E}}^-_j = {\bf E}^-_j \otimes \bigl({\bf e}^{n_{e_j}}_{n_{e_j}}\bigr)^{\rm T}. \quad (4.3)$$

Then, by adding these matrices, we can construct the new matrix

$$\widehat{{\bf E}} = \bigl[\widehat{{\bf E}}^+_1 + \widehat{{\bf E}}^-_1, \,\dots,\, \widehat{{\bf E}}^+_M + \widehat{{\bf E}}^-_M\bigr]. \quad (4.4)$$

Finally, the incidence matrix of the extended graph $$\scr G$$ is given by

$$\widetilde{{\bf E}} = \begin{bmatrix} \bar{{\bf E}} \\ \widehat{{\bf E}} \end{bmatrix} \in {\mathbb R}^{(\widetilde{n}+N) \times (\widetilde{n}+M)}.$$

In Fig. 2, we give an example of a simple planar graph and its incidence matrix. In Fig. 3, we show the extended graph and its corresponding incidence matrix when four extra nodes are added internally to each edge.

Fig. 2. Example of a simple planar metric graph and its incidence matrix.

Fig. 3. Example of the extension of the graph given in Fig. 2 when four-node chains are added internally to each edge, and its incidence matrix.

Next, we introduce the following block diagonal matrix of weights:

$${\bf W} = \mathbf{blkdiag}\Bigl(\Bigl\{\tfrac{1}{h_e}{\bf I}_{n_e}\Bigr\}_{e\in{\scr E}}\Bigr).$$

The matrix

$${\bf L} = \widetilde{{\bf E}}\,{\bf W}\,\widetilde{{\bf E}}^{\rm T} = \begin{bmatrix} {\bf A} & {\bf B} \\ {\bf B}^{\rm T} & {\bf G} \end{bmatrix}, \quad (4.5)$$

where the leading principal block has order $$\widetilde n$$ and is block diagonal, is the stiffness matrix corresponding to the Laplace operator $$-\frac{\,{\rm d}^2}{\,{\rm d}x^2}$$ on $${\it{\Gamma}}$$. In Fig. 4 we show the nonzero pattern of $${\bf L}$$, where the blue bullets correspond to the internal nodes on each edge and the red ones to the original vertices. Fig. 4.
Pattern of the matrix $${\bf L}$$, where the red bullets correspond to the original vertices and the blue ones to the internal nodes on each edge. See online version for colour.

The matrix $${\bf M}_v$$ describing the discretized potential (which reduces to the mass matrix $${\bf M}$$ for $$v(x)\equiv 1$$) requires the values of the integrals

$$\int_e \psi_j^e\,\psi_k^e\,v(x)\,{\rm d}x, \qquad \int_{{\scr W}_{\mathtt{v}}} \phi_{\mathtt{v}}\,\psi_k^e\,v(x)\,{\rm d}x \quad\text{and}\quad \int_{{\scr W}_{\mathtt{v}}} \phi_{\mathtt{v}}\,\phi_{\mathtt{v}}\,v(x)\,{\rm d}x,$$

which can be computed numerically using either the trapezoidal formula or Simpson's rule (Raviart et al., 1983). In the first case, we need the values of the function $$v(x)$$ at all the vertices of the extended graph, i.e., at each vertex in the set

$$\widehat{\scr V} = {\scr V} \cup \Bigl(\bigcup_e {\scr V}_e\Bigr).$$

In the second case, we also need the values of $$v(x)$$ at each vertex in the set $${\scr V}_{\rm mid} = \bigcup_e \widetilde{\scr V}_e$$, where

$$\widetilde{\scr V}_e = \Bigl\{\frac{x_{j+1}^e + x_j^e}{2}\Bigr\}_{j=1}^{n_e-2} \cup \Bigl\{\frac{v_a^e + x_1^e}{2}\Bigr\} \cup \Bigl\{\frac{v_b^e + x_{n_e-1}^e}{2}\Bigr\}.$$

Let $${\bf K}_e \in {\mathbb R}^{(n_e-1) \times (n_e-1)}$$ be the diagonal matrices $${\bf K}_e = \mathbf{diag}( v({\scr V}_e))$$ for all $$e \in {\scr E}$$, i.e., the matrices having the values of $$v(x)$$ on the set $${\scr V}_e$$ on the main diagonal; let $${\bf K}_{{\scr{E}}} \in {\mathbb R}^{\widetilde{n} \times \widetilde{n} }$$ be the block diagonal matrix $$ {\bf K}_{{\scr{E}}} = \mathbf{blkdiag}\bigl(\{{\bf K}_e\}_{e\in \scr E}\bigr) $$ and $${\bf K}_{\scr{V}} \in {\mathbb R}^{N \times N}$$ be the diagonal matrix $${\bf K}_{\scr{V}} = \mathbf{diag}( v({\scr{V}}) )$$ with the values of $$v(x)$$ on the vertices of the original graph $${\it{\Gamma}}$$ on the main diagonal. Finally, we can assemble the diagonal matrix $${\bf K}_1 \in {\mathbb R}^{(\widetilde{n}+N) \times {(\widetilde{n}+N) }}$$:

$${\bf K}_1 = \mathbf{blkdiag}({\bf K}_{\scr E}, {\bf K}_{\scr V}).$$
Next, we form the diagonal matrix $${\bf K}_2 \in {\mathbb R}^{(\widetilde{n}+M) \times {(\widetilde{n}+M) }}$$ given by $${\bf K}_2 = \mathbf{diag}\bigl( v ( {\scr V}_{\rm mid} ) \bigr)$$ and the matrix

$$\widehat{{\bf W}} = \mathbf{blkdiag}\bigl(\{h_e {\bf I}_{n_e}\}_{e\in{\scr E}}\bigr).$$

Therefore, the mass matrix $${\bf M}$$ and the potential part $${\bf M}_v$$ of the Hamiltonian computed by the Simpson quadrature rule are, respectively,

$${\bf M} = \frac{1}{6}\Bigl(|\widetilde{{\bf E}}|\,\widehat{{\bf W}}\,|\widetilde{{\bf E}}|^{\rm T} + \mathbf{diag}\bigl(\{(|\widetilde{{\bf E}}|\widehat{{\bf W}}|\widetilde{{\bf E}}|^{\rm T})_{i,i}\}_{i=1}^{\widetilde{n}+N}\bigr)\Bigr)$$

and

$${\bf M}_v = \frac{1}{6}\Bigl(|\widetilde{{\bf E}}|\,{\bf K}_2\,\widehat{{\bf W}}\,|\widetilde{{\bf E}}|^{\rm T} + {\bf K}_1\,\mathbf{diag}\bigl(\{(|\widetilde{{\bf E}}|\widehat{{\bf W}}|\widetilde{{\bf E}}|^{\rm T})_{i,i}\}_{i=1}^{\widetilde{n}+N}\bigr)\Bigr).$$

If instead the simple trapezoidal rule is used for the computation of the mass matrix and the potential, then we take the diagonal parts of the two previous matrices. Although the Simpson rule, being exact for quadratic polynomials, gives 'exact' values for the entries of $${\bf M}$$ (and also of $${\bf M}_v$$ when $$v(x) = \nu$$ with $$\nu> 0$$ a constant), the trapezoidal rule will only give approximations that are accurate up to an error of $${\mathscr O}(h^2)$$. Clearly, in the limit $$h\to 0,$$ the bound (3.10) remains valid. We observe that the mass matrix $${\bf M}$$ has a block structure that matches the block structure of $${\bf L}$$ in Theorem 3.1:

$${\bf M} = \begin{bmatrix} {\bf V} & {\bf C} \\ {\bf C}^{\rm T} & {\bf F} \end{bmatrix}. \quad (4.6)$$

In both cases, the block structure pattern is a straightforward consequence of the partitioning of the rows in $$\widetilde{{\bf E}}$$. In Figs 5 and 6, we give the structures of $$\bar{{\bf E}}$$ and $$\widehat{{\bf E}}$$ for the simple graph in Fig. 2.

Fig. 5. Example of a simple planar metric graph: $$\bar{{\bf E}}$$.

Fig. 6. Example of a simple planar metric graph: $$\widehat{{\bf E}}$$.
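The Kronecker-product construction (4.2)–(4.4) of $$\widehat{{\bf E}}$$ and the assembly of $$\widetilde{{\bf E}}$$ can be sketched in a few lines; the triangle graph (edges $$1\to2$$, $$1\to3$$, $$2\to3$$) and the choice $$n_e = 3$$ sub-intervals per edge below are our own illustrative assumptions:

```python
import numpy as np
from scipy.linalg import block_diag

# Hedged sketch of the extended incidence matrix: interior-node rows Ebar,
# vertex rows Ehat (Kronecker expansion), stacked into Etil.
n, M = 3, 3                                        # sub-intervals per edge, edges
E = np.array([[1, 1, 0], [-1, 0, 1], [0, -1, -1]]) # triangle, N = 3
Ep, Em = (E + abs(E)) / 2, (E - abs(E)) / 2

Ee = np.array([[-1, 1, 0], [0, -1, 1]])            # (n-1) x n chain block
Ebar = block_diag(*[Ee] * M)                       # interior-node rows

e1, en = np.eye(n)[:, 0], np.eye(n)[:, -1]
Ehat = np.kron(Ep, e1) + np.kron(Em, en)           # vertex rows, cf. (4.2)-(4.4)

Etil = np.vstack([Ebar, Ehat])                     # incidence matrix of G

# every column (sub-interval) has exactly one +1 and one -1
assert np.all(Etil.sum(axis=0) == 0)
assert np.all(np.abs(Etil).sum(axis=0) == 2)
```

The final assertions confirm that, unlike $$\bar{{\bf E}}$$ alone, the stacked matrix $$\widetilde{{\bf E}}$$ really is the incidence matrix of the extended graph.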
In the matrices $${\bf L}$$ and $${\bf M}$$, the $$N\times \widetilde{n}$$ blocks $${\bf B}^{\rm T}$$ and $${\bf C}^{\rm T}$$ are given by

$${\bf B}^{\rm T} = \widehat{{\bf E}}\,\bar{{\bf E}}^{\rm T}{\bf W}_1 \quad\text{and}\quad {\bf C}^{\rm T} = |\widehat{{\bf E}}|\,|\bar{{\bf E}}|^{\rm T}\widehat{{\bf W}}_1,$$

where $${\bf W}_1$$ and $$\widehat{{\bf W}}_1$$ are the first blocks, corresponding to $$\bar{{\bf E}}^{\rm T}$$ (and its absolute value), of $${\bf W}$$ and $$\widehat{{\bf W}}$$. Because of the lower triangular structure of $$\bar{{\bf E}}^{\rm T}$$ and $$|\bar{{\bf E}}|^{\rm T}$$, the patterns of $${\bf B}^{\rm T}$$ and $${\bf C}^{\rm T}$$ are the same as that of $$\widehat{{\bf E}}$$ (see Figs 5 and 6). Therefore, both $${\bf B}^{\rm T} {\bf B}$$ and $${\bf C}^{\rm T} {\bf C}$$ are diagonal. Finally, we observe that $${\bf G} = \widehat{{\bf E}} {\bf W}_2 \widehat{{\bf E}}^{\rm T}$$, where $${\bf W}_2$$ is the second diagonal block of $${\bf W}$$ corresponding to $$\widehat{{\bf E}}$$, is also a diagonal matrix. Similar considerations show that $${\bf F}$$ is also diagonal. In particular, for the Laplace case with $$n_e = n$$ and $$\ell_e = \ell$$ for all $$e\in{\scr E}$$, we have $$h_e = h$$ and

$${\bf B}^{\rm T}{\bf B} = \frac{1}{h}\,{\bf D} = {\bf G} \quad\text{and}\quad {\bf C}^{\rm T}{\bf C} = \frac{h}{3}\,{\bf D} = {\bf F}. \quad (4.7)$$

Furthermore,

$${\bf B}^{\rm T} = \frac{1}{h}\bigl({\bf E}^+ \otimes ({\bf e}^n_1)^{\rm T} + {\bf E}^- \otimes ({\bf e}^n_n)^{\rm T}\bigr)\bigl({\bf I}_M \otimes {\bf E}_e^{\rm T}\bigr),$$
$${\bf C}^{\rm T} = \frac{h}{6}\bigl(|{\bf E}^+| \otimes ({\bf e}^n_1)^{\rm T} + |{\bf E}^-| \otimes ({\bf e}^n_n)^{\rm T}\bigr)\bigl({\bf I}_M \otimes |{\bf E}_e|^{\rm T}\bigr) = -\frac{h^2}{6}\,{\bf B}^{\rm T}. \quad (4.8)$$

In Section 4.2 we describe the relation between the underlying domain decomposition techniques and the original graph Laplacian.

Remark 4.2 If in the original graph all the $$\ell_e$$ are equal, all the $$n_e$$ are equal to $$n$$ and $$v(x) = \nu$$ with $$\nu$$ a constant in the Hamiltonian (2.2), then the extended graph will produce the matrix $${\bf H} = {\bf L} + \nu {\bf M}$$.
Moreover, if the Hamiltonian (2.2) is given by

$$-\frac{\,{\rm d}}{\,{\rm d}x}\Bigl(\alpha(x)\frac{\,{\rm d}u}{\,{\rm d}x}\Bigr), \qquad \alpha(x) \geq \alpha_0 > 0, \quad \alpha(x) \in L^{\infty}({\it{\Gamma}}),$$

and $$v(x)|_e = \nu_e$$ with $$\nu_e$$ constant on each edge $$e$$ (the Hamiltonian is still continuous and coercive), it suffices to modify the weight matrix $${\bf W}$$ as

$${\bf W} = \mathbf{blkdiag}\Bigl(\Bigl\{\tfrac{1}{h_e}\,\mathbf{diag}\bigl(\{w_j\}_{j=1}^{n_e}\bigr)\Bigr\}_{e\in{\scr E}}\Bigr), \qquad\text{where}\quad w_j = \frac{1}{h_e}\int_{(j-1)h_e}^{jh_e} \alpha(x)\,{\rm d}x,$$

the matrix $$\widehat{{\bf W}}$$ as

$$\widehat{{\bf W}} = \mathbf{blkdiag}\bigl(\{h_e \nu_e {\bf I}_{n_e}\}_{e\in{\scr E}}\bigr),$$

and to proceed as described above to obtain the matrix $${\bf H}$$ approximating the infinite-dimensional Hamiltonian $$\scr H$$. We finally remark that the block structure of $${\bf H}$$ (Theorem 3.1) will be determined by the corresponding block structures of $${\bf L}$$ and $${\bf M}$$.

4.2 Extended graph and domain decomposition

In this section, we investigate the structure of the Schur complement system arising from the elimination of the nodes internal to each edge. We begin by observing that, because of the node ordering scheme used, the matrix $${\bf H}$$ is partitioned as in a (nonoverlapping) domain decomposition approach, where the individual edges $$e\in {\scr E}$$ of the original metric graph are the subdomains and the vertices $${\mathtt{v}} \in {\scr V}$$ are the interfaces between them. The following theorem shows that, under certain conditions, the Schur complement of the discretization of the second derivative operator coincides with the graph Laplacian of the underlying combinatorial graph $${\it{\Gamma}}$$.

Theorem 4.3 Assume that $$n_e = n$$ for all $$e \in {\scr E}$$ and that all edges in the metric graph $${\it{\Gamma}}$$ have the same length $$\ell_e = \ell = 1$$. Assume further that $$v(x) \equiv 0$$. Then,

$${\bf S} := {\bf G} - {\bf B}^{\rm T}{\bf A}^{-1}{\bf B} = {\bf L}_{{\it{\Gamma}}}.$$

Proof. Under our hypotheses, we have $${\bf L} = n\,\widetilde{{\bf E}}\widetilde{{\bf E}}^{\rm T}$$; see (4.5). Because of the partition of the rows of $$\widetilde{{\bf E}}$$ into nodes internal to the edges (corresponding to rows of $$\bar{{\bf E}}$$) and the vertices of the original graph (corresponding to rows of $$\widehat{{\bf E}}$$), we have

$${\bf A} = n\,\bar{{\bf E}}\bar{{\bf E}}^{\rm T}, \qquad {\bf B} = n\,\bar{{\bf E}}\widehat{{\bf E}}^{\rm T}, \qquad {\bf G} = n\,\widehat{{\bf E}}\widehat{{\bf E}}^{\rm T} = n{\bf D}.$$
(4.10)

The nonzero pattern of the Schur complement $${\bf S} = {\bf G} - {\bf B}^{\rm T}{\bf A}^{-1}{\bf B}$$ coincides with that of

$${\bf B}^{\rm T}{\bf A}^{-1}{\bf B} = n\,\widehat{{\bf E}}\bigl(\bar{{\bf E}}^{\rm T}(\bar{{\bf E}}\bar{{\bf E}}^{\rm T})^{-1}\bar{{\bf E}}\bigr)\widehat{{\bf E}}^{\rm T}.$$

We point out that the matrix $$n \, \bar{{\bf E}}^{\rm T} \bigl( \bar{{\bf E}} \bar{{\bf E}}^{\rm T} \bigr)^{-1} \bar{{\bf E}}$$ is block diagonal and that each of the $$M$$ blocks can be easily computed for our simple Hamiltonian:

$$n\,\bar{{\bf E}}^{\rm T}(\bar{{\bf E}}\bar{{\bf E}}^{\rm T})^{-1}\bar{{\bf E}} = {\bf I}_M \otimes {\bf T}, \quad (4.11)$$

with

$${\bf T} = n\,{\bf I}_n - {\rm 1}\kern-0.24em{\rm I}_n\,{\rm 1}\kern-0.24em{\rm I}_n^{\rm T},$$

where $${\rm 1}\kern-0.24em{\rm I}_j$$ is the vector of all ones of dimension $$j$$. Moreover, from (4.4), the matrix $$\widehat{{\bf E}}$$ is a stretching of the incidence matrix of the original graph. Under our current hypotheses, it is given by

$$\widehat{{\bf E}} = {\bf E}^+ \otimes ({\bf e}^{n}_1)^{\rm T} + {\bf E}^- \otimes ({\bf e}^{n}_{n})^{\rm T}, \quad (4.12)$$

where $${\bf e}^{n}_1 \in {\mathbb R}^{n}$$ and $${\bf e}^{n}_{n} \in {\mathbb R}^{n}$$ are, respectively, the first and the last column of the identity matrix. The Schur complement is given by

$$\begin{aligned} {\bf S} &= {\bf G} - \bigl({\bf E}^+ \otimes ({\bf e}^{n}_1)^{\rm T} + {\bf E}^- \otimes ({\bf e}^{n}_{n})^{\rm T}\bigr)({\bf I}_M \otimes {\bf T})\bigl({\bf E}^+ \otimes ({\bf e}^{n}_1)^{\rm T} + {\bf E}^- \otimes ({\bf e}^{n}_{n})^{\rm T}\bigr)^{\rm T} \\ &= {\bf G} - \bigl({\bf E}^+ \otimes ({\bf e}^{n}_1)^{\rm T} + {\bf E}^- \otimes ({\bf e}^{n}_{n})^{\rm T}\bigr)\bigl(({\bf E}^+)^{\rm T} \otimes {\bf T}{\bf e}^{n}_1 + ({\bf E}^-)^{\rm T} \otimes {\bf T}{\bf e}^{n}_{n}\bigr) \\ &= {\bf G} - \bigl({\bf E}^+({\bf E}^+)^{\rm T} \otimes ({\bf e}^{n}_1)^{\rm T}{\bf T}{\bf e}^{n}_1 + {\bf E}^-({\bf E}^+)^{\rm T} \otimes ({\bf e}^{n}_{n})^{\rm T}{\bf T}{\bf e}^{n}_1 \\ &\qquad\quad + {\bf E}^+({\bf E}^-)^{\rm T} \otimes ({\bf e}^{n}_1)^{\rm T}{\bf T}{\bf e}^{n}_{n} + {\bf E}^-({\bf E}^-)^{\rm T} \otimes ({\bf e}^{n}_{n})^{\rm T}{\bf T}{\bf e}^{n}_{n}\bigr). \end{aligned}$$

Moreover, from the relations

$$({\bf e}^{n}_1)^{\rm T}{\bf T}{\bf e}^{n}_1 = ({\bf e}^{n}_{n})^{\rm T}{\bf T}{\bf e}^{n}_{n} = n-1 \quad\text{and}\quad ({\bf e}^{n}_{n})^{\rm T}{\bf T}{\bf e}^{n}_1 = ({\bf e}^{n}_1)^{\rm T}{\bf T}{\bf e}^{n}_{n} = -1,$$

and from Lemma 4.1, taking into account that $${\bf G} = n\, \widehat{{\bf E}} \widehat{{\bf E}}^{\rm T}$$, we have

$${\bf S} = {\bf G} - (n-1)\bigl({\bf E}^+({\bf E}^+)^{\rm T} + {\bf E}^-({\bf E}^-)^{\rm T}\bigr) + \bigl({\bf E}^-({\bf E}^+)^{\rm T} + {\bf E}^+({\bf E}^-)^{\rm T}\bigr) = n{\bf D} - (n-1){\bf D} - {\bf Ad} = {\bf E}{\bf E}^{\rm T} = {\bf L}_{{\it{\Gamma}}}.$$

This completes the proof. □

In Fig. 7, we display the sparsity patterns of $${\bf B}^{\rm T}$$ and of the product $${\bf A}^{-1} {\bf B}$$ for the simple example considered in Figs 2 and 3.

Fig. 7. Example of a simple planar metric graph: $${\bf B}^{\rm T}$$ (top) and $${\bf A}^{-1} {\bf B}$$ (bottom).

Hence, we have shown that upon elimination of the internal nodes on the edges, the resulting Schur complement reduces to the combinatorial Laplacian associated with the original graph. In particular, $${\bf S}$$ is sparse.
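Theorem 4.3 is straightforward to confirm numerically; the following hedged sketch does so on a small triangle graph of our own choosing (unit edge lengths, $$n = 3$$ sub-intervals per edge, $$v\equiv 0$$), assembling $${\bf L} = n\,\widetilde{{\bf E}}\widetilde{{\bf E}}^{\rm T}$$ and comparing the Schur complement with $${\bf E}{\bf E}^{\rm T}$$:

```python
import numpy as np
from scipy.linalg import block_diag

# Hedged numerical check of Theorem 4.3: S = L_Gamma for the triangle
# graph (edges 1->2, 1->3, 2->3); all sizes are illustrative choices.
n, M = 3, 3
E = np.array([[1, 1, 0], [-1, 0, 1], [0, -1, -1]])
Ep, Em = (E + abs(E)) / 2, (E - abs(E)) / 2

Ee = np.array([[-1, 1, 0], [0, -1, 1]])            # (n-1) x n chain block
Ebar = block_diag(*[Ee] * M)
Ehat = np.kron(Ep, np.eye(n)[:, 0]) + np.kron(Em, np.eye(n)[:, -1])
Etil = np.vstack([Ebar, Ehat])

L = n * Etil @ Etil.T                  # stiffness matrix, h = 1/n
nt = Ebar.shape[0]                     # number of interior nodes
A, B, G = L[:nt, :nt], L[:nt, nt:], L[nt:, nt:]

S = G - B.T @ np.linalg.solve(A, B)    # Schur complement
L_Gamma = E @ E.T                      # combinatorial graph Laplacian

assert np.allclose(S, L_Gamma)         # S coincides with L_Gamma (and is sparse)
```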
This fact is important enough to warrant further comment. Indeed, since the inverse of an irreducible tridiagonal matrix is full (Meurant, 1992), a priori one could have expected the Schur complement $${\bf S} = {\bf G} - {\bf B}^{\rm T} {\bf A}^{-1}{\bf B}$$ to incur significant fill-in, similar to what happens when solving discretized elliptic PDEs on two-dimensional and three-dimensional domains. The fact that $${\bf S}$$ is sparse has important implications for the solution of the discretized equations; this will be discussed further in Section 7. We observe that the Schur complement remains sparse even if we drop the assumptions that all edges of the metric graph $${\it{\Gamma}}$$ have equal length and that the same number of interior nodes is used in discretizing each edge. In this case, however, it is no longer true that $${\bf S}$$ coincides with the graph Laplacian $${\bf L}_{{\it{\Gamma}}}$$.

Remark 4.4 In Theorem 4.3, we have assumed that $$v(x)\equiv 0$$. In the general case, where we have a more complex Hamiltonian, the Schur complement will contain additional information; indeed, the values at a vertex $${\mathtt{v}}$$ now take into account the contributions of the solutions on the edges incident to it. However, the above argument shows that the nonzero pattern of the resulting $${\bf S}$$ will still coincide with the nonzero pattern of $${\bf L}_{{\it{\Gamma}}}$$. Additional details are given in the appendix.

5. Conditioning of the mass matrix

In this section, we examine the conditioning of the mass matrix $${\bf M}$$ in terms of $$h$$ and of the maximum degree of $${\it{\Gamma}}$$. It is well known (Wathen, 1987) that for a number of different types of finite elements in one, two and three dimensions, the condition number of the mass matrix is independent of $$h$$, and that a simple diagonal scaling of the mass matrix yields a well-conditioned matrix. This fact will be useful when we discuss the solution of parabolic problems.
In our situation, the condition number of $${\bf M}$$ is also independent of $$h$$. However, in the case of complex graphs, the spectrum can be quite spread out. With the scaling adopted in this article, and assuming for simplicity a constant $$h$$ throughout the graph, the eigenvalues range between $${\scr O}(h)$$ and $${\scr O}(d_{\max} h)$$, where $$d_{\max}$$ is the maximum degree of a vertex in $${\it{\Gamma}}$$. More precisely, we have the following result.

Theorem 5.1 Let $${\it{\Gamma}}$$ be a graph having $$N$$ vertices and $$M$$ edges and with at least one node $${{\mathtt{v}}_i}$$ of degree $$d _{{\mathtt{v}}_i} > 6$$. Let $${\bf M}$$ be the mass matrix relative to a piecewise linear, continuous finite-element approximation of the $$L^2({\it{\Gamma}})$$ norm using $$n$$ nodes on each edge of $${\it{\Gamma}}$$, and $$h =\frac{1}{n+1}$$. Then the spectrum $${\bf{sp}}({\bf M})$$ of $${\bf M}$$ satisfies

$${\bf{sp}}({\bf M}) \subset \frac{h}{6}\bigl[{\scr O}(1), {\scr O}(d_{\max})\bigr].$$

Proof. The proof is a simple application of Gershgorin's theorem. Let $$\widetilde{{\bf M}} = 6(n+1) {\bf M}$$. The diagonal entries $$\widetilde{{\bf M}}_{ii}$$ for $$i=1, \dots , nM$$ are equal to $$4$$, and the corresponding off-diagonal row sums are equal to $$2$$ (note that the entries of $${\bf M}$$ are non-negative). The last $$N$$ diagonal entries $$\widetilde{{\bf M}}_{ii}, \;i=nM+1, \dots , nM+N,$$ are equal to $$2 d_{{\mathtt{v}}_i}$$, and the corresponding off-diagonal row sums are equal to $$d_{{\mathtt{v}}_i}$$. Therefore, under the hypothesis $$d _{{\mathtt{v}}_i} > 6$$, the spectrum of $$\widetilde{{\bf M}}$$ lies in the union of two disjoint sets, and at least one eigenvalue lies in the union of the Gershgorin discs corresponding to the highest-degree vertices. □

Remark 5.2 If the highest degree of a vertex in $${\it{\Gamma}}$$ is much larger than the others, then one eigenvalue of $$\widetilde{{\bf M}}$$ is of the same order as that degree.
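The spread of the mass-matrix spectrum with the maximum degree can be seen on a star graph; the following hedged sketch (hub degree $$8 > 6$$, $$n = 4$$ sub-intervals per edge, all our own choices) assembles the Simpson-rule mass matrix as in Section 4.1 and checks that at least one eigenvalue is of order $$d_{\max}h$$:

```python
import numpy as np
from scipy.linalg import block_diag

# Hedged illustration of Theorem 5.1 on a star graph K_{1,8}; the mesh
# convention (n sub-intervals, h = 1/n) and sizes are illustrative.
d, n = 8, 4
h = 1.0 / n
E = np.zeros((d + 1, d))
E[0, :] = 1                                  # hub is the origin of every edge
E[1:, :] = -np.eye(d)                        # one leaf per edge

Ep, Em = (E + abs(E)) / 2, (E - abs(E)) / 2
Ee = -np.eye(n - 1, n) + np.eye(n - 1, n, 1) # (n-1) x n chain block
Ebar = block_diag(*[Ee] * d)
Ehat = np.kron(Ep, np.eye(n)[:, 0]) + np.kron(Em, np.eye(n)[:, -1])
Etil = np.vstack([Ebar, Ehat])

P = np.abs(Etil) @ (h * np.eye(Etil.shape[1])) @ np.abs(Etil).T
M = (P + np.diag(np.diag(P))) / 6            # Simpson-rule mass matrix
eigs = np.linalg.eigvalsh(M)

assert eigs.min() > 0                        # M is SPD
assert eigs.max() >= d * h / 6               # one eigenvalue of order d_max * h
assert eigs.max() / eigs.min() > 4           # spectrum noticeably spread out
```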
Numerical experiments on graphs with a power-law degree distribution indicate that the condition number of the mass matrix $$\widetilde{{\bf M}}$$ grows like $${\scr O}(d_{\max})$$ as $$N$$, and therefore $$d_{\max}$$, increases. We note that this phenomenon has no analog in the 'standard' finite element theory, because the requirement that the triangles or tetrahedra satisfy the minimum angle condition (Raviart et al., 1983, p. 109) imposes a bound on the maximum number of edges incident to a vertex, independent of $$N$$.

6. A generalized eigenvalue problem

In this section, we analyse the generalized eigenvalue problem

$${\bf H}{\bf w} = \lambda {\bf M}{\bf w}. \quad (6.1)$$

The finite element discretization of the eigenvalue problem $${\scr H} u = \lambda u$$ leads to algebraic eigenvalue problems of the form (6.1). In particular, for $$v(x) = 0$$ the eigensolutions $$(\lambda, {\bf w})$$ of (6.1) are approximations of the eigenvalues and eigenfunctions of the simple Hamiltonian $${\scr H} = -\frac{\,{\rm d}^2}{\,{\rm d}x^2}$$ (we assume Neumann–Kirchhoff conditions throughout). This operator has discrete spectrum and a complete orthonormal basis of eigenfunctions (as a special case of Berkolaiko & Kuchment, 2013, Theorem 3.1.1). Note that $${\bf H} = {\bf L}$$ is positive semidefinite, and the kernel of $${\bf H}$$ is spanned by the vector of all ones. Our motivation for the study of (6.1) is that it plays a central role in the solution of diffusion problems on $${\it{\Gamma}}$$. The spectrum of a quantum graph is also of fundamental interest in scattering theory, photonics, quantum chaos, and so forth (Berkolaiko & Kuchment, 2013). For any sufficiently small, fixed $$h$$, only the leftmost part of the spectrum of $$\scr H$$, which is unbounded above, is well approximated by the corresponding eigenvalues of (6.1). However, in many problems, these are the only eigenvalues of interest.
In the study of diffusion processes on combinatorial graphs $${\it{\Gamma}}$$, the behavior of the solution is essentially determined by the spectrum of the graph Laplacian $${\bf L}_{{\it{\Gamma}}}$$, which, to a large extent, reflects structural properties of $${\it{\Gamma}}$$. An important question is therefore to determine whether the eigenvalues of the graph Laplacian bear any relation to the spectrum of the Laplace operator $${\scr H} = -\frac {\,{\rm d}^2}{\,{\rm d}x^2}$$, with Neumann–Kirchhoff conditions, on the quantum graph constructed from $${\it{\Gamma}}$$. The spectral analysis of quantum graphs is generally quite challenging (see Berkolaiko & Kuchment, 2013). A simple result is the following.

Theorem 6.1 Let $$\mu_j$$ and $${\bf q}_j$$, $$j = 1, \dots , (n-1) M$$, be the eigenvalues and eigenvectors of the symmetric positive definite pencil $$\bigl({\bf H}_{11},{\bf V}\bigr)$$, i.e.,

$${\bf H}_{11}{\bf q}_j = \mu_j {\bf V}{\bf q}_j \quad \forall j.$$

We have that $${\lambda}_* \not\in \bigl\{ \mu_j \bigr\}_{j=1}^{(n-1)M}$$ is an eigenvalue of

$${\bf H}{\bf x} = \lambda {\bf M}{\bf x}$$

if and only if $${\lambda}_*$$ is a root of the algebraic (rational) equation $$\det {\bf T}(\lambda) = 0$$, where

$${\bf T}(\lambda) = {\bf H}_{22} - \lambda{\bf F} - ({\bf H}_{12} - \lambda{\bf C})^{\rm T}({\bf H}_{11} - \lambda{\bf V})^{-1}({\bf H}_{12} - \lambda{\bf C}), \quad (6.2)$$

i.e., if and only if there exists $${\bf y} \neq \bf 0$$ such that $${\bf T}(\lambda_* ) {\bf y} =\bf 0$$.

Proof. Given the generalized eigenvalue problems for the symmetric positive definite pencils $$\bigl({\bf H},{\bf M}\bigr)$$ and $$\bigl({\bf H}_{11},{\bf V}\bigr)$$, if $$\lambda$$ is not an eigenvalue of the pencil $$\bigl({\bf H}_{11},{\bf V}\bigr)$$, the matrix $${\bf H}_{11} - \lambda {\bf V}$$ is nonsingular. If we partition the eigenvectors of the pencil $$\bigl({\bf H},{\bf M}\bigr)$$ conformally to the block structure of $${\bf H}$$ and $${\bf M}$$, we have

$${\bf H}_{11}{\bf q}_1 + {\bf H}_{12}{\bf q}_2 = \lambda({\bf V}{\bf q}_1 + {\bf C}{\bf q}_2), \quad (6.3)$$
$${\bf H}_{12}^{\rm T}{\bf q}_1 + {\bf H}_{22}{\bf q}_2 = \lambda({\bf C}^{\rm T}{\bf q}_1 + {\bf F}{\bf q}_2). \quad (6.4)$$

Thus, we have from (6.3) that

$$({\bf H}_{11} - \lambda{\bf V}){\bf q}_1 = (\lambda{\bf C} - {\bf H}_{12}){\bf q}_2 \qquad\text{and}\qquad {\bf q}_1 = ({\bf H}_{11} - \lambda{\bf V})^{-1}(\lambda{\bf C} - {\bf H}_{12}){\bf q}_2.$$

Substituting $${\bf q}_1$$ in (6.4), we have

$$\bigl({\bf H}_{12}^{\rm T}({\bf H}_{11} - \lambda{\bf V})^{-1}(\lambda{\bf C} - {\bf H}_{12}) + {\bf H}_{22}\bigr){\bf q}_2 = \lambda\bigl({\bf C}^{\rm T}({\bf H}_{11} - \lambda{\bf V})^{-1}(\lambda{\bf C} - {\bf H}_{12}) + {\bf F}\bigr){\bf q}_2.$$
Rearranging terms, we obtain the desired result. □

We note that this result is not new, being known in the context of algebraic substructuring methods for large-scale linear eigenvalue problems (see, e.g., Bekas & Saad, 2005) and in the approximation of certain resonance problems (Bindel & Hood, 2013). What is of interest here is that Theorem 6.1 has several continuous (i.e., infinite-dimensional) counterparts, described in Kuchment (2004, 2005). For self-adjoint Hamiltonians, a $${\lambda}_*$$ that is not in the spectrum of the Hamiltonian restricted to any edge of $${\it{\Gamma}}$$ will be an eigenvalue if and only if it is a root of an algebraic equation obtained by imposing certain conditions at each vertex of $${\it{\Gamma}}$$, just as in Theorem 6.1 above. It is important to remark that, for diffusion problems, the difference between the global eigenvalues of $${\bf L}$$ on the extended graph and the eigenvalues of the combinatorial graph Laplacian $${\bf L}_{{\it{\Gamma}}}$$ can make the quantum graph version of the problem more challenging and richer, insofar as the behavior of the solution on the graph, seen as a quantum graph, is more difficult to predict a priori. This is due to the possible occurrence of 'Dirichlet eigenvalues' associated with the edges, namely eigenvalues of the pencil $$\bigl({\bf H}_{11},{\bf V}\bigr)$$ that are also eigenvalues of the pencil $$({\bf H}, {\bf M})$$ (see Kuchment, 2004). Also, it is clear that while the spectrum of $${\bf L}_{{\it{\Gamma}}}$$ is necessarily bounded for a fixed $$N$$, the spectrum of the discrete Hamiltonian is unbounded above as $$h$$ is refined and $$N$$ is kept fixed, reflecting the unboundedness of the infinite-dimensional Hamiltonian $$\scr H$$.
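Theorem 6.1 is also easy to check numerically on a small example. The following hedged sketch uses a random SPD pencil with an arbitrary $$2+2$$ block partition (all data are illustrative, not from the paper) and verifies that $${\bf T}(\lambda)$$ of (6.2) is singular at an eigenvalue of $$({\bf H},{\bf M})$$:

```python
import numpy as np
from scipy.linalg import eigh

# Hedged numerical check of Theorem 6.1: T(lambda) must be singular at an
# eigenvalue of the SPD pencil (H, M). The random 4x4 pencil and the
# 2 + 2 partition are illustrative choices.
rng = np.random.default_rng(0)
k = 4
X = rng.standard_normal((k, k)); H = X @ X.T + k * np.eye(k)
Y = rng.standard_normal((k, k)); M = Y @ Y.T + k * np.eye(k)

lam = eigh(H, M, eigvals_only=True)[0]       # smallest eigenvalue of (H, M)

H11, H12, H22 = H[:2, :2], H[:2, 2:], H[2:, 2:]
V,   C,   F   = M[:2, :2], M[:2, 2:], M[2:, 2:]

# T(lambda) from (6.2)
T = H22 - lam * F \
    - (H12 - lam * C).T @ np.linalg.solve(H11 - lam * V, H12 - lam * C)

s = np.linalg.svd(T, compute_uv=False)
assert s[-1] < 1e-8 * s[0]                   # T(lambda) is numerically singular
```

Generically, the chosen $$\lambda$$ is not an eigenvalue of the sub-pencil $$({\bf H}_{11},{\bf V})$$, so the inner solve is well defined.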
Some numerical comparisons between the eigenvalues of $${\bf L}_{{\it{\Gamma}}}$$ and those of the simple Hamiltonian $${\scr H} = - \frac{\,{\rm d}^2}{\,{\rm d}x^2}$$ for a few small graphs can be found in Section 8.2, where the behavior of the eigenfunctions of $${\scr H}$$ is also shown.

7. Solution of the discretized equations

In this section, we discuss the solution of two types of problems arising from the finite element discretization of quantum graphs, namely:

(1) systems of linear algebraic equations $${{\bf H}} {{\bf u}}_h = {{\bf f}}_h$$;

(2) initial value problems for systems of linear ODEs.

Problem (1) arises in particular when solving variational problems of the form (3.3) using finite element methods. Problem (2) arises from the semidiscretization of (the weak form of) parabolic PDEs on $${\it{\Gamma}}$$: given $$u_0 \in H^1({\it{\Gamma}})$$ and $$f \in L^2\bigl([0,T];L^2({\it{\Gamma}})\bigr)$$, find $$u \in L^2\bigl( [0,T];H^1({\it{\Gamma}})\bigr) \cap C^0\bigl([0,T];H^1({\it{\Gamma}}) \bigr)$$ such that

$$\begin{cases} \dfrac{\partial u}{\partial t} - \dfrac{\partial^2 u}{\partial x^2} + m\,u = f & \text{on } {\it{\Gamma}}\times[0,T], \\ u(x,0) = u_0, & x \in {\it{\Gamma}}, \end{cases} \quad (7.1)$$

where $$m \ge 0$$ (see also Definition 2.5). We further assume that $$u(\cdot, t)$$ satisfies the Neumann–Kirchhoff conditions at the vertices $${\mathtt{v}}$$ of $${\it{\Gamma}}$$ for all $$t\in [0,T]$$. The need for solving large linear systems on the extended graph $$\mathscr G$$ also arises in the solution of problem (7.1) by fully implicit methods, and in the solution of generalized eigenvalue problems of the form (6.1) when shift-and-invert methods are used.

7.1 Solution of linear algebraic systems

We assume again, for simplicity of notation, that each edge $$e\in \mathscr E$$ contains the same number $$n-1$$ of internal nodes. For solving linear systems of the form $${{\bf H}} {{\bf u}}_h = {{\bf f}}_h$$ corresponding to the extended graph $$\mathscr G$$, we make use of the block LU factorization

$${\bf H} = \begin{bmatrix} {\bf H}_{11} & {\bf H}_{12} \\ {\bf H}_{12}^{\rm T} & {\bf H}_{22} \end{bmatrix} = \begin{bmatrix} {\bf H}_{11} & {\bf O} \\ {\bf H}_{12}^{\rm T} & {\bf S} \end{bmatrix} \begin{bmatrix} {\bf I}_{(n_e-1)M} & {\bf H}_{11}^{-1}{\bf H}_{12} \\ {\bf O} & {\bf I}_N \end{bmatrix}.$$
(7.2)

We recall that when the potential $$v(x)$$ is positive on the (metric) graph $${\it{\Gamma}}$$, the matrix $${{\bf H}}$$ is guaranteed to be positive definite. In particular, both $${{\bf H}}_{11}$$ and the Schur complement

$${\bf S} = {\bf H}_{22} - {\bf H}_{12}^{\rm T}{\bf H}_{11}^{-1}{\bf H}_{12}$$

are symmetric and positive definite (SPD). The factorization (7.2) corresponds to the following block elimination procedure, frequently used in domain decomposition algorithms. First, the $$N\times N$$ reduced linear system

$${\bf S}\,{\bf u}_h^{{\mathtt{v}}} = {\bf f}_h^{{\mathtt{v}}} - {\bf H}_{12}^{\rm T}{\bf H}_{11}^{-1}{\bf f}_h^e \equiv {\bf c}_h \quad (7.3)$$

is solved, where $${{\bf u}}_h^{{\mathtt{v}}}$$ and $${{\bf f}}_h^{{\mathtt{v}}}$$ are the values of the discrete solution $${{\bf u}}_h$$ and external load $${{\bf f}}_h$$ at the vertices $${\mathtt{v}}\in \mathscr V$$, and $${{\bf f}}_h^e$$ are the values of $${{\bf f}}_h$$ at the nodes internal to the edges $$e\in \mathscr E$$. Next, the values of the solution $${{\bf u}}_h^e$$ at the internal nodes are obtained by solving the $$(n_e-1)M\times (n_e-1)M$$ linear system

$${\bf H}_{11}{\bf u}_h^e = {\bf f}_h^e - {\bf H}_{12}{\bf u}_h^{{\mathtt{v}}}. \quad (7.4)$$

As already observed, the coefficient matrix $${{\bf S}}$$ of (7.3) is sparse, and it can be constructed explicitly. The resulting linear system can be solved either by a direct solver (sparse Cholesky factorization) or by an iterative method, such as PCG. In the case of PCG, the explicit assembling of the Schur complement can be avoided. If the $$M$$ diagonal blocks of $${{\bf H}}_{11}$$ are factored at the outset, then each matrix–vector product with $${{\bf S}}$$ involves, at each PCG iteration, a diagonal scaling (with $${{\bf H}}_{22}$$), two sparse matrix–vector products (with $${{\bf H}}_{12}$$ and $${{\bf H}}_{12}^{\rm T}$$) and $$M$$ independent tridiagonal solves (using the precomputed factorizations).
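The two-step procedure (7.3)–(7.4) is a generic block elimination; the following hedged sketch applies it to an arbitrary SPD matrix with an illustrative $$6+3$$ interior/vertex splitting (a stand-in, not an assembled quantum-graph Hamiltonian):

```python
import numpy as np

# Hedged sketch of the block elimination (7.2)-(7.4): solve the reduced
# Schur system for the vertex values, then back-substitute for the
# interior values. Matrix, right-hand side and sizes are illustrative.
rng = np.random.default_rng(1)
k, nt = 9, 6                                  # total size, interior block size
X = rng.standard_normal((k, k))
H = X @ X.T + k * np.eye(k)                   # SPD stand-in for the Hamiltonian
f = rng.standard_normal(k)

H11, H12, H22 = H[:nt, :nt], H[:nt, nt:], H[nt:, nt:]
fe, fv = f[:nt], f[nt:]

S = H22 - H12.T @ np.linalg.solve(H11, H12)   # Schur complement
c = fv - H12.T @ np.linalg.solve(H11, fe)     # reduced right-hand side (7.3)
uv = np.linalg.solve(S, c)                    # vertex values
ue = np.linalg.solve(H11, fe - H12 @ uv)      # interior values (7.4)

u = np.concatenate([ue, uv])
assert np.allclose(H @ u, f)                  # block solve reproduces H u = f
```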
Although this procedure is more expensive than a matrix–vector product with the assembled $${{\bf S}}$$, it has the advantage that it can be more easily performed in parallel in a distributed environment. The linear system (7.4), which consists of $$M$$ completely uncoupled tridiagonal linear systems, can be solved in $$O(n_eM)$$ time (or less if parallelism is exploited), essentially the cost of one CG iteration on (7.3) if the Schur complement is not explicitly assembled. Note that in practice, moderate values of $$n_e$$ may suffice for a sufficiently accurate solution. On the other hand, $$M$$ may be very large for a large graph $${\it{\Gamma}}$$. Clearly, the critical step is the solution of the reduced system (7.3). A sparse Cholesky factorization may be appropriate if $$N$$ is not too large, but one should keep in mind that in the case of complex graphs such as small-world and power-law graphs (see Newman, 2010), the Cholesky factor will frequently incur enormous amounts of fill-in, regardless of the ordering used. This may discourage the use of a sparse direct solver even for moderate values of $$N$$; see Benzi & Kuhlemann (2013) for an example from computational biology. Thus, here we focus instead on iterative solvers, particularly on PCG. As is well known, the rate of convergence of the CG algorithm depends on the distribution of the eigenvalues of the coefficient matrix, and the key to rapid convergence is the choice of an appropriate preconditioner. A wide variety of algebraic preconditioners is available, including preconditioners based on splittings of the coefficient matrix, incomplete factorizations and algebraic multigrid (AMG) methods (Benzi, 2002; Saad, 2003). For relatively simple graph topologies, incomplete Cholesky preconditioning or AMG can be expected to give good results; however, for such problems, a sparse direct solver may be the best choice, especially if the original graph $${\it{\Gamma}}$$ is planar.
In the case of large, complex graphs, however, where direct solvers are not an option, incomplete Cholesky factorization turns out not to be competitive (Benzi & Kuhlemann, 2013). It turns out, however, that when $${\it{\Gamma}}$$ is a graph with a power-law degree distribution, information about the eigenvalue distribution of the graph Laplacian $${{\bf L}}_{{\it{\Gamma}}}$$ is available, which suggests the use of a simple diagonal preconditioner. In particular, it is known (Chung & Lu, 2006) that under certain conditions on $${\it{\Gamma}}$$, the nonzero eigenvalues of the normalized Laplacian

$$\widehat{{\bf L}}_{{\it{\Gamma}}} = {\bf D}^{-\frac12}{\bf L}_{{\it{\Gamma}}}{\bf D}^{-\frac12} = {\bf I}_N - {\bf D}^{-\frac12}{\bf Ad}\,{\bf D}^{-\frac12}$$

lie in a small interval $$(1-\delta, 1+\delta)$$ with $$0 < \delta < 1$$ independent of $$N$$. This guarantees that diagonally preconditioned CG will converge rapidly, with a rate that is independent of the number of vertices $$N$$ in $${\it{\Gamma}}$$, when applied to linear systems involving the graph Laplacian, and we expect a similar behavior when solving systems of the form (7.3) whenever $${{\bf S}}$$ is in some sense 'close' to $${{\bf L}}_{{\it{\Gamma}}}$$. We will return to this topic in Section 8. Another simple preconditioner that can be easily implemented is the first-degree polynomial preconditioner given by

$${\bf P}^{-1} = {\bf D}^{-1} + {\bf D}^{-1}({\bf D} - {\bf S}){\bf D}^{-1} \approx {\bf S}^{-1}. \quad (7.5)$$

This approximation is obtained from the identity

$${\bf S}^{-1} = \bigl({\bf I}_N - {\bf D}^{-1}({\bf D} - {\bf S})\bigr)^{-1}{\bf D}^{-1}$$

by truncating to first order the Neumann series expansion

$$\bigl({\bf I}_N - {\bf D}^{-1}({\bf D} - {\bf S})\bigr)^{-1} = \sum_{k=0}^{\infty}\bigl({\bf D}^{-1}({\bf D} - {\bf S})\bigr)^k.$$

This preconditioner has been found to be effective in solving certain Laplacian-type linear systems stemming from complex network analysis; see Salkuyeh et al. (2014). It is only slightly more expensive than the diagonal one. For a typical sparse problem averaging about five nonzeros per row in $${{\bf S}}$$, the cost of each PCG iteration is about $$50\%$$ higher than with diagonal scaling, but convergence is faster.
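A hedged sketch of (7.5) in action follows; the Laplacian-type path-graph test matrix $${\bf S}$$ and the scipy CG driver are illustrative choices, not the paper's setup:

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import cg, LinearOperator

# Hedged sketch: first-degree polynomial preconditioner (7.5) inside CG.
# S = path-graph Laplacian + 0.1 I is an illustrative SPD stand-in for
# the Schur complement; D is its diagonal.
N = 50
L = diags([-np.ones(N - 1), 2 * np.ones(N), -np.ones(N - 1)], [-1, 0, 1])
S = csr_matrix(L + 0.1 * diags(np.ones(N)))
b = np.ones(N)

d = S.diagonal()
def apply_Pinv(r):
    # P^{-1} r = D^{-1} r + D^{-1} (D - S) D^{-1} r
    y = r / d
    return y + (d * y - S @ y) / d

Pinv = LinearOperator((N, N), matvec=apply_Pinv)
u, info = cg(S, b, M=Pinv)

assert info == 0                                             # CG converged
assert np.linalg.norm(S @ u - b) <= 1e-4 * np.linalg.norm(b) # small residual
```

Note that $${\bf P}^{-1} = {\bf D}^{-1}(2{\bf D}-{\bf S}){\bf D}^{-1}$$ is symmetric, and it is positive definite here because $$2{\bf D}-{\bf S}$$ is diagonally dominant.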
In Section 8, we present the results of numerical experiments using diagonal and polynomial preconditioning on a few different types of graphs.

7.2 Solution of systems of ODEs

Semidiscretization of the weak form of (7.1) leads to initial value problems of the form

$${\bf M}\dot{{\bf u}}_h = -{\bf H}{\bf u}_h + {\bf f}_h, \qquad {\bf u}_h(0) = {\bf u}_{h,0}, \quad (7.6)$$

where the dot denotes differentiation with respect to time. Let us consider for simplicity the case of the heat equation with $$f = 0$$ and $$m$$ independent of time. Then, $${{\bf f}}_h = \bf 0$$ and the solution of (7.6) can be given explicitly in terms of matrix exponentials:

$${\bf u}_h(t) = \exp(-t{\bf M}^{-1}{\bf H})\,{\bf u}_{h,0} \quad \forall t\in[0,T]. \quad (7.7)$$

Approximations to the exact semidiscrete solution (7.7) can be obtained in various ways. Here, we consider time-stepping methods and exponential integrators based on Krylov subspace approximations. Besides explicit methods, which we do not discuss here due to the well-known restrictions on the time step required for stability, the most commonly used time-stepping methods are backward Euler and Crank–Nicolson. These methods require at each step the solution of linear systems with coefficient matrices

$${\bf I} + {\it{\Delta}}t\,{\bf M}^{-1}{\bf H} \quad\text{and}\quad {\bf I} + \frac{{\it{\Delta}}t}{2}\,{\bf M}^{-1}{\bf H}, \quad (7.8)$$

respectively, where $${\it{\Delta}} t > 0$$ is the time step. On the other hand, Krylov-based exponential integrators compute directly, for any value of $$t\in (0,T]$$, an approximation of (7.7) by projecting the problem onto a suitable low-dimensional subspace and forming the exponential of the (small) projected matrix explicitly. This is equivalent to a polynomial approximation of (7.7). Both Lanczos- and Arnoldi-type methods can be used, depending on whether the underlying problem is symmetric (Hermitian in the complex case) or not. Here, we note that the matrix $${{\bf M}}^{-1}{{\bf H}}$$, which occurs in both (7.7) and (7.8), is generally nonsymmetric, apparently precluding the use of the more efficient Lanczos-based methods.
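One remedy is the Cholesky-based symmetrization of (7.9); the following hedged dense sketch (small random SPD $${\bf H}$$ and a diagonal, lumped-style $${\bf M}$$, all illustrative data) checks that the symmetrized propagator reproduces $$\exp(-t{\bf M}^{-1}{\bf H})\,{\bf u}_{h,0}$$:

```python
import numpy as np
from scipy.linalg import cholesky, expm

# Hedged sketch: with M = R^T R, the nonsymmetric propagator
# exp(-t M^{-1} H) can be applied through the symmetric matrix
# H~ = R^{-T} H R^{-1}. Sizes and data are illustrative.
rng = np.random.default_rng(2)
k, t = 8, 0.5
X = rng.standard_normal((k, k)); H = X @ X.T + k * np.eye(k)
M = np.diag(rng.uniform(1.0, 2.0, k))        # e.g. a lumped (diagonal) mass matrix
u0 = rng.standard_normal(k)

R = cholesky(M)                              # upper triangular, M = R^T R
Ht = np.linalg.solve(R.T, np.linalg.solve(R.T, H).T).T   # R^{-T} H R^{-1}
v0 = R @ u0                                  # change of variables v = R u
u_t = np.linalg.solve(R, expm(-t * Ht) @ v0)

# agrees with the unsymmetrized propagator exp(-t M^{-1} H) u0 of (7.7)
u_ref = expm(-t * np.linalg.solve(M, H)) @ u0
assert np.allclose(u_t, u_ref)
```

In practice, $$\exp(-t\widetilde{{\bf H}}){\bf v}_0$$ would be approximated by a Lanczos-based Krylov method rather than a dense `expm`.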
However, since both $${{\bf H}}$$ and $${{\bf M}}$$ are SPD, the product $${{\bf M}}^{-1}{{\bf H}}$$ is symmetrizable. For example, if $${{\bf M}} = {{\bf R}} ^{\rm T} {{\bf R}}$$ is the Cholesky factorization of $${{\bf M}}$$ with $${{\bf R}}$$ upper triangular, introducing the new variable $${{\bf v}}_h = {{\bf R}} {{\bf u}}_h$$ leads to the solution vector in (7.7) becoming

$${\bf u}_h(t) = {\bf R}^{-1}{\bf v}_h(t), \quad\text{where}\quad {\bf v}_h(t) = \exp(-t\widetilde{{\bf H}})\,{\bf v}_{h,0} \quad \forall t\in[0,T], \quad (7.9)$$

with $$\widetilde{{{\bf H}}} = {{\bf R}}^{-\rm T}{{\bf H}} {{\bf R}}^{-1}$$ and $${{\bf v}}_{h,0} = {{\bf R}} {{\bf u}}_{h,0}$$. Here $$\widetilde{{{\bf H}}}$$ is symmetric, and Lanczos-based methods can be applied to approximate the solution. A similar symmetrization applied to the matrices in (7.8) allows one to use the PCG method to solve the linear systems arising from implicit methods, rather than the more expensive nonsymmetric Krylov methods. The price to pay for this is the need to compute the Cholesky factorization of $${{\bf M}}$$ and to perform triangular solves at each step or iteration. Unfortunately, for large graphs, the Cholesky factorization $${{\bf M}} = {{\bf R}} ^{\rm T} {{\bf R}}$$ of the mass matrix can be prohibitive when the integrals that define $${{\bf M}}$$ are computed with the Simpson rule. In this case, however, it is possible to replace the mass matrix $${{\bf M}}$$ with a diagonal approximation, $$\hat {{{\bf M}}}\approx {{\bf M}}$$, obtained, for example, by lumping. Symmetrization is then trivial, at the expense of an additional error, which, however, is $$O(h)$$ as $$h\searrow 0$$. The same holds true if the integrals in the mass matrix are approximated with the simple trapezoidal rule, in which case $${{\bf M}}$$ is diagonal. In Section 8.3, we discuss the results of experiments comparing the two quadrature rules, where we apply a Krylov-based method (Afanasjew et al., 2008; Güttel, 2017) to approximate the solution of a simple diffusion problem on different types of graphs.

8.
Numerical experiments In this section, we illustrate the results of numerical studies, including the solution of simple elliptic and parabolic PDEs and eigenvalue problems on graphs. 8.1 Solution of simple elliptic problems We begin by showing some results of experiments using the PCG method to solve linear systems arising from the discretization of simple elliptic problems posed on scale-free graphs obtained using the Barabási–Albert model (Barabasi & Albert, 1999). We are especially interested in seeing how PCG iterations scale with problem size. We consider two main situations that lead to linear systems of increasing size: The mesh size $$h$$ is fixed, but the size of the graph $${\it{\Gamma}}$$ increases. The graph $${\it{\Gamma}}$$ is fixed, but $$h$$ decreases. In the first situation, we assume that the graph’s average degree is kept roughly constant. In the second situation, we are applying PCG to a reduced system of fixed size $$N$$, but a priori the number of iterations could grow since the condition number (more precisely, the eigenvalue distribution) of the Schur complement may worsen as $$h\searrow 0$$. We use the Matlab toolkit CONTEST (Taylor & Higham, 2009) to generate scale-free graphs with different numbers $$N$$ of vertices and $$M$$ of edges, while keeping the average degree constant. In Table 1, we report the sizes of three graphs $${\it{\Gamma}}$$ generated using CONTEST together with the number of vertices ($$m = (n_e-1)M + N$$), and number of edges ($$p = n_e M$$) of the corresponding extended graphs $$\mathscr G$$ for the constant choice $$h=\frac{1}{21}$$ of the mesh size. All edges are assumed to have unit length. In particular, we do not consider here adaptive discretizations that take into account the presence of hubs (see Section 3.1), although this is not difficult to do. We recall that $$m$$ is the order of the discrete Hamiltonian $${{\bf H}}$$, whereas $$N$$ is the order of the corresponding Schur complement $${{\bf S}}$$. 
For these problems, the Schur complement has only about four nonzeros per row.

Table 1. Number of vertices and edges for three scale-free graphs and in the corresponding extended graphs

| Graph | $$N$$ | $$M$$ | $$m$$ | $$p$$ |
| --- | --- | --- | --- | --- |
| $${\it{\Gamma}}_1$$ | 2000 | 3974 | 81480 | 79480 |
| $${\it{\Gamma}}_2$$ | 5000 | 9968 | 204360 | 199360 |
| $${\it{\Gamma}}_3$$ | 10000 | 19965 | 409300 | 399300 |

For the Hamiltonian, we consider the simple elliptic operator
$${\mathscr H}u = \left(-\frac{{\rm d}^2}{{\rm d}x^2} + \nu\right)u, \qquad (8.1)$$
where $$\nu$$ is constant, with Neumann–Kirchhoff conditions at the vertices. The discrete Hamiltonian $${{\bf H}}$$ obtained by applying one-dimensional linear finite elements to the weak form of (8.1) is then SPD for all $$\nu > 0$$. We choose the right-hand side $${{\bf f}}_h = {\bf e}_1$$, corresponding to a unit load applied to the first-numbered vertex $$v_1$$ of the graph $${\it{\Gamma}}$$. In Table 2, we report iteration counts for the conjugate gradient method with no preconditioning, with diagonal preconditioning and with the polynomial preconditioner (7.5) for $$\nu = 0.1$$, using the three graphs of Table 1. In all cases, the initial guess is the zero vector and the stopping criterion is a reduction of the relative residual norm below $$\sqrt{{\rm eps}}$$, where $${\rm eps} \approx 2.2204\cdot 10^{-16}$$. In all cases, the relative 2-norm of the error is of the order of $$\sqrt{{\rm eps}}\approx 10^{-8}$$.
Table 2. PCG iteration counts for Schur complement system, different preconditioners ($$\nu = 0.1$$)

| Graph | No prec. | Diagonal | Polynomial |
| --- | --- | --- | --- |
| $${\it{\Gamma}}_1$$ | 78 | 28 | 15 |
| $${\it{\Gamma}}_2$$ | 102 | 28 | 15 |
| $${\it{\Gamma}}_3$$ | 115 | 28 | 15 |

From the results, one can see that, while the number of iterations increases without preconditioning for increasing graph size, it remains constant (and quite small) with both diagonal and polynomial preconditioning. Qualitatively similar results are observed for other values of $$\nu$$, with the convergence being faster as $$\nu$$ increases. Next, we fix the metric graph (using $${\it{\Gamma}}_1$$) and we refine the discretization of the edges. In Table 3, we report iteration counts for four different values of $$h$$, corresponding to $$n=20,40,80, 100$$ equally spaced nodes per edge. We also report the order $$m$$ of the discrete Hamiltonian, $${{\bf H}}$$. The order of the Schur complement $${{\bf S}}$$ is fixed ($$N=2000$$). Although this Schur complement is so small that a sparse direct solver suffices, we are interested in the behavior of the PCG iteration as a function of $$h$$, expecting a qualitatively similar behavior for larger graphs. The results show that the convergence of the conjugate gradient algorithm, even in the absence of preconditioning, is completely $$h$$-independent, suggesting that even for rather fine meshes, the Schur complement is close to a well-conditioned matrix.
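The effect of diagonal scaling is easy to reproduce in a toy setting. The sketch below is illustrative only: it uses a plain random sparse graph rather than a CONTEST-generated Barabási–Albert graph, and the shifted Laplacian $${{\bf L}}_{\it{\Gamma}} + \nu{{\bf I}}$$ stands in for the Schur complement $${{\bf S}}$$ (see the appendix); SciPy's CG is run with a Jacobi (diagonal) preconditioner.

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(1)
N = 400
A = (rng.random((N, N)) < 0.02).astype(float)
A = np.triu(A, 1)
A = A + A.T                       # symmetric 0/1 adjacency, zero diagonal
d = A.sum(axis=1)
L = np.diag(d) - A                # combinatorial graph Laplacian
S = L + 0.1 * np.eye(N)           # the shift nu = 0.1 makes S SPD

b = np.zeros(N); b[0] = 1.0       # unit load at the first vertex
Dinv = np.diag(1.0 / np.diag(S))  # Jacobi (diagonal) preconditioner
iters = []
x, info = cg(S, b, M=Dinv, callback=lambda xk: iters.append(1))
```

Here `info == 0` signals convergence, and `len(iters)` is the PCG iteration count; repeating the run with `M=None` and larger `N` reproduces the qualitative trend of Table 2.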
As before, convergence is faster (slower) if $$\nu$$ is taken larger (respectively, smaller).

Table 3. PCG iteration counts for Schur complement system, different values of $$h$$. Graph: $${\it{\Gamma}}_1$$ ($$\nu = 0.1$$)

| $$h^{-1}$$ | $$m$$ | No preconditioning | Diagonal | Polynomial |
| --- | --- | --- | --- | --- |
| 21 | 81480 | 78 | 28 | 15 |
| 41 | 160960 | 78 | 28 | 15 |
| 81 | 319920 | 78 | 28 | 15 |
| 101 | 399400 | 78 | 28 | 15 |

We conclude that for this class of quantum graphs, the reduced system approach, combined with diagonally preconditioned CG for solving the Schur complement system, results in rates of convergence that are both $$h$$- and $$N$$-independent, and thus it is optimal, in the sense that the total solution cost scales linearly in the number $$m=(n_e-1)M + N$$ of degrees of freedom. This approach also has very high inherent parallelism, especially if the Schur complement is not explicitly assembled (except for the diagonal entries of $${{\bf S}}$$ if they are needed). Polynomial preconditioning may lead to slightly less work overall, but the difference is small. The observed convergence behavior can be explained as follows. For $$\nu > 0$$ and $$h > 0$$ fixed and sufficiently small, the Schur complement $${{\bf S}}$$ is, up to a constant factor, a small perturbation of the combinatorial graph Laplacian, $${{\bf L}}_{\it{\Gamma}}$$ (see the appendix).
Hence, we expect the convergence behavior of PCG applied to the Schur complement system to be close to that of PCG applied to a (consistent) linear system of the form $${{\bf L}}_{\it{\Gamma}} {{{\bf x}}} = {{{\bf b}}}$$. We remark that, although this system is singular, the singularity is benign, the kernel being one-dimensional and spanned by a known vector since $${\it{\Gamma}}$$ is connected. In particular, the eigenvalue $$\lambda_1 = 0$$ of $${{\bf L}}_{{\it{\Gamma}}}$$ plays no role in determining the rate of convergence of the conjugate gradient method. Now, the distribution of the extreme (nonzero) eigenvalues of the Laplacian of scale-free graphs is known. Quite a lot is known also about the nonzero eigenvalues of the normalized Laplacian $${\widehat {{\bf L}}}_{\it{\Gamma}} = {{\bf D}}^{-\frac12}{{\bf L}}_{\it{\Gamma}} {{\bf D}}^{-\frac12}.$$ Of course, since scale-free graphs obtained using the Barabási–Albert model have an element of randomness, these results are to be taken in a probabilistic sense. We first consider the case of the normalized Laplacian. Note that, since the matrix $${{\bf D}}^{-\frac 12}{{\bf Ad}} {{\bf D}}^{-\frac 12}$$ is symmetric and stochastic, the eigenvalues of $${\widehat {{\bf L}}}_{\it{\Gamma}} = {{\bf I}}_N - {{\bf D}}^{-\frac 12}{{\bf Ad}} {{\bf D}}^{-\frac 12}$$ lie in the interval $$[0,2]$$. Moreover, for scale-free graphs with sufficiently large minimum degree $$d_{\min}$$, the nonzero eigenvalues of the normalized Laplacian can be expected to fall with high probability for $$N\to \infty$$ in the interval
$$I = (1-2w,\ 1+2w), \qquad\text{where}\quad w = d_{\text{avg}}^{-1/2}. \qquad (8.2)$$
Here, $$d_{\text{avg}}$$ denotes the average expected degree for $${\it{\Gamma}}$$. See (Chung & Lu, 2006, Chapter 9) for the precise statement of this result. While the assumption on the minimum degree is quite restrictive, the conclusions of this theorem appear to hold in practice even for power law random graphs such as those considered here, for which $$d_{\min}$$ is rather small.
Our three power law graphs satisfy $$0.83 < 2w < 0.84$$, so we expect the nontrivial eigenvalues of the normalized Laplacian to lie between
$$1-2w \approx 0.16 \qquad\text{and}\qquad 1+2w \approx 1.84. \qquad (8.3)$$
Hence, the effective condition number $$\kappa_2^{{\rm eff}} ({\widehat {{\bf L}}}_{\it{\Gamma}})$$, defined as the ratio of the largest and the smallest nontrivial eigenvalues of $${\widehat {{\bf L}}}_{\it{\Gamma}}$$, can be expected to satisfy
$$\kappa_2^{{\rm eff}}({\widehat {{\bf L}}}_{\it{\Gamma}}) \le 1.84/0.16 = 11.5,$$
independently of $$N$$. Therefore, the conjugate gradient method with diagonal scaling can be expected to converge rapidly with a rate independent of $$N$$, which is what we observe in practice. Although this argument is not rigorous, we found in practice that it gives reliable estimates of the condition number of the normalized Laplacian of random power law graphs, and thus of the rate of convergence of PCG applied to the Schur complement system. Now we turn to the case of the (unnormalized) Laplacian $${{\bf L}}_{{\it{\Gamma}}}$$. Using, for example, Ostrowski’s Theorem (a quantitative version of Sylvester’s Law of Inertia, see Horn & Johnson, 2012, Theorem 4.5.9), one can easily show that the first nonzero eigenvalue of $${{\bf L}}_{{\it{\Gamma}}}$$ is related to the first nonzero eigenvalue of $${\widehat {{\bf L}}}_{\it{\Gamma}}$$ by the following inequality:
$$\lambda_2({{\bf L}}_{\it{\Gamma}}) \ge d_{\min}\cdot \lambda_2({\widehat {{\bf L}}}_{\it{\Gamma}}), \qquad (8.4)$$
where again $$d_{\min}$$ denotes the minimum degree of any vertex in $${\it{\Gamma}}$$. In our case, $$d_{\min} = 2$$ (by construction, see Taylor & Higham, 2009). It follows from (8.3) and (8.4) that for any of our three graphs $${\it{\Gamma}}_i$$, the smallest nonzero eigenvalue can be expected to satisfy
$$\lambda_2({{\bf L}}_{\it{\Gamma}}) \ge 0.32, \qquad \text{independent of } N. \qquad (8.5)$$
Of course, the same holds for any power law graph with the same minimum and average degree.
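The effective-condition-number argument can be checked directly on a small random graph. The sketch below uses an Erdős–Rényi-type model for simplicity (not the paper's scale-free generator): it forms the normalized Laplacian, discards the trivial eigenvalue(s), and takes the ratio of the extreme nontrivial eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 300
A = (rng.random((N, N)) < 0.03).astype(float)
A = np.triu(A, 1)
A = A + A.T                          # symmetric 0/1 adjacency
d = A.sum(axis=1)
d[d == 0] = 1.0                      # guard against isolated (degree-0) vertices
Dm = np.diag(1.0 / np.sqrt(d))
Lhat = np.eye(N) - Dm @ A @ Dm       # normalized Laplacian, spectrum in [0, 2]
lam = np.sort(np.linalg.eigvalsh(Lhat))
nontrivial = lam[lam > 1e-8]         # drop the zero eigenvalue(s)
kappa_eff = nontrivial[-1] / nontrivial[0]
```

In runs of this kind `kappa_eff` stays moderate and essentially independent of `N`, mirroring the bound derived above for the scale-free case.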
To four decimals, the smallest nonzero Laplacian eigenvalue for the three graphs used in our experiments was $$\lambda_2 = 0.5259$$ for $${\it{\Gamma}}_1$$, $$\lambda_2 = 0.5338$$ for $${\it{\Gamma}}_2$$ and $$\lambda_2 = 0.5257$$ for $${\it{\Gamma}}_3$$. Given the random nature of these graphs, we repeated the calculation several times, always getting similar values. Hence, although the lower bound (8.5) is slightly pessimistic, it does correctly predict that the smallest nonzero eigenvalue of a random power law graph can be expected to remain bounded away from zero as $$N$$ increases. Since we saw (cf. Table 2) that without preconditioning the number of PCG iterations required to solve the Schur complement system increases with $$N$$, we predict that the largest eigenvalues of $${{\bf S}}$$ must increase with $$N$$, all else being constant. Once again, we replace $${{\bf S}}$$ with the graph Laplacian $${{\bf L}}_{{\it{\Gamma}}}$$, for which analytical results are available. In Elsässer (2006), the following remarkable and a priori unexpected result is proved: in a power law graph, the upper portion of the spectrum of $${{\bf L}}_{{\it{\Gamma}}}$$ is distributed like the largest degrees of vertices of $${\it{\Gamma}}$$ (we refer to Elsässer, 2006 for the precise statement). Looking at the three graphs $${\it{\Gamma}}_i$$ ($$i=1,2,3$$) used in our test, we obtained the following results. For $${\it{\Gamma}}_1$$, the five largest degrees are 185, 66, 63, 54, 45, and the five largest eigenvalues of $${{\bf L}}_{{\it{\Gamma}}_1}$$ are approximately 186.03, 67.07, 64.06, 55.19, 45.99. For $${\it{\Gamma}}_2$$, the five largest degrees are 227, 122, 90, 89, 88, and the five largest eigenvalues of $${{\bf L}}_{{\it{\Gamma}}_2}$$ are approximately 228.03, 123.06, 91.14, 90.07, 89.01. For $${\it{\Gamma}}_3$$, the five largest degrees are 430, 237, 147, 128, 98, and the five largest eigenvalues of $${{\bf L}}_{{\it{\Gamma}}_3}$$ are approximately 431.02, 238.02, 148.04, 129.08, 99.00.
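A toy version of this degree–eigenvalue correspondence is easy to reproduce: plant one high-degree hub in an otherwise sparse random graph and compare the top Laplacian eigenvalues with the top degrees. (Two elementary bounds bracket the match: for any graph with at least one edge, $$\lambda_{\max}({{\bf L}}) \ge d_{\max}+1$$, and Gershgorin's theorem gives $$\lambda_{\max}({{\bf L}}) \le 2d_{\max}$$.) The graph below is a hand-made illustration, not a power law graph.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200
A = np.zeros((N, N))
A[0, 1:80] = A[1:80, 0] = 1.0        # hub vertex of degree 79
for i, j in rng.integers(1, N, size=(300, 2)):  # sprinkle random edges
    if i != j:
        A[i, j] = A[j, i] = 1.0
d = A.sum(axis=1)
L = np.diag(d) - A                   # combinatorial graph Laplacian
top_eig = np.sort(np.linalg.eigvalsh(L))[-5:][::-1]
top_deg = np.sort(d)[-5:][::-1]
```

Printing `top_eig` next to `top_deg` shows the largest eigenvalue sitting just above the hub degree, in the spirit of the Elsässer (2006) result quoted above.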
These computed values show that the theory developed in Elsässer (2006) is remarkably accurate. Hence, without preconditioning, the effective condition number $$\kappa_2^{{\rm eff}}({{\bf L}}_{{\it{\Gamma}}})$$ of the Laplacian grows like $$O(d_{\max})$$ as $$N\to \infty$$, where $$d_{\max}$$ denotes the maximum degree. As shown in the appendix, if $$\nu < h^{-1}$$, then in the limit for $$\nu, h\to 0$$ the Schur complement matrix $${{\bf S}}$$ reduces to the graph Laplacian $${{\bf L}}_{{\it{\Gamma}}}$$. Hence, for $$\nu$$ and $$h$$ sufficiently small (but fixed), we expect the iteration count of unpreconditioned CG applied to the Schur complement system to grow with $$N$$, as observed in our experiments. Fortunately, a simple diagonal scaling is sufficient to remove this dependency on $$N$$. We stress the fact that the optimality of the Schur complement reduction approach with diagonally scaled CG is a phenomenon that has no analog in the usual two-dimensional or three-dimensional PDE setting.

8.2 Eigenvalues and eigenfunctions

It is instructive to compare numerically the eigenvalues of $${{\bf L}}_{\it{\Gamma}}$$ with the first few eigenvalues of the Hamiltonian $${\mathscr H} = - \frac{\,{\rm d}^2}{\,{\rm d}x^2}$$ on the metric graph based on $${\it{\Gamma}}$$ and to investigate the behavior of the corresponding eigenfunctions. In order to be able to visualize the eigenfunctions, we consider at first only a few small, simple graphs: a cross (or star) graph with five vertices, a simple graphene-like structure with 12 vertices and a tree with 16 vertices. In all cases, we assume the edges have unit length, and we discretize the eigenvalue problem using linear finite elements with 100 internal discretization nodes on each edge, leading to a generalized eigenvalue problem of the form (6.1). Throughout this section, we assume Neumann–Kirchhoff conditions. In Table 4, we report the eigenvalues of the graph Laplacian $${{\bf L}}_{\it{\Gamma}}$$ for each graph.
Table 4. Eigenvalues of $${{\bf L}}_{\it{\Gamma}}$$ for each of the three graphs

| Cross | Graphene | Tree |
| --- | --- | --- |
| 0 | 0 | 0 |
| 1 | 0.1578 | 0.0968 |
| 1 | 1 | 0.2679 |
| 1 | 1 | 0.2679 |
| 5 | 1 | 0.4965 |
|  | 1.4931 | 1 |
|  | 3 | 1 |
|  | 3 | 1 |
|  | 3 | 1 |
|  | 3.5069 | 1.7356 |
|  | 4 | 2.1939 |
|  | 4.8422 | 3.5767 |
|  |  | 3.7321 |
|  |  | 3.7321 |
|  |  | 4.7093 |
|  |  | 5.1912 |

In Fig. 8, we display the six smallest eigenpairs obtained by approximating $$-u'' = \lambda u$$ on the cross graph via (6.1). Note that the first nonzero eigenvalue $$\lambda_2 = 2.4675$$ has multiplicity three.

Fig. 8. Eigenvalues–eigenfunctions for $$-u'' = \lambda u$$ on a star graph ($$n=100$$ internal points).

In Fig. 9, we display the computed approximations to the 10 smallest eigenpairs for the small graphene graph consisting of two hexagons connected by an edge. Note the peculiar behavior of the eigenfunctions. More complex graphene-like graph models show similar behavior, but, unfortunately, the cluttered displays become difficult to visualize.

Fig. 9. Eigenvalues–eigenfunctions for $$-u'' = \lambda u$$ on a simple graphene graph ($$n=100$$ internal points).
Finally, we display the results for the same simple Hamiltonian $${\mathscr H}u=-u''$$ on a binary tree with an extra vertex connected to the root. In Fig. 10, we display the eight leftmost eigenpairs and, in Fig. 11, the next eight eigenpairs. As in the previous examples, we note the existence of repeated eigenvalues (reflecting the symmetry of the underlying graphs) and the fact that, as expected, the eigenfunctions corresponding to higher eigenvalues become increasingly oscillatory.

Fig. 10. Eigenvalues–eigenfunctions for $$-u'' = \lambda u$$ on a simple tree ($$n=100$$ internal points).

Fig. 11. Eigenvalues–eigenfunctions for $$-u'' = \lambda u$$ on a simple tree ($$n=100$$ internal points).

We note that for these small, highly regular graphs, the smallest eigenvalues of the (discretized) Hamiltonian tend to mimic the behavior of the eigenvalues of the graph Laplacian $${{\bf L}}_{{\it{\Gamma}}}$$. It is therefore natural to ask whether this is true in general. To answer this question, we performed some eigenvalue computations on a few graphs with irregular, complex topologies. For the experiments we used three graphs, one synthetic and the other two representing real-world networks. The first is a Barabási–Albert graph (preferential attachment model) with 2000 vertices and 3974 edges, of the same kind as the graph $${\it{\Gamma}}_1$$ described in Table 1. The second one is a graph describing a social network of drug users in Colorado Springs, where the edges indicate pairs of individuals that have exchanged needles in the last three months (Estrada, 2011, p. 122).
The third graph represents protein–protein interactions in beer yeast (Newman, 2010, p. 89). We denote the three networks by $${\it{\Gamma}}_1$$, $${\it{\Gamma}}_d$$ and $${\it{\Gamma}}_y$$, respectively. The numbers of vertices/edges for the networks $${\it{\Gamma}}_d$$ and $${\it{\Gamma}}_y$$ are 616/2012 and 2224/6609, respectively. In Table 5, we report the six smallest eigenvalues of the graph Laplacian $${{\bf L}}_{{\it{\Gamma}}}$$ for each of these three graphs, together with approximations to the six smallest eigenvalues of the simple Hamiltonian $${\mathscr H} = -\frac{\,{\rm d}^2}{\,{\rm d}x^2}$$ with Neumann–Kirchhoff conditions, obtained by solving the discrete problem (6.1). Again, each edge is assumed to have unit length, and 100 interior nodes are used to discretize $$\mathscr H$$ on each edge. The eigenvalues appear to have converged to an approximation with at least three accurate significant digits. These results show that it is difficult to make general statements, as the relationship between the small eigenvalues of $$\mathscr H$$ and those of $${{\bf L}}_{{\it{\Gamma}}}$$ appears to be very much graph dependent. This fact confirms the observation made in Section 6 that the diffusion dynamics could be rather different for the combinatorial graph $${\it{\Gamma}}$$ and for the corresponding metric graph, even for very simple PDEs. This is discussed further in the next section.
Table 5. The smallest six eigenvalues of $${{\bf L}}_{\it{\Gamma}}$$ and of $${\mathscr H} = -\frac{\,{\rm d}^2}{\,{\rm d}x^2}$$ for each of the three graphs

| Eigenvalue | $${\it{\Gamma}}_1$$: $${{\bf L}}_{{\it{\Gamma}}}$$ | $${\it{\Gamma}}_1$$: $$\mathscr H$$ | $${\it{\Gamma}}_d$$: $${{\bf L}}_{{\it{\Gamma}}}$$ | $${\it{\Gamma}}_d$$: $$\mathscr H$$ | $${\it{\Gamma}}_y$$: $${{\bf L}}_{{\it{\Gamma}}}$$ | $${\it{\Gamma}}_y$$: $$\mathscr H$$ |
| --- | --- | --- | --- | --- | --- | --- |
| $$\lambda_1$$ | 0 | 0 | 0 | 0 | 0 | 0 |
| $$\lambda_2$$ | 0.5227 | 0.3374 | 0.0107 | 0.0035 | 0.0600 | 0.0664 |
| $$\lambda_3$$ | 0.5335 | 0.3453 | 0.0148 | 0.0073 | 0.0727 | 0.0765 |
| $$\lambda_4$$ | 0.5431 | 0.3486 | 0.0317 | 0.0095 | 0.0904 | 0.0826 |
| $$\lambda_5$$ | 0.5502 | 0.3522 | 0.0410 | 0.0157 | 0.1177 | 0.0909 |
| $$\lambda_6$$ | 0.5572 | 0.3559 | 0.0617 | 0.0370 | 0.1226 | 0.0918 |

8.3 Parabolic problems

Here, we consider approximating the simple diffusion-type equation
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}, \quad (x,t)\in{\it{\Gamma}}\times(0,T]; \qquad u(x,0) = u_0(x), \quad x\in{\it{\Gamma}}, \qquad (8.6)$$
where $$u_0(x)$$ is an initial temperature distribution. As always in this article, Neumann–Kirchhoff conditions are imposed at the vertices. Note that as $$t\to \infty$$, the solution $$u(x,t)$$ of (8.6) must decay, at an exponential rate, to a steady-state, everywhere-constant solution. Finite element discretization of the second derivative in (8.6) leads to a system of linear ODEs of the form (7.6) with $${\bf f}_h = \bf 0$$, the solution of which is given by (7.7) with $${{\bf H}} = {{\bf L}} $$. For the initial temperature distribution, we have chosen the function $$u_0(x)$$ that is linear on each edge and such that $$u_0(x|_e) = x$$. Moreover, $$u_0$$ is normalized so that $$\|u_0\|_{L^2({\it{\Gamma}})} = 1$$. We have used the package in Güttel (2017) to approximate the action of the matrix exponential in (7.7) at times $$t_1 = 0.0002$$, $$t_2 = 0.0004$$, $$\ldots$$, $$t_{10} = 0.002$$ for four different graphs: the graphs $${\it{\Gamma}}_1$$, $${\it{\Gamma}}_d$$ and $${\it{\Gamma}}_y$$ from the previous section and a graphene-like lattice $${\it{\Gamma}}_g$$ consisting of 200 contiguous hexagons. Each edge in each graph is assumed to have unit length, and 20 interior nodes are used to discretize each edge. In the case of the graphene-like graph, which has 840 vertices and 1210 edges, the extended graph has 25040 nodes.
Because the steady state is approached quickly for all graphs, we limit ourselves to a very small time interval; slower decay can be obtained by premultiplying the diffusion operator in (8.6) by a small diffusivity coefficient, but doing so does not change the relative rate of decay obtained for different graphs. Consider the discrete solution (7.7). The rate of decay to steady state is governed primarily by the first nonzero eigenvalue of the matrix pencil $$({{\bf H}}, {{\bf M}})$$ (i.e., of the matrix $${{\bf M}}^{-1}{{\bf H}}$$). To see this, let $$m$$ be the total number of nodes on the extended graph $$\mathscr G$$ and $${\bf q}_1,\ {\bf q}_2, \ldots , {\bf q}_m$$ be the normalized eigenvectors corresponding to the generalized eigenvalues $$\lambda_1 < \lambda_2 \le \cdots \le \lambda_m$$. Then, the solution (7.7) is given, for all times $$t\ge 0$$, by
$${{\bf u}}_h(t) = {\rm e}^{-t\lambda_1}({\bf q}_1^{\rm T}{{\bf u}}_{h,0})\,{\bf q}_1 + {\rm e}^{-t\lambda_2}({\bf q}_2^{\rm T}{{\bf u}}_{h,0})\,{\bf q}_2 + \cdots + {\rm e}^{-t\lambda_m}({\bf q}_m^{\rm T}{{\bf u}}_{h,0})\,{\bf q}_m. \qquad (8.7)$$
In our case $${{\bf H}} = {{\bf L}}$$, hence $$\lambda_1 = 0$$ and $$\bf q_1 = \frac{1}{\sqrt m} {{\rm 1}\kern-0.24em{\rm I}}$$ (where $${{\rm 1}\kern-0.24em{\rm I}}$$ is the vector of all ones), therefore $${\bf u}_h (t) \to \frac{1}{m} ({{\rm 1}\kern-0.24em{\rm I}}^{\rm T}{\bf u}_{h,0}) {{\rm 1}\kern-0.24em{\rm I}}$$ as $$t\to \infty$$. In other words, the solution tends to a state of thermal equilibrium, where the temperature is the same everywhere on $$\mathscr G$$ and is given by the space average of the initial solution. It is also clear from (8.7) that the rate of convergence depends primarily on the magnitude of the smallest nonzero eigenvalue, $$\lambda_2$$, since all the terms corresponding to larger eigenvalues tend to zero faster. The six smallest eigenvalues of the Hamiltonian for the graphs $${\it{\Gamma}}_1$$, $${\it{\Gamma}}_d$$ and $${\it{\Gamma}}_y$$ have been reported in Table 5.
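The decay predicted by (8.7) can be checked in miniature. In the sketch below, $${{\bf M}} = {{\bf I}}$$ and $${{\bf H}}$$ is the Laplacian of a path graph (a crude stand-in for an extended graph, not one of the graphs used in this section): $$\exp(-t{{\bf H}}){{\bf u}}_0$$ tends to the constant vector holding the average of $${{\bf u}}_0$$, and the distance to equilibrium shrinks monotonically in $$t$$.

```python
import numpy as np
from scipy.linalg import expm

N = 30
# Laplacian of a path graph on N nodes (degree 1 at the two endpoints)
L = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
L[0, 0] = L[-1, -1] = 1.0
rng = np.random.default_rng(4)
u0 = rng.standard_normal(N)
u_inf = np.full(N, u0.mean())      # q1-component: the spatial average of u0
u_mid = expm(-10.0 * L) @ u0       # intermediate time
u_t = expm(-1000.0 * L) @ u0       # effectively the steady state
```

Because the path graph has a small spectral gap (`lambda_2 = 4 sin^2(pi/(2N))`), the approach to equilibrium here is slow, illustrating the role of $$\lambda_2$$ discussed above.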
For the graphene-like graph $${\it{\Gamma}}_g$$, the corresponding values are $$\lambda_1 = 0$$, $$\lambda_2 = 0.0014$$, $$\lambda_3 = 0.0056$$, $$\lambda_4 = 0.0126$$, $$\lambda_5 = 0.0160$$ and $$\lambda_6 = 0.0176$$. Hence, one would expect the convergence to steady state to take longer on the graphene-like graph than on the other graphs, with the Barabási–Albert graph $${\it{\Gamma}}_1$$ exhibiting the fastest rate of decay ($$\lambda_2 = 0.3374$$). This is also intuitive in view of the fact that $${\it{\Gamma}}_g$$ is a two-dimensional lattice with large diameter, whereas the Barabási–Albert graph is a small-world graph with small diameter, and so diffusion should take place faster on the latter graph than on the former (with the other two graphs occupying somewhat intermediate positions). However, things are not quite so simple. It is important to keep in mind that we are dealing with solutions to a partial differential equation, and that the actual decay behavior may be different when measured by different, nonequivalent norms. Recall (see, e.g., Brezis, 2010, Chapter 11) that the solution $$u(t)$$ of problem (8.6) satisfies at each time $$t$$ the relation
$$\frac12\|u(t)\|_{L^2({\it{\Gamma}})}^2 + \int_0^t ({\mathscr H}u(s), u(s))_{L^2({\it{\Gamma}})}\,{\rm d}s = \frac12\|u(0)\|_{L^2({\it{\Gamma}})}^2, \qquad (8.8)$$
and if $$u(t) \in C^1({\it{\Gamma}})$$ for all $$t$$, we have
$$\frac{{\rm d}}{{\rm d}t}\left(\frac12\|u(t)\|_{L^2({\it{\Gamma}})}^2\right) + ({\mathscr H}u(t), u(t))_{L^2({\it{\Gamma}})} = 0 \qquad (8.9)$$
(where, for ease of notation, we have suppressed the dependence of $$u$$ on $$x$$). We point out that the finite element approximation gives the relations
$${{\bf M}}\dot{{\bf u}}_h = -{{\bf H}}{{\bf u}}_h, \qquad {{\bf u}}_h(0) = {{\bf u}}_{h,0},$$
and therefore the semidiscrete solutions $${\bf u}_h(t)$$ will satisfy for each $$h$$
$$\frac{{\rm d}}{{\rm d}t}\left(\frac12\|{{\bf u}}_h(t)\|_{L^2({\it{\Gamma}})}^2\right) + {{\bf u}}_h(t)^{\rm T}{{\bf H}}{{\bf u}}_h(t) = 0. \qquad (8.10)$$
We also note that in our case $$({\mathscr H} u(s), u(s))_{L^2({\it{\Gamma}})} = \|u(s)\|_{H^1({\it{\Gamma}})}^2$$, the square of the $$H^1$$ (semi)norm. This quantity has the physical meaning of an energy.
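The discrete energy identity (8.10) can be verified numerically. In the sketch below we take $${{\bf M}} = {{\bf I}}$$, so that $${{\bf u}}_h(t) = \exp(-t{{\bf H}}){{\bf u}}_{h,0}$$ exactly, and use a small random SPD matrix as a stand-in for $${{\bf H}}$$; a central difference in time then recovers $$-{{\bf u}}_h(t)^{\rm T}{{\bf H}}{{\bf u}}_h(t)$$.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
n = 10
G = rng.standard_normal((n, n))
H = (G @ G.T) / n                   # SPD stand-in for the stiffness matrix
u0 = rng.standard_normal(n)
u = lambda s: expm(-s * H) @ u0     # exact solution of u' = -Hu, u(0) = u0

t, dt = 0.3, 1e-5
# central difference of (1/2)||u(t)||^2 versus the energy -u(t)^T H u(t)
lhs = (u(t + dt) @ u(t + dt) - u(t - dt) @ u(t - dt)) / (2 * dt) / 2
rhs = -u(t) @ H @ u(t)
```

The two quantities agree to the order of the central-difference error, confirming that the squared norm decreases fastest precisely when the energy is large.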
Since the quadratic form $${\bf u}_h(t)^{\rm T} {{\bf H}} {\bf u}_h(t)$$ is the discrete $$H^1$$ seminorm (squared), (8.10) implies that large-energy solutions at a time $$t$$ will show a faster decrease of the $$L^2({\it{\Gamma}})$$ norm. In Fig. 12, we display for our test problems both the $$H^1$$ seminorm and the $$L^2$$-norm of the solutions (squared). These plots show that the decay behavior is different in different norms. In particular, the approach to equilibrium is fastest for the graphene-like graph and slowest for the Barabási–Albert graph when measured in the $$L^2$$-norm, but the situation is reversed when decay is measured in terms of energy, with the decay of the $$H^1$$ seminorm now being noticeably slower for the highly regular graphene-like graph (note the semilogarithmic scale used for the energy plot). This observation is consistent with our remarks above.

Fig. 12. $$L^2({\it{\Gamma}})$$ norm squared and energy of solution $$u(x,t)$$ to the heat equation as a function of time ($$n=20$$ internal nodes). Note the logarithmic scale on the vertical axis in the second plot.

As a final note, one of the purposes of these experiments was to compare the use of two variants of the mass matrix $${{\bf M}}$$: the one obtained from the Simpson rule and the diagonal approximation of it obtained using the trapezoidal rule. To generate the results shown in Fig. 12, we made use of the trapezoidal rule. The results for the Simpson rule were found to be virtually identical and therefore are not shown. The essential equivalence of the two variants of the mass matrix is in line with what one would expect from a simple error analysis of the two quadrature rules.
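The two quadrature rules are easy to compare on a single edge. For 1D linear elements on a uniform mesh, the Simpson rule reproduces the consistent tridiagonal mass matrix exactly (the integrands, products of hat functions, are piecewise quadratics), while the trapezoidal rule yields the diagonal lumped matrix; away from the edge endpoints the two have identical row sums. The sketch below shows only the interior-node pattern, treating endpoint rows as interior for simplicity.

```python
import numpy as np

h, n = 0.1, 6    # mesh width and number of nodes on one edge (illustrative)
# Consistent mass matrix of 1D linear elements: Simpson's rule is exact here,
# giving the tridiagonal pattern (h/6) * tridiag{1, 4, 1} on interior rows.
M_simpson = (h / 6.0) * (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
# The trapezoidal rule samples phi_i * phi_j only at the nodes, where the
# off-diagonal products vanish: the result is the diagonal lumped matrix.
M_trap = h * np.eye(n)
```

Since `M_trap` is diagonal, the symmetrization of Section 7.2 becomes trivial, which is why the trapezoidal variant is orders of magnitude cheaper in the Krylov iteration.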
The Simpson rule, however, requires computing the Cholesky factorization of the mass matrix and two triangular solves at each step of the Krylov subspace method (see the discussion in Section 7.2). In contrast, the trapezoidal rule does not necessitate any of this, since it leads to a diagonal approximation of the mass matrix, resulting in computing times that are orders of magnitude smaller.

9. Conclusions and future work

In this article, we have introduced and analysed a linear finite element method for the discretization of elliptic, parabolic and eigenvalue problems posed on graphs, with special attention to the important case of Neumann–Kirchhoff vertex conditions. The structure and main properties of the resulting stiffness and mass matrices have been carefully described, together with the associated notion of extended graph. Numerical linear algebra aspects have been discussed, as well as the numerical integration of simple parabolic PDEs on graphs. The effect of graph topology on the convergence behavior of iterative solvers has been discussed and illustrated by numerical experiments, showing that a combination of Schur complement reduction (closely related to a nonoverlapping domain decomposition approach) with diagonally scaled CG results in optimal solvers for scale-free graphs. This approach also has very high inherent parallelism. Not surprisingly, we have found that the solution of PDEs on graphs, particularly complex networks, can lead to new phenomena that are not typically observed when solving PDEs posed on more typical two-dimensional and three-dimensional domains. The numerical analysis of PDEs on graphs and networks is still in its infancy, and much work remains to be done in this area. Future work should address the numerical solution of more complex types of differential equations on graphs.
These include hyperbolic problems (especially nonlinear conservation laws, which are important for the description of transport phenomena on networks, as well as for the propagation of shocks), Schrödinger-type equations, systems of PDEs (such as the Dirac equations, which are important in the modeling of graphene) and nonlocal PDEs of fractional type. In particular, the influence of the underlying graph topology on the discretization and on solver behavior should be investigated for these more complex PDEs. Finally, in this article we have focused on a simple linear continuous finite element discretization, which allowed us to relate the discretized Hamiltonian to standard linear operators associated with graphs, such as the incidence matrix and the graph Laplacian. It would also be of interest to compare the efficacy of different discretization strategies (discontinuous or higher-order finite elements, finite differences, finite volumes and spectral methods) in solving PDEs on graphs.

Acknowledgements

We are grateful to an anonymous referee for suggestions that greatly improved the presentation. We would also like to thank Maxim Olshanskii for useful discussions and Peter Benner for providing pointers to the literature.

Funding

Engineering and Physical Sciences Research Council (EP/E053351/1 to M.A.); Emerson Fellowship from Emory University (to M.A.); National Science Foundation (DMS-1418889 to M.B.).

Appendix

We assume that the simple Hamiltonian

$$\mathcal{H}u = -\frac{{\rm d}^2 u}{{\rm d}x^2} + \nu u$$

($$\nu =$$ constant) is approximated by uniform linear finite elements, with $$\ell_e = 1$$ and $$n_e = n$$ for all $$e \in {{\mathscr E}}$$, so that the block

$${{\bf H}}_{11} = {{\bf A}} + {{\bf V}} = {{\bf I}} \otimes \left(\tfrac{1}{h}{\widehat{{{\bf T}}}} + h\nu {\widehat{{{\bf M}}}}\right)$$

is the combination of the two tridiagonal matrices of order $$n-1$$:

$${\widehat{{{\bf T}}}} = {\rm tridiag}\{-1,2,-1\}, \qquad {\widehat{{{\bf M}}}} = \tfrac{1}{6}\,{\rm tridiag}\{1,4,1\},$$

the block

$${{\bf H}}_{12} = {{\bf B}} + {{\bf C}} = \left(-\tfrac{1}{h} + \tfrac{\nu h}{6}\right) \vert {{\bf B}} \vert,$$

and

$${{\bf H}}_{22} = {{\bf G}} + {{\bf F}} = \left(\tfrac{1}{h} + \tfrac{\nu h}{3}\right) {{\bf D}}.$$
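These tridiagonal blocks are straightforward to assemble; the following NumPy sketch (with illustrative values $$n = 16$$, $$\nu = 2$$, not taken from the paper) builds the per-edge block of $${{\bf H}}_{11}$$ and numerically confirms the classical eigenvalue formula $$\lambda_i = 4 \sin^2(i \frac{\pi}{2} h)$$ for $${\widehat{{{\bf T}}}}$$ used in the calculations that follow:

```python
import numpy as np

# Assumed parameters: unit edge length, n subintervals, h = 1/n, constant nu.
n = 16
h = 1.0 / n
nu = 2.0

# Tridiagonal blocks of order n - 1 defined in the appendix.
T_hat = 2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)
M_hat = (4.0 * np.eye(n - 1) + np.eye(n - 1, k=1) + np.eye(n - 1, k=-1)) / 6.0

# Per-edge diagonal block of H11 = A + V (the full H11 is I kron this block).
H11_edge = T_hat / h + h * nu * M_hat

# Eigenvalues of T_hat: lambda_i = 4 sin^2(i * pi * h / 2), i = 1, ..., n-1.
lam = 4.0 * np.sin(np.arange(1, n) * np.pi * h / 2.0) ** 2
assert np.allclose(np.linalg.eigvalsh(T_hat), lam)
```

Since $$\nu > 0$$, the block `H11_edge` is symmetric positive definite, which is what makes the Schur complement reduction below well defined.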
Both the matrices $${\widehat{{{\bf T}}}}$$ and $${\widehat{{{\bf M}}}}$$ are diagonalized by the symmetric, orthonormal matrix $${{\bf U}}$$ (Meurant, 1992):

$${{\bf U}} = (u_{i,j}), \qquad u_{i,j} = \sqrt{2h}\,\sin(ij\pi h), \quad i,j = 1,\ldots,n-1,$$

with

$${{\bf U}} {\widehat{{{\bf T}}}} {{\bf U}} = {{\bf \Lambda}} \qquad \mbox{and} \qquad {{\bf U}} {\widehat{{{\bf M}}}} {{\bf U}} = \tfrac{1}{6}\left(2{{\bf I}} + {{\bf \Lambda}}\right),$$

where $$\lambda_i = 4 \sin^2\left(i \frac{\pi}{2} h\right)$$. Moreover, we have

$${{\bf U}} {{\bf H}}_{11} {{\bf U}} = {{\bf \Theta}} = \tfrac{1}{h}{{\bf \Lambda}} + \tfrac{h\nu}{6}\left(2{{\bf I}} + {{\bf \Lambda}}\right).$$

We observe the following useful relations:

$$\sin\bigl(i(n-1)\pi h\bigr) = (-1)^{i-1}\sin(i\pi h),$$ (A.1)

$$\sin(i\pi h) = 2\cos\left(i\tfrac{\pi}{2}h\right)\sin\left(i\tfrac{\pi}{2}h\right),$$ (A.2)

$$\sin(i\pi h)^2 = \tfrac{1}{4}(4-\lambda_i)\lambda_i.$$ (A.3)

The Schur complement $${{\bf S}}$$ of the Hamiltonian in our specific case is

$${{\bf S}} = {{\bf G}} + {{\bf F}} - ({{\bf B}}+{{\bf C}})^T({{\bf A}}+{{\bf V}})^{-1}({{\bf B}}+{{\bf C}}) = \left(\tfrac{1}{h}+\tfrac{\nu h}{3}\right){{\bf D}} - \left(-\tfrac{1}{h}+\tfrac{\nu h}{6}\right)^2 \bigl\vert \widehat{{{\bf E}}}\bar{{{\bf E}}}^T \bigr\vert \bigl({{\bf I}}\otimes({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}})\bigr) \bigl\vert \bar{{{\bf E}}}\widehat{{{\bf E}}}^T \bigr\vert.$$

Taking into account that $$\bar{{{\bf E}}}^T = {{\bf I}}_M \otimes {{\bf E}}_e^T$$, with $${{\bf E}}_e \in {\mathbb{R}}^{(n-1) \times n}$$, and in view of (4.2), (4.3) and (4.4) with $$n_{e_j} = n$$, we have

$$\bigl\vert \widehat{{{\bf E}}}\bar{{{\bf E}}}^T \bigr\vert = \bigl({{\bf E}}^+ \otimes ({{\bf e}}_1^n)^T + \vert {{\bf E}}^- \vert \otimes ({{\bf e}}_n^n)^T\bigr)\bigl({{\bf I}}_M \otimes \vert {{\bf E}}_e \vert^T\bigr) = {{\bf E}}^+ \otimes ({{\bf e}}_1^{n-1})^T + \vert {{\bf E}}^- \vert \otimes ({{\bf e}}_{n-1}^{n-1})^T.$$

Finally, we have

$${{\bf S}} = \left(\tfrac{1}{h}+\tfrac{\nu h}{3}\right){{\bf D}} - \left(-\tfrac{1}{h}+\tfrac{\nu h}{6}\right)^2 \bigl({{\bf E}}^+ \otimes ({{\bf e}}_1^{n-1})^T + \vert {{\bf E}}^- \vert \otimes ({{\bf e}}_{n-1}^{n-1})^T\bigr)\bigl({{\bf I}}\otimes({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}})\bigr)\bigl(({{\bf E}}^+)^T \otimes {{\bf e}}_1^{n-1} + \vert {{\bf E}}^- \vert^T \otimes {{\bf e}}_{n-1}^{n-1}\bigr).$$

By expanding the products, we have

$${{\bf S}} = \left(\tfrac{1}{h}+\tfrac{\nu h}{3}\right){{\bf D}} - \left(-\tfrac{1}{h}+\tfrac{\nu h}{6}\right)^2 \Bigl[\, {{\bf E}}^+({{\bf E}}^+)^T \otimes ({{\bf e}}_1^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_1^{n-1} + {{\bf E}}^+\vert{{\bf E}}^-\vert^T \otimes ({{\bf e}}_1^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_{n-1}^{n-1} + \vert{{\bf E}}^-\vert({{\bf E}}^+)^T \otimes ({{\bf e}}_{n-1}^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_1^{n-1} + \vert{{\bf E}}^-\vert\,\vert{{\bf E}}^-\vert^T \otimes ({{\bf e}}_{n-1}^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_{n-1}^{n-1} \,\Bigr].$$

Because of the symmetry of $${{\bf U}}$$ and from (A.1), (A.2) and (A.3), we have

$$({{\bf e}}_{n-1}^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_1^{n-1} = ({{\bf e}}_1^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_{n-1}^{n-1}, \qquad ({{\bf e}}_{n-1}^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_{n-1}^{n-1} = ({{\bf e}}_1^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_1^{n-1}.$$

Thus, from Lemma 4.1 and the property $$\vert {{\bf E}}^- \vert = -{{\bf E}}^-$$, it follows that

$${{\bf S}} = \left(\tfrac{1}{h}+\tfrac{\nu h}{3}\right){{\bf D}} - \left(-\tfrac{1}{h}+\tfrac{\nu h}{6}\right)^2\Bigl[\bigl({{\bf E}}^+({{\bf E}}^+)^T + {{\bf E}}^-({{\bf E}}^-)^T\bigr) \otimes ({{\bf e}}_1^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_1^{n-1} - \bigl({{\bf E}}^+({{\bf E}}^-)^T + {{\bf E}}^-({{\bf E}}^+)^T\bigr) \otimes ({{\bf e}}_{n-1}^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_1^{n-1}\Bigr] = \left(\tfrac{1}{h}+\tfrac{\nu h}{3}\right){{\bf D}} - \left(-\tfrac{1}{h}+\tfrac{\nu h}{6}\right)^2\Bigl[{{\bf D}} \otimes ({{\bf e}}_1^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_1^{n-1} + {{\bf Ad}} \otimes ({{\bf e}}_{n-1}^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_1^{n-1}\Bigr].$$

Moreover, we have

$${{\bf S}} = \Bigl[\left(\tfrac{1}{h}+\tfrac{\nu h}{3}\right) - \left(-\tfrac{1}{h}+\tfrac{\nu h}{6}\right)^2 ({{\bf e}}_1^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_1^{n-1}\Bigr]{{\bf D}} - \Bigl[\left(-\tfrac{1}{h}+\tfrac{\nu h}{6}\right)^2 ({{\bf e}}_{n-1}^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_1^{n-1}\Bigr]{{\bf Ad}}.$$

In the general case $$\nu > 0$$, it follows that the nonzero pattern of $${{\bf S}}$$ will always coincide with that of $${{\bf L}}_{\it{\Gamma}}$$.

Lemma A.1 We have

$$\left(\tfrac{1}{h}+\tfrac{\nu h}{3}\right) - \left(-\tfrac{1}{h}+\tfrac{\nu h}{6}\right)^2 ({{\bf e}}_1^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_1^{n-1} \lessapprox 1 + \frac{(n-1)\,\nu}{3\pi^2+\nu} + O(h).$$ (A.4)

If $$\nu < h^{-1}$$, then we have the estimate

$$\left(-\tfrac{1}{h}+\tfrac{\nu h}{6}\right)^2 ({{\bf e}}_{n-1}^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_1^{n-1} \approx -\frac{1}{1+\frac{\nu}{3\pi^2}} + O(h).$$ (A.5)

Proof. We have

$$-\left(-\tfrac{1}{h}+\tfrac{\nu h}{6}\right)^2 ({{\bf e}}_1^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_1^{n-1} = -\frac{1}{h^2}\left(1-\frac{\nu h^2}{6}\right)^2 \sum_{j=1}^{n-1} \frac{2h^2\sin^2(j\pi h)}{\left(1+\frac{\nu h^2}{6}\right)\lambda_j + \frac{\nu h^2}{3}} = -\left(1-\frac{\nu h^2}{6}\right)^2 \sum_{j=1}^{n-1} \frac{\bigl[2\cos^2\left(j\frac{\pi}{2}h\right)\bigr]\,\lambda_j}{\left(1+\frac{\nu h^2}{6}\right)\lambda_j + \frac{\nu h^2}{3}} = -\left(1-\frac{\nu h^2}{6}\right)^2 \sum_{j=1}^{n-1} \frac{2\cos^2\left(j\frac{\pi}{2}h\right)}{\left(1+\frac{\nu h^2}{6}\right) + \frac{\nu h^2}{3\lambda_j}} \le -\left(1-\frac{\nu h^2}{6}\right)^2 \sum_{j=1}^{n-1} \frac{2\cos^2\left(j\frac{\pi}{2}h\right)}{\left(1+\frac{\nu h^2}{6}\right) + \frac{\nu h^2}{3\lambda_1}} \approx -\left(1-\frac{\nu h^2}{6}\right)^2 \sum_{j=1}^{n-1} \frac{2\cos^2\left(j\frac{\pi}{2}h\right)}{\left(1+\frac{\nu h^2}{6}\right) + \frac{\nu}{3\pi^2}} = -\frac{\left(1-\frac{\nu h^2}{6}\right)^2}{1+\frac{\nu h^2}{6}+\frac{\nu}{3\pi^2}}\,(n-1).$$

Here, we have used the identity

$$\sum_{j=1}^{n-1} 2\cos^2\left(\frac{j\theta}{2}\right) = \sum_{j=1}^{n-1}\bigl[\cos(j\theta)+1\bigr] = \sum_{j=1}^{n-1}\cos(j\theta) + n - 1$$

(with $$\theta = \pi h = \pi/n$$) and the Dirichlet kernel formula

$$1 + 2\sum_{j=1}^{n-1}\cos(j\theta) = \frac{\sin\left(\left(n-\frac{1}{2}\right)\theta\right)}{\sin\left(\frac{\theta}{2}\right)},$$

which together yield $$\sum_{j=1}^{n-1} 2\cos^2\left(\frac{j\theta}{2}\right) = n-1$$. Therefore, we have (A.4).

From (A.1) and expanding $$\lambda_j^{-1} = \csc^2(j\pi h/2)/4$$ to first order ($$j\pi h/2 < \pi$$), we obtain

$$\left(-\tfrac{1}{h}+\tfrac{\nu h}{6}\right)^2 ({{\bf e}}_{n-1}^{n-1})^T({{\bf U}}{{\bf \Theta}}^{-1}{{\bf U}}){{\bf e}}_1^{n-1} = \left(1-\frac{\nu h^2}{6}\right)^2 \sum_{j=1}^{n-1} \frac{(-1)^j\, 2\cos^2\left(j\frac{\pi}{2}h\right)}{\left(1+\frac{\nu h^2}{6}\right) + \frac{\nu h^2}{3\lambda_j}} \approx \left(1-\frac{\nu h^2}{6}\right)^2 \sum_{j=1}^{n-1} \frac{(-1)^j\, 2\cos^2\left(j\frac{\pi}{2}h\right)}{\left(1+\frac{\nu h^2}{6}\right) + \frac{\nu}{3}\left[\frac{1}{j\pi} + \frac{j\pi h^2}{48}\right]^2} \approx \left(1-\frac{\nu h^2}{6}\right)^2 \sum_{j=1}^{n-1} \frac{(-1)^j\, 2\cos^2\left(j\frac{\pi}{2}h\right)}{1+\frac{\nu}{3\pi^2}\left[1+O\left(\frac{\pi^2 h}{48}\right)\right]^2} \approx \frac{1}{1+\frac{\nu}{3\pi^2}} \sum_{j=1}^{n-1} (-1)^j\, 2\cos^2\left(j\frac{\pi}{2}h\right) + O(h).$$

Finally, we obtain (A.5) by observing that $$\sum_{j=1}^{n-1} \bigl(-1\bigr)^j 2 \cos^2(j \frac{\pi}{2} h) = -1$$. □

Remark A.2 We observe that the estimate (A.5) is only a crude approximation of the true value of the nonzero entries of $${{\bf S}}$$ (corresponding to the entries of $${{\bf Ad}}$$) when $$\nu \geq h^{-1}$$. In this case, we can only say that the off-diagonal entries of the Schur complement go to zero at least as fast as $$\nu^{-1}$$ for $$\nu \to \infty$$. Numerically, however, we observe that they are much closer to zero than the value estimated in (A.5).

References

Afanasjew, M., Eiermann, M., Ernst, O. & Güttel, S. (2008) Implementation of a restarted Krylov subspace method for the evaluation of matrix functions. Linear Algebra Appl., 429, 2293–2314.

Arioli, M. & Manzini, G. (2003) Null space algorithm and spanning trees in solving Darcy's equation. BIT Numer. Math., 43, 839–848.
Arioli, M. & Manzini, G. (2006) A network programming approach in solving Darcy's equations by mixed finite-element methods. Electron. Trans. Numer. Anal., 22, 41–70.

Barabási, A.-L. & Albert, R. (1999) Emergence of scaling in random networks. Science, 286, 509–512.

Bekas, C. & Saad, Y. (2005) Computation of smallest eigenvalues using spectral Schur complements. SIAM J. Sci. Comput., 27, 458–481.

Benzi, M. (2002) Preconditioning techniques for large linear systems: a survey. J. Comput. Phys., 182, 418–477.

Benzi, M. & Kuhlemann, V. (2013) Chebyshev acceleration of the GeneRank algorithm. Electron. Trans. Numer. Anal., 40, 311–320.

Berkolaiko, G. & Kuchment, P. (2013) Introduction to Quantum Graphs. Mathematical Surveys and Monographs. Providence, RI: American Mathematical Society.

Bindel, D. & Hood, A. (2013) Localization theorems for nonlinear eigenvalues. SIAM J. Matrix Anal. Appl., 34, 1728–1749.

Brenner, S. & Scott, L. (2002) The Mathematical Theory of Finite Element Methods. Texts in Applied Mathematics. New York: Springer.

Brezis, H. (2010) Functional Analysis, Sobolev Spaces and Partial Differential Equations. Universitext. New York: Springer.

Chung, F. & Lu, L. (2006) Complex Graphs and Networks. CBMS Regional Conference Series in Mathematics. Boston, MA: American Mathematical Society.

Dupont, T. & Scott, R. (1980) Polynomial approximation of functions in Sobolev spaces. Math. Comput., 34, 441–463.

Elsässer, R. (2006) Toward the eigenvalue power law. Mathematical Foundations of Computer Science 2006, 31st International Symposium, MFCS 2006, Stará Lesná, Slovakia, August 28–September 1, 2006, Proceedings (Kralovic, R. & Urzyczyn, P. eds). Lecture Notes in Computer Science, vol. 4162. New York: Springer, pp. 351–362.

Estrada, E. (2011) The Structure of Complex Networks: Theory and Applications. New York: Oxford University Press.

Güttel, S. (2017) funm_kryl: A Restart Code for the Evaluation of Matrix Functions. Available at http://guettel.com/funm_kryl/ (accessed 15 May 2017).

Herty, M., Mohring, J. & Sachers, V. (2010) A new model for gas flow in pipe networks. Math. Methods Appl. Sci., 33, 845–855.

Hild, J. & Leugering, G. (2012) Real-time control of urban drainage systems. Mathematical Optimization of Water Networks. International Series on Numerical Mathematics. Basel: Springer, pp. 129–150.

Horn, R. A. & Johnson, C. R. (2012) Matrix Analysis, 2nd edn. New York: Cambridge University Press.

Kuchment, P. (2004) Quantum graphs: I. Some basic structures. Waves Random Media, 14, S107–S128.

Kuchment, P. (2005) Quantum graphs: II. Some spectral properties of quantum and combinatorial graphs. J. Phys. A: Math. Gen., 38, 4887.

Lagnese, J. E., Schmidt, E. J. & Leugering, G. (1994) Modeling, Analysis and Control of Dynamic Elastic Multi-Link Structures. Boston, MA: Birkhäuser.

Meurant, G. (1992) A review on the inverse of symmetric tridiagonal and block tridiagonal matrices. SIAM J. Matrix Anal. Appl., 13, 707–728.

Newman, M. (2010) Networks: An Introduction. Oxford: Oxford University Press.

Nordenfelt, A. (2007) Spectral analysis of discrete approximations of quantum graphs. Master's Thesis, Centre for Mathematical Sciences, Lund University, Sweden.

Pesenson, I. (2005) Polynomial splines and eigenvalue approximations on quantum graphs. J. Approx. Theory, 135, 203–220.

Raviart, P., Thomas, J. & Ciarlet, P. (1983) Introduction à l'Analyse Numérique des Équations aux Dérivées Partielles. Mathématiques appliquées pour la maîtrise. Paris: Masson.

Saad, Y. (2003) Iterative Methods for Sparse Linear Systems, 2nd edn. Philadelphia, PA: Society for Industrial and Applied Mathematics.

Salkuyeh, D. K., Edalatpour, V. & Hezari, D. (2014) Polynomial preconditioning for the GeneRank problem. Electron. Trans. Numer. Anal., 41, 179–189.

Taylor, A. & Higham, D. J. (2009) CONTEST: a controllable test matrix toolbox for MATLAB. ACM Trans. Math. Software, 35, 26.

Wathen, A. J. (1987) Realistic eigenvalue bounds for the Galerkin mass matrix. IMA J. Numer. Anal., 7, 449–457.

Wybo, W. A. M., Boccalini, D., Torben-Nielsen, B. & Gewaltig, M.-O. (2015) A sparse reformulation of the Green's function formalism allows efficient simulations of morphological neuron models. Neural Comput., 27, 2587–2622.

Zlotnik, A., Chertkov, M. & Backhaus, S. (2015) Optimal control of transient flow in natural gas networks. 2015 IEEE 54th Annual Conference on Decision and Control (CDC). New York: IEEE, pp. 4563–4570.

Footnotes

1. Chebyshev semi-iteration is also quite attractive for problems where bounds on the extreme eigenvalues of $${{\bf S}}$$ are available (see Benzi & Kuhlemann, 2013).