# Increasing the smoothness of vector and Hermite subdivision schemes

## Abstract

In this paper we suggest a method for transforming a vector subdivision scheme (VSS) generating Cℓ limits to another such scheme of the same dimension, generating Cℓ+1 limits. In scalar subdivision, it is well known that a scheme generating Cℓ limit curves can be transformed to a new scheme producing Cℓ+1 limit curves by multiplying the scheme’s symbol with the smoothing factor $$\tfrac{z+1}{2}$$. First we extend this approach to VSSs, by manipulating symbols. The algorithms presented in this paper allow us to construct VSSs of arbitrarily high regularity from a convergent vector scheme. Furthermore, from a certain class of convergent Hermite schemes of dimension 2, we are able to obtain new Hermite schemes of dimension 2 with regularity of any order.

## 1. Introduction

Subdivision schemes are algorithms which iteratively refine discrete input data and produce smooth curves or surfaces in the limit. The regularity of the limit curve or surface is a topic of great interest. In this paper we are concerned with the stationary and univariate case, i.e. with subdivision schemes using the same set of coefficients (called the mask) in every refinement step and which have curves as limits. We study two types of such schemes: vector and Hermite subdivision schemes (HSSs). The most studied schemes are scalar subdivision schemes, with real-valued sequences as masks. These schemes are in fact a special case of vector subdivision schemes (VSSs), which have matrix-valued sequences as masks and refine sequences of vectors. For VSSs many results concerning convergence and smoothness are available. An incomplete list of references for the analysis of scalar schemes and VSSs is Cavaretta et al. (1991), Dyn et al. (1991), Dyn (1992), Micchelli & Sauer (1998), Dyn & Levin (2002), Sauer (2002) and Charina et al. (2005).
In Hermite subdivision the refined data are also sequences of vectors, interpreted as function values and derivative values. This results in level-dependent vector subdivision, where the convergence of a scheme already includes the regularity of the limit curve. Corresponding literature can be found in the studies by Dyn & Levin (1995, 1999), Dubuc & Merrien (2005), Han et al. (2005), Dubuc (2006), Guglielmi et al. (2011) and Merrien & Sauer (2012) and references therein. Note that here we consider Hermite schemes of dimension 2 (function and first derivative values). To emphasize this, we denote them by Hermite(2). Also, we consider inherently stationary Hermite(2) schemes (Conti et al., 2014), which means that the level-dependence arises only from the specific interpretation of the input data. Inherently nonstationary Hermite schemes are discussed, e.g. in the study by Conti et al. (2016). The convergence and smoothness analysis of subdivision schemes is strongly connected to the existence of the derived scheme or, in the Hermite case, to the existence of the Taylor scheme. The derived scheme (the Taylor scheme) is obtained by an appropriate factorization of the symbols (Dyn & Levin, 2002; Charina et al., 2005; Merrien & Sauer, 2012). In the scalar and vector case we have the following result: if the derived scheme produces Cℓ (ℓ ≥ 0) limit curves, then the original scheme produces Cℓ+1 limit curves, see Dyn & Levin (2002) and Charina et al. (2005). In the Hermite case, in addition to the assumption that the Taylor scheme is Cℓ, we also need that its limit functions have a vanishing first component (Merrien & Sauer, 2012). These results are essential tools in our approach for obtaining schemes with increased smoothness. We start from a scheme which is known to have a certain regularity and regard it as the derived scheme (the Taylor scheme) of a new scheme, which is yet to be computed. By the above result, the regularity of the new scheme is increased by 1.
This idea comes from univariate scalar subdivision, where it is well known that a scheme with symbol α*(z) is the derived scheme of $$\boldsymbol{\beta }^{\ast }(z)=\tfrac{1+z}{2}z^{-1}\boldsymbol{\alpha }^{\ast }(z)$$ (Dyn & Levin, 2002), and thus if Sα generates Cℓ limits, Sβ generates limits which are Cℓ+1. It is possible to generalize this process to obtain vector (Hermite(2)) subdivision schemes of arbitrarily high smoothness from a convergent vector scheme (a Hermite(2) scheme whose Taylor scheme is convergent with limit functions of vanishing first component). The presentation of such a general procedure is the main aim of this paper. We would like to mention other approaches which increase the regularity of subdivision schemes: while the case of univariate scalar schemes is presented, e.g. in the study by Dyn & Levin (2002), the paper by Sauer (2003) generalizes such a smoothing procedure to the multivariate scalar setting. Although VSSs appear naturally in the analysis of smoothness of multivariate scalar schemes, the aim in the study by Sauer (2003) is to increase the smoothness of scalar schemes. There are many approaches which increase the smoothness of some well known HSSs by 1. As shown in the study by Conti et al. (2014), the de Rham transform (Dubuc & Merrien, 2008) increases the regularity of some of the interpolatory Hermite schemes presented in the studies by Merrien (1992, 1999). Merrien & Sauer (2017) also present examples of Hermite schemes with increased smoothness, based on an approach which extends the dimension of the matrices of the mask and the dimension of the refined data. The recent paper by Jeong & Yoon (2017) defines a class of HSSs with tension parameters having high polynomial reproduction and smoothness. This class generalizes and unifies schemes from the studies by Merrien (1992), Han (2001) and Conti et al. (2014).
Compared to the approaches just mentioned, our paper presents the first general method, which can be applied to any convergent vector or Hermite(2) subdivision scheme. Our procedure works by algebraically manipulating symbols and generalizing the scalar smoothing factor $$\tfrac{z+1}{2}$$. Therefore, it contains, as a special case, the univariate scalar smoothing procedure (Dyn & Levin, 2002). Another benefit is its iterative nature, which allows us to construct schemes of arbitrarily high regularity from any given convergent scheme. In the Hermite(2) case the support length may increase by up to 5 (see Corollary 5.15, Example 5.17 and Example 5.18), which is bigger than the increase of support in the above-mentioned papers; this is the only drawback of our method, and we believe that it is outweighed by its high generality. In the vector case, however, the support increase is at most 2 (see Corollary 4.3), independently of the size of the scheme. Our paper is organized as follows. In Section 2 we introduce the notation used throughout this text and recall some definitions concerning subdivision schemes. Section 3 presents the well known procedure for increasing the smoothness of univariate scalar subdivision schemes (Dyn & Levin, 2002). However, we introduce new notation to emphasize the analogy to the procedures we present in Sections 4 and 5 for vector and Hermite(2) schemes, respectively. We conclude with two examples, applying our procedure to an interpolatory Hermite(2) scheme of Merrien (1992) and to a Hermite(2) scheme of de Rham-type (Dubuc & Merrien, 2008), obtaining schemes with limit curves of regularity C2 and C3, respectively.

## 2. Notation and background

In this section we introduce the notation which is used throughout this paper and recall some known facts about scalar, vector and HSSs. Vectors in $$\mathbb{R}^{p}$$ will be labeled by lowercase letters c. The standard basis is denoted by e1, …, ep.
Sequences of elements in $$\mathbb{R}^{p}$$ are denoted by boldface letters $$\mathbf{c}=\{c_{i} \in \mathbb{R}^{p}: i \in \mathbb{Z} \}$$. The space of all such sequences is $$\ell (\mathbb{R}^{p})$$. We define a subdivision operator $$S_{\boldsymbol{\alpha }}: \ell (\mathbb{R}^{p}) \to \ell (\mathbb{R}^{p})$$ with a scalar mask $$\boldsymbol{\alpha } \in \ell (\mathbb{R})$$ by \begin{align} (S_{\boldsymbol{\alpha}}\mathbf{c})_{i}=\sum_{j \in \mathbb{Z}} \alpha_{i-2j}c_{j}, \quad i \in \mathbb{Z}, \: \mathbf{c} \in \ell(\mathbb{R}^{p}). \end{align} (2.1) We study the case of finitely supported masks, with support contained in [−N, N]. In this case the sum in (2.1) is finite and the scheme is local. We also consider matrix-valued masks. To distinguish them from the scalar case, we denote matrices in $$\mathbb{R}^{p \times p}$$ by uppercase letters. Sequences of matrices are denoted by boldface letters $$\mathbf{A}=\{A_{i} \in \mathbb{R}^{p \times p}: i \in \mathbb{Z} \}$$. We define a vector subdivision operator $$S_{\mathbf{A}}: \ell (\mathbb{R}^{p}) \to \ell (\mathbb{R}^{p})$$ with a finitely supported matrix mask $$\mathbf{A} \in \ell (\mathbb{R}^{p \times p})$$ by \begin{align} (S_{\mathbf{A}}\mathbf{c})_{i}=\sum_{j \in \mathbb{Z}} A_{i-2j}c_{j}, \quad i \in \mathbb{Z}, \: \mathbf{c} \in \ell(\mathbb{R}^{p}). \end{align} (2.2) We define three kinds of subdivision schemes: Definition 2.1 A scalar subdivision scheme is the procedure of constructing $$\mathbf{c}^{n}\: (n\geqslant 1)$$ from input data $$\mathbf{c}^{0} \in \ell (\mathbb{R}^{p})$$ by the rule cn = Sαcn−1, where $$\boldsymbol{\alpha } \in \ell (\mathbb{R})$$ is a scalar mask. A VSS is the procedure of constructing $$\mathbf{c}^{n}\: (n\geqslant 1)$$ from input data $$\mathbf{c}^{0} \in \ell (\mathbb{R}^{p})$$ by the rule cn = SAcn−1, where A is a matrix-valued mask. 
An HSS is the procedure of constructing $$\mathbf{c}^{n}\: (n\geqslant 1)$$ from $$\mathbf{c}^{0}\in \ell (\mathbb{R}^{p})$$ by the rule $$D^{n}\mathbf{c}^{n} = S_{\mathbf{A}}D^{n-1}\mathbf{c}^{n-1}$$, where A is a matrix-valued mask and D is the dilation matrix $$D=\left(\begin{array}{@{}cccc@{}} 1 & & & \\ & \frac{1}{2} & & \\ & & \ddots & \\ & & & \frac{1}{2^{p-1}} \end{array}\right)\!.$$ An HSS of dimension 2 is also denoted by HSS(2). The difference between scalar and vector subdivision lies in the dimension of the mask. In scalar subdivision the components of c are refined independently of each other. This is not the case in vector subdivision. Note also that scalar schemes are a special case of vector schemes with mask Ai = αiIp, where Ip is the (p × p) unit matrix. In Hermite subdivision, on the other hand, the components of c are interpreted as function and derivative values up to order p − 1. This is represented by the matrix D. In particular, Hermite subdivision is a level-dependent case of vector subdivision: $$\mathbf{c}^{n}=S_{\hat{\mathbf{A}}_{n}}\mathbf{c}^{n-1}$$ with $$\hat{\mathbf{A}}_{n}=\{D^{-n}A_{i}D^{n-1}:i \in \mathbb{Z}\}$$. On the space $$\ell (\mathbb{R}^{p})$$ we define a norm by $$\|\mathbf{c}\|_{\infty}=\sup_{i \in \mathbb{Z}}\|c_{i}\|,$$ where ∥⋅∥ is a norm on $$\mathbb{R}^{p}$$. The Banach space of all bounded sequences is denoted by $$\ell ^{\infty }(\mathbb{R}^{p})$$. A subdivision operator Sα with finitely supported mask, restricted to a map $$\ell ^{\infty }(\mathbb{R}^{p}) \to \ell ^{\infty }(\mathbb{R}^{p})$$, has an induced operator norm: $$\|S_{\boldsymbol{\alpha}}\|_{\infty}=\sup\left\{\left\|S_{\boldsymbol{\alpha}}\mathbf{c} \right\|_{\infty}: \mathbf{c} \in \ell^{\infty}\left(\mathbb{R}^{p}\right)\ \textrm{and} \ \|\mathbf{c}\|_{\infty}=1\right\}\!.$$ The same holds for subdivision operators with matrix masks. Next we define convergence of scalar, vector and Hermite(2) subdivision schemes.
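As an illustration, the refinement rules (2.1) and (2.2) can be sketched in a few lines of code. This is our own minimal sketch, not part of the paper: finitely supported masks and sequences are stored as index–value dictionaries, and for the vector rule we fix p = 2 with matrices stored as nested tuples.

```python
def subdivide(alpha, c):
    """One refinement step (S_alpha c)_i = sum_j alpha_{i-2j} c_j, cf. eq. (2.1).

    alpha, c: dicts {index: value} representing finitely supported sequences.
    """
    out = {}
    for j, cj in c.items():
        for k, ak in alpha.items():
            out[k + 2 * j] = out.get(k + 2 * j, 0.0) + ak * cj
    return out


def subdivide_matrix(A, c):
    """One step of the vector rule (2.2) for p = 2: entries of the mask A are
    2x2 matrices ((a, b), (cc, d)) and entries of c are vectors (x, y)."""
    out = {}
    for j, (x, y) in c.items():
        for k, ((a, b), (cc, d)) in A.items():
            px, py = out.get(k + 2 * j, (0.0, 0.0))
            out[k + 2 * j] = (px + a * x + b * y, py + cc * x + d * y)
    return out


# Linear B-spline ("hat") mask; refining a delta sequence reproduces the mask.
hat = {-1: 0.5, 0: 1.0, 1: 0.5}
print(subdivide(hat, {0: 1.0}))  # {-1: 0.5, 0: 1.0, 1: 0.5}
```

A scalar scheme embedded as a matrix mask Ai = αiI2, as described above, refines both components independently, which `subdivide_matrix` reproduces.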
We start with scalar and vector schemes: Definition 2.2 A scalar (vector) subdivision scheme associated with the mask α (A) is convergent in $$\ell ^{\infty }(\mathbb{R}^{p})$$, also called C0, if for all input data $$\mathbf{c}^{0} \in \ell ^{\infty }(\mathbb{R}^{p})$$ there exists a function $$\varPsi \in C(\mathbb{R},\mathbb{R}^{p})$$, such that the sequences $$\mathbf{c}^{n} = S_{\boldsymbol{\alpha }}^{n}\mathbf{c}^{0}$$ $$\left (\mathbf{c}^{n}=S_{\mathbf{A}}^{n}\mathbf{c}^{0}\right )$$ satisfy $$\sup_{i\in \mathbb{Z}}\|{c^{n}_{i}}-\varPsi\left(\tfrac{i}{2^{n}}\right)\| \to 0, \quad \textrm{as } n \to \infty,$$ and $$\varPsi \neq 0$$ for some $$\mathbf{c}^{0} \in \ell ^{\infty }(\mathbb{R}^{p})$$. We say that the scheme is Cℓ if, in addition, Ψ is ℓ-times continuously differentiable for any initial data. In Section 5 we consider HSSs which refine function and first derivative values. The case of point-tangent data is treated componentwise. With this approach it is sufficient to consider convergence for data in $$\ell (\mathbb{R}^{2})$$. For the reason why we treat only function and first derivative values, and not higher derivatives, see the beginning of Section 5. In order to distinguish between the convergence of VSSs and the convergence of HSSs, we use the notation introduced in the study by Conti et al. (2014): Definition 2.3 An HSS(2) associated with the mask A is said to be $$HC^{\ell }$$-convergent with $$\ell \geqslant 1$$, if for any input data $$\mathbf{c}^{0} \in \ell ^{\infty }(\mathbb{R}^{2})$$, there exists a function $$\varPsi ={\varPsi ^{0} \choose \varPsi ^{1}}$$ with $$\varPsi ^{0} \in C^{\ell }(\mathbb{R},\mathbb{R})$$ and Ψ1 being the derivative of Ψ0, such that the sequences $$\mathbf{c}^{n} = D^{-n}S_{\mathbf{A}}^{n}\mathbf{c}^{0}$$, $$n \geqslant 1$$, satisfy $$\sup_{i \in \mathbb{Z}}\left\|{c^{n}_{i}}-\varPsi\left(\tfrac{i}{2^{n}}\right)\right\| \to 0, \quad \textrm{as } n\to \infty,$$ and $$\varPsi \neq 0$$ for some input data $$\mathbf{c}^{0} \in \ell ^{\infty }\left (\mathbb{R}^{2}\right )$$.
Note that in contrast to the vector case, an HSS(2) is convergent only if the limit already possesses a certain degree of smoothness. We conclude by recalling some facts about the generating function of a sequence c, which is the formal Laurent series $$\mathbf{c}^{\ast}(z)=\sum_{i \in \mathbb{Z}}c_{i}z^{i}.$$ The generating function of a mask of a subdivision scheme is called the symbol of the scheme. We study the symbols of both scalar (α) and matrix (A) masks, defined by $$\boldsymbol{\alpha}^{\ast}(z)=\sum_{i \in \mathbb{Z}}\alpha_{i}z^{i} \quad \textrm{and} \quad \mathbf{A}^{\ast}(z)=\sum_{i \in \mathbb{Z}}A_{i}z^{i}.$$ Due to the finite support assumption, symbols are Laurent polynomials. It is easy to see (e.g. in Dyn & Levin, 2002) that the following properties are satisfied: Lemma 2.4 Let c be a sequence and let α be a scalar or a matrix mask. By $$\varDelta$$ we denote the forward-difference operator ($$\varDelta$$c)i = ci+1 − ci. Then we have: $$(\varDelta\mathbf{c})^{\ast}(z)=(z^{-1}-1)\mathbf{c}^{\ast}(z) \;\textrm{and}\; \left(S_{\boldsymbol{\alpha}}\mathbf{c}\right)^{\ast}(z)=\boldsymbol{\alpha}^{\ast}(z)\mathbf{c}^{\ast}\left(z^{2}\right)\!.$$ Furthermore, for finite sequences we have the equalities \begin{align*} \mathbf{c}^{\ast}(1)&=\sum_{i \in \mathbb{Z}}c_{2i}+\sum_{i \in \mathbb{Z}}c_{2i+1} \quad \textrm{and} \quad \mathbf{c}^{\ast}(-1)=\sum_{i \in \mathbb{Z}}c_{2i}-\sum_{i \in \mathbb{Z}}c_{2i+1},\\{\mathbf{c}^{\ast}}^{\prime}(1)&=\sum_{i \in \mathbb{Z}}c_{2i}(2i)+\sum_{i \in \mathbb{Z}}c_{2i+1}(2i+1) \quad \textrm{and} \quad{\mathbf{c}^{\ast}}^{\prime}(-1)=\sum_{i \in \mathbb{Z}}c_{2i+1}(2i+1)-\sum_{i \in \mathbb{Z}}c_{2i}(2i). \end{align*}

## 3. Increasing the smoothness of scalar subdivision schemes

In this section we recall a procedure for increasing the smoothness of scalar subdivision schemes, which is realized by the smoothing factor $$\tfrac{z+1}{2}$$. The results of this section are taken from Section 4 in the study by Dyn & Levin (2002).
We introduce notation in order to illustrate the analogy to the procedures we present in Section 4 for VSSs. The condition $$\sum _{i \in \mathbb{Z}}\alpha _{2i}=\sum _{i \in \mathbb{Z}}\alpha _{2i+1}=1$$ on the mask α is necessary for the convergence of Sα. In this case α*(−1) = 0, implying that α*(z) has a factor (z + 1) and there exists a mask ∂α such that \begin{align} \varDelta S_{\boldsymbol{\alpha}}=\tfrac{1}{2}S_{\partial \boldsymbol{\alpha}}\varDelta. \end{align} (3.1) The scalar scheme associated with ∂α is called the derived scheme. It is easy to see that \begin{align} (\partial \boldsymbol{\alpha})^{\ast}(z)=2z\tfrac{\boldsymbol{\alpha}^{\ast}(z)}{z+1} \end{align} (3.2) and that (∂α)* is a Laurent polynomial. The convergence and smoothness analysis of a scalar subdivision scheme associated with α depends on the properties of ∂α: Theorem 3.1 Let α be a mask which satisfies α*(1) = 2 and α*(−1) = 0. The scalar scheme associated with α is convergent if and only if the scalar scheme associated with $$\tfrac{1}{2}\partial \boldsymbol{\alpha }$$ is contractive, namely $$\left \|\left (\frac{1}{2}S_{\partial \boldsymbol{\alpha }}\right )^{L}\right \|_{\infty }<1$$ for some $$L \in \mathbb{N}$$. If the scalar scheme associated with ∂α is $$C^{\ell }\enspace (\ell \geqslant 0)$$ then the scalar subdivision scheme associated with α is Cℓ+1. Theorem 3.1 allows us to define a procedure for increasing the smoothness of a scalar subdivision scheme: for a mask α, define a new mask $$\mathcal{I}\boldsymbol{\alpha }$$ by $$(\mathcal{I}\boldsymbol{\alpha })^{\ast }(z)=\tfrac{(1+z)}{2}z^{-1}\boldsymbol{\alpha }^{\ast }(z)$$. Then $$(\mathcal{I}\boldsymbol{\alpha })^{\ast }(-1)=0$$ and from equation (3.2) we get $$\partial (\mathcal{I}\boldsymbol{\alpha })=\boldsymbol{\alpha. }$$ (Note that if ∂α exists, then also $$\mathcal{I}(\partial \boldsymbol{\alpha })=\boldsymbol{\alpha }$$.) 
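In terms of mask coefficients, the definition of $$\mathcal{I}$$ reads $$(\mathcal{I}\boldsymbol{\alpha })_{j}=\tfrac{1}{2}(\alpha _{j}+\alpha _{j+1})$$, and ∂α is obtained from equation (3.2) by an exact division of α*(z) by (z + 1). The following sketch (our own illustration, using exact rational arithmetic and a dictionary representation of masks) implements both operators and checks $$\partial (\mathcal{I}\boldsymbol{\alpha })=\boldsymbol{\alpha }$$ on the linear B-spline mask:

```python
from fractions import Fraction as F

def smooth(alpha):
    # I:  (I alpha)*(z) = (1+z)/2 * z^{-1} * alpha*(z),
    # i.e. (I alpha)_j = (alpha_j + alpha_{j+1}) / 2.
    out = {}
    for i, a in alpha.items():
        out[i - 1] = out.get(i - 1, F(0)) + F(a) / 2
        out[i] = out.get(i, F(0)) + F(a) / 2
    return {i: v for i, v in out.items() if v}

def derive(alpha):
    # d:  (d alpha)*(z) = 2 z alpha*(z) / (z+1); requires alpha*(-1) = 0,
    # which guarantees that the synthetic division below is exact.
    assert sum(F(a) * F(-1) ** i for i, a in alpha.items()) == 0
    lo, hi = min(alpha), max(alpha)
    q, prev = {}, F(0)
    for i in range(lo, hi):          # divide alpha*(z) by (1 + z)
        q[i] = F(alpha.get(i, 0)) - prev
        prev = q[i]
    return {i + 1: 2 * v for i, v in q.items() if v}

hat = {-1: F(1, 2), 0: F(1), 1: F(1, 2)}   # linear B-spline mask
chaikin = smooth(hat)                       # quadratic B-spline mask
assert derive(chaikin) == hat               # d(I alpha) = alpha
```

Applied to the hat mask, `smooth` produces the quadratic B-spline (Chaikin) mask {1/4, 3/4, 3/4, 1/4}, in agreement with Example 3.3.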
Corollary 3.2 Let α be a mask associated with a Cℓ ($$\ell \geqslant 0$$) scalar subdivision scheme. Then the mask $$\mathcal{I}\boldsymbol{\alpha }$$ gives rise to a Cℓ+1 scheme. Therefore, by a repeated application of $$\mathcal{I}$$, a scalar subdivision scheme which is at least convergent can be transformed to a new scheme of arbitrarily high regularity. We call $$\mathcal{I}$$ a smoothing operator and $$\tfrac{z+1}{2}$$ a smoothing factor. Note that the factor z−1 in $$\mathcal{I}$$ is an index shift. Example 3.3 (B-Spline schemes). The symbol of the scheme generating B-Spline curves of degree $$\ell \geqslant 1$$ and smoothness Cℓ−1 is $$\boldsymbol{\alpha}_{\ell}^{\ast}(z)=\Big(\tfrac{(z+1)}{2}z^{-1}\Big)^{\ell}(z+1).$$ Obviously $${\boldsymbol{\alpha }}^{\ast }_{\ell }(z)=\tfrac{(z+1)}{2}z^{-1}{\boldsymbol{\alpha }}^{\ast }_{\ell -1}(z)=(\mathcal{I}{\boldsymbol{\alpha }}_{\ell -1})^{\ast }(z)$$.

## 4. Increasing the smoothness of VSSs

In this section we describe a procedure for increasing the smoothness of VSSs, which is similar to the scalar case. It is more involved, since we consider masks consisting of matrix sequences.

### 4.1. Convergence and smoothness analysis

First we present results concerning the convergence and smoothness of VSSs. Their proofs can be found in the studies by Cohen et al. (1996), Micchelli & Sauer (1998), Sauer (2002) and Charina et al. (2005). For a mask A of a VSS we define \begin{align} A^{0}=\sum_{i \in \mathbb{Z}}A_{2i}, \quad A^{1}=\sum_{i \in \mathbb{Z}}A_{2i+1}. \end{align} (4.1) Following Micchelli & Sauer (1998), let \begin{align} \mathscr{E}_{\mathbf{A}}=\left\{v \in \mathbb{R}^{p}: A^{0}v=v\ \textrm{and} \ A^{1}v=v\right\} \end{align} (4.2) and $$k=\dim \mathscr{E}_{\mathbf{A}}$$. A priori, $$0 \leqslant k \leqslant p$$. However, for a convergent VSS, $$\mathscr{E}_{\mathbf{A}}\neq \{0\}$$, i.e. $$1 \leqslant k \leqslant p$$. Therefore, the existence of a common eigenvector of A0 and A1 w.r.t.
the eigenvalue 1 is a necessary condition for convergence. The next lemma reduces the convergence analysis to the case $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}.$$ Lemma 4.1 Let SA be a Cℓ$$(\ell \geqslant 0)$$ convergent VSS. Given an invertible matrix $$R \in \mathbb{R}^{p\times p}$$, define a new mask $$\hat{\mathbf{A}}$$ by $$\hat{A}_{i}=R^{-1}A_{i}R$$ for $$i \in \mathbb{Z}$$. The VSS associated with $$\hat{\mathbf{A}}$$ is also Cℓ. There exist invertible matrices R such that $$\hat{\mathbf{A}}$$ satisfies $$\mathscr{E}_{\hat{\mathbf{A}}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$, where $$k=\dim \mathscr{E}_{\mathbf{A}}$$. In the studies by Cohen et al. (1996) and Sauer (2002) the following generalization of the forward-difference operator $$\varDelta$$ is introduced: \begin{align} \varDelta_{k}=\begin{pmatrix} \varDelta I_{k} & 0 \\ 0 & I_{p-k} \end{pmatrix}\!, \end{align} (4.3) where Ik is the (k × k) unit matrix. It is shown there that if \begin{align} \mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots,e_{k}\}, \end{align} (4.4) then in analogy to equation (3.1), there exists a matrix mask ∂kA such that \begin{align} \varDelta_{k}S_{\mathbf{A}}=\tfrac{1}{2}S_{\partial_{k}\mathbf{A}}\varDelta_{k}. \end{align} (4.5) Algebraic conditions guaranteeing equation (4.5) are stated and proved in the next subsection. We denote by ∂kA any mask satisfying equation (4.5). The vector scheme associated with ∂kA is called the derived scheme of A with respect to $$\varDelta$$k. Furthermore, we have the following result concerning the convergence of SA in terms of $$S_{\partial _{k}\mathbf{A}}$$: Theorem 4.2 Let A be a mask such that $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$. If $$\|(\tfrac{1}{2}S_{\partial _{k}\mathbf{A}})^{L}\|_{\infty }<1$$ for some $$L \in \mathbb{N}$$ (that is, $$\tfrac{1}{2}S_{\partial _{k}\mathbf{A}}$$ is contractive), then the vector scheme associated with A is convergent.
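In the scalar case (k = p = 1, where $$\varDelta _{k}$$ reduces to $$\varDelta$$), the contractivity condition of Theorem 4.2 can be checked directly, since for a scalar mask the operator norm is $$\|S_{\boldsymbol{\alpha }}\|_{\infty }=\max \big (\sum _{i}|\alpha _{2i}|,\sum _{i}|\alpha _{2i+1}|\big )$$ (Dyn & Levin, 2002). A small sketch of our own, using the hat mask of Section 3 and its derived mask (computed by hand from equation (3.2)):

```python
def subdivision_norm(alpha):
    # ||S_alpha||_inf = max over the two cosets of the absolute coefficient sums
    # (scalar case; see Dyn & Levin, 2002).
    even = sum(abs(a) for i, a in alpha.items() if i % 2 == 0)
    odd = sum(abs(a) for i, a in alpha.items() if i % 2 != 0)
    return max(even, odd)

# Hat mask alpha*(z) = (1/2)z^{-1} + 1 + (1/2)z; by (3.2) its derived mask has
# symbol (d alpha)*(z) = 2z alpha*(z)/(z+1) = 1 + z  (computed by hand).
d_alpha = {0: 1.0, 1: 1.0}
# (1/2) S_{d alpha} is contractive already for L = 1, so the hat scheme converges:
assert 0.5 * subdivision_norm(d_alpha) == 0.5 < 1.0
```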
In fact there is a stronger result in the studies by Charina et al. (2005) and Cohen et al. (1996), but we only need this special case. Two important results for the analysis of smoothness of VSSs are as follows: Theorem 4.3 (Micchelli & Sauer, 1998) Let A be a mask of a convergent VSS, such that $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$ for $$k\leqslant p$$, then \begin{align} \dim \mathscr{E}_{\partial_{k} \mathbf{A}}=\dim \mathscr{E}_{\mathbf{A}}. \end{align} (4.6) Theorem 4.4 (Charina et al., 2005) Let A be a mask such that $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$. If the VSS associated with ∂kA is Cℓ for $$\ell \geqslant 0$$, then the VSS associated with A is Cℓ+1. Remark 4.5 In the last theorem we omitted the assumption, required in Charina et al. (2005), that SA is convergent. This is possible because if $$S_{\partial _{k}\mathbf{A}}$$ is Cℓ, then $$\frac{1}{2}S_{\partial _{k}\mathbf{A}}$$ is contractive, implying that SA is convergent in view of Theorem 4.2. A useful observation for our analysis is as follows: Lemma 4.6 Let A be a matrix mask. Then $$\mathscr{E}_{\mathbf{A}}=\left\{v \in \mathbb{R}^{p}: \mathbf{A}^{\ast}(1)v=2v \ \textrm{and} \ \mathbf{A}^{\ast}(-1)v=0\right\}\!.$$ Proof. It follows immediately from equation (4.1) and the definition of a symbol that $$A^{0}=\tfrac{1}{2}\Big (\mathbf{A}^{\ast }(1)+\mathbf{A}^{\ast }(-1)\Big )$$ and $$A^{1}=\tfrac{1}{2}\Big (\mathbf{A}^{\ast }(1)-\mathbf{A}^{\ast }(-1)\Big )$$. This, together with equation (4.2), implies the claim of the lemma.

### 4.2. Algebraic conditions

We would like to modify a given mask B of a Cℓ VSS to obtain a new scheme SA which is Cℓ+1. The idea is to define A such that ∂kA = B, i.e. such that equation (4.5) is satisfied for some k. If we can prove that $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$, then by Theorem 4.4, the scheme SA is Cℓ+1.
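The characterization of $$\mathscr{E}_{\mathbf{A}}$$ in Lemma 4.6 is easy to check numerically. As a toy example of our own (not from the paper), take the hat mask embedded as a matrix mask Ai = αiI2, the special case mentioned in Section 2; then A*(1) = α*(1)I2 = 2I2 and A*(−1) = α*(−1)I2 = 0, so $$\mathscr{E}_{\mathbf{A}}=\mathbb{R}^{2}$$ and k = 2:

```python
def in_E_A(A_at_1, A_at_minus1, v):
    # Lemma 4.6:  v in E_A  iff  A*(1) v = 2v  and  A*(-1) v = 0   (p = 2).
    mv1 = [sum(A_at_1[r][c] * v[c] for c in range(2)) for r in range(2)]
    mv2 = [sum(A_at_minus1[r][c] * v[c] for c in range(2)) for r in range(2)]
    return mv1 == [2 * x for x in v] and mv2 == [0, 0]

# Hat mask embedded as A_i = alpha_i * I_2 (a scalar scheme viewed as a
# vector scheme): A*(1) = 2 I_2 and A*(-1) = 0.
A1 = [[2, 0], [0, 2]]
Am1 = [[0, 0], [0, 0]]
assert in_E_A(A1, Am1, [1, 0]) and in_E_A(A1, Am1, [0, 1])  # E_A = R^2, k = 2
```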
There are some immediate questions: (1) Under what conditions on a mask B can we define a mask A such that ∂kA = B? (2) How to choose k? In order to answer these questions, we have to study in more detail the mask of the derived scheme ∂kA and its relation to the mask A. Definition 4.7 For a mask A of dimension p, i.e. $$A_{i} \in \mathbb{R}^{p\times p}$$ for $$i\in \mathbb{Z}$$, and a fixed k ∈ {1, …, p}, we introduce the block notation $$\mathbf{A}=\begin{pmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22}\end{pmatrix}\!,$$ with A11 of size (k × k). In the next lemma, we present algebraic conditions on a symbol A*(z) guaranteeing the existence of ∂kA for a fixed k ∈ {1, …, p}, and also show that if $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$ these conditions hold. Lemma 4.8 Let A be a mask of dimension p. With the notation of Definition 4.7 we have: (1) If there exists k ∈ {1, …, p} such that $$\mathbf{A}^{\ast }_{11}(-1)=0$$, $$\mathbf{A}^{\ast }_{21}(-1)=0$$ and $$\mathbf{A}^{\ast }_{21}(1)=0$$, then there exists a mask ∂kA satisfying equation (4.5). (2) If $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$, then A*(z) satisfies the conditions of (1). Proof. Under the assumptions of (1), the matrix \begin{align} 2\begin{pmatrix} \mathbf{A}_{11}^{\ast}(z)/(z^{-1}+1) & (z^{-1}-1)\mathbf{A}_{12}^{\ast}(z)\\[0.2cm] \mathbf{A}_{21}^{\ast}(z)/(z^{-2}-1) & \mathbf{A}^{\ast}_{22}(z) \end{pmatrix} \end{align} (4.7) is a matrix Laurent polynomial. If we denote it by (∂kA)*(z), then the equation $$\varDelta _{k}S_{\mathbf{A}}=\tfrac{1}{2}S_{\partial _{k}\mathbf{A}}\varDelta _{k}$$ is satisfied.
Indeed, if we write this last equation in terms of symbols, we get \begin{align} &\begin{pmatrix} \left(z^{-1}-1\right)I_{k} & 0 \\ 0 & I_{p-k}\end{pmatrix} \begin{pmatrix} \mathbf{A}^{\ast}_{11}(z) & \mathbf{A}^{\ast}_{12}(z) \\ \mathbf{A}^{\ast}_{21}(z) & \mathbf{A}^{\ast}_{22}(z)\end{pmatrix}\nonumber\\ &\quad= \begin{pmatrix} \mathbf{A}_{11}^{\ast}(z)/\left(z^{-1}+1\right) & \left(z^{-1}-1\right)\mathbf{A}_{12}^{\ast}(z)\\[0.2cm] \mathbf{A}_{21}^{\ast}(z)/\left(z^{-2}-1\right) & \mathbf{A}^{\ast}_{22}(z) \end{pmatrix} \begin{pmatrix} \left(z^{-2}-1\right)I_{k} & 0 \\ 0 & I_{p-k}\end{pmatrix}\!. \end{align} (4.8) It is easy to verify the validity of equation (4.8). In order to prove (2), we deduce from Lemma 4.6 that $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$ implies the properties of A required in (1). In the proof of the validity of our smoothing procedure for VSSs and HSS(2)s, we work with the algebraic conditions (1) of Lemma 4.8 rather than with assumption (4.4). The reason is that the algebraic conditions can be checked and handled more easily. In order to define a procedure for increasing the smoothness of VSSs, we start by answering question (1): Lemma 4.9 Let A, B be masks of dimension p and let k ∈ {1, …, p}. With the notation of Definition 4.7, if $$\mathbf{B}_{12}^{\ast }(1)=0$$, then there exists a mask $$\mathcal{I}_{k} \mathbf{B}$$ satisfying \begin{align} \varDelta_{k}S_{\mathcal{I}_{k}\mathbf{B}}=\tfrac{1}{2}S_{\mathbf{B}}\varDelta_{k}, \end{align} (4.9) where $$\varDelta$$k is defined in equation (4.3). Proof.
Defining \begin{align} (\mathcal{I}_{k} \mathbf{B})^{\ast}(z)=\frac{1}{2}\begin{pmatrix} \left(z^{-1}+1\right)\mathbf{B}_{11}^{\ast}(z) & \mathbf{B}_{12}^{\ast}(z)/\left(z^{-1}-1\right)\\[2pt] \left(z^{-2}-1\right)\mathbf{B}_{21}^{\ast}(z) & \mathbf{B}^{\ast}_{22}(z) \end{pmatrix}\!, \end{align} (4.10) we note that under the condition $$\mathbf{B}_{12}^{\ast }(1)=0$$, the above matrix is a matrix Laurent polynomial. It is easy to verify that the matrix $$(\mathcal{I}_{k} \mathbf{B})^{\ast }(z)$$ in equation (4.10) satisfies equation (4.9). Remark 4.10 If k = p in Lemma 4.9 then $$(\mathcal{I}_{p}\mathbf{B})^{\ast }(z)=\tfrac{z^{-1}+1}{2}\mathbf{B}^{\ast }(z)$$, where $$\tfrac{z^{-1}+1}{2}$$ is the smoothing factor in the scalar case. In Lemmas 4.8 and 4.9 we constructed two operators ∂k and $$\mathcal{I}_{k}$$ operating on masks, which (under some conditions) are inverse to each other. Denote by $${\ell _{a}^{k}}$$ the set of all masks satisfying the conditions (1) of Lemma 4.8 and by $${\ell _{b}^{k}}$$ the set of all masks satisfying the condition of Lemma 4.9. Then it is easy to show that \begin{align} \partial_{k}: \quad{\ell_{a}^{k}} \to{\ell_{b}^{k}}\qquad\qquad \mathcal{I}_{k}: \quad{\ell_{b}^{k}} \to{\ell_{a}^{k}} \end{align} (4.11) and that \begin{align} \partial_{k}(\mathcal{I}_{k} \mathbf{B})=\mathbf{B} \quad \textrm{and} \quad \mathcal{I}_{k}(\partial_{k} \mathbf{A})=\mathbf{A}. \end{align} (4.12) This shows that the condition of Lemma 4.9 on a mask B allows us to define a mask $$\mathbf{A}=\mathcal{I}_{k} \mathbf{B}$$ such that ∂kA = B. This answers question (1). Still we need to deal with question (2). Remark 4.11 It follows from Lemmas 4.8 and 4.9 that the existence of ∂kA and $$\mathcal{I}_{k} \mathbf{B}$$ depends only on algebraic conditions. Yet this is not sufficient to define a procedure for changing the mask of a VSS in order to get a mask associated with a smoother VSS.
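The inverse relation (4.12) can be verified symbolically. Below is a sketch of our own for p = 2, k = 1, with Laurent polynomials stored as exponent–coefficient dictionaries and a toy mask symbol B of our own choosing (satisfying $$\mathbf{B}_{12}^{\ast }(1)=0$$, as Lemma 4.9 requires):

```python
from fractions import Fraction as F

def mul(p, q):  # product of Laurent polynomials {exponent: coefficient}
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] = out.get(i + j, F(0)) + a * b
    return {k: v for k, v in out.items() if v}

def div(p, d):  # exact division of Laurent polynomials (assumes divisibility)
    p, q = dict(p), {}
    dl = min(d)
    while p:
        pl = min(p)
        t = p[pl] / d[dl]
        q[pl - dl] = t
        for j, b in d.items():
            p[pl - dl + j] = p.get(pl - dl + j, F(0)) - t * b
        p = {k: v for k, v in p.items() if v}
    return q

HALF = {0: F(1, 2)}
ZM1_P1 = {-1: F(1), 0: F(1)}     # z^{-1} + 1
ZM1_M1 = {-1: F(1), 0: F(-1)}    # z^{-1} - 1
ZM2_M1 = {-2: F(1), 0: F(-1)}    # z^{-2} - 1

def I1(B):   # eq. (4.10) with p = 2, k = 1
    (B11, B12), (B21, B22) = B
    return ((mul(HALF, mul(ZM1_P1, B11)), mul(HALF, div(B12, ZM1_M1))),
            (mul(HALF, mul(ZM2_M1, B21)), mul(HALF, B22)))

def D1(A):   # eq. (4.7) with p = 2, k = 1
    (A11, A12), (A21, A22) = A
    TWO = {0: F(2)}
    return ((mul(TWO, div(A11, ZM1_P1)), mul(TWO, mul(ZM1_M1, A12))),
            (mul(TWO, div(A21, ZM2_M1)), mul(TWO, A22)))

# Toy symbol with B12*(1) = 0 (here B12 = z^{-1} - 1):
B = (({-1: F(1, 2), 0: F(1), 1: F(1, 2)}, dict(ZM1_M1)),
     ({0: F(1)}, {0: F(1)}))
A = I1(B)
assert D1(A) == B   # eq. (4.12): d_k(I_k B) = B
```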
Even if $$\mathcal{I}_{k}\mathbf{B}$$ exists for some k, the application of Theorem 4.4, in view of Lemma 4.8, to $$\mathbf{A}=\mathcal{I}_{k}\mathbf{B}$$ is based on the dimension of $$\mathscr{E}_{\mathbf{A}}$$, which is not necessarily k. But if $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$, we can conclude from Theorem 4.4 that SA has smoothness increased by 1 compared to the smoothness of SB. In the next section we show that if B is associated with a convergent VSS and $$\dim \mathscr{E}_{\mathbf{B}}=k$$, then there exists a canonical transformation $$\overline{R}$$ such that $$\overline{\mathbf{B}}=\overline{R}^{-1}\mathbf{B} \overline{R}$$ satisfies the algebraic conditions of Lemma 4.9 and $$\mathscr{E}_{\mathcal{I}_{k}\overline{\mathbf{B}}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$. Therefore by Theorem 4.4, if SB is Cℓ, then $$S_{\mathcal{I}_{k}\overline{\mathbf{B}}}$$ is Cℓ+1.

### 4.3. The canonical transformations to the standard basis

Let B be a mask of a convergent VSS SB and let $$k=\dim \mathscr{E}_{\mathbf{B}}$$. We define a new mask $$\overline{\mathbf{B}}$$ such that \begin{align} \mathscr{E}_{\overline{\mathbf{B}}}=\operatorname{span}\{e_{1},\ldots,e_{k}\}, \, \overline{\mathbf{B}} \in{\ell^{k}_{b}}\ \textrm{and} \ \mathscr{E}_{\mathcal{I}_{k}\overline{\mathbf{B}}}=\operatorname{span}\{e_{1},\ldots,e_{k}\}. \end{align} (4.13) This is achieved by considering the matrix $$M_{\mathbf{B}}=\tfrac{1}{2}\left (B^{0}+B^{1}\right )$$. First we state a result of importance to our analysis, which follows from Theorem 2.2 in the study by Cohen et al. (1996) and from its proof. Theorem 4.12 Let B be a mask of a convergent VSS. A basis of $$\mathscr{E}_{\mathbf{B}}$$ is also a basis of the eigenspace of $$M_{\mathbf{B}}=\tfrac{1}{2}\left (B^{0}+B^{1}\right )$$ corresponding to the eigenvalue 1. Moreover $$\lim _{n \to \infty }M_{\mathbf{B}}^{n}$$ exists.
A direct consequence of the last theorem, concluded from the existence of $$\lim _{n \to \infty }M_{\mathbf{B}}^{n}$$, is as follows: Corollary 4.13 Let B be a mask associated with a convergent VSS. Then the algebraic multiplicity of the eigenvalue 1 of MB equals its geometric multiplicity, and all its other eigenvalues have modulus less than 1. In particular, since $$M_{\mathbf{B}}=\frac{1}{2}\mathbf{B}^{\ast }(1)$$, Theorem 4.12 implies that if SB is a convergent VSS, then $$\mathscr{E}_{\mathbf{B}}$$ is the eigenspace of B*(1) w.r.t. the eigenvalue 2. We proceed to define, from a mask B associated with a convergent VSS, a new mask $$\overline{\mathbf{B}}$$ satisfying equation (4.13). Let B be a mask associated with a convergent VSS and let $$\mathcal{V}=\{v_{1},\ldots ,v_{k}\}$$ be a basis of $$\mathscr{E}_{\mathbf{B}}$$ (and therefore also a basis of the eigenspace w.r.t. 1 of MB). We define a real matrix \begin{align} \overline{R}=\left[v_{1}, \ldots, v_{k} | Q\right], \end{align} (4.14) where the columns of Q span the invariant space of MB corresponding to the eigenvalues of MB different from 1. Q completes $$\mathcal{V}$$ to a basis of $$\mathbb{R}^{p}$$ and $$\overline{R}$$ is an invertible matrix. We call $$\overline{R}$$ defined by equation (4.14) a canonical transformation. There are many canonical transformations, since Q is not unique. Any canonical transformation $$\overline{R}$$ can be used to increase the smoothness of a vector scheme (see also Remark 4.5). Define a modified mask $$\overline{\mathbf{B}}$$ by \begin{align} \overline{B}_{i}=\overline{R}^{-1}B_{i}\overline{R}, \quad \textrm{for } i \in \mathbb{Z}. \end{align} (4.15) Then by equation (4.14) and Theorem 4.12 we have that $$\mathscr{E}_{\overline{\mathbf{B}}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$. This proves the first claim in equation (4.13). Also by Lemma 4.1, $$S_{\overline{\mathbf{B}}}$$ is convergent and has the same smoothness as SB.
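For a concrete 2 × 2 illustration (a toy matrix of our own, not a mask from the paper), suppose $$M_{\mathbf{B}}=\begin{pmatrix}1 & 1\\ 0 & 1/2\end{pmatrix}$$. The eigenvector (1, 0)ᵀ to the eigenvalue 1 and the eigenvector (2, −1)ᵀ to the eigenvalue 1/2 yield a canonical transformation $$\overline{R}$$ as in (4.14), and conjugation brings MB to the block-diagonal form that drives the construction:

```python
from fractions import Fraction as F

def matmul(X, Y):  # product of 2x2 matrices over the rationals
    return [[sum(X[r][k] * Y[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def inv2(X):  # inverse of a 2x2 matrix over the rationals
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det, X[0][0] / det]]

# Toy M_B with eigenvalue 1 (eigenvector v1) and eigenvalue 1/2, which has
# modulus < 1 as Corollary 4.13 requires for a convergent VSS:
M_B = [[F(1), F(1)], [F(0), F(1, 2)]]
v1, q = [F(1), F(0)], [F(2), F(-1)]
R_bar = [[v1[0], q[0]], [v1[1], q[1]]]           # eq. (4.14): [v1 | Q]
M_bar = matmul(matmul(inv2(R_bar), M_B), R_bar)  # conjugated M_B
assert M_bar == [[F(1), F(0)], [F(0), F(1, 2)]]  # block-diagonal: I_k and J
```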
Furthermore, by equation (4.14), \begin{align} M_{\overline{\mathbf{B}}}=\tfrac{1}{2}\left(\overline{B}^{0}+\overline{B}^{1}\right)=\overline{R}^{-1}M_{\mathbf{B}}\overline{R}=\begin{pmatrix} I_{k} & 0 \\ 0 & J \end{pmatrix} \end{align} (4.16) is the Jordan form of MB. By Corollary 4.13, J has eigenvalues with modulus less than 1. Transformations $$\overline{R}$$ which result in representations of MB similar to the one in equation (4.16) have already been considered, e.g. in the studies by Cohen et al. (1996) and Sauer (2002). The special structure of $$M_{\overline{\mathbf{B}}}$$ is the key to our smoothing procedure. The next theorem follows from equation (4.16) and proves the remaining claims of equation (4.13). Theorem 4.14 Let SB be a convergent VSS and let $$k=\dim \mathscr{E}_{\mathbf{B}}$$. Define $$\overline{\mathbf{B}}$$ by equation (4.15) with $$\overline{R}$$ a canonical transformation. Then $$\overline{\mathbf{B}}$$ has the following properties: (1) $$\overline{\mathbf{B}} \in{\ell ^{k}_{b}}$$, (2) $$\mathscr{E}_{\mathcal{I}_{k}\overline{\mathbf{B}}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$. Proof. We start by proving (1). Since \begin{align} \overline{\mathbf{B}}^{\ast}(1)=\overline{B}^{0}+\overline{B}^{1}=2M_{\overline{\mathbf{B}}}=2\begin{pmatrix} I_{k} & 0 \\ 0 & J \end{pmatrix}\!, \end{align} (4.17) it follows that $$\overline{\mathbf{B}}^{\ast }_{12}(1)=0$$. Thus by Lemma 4.9, $$\overline{\mathbf{B}} \in{\ell ^{k}_{b}}$$ and therefore $$\mathcal{I}_{k}\overline{\mathbf{B}}$$ exists. In order to prove (2), we use Lemma 4.6 and show that $$\, \mathscr{E}_{\mathcal{I}_{k}\overline{\mathbf{B}}}\, =\, \left \{v\in \mathbb{R}^{p}\, :\, \left (\mathcal{I}_{k}\overline{\mathbf{B}}\right )^{\ast } (1)v\, =\, 2v\quad \textrm{and}\right.\\ \left.\left (\mathcal{I}_{k}\overline{\mathbf{B}}\right )^{\ast }(-1)v\, =\, 0 \right \}$$ is spanned by e1, …, ek. 
Indeed by equation (4.17) it follows that $$\overline{\mathbf{B}}^{\ast }_{11}(1)=2I_{k} \;\textrm{and}\; \overline{\mathbf{B}}^{\ast }_{22}(1)=2J$$. Since by equation (4.17) $$\overline{\mathbf{B}}^{\ast }_{12}(1)=0$$, there exists a symbol C*(z) such that $$\overline{\mathbf{B}}^{\ast }_{12}(z)=(z^{-1}-1)\mathbf{C}^{\ast }(z)$$, and therefore equation (4.10) implies the block form: \begin{align} \left(\mathcal{I}_{k}\overline{\mathbf{B}}\right)^{\ast}(1)=\begin{pmatrix} 2I_{k} & \tfrac{1}{2}\mathbf{C}^{\ast}(1) \\[0.2cm] 0 & J \end{pmatrix}, \quad \left(\mathcal{I}_{k}\overline{\mathbf{B}}\right)^{\ast}(-1)=\begin{pmatrix}0 & \tfrac{1}{2}\mathbf{C}^{\ast}(-1) \\[0.2cm] 0 & \tfrac{1}{2}\overline{\mathbf{B}}^{\ast}_{22}(-1) \end{pmatrix}\!. \end{align} (4.18) Equation (4.18), in view of Lemma 4.6, implies that $$\operatorname{span}\{e_{1},\ldots ,e_{k}\}=\mathscr{E}_{\mathcal{I}_{k}\overline{\mathbf{B}}}$$, since the eigenspace of $$\left (\mathcal{I}_{k}\overline{\mathbf{B}}\right )^{\ast }(1)$$ w.r.t. the eigenvalue 2 is exactly span{e1, …, ek} (the matrix J only contributes eigenvalues with modulus less than 1), and these vectors are in the kernel of $$\left (\mathcal{I}_{k}\overline{\mathbf{B}}\right )^{\ast }(-1)$$. Summarizing the above results, we arrive at the following: Corollary 4.15 Let B be a mask of a convergent VSS, let $$k=\dim \mathscr{E}_{\mathbf{B}}$$ and let $$\overline{\mathbf{B}}$$ be as in Theorem 4.14. Then $$\mathcal{I}_{k}\overline{\mathbf{B}}$$ exists and $$\mathscr{E}_{\mathcal{I}_{k}\overline{\mathbf{B}}}= \mathscr{E}_{\overline{\mathbf{B}}}=\operatorname{span}\{e_{1},\ldots,e_{k}\}.$$ 4.4. A procedure for increasing the smoothness Theorem 4.14 allows us to define the following procedure which generates VSSs of higher smoothness from given convergent VSSs: Procedure 4.16 The input data is a mask B associated with a Cℓ VSS, $$\ell \geqslant 0$$, and the output is a mask A associated with a Cℓ+1 VSS. 
1. Choose a basis $$\mathcal{V}$$ of $$\mathscr{E}_{\mathbf{B}}$$ and define $$\overline{R}$$, a canonical transformation, as in equation (4.14). 2. Define $$\overline{\mathbf{B}}=\overline{R}^{-1}\mathbf{B} \overline{R}$$. 3. Define $$k=\operatorname{dim}(\mathscr{E}_{\mathbf{B}})$$. 4. Define $$\overline{\mathbf{A}}=\mathcal{I}_{k} \overline{\mathbf{B}}$$ as in equation (4.10). 5. Define $$\mathbf{A}=\overline{R}\,\overline{\mathbf{A}}\, \overline{R}^{-1}$$. A schematic representation of Procedure 4.16 is given in Fig. 1. Remark 4.17 Step 5 in Procedure 4.16 is not essential. The scheme $$S_{\overline{\mathbf{A}}}$$ is already Cℓ+1. Step 5 guarantees that $$\mathscr{E}_{\mathbf{A}}=\mathscr{E}_{\mathbf{B}}$$. In both cases, to apply another smoothing step and obtain a Cℓ+2 VSS, a new canonical transformation has to be computed. Remark 4.18 Procedure 4.16 depends on the choice of a canonical transformation $$\overline{R}$$, which is built from a basis of $$\mathscr{E}_{\mathbf{B}}$$ and a matrix Q as in equation (4.14). Every choice of a canonical transformation gives rise to a different vector scheme. Our smoothing procedure is ‘independent’ of this choice in the sense that every $$\overline{R}$$ increases the smoothness by one. Fig. 1. A schematic representation of Procedure 4.16. In the notation of Procedure 4.16, we define the smoothing operator $$\mathcal{I}_{k}$$ applied to a mask B of a convergent VSS as \begin{align} \mathcal{I}_{k}\mathbf{B}=\overline{R}\left(\mathcal{I}_{k}\overline{\mathbf{B}}\right)\overline{R}^{-1}. \end{align} (4.19) This is a generalization of the smoothing operator in the case of scalar subdivision schemes. An important property of Procedure 4.16, which is easily seen from equation (4.10), is as follows: Corollary 4.19 Assume that B and A are masks as in Procedure 4.16. 
If the support of B is contained in [−N1, N2] with $$N_{1},N_{2} \in \mathbb{N}$$, then the support of A is contained in [−N1 − 2, N2]. Therefore Procedure 4.16 increases the support length by at most 2, independently of the dimension of the mask. Recall that in the scalar case the support size is increased by 1. An interesting observation follows from Procedure 4.16 and equations (4.17) and (4.18): Corollary 4.20 Assume that A, B are masks as in Procedure 4.16. Then A*(1) and B*(1) share the eigenvalue 2 and the corresponding eigenspace. To each eigenvalue $$\lambda \neq 2$$ of B*(1) there corresponds the eigenvalue $$\frac{1}{2}\lambda$$ of A*(1). Note that a similar result to that in Corollary 4.20 is in general not true for B*(−1) and A*(−1). However, Example 4.21 shows that it can well be the case. Example 4.21 (Double-knot cubic spline subdivision) We consider the VSS with symbol \begin{align} \mathbf{B}^{\ast}(z)=\frac{1}{8}\begin{pmatrix} 2+6z+z^{2} & 2z+5z^{2} \\ 5+2z & 1+6z+2z^{2} \end{pmatrix}\!. \end{align} (4.20) It is known that this scheme produces C1 limit curves (see e.g. Dyn & Levin, 2002). We apply Procedure 4.16 to B to obtain a VSS SA of regularity C2: First we find a basis of $$\mathscr{E}_{\mathbf{B}}$$ in order to compute a canonical transformation $$\overline{R}$$. 
The matrices B*(1) and B*(−1) are given by $$\mathbf{B}^{\ast}(1)=\frac{1}{8}\begin{pmatrix} 9 & 7 \\ 7 & 9 \end{pmatrix}, \quad \mathbf{B}^{\ast}(-1)=\frac{1}{8}\Big(\begin{array}{r r} -3 & 3 \\ 3 & -3 \end{array}\Big)$$ and have the following eigenvalues and eigenvectors \begin{align} & \textrm{For } \mathbf{B}^{\ast}(1): \quad \textrm{eigenvalues}: 2, \tfrac{1}{4},\quad \text{eigenvectors: }\Big(\begin{array}{r r} 1 \\ 1 \end{array}\Big), \Big(\begin{array}{r r} -1 \\ 1 \end{array}\Big), \text{ resp.}\\ \nonumber & \textrm{For } \mathbf{B}^{\ast}(-1): \quad \textrm{eigenvalues}: 0, -\tfrac{3}{4}, \quad \text{eigenvectors: }\Big(\begin{array}{r r} 1 \\ 1 \end{array}\Big), \Big(\begin{array}{r r} -1 \\ 1 \end{array}\Big), \text{ resp. } \end{align} (4.21) Therefore $$\mathscr{E}_{\mathbf{B}}$$ is spanned by $${1 \choose 1}$$. The transformation $$\overline{R}$$ is determined by the eigenvectors of B*(1): $$\overline{R}=\Big(\begin{array}{r r} 1 & -1 \\ 1 & 1 \end{array}\Big), \quad \overline{R}^{-1}=\tfrac{1}{2}\Big(\begin{array}{r r} 1 & 1 \\ -1 & 1 \end{array}\Big).$$ We continue by computing $$\overline{\mathbf{B}}=\overline{R}^{-1}\mathbf{B} \overline{R}$$ from the symbol of B in equation (4.20), and get $$\overline{\mathbf{B}}^{\ast}(z)=\frac{1}{8}\begin{pmatrix} 4(1+z)^{2} & 3(z^{2}-1) \\ -2(z^{2}-1) & -1+4z-z^{2} \end{pmatrix}\!.$$ From Step 1 we see that $$k=\dim \mathscr{E}_{\mathbf{B}}=1$$. We compute $$\overline{\mathbf{A}}=\mathcal{I}_{1}\overline{\mathbf{B}}$$ by computing its symbol: $$\overline{\mathbf{A}}^{\ast}(z)=\frac{1}{16}\begin{pmatrix} 4z^{-1}(1+z)^{3} & -3z^{-1}(z+1) \\ 2z^{-2}(z^{2}-1)^{2} & -1+4z-z^{2} \end{pmatrix}\!.$$ In the final step we transform back to the original basis, $$\mathbf{A}=\overline{R}\,\overline{\mathbf{A}}\,\overline{R}^{-1}$$, by deriving A*(z). 
\begin{align} \mathbf{A}^{\ast}(z)=\frac{1}{32}z^{-2}\begin{pmatrix} z^{4}+16z^{3}+18z^{2}+7z-2 & \enspace 3z^{4}+8z^{3}+14z^{2}+z-2 \\[0.3cm] 7z^{4}+8z^{3}+12z^{2}+7z+2 & \enspace 5z^{4}+16z^{3}+4z^{2}+z+2 \end{pmatrix}\!. \end{align} (4.22) It follows from the analysis preceding Procedure 4.16 that SA is C2. To verify Remark 4.17 we show that $$\mathscr{E}_{\mathbf{A}}$$ has the same basis as $$\mathscr{E}_{\mathbf{B}}$$. We compute $$\mathbf{A}^{\ast}(1)=\frac{1}{8}\Big(\begin{array}{r r} 10 & 6 \\[0.1cm] 9 & 7 \end{array}\Big), \quad \mathbf{A}^{\ast}(-1)=\frac{1}{16}\Big(\begin{array}{r r} -3 & 3 \\ 3 & -3 \end{array}\Big)$$ and their eigenvalues and eigenvectors: \begin{align} &\textrm{For } \mathbf{A}^{\ast}(1): \quad \textrm{eigenvalues}: 2, \tfrac{1}{8},\quad \text{eigenvectors: }\Big(\begin{array}{r r} 1 \\ 1 \end{array}\Big), \Big(\begin{array}{r r} -2 \\ 3 \end{array}\Big), \text{ resp. }\\ \nonumber &\textrm{For } \mathbf{A}^{\ast}(-1): \quad \textrm{eigenvalues}: 0, -\tfrac{3}{8}, \quad \text{eigenvectors: }\Big(\begin{array}{r r} 1 \\ 1 \end{array}\Big), \Big(\begin{array}{r r} -1 \\ 1 \end{array}\Big),\text{ resp. } \end{align} (4.23) Therefore by (4.21), (4.23) and Lemma 4.6, $$\mathscr{E}_{\mathbf{A}}$$ and $$\mathscr{E}_{\mathbf{B}}$$ are spanned by $${1 \choose 1}$$. Note that the eigenvectors of MA and MB corresponding to the eigenvalues of modulus less than 1 are different. Thus in order to generate a C3 scheme from SA, a new canonical transformation has to be computed. Also, comparing the eigenvalues of A*(1) and B*(1) we see that Corollary 4.20 is satisfied. In fact in this example, also the eigenvalues of A*(−1) and B*(−1) have the same property. It is easy to see from equation (4.22) that the support length of the mask A is 4, and from equation (4.20) that the support length of B is 2, in accordance with Corollary 4.19. 5. 
Increasing the smoothness of Hermite(2) subdivision schemes In this section we describe a procedure for increasing the smoothness of HSSs refining function and first derivative values, based on the procedure for the vector case described in Section 4. We consider HSSs which operate on data $$\mathbf{c} \in \ell \left (\mathbb{R}^{2}\right )$$, using the notation of Section 2. The reason we consider only function and first derivative values (and not higher derivatives), i.e. p = 2, is the algebraic conditions described in Section 5.1, on which our method is based. While it is rather easy to derive the algebraic conditions equivalent to the spectral condition (Lemma 5.1) in the case p = 2, we believe that the derivation of such conditions for general p and the resulting Taylor conditions analogous to Definition 5.3 requires a paper of its own. If such conditions were available, however, we are confident that our method can be extended to the case of general p. 5.1. Algebraic conditions As in the vector case, HSS(2)s use matrix-valued masks $$\mathbf{A}=\{A_{i} \in \mathbb{R}^{2\times 2}: i\in \mathbb{Z} \}$$ and subdivision operators SA as defined in equation (2.2). The input data $$\mathbf{c}^{0}\in \ell \left (\mathbb{R}^{2}\right )$$ are refined via $$D^{n}\mathbf{c}^{n}=S^{n}_{\mathbf{A}}\mathbf{c}^{0}$$, where D is the dilation matrix $$D=\begin{pmatrix} 1 & 0 \\ 0 & \frac{1}{2} \end{pmatrix}\!.$$ An HSS(2) is called interpolatory if its mask A satisfies $$A_{0}=D$$ and $$A_{2i}=0$$ for all $$i \in \mathbb{Z} \backslash \{0\}$$. We always assume that an HSS(2) satisfies the spectral condition (Dubuc & Merrien, 2009). 
This condition requires that there is $$\varphi \in \mathbb{R}$$ such that both the constant sequence $${\mathbf{k}}=\left \{\left ({ 1 \atop 0 }\right ): i \in \mathbb{Z}\right \}$$ and the linear sequence $${ {\boldsymbol{\ell }}}=\left \{\left ({i +\varphi \atop 1 }\right ): i \in \mathbb{Z}\right \}$$ obey the rule \begin{align} S_{\mathbf{A}}{\mathbf{k}}={\mathbf{k}}, \quad S_{\mathbf{A}}{{\boldsymbol{\ell}}}=\tfrac{1}{2}{{{\boldsymbol{\ell}}}}. \end{align} (5.1) The spectral condition is crucial for the convergence and smoothness analysis of linear HSS(2)s. If the HSS(2) is interpolatory we can choose φ = 0. We now characterize the spectral condition in terms of the symbol of the mask A. We introduce the notation \begin{align} \mathbf{A}=\begin{pmatrix}\boldsymbol{\alpha}_{11} & \boldsymbol{\alpha}_{12} \\ \boldsymbol{\alpha}_{21} & \boldsymbol{\alpha}_{22}\end{pmatrix}\!, \end{align} (5.2) where $$\boldsymbol{\alpha }_{ij} \in \ell (\mathbb{R})$$ for i, j ∈ {1, 2}. It is easy to verify that the spectral condition in equation (5.1) is equivalent to the algebraic conditions in the next lemma. Lemma 5.1 A mask A satisfies the spectral condition given by equation (5.1) with $$\varphi \in \mathbb{R}$$ if and only if its symbol A*(z) satisfies (1) $$\boldsymbol{\alpha }^{\ast }_{11}(1)=2,\: \boldsymbol{\alpha }^{\ast }_{11}(-1)=0$$; (2) $$\boldsymbol{\alpha }^{\ast }_{21}(1)=0,\: \boldsymbol{\alpha }^{\ast }_{21}(-1)=0$$; (3) $${\boldsymbol{\alpha }^{\ast }_{11}}^{\prime }(1)-2\boldsymbol{\alpha }^{\ast }_{12}(1)=2\varphi ,\:{\boldsymbol{\alpha }^{\ast }_{11}}^{\prime }(-1)+2\boldsymbol{\alpha }^{\ast }_{12}(-1)=0$$; (4) $${\boldsymbol{\alpha }^{\ast }_{21}}^{\prime }(1)-2\boldsymbol{\alpha }^{\ast }_{22}(1)=-2,\:{\boldsymbol{\alpha }^{\ast }_{21}}^{\prime }(-1)+2\boldsymbol{\alpha }^{\ast }_{22}(-1)=0.$$ Parts (1) and (2) relate to the reproduction of constants, whereas parts (3) and (4) are related to the reproduction of linear functions. 
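The conditions of Lemma 5.1 can be checked mechanically for any concrete mask. The sketch below (plain Python, exact rational arithmetic) does so for the classical interpolatory cubic Hermite scheme — an external example used here for illustration only, not one of this paper's examples — under the symbol convention $$\mathbf{A}^{\ast}(z)=\sum_{i}A_{i}z^{i}$$.

```python
from fractions import Fraction as F

# Laurent polynomials represented as {exponent: coefficient} dicts.
def ev(p, z):           # evaluate p at z
    return sum(c * z**e for e, c in p.items())

def dev(p, z):          # evaluate the derivative p' at z
    return sum(c * e * z**(e - 1) for e, c in p.items())

# Symbol entries of the interpolatory cubic Hermite mask (assumed
# example), supported on {-1, 0, 1}:
a11 = {-1: F(1, 2), 0: F(1), 1: F(1, 2)}
a12 = {-1: F(-1, 8), 1: F(1, 8)}
a21 = {-1: F(3, 4), 1: F(-3, 4)}
a22 = {-1: F(-1, 8), 0: F(1, 2), 1: F(-1, 8)}

one, mone = F(1), F(-1)

# Lemma 5.1, conditions (1)-(4), with phi = 0 (interpolatory case):
assert ev(a11, one) == 2 and ev(a11, mone) == 0            # (1)
assert ev(a21, one) == 0 and ev(a21, mone) == 0            # (2)
assert dev(a11, one) - 2 * ev(a12, one) == 0               # (3), 2*phi = 0
assert dev(a11, mone) + 2 * ev(a12, mone) == 0
assert dev(a21, one) - 2 * ev(a22, one) == -2              # (4)
assert dev(a21, mone) + 2 * ev(a22, mone) == 0
```

Using `Fraction` arguments for $$z=\pm 1$$ keeps the negative integer powers exact.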
Next we cite results on HCℓ smoothness of HSS(2)s. Consider the Taylor operator T, first introduced in the study by Merrien & Sauer (2012): $$T=\Big(\begin{array}{@{}r r@{}} \varDelta & -1 \\ 0 & 1 \end{array}\Big).$$ The Taylor operator is a natural analogue of the operator $$\varDelta _{k}$$ for VSSs and the forward difference operator $$\varDelta$$ in scalar subdivision. We have the following result analogous to equation (4.5): Lemma 5.2 (Merrien & Sauer, 2012) If the HSS(2) associated with a mask A satisfies the spectral condition of equation (5.1), then there exists a matrix mask of dimension 2, ∂tA, such that \begin{align} TS_{\mathbf{A}}=\tfrac{1}{2}S_{\partial_{t}\mathbf{A}}T. \end{align} (5.3) The mask ∂tA determines a VSS called the Taylor scheme associated with A. 5.2. Properties of the Taylor scheme In order to increase the smoothness of an HSS(2), the obvious idea is to pass to its Taylor scheme defined in equation (5.3), increase the smoothness of this VSS by Procedure 4.16 and then use the resulting VSS as the Taylor scheme of a new HSS(2). The first question which arises in this process is if the last step is always possible, i.e. if the smoothing operator $$\mathcal{I}_{k}$$ of equation (4.19) maps Taylor schemes to Taylor schemes. To answer this question depicted in Fig. 2, we state algebraic conditions on a mask B of a VSS guaranteeing that SB is a Taylor scheme. Definition 5.3 The following algebraic conditions on a mask B are called Taylor conditions: (1) $$\boldsymbol{\beta }_{12}^{\ast }(1)=0, \boldsymbol{\beta }_{12}^{\ast }(-1)=0$$; (2) $$\boldsymbol{\beta }_{22}^{\ast }(1)=2, \boldsymbol{\beta }_{22}^{\ast }(-1)=0$$; (3) $$\boldsymbol{\beta }_{11}^{\ast }(1)+\boldsymbol{\beta }_{21}^{\ast }(1)=2.$$ (Here we use the notation of equation (5.2).) Fig. 2. A schematic representation of the idea for smoothing HSS(2)s. 
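Lemma 5.2 can be verified mechanically at the symbol level. Under the convention $$\mathbf{A}^{\ast}(z)=\sum_{i}A_{i}z^{i}$$, equation (5.3) is equivalent to the symbol identity $$T^{\ast}(z)\mathbf{A}^{\ast}(z)=\tfrac{1}{2}(\partial_{t}\mathbf{A})^{\ast}(z)T^{\ast}(z^{2})$$ with $$T^{\ast}(z)=\big(\begin{smallmatrix}z^{-1}-1 & -1\\ 0 & 1\end{smallmatrix}\big)$$. The sketch below checks this identity for the interpolatory cubic Hermite mask (an assumed external example) together with its Taylor mask, which we computed by hand from equations (5.4)–(5.7) below; both masks are listed explicitly in the code.

```python
from fractions import Fraction as F

# Laurent polynomials as {exponent: coefficient}; 2x2 symbol matrices
# as nested lists of such dicts.
def padd(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, F(0)) + c
    return {e: c for e, c in r.items() if c}

def pmul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, F(0)) + c1 * c2
    return {e: c for e, c in r.items() if c}

def mmul(X, Y):
    return [[padd(pmul(X[i][0], Y[0][j]), pmul(X[i][1], Y[1][j]))
             for j in range(2)] for i in range(2)]

def scale(X, s):
    return [[{e: s * c for e, c in X[i][j].items()} for j in range(2)]
            for i in range(2)]

# Symbol of the interpolatory cubic Hermite mask A (assumed example):
A = [[{-1: F(1, 2), 0: F(1), 1: F(1, 2)}, {-1: F(-1, 8), 1: F(1, 8)}],
     [{-1: F(3, 4), 1: F(-3, 4)}, {-1: F(-1, 8), 0: F(1, 2), 1: F(-1, 8)}]]

# Taylor mask dtA obtained by hand from equations (5.4)-(5.7):
dtA = [[{0: F(1), 1: F(-1, 2)},
        {-2: F(-1, 4), -1: F(1, 2), 0: F(1, 4), 1: F(-1, 2)}],
       [{1: F(3, 2)}, {-1: F(-1, 4), 0: F(1), 1: F(5, 4)}]]

def Tstar(k):   # symbol of the Taylor operator with argument z^k
    return [[{-k: F(1), 0: F(-1)}, {0: F(-1)}], [{}, {0: F(1)}]]

# Symbol form of the operator identity T S_A = (1/2) S_{dtA} T:
lhs = mmul(Tstar(1), A)
rhs = scale(mmul(dtA, Tstar(2)), F(1, 2))
assert lhs == rhs
```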
We prove in Lemma 5.5 that the mask ∂tA obtained via equation (5.3) satisfies the Taylor conditions. This justifies the name Taylor conditions. Remark 5.4 It is easy to verify that conditions (1) and (2) of Definition 5.3 are equivalent to $$e_{2} \in \mathscr{E}_{\mathbf{B}}$$. The next lemmas are concerned with the connection between masks satisfying the spectral condition of equation (5.1) and masks satisfying the Taylor conditions of Definition 5.3. Lemma 5.5 Let A be a mask satisfying the spectral condition. Then we can define a mask ∂tA such that equation (5.3) is satisfied, and ∂tA satisfies the Taylor conditions. Note that the existence of ∂tA in Lemma 5.5 is a result of the study by Merrien & Sauer (2012) (see Lemma 5.2). We prove it here because its proof is used in our analysis. Proof. By solving equation (5.3) in terms of symbols for ∂tA, it is easy to see that \begin{align} (\partial_{t}\mathbf{A})^{\ast}_{11}(z)=2\left(\frac{\boldsymbol{\alpha}_{11}^{\ast}(z)}{z^{-1}+1}-\frac{\boldsymbol{\alpha}_{21}^{\ast}(z)}{z^{-2}-1}\right)\!, \end{align} (5.4) \begin{align} (\partial_{t}\mathbf{A})^{\ast}_{12}(z)=2\left(\left(z^{-1}-1\right)\boldsymbol{\alpha}_{12}^{\ast}(z)-\boldsymbol{\alpha}_{22}^{\ast}(z)+\frac{\boldsymbol{\alpha}_{11}^{\ast}(z)}{z^{-1}+1}-\frac{\boldsymbol{\alpha}_{21}^{\ast}(z)}{z^{-2}-1}\right)\!, \end{align} (5.5) \begin{align} (\partial_{t}\mathbf{A})^{\ast}_{21}(z)=2\frac{\boldsymbol{\alpha}_{21}^{\ast}(z)}{z^{-2}-1}, \end{align} (5.6) \begin{align} (\partial_{t}\mathbf{A})^{\ast}_{22}(z)=2\left(\boldsymbol{\alpha}_{22}^{\ast}(z)+ \frac{\boldsymbol{\alpha}_{21}^{\ast}(z)}{z^{-2}-1}\right)\!. \end{align} (5.7) By the algebraic conditions of Lemma 5.1, (∂tA)*(z) defined by equations (5.4–5.7) is a Laurent polynomial. 
Note that to define ∂tA we only need the first two conditions of Lemma 5.1, those equivalent to the reproduction of constants. We now show that ∂tA satisfies the Taylor conditions. Multiplying equation (5.5) with the factor $$\left(z^{-2}-1\right)$$, differentiating with respect to z, substituting z = 1 and z = −1, and applying Lemma 5.1, we obtain: \begin{align*} (\partial_{t} \mathbf{A})^{\ast}_{12}(1)&=-2\boldsymbol{\alpha}_{22}^{\ast}(1)+\boldsymbol{\alpha}_{11}^{\ast}(1)+{\boldsymbol{\alpha}_{21}^{\ast{\prime}}}(1)=0, \\ (\partial_{t} \mathbf{A})^{\ast}_{12}(-1)&=-4\boldsymbol{\alpha}_{12}^{\ast}(-1)-2\boldsymbol{\alpha}_{22}^{\ast}(-1)-2{\boldsymbol{\alpha}_{11}^{\ast{\prime}}}(-1)-\boldsymbol{\alpha}_{11}^{\ast}(-1)-{\boldsymbol{\alpha}_{21}^{\ast}}^{\prime}(-1)=0. \end{align*} This proves that part (1) of Definition 5.3 is satisfied. Applying the same procedure to equation (5.7), we obtain \begin{align*} (\partial_{t} \mathbf{A})^{\ast}_{22}(1)&=2\boldsymbol{\alpha}^{\ast}_{22}(1)-{\boldsymbol{\alpha}_{21}^{\ast}}^{\prime}(1)=2,\\ (\partial_{t} \mathbf{A})^{\ast}_{22}(-1)&=2\boldsymbol{\alpha}^{\ast}_{22}(-1)+{\boldsymbol{\alpha}_{21}^{\ast}}^{\prime}(-1)=0. \end{align*} This concludes part (2) of Definition 5.3. Similarly equations (5.4) and (5.6) imply $$\left(\partial_{t} \mathbf{A}\right)^{\ast}_{11}(1)+(\partial_{t} \mathbf{A})^{\ast}_{21}(1)=\left(2+{\boldsymbol{\alpha}_{21}^{\ast}}^{\prime}(1)\right)-{\boldsymbol{\alpha}_{21}^{\ast}}^{\prime}(1)=2,$$ which proves (3) of Definition 5.3. Lemma 5.6 Let B be a mask satisfying the Taylor conditions. Then we can define a mask $$\mathcal{I}_{t}\mathbf{B}$$ such that $$TS_{\mathcal{I}_{t}\mathbf{B}}=\tfrac{1}{2}S_{\mathbf{B}}T$$ is satisfied, and $$\mathcal{I}_{t}\mathbf{B}$$ satisfies the spectral condition. Proof. Suppose that B satisfies the Taylor conditions. We define a mask $$\mathcal{I}_{t}\mathbf{B}$$ satisfying the equation $$TS_{\mathcal{I}_{t}\mathbf{B}}=\tfrac{1}{2}S_{\mathbf{B}}T$$ by writing it in terms of symbols. 
This yields the symbol \begin{align}\nonumber \left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{11}(z)=& \tfrac{1}{2}\left(z^{-1}+1\right)\left(\boldsymbol{\beta}^{\ast}_{11}(z)+\boldsymbol{\beta}^{\ast}_{21}(z)\right)\!,\\ \left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{12}(z)=& \tfrac{1}{2}\Big(\boldsymbol{\beta}^{\ast}_{12}(z)-\boldsymbol{\beta}^{\ast}_{11}(z)-\boldsymbol{\beta}^{\ast}_{21}(z)+\boldsymbol{\beta}^{\ast}_{22}(z)\Big)\Big/\left(z^{-1}-1\right)\!,\\ \nonumber \left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{21}(z)=& \tfrac{1}{2}\boldsymbol{\beta}^{\ast}_{21}(z)\left(z^{-2}-1\right)\!,\\ \nonumber \left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{22}(z)=& \tfrac{1}{2}\left(\boldsymbol{\beta}^{\ast}_{22}(z)-\boldsymbol{\beta}^{\ast}_{21}(z)\right)\!. \end{align} (5.8) It follows from the Taylor conditions that $$\left (\mathcal{I}_{t}\mathbf{B}\right )^{\ast }(z)$$ is a Laurent polynomial, and thus well defined. We continue by showing that $$\mathcal{I}_{t}\mathbf{B}$$ satisfies the spectral condition. It is immediately clear from the definition of $$\mathcal{I}_{t}\mathbf{B}$$ that (1) and (2) of Lemma 5.1 are satisfied. Furthermore, it is easy to see that \begin{align*} {\left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{21}}^{\prime}(1)-2\left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{22}(1)&=-\boldsymbol{\beta}_{21}^{\ast}(1) -\boldsymbol{\beta}_{22}^{\ast}(1)+\boldsymbol{\beta}_{21}^{\ast}(1)=-2,\\{\left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{21}}^{\prime}(-1)+2\left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{22}(-1)&=\boldsymbol{\beta}_{21}^{\ast}(-1) +\boldsymbol{\beta}_{22}^{\ast}(-1)-\boldsymbol{\beta}_{21}^{\ast}(-1)=0, \end{align*} which proves (4) of Lemma 5.1. 
From the definition of $$\mathcal{I}_{t}\mathbf{B}$$ we see that \begin{align*} {(\mathcal{I}_{t}\mathbf{B})^{\ast}_{11}}^{\prime}(-1)+2(\mathcal{I}_{t}\mathbf{B})^{\ast}_{12}(-1)=\:& -\tfrac{1}{2}\left(\boldsymbol{\beta}_{11}^{\ast}(-1)+\boldsymbol{\beta}_{21}^{\ast}(-1)\right)\\ & -\tfrac{1}{2}\left(\boldsymbol{\beta}_{12}^{\ast}(-1)-\boldsymbol{\beta}_{11}^{\ast}(-1)-\boldsymbol{\beta}_{21}^{\ast}(-1)+\boldsymbol{\beta}_{22}^{\ast}(-1)\right)\\=\:&\:0. \end{align*} Furthermore, by multiplying equation (5.8) with the factor $$\left (z^{-1}-1\right )$$, differentiating this equation with respect to z, substituting z = 1 and using the Taylor conditions, we obtain $$\left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{12}(1)=-\tfrac{1}{2}\Big({\boldsymbol{\beta}^{\ast}_{12}}^{\prime}(1)-{\boldsymbol{\beta}^{\ast}_{11}}^{\prime}(1)+{\boldsymbol{\beta}^{\ast}_{22}}^{\prime}(1)-{\boldsymbol{\beta}^{\ast}_{21}}^{\prime}(1) \Big).$$ This implies $${(\mathcal{I}_{t}\mathbf{B})^{\ast}_{11}}^{\prime}(1)-2(\mathcal{I}_{t}\mathbf{B})^{\ast}_{12}(1)=2\varphi,$$ where φ is defined by $$\varphi =\tfrac{1}{2}({\boldsymbol{\beta }^{\ast }_{12}}^{\prime }(1)+{\boldsymbol{\beta }^{\ast }_{22}}^{\prime }(1)-1)$$. This proves property (3) of Lemma 5.1, concluding the proof of the lemma. In Lemmas 5.5 and 5.6 we defined two operators ∂t and $$\mathcal{I}_{t}$$ which are inverse to each other. Denote by ℓs the set of all masks satisfying the spectral condition of equation (5.1) and by ℓt the set of all masks satisfying the Taylor conditions of Definition 5.3. Then \begin{align} \partial_{t}: \quad \ell_{s} \to \ell_{t} \qquad \qquad \mathcal{I}_{t}: \quad \ell_{t} \to \ell_{s} \end{align} (5.9) and it is easy to verify that \begin{align} \partial_{t}(\mathcal{I}_{t} \mathbf{B})=\mathbf{B} \quad \textrm{and} \quad \mathcal{I}_{t}(\partial_{t} \mathbf{A})=\mathbf{A}. \end{align} (5.10) 5.3. 
Relations between converging vector and Hermite(2) schemes In the previous section we derived a one-to-one correspondence between a mask satisfying the spectral condition and a mask satisfying the Taylor conditions. For masks of converging schemes we formulate a result based on Theorem 21 in the study by Merrien & Sauer (2012), and on the results of Section 5.2. Theorem 5.7 A $$C^{\ell }, \ell \geqslant 0,$$ VSS SB satisfying the Taylor conditions, whose limit functions have vanishing first component, gives rise to an HCℓ+1 Hermite(2) scheme SA satisfying the spectral condition. In the next lemma we show that the condition of vanishing first component in the limits generated by SB can be replaced by a condition on the mask B. This also follows from results in the study by Micchelli & Sauer (1998). Lemma 5.8 Let SB be a convergent VSS. Denote by $$\varPsi _{\mathbf{c}}={\psi _{1,\mathbf{c}} \choose \psi _{2,\mathbf{c}}}$$ the limit function generated from the initial data $$\mathbf{c} \in \ell (\mathbb{R}^{2})$$. Then $$\mathscr{E}_{\mathbf{B}}=\operatorname{span}\{e_{2}\} \iff \psi_{1,\mathbf{c}}=0 \textrm{ for all initial data } \mathbf{c}.$$ Proof. First we show that $$\mathscr{E}_{\mathbf{B}}=\operatorname{span}\{e_{2}\}$$ implies ψ1, c = 0 for all c. This follows from the observation that $$\varPsi _{\mathbf{c}}(x)\in \mathscr{E}_{\mathbf{B}}$$ for all $$x \in \mathbb{R}$$. The observation follows from the convergence of SB to a continuous limit and from the basic refinement rules for large k $$\left(S_{\mathbf{B}}^{k+1}\mathbf{c}\right)_{2i}=\sum_{j \in \mathbb{Z}}B_{2j}\left(S_{\mathbf{B}}^{k}\mathbf{c}\right)_{i-j}, \quad \left(S_{\mathbf{B}}^{k+1}\mathbf{c}\right)_{2i+1}=\sum_{j \in \mathbb{Z}}B_{2j+1}\left(S_{\mathbf{B}}^{k}\mathbf{c}\right)_{i-j}, \ \textrm{for}\ i \in \mathbb{Z}.$$ To prove the other direction we use the proof of Theorem 2.2 in the study by Cohen et al. (1996). 
It shows that \begin{align} \lim_{n \to \infty}M_{\mathbf{B}}^{n}=\int_{\mathbb{R}} \varPhi(x) \ \mathrm{d}x, \end{align} (5.11) where MB is defined in Theorem 4.12, and Φ is the limit function generated by SB from the initial data δI2. Here I2 is the identity matrix of dimension 2 and $$\boldsymbol{\delta } \in \ell (\mathbb{R})$$ satisfies δ0 = 1, $$\boldsymbol{\delta }_{i}=0, i \neq 0, i \in \mathbb{Z}$$; equivalently, $${\phi _{1j}(x) \choose \phi _{2j}(x) }$$ is the limit from the initial data δej for j ∈ {1, 2}. Since we assume that ψ1, c = 0 for all initial data c, $$\phi_{11}(x)=\phi_{12}(x)=0 \quad \textrm{for } x \in \mathbb{R}.$$ It follows from equation (5.11) that \begin{align} \lim_{n \to \infty}M_{\mathbf{B}}^{n}=\begin{pmatrix} 0 & 0 \\ \nu & \theta \end{pmatrix}\!, \quad \nu, \theta \in \mathbb{R}. \end{align} (5.12) Assume $$\mathscr{E}_{\mathbf{B}} \neq \operatorname{span}\{e_{2}\}$$. Then $$\mathscr{E}_{\mathbf{B}}=\mathbb{R}^{2}$$, and MB = I2, since by Theorem 4.12 the eigenspace of MB with respect to 1 is exactly $$\mathscr{E}_{\mathbf{B}}$$. Thus $$\lim _{n \to \infty }M_{\mathbf{B}}^{n}=I_{2}$$ in contradiction to equation (5.12). 5.4. Imposing the Taylor conditions Denote by $$\tilde{\ell }_{t} \subsetneqq \ell _{t}$$ the set of masks satisfying B ∈ ℓt and $$\mathscr{E}_{\mathbf{B}}=\operatorname{span}\{e_{2}\}$$. It follows from Theorem 5.7 and Lemma 5.8 that for $$\mathbf{B} \in \tilde{\ell }_{t}$$, a mask of a Cℓ VSS, if also $$\mathcal{I}_{1}\mathbf{B} \in \tilde{\ell }_{t}$$, then $$\mathcal{I}_{1}\mathbf{B}$$ is a mask of a Cℓ+1 VSS which is the Taylor scheme of an HCℓ+2 Hermite(2) scheme. The next results show that $$\mathcal{I}_{1}\left (\tilde{\ell }_{t}\right ) \subseteq \tilde{\ell }_{t}$$ does not hold in general. Nevertheless, in the following we construct a transformation $$\mathcal{R}$$ such that $$\mathcal{R}^{-1}(\mathcal{I}_{1}\mathbf{B}) \mathcal{R} \in \tilde{\ell }_{t}$$ for $$\mathbf{B} \in \tilde{\ell }_{t}$$. 
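A concrete instance of membership in $$\tilde{\ell }_{t}$$ can be checked directly. The sketch below verifies the Taylor conditions of Definition 5.3 and the condition $$\mathscr{E}_{\mathbf{B}}=\operatorname{span}\{e_{2}\}$$ (via the eigenvalue/kernel characterization at $$z=\pm 1$$, cf. Remark 5.4) for the Taylor mask of the interpolatory cubic Hermite scheme; both this mask and its values, obtained by hand from equations (5.4)–(5.7), are assumptions for illustration only.

```python
from fractions import Fraction as F

def ev(p, z):   # evaluate a Laurent polynomial {exponent: coeff} at z
    return sum(c * z**e for e, c in p.items())

# Taylor mask entries beta_ij of the interpolatory cubic Hermite
# scheme (assumed example, derived from equations (5.4)-(5.7)):
b11 = {0: F(1), 1: F(-1, 2)}
b12 = {-2: F(-1, 4), -1: F(1, 2), 0: F(1, 4), 1: F(-1, 2)}
b21 = {1: F(3, 2)}
b22 = {-1: F(-1, 4), 0: F(1), 1: F(5, 4)}

one, mone = F(1), F(-1)

# Taylor conditions (Definition 5.3):
assert ev(b12, one) == 0 and ev(b12, mone) == 0            # (1)
assert ev(b22, one) == 2 and ev(b22, mone) == 0            # (2)
assert ev(b11, one) + ev(b21, one) == 2                    # (3)

# E_B = span{e2}: e2 is an eigenvector of B*(1) w.r.t. 2 and lies in
# the kernel of B*(-1), while e1 does not share these properties.
assert [ev(b12, one), ev(b22, one)] == [0, 2]              # B*(1) e2 = 2 e2
assert [ev(b12, mone), ev(b22, mone)] == [0, 0]            # B*(-1) e2 = 0
assert [ev(b11, one), ev(b21, one)] != [2, 0]              # e1 not in E_B
```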
First we look for a canonical transformation of a mask B ∈ ℓt to define $$\mathcal{I}_{1}\mathbf{B}$$. Lemma 5.9 Let B ∈ ℓt. Then MB has the eigenvalue 1 with eigenvector $$\binom{0}{1}$$ and the eigenvalue $$\tfrac{1}{2}\boldsymbol{\beta }^{\ast }_{11}(1)$$ with eigenvector $$\binom{1}{-1}$$. A canonical transformation and its inverse are $$\overline{R}=\Big( \begin{array}{@{}r r@{}}0 & 1\\ 1 & -1 \end{array}\Big) \quad \textrm{with inverse} \quad \overline{R}^{-1}=\Big( \begin{array}{@{}r r@{}}1 & 1\\ 1 & 0 \end{array}\Big).$$ Proof. From the Taylor conditions we immediately get $$M_{\mathbf{B}}=\tfrac{1}{2}\left(B^{0}+B^{1}\right)=\tfrac{1}{2}\mathbf{B}^{\ast}(1)=\begin{pmatrix}\tfrac{1}{2}\boldsymbol{\beta}_{11}^{\ast}(1) & 0 \\[0.2cm] \tfrac{1}{2}\boldsymbol{\beta}_{21}^{\ast}(1) & 1 \end{pmatrix}\!.$$ The eigenvalues of MB can now be read from the diagonal. Also, it is clear that $${0 \choose 1}$$ is an eigenvector with eigenvalue 1. For the other eigenvector we use the Taylor condition (3) (in Definition 5.3) in the third equality below, and obtain \begin{align*} M_{\mathbf{B}}\Big(\begin{array}{r}1 \\ -1\end{array}\Big)&=\begin{pmatrix}\tfrac{1}{2}\boldsymbol{\beta}_{11}^{\ast}(1) & 0 \\[0.2cm] \tfrac{1}{2}\boldsymbol{\beta}_{21}^{\ast}(1) & 1 \end{pmatrix}\Big(\begin{array}{r}1 \\ -1\end{array}\Big)= \begin{pmatrix}\tfrac{1}{2}\boldsymbol{\beta}_{11}^{\ast}(1) \\[0.2cm] \tfrac{1}{2}\boldsymbol{\beta}_{21}^{\ast}(1)-1\end{pmatrix}= \begin{pmatrix}\tfrac{1}{2}\boldsymbol{\beta}_{11}^{\ast}(1) \\[0.2cm] -\tfrac{1}{2}\boldsymbol{\beta}_{11}^{\ast}(1)\end{pmatrix}\\ &=\tfrac{1}{2}\boldsymbol{\beta}_{11}^{\ast}(1)\Big(\begin{array}{r}1 \\ -1\end{array}\Big). \end{align*} The structure of $$\overline{R}$$ follows directly from equation (4.14). Lemma 5.9 leads to the following: Theorem 5.10 Let $$\mathbf{B} \in \tilde{\ell }_{t}$$ and let its associated vector scheme SB be convergent. Let $$\mathcal{I}_{1}$$ be the smoothing operator for VSSs in equation (4.19). 
Then $$\mathcal{I}_{1}\mathbf{B} \in \tilde{\ell }_{t}$$ if and only if the Laurent polynomial $$\boldsymbol{\beta }_{11}^{\ast }(z)+\boldsymbol{\beta }_{21}^{\ast }(z)-\boldsymbol{\beta }_{12}^{\ast }(z)-\boldsymbol{\beta }_{22}^{\ast }(z)$$ has a root at 1 of multiplicity at least 2. Proof. From Remark 4.4 we know that $$\mathscr{E}_{\mathcal{I}_{1}\mathbf{B}}=\mathscr{E}_{\mathbf{B}}=\operatorname{span}\{e_{2}\}$$. Furthermore, recall from equation (4.19) that $$\mathcal{I}_{1}\mathbf{B}=\overline{R}\left (\mathcal{I}_{1}\overline{\mathbf{B}}\right )\overline{R}^{-1}$$ with $$\overline{\mathbf{B}}=\overline{R}^{-1}\mathbf{B} \overline{R}.$$ In Lemma 5.9 a canonical transformation $$\overline{R}$$ is computed. Therefore $$\overline{\mathbf{B}}$$ is given by \begin{align} \overline{\mathbf{B}}=\left(\begin{array}{@{}c c@{}} \overline{\boldsymbol{\beta}}_{11}& \overline{\boldsymbol{\beta}}_{12}\\ \overline{\boldsymbol{\beta}}_{21} & \overline{\boldsymbol{\beta}}_{22} \end{array}\right)= \left(\begin{array}{@{}c c@{}} \boldsymbol{\beta}_{12}+\boldsymbol{\beta}_{22} & \boldsymbol{\beta}_{11}+\boldsymbol{\beta}_{21}-\boldsymbol{\beta}_{12}-\boldsymbol{\beta}_{22}\\ \boldsymbol{\beta}_{12} & \boldsymbol{\beta}_{11}-\boldsymbol{\beta}_{12} \end{array}\right)\!. \end{align} (5.13) The parts of the Taylor conditions concerning the elements of B*(1) imply that the symbol $$\overline{\boldsymbol{\beta }}^{\ast }_{12}(z)$$ has a root at 1. Therefore there exists a Laurent polynomial κ*(z) such that $$\overline{\boldsymbol{\beta }}^{\ast }_{12}(z)=(z^{-1}-1)\boldsymbol{\kappa }^{\ast }(z)$$. 
Combining (5.13) with (4.10) we obtain $$(\mathcal{I}_{1}\overline{\mathbf{B}})^{\ast}(1)=\left(\begin{array}{@{}c c@{}} 2 & \tfrac{1}{2}\boldsymbol{\kappa}^{\ast}(1)\\[0.2cm] 0 & \tfrac{1}{2}\boldsymbol{\beta}^{\ast}_{11}(1) \end{array}\right) \quad \textrm{and} \quad (\mathcal{I}_{1}\overline{\mathbf{B}})^{\ast}(-1)=\left(\begin{array}{@{}c c@{}} 0 & \tfrac{1}{2}\boldsymbol{\kappa}^{\ast}(-1)\\[0.2cm] 0 & \tfrac{1}{2}\boldsymbol{\beta}^{\ast}_{11}(-1) \end{array}\right)\!.$$ Therefore \begin{align} (\mathcal{I}_{1}\mathbf{B})^{\ast}(1)=\overline{R}\left(\mathcal{I}_{1}\overline{\mathbf{B}}\right)^{\ast}(1)\overline{R}^{-1}&=\left(\begin{array}{@{}c c@{}} \tfrac{1}{2}\boldsymbol{\beta}_{11}^{\ast}(1) & 0\\[0.1cm] 2+\tfrac{1}{2}\left(\boldsymbol{\kappa}^{\ast}(1)-\boldsymbol{\beta}^{\ast}_{11}(1)\right) & 2 \end{array}\right) \quad \textrm{and} \\[8pt] \nonumber (\mathcal{I}_{1}\mathbf{B})^{\ast}(-1)=\overline{R}\left(\mathcal{I}_{1}\overline{\mathbf{B}}\right)^{\ast}(-1)\overline{R}^{-1}&=\left(\begin{array}{@{}c c@{}} \tfrac{1}{2}\boldsymbol{\beta}^{\ast}_{11}(-1) & 0\\[0.2cm] \tfrac{1}{2}\left(\boldsymbol{\kappa}^{\ast}(-1)-\boldsymbol{\beta}^{\ast}_{11}(-1)\right) & 0 \end{array}\right)\!. \end{align} (5.14) By equation (5.14), (1) and (2) of the Taylor conditions in Definition 5.3 are satisfied by $$\mathcal{I}_{1}{\mathbf{B}}$$. The mask $$\mathcal{I}_{1}{\mathbf{B}}$$ satisfies (3) of the Taylor conditions if and only if κ*(1) = 0. By the definition of κ, this is equivalent to the Laurent polynomial $$\overline{\boldsymbol{\beta }}^{\ast }_{12}(z)=\boldsymbol{\beta }_{11}^{\ast }(z)+\boldsymbol{\beta }_{21}^{\ast }(z)-\boldsymbol{\beta }_{12}^{\ast }(z)-\boldsymbol{\beta }_{22}^{\ast }(z)$$ having a root of multiplicity 2 at 1. Thus, in general, $$\mathcal{I}_{1}\left (\tilde{\ell }_{t}\right )\nsubseteq \tilde{\ell }_{t}$$. In the next two lemmas we solve this problem. 
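The canonical transformation of Lemma 5.9, used in the proof above, can be checked on a concrete instance. The sketch below takes the values of the Taylor mask of the interpolatory cubic Hermite scheme (an assumed example, with $$\boldsymbol{\beta }_{11}^{\ast }(1)=\tfrac{1}{2}$$ and $$\boldsymbol{\beta }_{21}^{\ast }(1)=\tfrac{3}{2}$$) and verifies that $$\overline{R}^{-1}M_{\mathbf{B}}\overline{R}$$ has the block-diagonal form of equation (4.16).

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# For a mask B satisfying the Taylor conditions, M_B = (1/2) B*(1) is
# lower triangular (Lemma 5.9).  The values below belong to the Taylor
# mask of the interpolatory cubic Hermite scheme (assumed example):
M = [[F(1, 4), F(0)], [F(3, 4), F(1)]]

# Canonical transformation of Lemma 5.9 and its inverse:
R    = [[F(0), F(1)], [F(1), F(-1)]]
Rinv = [[F(1), F(1)], [F(1), F(0)]]

assert matmul(R, Rinv) == [[F(1), F(0)], [F(0), F(1)]]

# R^{-1} M_B R is diagonal, with the eigenvalue 1 in the e1-slot and
# (1/2) beta11*(1) = 1/4 in the other, as in equation (4.16):
assert matmul(matmul(Rinv, M), R) == [[F(1), F(0)], [F(0), F(1, 4)]]
```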
Lemma 5.11 Let B be a mask of a converging VSS satisfying $$\mathscr{E}_{\mathbf{B}}=\operatorname{span}\{e_{2}\}$$ and $$\boldsymbol{\beta }_{11}^{\ast }(1)\neq 2$$. Then there exists a transformation $$\mathcal{R}$$ such that $$\widetilde{\mathbf{B}}=\mathcal{R}^{-1}\mathbf{B} \mathcal{R} \in \tilde{\ell }_{t}$$. Proof. First we note that by Remark 5.4, the mask B satisfies (1) and (2) of the Taylor conditions, and we obtain $$\mathbf{B}^{\ast}(1)=\begin{pmatrix} a & 0\\ b & 2 \end{pmatrix}\!,$$ with $$a,b \in \mathbb{R}$$ and $$a\neq 2$$ by the assumption of the lemma. To impose (3) of the Taylor conditions we take $$\mathcal{R}$$ with second column e2 in order to retain the second columns above. A normalized choice of the first column of $$\mathcal{R}$$ yields \begin{align} \mathcal{R}=\left(\begin{array}{@{}r r@{}} 1 & 0\\ \eta & 1 \end{array}\right), \quad \mathcal{R}^{-1}=\left(\begin{array}{@{}r r@{}} 1 & 0\\ -\eta & 1 \end{array}\right)\!, \end{align} (5.15) and we obtain $$\widetilde{\mathbf{B}}^{\ast}(1)=\begin{pmatrix} a & 0\\ (2-a)\eta +b & 2 \end{pmatrix}\!.$$ To satisfy (3) of the Taylor conditions we need (2 − a)η + b + a = 2, and therefore we choose $$\eta =1+\frac{b}{a-2}$$. From the form of $$\widetilde{\mathbf{B}}^{\ast }(1)$$ and since $$a\neq 2$$, we see that $$\mathscr{E}_{\widetilde{\mathbf{B}}}=\operatorname{span}\{e_{2}\}$$. Next we show that we can apply the smoothing procedure and transform the resulting mask to a mask in $$\tilde{\ell }_{t}$$. Corollary 5.12 Let $$\mathbf{B} \in \tilde{\ell }_{t}$$ such that SB is a Cℓ VSS, for $$\ell \geqslant 0$$. Then $$\widetilde{\mathcal{I}_{1}(\mathbf{B})}\in \tilde{\ell }_{t}$$ and $$S_{\widetilde{\mathcal{I}_{1}(\mathbf{B})}}$$ is a Cℓ+1 VSS. Proof. It follows from Remark 4.4 that $$\mathscr{E}_{\mathcal{I}_{1}\mathbf{B}}=\mathscr{E}_{\mathbf{B}}=\operatorname{span}\{e_{2}\}$$. Equation (5.14) implies $$(\mathcal{I}_{1}\mathbf{B})^{\ast }_{11}(1)=\tfrac{1}{2}\boldsymbol{\beta }_{11}^{\ast }(1)$$. 
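The transformation of Lemma 5.11 can be sketched numerically at the level of the 2×2 matrix B*(1); the sample values of a and b below are our own and only illustrate the construction.

```python
# A numeric sketch (our own, with made-up a, b) of the transformation in
# Lemma 5.11, applied at the level of B*(1) = [[a, 0], [b, 2]].

def conjugate(M, eta):
    """Return R^{-1} M R for R = [[1, 0], [eta, 1]], R^{-1} = [[1, 0], [-eta, 1]]."""
    (a11, a12), (a21, a22) = M
    # first form R^{-1} M, then multiply by R on the right
    b11, b12 = a11, a12
    b21, b22 = a21 - eta * a11, a22 - eta * a12
    return [[b11 + eta * b12, b12], [b21 + eta * b22, b22]]

a, b = 1.0, 3.0                    # sample values with a != 2
eta = 1.0 + b / (a - 2.0)          # the choice made in the proof
Bt1 = conjugate([[a, 0.0], [b, 2.0]], eta)

# lower-left entry becomes (2 - a)*eta + b, so the first column sums to 2
assert Bt1 == [[1.0, 0.0], [1.0, 2.0]]
assert Bt1[0][0] + Bt1[1][0] == 2.0
```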
From Lemma 5.9 we know that $$\tfrac{1}{2}\boldsymbol{\beta }_{11}^{\ast }(1)$$ is an eigenvalue of MB. By Corollary 4.1, $$\frac{1}{2}\left |\boldsymbol{\beta }_{11}^{\ast }(1)\right |\leqslant 1$$. In particular $$\left (\mathcal{I}_{1}\mathbf{B}\right )^{\ast }_{11}(1)\neq 2$$. Therefore, $$\mathcal{I}_{1}\mathbf{B}$$ satisfies the conditions of Lemma 5.11 and with the transformation $$\mathcal{R}$$ in equation (5.15), $$\mathcal{R}^{-1}(\mathcal{I}_{1}\mathbf{B}) \mathcal{R} \in \tilde{\ell }_{t}$$. The statement about smoothness follows from the construction of $$\mathcal{I}_{1}$$ in equation (4.19). 5.5. A procedure for increasing the smoothness of Hermite(2) schemes Theorem 5.10 and Corollary 5.12 allow us to define the following procedure for increasing the smoothness of HSS(2)s: Procedure 5.13 The input is a mask A satisfying the spectral condition (Lemma 5.1). Furthermore, we assume that its Taylor scheme is Cℓ−1 for $$\ell \geqslant 1$$ and that the limit functions have vanishing first component for all input data (this implies that SA is HCℓ). The output is a mask C which satisfies the spectral condition and whose associated Hermite(2) scheme SC is HCℓ+1.

1. Compute the Taylor scheme ∂tA (Lemma 5.5).
2. Apply Procedure 4.16 and Lemma 5.11 to obtain $$\mathbf{B}=\widetilde{\mathcal{I}_{1}(\partial _{t}\mathbf{A})}$$.
3. Define $$\mathbf{C}=\mathcal{I}_{t}(\mathbf{B})$$ (Lemma 5.6).

In the following we execute Procedure 5.13 for a general mask A satisfying the assumptions of the procedure, and present C*(z) explicitly. From the definition of η in the proof of Lemma 5.11 it is easy to see that $$\eta =\frac{\boldsymbol{\alpha }^{\ast }_{12}(1)}{2-\boldsymbol{\alpha }_{22}^{\ast }(1)}$$. This is well defined: since $$M_{\partial _{t}\mathbf{A}}$$ has $$\boldsymbol{\alpha }^{\ast }_{22}(1)$$ as an eigenvalue, Corollary 4.1 gives $$\boldsymbol{\alpha }^{\ast }_{22}(1)\neq 2$$. 
Then with ζ = η + 1 we get \begin{align} \boldsymbol{\gamma}^{\ast}_{11}(z) = & \ \tfrac{1}{2}\left(z^{-1}+1\right)\Big( \boldsymbol{\alpha}^{\ast}_{12}(z)\left(\left({\zeta}-{\zeta}^{2}\right)z^{-3}+{\zeta}^{2}z^{-2}+ \left({\zeta}^{2}-1\right)z^{-1}-\left({\zeta}^{2}+{\zeta}\right)\right)\nonumber\\ &+\boldsymbol{\alpha}_{11}^{\ast}(z)\Big({\zeta}\left(z^{-1}-1\right)(1-{\zeta})+{\zeta}\Big) +\boldsymbol{\alpha}_{22}^{\ast}(z)\left({\zeta}\left(z^{-2}-1\right)-1\right)({\zeta}-1)\nonumber\\ &+\boldsymbol{\alpha}_{21}^{\ast}(z)\left({\zeta}^{2}-{\zeta}\right) \Big),\\ \boldsymbol{\gamma}^{\ast}_{12}(z) = & \ \tfrac{1}{2}\Big( \boldsymbol{\alpha}^{\ast}_{12}(z)\Big((1-{\zeta})^{2}z^{-3}+{\zeta}(1-{\zeta}) z^{-2}+{\zeta}(1-{\zeta})z^{-1}+{\zeta}^{2}\Big)\nonumber\\ &+ \boldsymbol{\alpha}^{\ast}_{22}(z)\Big(-\left(z^{-2}-1\right)(1-{\zeta})^{2}+{\zeta}-1 \Big)\nonumber\\ &+\boldsymbol{\alpha}^{\ast}_{11}(z)\Big(\left(z^{-1}-1\right)(1-{\zeta})^{2}+1-{\zeta} \Big) -\boldsymbol{\alpha}^{\ast}_{21}(z)(1-{\zeta})^{2} \Big)\Big/ \left(z^{-1}-1\right)\!,\nonumber\\ \boldsymbol{\gamma}^{\ast}_{21}(z)= & \ \tfrac{1}{2}\left(z^{-2}-1\right)\Big( \boldsymbol{\alpha}^{\ast}_{12}(z)\Big(-{\zeta}^{2}z^{-3}+\left({\zeta}+{\zeta}^{2}\right)\left(z^{-2}+z^{-1}\right)-({\zeta}+1)^{2} \Big)\nonumber\\ &+\boldsymbol{\alpha}^{\ast}_{11}(z){\zeta}\left(1-{\zeta}\left(z^{-1}-1\right)\right)+\boldsymbol{\alpha}^{\ast}_{22}(z) {\zeta}\left({\zeta}\left(z^{-2}-1\right)-1\right)+{\zeta}^{2}\boldsymbol{\alpha}^{\ast}_{21}(z) \Big),\nonumber\\ \boldsymbol{\gamma}^{\ast}_{22}(z) = & \ \tfrac{1}{2} \Big(\boldsymbol{\alpha}^{\ast}_{12}(z)\Big(\left({\zeta}^{2}-{\zeta}\right)z^{-3}+\left(1-{\zeta}^{2}\right)z^{-2}-{\zeta}^{2}z^{-1}+\left({\zeta}^{2}+{\zeta}\right) \Big)\nonumber\\ & +\boldsymbol{\alpha}^{\ast}_{11}(z)(1-{\zeta})\left(1-{\zeta}\left(z^{-1}-1\right)\right)+\boldsymbol{\alpha}^{\ast}_{22}(z){\zeta}\left(\left(1-{\zeta}\right)\left(z^{-2}-1\right)+1\right)\nonumber\\ 
&+\boldsymbol{\alpha}^{\ast}_{21}(z)\left({\zeta}-{\zeta}^{2}\right) \Big)\nonumber. \end{align} (5.16) In the special case $$\boldsymbol{\alpha }^{\ast }_{12}(1)=0$$, ζ = 1, C*(z) reduces to \begin{align} \boldsymbol{\gamma}^{\ast}_{11}(z) = & \ \tfrac{1}{2}(z^{-1}+1)\Big((z^{-2}-2)\boldsymbol{\alpha}^{\ast}_{12}(z)+\boldsymbol{\alpha}_{11}^{\ast}(z)\Big),\\ \nonumber \boldsymbol{\gamma}^{\ast}_{12}(z) = & \ \tfrac{1}{2}\frac{\boldsymbol{\alpha}^{\ast}_{12}(z)}{(z^{-1}-1)},\\ \nonumber \boldsymbol{\gamma}^{\ast}_{21}(z)= & \ \tfrac{1}{2}(z^{-2}-1)\Big(\boldsymbol{\alpha}_{21}^{\ast}(z)-\boldsymbol{\alpha}_{11}^{\ast}(z)(z^{-1}-2) \\ \nonumber &+\boldsymbol{\alpha}_{22}^{\ast}(z)(z^{-2}-2)-\boldsymbol{\alpha}_{12}^{\ast}(z)(z^{-1}-2)(z^{-2}-2)\Big),\\ \nonumber \boldsymbol{\gamma}^{\ast}_{22}(z) = & \ \tfrac{1}{2}(\boldsymbol{\alpha}^{\ast}_{22}(z)-(z^{-1}-2)\boldsymbol{\alpha}_{12}^{\ast}(z)). \end{align} (5.17) With the explicit form of C, we can prove the following: Lemma 5.14 Let φA be the constant corresponding to the spectral condition in equation (5.1) satisfied by A. Then the constant corresponding to the spectral condition satisfied by C is $$\varphi _{\mathbf{C}}=\varphi _{\mathbf{A}}-\frac{1}{2}$$. In particular, the application of Procedure 5.13 to interpolatory HSS(2)s does not result in interpolatory HSS(2)s. Proof. 
Differentiating $$\boldsymbol{\gamma }_{11}^{\ast }(z)$$ and $$\boldsymbol{\gamma }_{12}^{\ast }(z)$$ given in equation (5.16), and evaluating at z = 1 we obtain in view of condition (3) in Lemma 5.1 \begin{align*} 2 \varphi_{\mathbf{C}}={\boldsymbol{\gamma}_{11}^{\ast}}^{\prime}(1)-2\boldsymbol{\gamma}_{12}^{\ast}(1)=& \:{\boldsymbol{\alpha}_{11}^{\ast}}^{\prime}(1)-2\boldsymbol{\alpha}_{12}^{\ast}(1) +({\zeta}-1)\left({\boldsymbol{\alpha}_{21}^{\ast}}^{\prime}(1)-2\boldsymbol{\alpha}_{22}^{\ast}(1)\right)\\ &+2({\zeta}-1)+\frac{1}{2}\boldsymbol{\alpha}_{12}^{\ast}(1)-{\zeta}-\frac{1}{2}\boldsymbol{\alpha}_{22}^{\ast}(1)(1-{\zeta})\\ =&\: 2\varphi_{\mathbf{A}} +\frac{1}{2}\left(\boldsymbol{\alpha}_{12}^{\ast}(1)-\boldsymbol{\alpha}_{22}^{\ast}(1)\right)-\frac{1}{2}{\zeta}\left(2-\boldsymbol{\alpha}_{22}^{\ast}(1)\right)\\ =&\: 2\left(\varphi_{\mathbf{A}} -\frac{1}{2}\right)\!.\end{align*} From the explicit form of C we can infer the following: Corollary 5.15 Let A and C be masks as in Procedure 5.13. If A has support contained in [−N1, N2] with $$N_{1},N_{2} \in \mathbb{N}$$, then the support of C is contained in [−N1 − 5, N2]. Therefore Procedure 5.13 increases the support length at most by 5. Corollary 5.16 Let A be a mask satisfying the spectral condition of (5.1) and let its associated Taylor scheme be convergent. Assume that $$\boldsymbol{\alpha }_{12}^{\ast }(1)=0$$ (i.e. ζ = 1). Denote by C the mask obtained via Procedure 5.13. Then $$\boldsymbol{\gamma }_{12}^{\ast }(1)=0$$ if and only if $${\boldsymbol{\alpha }_{12}^{\ast }}^{\prime }(1)=0$$. Proof. From the definition of C in (5.16) it is easy to see that $$\boldsymbol{\gamma }_{12}^{\ast }(1)=-\tfrac{1}{2}{\boldsymbol{\alpha }_{12}^{\ast }}^{\prime }(1)$$. Therefore $$\boldsymbol{\gamma }_{12}^{\ast }(1)=0$$ iff $${\boldsymbol{\alpha }_{12}^{\ast }}^{\prime }(1)=0$$. Let r be the multiplicity of the root at 1 of $$\boldsymbol{\alpha }_{12}^{\ast }(z)$$. 
Corollary 5.16 implies that r − 1 iterations of the smoothing procedure stay within the special case of ζ = 1. Example 5.17 We consider the Hermite(2) scheme generating C1 piecewise cubic polynomials interpolating the initial data (see Merrien, 1992). The mask of the scheme is given by $$A_{-1}=\left(\begin{array}{@{}r r@{}} \frac{1}{2} & -\frac{1}{8} \\[6pt] \frac{3}{4} & -\frac{1}{8} \end{array}\right)\!, \quad A_{0}=\left(\begin{array}{@{}r r@{}} 1 & 0\\[6pt] 0 & \frac{1}{2} \end{array}\right)\!, \quad A_{1}=\left(\begin{array}{@{}r r@{}} \frac{1}{2} & \frac{1}{8}\\[6pt] -\frac{3}{4} & -\frac{1}{8} \end{array}\right)\!.$$ It is easy to see that it satisfies the spectral condition of equation (5.1) with φA = 0. In the study by Merrien & Sauer (2012) it is proved that its Taylor scheme is convergent with limit functions of vanishing first component (and thus the original HSS(2) is HC1). We apply Procedure 5.13 to this scheme to obtain a new HSS(2) of regularity HC2, using the explicit expressions in equations (5.16) and (5.17). First we compute the symbol: $$\mathbf{A}^{\ast}(z)=\left(\begin{array}{@{}c c@{}} \frac{1}{2}(1+z)^{2}z^{-1} & -\frac{1}{8}(1-z^{2})z^{-1} \\[6pt] \frac{3}{4}(1-z^{2})z^{-1} & -\frac{1}{8}z^{-1}+\frac{1}{2}- \frac{1}{8}z \end{array}\right)\!.$$ Note that $$\boldsymbol{\alpha }^{\ast }_{12}(z)$$ has a root at 1 of multiplicity 1. Therefore we are in the special case ζ = 1. We apply equation (5.17) and obtain the symbol of C: $$\mathbf{C}^{\ast}(z)=\frac{1}{16} \left(\begin{array}{@{}c c@{}} \left(z^{-1}+1\right)^{2}\left(-z^{-2}+z^{-1}+6+2z\right) & -z-1 \\[6pt] \left(z^{-2}-1\right)\Big(z^{-4}-3z^{-3}-3z^{-2}+13z^{-1}+6\Big) & z^{-2}-3z^{-1}+3+z \end{array}\right)\!.$$ From Lemma 5.14 we also know that C satisfies the spectral condition with $$\varphi _{\mathbf{C}}=-\tfrac{1}{2}$$. Therefore the HSS(2) associated with C is an HC2 scheme which is not interpolatory. A basic limit function of this scheme is depicted in Fig. 3. 
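The computation of Example 5.17 can be reproduced numerically. The following sketch (the helper names are our own) implements the reduced formulas (5.17) on Laurent polynomials stored as `{exponent: coefficient}` dicts, recovers the entries of the printed C*(z), and checks φC = −1/2 via the identity 2φC = γ11*′(1) − 2γ12*(1) used in the proof of Lemma 5.14. All coefficients are dyadic, so the float comparisons are exact.

```python
# Sketch (our helpers): verify (5.17) on the Merrien mask of Example 5.17.
# Laurent polynomials are {exponent: coefficient} dicts.

def lp_add(*ps):
    out = {}
    for p in ps:
        for k, c in p.items():
            out[k] = out.get(k, 0.0) + c
    return {k: c for k, c in out.items() if c != 0.0}

def lp_mul(p, q):
    out = {}
    for k, c in p.items():
        for l, d in q.items():
            out[k + l] = out.get(k + l, 0.0) + c * d
    return {k: c for k, c in out.items() if c != 0.0}

def lp_scale(p, s):
    return {k: s * c for k, c in p.items()}

def lp_div_z1(p):
    """Divide p (with p(1) = 0) by z^{-1} - 1 via a running coefficient sum."""
    q, run = {}, 0.0
    for k in range(min(p), max(p) + 1):
        run += p.get(k, 0.0)
        if run != 0.0:
            q[k + 1] = run
    return q

# symbol entries of A for the interpolatory scheme of Example 5.17
a11 = {-1: 0.5, 0: 1.0, 1: 0.5}
a12 = {-1: -0.125, 1: 0.125}
a21 = {-1: 0.75, 1: -0.75}
a22 = {-1: -0.125, 0: 0.5, 1: -0.125}

zp1 = {-1: 1.0, 0: 1.0}      # z^{-1} + 1
m2a = {-2: 1.0, 0: -2.0}     # z^{-2} - 2
m1a = {-1: 1.0, 0: -2.0}     # z^{-1} - 2
sq = {-2: 1.0, 0: -1.0}      # z^{-2} - 1

# equations (5.17)
g11 = lp_scale(lp_mul(zp1, lp_add(lp_mul(m2a, a12), a11)), 0.5)
g12 = lp_scale(lp_div_z1(a12), 0.5)
g21 = lp_scale(lp_mul(sq, lp_add(a21, lp_scale(lp_mul(m1a, a11), -1.0),
                                 lp_mul(m2a, a22),
                                 lp_scale(lp_mul(lp_mul(m1a, m2a), a12), -1.0))), 0.5)
g22 = lp_scale(lp_add(a22, lp_scale(lp_mul(m1a, a12), -1.0)), 0.5)

# compare with the printed C*(z): gamma12 = (-z - 1)/16, etc.
assert g12 == {0: -1/16, 1: -1/16}
assert g22 == {-2: 1/16, -1: -3/16, 0: 3/16, 1: 1/16}
assert sum(g11.values()) == 2.0      # gamma11*(1) = 2
assert sum(g21.values()) == 0.0      # the factor z^{-2} - 1 vanishes at 1

# spectral constant: 2*phi_C = gamma11*'(1) - 2*gamma12*(1) (proof of Lemma 5.14)
phi_C = (sum(k * c for k, c in g11.items()) - 2 * sum(g12.values())) / 2
assert phi_C == -0.5
```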
Note that the support of C is [−6, 1] and has thus increased from length 3 to length 8. If we want to apply another round of Procedure 5.13, we have to use (5.16) with $$\zeta =\tfrac{14}{15}$$. Example 5.18 We consider one of the de Rham-type HSS(2)s of Dubuc & Merrien (2008) obtained from the scheme of Example 5.17. Its mask is given by \begin{align*} &A_{-2}=\frac{1}{8}\left(\begin{array}{@{}r r@{}} \frac{5}{4} & -\frac{3}{8} \\[6pt] \frac{9}{2} & -\frac{5}{4} \end{array}\right)\!, \quad A_{-1}=\frac{1}{8}\left(\begin{array}{@{}r r@{}} \frac{27}{4} & -\frac{9}{8} \\[6pt] \frac{9}{2} & \frac{3}{4} \end{array}\right)\!,\\[0.3cm] &A_{0}=\frac{1}{8}\left(\begin{array}{@{}r r@{}} \frac{27}{4} & \frac{9}{8} \\[6pt] -\frac{9}{2} & \frac{3}{4} \end{array}\right)\!,\quad A_{1}=\frac{1}{8}\left(\begin{array}{@{}r r@{}} \frac{5}{4} & \frac{3}{8} \\[6pt] -\frac{9}{2} & -\frac{5}{4} \end{array}\right)\!. \end{align*} It is easy to see that it satisfies the spectral condition of equation (5.1) with $$\varphi _{\mathbf{A}}=-\tfrac{1}{2}$$. In the study by Conti et al. (2014) it is proved that its Taylor scheme is C1 with limit functions of vanishing first component (and thus the original HSS(2) is HC2). We apply Procedure 5.13 to this scheme to obtain a new HSS(2) of regularity HC3. First we compute the symbol: $$\mathbf{A}^{\ast}(z)=\frac{1}{16}\left(\begin{array}{@{}c c@{}} \frac{1}{2}\left(z^{-1}+1\right)\left(5z+22+5z^{-1}\right) & -\frac{3}{4}\left(z^{-1}-1\right)\left(z+4+z^{-1}\right) \\[6pt] 9\left(z^{-2}-1\right)(z+1) & \frac{1}{2}\left(z^{-1}+1\right)\left(-5z+8-5z^{-1}\right) \end{array}\right)\!.$$ Note that $$\boldsymbol{\alpha }^{\ast }_{12}(z)$$ has a root at 1 of multiplicity 1. Therefore, as in Example 5.17, we are in the special case ζ = 1. 
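The value ζ = 14/15 claimed for a second round on Example 5.17 can be checked in exact rational arithmetic, using η = γ12*(1)/(2 − γ22*(1)) from the proof of Lemma 5.11 applied to the new mask C (a small sketch with our own helper):

```python
# Check that a second application of Procedure 5.13 to the mask C of
# Example 5.17 uses zeta = 14/15 (exact rational arithmetic).
from fractions import Fraction as F

# entries of C*(z) from Example 5.17, as {exponent: coefficient} dicts
g12 = {0: F(-1, 16), 1: F(-1, 16)}                          # (-z - 1)/16
g22 = {-2: F(1, 16), -1: F(-3, 16), 0: F(3, 16), 1: F(1, 16)}

def at_one(p):
    return sum(p.values())                                  # p*(1)

eta = at_one(g12) / (2 - at_one(g22))   # eta = gamma12*(1) / (2 - gamma22*(1))
zeta = 1 + eta
assert zeta == F(14, 15)
```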
We apply equation (5.17) and obtain the symbol of C: \begin{align*} \boldsymbol{\gamma}^{\ast}_{11}(z)&=\frac{1}{128}\left(z^{-1}+1\right)\left(-3z^{-4}-9z^{-3}+25z^{-2}+75z^{-1}+36+4z\right)\!,\\ \boldsymbol{\gamma}^{\ast}_{12}(z)&=-\frac{3}{128}\left(z+4+z^{-1}\right)\!,\\ \boldsymbol{\gamma}^{\ast}_{21}(z)&=\frac{1}{128}\left(z^{-2}-1\right)\Big(3z^{-5}-7z^{-4}-37z^{-3}+37z^{-2}+128z^{-1}+20-8z\Big),\\ \boldsymbol{\gamma}^{\ast}_{22}(z)&=\frac{1}{128}\left(3z^{-3}-7z^{-2}-21z^{-1}+21-4z\right)\!. \end{align*} We also know from Lemma 5.14 that C satisfies the spectral condition with φC = −1. Therefore the HSS(2) associated with C is an HC3 scheme which is not interpolatory. A basic limit function of this scheme is depicted in Fig. 4. Note that the support of C is [−7, 1] and has thus increased from length 4 to length 9. If we want to apply another round of Procedure 5.13, we have to use equation (5.16) with $$\zeta =\tfrac{41}{44}$$. Fig. 3. Basic limit functions and their first derivatives of the HSS(2)s of Example 5.17. First column: interpolatory HC1 scheme SA with basic limit function f. Second column: the smoothed noninterpolatory HC2 scheme SC with basic limit function g. Fig. 4. Basic limit functions, their first and second derivatives of the HSS(2)s of Example 5.18. First column: noninterpolatory HC2 scheme SA with basic limit function f. Second column: smoothed noninterpolatory HC3 scheme SC with basic limit function g. 
Acknowledgements Most of this research was done while the first author was with TU Graz. The authors thank Costanza Conti, Tomas Sauer and Johannes Wallner for their valuable comments and suggestions. We are also grateful to the anonymous reviewers who helped to improve this paper in many aspects. Funding Austrian Science Fund (W1230, I705). References Cavaretta, A., Dahmen, W. & Micchelli, C. (1991) Stationary Subdivision. Am. Math. Soc. Charina, M., Conti, C. & Sauer, T. (2005) Regularity of multivariate vector subdivision schemes. Numer. Algorithms, 39, 97--113. Cohen, A., Dyn, N. & Levin, D. (1996) Stability and inter-dependence of matrix subdivision schemes. Advanced Topics in Multivariate Approximation (F. Fontanella, K. Jetter & P. J. Laurent eds), Ser. Approx. Decompos. 8, World Scientific Publishing, River Edge, pp. 33--45. Conti, C., Merrien, J.-L. & Romani, L. (2014) Dual Hermite subdivision schemes of de Rham-type. BIT Numer. Math., 54, 955--977. Conti, C., Cotronei, M. & Sauer, T. (2016) Factorization of Hermite subdivision operators preserving exponentials and polynomials. Adv. Comput. Math., 42, 1055--1079. Dubuc, S. (2006) Scalar and Hermite subdivision schemes. Appl. Comput. Harmon. Anal., 21, 376--394. Dubuc, S. & Merrien, J.-L. (2005) Convergent vector and Hermite subdivision schemes. Constr. Approx., 23, 1--22. Dubuc, S. & Merrien, J.-L. (2008) de Rham transform of a Hermite subdivision scheme. Approximation Theory XII (M. Neamtu & L. L. Schumaker eds). Nashville, TN: Nashboro Press, pp. 121--132. Dubuc, S. & Merrien, J.-L. (2009) Hermite subdivision schemes and Taylor polynomials. Constr. Approx.
, 29, 219--245. Dyn, N., Gregory, J. A. & Levin, D. (1991) Analysis of uniform binary subdivision schemes for curve design. Constr. Approx., 7, 127--147. Dyn, N. (1992) Subdivision schemes in computer-aided geometric design. Advances in Numerical Analysis. New York: Oxford University Press, pp. 36--104. Dyn, N. & Levin, D. (1995) Analysis of Hermite-type subdivision schemes. Approximation Theory VIII. Wavelets and Multilevel Approximation (C. K. Chui & L. L. Schumaker eds), vol. 2. River Edge, NJ: World Sci., pp. 117--124. Dyn, N. & Levin, D. (1999) Analysis of Hermite-interpolatory subdivision schemes. Spline Functions and the Theory of Wavelets (S. Dubuc & G. Deslauriers eds). Providence, RI: Amer. Math. Soc., pp. 105--113. Dyn, N. & Levin, D. (2002) Subdivision schemes in geometric modelling. Acta Numer., 11, 73--144. Guglielmi, N., Manni, C. & Vitale, D. (2011) Convergence analysis of C2 Hermite interpolatory subdivision schemes by explicit joint spectral radius formulas. Linear Algebra Appl., 434, 884--902. Han, B. (2001) Approximation properties and construction of Hermite interpolants and biorthogonal multiwavelets. J. Approx. Theory, 110, 18--53. Han, B., Yu, T. & Xue, Y. (2005) Noninterpolatory Hermite subdivision schemes. Math. Comput., 74, 1345--1367. Jeong, B. & Yoon, J. (2017) Construction of Hermite subdivision schemes reproducing polynomials. J. Math. Anal. Appl., 451, 565--582. Merrien, J.-L. (1992) A family of Hermite interpolants by bisection algorithms. Numer. Algorithms, 2, 187--200. Merrien, J.-L. (1999) Interpolants d’Hermite C2 obtenus par subdivision. 
ESAIM Math. Model. Numer. Anal., 33, 55--65. Merrien, J.-L. & Sauer, T. (2012) From Hermite to stationary subdivision schemes in one and several variables. Adv. Comput. Math., 36, 547--579. Merrien, J.-L. & Sauer, T. (2017) Extended Hermite subdivision schemes. J. Comput. Appl. Math., 317, 343--361. Micchelli, C. & Sauer, T. (1998) On vector subdivision. Math. Z., 229, 621--674. Sauer, T. (2002) Stationary vector subdivision—quotient ideals, differences and approximation power. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM, 96, 257--277. Sauer, T. (2003) How to generate smoother refinable functions from given ones. Modern Developments in Multivariate Approximation, vol. 145. Basel: Birkhäuser, pp. 279--293. © The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. IMA Journal of Numerical Analysis, Oxford University Press.

IMA Journal of Numerical Analysis, Oxford University Press. Advance Article, 29 March 2018, 28 pages. ISSN 0272-4979, eISSN 1464-3642. DOI: 10.1093/imanum/dry010

This idea comes from univariate scalar subdivision, where it is well known that a scheme with symbol α*(z) is the derived scheme of $$\boldsymbol{\beta }^{\ast }(z)=\tfrac{1+z}{2}z^{-1}\boldsymbol{\alpha }^{\ast }(z)$$ (Dyn & Levin, 2002), and thus if Sα generates Cℓ limits, Sβ generates limits which are Cℓ+1. It is possible to generalize this process to obtain vector (Hermite(2)) subdivision schemes of arbitrarily high smoothness from a convergent vector scheme (a Hermite(2) scheme whose Taylor scheme is convergent with limit functions of vanishing first component). The presentation of such a general procedure is the main aim of this paper. We would like to mention other approaches which increase the regularity of subdivision schemes: while the case of univariate scalar schemes is presented, e.g. in the study by Dyn & Levin (2002), the paper by Sauer (2003) generalizes such a smoothing procedure to the multivariate scalar setting. Although VSSs appear naturally in the smoothness analysis of multivariate scalar schemes, the aim in the study by Sauer (2003) is still to increase the smoothness of scalar schemes. There are many approaches which increase the smoothness of some well-known HSSs by 1. As shown in the study by Conti et al. (2014), the de Rham transform (Dubuc & Merrien, 2008) increases the regularity of some of the interpolatory Hermite schemes presented in the studies by Merrien (1992, 1999). Also Merrien & Sauer (2017) present examples of Hermite schemes with increased smoothness, based on an approach which extends the dimension of the matrices of the mask and the dimension of the refined data. The recent paper by Jeong & Yoon (2017) defines a class of HSSs with tension parameters having high polynomial reproduction and smoothness. This class generalizes and unifies schemes from the studies by Merrien (1992), Han (2001) and Conti et al. (2014). 
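The scalar rule just described is easy to make concrete. A minimal sketch (the helper names are our own; symbols are `{exponent: coefficient}` dicts): multiplying the symbol of the hat-function scheme, $$\boldsymbol{\alpha }^{\ast }(z)=\tfrac{1}{2}z^{-1}(1+z)^{2}$$, by the smoothing factor $$\tfrac{1+z}{2}z^{-1}$$ yields the mask {1/4, 3/4, 3/4, 1/4} of the quadratic B-spline (Chaikin corner-cutting) scheme, which is one order smoother.

```python
# Sketch (our helpers): symbols as {exponent: coefficient} dicts; smoothing
# a scalar scheme means multiplying its symbol by (1 + z)/2 * z^{-1}.

def lp_mul(p, q):
    out = {}
    for k, c in p.items():
        for l, d in q.items():
            out[k + l] = out.get(k + l, 0.0) + c * d
    return {k: c for k, c in out.items() if c != 0.0}

def smooth(alpha):
    """beta*(z) = (1 + z)/2 * z^{-1} * alpha*(z)."""
    return lp_mul({-1: 0.5, 0: 0.5}, alpha)

# hat-function (piecewise linear, C^0) scheme: alpha*(z) = (1/2) z^{-1} (1 + z)^2
hat = {-1: 0.5, 0: 1.0, 1: 0.5}
chaikin = smooth(hat)

# the result is the quadratic B-spline (Chaikin) mask, a C^1 scheme
assert chaikin == {-2: 0.25, -1: 0.75, 0: 0.75, 1: 0.25}
```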
Compared to the approaches just mentioned, our paper presents the first general method, which can be applied to any convergent vector or Hermite(2) subdivision scheme. Our procedure works by algebraically manipulating symbols and generalizing the scalar smoothing factor $$\tfrac{z+1}{2}$$. Therefore, it contains, as a special case, the univariate scalar smoothing procedure (Dyn & Levin, 2002). Another benefit is its iterative nature, which allows us to construct schemes of arbitrarily high regularity from any given convergent scheme. In the Hermite(2) case the support length can increase by up to 5 (see Corollary 5.15, Example 5.17 and Example 5.18), which is bigger than the increase of support in the above-mentioned papers; this is the only drawback, however, and we believe that it is outweighed by the high generality of our method. In the vector case the support increase is at most 2 (see Corollary 4.3), independently of the size of the scheme. Our paper is organized as follows. In Section 2 we introduce the notation used throughout this text and recall some definitions concerning subdivision schemes. Section 3 presents the well-known procedure for increasing the smoothness of univariate scalar subdivision schemes (Dyn & Levin, 2002). We introduce new notation, however, to emphasize the analogy to the procedures we present in Sections 4 and 5 for vector and Hermite(2) schemes, respectively. We conclude with two examples, applying our procedure to an interpolatory Hermite(2) scheme of Merrien (1992) and to a Hermite(2) scheme of de Rham-type (Dubuc & Merrien, 2008), and obtain schemes with limit curves of regularity C2 and C3, respectively. 2. Notation and background In this section we introduce the notation which is used throughout this paper and recall some known facts about scalar, vector and HSSs. Vectors in $$\mathbb{R}^{p}$$ will be labeled by lowercase letters c. The standard basis is denoted by e1, …, ep. 
Sequences of elements in $$\mathbb{R}^{p}$$ are denoted by boldface letters $$\mathbf{c}=\{c_{i} \in \mathbb{R}^{p}: i \in \mathbb{Z} \}$$. The space of all such sequences is $$\ell (\mathbb{R}^{p})$$. We define a subdivision operator $$S_{\boldsymbol{\alpha }}: \ell (\mathbb{R}^{p}) \to \ell (\mathbb{R}^{p})$$ with a scalar mask $$\boldsymbol{\alpha } \in \ell (\mathbb{R})$$ by \begin{align} (S_{\boldsymbol{\alpha}}\mathbf{c})_{i}=\sum_{j \in \mathbb{Z}} \alpha_{i-2j}c_{j}, \quad i \in \mathbb{Z}, \: \mathbf{c} \in \ell(\mathbb{R}^{p}). \end{align} (2.1) We study the case of finitely supported masks, with support contained in [−N, N]. In this case the sum in (2.1) is finite and the scheme is local. We also consider matrix-valued masks. To distinguish them from the scalar case, we denote matrices in $$\mathbb{R}^{p \times p}$$ by uppercase letters. Sequences of matrices are denoted by boldface letters $$\mathbf{A}=\{A_{i} \in \mathbb{R}^{p \times p}: i \in \mathbb{Z} \}$$. We define a vector subdivision operator $$S_{\mathbf{A}}: \ell (\mathbb{R}^{p}) \to \ell (\mathbb{R}^{p})$$ with a finitely supported matrix mask $$\mathbf{A} \in \ell (\mathbb{R}^{p \times p})$$ by \begin{align} (S_{\mathbf{A}}\mathbf{c})_{i}=\sum_{j \in \mathbb{Z}} A_{i-2j}c_{j}, \quad i \in \mathbb{Z}, \: \mathbf{c} \in \ell(\mathbb{R}^{p}). \end{align} (2.2) We define three kinds of subdivision schemes: Definition 2.1 A scalar subdivision scheme is the procedure of constructing $$\mathbf{c}^{n}\: (n\geqslant 1)$$ from input data $$\mathbf{c}^{0} \in \ell (\mathbb{R}^{p})$$ by the rule $$\mathbf{c}^{n}=S_{\boldsymbol{\alpha }}\mathbf{c}^{n-1}$$, where $$\boldsymbol{\alpha } \in \ell (\mathbb{R})$$ is a scalar mask. A VSS is the procedure of constructing $$\mathbf{c}^{n}\: (n\geqslant 1)$$ from input data $$\mathbf{c}^{0} \in \ell (\mathbb{R}^{p})$$ by the rule $$\mathbf{c}^{n}=S_{\mathbf{A}}\mathbf{c}^{n-1}$$, where A is a matrix-valued mask. 
An HSS is the procedure of constructing $$\mathbf{c}^{n}\: (n\geqslant 1)$$ from $$\mathbf{c}^{0}\in \ell (\mathbb{R}^{p})$$ by the rule $$D^{n}\mathbf{c}^{n}=S_{\mathbf{A}}D^{n-1}\mathbf{c}^{n-1}$$, where A is a matrix-valued mask and D is the dilation matrix $$D=\left(\begin{array}{@{}cccc@{}} 1 & & & \\ & \frac{1}{2} & & \\ & & \ddots & \\ & & & \frac{1}{2^{p-1}} \end{array}\right)\!.$$ An HSS of dimension 2 is also denoted by HSS(2). The difference between scalar and vector subdivision lies in the dimension of the mask. In scalar subdivision the components of c are refined independently of each other. This is not the case in vector subdivision. Note also that scalar schemes are a special case of vector schemes with mask $$A_{i}=\alpha _{i}I_{p}$$, where Ip is the (p × p) unit matrix. In Hermite subdivision, on the other hand, the components of c are interpreted as function and derivative values up to order p − 1. This is represented by the matrix D. In particular, Hermite subdivision is a level-dependent case of vector subdivision: $$\mathbf{c}^{n}=S_{\hat{\mathbf{A}}_{n}}\mathbf{c}^{n-1}$$ with $$\hat{\mathbf{A}}_{n}=\{D^{-n}A_{i}D^{n-1}:i \in \mathbb{Z}\}$$. On the space $$\ell (\mathbb{R}^{p})$$ we define a norm by $$\|\mathbf{c}\|_{\infty}=\sup_{i \in \mathbb{Z}}\|c_{i}\|,$$ where ∥⋅∥ is a norm on $$\mathbb{R}^{p}$$. The Banach space of all bounded sequences is denoted by $$\ell ^{\infty }(\mathbb{R}^{p})$$. A subdivision operator $$S_{\boldsymbol{\alpha }}$$ with finitely supported mask, restricted to a map $$\ell ^{\infty }(\mathbb{R}^{p}) \to \ell ^{\infty }(\mathbb{R}^{p})$$, has an induced operator norm: $$\|S_{\boldsymbol{\alpha}}\|_{\infty}=\sup\left\{\left\|S_{\boldsymbol{\alpha}}\mathbf{c} \right\|_{\infty}: \mathbf{c} \in \ell^{\infty}\left(\mathbb{R}^{p}\right)\ \textrm{and} \ \|\mathbf{c}\|_{\infty}=1\right\}\!.$$ This is also true for subdivision operators with matrix masks. Next we define convergence of scalar, vector and Hermite(2) subdivision schemes. 
We start with scalar and vector schemes: Definition 2.2 A scalar (vector) subdivision scheme associated with the mask α (A) is convergent in $$\ell ^{\infty }(\mathbb{R}^{p})$$, also called C0, if for all input data $$\mathbf{c}^{0} \in \ell ^{\infty }(\mathbb{R}^{p})$$ there exists a function $$\varPsi \in C(\mathbb{R},\mathbb{R}^{p})$$, such that the sequences $$\mathbf{c}^{n}=S_{\boldsymbol{\alpha}}^{n}\mathbf{c}^{0}$$ $$\left (\mathbf{c}^{n}=S_{\mathbf{A}}^{n}\mathbf{c}^{0}\right )$$ satisfy $$\sup_{i\in \mathbb{Z}}\|{c^{n}_{i}}-\varPsi\left(\tfrac{i}{2^{n}}\right)\| \to 0, \quad \textrm{as } n \to \infty,$$ and $$\varPsi \neq 0$$ for some $$\mathbf{c}^{0} \in \ell ^{\infty }(\mathbb{R}^{p})$$. We say that the scheme is Cℓ if, in addition, Ψ is ℓ-times continuously differentiable for any initial data. In Section 5 we consider HSSs which refine function and first derivative values. The case of point-tangent data is treated componentwise. With this approach it is sufficient to consider convergence for data in $$\ell (\mathbb{R}^{2})$$. For the reason why we treat only function and first derivative values, and not higher derivatives, see the beginning of Section 5. In order to distinguish between the convergence of VSSs and the convergence of HSSs, we use the notation introduced in the study by Conti et al. (2014): Definition 2.3 An HSS(2) associated with the mask A is said to be $$HC^{\ell }$$-convergent with $$\ell \geqslant 1$$, if for any input data $$\mathbf{c}^{0} \in \ell ^{\infty }(\mathbb{R}^{2})$$, there exists a function $$\varPsi ={\varPsi ^{0} \choose \varPsi ^{1}}$$ with $$\varPsi ^{0} \in C^{\ell }(\mathbb{R},\mathbb{R})$$ and Ψ1 being the derivative of Ψ0, such that the sequences $$\mathbf{c}^{n}=D^{-n}S_{\mathbf{A}}^{n}\mathbf{c}^{0}$$, $$n \geqslant 1$$, satisfy $$\sup_{i \in \mathbb{Z}}\left\|{c^{n}_{i}}-\varPsi\left(\tfrac{i}{2^{n}}\right)\right\| \to 0, \quad \textrm{as } n\to \infty,$$ and $$\varPsi \neq 0$$ for some input data $$\mathbf{c}^{0} \in \ell ^{\infty }\left (\mathbb{R}^{2}\right )$$.
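Definition 2.2 can be watched in action numerically. The sketch below (ours) iterates the linear B-spline scheme on delta data; its limit function is the hat function $$\varPsi(x)=\max(0,1-|x|)$$, and because this scheme reproduces piecewise linear data, the refined values agree with $$\varPsi(i/2^{n})$$ exactly at every level, not just in the limit:

```python
def refine(mask, data):
    """One scalar subdivision step, (S_alpha c)_i = sum_j alpha_{i-2j} c_j."""
    out = {}
    for m, a in mask.items():
        for j, c in data.items():
            out[m + 2 * j] = out.get(m + 2 * j, 0.0) + a * c
    return out

def hat(x):
    """Limit function of the linear B-spline scheme for delta input data."""
    return max(0.0, 1.0 - abs(x))

mask = {-1: 0.5, 0: 1.0, 1: 0.5}   # linear B-spline mask, a C^0 scheme
data = {0: 1.0}
for n in range(3):                  # three refinement steps
    data = refine(mask, data)
```

After three steps the data sit at the parameters i/8 and coincide with the hat function there, illustrating the convergence notion of Definition 2.2.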
Note that in contrast to the vector case, an HSS(2) is convergent only if the limit already possesses a certain degree of smoothness. We conclude by recalling some facts about the generating function of a sequence c, which is the formal Laurent series $$\mathbf{c}^{\ast}(z)=\sum_{i \in \mathbb{Z}}c_{i}z^{i}.$$ The generating function of a mask of a subdivision scheme is called the symbol of the scheme. We study the symbol of both scalar (α) and matrix (A) masks, defined by $$\boldsymbol{\alpha}^{\ast}(z)=\sum_{i \in \mathbb{Z}}\alpha_{i}z^{i} \quad \textrm{and} \quad \mathbf{A}^{\ast}(z)=\sum_{i \in \mathbb{Z}}A_{i}z^{i}.$$ Due to the finite support assumption, symbols are Laurent polynomials. It is easy to see (e.g. in Dyn & Levin, 2002) that the following properties are satisfied: Lemma 2.4 Let c be a sequence and let α be a scalar or a matrix mask. By $$\varDelta$$ we denote the forward-difference operator $$(\varDelta \mathbf{c})_{i}=c_{i+1}-c_{i}$$. Then we have: $$(\varDelta\mathbf{c})^{\ast}(z)=(z^{-1}-1)\mathbf{c}^{\ast}(z) \;\textrm{and}\; \left(S_{\boldsymbol{\alpha}}\mathbf{c}\right)^{\ast}(z)=\boldsymbol{\alpha}^{\ast}(z)\mathbf{c}^{\ast}\left(z^{2}\right)\!.$$ Furthermore, for finite sequences we have the equalities \begin{align*} \mathbf{c}^{\ast}(1)&=\sum_{i \in \mathbb{Z}}c_{2i}+\sum_{i \in \mathbb{Z}}c_{2i+1} \quad \textrm{and} \quad \mathbf{c}^{\ast}(-1)=\sum_{i \in \mathbb{Z}}c_{2i}-\sum_{i \in \mathbb{Z}}c_{2i+1},\\{\mathbf{c}^{\ast}}^{\prime}(1)&=\sum_{i \in \mathbb{Z}}c_{2i}(2i)+\sum_{i \in \mathbb{Z}}c_{2i+1}(2i+1) \quad \textrm{and} \quad{\mathbf{c}^{\ast}}^{\prime}(-1)=\sum_{i \in \mathbb{Z}}c_{2i+1}(2i+1)-\sum_{i \in \mathbb{Z}}c_{2i}(2i). \end{align*} 3. Increasing the smoothness of scalar subdivision schemes In this section we recall a procedure for increasing the smoothness of scalar subdivision schemes, which is realized by the smoothing factor $$\tfrac{z+1}{2}$$. The results of this section are taken from Section 4 in the study by Dyn & Levin (2002).
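The two identities of Lemma 2.4 are easy to machine-check on a concrete finite sequence. A small sympy sketch (ours, with an arbitrarily chosen sequence and the linear B-spline mask) verifies both:

```python
import sympy as sp

z = sp.symbols('z')

def gen(seq):
    """Generating function c*(z) = sum_i c_i z^i of a finite sequence."""
    return sp.expand(sum(c * z**i for i, c in seq.items()))

c = {0: 2, 1: -1, 3: 5}                                        # arbitrary finite data
dc = {i: c.get(i + 1, 0) - c.get(i, 0) for i in range(-1, 4)}  # (Delta c)_i

# First identity: (Delta c)*(z) - (z^{-1} - 1) c*(z) should vanish.
diff_delta = sp.simplify(gen(dc) - (1 / z - 1) * gen(c))

# Second identity: (S_alpha c)*(z) - alpha*(z) c*(z^2) should vanish.
alpha = {-1: sp.Rational(1, 2), 0: 1, 1: sp.Rational(1, 2)}
sc = {}
for m, a in alpha.items():
    for j, cj in c.items():
        sc[m + 2 * j] = sc.get(m + 2 * j, 0) + a * cj
diff_sub = sp.simplify(gen(sc) - gen(alpha) * gen(c).subs(z, z**2))
```

Both differences simplify to zero, confirming the symbol calculus used throughout the paper.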
We introduce notation in order to illustrate the analogy to the procedures we present in Section 4 for VSSs. The condition $$\sum _{i \in \mathbb{Z}}\alpha _{2i}=\sum _{i \in \mathbb{Z}}\alpha _{2i+1}=1$$ on the mask α is necessary for the convergence of Sα. In this case α*(−1) = 0, implying that α*(z) has a factor (z + 1) and there exists a mask ∂α such that \begin{align} \varDelta S_{\boldsymbol{\alpha}}=\tfrac{1}{2}S_{\partial \boldsymbol{\alpha}}\varDelta. \end{align} (3.1) The scalar scheme associated with ∂α is called the derived scheme. It is easy to see that \begin{align} (\partial \boldsymbol{\alpha})^{\ast}(z)=2z\tfrac{\boldsymbol{\alpha}^{\ast}(z)}{z+1} \end{align} (3.2) and that (∂α)* is a Laurent polynomial. The convergence and smoothness analysis of a scalar subdivision scheme associated with α depends on the properties of ∂α: Theorem 3.1 Let α be a mask which satisfies α*(1) = 2 and α*(−1) = 0. The scalar scheme associated with α is convergent if and only if the scalar scheme associated with $$\tfrac{1}{2}\partial \boldsymbol{\alpha }$$ is contractive, namely $$\left \|\left (\frac{1}{2}S_{\partial \boldsymbol{\alpha }}\right )^{L}\right \|_{\infty }<1$$ for some $$L \in \mathbb{N}$$. If the scalar scheme associated with ∂α is $$C^{\ell }\enspace (\ell \geqslant 0)$$ then the scalar subdivision scheme associated with α is Cℓ+1. Theorem 3.1 allows us to define a procedure for increasing the smoothness of a scalar subdivision scheme: for a mask α, define a new mask $$\mathcal{I}\boldsymbol{\alpha }$$ by $$(\mathcal{I}\boldsymbol{\alpha })^{\ast }(z)=\tfrac{(1+z)}{2}z^{-1}\boldsymbol{\alpha }^{\ast }(z)$$. Then $$(\mathcal{I}\boldsymbol{\alpha })^{\ast }(-1)=0$$ and from equation (3.2) we get $$\partial (\mathcal{I}\boldsymbol{\alpha })=\boldsymbol{\alpha }$$. (Note that if ∂α exists, then also $$\mathcal{I}(\partial \boldsymbol{\alpha })=\boldsymbol{\alpha }$$.)
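The two scalar operators are one-line manipulations of symbols, so they can be sketched directly in sympy (our code; the names `derived` and `smoothed` are ours). The sketch also checks, on the linear B-spline symbol, that one smoothing step yields the quadratic B-spline (Chaikin) symbol and that $$\partial$$ and $$\mathcal{I}$$ invert each other as stated:

```python
import sympy as sp

z = sp.symbols('z')

def derived(a):
    """Symbol of the derived scheme, equation (3.2): 2 z a(z) / (z + 1)."""
    return sp.cancel(2 * z * a / (z + 1))

def smoothed(a):
    """Symbol of the smoothed scheme: (1 + z)/2 * z^{-1} * a(z)."""
    return sp.cancel((1 + z) / (2 * z) * a)

hat = (z + 1)**2 / (2 * z)   # linear B-spline symbol, a C^0 scheme
quad = smoothed(hat)         # one smoothing step; expect the C^1 Chaikin symbol
```

This mirrors Example 3.3: repeated application of `smoothed` walks up the B-spline family.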
Corollary 3.2 Let α be a mask associated with a Cℓ ($$\ell \geqslant 0$$) scalar subdivision scheme. Then the mask $$\mathcal{I}\boldsymbol{\alpha }$$ gives rise to a Cℓ+1 scheme. Therefore, by a repeated application of $$\mathcal{I}$$, a scalar subdivision scheme which is at least convergent can be transformed to a new scheme of arbitrarily high regularity. We call $$\mathcal{I}$$ a smoothing operator and $$\tfrac{z+1}{2}$$ a smoothing factor. Note that the factor $$z^{-1}$$ in $$\mathcal{I}$$ is an index shift. Example 3.3 (B-Spline schemes). The symbol of the scheme generating B-Spline curves of degree $$\ell \geqslant 1$$ and smoothness Cℓ−1 is $$\boldsymbol{\alpha}_{\ell}^{\ast}(z)=\Big(\tfrac{(z+1)}{2}z^{-1}\Big)^{\ell}(z+1).$$ Obviously $${\boldsymbol{\alpha }}^{\ast }_{\ell }(z)=\tfrac{(z+1)}{2}z^{-1}{\boldsymbol{\alpha }}^{\ast }_{\ell -1}(z)=(\mathcal{I}{\boldsymbol{\alpha }}_{\ell -1})^{\ast }(z)$$. 4. Increasing the smoothness of VSSs In this section we describe a procedure for increasing the smoothness of VSSs, which is similar to the scalar case. It is more involved since we consider masks consisting of matrix sequences. 4.1. Convergence and smoothness analysis First we present results concerning the convergence and smoothness of VSSs. Their proofs can be found in the studies by Cohen et al. (1996), Micchelli & Sauer (1998), Sauer (2002) and Charina et al. (2005). For a mask A of a VSS we define \begin{align} A^{0}=\sum_{i \in \mathbb{Z}}A_{2i}, \quad A^{1}=\sum_{i \in \mathbb{Z}}A_{2i+1}. \end{align} (4.1) Following Micchelli & Sauer (1998), let \begin{align} \mathscr{E}_{\mathbf{A}}=\left\{v \in \mathbb{R}^{p}: A^{0}v=v\ \textrm{and} \ A^{1}v=v\right\} \end{align} (4.2) and $$k=\dim \mathscr{E}_{\mathbf{A}}$$. A priori, $$0 \leqslant k \leqslant p$$. However, for a convergent VSS, $$\mathscr{E}_{\mathbf{A}}\neq \{0\}$$, i.e. $$1 \leqslant k \leqslant p$$. Therefore, the existence of a common eigenvector of $$A^{0}$$ and $$A^{1}$$ w.r.t.
the eigenvalue 1 is a necessary condition for convergence. The next lemma reduces the convergence analysis to the case $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}.$$ Lemma 4.1 Let SA be a Cℓ$$(\ell \geqslant 0)$$ convergent VSS. Given an invertible matrix $$R \in \mathbb{R}^{p\times p}$$, define a new mask $$\hat{\mathbf{A}}$$ by $$\hat{A}_{i}=R^{-1}A_{i}R$$ for $$i \in \mathbb{Z}$$. The VSS associated with $$\hat{\mathbf{A}}$$ is also Cℓ. There exist invertible matrices such that $$\hat{\mathbf{A}}$$ satisfies $$\mathscr{E}_{\hat{\mathbf{A}}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$, where $$k=\dim \mathscr{E}_{\mathbf{A}}$$. In the studies by Cohen et al. (1996) and Sauer (2002) the following generalization of the forward-difference operator $$\varDelta$$ is introduced: \begin{align} \varDelta_{k}=\begin{pmatrix} \varDelta I_{k} & 0 \\ 0 & I_{p-k} \end{pmatrix}\!, \end{align} (4.3) where Ik is the (k × k) unit matrix. It is shown there that if \begin{align} \mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots,e_{k}\}, \end{align} (4.4) then in analogy to equation (3.1), there exists a matrix mask ∂kA such that \begin{align} \varDelta_{k}S_{\mathbf{A}}=\tfrac{1}{2}S_{\partial_{k}\mathbf{A}}\varDelta_{k}. \end{align} (4.5) Algebraic conditions guaranteeing equation (4.5) are stated and proved in the next subsection. We denote by ∂kA any mask satisfying equation (4.5). The vector scheme associated with ∂kA is called the derived scheme of A with respect to $$\varDelta _{k}$$. Furthermore, we have the following result concerning the convergence of SA in terms of $$S_{\partial _{k}\mathbf{A}}$$: Theorem 4.2 Let A be a mask such that $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$. If $$\|(\tfrac{1}{2}S_{\partial _{k}\mathbf{A}})^{L}\|<1$$ for some $$L \in \mathbb{N}$$ (that is, $$\tfrac{1}{2}S_{\partial _{k}\mathbf{A}}$$ is contractive), then the vector scheme associated with A is convergent.
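The change-of-basis invariance behind Lemma 4.1 rests on the identity $$S_{R^{-1}\mathbf{A}R}(R^{-1}\mathbf{c})=R^{-1}(S_{\mathbf{A}}\mathbf{c})$$, since the factors $$R$$ and $$R^{-1}$$ cancel inside the refinement sum. A small numerical sketch (ours, with an arbitrary random mask) checks this identity:

```python
import numpy as np

rng = np.random.default_rng(0)

def subdivision_step(mask, data, p=2):
    """(S_A c)_i = sum_j A_{i-2j} c_j for finitely supported dicts."""
    out = {}
    for m, A in mask.items():
        for j, c in data.items():
            out[m + 2 * j] = out.get(m + 2 * j, np.zeros(p)) + A @ c
    return out

# An arbitrary 2x2 mask, arbitrary data and an invertible R.
mask = {i: rng.standard_normal((2, 2)) for i in (-1, 0, 1, 2)}
data = {j: rng.standard_normal(2) for j in (0, 1, 2)}
R = np.array([[1.0, 1.0], [0.0, 1.0]])
Rinv = np.linalg.inv(R)

# Refine transformed data with the transformed mask ...
mask_hat = {i: Rinv @ A @ R for i, A in mask.items()}
lhs = subdivision_step(mask_hat, {j: Rinv @ c for j, c in data.items()})
# ... and compare with the transformed refinement of the original data.
rhs = {i: Rinv @ v for i, v in subdivision_step(mask, data).items()}
```

Because the two computations agree term by term, the transformed scheme produces exactly the transformed limits, which is why regularity is preserved.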
In fact there is a stronger result in the studies by Charina et al. (2005) and Cohen et al. (1996), but we only need this special case. Two important results for the analysis of smoothness of VSSs are as follows: Theorem 4.3 (Micchelli & Sauer, 1998) Let A be a mask of a convergent VSS such that $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$ for $$k\leqslant p$$. Then \begin{align} \dim \mathscr{E}_{\partial_{k} \mathbf{A}}=\dim \mathscr{E}_{\mathbf{A}}. \end{align} (4.6) Theorem 4.4 (Charina et al., 2005) Let A be a mask such that $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$. If the VSS associated with ∂kA is Cℓ for $$\ell \geqslant 0$$, then the VSS associated with A is Cℓ+1. Remark 4.5 In the last theorem we omitted the assumption, required in Charina et al. (2005), that SA is convergent. This is possible because if $$S_{\partial _{k}\mathbf{A}}$$ is Cℓ, then $$\frac{1}{2}S_{\partial _{k}\mathbf{A}}$$ is contractive, implying that SA is convergent in view of Theorem 4.2. A useful observation for our analysis is as follows: Lemma 4.6 Let A be a matrix mask. Then $$\mathscr{E}_{\mathbf{A}}=\left\{v \in \mathbb{R}^{p}: \mathbf{A}^{\ast}(1)v=2v \ \textrm{and} \ \mathbf{A}^{\ast}(-1)v=0\right\}\!.$$ Proof. It follows immediately from equation (4.1) and the definition of a symbol that $$A^{0}=\tfrac{1}{2}\Big (\mathbf{A}^{\ast }(1)+\mathbf{A}^{\ast }(-1)\Big )$$ and $$A^{1}=\tfrac{1}{2}\Big (\mathbf{A}^{\ast }(1)-\mathbf{A}^{\ast }(-1)\Big )$$. This, together with equation (4.2), implies the claim of the lemma. 4.2. Algebraic conditions We would like to modify a given mask B of a Cℓ VSS to obtain a new scheme SA which is Cℓ+1. The idea is to define A such that ∂kA = B, i.e. such that equation (4.5) is satisfied for some k. If we can prove that $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$, then by Theorem 4.4, the scheme SA is Cℓ+1.
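Lemma 4.6 reduces membership in $$\mathscr{E}_{\mathbf{A}}$$ to two linear conditions on the symbol evaluated at ±1. The sketch below (ours) checks both characterizations on the mask of the double-knot cubic spline scheme, whose symbol appears later as equation (4.20); the coefficient matrices are read off from that symbol:

```python
import numpy as np

# Mask of the double-knot cubic spline scheme (Example 4.21),
# read off from its symbol B*(z) in equation (4.20).
B = {0: np.array([[2, 0], [5, 1]]) / 8,
     1: np.array([[6, 2], [2, 6]]) / 8,
     2: np.array([[1, 5], [0, 2]]) / 8}

B_even = sum(A for i, A in B.items() if i % 2 == 0)   # A^0 of (4.1)
B_odd = sum(A for i, A in B.items() if i % 2 == 1)    # A^1 of (4.1)
v = np.array([1.0, 1.0])

# v lies in E_B in the sense of definition (4.2) ...
in_EB = np.allclose(B_even @ v, v) and np.allclose(B_odd @ v, v)

# ... equivalently (Lemma 4.6): B*(1) v = 2v and B*(-1) v = 0.
B1 = sum(A for A in B.values())                 # B*(1)
Bm1 = sum((-1)**i * A for i, A in B.items())    # B*(-1)
```

The assertions confirm that the eigenvector $$(1,1)^{T}$$ passes both tests, as used again in Example 4.21.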
There are some immediate questions: (1) Under what conditions on a mask B can we define a mask A such that ∂kA = B? (2) How do we choose k? In order to answer these questions, we have to study in more detail the mask of the derived scheme ∂kA and its relation to the mask A. Definition 4.7 For a mask A of dimension p, i.e. $$A_{i} \in \mathbb{R}^{p\times p}$$ for $$i\in \mathbb{Z}$$, and a fixed k ∈ {1, …, p}, we introduce the block notation $$\mathbf{A}=\begin{pmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22}\end{pmatrix}\!,$$ with A11 of size (k × k). In the next lemma, we present algebraic conditions on a symbol A*(z) guaranteeing the existence of ∂kA for a fixed k ∈ {1, …, p}, and also show that if $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$ these conditions hold. Lemma 4.8 Let A be a mask of dimension p. With the notation of Definition 4.7 we have: (1) If there exists k ∈ {1, …, p} such that $$\mathbf{A}^{\ast }_{11}(-1)=0, \mathbf{A}^{\ast }_{21}(-1)=0$$ and $$\mathbf{A}^{\ast }_{21}(1)=0$$, then there exists a mask ∂kA satisfying equation (4.5). (2) If $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$, then A*(z) satisfies the conditions of (1). Proof. Under the assumptions of (1), the matrix \begin{align} 2\begin{pmatrix} \mathbf{A}_{11}^{\ast}(z)/(z^{-1}+1) & (z^{-1}-1)\mathbf{A}_{12}^{\ast}(z)\\[0.2cm] \mathbf{A}_{21}^{\ast}(z)/(z^{-2}-1) & \mathbf{A}^{\ast}_{22}(z) \end{pmatrix} \end{align} (4.7) is a matrix Laurent polynomial. If we denote it by (∂kA)*(z), then the equation $$\varDelta _{k}S_{\mathbf{A}}=\tfrac{1}{2}S_{\partial _{k}\mathbf{A}}\varDelta _{k}$$ is satisfied.
Indeed, if we write this last equation in terms of symbols, we get \begin{align} &\begin{pmatrix} \left(z^{-1}-1\right)I_{k} & 0 \\ 0 & I_{p-k}\end{pmatrix} \begin{pmatrix} \mathbf{A}^{\ast}_{11}(z) & \mathbf{A}^{\ast}_{12}(z) \\ \mathbf{A}^{\ast}_{21}(z) & \mathbf{A}^{\ast}_{22}(z)\end{pmatrix}\nonumber\\ &\quad= \begin{pmatrix} \mathbf{A}_{11}^{\ast}(z)/\left(z^{-1}+1\right) & \left(z^{-1}-1\right)\mathbf{A}_{12}^{\ast}(z)\\[0.2cm] \mathbf{A}_{21}^{\ast}(z)/\left(z^{-2}-1\right) & \mathbf{A}^{\ast}_{22}(z) \end{pmatrix} \begin{pmatrix} \left(z^{-2}-1\right)I_{k} & 0 \\ 0 & I_{p-k}\end{pmatrix}\!. \end{align} (4.8) It is easy to verify the validity of equation (4.8). In order to prove (2), we deduce from Lemma 4.6 that $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$ implies the properties of A required in (1). In the proof of the validity of our smoothing procedure for VSSs and HSS(2)s, we work with the algebraic conditions (1) of Lemma 4.8 rather than with assumption (4.4). The reason is that the algebraic conditions can be checked and handled more easily. In order to define a procedure for increasing the smoothness of VSSs, we start by answering question (1): Lemma 4.9 Let A, B be masks of dimension p and let k ∈ {1, …, p}. With the notation of Definition 4.7, if $$\mathbf{B}_{12}^{\ast }(1)=0$$, then there exists a mask $$\mathcal{I}_{k} \mathbf{B}$$ satisfying \begin{align} \varDelta_{k}S_{\mathcal{I}_{k}\mathbf{B}}=\tfrac{1}{2}S_{\mathbf{B}}\varDelta_{k}, \end{align} (4.9) where $$\varDelta _{k}$$ is defined in equation (4.3). Proof.
Defining \begin{align} (\mathcal{I}_{k} \mathbf{B})^{\ast}(z)=\frac{1}{2}\begin{pmatrix} \left(z^{-1}+1\right)\mathbf{B}_{11}^{\ast}(z) & \mathbf{B}_{12}^{\ast}(z)/\left(z^{-1}-1\right)\\[2pt] \left(z^{-2}-1\right)\mathbf{B}_{21}^{\ast}(z) & \mathbf{B}^{\ast}_{22}(z) \end{pmatrix}\!, \end{align} (4.10) we note that under the condition $$\mathbf{B}_{12}^{\ast }(1)=0$$, the above matrix is a matrix Laurent polynomial. It is easy to verify that the matrix $$(\mathcal{I}_{k} \mathbf{B})^{\ast }(z)$$ in equation (4.10) satisfies equation (4.9). Remark 4.10 If k = p in Lemma 4.9 then $$(\mathcal{I}_{p}\mathbf{B})^{\ast }(z)=\tfrac{z^{-1}+1}{2}\mathbf{B}^{\ast }(z)$$, where $$\tfrac{z^{-1}+1}{2}$$ is the smoothing factor in the scalar case. In Lemmas 4.8 and 4.9 we constructed two operators ∂k and $$\mathcal{I}_{k}$$ operating on masks, which (under some conditions) are inverse to each other. Denote by $${\ell _{a}^{k}}$$ the set of all masks satisfying the conditions (1) of Lemma 4.8 and by $${\ell _{b}^{k}}$$ the set of all masks satisfying the condition of Lemma 4.9. Then it is easy to show that \begin{align} \partial_{k}: \quad{\ell_{a}^{k}} \to{\ell_{b}^{k}}\qquad\qquad \mathcal{I}_{k}: \quad{\ell_{b}^{k}} \to{\ell_{a}^{k}} \end{align} (4.11) and that \begin{align} \partial_{k}(\mathcal{I}_{k} \mathbf{B})=\mathbf{B} \quad \textrm{and} \quad \mathcal{I}_{k}(\partial_{k} \mathbf{A})=\mathbf{A}. \end{align} (4.12) This shows that the condition of Lemma 4.9 on a mask B allows us to define a mask $$\mathbf{A}=\mathcal{I}_{k} \mathbf{B}$$ such that ∂kA = B. This answers question (1). Still we need to deal with question (2). Remark 4.11 It follows from Lemmas 4.8 and 4.9 that the existence of ∂kA and $$\mathcal{I}_{k} \mathbf{B}$$ depends only on algebraic conditions. Yet this is not sufficient to define a procedure for changing the mask of a VSS in order to get a mask associated with a smoother VSS.
Even if $$\mathcal{I}_{k}\mathbf{B}$$ exists for some k, the application of Theorem 4.4, in view of Lemma 4.8, to $$\mathbf{A}=\mathcal{I}_{k}\mathbf{B}$$ is based on the dimension of $$\mathscr{E}_{\mathbf{A}}$$, which is not necessarily k. But if $$\mathscr{E}_{\mathbf{A}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$, we can conclude from Theorem 4.4 that SA has smoothness increased by 1 compared to the smoothness of SB. In the next subsection we show that if B is associated with a converging VSS and $$\dim \mathscr{E}_{\mathbf{B}}=k$$, then there exists a canonical transformation $$\overline{R}$$ such that $$\overline{\mathbf{B}}=\overline{R}^{-1}\mathbf{B} \overline{R}$$ satisfies the algebraic conditions of Lemma 4.9 and $$\mathscr{E}_{\mathcal{I}_{k}\overline{\mathbf{B}}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$. Therefore by Theorem 4.4, if SB is Cℓ, then $$S_{\mathcal{I}_{k}\overline{\mathbf{B}}}$$ is Cℓ+1. 4.3. The canonical transformations to the standard basis Let B be a mask of a convergent VSS SB and set $$k=\dim \mathscr{E}_{\mathbf{B}}$$. We define a new mask $$\overline{\mathbf{B}}$$ such that \begin{align} \mathscr{E}_{\overline{\mathbf{B}}}=\operatorname{span}\{e_{1},\ldots,e_{k}\}, \, \overline{\mathbf{B}} \in{\ell^{k}_{b}}\ \textrm{and} \ \mathscr{E}_{\mathcal{I}_{k}\overline{\mathbf{B}}}=\operatorname{span}\{e_{1},\ldots,e_{k}\}. \end{align} (4.13) This is achieved by considering the matrix $$M_{\mathbf{B}}=\tfrac{1}{2}\left (B^{0}+B^{1}\right )$$. First we state a result of importance to our analysis, which follows from Theorem 2.2 in the study by Cohen et al. (1996) and from its proof. Theorem 4.12 Let B be a mask of a convergent VSS. A basis of $$\mathscr{E}_{\mathbf{B}}$$ is also a basis of the eigenspace of $$M_{\mathbf{B}}=\tfrac{1}{2}\left (B^{0}+B^{1}\right )$$ corresponding to the eigenvalue 1. Moreover $$\lim _{n \to \infty }M_{\mathbf{B}}^{n}$$ exists.
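Theorem 4.12 can be illustrated numerically on the mask of Example 4.21, for which $$M_{\mathbf{B}}=\tfrac{1}{2}\mathbf{B}^{\ast}(1)$$ is explicit. The sketch below (ours) checks that 1 is an eigenvalue, that the remaining eigenvalue has modulus less than 1, and that the powers of $$M_{\mathbf{B}}$$ converge:

```python
import numpy as np

# M_B = (B^0 + B^1)/2 = B*(1)/2 for the double-knot cubic spline mask (4.20).
M = np.array([[9, 7], [7, 9]]) / 16.0

eigvals = np.sort(np.linalg.eigvals(M).real)      # expect 1/8 and 1
limit = np.linalg.matrix_power(M, 60)             # Theorem 4.12: powers converge
```

Here the limit is the rank-one projector onto $$\mathscr{E}_{\mathbf{B}}=\operatorname{span}\{(1,1)^{T}\}$$, in line with Corollary 4.13: the eigenvalue 1 is semisimple and every other eigenvalue is damped out by powering.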
A direct consequence of the last theorem, concluded from the existence of $$\lim _{n \to \infty }M_{\mathbf{B}}^{n}$$, is as follows: Corollary 4.13 Let B be a mask associated with a converging VSS. Then the algebraic multiplicity of the eigenvalue 1 of MB equals its geometric multiplicity, and all its other eigenvalues have modulus less than 1. In particular, since $$M_{\mathbf{B}}=\frac{1}{2}\mathbf{B}^{\ast }(1)$$, Theorem 4.12 implies that if SB is a convergent VSS, then $$\mathscr{E}_{\mathbf{B}}$$ is the eigenspace of B*(1) w.r.t. the eigenvalue 2. We proceed to define, from a mask B associated with a convergent VSS, a new mask $$\overline{\mathbf{B}}$$ satisfying equation (4.13). Let B be a mask associated with a convergent VSS and let $$\mathcal{V}=\{v_{1},\ldots ,v_{k}\}$$ be a basis of $$\mathscr{E}_{\mathbf{B}}$$ (and therefore also a basis of the eigenspace of MB w.r.t. the eigenvalue 1). We define a real matrix \begin{align} \overline{R}=\left[v_{1}, \ldots, v_{k} | Q\right], \end{align} (4.14) where the columns of Q span the invariant space of MB corresponding to the eigenvalues of MB different from 1. Q completes $$\mathcal{V}$$ to a basis of $$\mathbb{R}^{p}$$ and $$\overline{R}$$ is an invertible matrix. We call $$\overline{R}$$ defined by equation (4.14) a canonical transformation. There are many canonical transformations, since Q is not unique. Any canonical transformation $$\overline{R}$$ can be used to increase the smoothness of a vector scheme (see also Remark 4.5). Define a modified mask $$\overline{\mathbf{B}}$$ by \begin{align} \overline{B}_{i}=\overline{R}^{-1}B_{i}\overline{R}, \quad \textrm{for } i \in \mathbb{Z}. \end{align} (4.15) Then by equation (4.14) and Theorem 4.12 we have that $$\mathscr{E}_{\overline{\mathbf{B}}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$. This proves the first claim in equation (4.13). Also by Lemma 4.1, $$S_{\overline{\mathbf{B}}}$$ is convergent and has the same smoothness as SB.
Furthermore, by equation (4.14), \begin{align} M_{\overline{\mathbf{B}}}=\tfrac{1}{2}\left(\overline{B}^{0}+\overline{B}^{1}\right)=\overline{R}^{-1}M_{\mathbf{B}}\overline{R}=\begin{pmatrix} I_{k} & 0 \\ 0 & J \end{pmatrix} \end{align} (4.16) is the Jordan form of MB. By Corollary 4.13, J has eigenvalues with modulus less than 1. Transformations $$\overline{R}$$ which result in representations of MB similar to the one in equation (4.16) have already been considered, e.g. in the studies by Cohen et al. (1996) and Sauer (2002). The special structure of $$M_{\overline{\mathbf{B}}}$$ is the key to our smoothing procedure. The next theorem follows from equation (4.16) and proves the remaining claims of equation (4.13). Theorem 4.14 Let SB be a convergent VSS and let $$k=\dim \mathscr{E}_{\mathbf{B}}$$. Define $$\overline{\mathbf{B}}$$ by equation (4.15) with $$\overline{R}$$ a canonical transformation. Then $$\overline{\mathbf{B}}$$ has the following properties: (1) $$\overline{\mathbf{B}} \in{\ell ^{k}_{b}}$$, (2) $$\mathscr{E}_{\mathcal{I}_{k}\overline{\mathbf{B}}}=\operatorname{span}\{e_{1},\ldots ,e_{k}\}$$. Proof. We start by proving (1). Since \begin{align} \overline{\mathbf{B}}^{\ast}(1)=\overline{B}^{0}+\overline{B}^{1}=2M_{\overline{\mathbf{B}}}=2\begin{pmatrix} I_{k} & 0 \\ 0 & J \end{pmatrix}\!, \end{align} (4.17) it follows that $$\overline{\mathbf{B}}^{\ast }_{12}(1)=0$$. Thus by Lemma 4.9, $$\overline{\mathbf{B}} \in{\ell ^{k}_{b}}$$ and therefore $$\mathcal{I}_{k}\overline{\mathbf{B}}$$ exists. In order to prove (2), we use Lemma 4.6 and show that $$\mathscr{E}_{\mathcal{I}_{k}\overline{\mathbf{B}}}=\left\{v\in \mathbb{R}^{p}: \left(\mathcal{I}_{k}\overline{\mathbf{B}}\right)^{\ast}(1)v=2v \ \textrm{and} \ \left(\mathcal{I}_{k}\overline{\mathbf{B}}\right)^{\ast}(-1)v=0 \right\}$$ is spanned by e1, …, ek.
Indeed by equation (4.17) it follows that $$\overline{\mathbf{B}}^{\ast }_{11}(1)=2I_{k} \;\textrm{and}\; \overline{\mathbf{B}}^{\ast }_{22}(1)=2J$$. Since by equation (4.17) $$\overline{\mathbf{B}}^{\ast }_{12}(1)=0$$, there exists a symbol C*(z) such that $$\overline{\mathbf{B}}^{\ast }_{12}(z)=(z^{-1}-1)\mathbf{C}^{\ast }(z)$$, and therefore equation (4.10) implies the block form: \begin{align} \left(\mathcal{I}_{k}\overline{\mathbf{B}}\right)^{\ast}(1)=\begin{pmatrix} 2I_{k} & \tfrac{1}{2}\mathbf{C}^{\ast}(1) \\[0.2cm] 0 & J \end{pmatrix}, \quad \left(\mathcal{I}_{k}\overline{\mathbf{B}}\right)^{\ast}(-1)=\begin{pmatrix}0 & \tfrac{1}{2}\mathbf{C}^{\ast}(-1) \\[0.2cm] 0 & \tfrac{1}{2}\overline{\mathbf{B}}^{\ast}_{22}(-1) \end{pmatrix}\!. \end{align} (4.18) Equation (4.18), in view of Lemma 4.6, implies that $$\operatorname{span}\{e_{1},\ldots ,e_{k}\}=\mathscr{E}_{\mathcal{I}_{k}\overline{\mathbf{B}}}$$, since the eigenspace of $$\left (\mathcal{I}_{k}\overline{\mathbf{B}}\right )^{\ast }(1)$$ w.r.t. the eigenvalue 2 is exactly span{e1, …, ek} (the matrix J only contributes eigenvalues with modulus less than 1), and these vectors are in the kernel of $$\left (\mathcal{I}_{k}\overline{\mathbf{B}}\right )^{\ast }(-1)$$. Summarizing the above results, we arrive at the following: Corollary 4.15 Let B be a mask of a convergent VSS, let $$k=\dim \mathscr{E}_{\mathbf{B}}$$ and let $$\overline{\mathbf{B}}$$ be as in Theorem 4.14. Then $$\mathcal{I}_{k}\overline{\mathbf{B}}$$ exists and $$\mathscr{E}_{\mathcal{I}_{k}\overline{\mathbf{B}}}= \mathscr{E}_{\overline{\mathbf{B}}}=\operatorname{span}\{e_{1},\ldots,e_{k}\}.$$ 4.4. A procedure for increasing the smoothness Theorem 4.14 allows us to define the following procedure which generates VSSs of higher smoothness from given convergent VSSs: Procedure 4.16 The input is a mask B associated with a Cℓ VSS, $$\ell \geqslant 0$$, and the output is a mask A associated with a Cℓ+1 VSS.
1. Choose a basis $$\mathcal{V}$$ of $$\mathscr{E}_{\mathbf{B}}$$ and define $$\overline{R}$$, a canonical transformation, as in equation (4.14). 2. Define $$\overline{\mathbf{B}}=\overline{R}^{-1}\mathbf{B} \overline{R}$$. 3. Define $$k=\operatorname{dim}(\mathscr{E}_{\mathbf{B}})$$. 4. Define $$\overline{\mathbf{A}}=\mathcal{I}_{k} \overline{\mathbf{B}}$$ as in equation (4.10). 5. Define $$\mathbf{A}=\overline{R}\,\overline{\mathbf{A}}\, \overline{R}^{-1}$$. A schematic representation of Procedure 4.16 is given in Fig. 1. Remark 4.17 Step 5 in Procedure 4.16 is not essential. The scheme $$S_{\overline{\mathbf{A}}}$$ is already Cℓ+1. Step 5 guarantees that $$\mathscr{E}_{\mathbf{A}}=\mathscr{E}_{\mathbf{B}}$$. In both cases, to apply another smoothing procedure to get a Cℓ+2 VSS, a new canonical transformation has to be applied. Remark 4.18 Procedure 4.16 depends on the choice of a canonical transformation $$\overline{R}$$, which is built from a basis of $$\mathscr{E}_{\mathbf{B}}$$ and a matrix Q as in equation (4.14). Every choice of a canonical transformation gives rise to a different vector scheme. Our smoothing procedure is ‘independent’ of this choice in the sense that every $$\overline{R}$$ increases the smoothness by one. Fig. 1. A schematic representation of Procedure 4.16. In the notation of Procedure 4.16, we define the smoothing operator $$\mathcal{I}_{k}$$ applied to a mask B of a convergent VSS as \begin{align} \mathcal{I}_{k}\mathbf{B}=\overline{R}\left(\mathcal{I}_{k}\overline{\mathbf{B}}\right)\overline{R}^{-1}. \end{align} (4.19) This is a generalization of the smoothing operator in the case of scalar subdivision schemes. An important property of Procedure 4.16, which is easily seen from equation (4.10), is as follows: Corollary 4.19 Assume that B and A are masks as in Procedure 4.16.
If the support of B is contained in [−N1, N2] with $$N_{1},N_{2} \in \mathbb{N}$$, then the support of A is contained in [−N1 − 2, N2]. Therefore Procedure 4.16 increases the support length by at most 2, independently of the dimension of the mask. Recall that in the scalar case the support length is increased by 1. An interesting observation follows from Procedure 4.16 and equations (4.17) and (4.18): Corollary 4.20 Assume that A, B are masks as in Procedure 4.16. Then A*(1) and B*(1) share the eigenvalue 2 and the corresponding eigenspace. To each eigenvalue $$\lambda \neq 2$$ of B*(1) there is an eigenvalue $$\frac{1}{2}\lambda$$ of A*(1). Note that a similar result to that in Corollary 4.20 is in general not true for B*(−1) and A*(−1). However, Example 4.21 shows that this can well be the case. Example 4.21 (Double-knot cubic spline subdivision) We consider the VSS with symbol \begin{align} \mathbf{B}^{\ast}(z)=\frac{1}{8}\begin{pmatrix} 2+6z+z^{2} & 2z+5z^{2} \\ 5+2z & 1+6z+2z^{2} \end{pmatrix}\!. \end{align} (4.20) It is known that this scheme produces C1 limit curves (see e.g. Dyn & Levin, 2002). We apply Procedure 4.16 to B to obtain a VSS SA of regularity C2: First we find a basis of $$\mathscr{E}_{\mathbf{B}}$$ in order to compute a canonical transformation $$\overline{R}$$.
The matrices B*(1) and B*(−1) are given by $$\mathbf{B}^{\ast}(1)=\frac{1}{8}\begin{pmatrix} 9 & 7 \\ 7 & 9 \end{pmatrix}, \quad \mathbf{B}^{\ast}(-1)=\frac{1}{8}\Big(\begin{array}{r r} -3 & 3 \\ 3 & -3 \end{array}\Big)$$ and have the following eigenvalues and eigenvectors: \begin{align} & \textrm{For } \mathbf{B}^{\ast}(1): \quad \textrm{eigenvalues}: 2, \tfrac{1}{4},\quad \text{eigenvectors: }\Big(\begin{array}{r r} 1 \\ 1 \end{array}\Big), \Big(\begin{array}{r r} -1 \\ 1 \end{array}\Big), \text{ resp.}\\ \nonumber & \textrm{For } \mathbf{B}^{\ast}(-1): \quad \textrm{eigenvalues}: 0, -\tfrac{3}{4}, \quad \text{eigenvectors: }\Big(\begin{array}{r r} 1 \\ 1 \end{array}\Big), \Big(\begin{array}{r r} -1 \\ 1 \end{array}\Big), \text{ resp. } \end{align} (4.21) Therefore $$\mathscr{E}_{\mathbf{B}}$$ is spanned by $${1 \choose 1}$$. The transformation $$\overline{R}$$ is determined by the eigenvectors of B*(1): $$\overline{R}=\Big(\begin{array}{r r} 1 & -1 \\ 1 & 1 \end{array}\Big), \quad \overline{R}^{-1}=\tfrac{1}{2}\Big(\begin{array}{r r} 1 & 1 \\ -1 & 1 \end{array}\Big).$$ We continue by computing $$\overline{\mathbf{B}}=\overline{R}^{-1}\mathbf{B} \overline{R}$$ from the symbol of B in equation (4.20), and get $$\overline{\mathbf{B}}^{\ast}(z)=\frac{1}{8}\begin{pmatrix} 4(1+z)^{2} & 3(z^{2}-1) \\ -2(z^{2}-1) & -1+4z-z^{2} \end{pmatrix}\!.$$ From Step 1 we see that $$k=\dim \mathscr{E}_{\mathbf{B}}=1$$. We compute $$\overline{\mathbf{A}}=\mathcal{I}_{1}\overline{\mathbf{B}}$$ via its symbol: $$\overline{\mathbf{A}}^{\ast}(z)=\frac{1}{16}\begin{pmatrix} 4z^{-1}(1+z)^{3} & -3z^{-1}(z+1) \\ 2z^{-2}(z^{2}-1)^{2} & -1+4z-z^{2} \end{pmatrix}\!.$$ In the final step we transform back to the original basis, $$\mathbf{A}=\overline{R}\,\overline{\mathbf{A}}\,\overline{R}^{-1}$$, by deriving A*(z).
\begin{align} \mathbf{A}^{\ast}(z)=\frac{1}{32}z^{-2}\begin{pmatrix} z^{4}+16z^{3}+18z^{2}+7z-2 & \enspace 3z^{4}+8z^{3}+14z^{2}+z-2 \\[0.3cm] 7z^{4}+8z^{3}+12z^{2}+7z+2 & \enspace 5z^{4}+16z^{3}+4z^{2}+z+2 \end{pmatrix}\!. \end{align} (4.22) It follows from the analysis preceding Procedure 4.16 that SA is C2. To verify Remark 4.17 we show that $$\mathscr{E}_{\mathbf{A}}$$ has the same basis as $$\mathscr{E}_{\mathbf{B}}$$. We compute $$\mathbf{A}^{\ast}(1)=\frac{1}{8}\Big(\begin{array}{r r} 10 & 6 \\[0.1cm] 9 & 7 \end{array}\Big), \quad \mathbf{A}^{\ast}(-1)=\frac{1}{16}\Big(\begin{array}{r r} -3 & 3 \\ 3 & -3 \end{array}\Big)$$ and their eigenvalues and eigenvectors: \begin{align} &\textrm{For } \mathbf{A}^{\ast}(1): \quad \textrm{eigenvalues}: 2, \tfrac{1}{8},\quad \text{eigenvectors: }\Big(\begin{array}{r r} 1 \\ 1 \end{array}\Big), \Big(\begin{array}{r r} -2 \\ 3 \end{array}\Big), \text{ resp. }\\ \nonumber &\textrm{For } \mathbf{A}^{\ast}(-1): \quad \textrm{eigenvalues}: 0, -\tfrac{3}{8}, \quad \text{eigenvectors: }\Big(\begin{array}{r r} 1 \\ 1 \end{array}\Big), \Big(\begin{array}{r r} -1 \\ 1 \end{array}\Big),\text{ resp. } \end{align} (4.23) Therefore by (4.21), (4.23) and Lemma 4.6, $$\mathscr{E}_{\mathbf{A}}$$ and $$\mathscr{E}_{\mathbf{B}}$$ are spanned by $${1 \choose 1}$$. Note that the eigenvectors of MA and MB corresponding to the eigenvalues of modulus less than 1 are different. Thus in order to generate a C3 scheme from SA, a new canonical transformation has to be computed. Also, comparing the eigenvalues of A*(1) and B*(1) we see that Corollary 4.20 is satisfied. In fact, in this example the eigenvalues of A*(−1) and B*(−1) have the same property. It is easy to see from equation (4.22) that the support length of the mask A is 4, and from equation (4.20) that the support length of B is 2, in accordance with Corollary 4.19. 5.
Increasing the smoothness of Hermite(2) subdivision schemes In this section we describe a procedure for increasing the smoothness of HSSs refining function and first derivative values, based on the procedure for the vector case described in Section 4. We consider HSSs which operate on data $$\mathbf{c} \in \ell \left (\mathbb{R}^{2}\right )$$, using the notation of Section 2. The reason why we only consider function and first derivative values (and not higher derivatives), i.e. p = 2, is due to the algebraic conditions described in Section 5.1, on which our method is based. While it is rather easy to derive the algebraic conditions equivalent to the spectral condition (Lemma 5.1) in the case p = 2, we believe that the derivation of such conditions for general p, and of the resulting Taylor conditions analogous to Definition 5.3, requires a paper of its own. If such conditions were available, however, we are confident that our method could be extended to the case of general p. 5.1. Algebraic conditions As in the vector case, HSS(2)s use matrix-valued masks $$\mathbf{A}=\{A_{i} \in \mathbb{R}^{2\times 2}: i\in \mathbb{Z} \}$$ and subdivision operators SA as defined in equation (2.2). The input data $$\mathbf{c}^{0}\in \ell \left (\mathbb{R}^{2}\right )$$ are refined via $$D^{n}\mathbf{c}^{n}=S_{\mathbf{A}}^{n}\mathbf{c}^{0}$$, where D is the dilation matrix $$D=\begin{pmatrix} 1 & 0 \\ 0 & \frac{1}{2} \end{pmatrix}\!.$$ An HSS(2) is called interpolatory if its mask A satisfies $$A_{0}=D$$ and $$A_{2i}=0$$ for all $$i \in \mathbb{Z} \backslash \{0\}$$. We always assume that an HSS(2) satisfies the spectral condition (Dubuc & Merrien, 2009).
This condition requires that there is $$\varphi \in \mathbb{R}$$ such that both the constant sequence $${\mathbf{k}}=\left \{\left ({ 1 \atop 0 }\right ): i \in \mathbb{Z}\right \}$$ and the linear sequence $${ {\boldsymbol{\ell }}}=\left \{\left ({i +\varphi \atop 1 }\right ): i \in \mathbb{Z}\right \}$$ obey the rule \begin{align} S_{\mathbf{A}}{\mathbf{k}}={\mathbf{k}}, \quad S_{\mathbf{A}}{{\boldsymbol{\ell}}}=\tfrac{1}{2}{{{\boldsymbol{\ell}}}}. \end{align} (5.1) The spectral condition is crucial for the convergence and smoothness analysis of linear HSS(2)s. If the HSS(2) is interpolatory we can choose φ = 0. We now characterize the spectral condition in terms of the symbol of the mask A. We introduce the notation \begin{align} \mathbf{A}=\begin{pmatrix}\boldsymbol{\alpha}_{11} & \boldsymbol{\alpha}_{12} \\ \boldsymbol{\alpha}_{21} & \boldsymbol{\alpha}_{22}\end{pmatrix}\!, \end{align} (5.2) where $$\boldsymbol{\alpha }_{ij} \in \ell (\mathbb{R})$$ for i, j ∈ {1, 2}. It is easy to verify that the spectral condition in equation (5.1) is equivalent to the algebraic conditions in the next lemma. Lemma 5.1 A mask A satisfies the spectral condition given by equation (5.1) with $$\varphi \in \mathbb{R}$$ if and only if its symbol A*(z) satisfies $$\boldsymbol{\alpha }^{\ast }_{11}(1)=2,\: \boldsymbol{\alpha }^{\ast }_{11}(-1)=0$$. $$\boldsymbol{\alpha }^{\ast }_{21}(1)=0,\: \boldsymbol{\alpha }^{\ast }_{21}(-1)=0$$. $${\boldsymbol{\alpha }^{\ast }_{11}}^{\prime }(1)-2\boldsymbol{\alpha }^{\ast }_{12}(1)=2\varphi ,\:{\boldsymbol{\alpha }^{\ast }_{11}}^{\prime }(-1)+2\boldsymbol{\alpha }^{\ast }_{12}(-1)=0$$. $${\boldsymbol{\alpha }^{\ast }_{21}}^{\prime }(1)-2\boldsymbol{\alpha }^{\ast }_{22}(1)=-2,\:{\boldsymbol{\alpha }^{\ast }_{21}}^{\prime }(-1)+2\boldsymbol{\alpha }^{\ast }_{22}(-1)=0.$$ Parts (1) and (2) relate to the reproduction of constants, whereas parts (3) and (4) are related to the reproduction of linear functions. 
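The conditions of Lemma 5.1 are mechanical to check with a computer algebra system. A sketch, using the symbol of the interpolatory scheme of Example 5.17 below (so φ = 0):

```python
import sympy as sp

z = sp.symbols('z')
w = 1/z  # shorthand for z^{-1}

# Symbol entries of the interpolatory mask of Example 5.17 (Merrien, 1992).
a11 = sp.Rational(1, 2)*(1 + z)**2*w
a12 = -sp.Rational(1, 8)*(1 - z**2)*w
a21 = sp.Rational(3, 4)*(1 - z**2)*w
a22 = -sp.Rational(1, 8)*w + sp.Rational(1, 2) - sp.Rational(1, 8)*z

phi = 0  # the scheme is interpolatory, so phi = 0

# Conditions (1)-(4) of Lemma 5.1:
assert a11.subs(z, 1) == 2 and a11.subs(z, -1) == 0
assert a21.subs(z, 1) == 0 and a21.subs(z, -1) == 0
assert sp.diff(a11, z).subs(z, 1) - 2*a12.subs(z, 1) == 2*phi
assert sp.diff(a11, z).subs(z, -1) + 2*a12.subs(z, -1) == 0
assert sp.diff(a21, z).subs(z, 1) - 2*a22.subs(z, 1) == -2
assert sp.diff(a21, z).subs(z, -1) + 2*a22.subs(z, -1) == 0
```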
Next we cite results on HCℓ smoothness of HSS(2). Consider the Taylor operator T, first introduced in the study by Merrien & Sauer (2012): $$T=\Big(\begin{array}{@{}r r@{}} \varDelta & -1 \\ 0 & 1 \end{array}\Big).$$ The Taylor operator is a natural analogue of the operator $$\varDelta$$k for VSSs and the forward difference operator $$\varDelta$$ in scalar subdivision. We have the following result analogous to equation (4.5): Lemma 5.2 (Merrien & Sauer, 2012) If the HSS(2) associated with a mask A satisfies the spectral condition of equation (5.1), then there exists a matrix mask of dimension 2, ∂tA, such that \begin{align} TS_{\mathbf{A}}=\tfrac{1}{2}S_{\partial_{t}\mathbf{A}}T. \end{align} (5.3) The mask ∂tA determines a VSS called the Taylor scheme associated with A. 5.2. Properties of the Taylor scheme In order to increase the smoothness of an HSS(2), the obvious idea is to pass to its Taylor scheme defined in equation (5.3), increase the smoothness of this VSS by Procedure 4.16 and then use the resulting VSS as the Taylor scheme of a new HSS(2). The first question which arises in this process is if the last step is always possible, i.e. if the smoothing operator $$\mathcal{I}_{k}$$ of equation (4.19) maps Taylor schemes to Taylor schemes. To answer this question depicted in Fig. 2, we state algebraic conditions on a mask B of a VSS guaranteeing that SB is a Taylor scheme. Definition 5.3 The algebraic conditions on a mask B, $$\boldsymbol{\beta }_{12}^{\ast }(1)=0, \boldsymbol{\beta }_{12}^{\ast }(-1)=0,$$ $$\boldsymbol{\beta }_{22}^{\ast }(1)=2, \boldsymbol{\beta }_{22}^{\ast }(-1)=0,$$ $$\boldsymbol{\beta }_{11}^{\ast }(1)+\boldsymbol{\beta }_{21}^{\ast }(1)=2,$$ are called Taylor conditions. (Here we use the notation of equation (5.2).) Fig. 2. A schematic representation of the idea for smoothing HSS(2)s.
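Returning to the Taylor operator T above, its action on Hermite data is easy to illustrate: T annihilates the constant sequence k and maps the linear sequence ℓ (with φ = 0) to the constant sequence e2. A minimal sketch, where the helper name `taylor` is ours:

```python
import numpy as np

# The Taylor operator T = [[Delta, -1], [0, 1]] acting on Hermite data
# c_i = (f_i, f'_i):  (Tc)_i = (f_{i+1} - f_i - f'_i,  f'_i).
def taylor(c):
    return [np.array([c[i + 1][0] - c[i][0] - c[i][1], c[i][1]])
            for i in range(len(c) - 1)]

# k_i = (1, 0)^T is annihilated; ell_i = (i, 1)^T (phi = 0) is mapped to
# the constant sequence (0, 1)^T = e_2.
k = [np.array([1.0, 0.0]) for i in range(6)]
ell = [np.array([float(i), 1.0]) for i in range(6)]
Tk = taylor(k)
Tell = taylor(ell)
```

This mirrors the role of the forward difference Δ in scalar subdivision, which annihilates constants.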
We prove in Lemma 5.5 that the mask ∂tA obtained via equation (5.3) satisfies the Taylor conditions. This justifies the name Taylor conditions. Remark 5.4 It is easy to verify that conditions (1) and (2) of Definition 5.3 are equivalent to $$e_{2} \in \mathscr{E}_{\mathbf{B}}$$. The next lemmas are concerned with the connection between masks satisfying the spectral condition of equation (5.1) and masks satisfying the Taylor conditions of Definition 5.3. Lemma 5.5 Let A be a mask satisfying the spectral condition. Then we can define a mask ∂tA such that equation (5.3) is satisfied, and ∂tA satisfies the Taylor conditions. Note that the existence of ∂tA in Lemma 5.5 is a result of the study by Merrien & Sauer (2012) (see Lemma 5.2). We prove it here because its proof is used in our analysis. Proof. By solving equation (5.3) in terms of symbols for ∂tA, it is easy to see that \begin{align} (\partial_{t}\mathbf{A})^{\ast}_{11}(z)=2\Big(\frac{\boldsymbol{\alpha}_{11}^{\ast}(z)}{z^{-1}+1}-\frac{\boldsymbol{\alpha}_{21}^{\ast}(z)}{z^{-2}-1}\Big),\,\,\,\qquad\qquad\qquad\qquad\qquad\qquad \end{align} (5.4) \begin{align}\! (\partial_{t}\mathbf{A})^{\ast}_{12}(z)=2\left(\left(z^{-1}-1\right)\boldsymbol{\alpha}_{12}^{\ast}(z)-\boldsymbol{\alpha}_{22}^{\ast}(z)+\frac{\boldsymbol{\alpha}_{11}^{\ast}(z)}{z^{-1}+1}-\frac{\boldsymbol{\alpha}_{21}^{\ast}(z)}{z^{-2}-1}\right)\!, \end{align} (5.5) \begin{align} (\partial_{t}\mathbf{A})^{\ast}_{21}(z)=2\frac{\boldsymbol{\alpha}_{21}^{\ast}(z)}{z^{-2}-1},\,\,\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \end{align} (5.6) \begin{align} (\partial_{t}\mathbf{A})^{\ast}_{22}(z)=2\Big(\boldsymbol{\alpha}_{22}^{\ast}(z)+ \frac{\boldsymbol{\alpha}_{21}^{\ast}(z)}{z^{-2}-1}\Big).\!\!\!\qquad\qquad\qquad\qquad\qquad\qquad\qquad \end{align} (5.7) By the algebraic conditions of Lemma 5.1, (∂tA)*(z) defined by equations (5.4–5.7) is a Laurent polynomial. 
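Equations (5.4)–(5.7) can be evaluated symbolically. A sketch for the mask of Example 5.17, checking that the four entries are indeed Laurent polynomials and that they satisfy the Taylor conditions of Definition 5.3:

```python
import sympy as sp

z = sp.symbols('z')
w = 1/z  # shorthand for z^{-1}

# Symbol of the interpolatory mask of Example 5.17, used as a test case.
a11 = sp.Rational(1, 2)*(1 + z)**2*w
a12 = -sp.Rational(1, 8)*(1 - z**2)*w
a21 = sp.Rational(3, 4)*(1 - z**2)*w
a22 = -sp.Rational(1, 8)*w + sp.Rational(1, 2) - sp.Rational(1, 8)*z

# Equations (5.4)-(5.7); sp.cancel removes the (z^{-1}+1) and (z^{-2}-1)
# denominators, leaving Laurent polynomials.
b11 = sp.cancel(2*(a11/(w + 1) - a21/(w**2 - 1)))
b12 = sp.cancel(2*((w - 1)*a12 - a22 + a11/(w + 1) - a21/(w**2 - 1)))
b21 = sp.cancel(2*a21/(w**2 - 1))
b22 = sp.cancel(2*(a22 + a21/(w**2 - 1)))

# Taylor conditions of Definition 5.3, as asserted by Lemma 5.5:
assert b12.subs(z, 1) == 0 and b12.subs(z, -1) == 0
assert b22.subs(z, 1) == 2 and b22.subs(z, -1) == 0
assert sp.simplify((b11 + b21).subs(z, 1)) == 2
```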
Note that to define ∂tA we only need the first two conditions of Lemma 5.1, which are equivalent to the reproduction of constants. We now show that ∂tA satisfies the Taylor conditions. Multiplying equation (5.5) with the factor $$(z^{-2}-1)$$, differentiating with respect to z, substituting z = 1 and z = −1, and applying Lemma 5.1, we obtain: \begin{align*} (\partial_{t} \mathbf{A})^{\ast}_{12}(1)&=-2\boldsymbol{\alpha}_{22}^{\ast}(1)+\boldsymbol{\alpha}_{11}^{\ast}(1)+{\boldsymbol{\alpha}_{21}^{\ast{\prime}}}(1)=0, \\ (\partial_{t} \mathbf{A})^{\ast}_{12}(-1)&=-4\boldsymbol{\alpha}_{12}^{\ast}(-1)-2\boldsymbol{\alpha}_{22}^{\ast}(-1)-2{\boldsymbol{\alpha}_{11}^{\ast{\prime}}}(-1)-\boldsymbol{\alpha}_{11}^{\ast}(-1)-{\boldsymbol{\alpha}_{21}^{\ast}}^{\prime}(-1)=0. \end{align*} This proves that part (1) of Definition 5.3 is satisfied. Applying the same procedure to equation (5.7), we obtain \begin{align*} (\partial_{t} \mathbf{A})^{\ast}_{22}(1)&=2\boldsymbol{\alpha}^{\ast}_{22}(1)-{\boldsymbol{\alpha}_{21}^{\ast}}^{\prime}(1)=2,\\ (\partial_{t} \mathbf{A})^{\ast}_{22}(-1)&=2\boldsymbol{\alpha}^{\ast}_{22}(-1)+{\boldsymbol{\alpha}_{21}^{\ast}}^{\prime}(-1)=0.\\ \end{align*} This proves part (2) of Definition 5.3. Similarly, equations (5.4) and (5.6) imply $$\left(\partial_{t} \mathbf{A}\right)^{\ast}_{11}(1)+(\partial_{t} \mathbf{A})^{\ast}_{21}(1)=\left(2+{\boldsymbol{\alpha}_{21}^{\ast}}^{\prime}(1)\right)-{\boldsymbol{\alpha}_{21}^{\ast}}^{\prime}(1)=2,$$ which proves (3) of Definition 5.3. Lemma 5.6 Let B be a mask satisfying the Taylor conditions. Then we can define a mask $$\mathcal{I}_{t}\mathbf{B}$$ such that $$TS_{\mathcal{I}_{t}\mathbf{B}}=\tfrac{1}{2}S_{\mathbf{B}}T$$ is satisfied, and $$\mathcal{I}_{t}\mathbf{B}$$ satisfies the spectral condition. Proof. Suppose that B satisfies the Taylor conditions. We define a mask $$\mathcal{I}_{t}\mathbf{B}$$ satisfying the equation $$TS_{\mathcal{I}_{t}\mathbf{B}}=\tfrac{1}{2}S_{\mathbf{B}}T$$ by writing it in terms of symbols.
This yields the symbol \begin{align}\nonumber \left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{11}(z)=& \tfrac{1}{2}\left(z^{-1}+1\right)\left(\boldsymbol{\beta}^{\ast}_{11}(z)+\boldsymbol{\beta}^{\ast}_{21}(z)\right)\!,\\ \left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{12}(z)=& \tfrac{1}{2}\Big(\boldsymbol{\beta}^{\ast}_{12}(z)-\boldsymbol{\beta}^{\ast}_{11}(z)-\boldsymbol{\beta}^{\ast}_{21}(z)+\boldsymbol{\beta}^{\ast}_{22}(z)\Big)\Big/\left(z^{-1}-1\right)\!,\\ \nonumber \left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{21}(z)=& \tfrac{1}{2}\boldsymbol{\beta}^{\ast}_{21}(z)\left(z^{-2}-1\right)\!,\\ \nonumber \left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{22}(z)=& \tfrac{1}{2}\left(\boldsymbol{\beta}^{\ast}_{22}(z)-\boldsymbol{\beta}^{\ast}_{21}(z)\right)\!. \end{align} (5.8) It follows from the Taylor conditions that $$\left (\mathcal{I}_{t}\mathbf{B}\right )^{\ast }(z)$$ is a Laurent polynomial, and thus well defined. We continue by showing that $$\mathcal{I}_{t}\mathbf{B}$$ satisfies the spectral condition. It is immediately clear from the definition of $$\mathcal{I}_{t}\mathbf{B}$$ that (1) and (2) of Lemma 5.1 are satisfied. Furthermore, it is easy to see that \begin{align*} {\left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{21}}^{\prime}(1)-2\left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{22}(1)&=-\boldsymbol{\beta}_{21}^{\ast}(1) -\boldsymbol{\beta}_{22}^{\ast}(1)+\boldsymbol{\beta}_{21}^{\ast}(1)=-2,\\{\left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{21}}^{\prime}(-1)+2\left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{22}(-1)&=\boldsymbol{\beta}_{21}^{\ast}(-1) +\boldsymbol{\beta}_{22}^{\ast}(-1)-\boldsymbol{\beta}_{21}^{\ast}(-1)=0, \end{align*} which proves (4) of Lemma 5.1.
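Applying (5.8) to a concrete Taylor symbol illustrates how it undoes the factorization (5.4)–(5.7). A sketch with the Taylor symbol of the mask of Example 5.17 (the Laurent polynomials below were computed from (5.4)–(5.7) for that mask):

```python
import sympy as sp

z = sp.symbols('z')
w = 1/z  # shorthand for z^{-1}

# Taylor-scheme symbol of the mask of Example 5.17 (from (5.4)-(5.7)).
b11 = 1 - z/2
b12 = -w**2/4 + w/2 + sp.Rational(1, 4) - z/2
b21 = 3*z/2
b22 = -w/4 + 1 + 5*z/4

# The operator I_t of equation (5.8):
i11 = sp.Rational(1, 2)*(w + 1)*(b11 + b21)
i12 = sp.cancel(sp.Rational(1, 2)*(b12 - b11 - b21 + b22)/(w - 1))
i21 = sp.Rational(1, 2)*b21*(w**2 - 1)
i22 = sp.Rational(1, 2)*(b22 - b21)

# We recover the original symbol A*(z) of Example 5.17:
a11 = sp.Rational(1, 2)*(1 + z)**2*w
a12 = -sp.Rational(1, 8)*(1 - z**2)*w
a21 = sp.Rational(3, 4)*(1 - z**2)*w
a22 = -sp.Rational(1, 8)*w + sp.Rational(1, 2) - sp.Rational(1, 8)*z

for i_, a_ in [(i11, a11), (i12, a12), (i21, a21), (i22, a22)]:
    assert sp.simplify(i_ - a_) == 0
```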
From the definition of $$\mathcal{I}_{t}\mathbf{B}$$ we see that \begin{align*} {(\mathcal{I}_{t}\mathbf{B})^{\ast}_{11}}^{\prime}(-1)+2(\mathcal{I}_{t}\mathbf{B})^{\ast}_{12}(-1)=\:& -\tfrac{1}{2}\left(\boldsymbol{\beta}_{11}^{\ast}(-1)+\boldsymbol{\beta}_{21}^{\ast}(-1)\right)\\ & -\tfrac{1}{2}\left(\boldsymbol{\beta}_{12}^{\ast}(-1)-\boldsymbol{\beta}_{11}^{\ast}(-1)-\boldsymbol{\beta}_{21}^{\ast}(-1)+\boldsymbol{\beta}_{22}^{\ast}(-1)\right)\\=\:&\:0. \end{align*} Furthermore, by multiplying equation (5.8) with the factor $$\left (z^{-1}-1\right )$$, differentiating this equation with respect to z, substituting z = 1 and using the Taylor conditions, we obtain $$\left(\mathcal{I}_{t}\mathbf{B}\right)^{\ast}_{12}(1)=-\tfrac{1}{2}\Big({\boldsymbol{\beta}^{\ast}_{12}}^{\prime}(1)-{\boldsymbol{\beta}^{\ast}_{11}}^{\prime}(1)+{\boldsymbol{\beta}^{\ast}_{22}}^{\prime}(1)-{\boldsymbol{\beta}^{\ast}_{21}}^{\prime}(1) \Big).$$ This implies $${(\mathcal{I}_{t}\mathbf{B})^{\ast}_{11}}^{\prime}(1)-2(\mathcal{I}_{t}\mathbf{B})^{\ast}_{12}(1)=2\varphi,$$ where φ is defined by $$\varphi =\tfrac{1}{2}({\boldsymbol{\beta }^{\ast }_{12}}^{\prime }(1)+{\boldsymbol{\beta }^{\ast }_{22}}^{\prime }(1)-1)$$. This proves property (3) of Lemma 5.1, concluding the proof of the lemma. In Lemma 5.5 and Lemma 5.6 we defined two operators ∂t and $$\mathcal{I}_{t}$$ which are inverse to each other. Denote by ℓs the set of all masks satisfying the spectral condition of equation (5.1) and by ℓt the set of all masks satisfying the Taylor conditions of Definition 5.3. Then \begin{align} \partial_{t}: \quad \ell_{s} \to \ell_{t} \qquad \qquad \mathcal{I}_{t}: \quad \ell_{t} \to \ell_{s} \end{align} (5.9) and it is easy to verify that \begin{align} \partial_{t}(\mathcal{I}_{t} \mathbf{B})=\mathbf{B} \quad \textrm{and} \quad \mathcal{I}_{t}(\partial_{t} \mathbf{A})=\mathbf{A}. \end{align} (5.10) 5.3.
Relations between converging vector and Hermite(2) schemes In the previous section we derived a one-to-one correspondence between a mask satisfying the spectral condition and a mask satisfying the Taylor conditions. For masks of converging schemes we formulate a result based on Theorem 21 in the study by Merrien & Sauer (2012), and on the results of Section 5.2. Theorem 5.7 A $$C^{\ell }, \ell \geqslant 0,$$ VSS SB satisfying the Taylor conditions, whose limit functions have vanishing first component, gives rise to an HCℓ+1 Hermite(2) scheme SA satisfying the spectral condition. In the next lemma we show that the condition of vanishing first component in the limits generated by SB can be replaced by a condition on the mask B. This also follows from results in the study by Micchelli & Sauer (1998). Lemma 5.8 Let SB be a convergent VSS. Denote by $$\varPsi _{\mathbf{c}}={\psi _{1,\mathbf{c}} \choose \psi _{2,\mathbf{c}}}$$ the limit function generated from the initial data $$\mathbf{c} \in \ell (\mathbb{R}^{2})$$. Then $$\mathscr{E}_{\mathbf{B}}=\operatorname{span}\{e_{2}\} \iff \psi_{1,\mathbf{c}}=0 \textrm{ for all initial data } \mathbf{c}.$$ Proof. First we show that $$\mathscr{E}_{\mathbf{B}}=\operatorname{span}\{e_{2}\}$$ implies ψ1, c = 0 for all c. This follows from the observation that $$\varPsi _{\mathbf{c}}(x)\in \mathscr{E}_{\mathbf{B}}$$ for all $$x \in \mathbb{R}$$. The observation follows from the convergence of SB to a continuous limit and from the basic refinement rules for large k $$\left(S_{\mathbf{B}}^{k+1}\mathbf{c}\right)_{2i}=\sum_{j \in \mathbb{Z}}B_{2j}\left(S_{\mathbf{B}}^{k}\mathbf{c}\right)_{i-j}, \quad \left(S_{\mathbf{B}}^{k+1}\mathbf{c}\right)_{2i+1}=\sum_{j \in \mathbb{Z}}B_{2j+1}\left(S_{\mathbf{B}}^{k}\mathbf{c}\right)_{i-j}, \ \textrm{for}\ i \in \mathbb{Z}.$$ To prove the other direction we use the proof of Theorem 2.2 in the study by Cohen et al. (1996).
It shows that \begin{align} \lim_{n \to \infty}M_{\mathbf{B}}^{n}=\int_{\mathbb{R}} \varPhi(x) \ \mathrm{d}x, \end{align} (5.11) where MB is defined in Theorem 4.4, and Φ is the limit function generated by SB from the initial data δI2. Here I2 is the identity matrix of dimension 2 and $$\boldsymbol{\delta } \in \ell (\mathbb{R})$$ satisfies δ0 = 1, $$\boldsymbol{\delta }_{i}=0, i \neq 0, i \in \mathbb{Z}$$, or equivalently $${\phi _{1j}(x) \choose \phi _{2j}(x) }$$ is the limit from the initial data δej for j ∈ {1, 2}. Thus $$\phi_{11}(x)=\phi_{12}(x)=0 \quad \textrm{for } x \in \mathbb{R}.$$ It follows from equation (5.11) that \begin{align} \lim_{n \to \infty}M_{\mathbf{B}}^{n}=\begin{pmatrix} 0 & 0 \\ \nu & \theta \end{pmatrix}\!, \quad \nu, \theta \in \mathbb{R}. \end{align} (5.12) Assume $$\mathscr{E}_{\mathbf{B}} \neq \operatorname{span}\{e_{2}\}$$. Then $$\mathscr{E}_{\mathbf{B}}=\mathbb{R}^{2}$$, and MB = I2, since by Theorem 4.4 the eigenspace of MB with respect to 1 is exactly $$\mathscr{E}_{\mathbf{B}}$$. Thus $$\lim _{n \to \infty }M_{\mathbf{B}}^{n}=I_{2}$$ in contradiction to equation (5.12). 5.4. Imposing the Taylor conditions Denote by $$\tilde{\ell }_{t} \subsetneqq \ell _{t}$$ the set of masks satisfying B ∈ ℓt and $$\mathscr{E}_{\mathbf{B}}=\operatorname{span}\{e_{2}\}$$. It follows from Theorem 5.7 and Lemma 5.8 that for $$\mathbf{B} \in \tilde{\ell }_{t}$$, a mask of a Cℓ VSS, if also $$\mathcal{I}_{1}\mathbf{B} \in \tilde{\ell }_{t}$$, then $$\mathcal{I}_{1}\mathbf{B}$$ is a mask of a Cℓ+1 VSS which is the Taylor scheme of an HCℓ+2 Hermite(2) scheme. The next results show that $$\mathcal{I}_{1}\left (\tilde{\ell }_{t}\right ) \subseteq \tilde{\ell }_{t}$$ does not hold in general. Nevertheless, in the following we construct a transformation $$\mathcal{R}$$ such that $$\mathcal{R}^{-1}(\mathcal{I}_{1}\mathbf{B}) \mathcal{R} \in \tilde{\ell }_{t}$$ for $$\mathbf{B} \in \tilde{\ell }_{t}$$. 
First we look for a canonical transformation of a mask B ∈ ℓt to define $$\mathcal{I}_{1}\mathbf{B}$$. Lemma 5.9 Let B ∈ ℓt. Then MB has the eigenvalue 1 with eigenvector $$\binom{0}{1}$$ and the eigenvalue $$\tfrac{1}{2}\boldsymbol{\beta }^{\ast }_{11}(1)$$ with eigenvector $$\binom{1}{-1}$$. A canonical transformation and its inverse are $$\overline{R}=\Big( \begin{array}{@{}r r@{}}0 & 1\\ 1 & -1 \end{array}\Big) \quad \textrm{with inverse} \quad \overline{R}^{-1}=\Big( \begin{array}{@{}r r@{}}1 & 1\\ 1 & 0 \end{array}\Big).$$ Proof. From the Taylor conditions we immediately get $$M_{\mathbf{B}}=\tfrac{1}{2}\left(B^{0}+B^{1}\right)=\tfrac{1}{2}\mathbf{B}^{\ast}(1)=\begin{pmatrix}\tfrac{1}{2}\boldsymbol{\beta}_{11}^{\ast}(1) & 0 \\[0.2cm] \tfrac{1}{2}\boldsymbol{\beta}_{21}^{\ast}(1) & 1 \end{pmatrix}\!.$$ The eigenvalues of MB can now be read from the diagonal. Also, it is clear that $${0 \choose 1}$$ is an eigenvector with eigenvalue 1. For the other eigenvector we use the Taylor condition (3) (in Definition 5.3) in the third equality below, and obtain \begin{align*} M_{\mathbf{B}}\Big(\begin{array}{r}1 \\ -1\end{array}\Big)&=\begin{pmatrix}\tfrac{1}{2}\boldsymbol{\beta}_{11}^{\ast}(1) & 0 \\[0.2cm] \tfrac{1}{2}\boldsymbol{\beta}_{21}^{\ast}(1) & 1 \end{pmatrix}\Big(\begin{array}{r}1 \\ -1\end{array}\Big)= \begin{pmatrix}\tfrac{1}{2}\boldsymbol{\beta}_{11}^{\ast}(1) \\[0.2cm] \tfrac{1}{2}\boldsymbol{\beta}_{21}^{\ast}(1)-1\end{pmatrix}= \begin{pmatrix}\tfrac{1}{2}\boldsymbol{\beta}_{11}^{\ast}(1) \\[0.2cm] -\tfrac{1}{2}\boldsymbol{\beta}_{11}^{\ast}(1)\end{pmatrix}\\ &=\tfrac{1}{2}\boldsymbol{\beta}_{11}^{\ast}(1)\Big(\begin{array}{r}1 \\ -1\end{array}\Big). \end{align*} The structure of $$\overline{R}$$ follows directly from equation (4.14). Lemma 5.9 leads to the following: Theorem 5.10 Let $$\mathbf{B} \in \tilde{\ell }_{t}$$ and let its associated vector scheme SB be convergent. Let $$\mathcal{I}_{1}$$ be the smoothing operator for VSSs in equation (4.19).
Then $$\mathcal{I}_{1}\mathbf{B} \in \tilde{\ell }_{t}$$ if and only if the Laurent polynomial $$\boldsymbol{\beta }_{11}^{\ast }(z)+\boldsymbol{\beta }_{21}^{\ast }(z)-\boldsymbol{\beta }_{12}^{\ast }(z)-\boldsymbol{\beta }_{22}^{\ast }(z)$$ has a root at 1 of multiplicity at least 2. Proof. From Remark 4.4 we know that $$\mathscr{E}_{\mathcal{I}_{1}\mathbf{B}}=\mathscr{E}_{\mathbf{B}}=\operatorname{span}\{e_{2}\}$$. Furthermore, recall from equation (4.19) that $$\mathcal{I}_{1}\mathbf{B}=\overline{R}\left (\mathcal{I}_{1}\overline{\mathbf{B}}\right )\overline{R}^{-1}$$ with $$\overline{\mathbf{B}}=\overline{R}^{-1}\mathbf{B} \overline{R}.$$ In Lemma 5.9 a canonical transformation $$\overline{R}$$ is computed. Therefore $$\overline{\mathbf{B}}$$ is given by \begin{align} \overline{\mathbf{B}}=\left(\begin{array}{@{}c c@{}} \overline{\boldsymbol{\beta}}_{11}& \overline{\boldsymbol{\beta}}_{12}\\ \overline{\boldsymbol{\beta}}_{21} & \overline{\boldsymbol{\beta}}_{22} \end{array}\right)= \left(\begin{array}{@{}c c@{}} \boldsymbol{\beta}_{12}+\boldsymbol{\beta}_{22} & \boldsymbol{\beta}_{11}+\boldsymbol{\beta}_{21}-\boldsymbol{\beta}_{12}-\boldsymbol{\beta}_{22}\\ \boldsymbol{\beta}_{12} & \boldsymbol{\beta}_{11}-\boldsymbol{\beta}_{12} \end{array}\right)\!. \end{align} (5.13) The parts of the Taylor conditions concerning the elements of B*(1) imply that the symbol $$\overline{\boldsymbol{\beta }}^{\ast }_{12}(z)$$ has a root at 1. Therefore there exists a Laurent polynomial κ*(z) such that $$\overline{\boldsymbol{\beta }}^{\ast }_{12}(z)=(z^{-1}-1)\boldsymbol{\kappa }^{\ast }(z)$$. 
Combining (5.13) with (4.10) we obtain $$(\mathcal{I}_{1}\overline{\mathbf{B}})^{\ast}(1)=\left(\begin{array}{@{}c c@{}} 2 & \tfrac{1}{2}\boldsymbol{\kappa}^{\ast}(1)\\[0.2cm] 0 & \tfrac{1}{2}\boldsymbol{\beta}^{\ast}_{11}(1) \end{array}\right) \quad \textrm{and} \quad (\mathcal{I}_{1}\overline{\mathbf{B}})^{\ast}(-1)=\left(\begin{array}{@{}c c@{}} 0 & \tfrac{1}{2}\boldsymbol{\kappa}^{\ast}(-1)\\[0.2cm] 0 & \tfrac{1}{2}\boldsymbol{\beta}^{\ast}_{11}(-1) \end{array}\right)\!.$$ Therefore \begin{align} (\mathcal{I}_{1}\mathbf{B})^{\ast}(1)=\overline{R}\left(\mathcal{I}_{1}\overline{\mathbf{B}}\right)^{\ast}(1)\overline{R}^{-1}&=\left(\begin{array}{@{}c c@{}} \tfrac{1}{2}\boldsymbol{\beta}_{11}^{\ast}(1) & 0\\[0.1cm] 2+\tfrac{1}{2}\left(\boldsymbol{\kappa}^{\ast}(1)-\boldsymbol{\beta}^{\ast}_{11}(1)\right) & 2 \end{array}\right) \quad \textrm{and} \\[8pt] \nonumber (\mathcal{I}_{1}\mathbf{B})^{\ast}(-1)=\overline{R}\left(\mathcal{I}_{1}\overline{\mathbf{B}}\right)^{\ast}(-1)\overline{R}^{-1}&=\left(\begin{array}{@{}c c@{}} \tfrac{1}{2}\boldsymbol{\beta}^{\ast}_{11}(-1) & 0\\[0.2cm] \tfrac{1}{2}\left(\boldsymbol{\kappa}^{\ast}(-1)-\boldsymbol{\beta}^{\ast}_{11}(-1)\right) & 0 \end{array}\right)\!. \end{align} (5.14) By equation (5.14), (1) and (2) of the Taylor conditions in Definition 5.3 are satisfied by $$\mathcal{I}_{1}{\mathbf{B}}$$. The mask $$\mathcal{I}_{1}{\mathbf{B}}$$ satisfies (3) of the Taylor conditions if and only if κ*(1) = 0. By the definition of κ, this is equivalent to the Laurent polynomial $$\overline{\boldsymbol{\beta }}^{\ast }_{12}(z)=\boldsymbol{\beta }_{11}^{\ast }(z)+\boldsymbol{\beta }_{21}^{\ast }(z)-\boldsymbol{\beta }_{12}^{\ast }(z)-\boldsymbol{\beta }_{22}^{\ast }(z)$$ having a root of multiplicity 2 at 1. Thus, in general, $$\mathcal{I}_{1}\left (\tilde{\ell }_{t}\right )\nsubseteq \tilde{\ell }_{t}$$. In the next two lemmas we solve this problem. 
Lemma 5.11 Let B be a mask of a converging VSS satisfying $$\mathscr{E}_{\mathbf{B}}=\operatorname{span}\{e_{2}\}$$ and $$\boldsymbol{\beta }_{11}^{\ast }(1)\neq 2$$. Then there exists a transformation $$\mathcal{R}$$ such that $$\widetilde{\mathbf{B}}=\mathcal{R}^{-1}\mathbf{B} \mathcal{R} \in \tilde{\ell }_{t}$$. Proof. First we note that by Remark 5.4, the mask B satisfies (1) and (2) of the Taylor conditions and obtain $$\mathbf{B}^{\ast}(1)=\begin{pmatrix} a & 0\\ b & 2 \end{pmatrix}\!,$$ with $$a,b \in \mathbb{R}$$ and $$a\neq 2$$ by the assumption of the lemma. To impose (3) of the Taylor conditions we take $$\mathcal{R}$$ with a second column e2 in order to retain the second columns above. A normalized choice of the first column of $$\mathcal{R}$$ yields \begin{align} \mathcal{R}=\left(\begin{array}{@{}r r@{}} 1 & 0\\ \eta & 1 \end{array}\right), \quad \mathcal{R}^{-1}=\left(\begin{array}{@{}r r@{}} 1 & 0\\ -\eta & 1 \end{array}\right)\!, \end{align} (5.15) and we obtain $$\widetilde{\mathbf{B}}^{\ast}(1)=\begin{pmatrix} a & 0\\ (2-a)\eta +b & 2 \end{pmatrix}\!.$$ To satisfy (3) of the Taylor conditions we need (2 − a)η + b + a = 2. Therefore we choose $$\eta =1+\frac{b}{a-2}$$. From the form of $$\widetilde{\mathbf{B}}^{\ast }(1)$$ and since $$a\neq 2$$, we see that $$\mathscr{E}_{\widetilde{\mathbf{B}}}=\operatorname{span}\{e_{2}\}$$. Next we show that we can apply the smoothing procedure and transform the resulting mask to a mask in $$\tilde{\ell }_{t}$$. Corollary 5.12 Let $$\mathbf{B} \in \tilde{\ell }_{t}$$ such that SB is a Cℓ VSS, for $$\ell \geqslant 0$$. Then $$\widetilde{\mathcal{I}_{1}(\mathbf{B})}\in \tilde{\ell }_{t}$$ and $$S_{\widetilde{\mathcal{I}_{1}(\mathbf{B})}}$$ is a Cℓ+1 VSS. Proof. It follows from Remark 4.4 that $$\mathscr{E}_{\mathcal{I}_{1}\mathbf{B}}=\mathscr{E}_{\mathbf{B}}=\operatorname{span}\{e_{2}\}$$. Equation (5.14) implies $$(\mathcal{I}_{1}\mathbf{B})^{\ast }_{11}(1)=\tfrac{1}{2}\boldsymbol{\beta }_{11}^{\ast }(1)$$.
From Lemma 5.9 we know that $$\tfrac{1}{2}\boldsymbol{\beta }_{11}^{\ast }(1)$$ is an eigenvalue of MB. By Corollary 4.1, $$\frac{1}{2}\left |\boldsymbol{\beta }_{11}^{\ast }(1)\right |\leqslant 1$$. In particular $$\left (\mathcal{I}_{1}\mathbf{B}\right )^{\ast }_{11}(1)\neq 2$$. Therefore, $$\mathcal{I}_{1}\mathbf{B}$$ satisfies the conditions of Lemma 5.11 and with the transformation $$\mathcal{R}$$ in equation (5.15), $$\mathcal{R}^{-1}(\mathcal{I}_{1}\mathbf{B}) \mathcal{R} \in \tilde{\ell }_{t}$$. The statement about smoothness follows from the construction of $$\mathcal{I}_{1}$$ in equation (4.19). 5.5. A procedure for increasing the smoothness of Hermite(2) schemes Theorem 5.10 and Corollary 5.12 allow us to define the following procedure for increasing the smoothness of HSS(2)s: Procedure 5.13 The input is a mask A satisfying the spectral condition (Lemma 5.1). Furthermore, we assume that its Taylor scheme is Cℓ−1 for $$\ell \geqslant 1$$ and that the limit functions have vanishing first component for all input data (this implies that SA is HCℓ). The output is a mask C which satisfies the spectral condition and its associated Hermite(2) scheme SC is HCℓ+1. (1) Compute the Taylor scheme ∂tA (Lemma 5.5). (2) Apply Procedure 4.16 and Lemma 5.11 to obtain $$\mathbf{B}=\widetilde{\mathcal{I}_{1}(\partial _{t}\mathbf{A})}$$. (3) Define $$\mathbf{C}=\mathcal{I}_{t}(\mathbf{B})$$ (Lemma 5.6). In the following we execute Procedure 5.13 for a general mask A satisfying the assumptions of the procedure, and present explicitly C*(z). From the definition of η in the proof of Lemma 5.11 it is easy to see that $$\eta =\frac{\boldsymbol{\alpha }^{\ast }_{12}(1)}{2-\boldsymbol{\alpha }_{22}^{\ast }(1)}$$. This is well defined, since $$M_{\partial _{t}\mathbf{A}}$$ has $$\boldsymbol{\alpha }^{\ast }_{22}(1)$$ as an eigenvalue. By Corollary 4.1, $$\boldsymbol{\alpha }^{\ast }_{22}(1)\neq 2$$.
Then with ζ = η + 1 we get \begin{align} \boldsymbol{\gamma}^{\ast}_{11}(z) = & \ \tfrac{1}{2}\left(z^{-1}+1\right)\Big( \boldsymbol{\alpha}^{\ast}_{12}(z)\left(\left({\zeta}-{\zeta}^{2}\right)z^{-3}+{\zeta}^{2}z^{-2}+ \left({\zeta}^{2}-1\right)z^{-1}-\left({\zeta}^{2}+{\zeta}\right)\right)\nonumber\\ &+\boldsymbol{\alpha}_{11}^{\ast}(z)\Big({\zeta}\left(z^{-1}-1\right)(1-{\zeta})+{\zeta}\Big) +\boldsymbol{\alpha}_{22}^{\ast}(z)\left({\zeta}\left(z^{-2}-1\right)-1\right)({\zeta}-1)\nonumber\\ &+\boldsymbol{\alpha}_{21}^{\ast}(z)\left({\zeta}^{2}-{\zeta}\right) \Big),\\ \boldsymbol{\gamma}^{\ast}_{12}(z) = & \ \tfrac{1}{2}\Big( \boldsymbol{\alpha}^{\ast}_{12}(z)\Big((1-{\zeta})^{2}z^{-3}+{\zeta}(1-{\zeta}) z^{-2}+{\zeta}(1-{\zeta})z^{-1}+{\zeta}^{2}\Big)\nonumber\\ &+ \boldsymbol{\alpha}^{\ast}_{22}(z)\Big(-\left(z^{-2}-1\right)(1-{\zeta})^{2}+{\zeta}-1 \Big)\nonumber\\ &+\boldsymbol{\alpha}^{\ast}_{11}(z)\Big(\left(z^{-1}-1\right)(1-{\zeta})^{2}+1-{\zeta} \Big) -\boldsymbol{\alpha}^{\ast}_{21}(z)(1-{\zeta})^{2} \Big)\Big/ \left(z^{-1}-1\right)\!,\nonumber\\ \boldsymbol{\gamma}^{\ast}_{21}(z)= & \ \tfrac{1}{2}\left(z^{-2}-1\right)\Big( \boldsymbol{\alpha}^{\ast}_{12}(z)\Big(-{\zeta}^{2}z^{-3}+\left({\zeta}+{\zeta}^{2}\right)\left(z^{-2}+z^{-1}\right)-({\zeta}+1)^{2} \Big)\nonumber\\ &+\boldsymbol{\alpha}^{\ast}_{11}(z){\zeta}\left(1-{\zeta}\left(z^{-1}-1\right)\right)+\boldsymbol{\alpha}^{\ast}_{22}(z) {\zeta}\left({\zeta}\left(z^{-2}-1\right)-1\right)+{\zeta}^{2}\boldsymbol{\alpha}^{\ast}_{21}(z) \Big),\nonumber\\ \boldsymbol{\gamma}^{\ast}_{22}(z) = & \ \tfrac{1}{2} \Big(\boldsymbol{\alpha}^{\ast}_{12}(z)\Big(\left({\zeta}^{2}-{\zeta}\right)z^{-3}+\left(1-{\zeta}^{2}\right)z^{-2}-{\zeta}^{2}z^{-1}+\left({\zeta}^{2}+{\zeta}\right) \Big)\nonumber\\ & +\boldsymbol{\alpha}^{\ast}_{11}(z)(1-{\zeta})\left(1-{\zeta}\left(z^{-1}-1\right)\right)+\boldsymbol{\alpha}^{\ast}_{22}(z){\zeta}\left(\left(1-{\zeta}\right)\left(z^{-2}-1\right)+1\right)\nonumber\\ 
&+\boldsymbol{\alpha}^{\ast}_{21}(z)\left({\zeta}-{\zeta}^{2}\right) \Big)\nonumber. \end{align} (5.16) In the special case $$\boldsymbol{\alpha }^{\ast }_{12}(1)=0$$, ζ = 1, C*(z) reduces to \begin{align} \boldsymbol{\gamma}^{\ast}_{11}(z) = & \ \tfrac{1}{2}(z^{-1}+1)\Big((z^{-2}-2)\boldsymbol{\alpha}^{\ast}_{12}(z)+\boldsymbol{\alpha}_{11}^{\ast}(z)\Big),\\ \nonumber \boldsymbol{\gamma}^{\ast}_{12}(z) = & \ \tfrac{1}{2}\frac{\boldsymbol{\alpha}^{\ast}_{12}(z)}{(z^{-1}-1)},\\ \nonumber \boldsymbol{\gamma}^{\ast}_{21}(z)= & \ \tfrac{1}{2}(z^{-2}-1)\Big(\boldsymbol{\alpha}_{21}^{\ast}(z)-\boldsymbol{\alpha}_{11}^{\ast}(z)(z^{-1}-2) \\ \nonumber &+\boldsymbol{\alpha}_{22}^{\ast}(z)(z^{-2}-2)-\boldsymbol{\alpha}_{12}^{\ast}(z)(z^{-1}-2)(z^{-2}-2)\Big),\\ \nonumber \boldsymbol{\gamma}^{\ast}_{22}(z) = & \ \tfrac{1}{2}(\boldsymbol{\alpha}^{\ast}_{22}(z)-(z^{-1}-2)\boldsymbol{\alpha}_{12}^{\ast}(z)). \end{align} (5.17) With the explicit form of C, we can prove the following: Lemma 5.14 Let φA be the constant corresponding to the spectral condition in equation (5.1) satisfied by A. Then the constant corresponding to the spectral condition satisfied by C is $$\varphi _{\mathbf{C}}=\varphi _{\mathbf{A}}-\frac{1}{2}$$. In particular, the application of Procedure 5.13 to interpolatory HSS(2)s does not result in interpolatory HSS(2)s. Proof. 
Differentiating $$\boldsymbol{\gamma }_{11}^{\ast }(z)$$ and $$\boldsymbol{\gamma }_{12}^{\ast }(z)$$ given in equation (5.16), and evaluating at z = 1 we obtain in view of condition (3) in Lemma 5.1 \begin{align*} 2 \varphi_{\mathbf{C}}={\boldsymbol{\gamma}_{11}^{\ast}}^{\prime}(1)-2\boldsymbol{\gamma}_{12}^{\ast}(1)=& \:{\boldsymbol{\alpha}_{11}^{\ast}}^{\prime}(1)-2\boldsymbol{\alpha}_{12}^{\ast}(1) +({\zeta}-1)\left({\boldsymbol{\alpha}_{21}^{\ast}}^{\prime}(1)-2\boldsymbol{\alpha}_{22}^{\ast}(1)\right)\\ &+2({\zeta}-1)+\frac{1}{2}\boldsymbol{\alpha}_{12}^{\ast}(1)-{\zeta}-\frac{1}{2}\boldsymbol{\alpha}_{22}^{\ast}(1)(1-{\zeta})\\ =&\: 2\varphi_{\mathbf{A}} +\frac{1}{2}\left(\boldsymbol{\alpha}_{12}^{\ast}(1)-\boldsymbol{\alpha}_{22}^{\ast}(1)\right)-\frac{1}{2}{\zeta}\left(2-\boldsymbol{\alpha}_{22}^{\ast}(1)\right)\\ =&\: 2\left(\varphi_{\mathbf{A}} -\frac{1}{2}\right)\!.\end{align*} From the explicit form of C we can infer the following: Corollary 5.15 Let A and C be masks as in Procedure 5.13. If A has support contained in [−N1, N2] with $$N_{1},N_{2} \in \mathbb{N}$$, then the support of C is contained in [−N1 − 5, N2]. Therefore Procedure 5.13 increases the support length at most by 5. Corollary 5.16 Let A be a mask satisfying the spectral condition of (5.1) and let its associated Taylor scheme be convergent. Assume that $$\boldsymbol{\alpha }_{12}^{\ast }(1)=0$$ (i.e. ζ = 1). Denote by C the mask obtained via Procedure 5.13. Then $$\boldsymbol{\gamma }_{12}^{\ast }(1)=0$$ if and only if $${\boldsymbol{\alpha }_{12}^{\ast }}^{\prime }(1)=0$$. Proof. From the definition of C in (5.16) it is easy to see that $$\boldsymbol{\gamma }_{12}^{\ast }(1)=-\tfrac{1}{2}{\boldsymbol{\alpha }_{12}^{\ast }}^{\prime }(1)$$. Therefore $$\boldsymbol{\gamma }_{12}^{\ast }(1)=0$$ iff $${\boldsymbol{\alpha }_{12}^{\ast }}^{\prime }(1)=0$$. Let r be the multiplicity of the root at 1 of $$\boldsymbol{\alpha }_{12}^{\ast }(z)$$. 
Corollary 5.16 implies that r − 1 iterations of the smoothing procedure stay within the special case of ζ = 1. Example 5.17 We consider the Hermite(2) scheme generating C1 piecewise cubic polynomials interpolating the initial data (see Merrien, 1992). The mask of the scheme is given by $$A_{-1}=\left(\begin{array}{@{}r r@{}} \frac{1}{2} & -\frac{1}{8} \\[6pt] \frac{3}{4} & -\frac{1}{8} \end{array}\right)\!, \quad A_{0}=\left(\begin{array}{@{}r r@{}} 1 & 0\\[6pt] 0 & \frac{1}{2} \end{array}\right)\!, \quad A_{1}=\left(\begin{array}{@{}r r@{}} \frac{1}{2} & \frac{1}{8}\\[6pt] -\frac{3}{4} & -\frac{1}{8} \end{array}\right)\!.$$ It is easy to see that it satisfies the spectral condition of equation (5.1) with φA = 0. In the study by Merrien & Sauer (2012) it is proved that its Taylor scheme is convergent with limit functions of vanishing first component (and thus the original HSS(2) is HC1). We apply Procedure 5.13 to this scheme to obtain a new HSS(2) of regularity HC2, using the explicit expressions in equations (5.16) and (5.17). First we compute the symbol: $$\mathbf{A}^{\ast}(z)=\left(\begin{array}{@{}c c@{}} \frac{1}{2}(1+z)^{2}z^{-1} & -\frac{1}{8}(1-z^{2})z^{-1} \\[6pt] \frac{3}{4}(1-z^{2})z^{-1} & -\frac{1}{8}z^{-1}+\frac{1}{2}- \frac{1}{8}z \end{array}\right)\!.$$ Note that $$\boldsymbol{\alpha }^{\ast }_{12}(1)=0$$ with multiplicity 1. Therefore we are in the special case ζ = 1. We apply equation (5.17) and obtain the symbol of C: $$\mathbf{C}^{\ast}(z)=\frac{1}{16} \left(\begin{array}{@{}c c@{}} \left(z^{-1}+1\right)^{2}\left(-z^{-2}+z^{-1}+6+2z\right) & -z-1 \\[6pt] \left(z^{-2}-1\right)\Big(z^{-4}-3z^{-3}-3z^{-2}+13z^{-1}+6\Big) & z^{-2}-3z^{-1}+3+z \end{array}\right)\!.$$ From Lemma 5.14 we also know that C satisfies the spectral condition with $$\varphi _{\mathbf{C}}=-\tfrac{1}{2}$$. Therefore the HSS(2) associated with C is an HC2 scheme which is not interpolatory. A basic limit function of this scheme is depicted in Fig. 3.
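The computation of C*(z) from (5.17), and the value $$\varphi _{\mathbf{C}}=-\tfrac{1}{2}$$ predicted by Lemma 5.14, can be verified symbolically; a sketch:

```python
import sympy as sp

z = sp.symbols('z')
w = 1/z  # shorthand for z^{-1}

# Symbol A*(z) of the interpolatory scheme of Example 5.17.
a11 = sp.Rational(1, 2)*(1 + z)**2*w
a12 = -sp.Rational(1, 8)*(1 - z**2)*w
a21 = sp.Rational(3, 4)*(1 - z**2)*w
a22 = -sp.Rational(1, 8)*w + sp.Rational(1, 2) - sp.Rational(1, 8)*z

# Equation (5.17), the special case zeta = 1:
g11 = sp.Rational(1, 2)*(w + 1)*((w**2 - 2)*a12 + a11)
g12 = sp.cancel(sp.Rational(1, 2)*a12/(w - 1))
g21 = sp.Rational(1, 2)*(w**2 - 1)*(a21 - a11*(w - 2)
      + a22*(w**2 - 2) - a12*(w - 2)*(w**2 - 2))
g22 = sp.Rational(1, 2)*(a22 - (w - 2)*a12)

# Stated entries of C*(z):
c11 = sp.Rational(1, 16)*(w + 1)**2*(-w**2 + w + 6 + 2*z)
c12 = sp.Rational(1, 16)*(-z - 1)
c21 = sp.Rational(1, 16)*(w**2 - 1)*(w**4 - 3*w**3 - 3*w**2 + 13*w + 6)
c22 = sp.Rational(1, 16)*(w**2 - 3*w + 3 + z)

for g, c in [(g11, c11), (g12, c12), (g21, c21), (g22, c22)]:
    assert sp.simplify(g - c) == 0

# Condition (3) of Lemma 5.1 gives 2*phi_C = gamma11'(1) - 2*gamma12(1) = -1.
assert sp.simplify(sp.diff(c11, z).subs(z, 1) - 2*c12.subs(z, 1)) == -1
```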
Note that the support of C is [−6, 1] and has thus increased from length 3 to length 8. If we want to apply another round of Procedure 5.13, we have to use (5.16) with $$\zeta =\tfrac{14}{15}$$. Example 5.18 We consider one of the de Rham-type HSS(2)s of Dubuc & Merrien (2008) obtained from the scheme of Example 5.17. Its mask is given by \begin{align*} &A_{-2}=\frac{1}{8}\left(\begin{array}{@{}r r@{}} \frac{5}{4} & -\frac{3}{8} \\[6pt] \frac{9}{2} & -\frac{5}{4} \end{array}\right)\!, \quad A_{-1}=\frac{1}{8}\left(\begin{array}{@{}r r@{}} \frac{27}{4} & -\frac{9}{8} \\[6pt] \frac{9}{2} & \frac{3}{4} \end{array}\right)\!,\\[0.3cm] &A_{0}=\frac{1}{8}\left(\begin{array}{@{}r r@{}} \frac{27}{4} & \frac{9}{8} \\[6pt] -\frac{9}{2} & \frac{3}{4} \end{array}\right)\!,\quad A_{1}=\frac{1}{8}\left(\begin{array}{@{}r r@{}} \frac{5}{4} & \frac{3}{8} \\[6pt] -\frac{9}{2} & -\frac{5}{4} \end{array}\right)\!. \end{align*} It is easy to see that it satisfies the spectral condition of equation (5.1) with $$\varphi _{\mathbf{A}}=-\tfrac{1}{2}$$. In the study by Conti et al. (2014) it is proved that its Taylor scheme is C1 with limit functions of vanishing first component (and thus the original HSS(2) is HC2). We apply Procedure 5.13 to this scheme to obtain a new HSS(2) of regularity HC3. First we compute the symbol: $$\mathbf{A}^{\ast}(z)=\frac{1}{16}\left(\begin{array}{@{}c c@{}} \frac{1}{2}\left(z^{-1}+1\right)\left(5z+22+5z^{-1}\right) & -\frac{3}{4}\left(z^{-1}-1\right)\left(z+4+z^{-1}\right) \\[6pt] 9\left(z^{-2}-1\right)(z+1) & \frac{1}{2}\left(z^{-1}+1\right)\left(-5z+8-5z^{-1}\right) \end{array}\right)\!.$$ Note that $$\boldsymbol{\alpha }^{\ast }_{12}(1)=0$$ with multiplicity 1. Therefore, as in Example 5.17, we are in the special case ζ = 1.
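As in Example 5.17, the application of (5.17) can be checked symbolically; a sketch that reproduces the entries of C*(z) displayed next (the (1,1) entry of A*(z) is taken as $$\tfrac{1}{32}(z^{-1}+1)(5z+22+5z^{-1})$$, which factors the mask coefficients 5/32, 27/32, 27/32, 5/32):

```python
import sympy as sp

z = sp.symbols('z')
w = 1/z  # shorthand for z^{-1}

# Symbol A*(z) of the de Rham-type scheme of Example 5.18.
a11 = sp.Rational(1, 32)*(w + 1)*(5*z + 22 + 5*w)
a12 = -sp.Rational(3, 64)*(w - 1)*(z + 4 + w)
a21 = sp.Rational(9, 16)*(w**2 - 1)*(z + 1)
a22 = sp.Rational(1, 32)*(w + 1)*(-5*z + 8 - 5*w)

# Equation (5.17), the special case zeta = 1:
g11 = sp.Rational(1, 2)*(w + 1)*((w**2 - 2)*a12 + a11)
g12 = sp.cancel(sp.Rational(1, 2)*a12/(w - 1))
g21 = sp.Rational(1, 2)*(w**2 - 1)*(a21 - a11*(w - 2)
      + a22*(w**2 - 2) - a12*(w - 2)*(w**2 - 2))
g22 = sp.Rational(1, 2)*(a22 - (w - 2)*a12)

# Stated entries of C*(z):
c11 = sp.Rational(1, 128)*(w + 1)*(-3*w**4 - 9*w**3 + 25*w**2 + 75*w + 36 + 4*z)
c12 = -sp.Rational(3, 128)*(z + 4 + w)
c21 = sp.Rational(1, 128)*(w**2 - 1)*(3*w**5 - 7*w**4 - 37*w**3
      + 37*w**2 + 128*w + 20 - 8*z)
c22 = sp.Rational(1, 128)*(3*w**3 - 7*w**2 - 21*w + 21 - 4*z)

for g, c in [(g11, c11), (g12, c12), (g21, c21), (g22, c22)]:
    assert sp.simplify(g - c) == 0

# Lemma 5.14 predicts phi_C = phi_A - 1/2 = -1, i.e.
# gamma11'(1) - 2*gamma12(1) = -2:
assert sp.simplify(sp.diff(c11, z).subs(z, 1) - 2*c12.subs(z, 1)) == -2
```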
We apply equation (5.17) and obtain the symbol of C: \begin{align*} \boldsymbol{\gamma}^{\ast}_{11}(z)&=\frac{1}{128}\left(z^{-1}+1\right)\left(-3z^{-4}-9z^{-3}+25z^{-2}+75z^{-1}+36+4z\right)\!,\\ \boldsymbol{\gamma}^{\ast}_{12}(z)&=-\frac{3}{128}\left(z+4+z^{-1}\right)\!,\\ \boldsymbol{\gamma}^{\ast}_{21}(z)&=\frac{1}{128}\left(z^{-2}-1\right)\Big(3z^{-5}-7z^{-4}-37z^{-3}+37z^{-2}+128z^{-1}+20-8z\Big),\\ \boldsymbol{\gamma}^{\ast}_{22}(z)&=\frac{1}{128}\left(3z^{-3}-7z^{-2}-21z^{-1}+21-4z\right)\!. \end{align*} We also know from Lemma 5.14 that C satisfies the spectral condition with φC = −1. Therefore the HSS(2) associated with C is an HC3 scheme which is not interpolatory. A basic limit function of this scheme is depicted in Fig. 4.

Note that the support of C is [−7, 1]: it has thus grown from 4 mask coefficients to 9. If we want to apply another round of Procedure 5.13, we have to use equation (5.16) with $$\zeta =\tfrac{41}{44}$$.

Fig. 3. Basic limit functions and their first derivatives of the HSS(2)s of Example 5.17. First column: interpolatory HC1 scheme SA with basic limit function f. Second column: the smoothed noninterpolatory HC2 scheme SC with basic limit function g.

Fig. 4. Basic limit functions, their first and second derivatives of the HSS(2)s of Example 5.18. First column: noninterpolatory HC2 scheme SA with basic limit function f. Second column: smoothed noninterpolatory HC3 scheme SC with basic limit function g.
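The support claim can be read off mechanically from the exponents occurring in the entries of C*(z). A short sketch (again using our own dict representation of Laurent polynomials, exponent to coefficient) expands the four factored entries γ*ₖₗ above and confirms the support [−7, 1]:

```python
from fractions import Fraction as F

def lmul(p, q):
    """Product of two Laurent polynomials given as {exponent: coefficient}."""
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, F(0)) + a * b
    return {k: v for k, v in r.items() if v}

def scale(c, p):
    return {k: c * v for k, v in p.items() if c * v}

# The four entries of C*(z) from Example 5.18, common factor 1/128 included.
g11 = scale(F(1, 128), lmul({-1: F(1), 0: F(1)},
        {-4: F(-3), -3: F(-9), -2: F(25), -1: F(75), 0: F(36), 1: F(4)}))
g12 = scale(F(-3, 128), {1: F(1), 0: F(4), -1: F(1)})
g21 = scale(F(1, 128), lmul({-2: F(1), 0: F(-1)},
        {-5: F(3), -4: F(-7), -3: F(-37), -2: F(37), -1: F(128), 0: F(20), 1: F(-8)}))
g22 = scale(F(1, 128), {-3: F(3), -2: F(-7), -1: F(-21), 0: F(21), 1: F(-4)})

# The support of C is the range of exponents carrying a nonzero coefficient
# in at least one entry of the symbol.
exponents = [e for g in (g11, g12, g21, g22) for e in g]
assert (min(exponents), max(exponents)) == (-7, 1)
```

The extreme exponents come from γ*₂₁, whose factor z⁻² pushes the lowest power down to z⁻⁷; this is the entry that determines the support growth noted in the text.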
Acknowledgements

Most of this research was done while the first author was with TU Graz. The authors thank Costanza Conti, Tomas Sauer and Johannes Wallner for their valuable comments and suggestions. We are also grateful to the anonymous reviewers who helped to improve this paper in many aspects.

Funding

Austrian Science Fund (W1230, I705).

References

Cavaretta, A., Dahmen, W. & Micchelli, C. (1991) Stationary subdivision. Mem. Amer. Math. Soc.
Charina, M., Conti, C. & Sauer, T. (2005) Regularity of multivariate vector subdivision schemes. Numer. Algorithms, 39, 97--113.
Cohen, A., Dyn, N. & Levin, D. (1996) Stability and inter-dependence of matrix subdivision schemes. Advanced Topics in Multivariate Approximation (F. Fontanella, K. Jetter & P. J. Laurent eds), Ser. Approx. Decompos. 8. River Edge, NJ: World Scientific Publishing, pp. 33--45.
Conti, C., Merrien, J.-L. & Romani, L. (2014) Dual Hermite subdivision schemes of de Rham-type. BIT Numer. Math., 54, 955--977.
Conti, C., Cotronei, M. & Sauer, T. (2016) Factorization of Hermite subdivision operators preserving exponentials and polynomials. Adv. Comput. Math., 42, 1055--1079.
Dubuc, S. (2006) Scalar and Hermite subdivision schemes. Appl. Comput. Harmon. Anal., 21, 376--394.
Dubuc, S. & Merrien, J.-L. (2005) Convergent vector and Hermite subdivision schemes. Constr. Approx., 23, 1--22.
Dubuc, S. & Merrien, J.-L. (2008) de Rham transform of a Hermite subdivision scheme. Approximation Theory XII (M. Neamtu & L. L. Schumaker eds). Nashville, TN: Nashboro Press, pp. 121--132.
Dubuc, S. & Merrien, J.-L. (2009) Hermite subdivision schemes and Taylor polynomials. Constr. Approx., 29, 219--245.
Dyn, N., Gregory, J. A. & Levin, D. (1991) Analysis of uniform binary subdivision schemes for curve design. Constr. Approx., 7, 127--147.
Dyn, N. (1992) Subdivision schemes in computer-aided geometric design. Advances in Numerical Analysis. New York: Oxford University Press, pp. 36--104.
Dyn, N. & Levin, D. (1995) Analysis of Hermite-type subdivision schemes. Approximation Theory VIII. Wavelets and Multilevel Approximation (C. K. Chui & L. L. Schumaker eds), vol. 2. River Edge, NJ: World Scientific, pp. 117--124.
Dyn, N. & Levin, D. (1999) Analysis of Hermite-interpolatory subdivision schemes. Spline Functions and the Theory of Wavelets (S. Dubuc & G. Deslauriers eds). Providence, RI: Amer. Math. Soc., pp. 105--113.
Dyn, N. & Levin, D. (2002) Subdivision schemes in geometric modelling. Acta Numer., 11, 73--144.
Guglielmi, N., Manni, C. & Vitale, D. (2011) Convergence analysis of C2 Hermite interpolatory subdivision schemes by explicit joint spectral radius formulas. Linear Algebra Appl., 434, 884--902.
Han, B. (2001) Approximation properties and construction of Hermite interpolants and biorthogonal multiwavelets. J. Approx. Theory, 110, 18--53.
Han, B., Yu, T. & Xue, Y. (2005) Noninterpolatory Hermite subdivision schemes. Math. Comput., 74, 1345--1367.
Jeong, B. & Yoon, J. (2017) Construction of Hermite subdivision schemes reproducing polynomials. J. Math. Anal. Appl., 451, 565--582.
Merrien, J.-L. (1992) A family of Hermite interpolants by bisection algorithms. Numer. Algorithms, 2, 187--200.
Merrien, J.-L. (1999) Interpolants d’Hermite C2 obtenus par subdivision. ESAIM Math. Model. Numer. Anal., 33, 55--65.
Merrien, J.-L. & Sauer, T. (2012) From Hermite to stationary subdivision schemes in one and several variables. Adv. Comput. Math., 36, 547--579.
Merrien, J.-L. & Sauer, T. (2017) Extended Hermite subdivision schemes. J. Comput. Appl. Math., 317, 343--361.
Micchelli, C. & Sauer, T. (1998) On vector subdivision. Math. Z., 229, 621--674.
Sauer, T. (2002) Stationary vector subdivision: quotient ideals, differences and approximation power. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM, 96, 257--277.
Sauer, T. (2003) How to generate smoother refinable functions from given ones. Modern Developments in Multivariate Approximation, vol. 145. Basel: Birkhäuser, pp. 279--293.

© The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

### Journal

IMA Journal of Numerical Analysis, Oxford University Press

Published: Mar 29, 2018
