Quasi–Monte Carlo integration with product weights for elliptic PDEs with log-normal coefficients

Abstract

Quasi–Monte Carlo (QMC) integration of output functionals of solutions of the diffusion problem with a log-normal random coefficient is considered. The random coefficient is assumed to be given by an exponential of a Gaussian random field that is represented by a series expansion with respect to some system of functions. Graham et al. (2015, Quasi-Monte Carlo finite element methods for elliptic PDEs with lognormal random coefficients. Numer. Math., 131, 329–368) developed a lattice-based QMC theory for this problem and established a quadrature error decay rate ≈ 1 with respect to the number of quadrature points. The key assumption there was a suitable summability condition on the aforementioned system of functions. As a consequence, product-order-dependent weights were used to construct the lattice rule. In this paper, a different assumption on the system is considered. This assumption, originally considered by Bachmayr et al. (2017c, Sparse polynomial approximation of parametric elliptic PDEs. Part I: affine coefficients. ESAIM Math. Model. Numer. Anal., 51, 321–339) to utilise the locality of support of basis functions in the context of polynomial approximations applied to the same type of diffusion problem, is shown to work well with the same lattice-based QMC method considered by Graham et al.: the assumption leads us to product weights, which enables the construction of the QMC method with a smaller computational cost than in Graham et al. A quadrature error decay rate ≈ 1 is established, and the theory developed here is applied to a wavelet stochastic model. By a characterisation of the Besov smoothness, it is shown that a wide class of path smoothness can be treated with this framework.

1. Introduction

This paper is concerned with quasi–Monte Carlo (QMC) integration of output functionals of solutions of the diffusion problem with a random coefficient of the form \begin{equation} -\nabla\cdot (a(x,{\boldsymbol{y}} )\nabla u(x,{\boldsymbol{y}} )) = f(x)\quad \textrm{in } D\subset{\mathbb{R}}^d, \qquad u=0\ \textrm{ on }\partial{D}, \end{equation} (1.1) where $${\boldsymbol{y}}\in \varOmega $$ is an element of a suitable probability space $$(\varOmega ,{\mathscr{F}},\mathbb{P})$$ (clarified below), and $$D\subset{\mathbb{R}}^d$$ is a bounded domain with Lipschitz boundary. Our interest is in the log-normal case, that is, $$a(\cdot ,\cdot )\colon{D}\times \varOmega \to{\mathbb{R}}$$ is assumed to have the form \begin{equation} a(x,{\boldsymbol{y}} )=a_*(x)+a_0(x)\exp(T(x,{\boldsymbol{y}} )) \end{equation} (1.2) with continuous functions $$a_*{(x)}\geqslant 0$$, $$a_0{(x)}>0$$ on $$\overline{D}$$, and a Gaussian random field $$T(\cdot ,\cdot )\colon D\times \varOmega \to{\mathbb{R}}$$ represented by a series expansion \begin{equation} T(x,{\boldsymbol{y}})=\sum_{j= 1}^\infty Y_j ({\boldsymbol{y}}) \psi_{j}(x),\quad x\in D, \end{equation} (1.3) where $$\{Y_j\}$$ is a collection of independent standard normal random variables on $$(\varOmega ,{\mathscr{F}},\mathbb{P})$$, and $$(\psi _{j})_{j\geqslant 1}$$ is a suitable system of real-valued measurable functions on D. To handle a wide class of a and f, we consider the weak formulation of the problem (1.1). By V we denote the zero-trace Sobolev space $$H^1_0(D)$$ endowed with the norm \begin{equation} \left\| v \right\|_V := \left( \int_{D} |\nabla v(x)|^2\,{\textrm{d}}x\right)^{\frac12}, \end{equation} (1.4) and by $$V^{\prime}:=H^{-1}(D)$$ the topological dual space of V.
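To fix ideas, the following sketch samples a truncated realisation of the coefficient (1.2)–(1.3); the concrete choices $$a_*=0$$, $$a_0=1$$ and $$\psi_j(x)=j^{-2}\sin(j\pi x)$$ on D = (0, 1) are purely illustrative and not taken from the paper.

```python
import math
import random

def coefficient(x, y, a_star=0.0, a0=1.0):
    """Truncated coefficient a(x, y) = a_* + a_0 * exp(sum_j y_j * psi_j(x)),
    with the illustrative (hypothetical) choice psi_j(x) = j**-2 * sin(j*pi*x)."""
    T = sum(y_j * math.sin(j * math.pi * x) / j ** 2
            for j, y_j in enumerate(y, start=1))
    return a_star + a0 * math.exp(T)

rng = random.Random(0)
s = 50                                       # truncation dimension
y = [rng.gauss(0.0, 1.0) for _ in range(s)]  # i.i.d. standard normal parameters y_j
values = [coefficient(i / 100, y) for i in range(101)]
```

Since $$a_*\geqslant 0$$ and $$a_0\exp(T)>0$$, every realisation sampled this way is strictly positive, which is what the coercivity of the bilinear form below relies on.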
For the given random coefficient a(x, y), we define the bilinear form $$\mathscr{A}({\boldsymbol{y}};\cdot ,\cdot )\colon V\times V\to{\mathbb{R}}$$ by \begin{equation} \varOmega\ni{\boldsymbol{y}} \mapsto \mathscr{A}({\boldsymbol{y}};v,w) := \int_{D} a(x,{\boldsymbol{y}} ) \nabla v(x)\cdot \nabla w(x)\,{\textrm{d}}x\ \textrm{ for all }v,w\in V. \end{equation} (1.5) Then for any $${\boldsymbol{y}}\in \varOmega $$ the weak formulation of (1.1) reads as follows: find u(⋅, y) ∈ V such that \begin{equation} \mathscr{A}({\boldsymbol{y}};u(\cdot,{\boldsymbol{y}}),v) =\langle f, v \rangle\quad \textrm{ for all }v\in V, \end{equation} (1.6) where f is assumed to be in V′, and ⟨⋅, ⋅⟩ denotes the duality pairing between V′ and V. We impose further conditions to ensure the well-posedness of the problem, which we will discuss later. The ultimate goal is to compute $$\mathbb{E}[\mathcal{G}(u(\cdot ))]$$, the expected value of $$\mathcal{G}(u(\cdot ,{\boldsymbol{y}}))$$, where $$\mathcal{G}$$ is a bounded linear functional on V. The problem (1.1), and that of computing $$\mathbb{E}[\mathcal{G}(u(\cdot ))]$$, arises in many applications such as hydrology (Dagan, 1984; Naff et al., 1998a,b) and has attracted attention in computational uncertainty quantification (UQ). See, for example, Schwab & Gittelson (2011); Cohen & DeVore (2015); Kuo & Nuyens (2016) and references therein. Two major ways to tackle this problem are function approximation and quadrature, in particular QMC methods. Our interest is in QMC. It is now well known that QMC methods beat the plain-vanilla Monte Carlo methods in various settings when applied to the problem of computing $$\mathbb{E}[\mathcal{G}(u(\cdot ))]$$ (Kuo et al., 2012; Graham et al., 2015; Kuo & Nuyens, 2016). Among the QMC methods, the algorithm we consider is randomly shifted lattice rules. Graham et al.
(2015) showed that when the randomly shifted lattice rules are applied to the class of partial differential equations (PDEs) we consider, a QMC convergence rate, in terms of the root-mean-square error, ≈ 1 is achievable, which is known to be optimal for lattice rules in the function space they consider. More precisely, they showed that quadrature points for randomly shifted lattice rules that achieve such a rate can be constructed using an algorithm called the component-by-component (CBC) construction. The algorithm takes as input weights, which represent the relative importance of subsets of the variables of the integrand, and its cost depends on the type of weights. The weights considered by Graham et al. (2015) are so-called product-order-dependent (POD) weights, which were determined by minimising an error bound. For POD weights, the CBC construction takes $$\mathcal{O}(s n\log n+s^2n)$$ operations, where n is the number of quadrature points and s is the dimension of truncation $$\sum _{j= 1}^{s} Y_j ({\boldsymbol{y}}) \psi _{j}(x)$$. Often in practice we want to approximate the random coefficient well, and consequently s has to be taken large, in which case the second term of $$\mathcal{O}(s n\log n+s^2n)$$ becomes dominant. The contributions of the current paper are twofold: a proof of a convergence rate ≈ 1 with product weights, and an application to a stochastic model with wavelets. In more detail, we show that for the problem considered here the generating vector can be obtained by the CBC construction with weights called product weights while achieving the optimal rate ≈ 1 in the function space we consider, and further, we show that the developed theory can be applied to a stochastic model that covers a wide class of wavelet bases. The use of the POD weights originates from the summability condition imposed on $$(\psi _j)$$ by Graham et al. (2015). We consider a different condition, the one proposed by Bachmayr et al.
(2017c) to utilise the locality of supports of $$(\psi _j)$$ in the context of polynomial approximations applied to PDEs with random coefficients. We show that under this condition, the shifted lattice rule for the PDE problem can be constructed with a CBC algorithm with computational cost $$\mathcal{O}(s n\log n)$$, the cost with product weights as shown by Dick et al. (2014). Further, the stochastic model we consider broadens the range of applicability of QMC methods to PDEs with log-normal coefficients. One concern about the conditions imposed in Graham et al. (2015), in particular the summability condition on $$(\psi _j)$$, is that they are so strong that only random coefficients with smooth realisations are within the scope of the theory. We show that, at least for d = 1, 2, rougher random coefficients (e.g., realisations with only some Hölder smoothness) can be treated. Upon finalising this paper, we learnt about the paper by Herrmann & Schwab (2016). Our works share the same spirit in that we are all inspired by the work by Bachmayr et al. (2017a). Herrmann & Schwab (2016) develop a theory under a setting essentially the same as ours. In contrast to our paper, they treat the truncation error in a general setting, and as for the QMC integration error, they consider both the exponential weight functions and the Gaussian weight function for the weighted Sobolev space. As for the exponential weight function, the current paper and Herrmann & Schwab (2016) impose essentially the same assumptions (Assumption 2.1 below) and show the same convergence rate. We provide a different, arguably simpler, proof, and we discuss the roughness of the realisations that can be considered. Moreover, our proof strategy turns out to result in different (product) weights.
This can lead to a smaller implied constant in the estimate, especially when the field's fluctuation is large, as we discuss later. Further, in contrast to Herrmann & Schwab (2016), we provide a discussion of the roughness of the realisations of random coefficients, as mentioned above. In Section 5, we provide a discussion via the Besov characterisation of the realisations of the random coefficients and the embedding results. We now remark on the uniform case. One of the keys in the current paper, which deals with the log-normal case, is the estimate of the derivative given in Corollary 3.2. This result essentially follows from the bounds obtained by Bachmayr et al. (2017a). An argument similar to the one employed in the current paper is applicable to the randomly shifted lattice rules applied to PDEs with uniform random coefficients considered by Kuo et al. (2012), using the derivative bounds for the uniform case obtained by Bachmayr et al. (2017c). For this, we refer to Gantner et al. (2018), in which not only randomly shifted lattice rules but also higher-order QMC rules were considered. The outline of the rest of the paper is as follows. In Section 2 we describe the problem we consider in detail. In Section 3 we discuss bounds on mixed derivatives. Then in Section 4, we develop a QMC theory applied to a PDE problem with log-normal coefficients using the product weights. Section 5 provides an application of the theory: we consider a stochastic model represented by a wavelet Riesz basis. Then we close this paper with concluding remarks in Section 6.

2. Setting

We assume that the Gaussian random field T admits a series representation as in (1.3).
For simplicity we fix $$(\varOmega ,{\mathscr{F}},\mathbb{P}):=({\mathbb{R}}^{{\mathbb{N}}},\mathcal{B}({\mathbb{R}}^{{\mathbb{N}}}),\mathbb{P}_{Y})$$, where $${\mathbb{N}}:=\{1,2,\dotsc \}$$, $$\mathcal{B}({\mathbb{R}}^{{\mathbb{N}}})$$ is the Borel $$\sigma $$-algebra generated by the product topology in $${\mathbb{R}}^{{\mathbb{N}}}$$ and $$\mathbb{P}_{Y}:=\prod _{j=1}^{\infty }\mathbb{P}_{Y_j}$$ is the product measure on $$({\mathbb{R}}^{{\mathbb{N}}},\mathcal{B}({\mathbb{R}}^{{\mathbb{N}}}))$$ defined by the standard normal distributions $$\{\mathbb{P}_{Y_j} \}_{j\in{\mathbb{N}}}$$ on $${\mathbb{R}}$$ (see, for example, Itô, 1984, Chapter 2 for details). Then for each $$\boldsymbol{y}\in \varOmega $$ we may see $$Y_j(\boldsymbol{y})$$ ($$j\in{\mathbb{N}}$$) as given by the projection (or the canonical coordinate function) $$ \varOmega={\mathbb{R}}^{{\mathbb{N}}}\ni \boldsymbol{y}\mapsto Y_j(\boldsymbol{y})=:y_j\in{\mathbb{R}}. $$ Note in particular that from the continuity of the projection, the mapping $$\boldsymbol{y}\mapsto y_j$$ is $$\mathcal{B}({\mathbb{R}}^{{\mathbb{N}}})/\mathcal{B}({\mathbb{R}})$$-measurable. In the following, we write T above as \begin{equation} T(x,\boldsymbol{y})=\sum_{j= 1}^\infty y_j \psi_{j}(x),\quad x\in D \end{equation} (2.1) and view it as a deterministically parametrised function on D. We will impose a condition considered by Bachmayr et al. (2017c) on $$(\psi _j)$$ (see Assumption 2.1 below), which is particularly suitable for $$\psi _j$$ with local support. To ensure that the law on $${\mathbb{R}}^D$$ is well defined, we suppose \begin{equation} \sum_{j= 1}^\infty \psi_{j}(x)^2<\infty\quad\textrm{for all } x\in D, \end{equation} (2.2) so that the covariance function $$\mathbb{E}[T(x_1)T(x_2)]=\sum _{j\geqslant 1}\psi _{j}(x_1)\psi _{j}(x_2)$$ ($$x_1,x_2\in D$$) is well defined. We consider the parametrised elliptic PDE (1.1). To prove well-posedness of the variational problem (1.6), we use the Lax–Milgram lemma.
Conditions that ensure that the bilinear form $$\mathscr{A}(\boldsymbol{y};\cdot ,\cdot )$$ defined by the diffusion coefficient a is coercive and bounded are discussed later. Motivated by UQ applications, we are interested in expected values of bounded linear functionals of the solution of the above PDEs. We note that the linearity is assumed for the sake of theoretical tractability. Theoretical treatment of nonlinear functionals will require suitable smoothness and mild growth of suitable derivatives, but it is almost unexplored, with an exception being an attempt by Scheichl et al. (2017). Given a continuous linear functional $$\mathcal{G}\in{V^{^{\prime}}}$$ we wish to compute $$\mathbb{E}[\mathcal{G}(u(\cdot ))]:=\int _{{\mathbb{R}}^{\mathbb{N}}}\mathcal{G}(u(\cdot ,\boldsymbol{y})) \mathrm{d}\mathbb{P}_{Y}(\boldsymbol{y})$$, where the measurability of the integrands will be discussed later. To compute $$\mathbb{E}[\mathcal{G}(u(\cdot ))]$$ we use a sampling method: generate realisations of a(x, y), each of which yields a solution u(x, y) via the PDE (1.1), and from these compute an approximation of $$\mathbb{E}[\mathcal{G}(u(\cdot ))]$$. In practice, these operations cannot be performed exactly, and numerical methods need to be employed. This paper gives an analysis of the error incurred by the method outlined as follows. We truncate the series (2.1) to s terms for some integer s ⩾ 1, which results in the coefficient $$a(x,(y_1,\dots ,y_s,0,0,0,\dots ))$$ of the PDE (1.1). Further, the expectation of the random variable obtained by applying the linear functional $$\mathcal{G}$$ to the solution of the correspondingly truncated PDE is approximated by a QMC method. Let $$u^s(x)=u^s(x,\boldsymbol{y})$$ be the solution of (1.1) with $$\boldsymbol{y}=(y_1,\dotsc ,y_s,0,0,0,\dotsc )$$, that is, of the following problem: find $$u^s\in V$$ such that \begin{equation} -{\nabla} \cdot(a(x,(y_1,\dotsc,y_s,0,0,\dotsc))\nabla u^s(x)) = f(x) \ \ \textrm{in } D, \quad u^s = 0 \textrm{ on } \partial D.
\end{equation} (2.3) Here, even though the dependence of $$u^s$$ on y is only through $$(y_1,\dotsc ,y_s)$$, we abuse the notation slightly by writing $$u^s(x,\boldsymbol{y}):= u^s(x,(y_1,\dotsc ,y_s,0,0,0,\dotsc ))$$. Let $$\varPhi _s^{-1}\colon (0,1)^s\ni \boldsymbol{v}\mapsto \varPhi _s^{-1}(\boldsymbol{v})\in{\mathbb{R}}^s$$ be the inverse of the standard normal cumulative distribution function applied to each entry of v. We write $$F(\boldsymbol{y}):=F(y_1,\dotsc ,y_s)=\mathcal{G}(u^s(\cdot ,\boldsymbol{y}))$$ and \begin{equation} I_s(F) := \int_{{\boldsymbol{v}\in(0,1)^s}} F(\varPhi_s^{-1}(\boldsymbol{v})) \mathrm{\ d}{\boldsymbol{v}} =\int_{{\boldsymbol{y}\in\mathbb{R}^s}} \mathcal{G}(u^s(\cdot,\boldsymbol{y})) \prod\limits_{j=1}^{s}\phi(y_j)\mathrm{\ d}\boldsymbol{y} =\mathbb{E}[\mathcal{G}(u^s)], \end{equation} (2.4) where $$\phi $$ is the probability density function of the standard normal random variable. The measurability of the mapping $${\mathbb{R}}^s\ni \boldsymbol{y}\mapsto \mathcal{G}(u^s(\cdot ,\boldsymbol{y}))\in{\mathbb{R}}$$ will be discussed later. In order to approximate $$I_s(F)$$, we employ a QMC method called a randomly shifted lattice rule. This is an equal-weight quadrature rule of the form $$ \mathcal{Q}_{s,n}(\boldsymbol{\varDelta};F) :=\frac{1}n\sum_{i=1}^nF\left(\varPhi^{-1}_s\left(\mathrm{frac}\left(\frac{i\boldsymbol{z}}{n}+\boldsymbol{\varDelta}\right)\right)\right), $$ where the function $$\mathrm{frac}(\cdot )\colon{\mathbb{R}}^s\ni \boldsymbol{y}\mapsto \mathrm{frac}(\boldsymbol{y})\in [0,1)^s$$ takes the fractional part of each component in y. Here, $$\boldsymbol{z}\in{\mathbb{N}}^s$$ is a carefully chosen point called the (deterministic) generating vector and $$\boldsymbol{\varDelta }\in [0,1]^s$$ is the random shift. We assume the random shift $$\boldsymbol{\varDelta }$$ is a $$[0,1]^s$$-valued uniform random variable defined on a suitable probability space different from $$(\varOmega ,\mathscr{F},\mathbb{P})$$.
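A minimal sketch of the rule $$\mathcal{Q}_{s,n}(\boldsymbol{\varDelta };F)$$ above; the generating vector z and the shift Δ used in the example are placeholders, not CBC-optimised choices.

```python
import math
from statistics import NormalDist

def shifted_lattice_rule(F, z, n, shift):
    """Q_{s,n}(Delta; F): average F(Phi_s^{-1}(frac(i*z/n + Delta))) over i = 1..n."""
    inv_cdf = NormalDist().inv_cdf      # Phi^{-1}, applied componentwise
    eps = 1e-12                         # guard: inv_cdf requires arguments in (0, 1)
    total = 0.0
    for i in range(1, n + 1):
        v = [math.modf(i * zj / n + dj)[0] for zj, dj in zip(z, shift)]  # frac(.)
        total += F([inv_cdf(min(max(vj, eps), 1.0 - eps)) for vj in v])
    return total / n

# illustrative one-dimensional use: for F(y) = y_1^2 the exact value is E[Y^2] = 1
approx = shifted_lattice_rule(lambda y: y[0] ** 2, [1], 1024, [0.123456789])
```

In practice one draws several i.i.d. shifts Δ and averages the results; the sample variance over the shifts then yields a practical estimate of the root-mean-square error (2.5) below.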
For further details of the randomly shifted lattice rules, we refer to the surveys Dick et al. (2013); Kuo & Nuyens (2016) and references therein. We want to evaluate the root-mean-square error \begin{equation} \sqrt{ \mathbb{E}^{\boldsymbol{\varDelta}} \left[ \big( \mathbb{E}[\mathcal{G}(u)] -\mathcal{Q}_{s,n}(\boldsymbol{\varDelta};F) \big)^2 \right] } \end{equation} (2.5) where $$\mathbb{E}^{\boldsymbol{\varDelta }}$$ is the expectation with respect to the random shift. Note that in practice the solution $$u^s$$ needs to be approximated by some numerical scheme $$\widetilde{u}^s$$, such as the finite element method as considered in Graham et al. (2015); Kuo & Nuyens (2016), which results in computing $$\widetilde{F}(\boldsymbol{y}):=\mathcal{G}(\widetilde{u}^s(\cdot,\boldsymbol{y}))$$. Thus, the error $$e_{s,n}:=\sqrt{ \mathbb{E}^{\boldsymbol{\varDelta }} [ ( \mathbb{E}[\mathcal{G}(u)] -\mathcal{Q}_{s,n}(\boldsymbol{\varDelta };\widetilde{F}) )^2 ] }$$ is what we need to evaluate in practice. Via the trivial decomposition we have, using $$\mathbb{E}^{\boldsymbol{\varDelta }}[\mathcal{Q}_{s,n}(\boldsymbol{\varDelta };\widetilde{F})] ={\mathbb{E}[\mathcal{G}(\widetilde{u}^s)]}$$ (see, for example, Dick et al., 2013), \begin{equation} e_{s,n}^2 ={ (\mathbb{E}[\mathcal{G}(u)-\mathcal{G}(\widetilde{u}^s)])^2 + \mathbb{E}^{\boldsymbol{\varDelta}} \left[ \big( \mathbb{E}[\mathcal{G}(\widetilde{u}^s)] -\mathcal{Q}_{s,n}(\boldsymbol{\varDelta};\widetilde{F}) \big)^2 \right].} \end{equation} (2.6) For the sake of simplicity, we forgo the discussion of the numerical approximation of the solution of the PDE. Instead, we discuss the smoothness of the realisations of the random coefficient. Then, given a suitable smoothness of the boundary ∂D, the convergence rate of $$\mathbb{E}[\mathcal{G}(u)-\mathcal{G}(\widetilde{u}^s)]$$ is typically obtained from the smoothness of the realisations of the coefficients a(⋅, y), via the regularity of the solution u. See Graham et al.
(2015); Kuo & Nuyens (2016). Therefore, in the following we concentrate on the truncation error and the quadrature error, and the realisations of a. In the course of the error analyses, we assume $$(\psi _j)$$ satisfies the following assumption.

Assumption 2.1 The system $$(\psi _j)$$ satisfies the following: there exists a positive sequence $$(\rho _j)$$ such that \begin{equation} \sup_{x\in{D} }\sum_{j\geqslant 1}\rho_j|\psi_j(x)|=:\kappa <\ln2, \end{equation} (2.7) and further, \begin{equation} (1/\rho_j)\in\ell^{q}\qquad\textrm{ for some }{q}\in(0,1]. \end{equation} (2.8)

We also use the following weaker assumption.

Assumption 2.2 This is the same as Assumption 2.1, but with the condition (2.8) replaced by \begin{equation} (1/\rho_j)\in\ell^{q}\qquad\textrm{ for some }q\in(0,\infty). \end{equation} (2.9)

We note that (2.9), and thus also (2.8), implies $$\rho _j\to \infty $$ as $$j\to \infty $$. Some remarks on the assumptions are in order. First note that Assumption 2.1 implies $$\sum _{j\geqslant 1}|\psi_j (x)|<\infty $$ for any x ∈ D, and hence (2.2). Assumption 2.2 is used to obtain an estimate on the mixed derivatives with respect to the random parameters $$y_j$$, and further, ensures the almost sure well-posedness of problem (1.6)—see Corollary 3.2 and Remark 3.3. Assumption 2.1 is used to obtain a dimension-independent QMC error estimate—see Theorems 4.4 and 5.2. The stronger the condition (2.8) that the system $$(\psi _j)$$ satisfies, that is, the smaller q is, the smoother the realisations of the random coefficient become. In Section 5.2.1, we discuss the smoothness of realisations allowed by these conditions.

3. Bounds on mixed derivatives

In this section, we discuss bounds on mixed derivatives. In order to motivate the discussion in this section, we first explain how the derivative bounds come into play in the QMC analysis developed in the next section.
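Before turning to the derivative bounds, Assumption 2.1 can be made concrete with a small numerical check for a hypothetical system (the choices below are illustrative, not from the paper): take $$\psi_j(x)=\theta j^{-3}\sin(j\pi x)$$ on D = (0, 1) and $$\rho_j=j^{3/2}$$, so that $$\sup_x\sum_j\rho_j|\psi_j(x)|\leqslant \theta\sum_j j^{-3/2}$$ and $$(1/\rho_j)=(j^{-3/2})\in\ell^q$$ for any q > 2/3.

```python
import math

theta = 0.25   # amplitude of the hypothetical system psi_j(x) = theta * j**-3 * sin(j*pi*x)

def rho(j):
    return j ** 1.5

# condition (2.7): kappa <= theta * sum_j j**-1.5 must be < ln 2;
# bound the infinite sum by a partial sum plus the integral tail bound 2/sqrt(J)
J = 10_000
partial = theta * sum(j ** -1.5 for j in range(1, J + 1))
kappa_bound = partial + theta * 2.0 / math.sqrt(J)

# condition (2.8): (1/rho_j) = j**-1.5 lies in l^q for any q > 2/3 (take q = 1);
# the partial sums of sum_j 1/rho_j stabilise accordingly
s1 = sum(1.0 / rho(j) for j in range(1, 1_001))
s2 = sum(1.0 / rho(j) for j in range(1, 10_001))
```

The margin is thin by design: κ must fall below ln 2 ≈ 0.693, so the amplitude θ of the system (equivalently, the fluctuation of the field) enters the verification directly.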
Application of QMC methods to elliptic PDEs with log-normal random coefficients was initiated with computational results by Graham et al. (2011), and an analysis followed in Graham et al. (2015). Following the discussion by Graham et al. (2015), we assume that the integrand F is in the space called the weighted unanchored Sobolev space $$\mathcal{W}^s$$, consisting of measurable functions $$F\colon{\mathbb{R}}^s\to{\mathbb{R}}$$ such that \begin{equation} \left\| F \right\|_{\mathcal{W}^s}^2 = \sum_{{\mathfrak{u}}\subseteq\{1:s\}} \frac1{\gamma_{{\mathfrak{u}}}} \int_{\mathbb{R}^{|{\mathfrak{u}}|}}\!\!\! \left( \int_{\mathbb{R}^{s-|{\mathfrak{u}}|}} \frac{\partial^{|{\mathfrak{u}}|} F}{\partial y_{{\mathfrak{u}}}}(\boldsymbol{y}_{\mathfrak{u}};\boldsymbol{y}_{\{1:s\}\setminus{\mathfrak{u}}}) \prod\limits_{j\in\{1:s\}\setminus{\mathfrak{u}}} \phi(y_j)\,{\textrm{d}}\boldsymbol{y}_{\{1:s\}\setminus{\mathfrak{u}}} \right)^2\!\! \prod\limits_{j\in{\mathfrak{u}}} w_j^2(y_j)\,{\textrm{d}}\boldsymbol{y}_{\mathfrak{u}}<\infty, \end{equation} (3.1) where we assume, similarly to Graham et al. (2015), that \begin{equation} w_j^2(y_j)=\exp(-2\alpha_j|y_j|) \end{equation} (3.2) for some $$\alpha _j>0$$. Here, {1 : s} is shorthand notation for the set $$\{1,\dotsc ,s\}$$, $$\frac{\partial ^{|{\mathfrak{u}}|} F}{\partial y_{{\mathfrak{u}}}}$$ denotes the mixed first derivative with respect to each of the ‘active’ variables $$y_j$$ with $$j\in{\mathfrak{u}}\subseteq \{1:s\}$$ and $$\boldsymbol{y}_{\{1:s\}\setminus{\mathfrak{u}}}$$ denotes the ‘inactive’ variables $$y_j$$ with $$j\not \in{\mathfrak{u}}$$. Further, weights $$({\gamma _{{\mathfrak{u}}}})$$ describe the relative importance of the variables $$\{y_j\}_{j\in{\mathfrak{u}}}$$. Note that the measures $$\int _{\cdot }\,{\textrm{d}}y_{{\mathfrak{u}}}$$ and $$\int _{\cdot }\frac 1{\gamma _{{\mathfrak{u}}}}\,{\textrm{d}}y_{{\mathfrak{u}}}$$ differ by at most a constant factor depending on $${\mathfrak{u}}$$.
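To illustrate the role of the weights computationally, here is a naive CBC search with product weights. This is a simplified sketch for the standard shift-averaged worst-case error on $$[0,1]^s$$ with the kernel $$B_2(x)=x^2-x+1/6$$, not for the exponentially weighted space over $${\mathbb{R}}^s$$ used in this paper, and the weights $$\gamma_j$$ below are placeholders; the fast CBC construction of Dick et al. (2014) replaces the inner loops with FFT-based updates.

```python
from math import gcd

def B2(x):
    """Bernoulli polynomial B_2(x) = x^2 - x + 1/6."""
    return x * x - x + 1.0 / 6.0

def sq_error(z, n, gammas):
    """Shift-averaged squared worst-case error of a lattice rule with generating
    vector z and product weights gammas (standard closed form for this kernel)."""
    total = 0.0
    for i in range(1, n + 1):
        prod = 1.0
        for zj, gj in zip(z, gammas):
            prod *= 1.0 + gj * B2((i * zj % n) / n)
        total += prod
    return total / n - 1.0

def cbc(n, gammas):
    """Naive component-by-component search: extend z one coordinate at a time,
    keeping earlier components fixed, over candidates coprime with n."""
    z = []
    for d in range(1, len(gammas) + 1):
        candidates = [c for c in range(1, n) if gcd(c, n) == 1]
        z.append(min(candidates, key=lambda c: sq_error(z + [c], n, gammas[:d])))
    return z
```

For product weights the per-candidate work factorises across coordinates, which is what makes the $$\mathcal{O}(sn\log n)$$ fast construction possible; for POD weights an additional order-dependent recursion contributes the $$\mathcal{O}(ns^2)$$ term.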
Weights $$({\gamma _{{\mathfrak{u}}}})$$ play an important role in deriving error estimates independently of the dimension s, and further, in obtaining the generating vector z for the lattice rule via the CBC algorithm. Depending on the problem, different types of weights have been considered to derive error estimates. For the randomly shifted lattice rules, ‘POD weights’ and ‘product weights’ have been considered (Dick et al., 2013; Kuo & Nuyens, 2016). When applied to the PDE parametrised with log-normal coefficients, the result in Graham et al. (2015) suggests the use of POD weights for the problem. We wish to develop a theory on the applicability of product weights, which have an advantage in terms of computational cost. The computational cost of the CBC construction is $$\mathcal{O}(sn\log n+ns^2)$$ in the case of POD weights, compared to $$\mathcal{O}(sn\log n)$$ for product weights (Dick et al., 2014). Since we often want to approximate the random field well, s is necessarily large, and so the applicability of product weights is of clear interest. Estimates of derivatives of the integrand F(y) with respect to the parameter y, that is, the variable over which F(y) is integrated, are one of the keys in the error analysis of QMC. In Graham et al. (2015), it was the fact that the estimates are of ‘POD form’ that led their theory to the POD weights. Under an assumption on the system $$(\psi _j)$$, which is different from that in Graham et al. (2015), we show that the derivative estimates turn out to be of ‘product form’, and further that, under a suitable assumption, we achieve the same error convergence rate close to 1 with product weights. Now we derive an estimate of the product form.
Let $$ \mathcal{F}:=\{\mu=(\mu_1,\mu_2,\dotsc)\in\mathbb{N}_0^{\mathbb{N}}\mid \textrm{ all but a finite number of components of }\mu\textrm{ are zero} \}.$$ For $$\mu \in \mathcal{F}$$ we use the notation $$|\mu |=\sum _{j\geqslant 1}\mu _j$$, $${\mu }!=\prod \limits _{j\geqslant 1}\mu _j!$$, $$\rho ^{\mu }=\prod \limits _{j\geqslant 1}\rho _j^{\mu _j}$$ for $$\rho =(\rho _j)_{j\geqslant 1}\in \mathbb{R}^{\mathbb{N}}$$, and \begin{equation} \partial^\mu u= \frac{\partial^{|\mu|}}{\partial y_{j(1)}^{\mu_{j(1)}}\dotsb \partial y_{j(k)}^{\mu_{j(k)}}}u, \end{equation} (3.3) where $$j(1)<\dotsb <j(k)$$ are the indices j with $$\mu _j\neq 0$$ and $$k=\#\{j\mid \mu _j\neq 0\}$$. We have the following bound on mixed derivatives of order r ⩾ 1 (although in our application we will need only r = 1). The proof follows essentially the same argument as the proof of Bachmayr et al. (2017c, Theorem 4.1). Here, we show a tighter bound by changing the condition from $$\frac{\ln 2}{\sqrt{r}}$$ to $$\frac{\ln 2}{{r}}$$ in Bachmayr et al. (2017c, (91)), and we have $$\rho ^{2\mu }$$ in (3.4) in place of $$\frac{\rho ^{2\mu }}{\mu !}$$ in the left-hand side of Bachmayr et al. (2017c, (92)).

Proposition 3.1 Let r ⩾ 1 be an integer. Suppose $$(\psi _j)$$ satisfies condition (2.7) with $$\ln 2$$ replaced by $$\frac{\ln 2}{r}$$, with a positive sequence $$(\rho _j)$$. Then there exists a constant $$C_0=C_0(r)$$ that depends on $$\kappa $$ and r, such that \begin{equation} \sum_{\substack{\mu\in\mathcal{F}\\ \left\| \mu \right\|_\infty\leqslant r}} {\rho^{2\mu}} \int_{D} a(\boldsymbol{y})|\nabla(\partial^\mu u(\boldsymbol{y}))|^2\,{\textrm{d}}x \leqslant C_0 \int_{D} a(\boldsymbol{y})|\nabla u(\boldsymbol{y})|^2 \,{\textrm{d}}x \end{equation} (3.4) for all y that satisfy $$ \|\sum _{j\geqslant 1}y_j\psi _j\|_{L^{\infty }(D)}<\infty $$, where u(y) is the solution of (1.6) for such y. The same bound holds also for $$u^s(\boldsymbol{y})$$, the solution of (1.6) with $$\boldsymbol{y}=(y_1,\dotsc ,y_s,0,0,\dotsc )$$.

Proof.
Let \begin{equation*} \varLambda_{k}:= \{\mu\in\mathcal{F}\mid|\mu|=k\textrm{ and }\left\| \mu \right\|_{\ell_\infty}\leqslant r\},\ \, \textrm{and}\ \, S_{\mu}:= \{\nu\in\mathcal{F}\mid\nu\leqslant \mu\textrm{ and }\nu\neq\mu\}\textrm{ for }\mu\in\mathcal{F}, \end{equation*} with ⩽ denoting the componentwise partial order between multi-indices. Let us introduce the notation $$ \left \| v \right \|_{a(\boldsymbol{y})}^2:=\int _{D} a(\boldsymbol{y})|\nabla v|^2 \,{\textrm{d}}x $$ for all v ∈ V, and let $$ \sigma_k:=\sum_{\mu\in \varLambda_k}\rho^{2\mu} \left\| \partial^\mu u(\boldsymbol{y}) \right\|_{a(\boldsymbol{y})}^2.$$ We show below that we can choose $$\delta =\delta (r)<1$$ such that \begin{equation} \sigma_k\leqslant \sigma_0\delta^k\qquad\textrm{ for all }k\geqslant 0. \end{equation} (3.5) Note that if this holds then we have \begin{equation} \sum_{\left\| \mu \right\|_\infty\leqslant r}{\rho^{2\mu}}\left\| \partial^\mu u(\boldsymbol{y}) \right\|_{a(\boldsymbol{y})}^2 = \sum_{k=0}^\infty \sum_{\mu\in\varLambda_k}\rho^{2\mu}\left\| \partial^\mu u(\boldsymbol{y}) \right\|_{a(\boldsymbol{y})}^2= \sum_{k=0}^\infty \sigma_k\leqslant \sigma_0\sum_{k=0}^\infty \delta^k<\infty, \end{equation} (3.6) and the statement will follow with $$C_0=C_0(r)=\sum _{k=0}^\infty \delta (r)^k$$. We now show $$\sigma _k\leqslant \sigma _0\delta ^k$$. Note that from the assumption $$\|\sum _{j\geqslant 1}y_j\psi _j\|_{L^{\infty }(D)}<\infty $$, in view of Bachmayr et al. (2017c, Lemma 3.2), we have $$\partial ^\mu u\in V$$ for any $$\mu \in \mathcal{F}$$. Thus, by taking $$v:=\partial ^\mu u$$ ($$\mu \in \varLambda _k$$) in Bachmayr et al. (2017c, (74)), we have \begin{align} {\sigma_k}&=\sum_{\mu\in \varLambda_k}\rho^{2\mu}\!\!\! \int_D\!\! 
a(\boldsymbol{y})|\nabla\partial^\mu u(\boldsymbol{y})|^2\,{\textrm{d}}x\nonumber\\ &\leqslant \sum_{\mu\in \varLambda_k} \sum_{\nu\in S_\mu} \left(\prod\limits_{j\geqslant 1} \frac{\mu_j!\,\rho_j^{\mu_j-\nu_j}\rho_j^{\mu_j}\rho_j^{\nu_j}} {\nu_j!\,(\mu_j-\nu_j)!} \right) \int_D a(\boldsymbol{y}) \left( \prod\limits_{j\geqslant 1}|\psi_j|^{\mu_j-\nu_j} \right) |\nabla\partial^\nu u(\boldsymbol{y})| |\nabla\partial^\mu u(\boldsymbol{y})| \,{\textrm{d}}x. \end{align} (3.7) Using the notation \begin{equation} \varepsilon(\mu,\nu)(x) := \varepsilon(\mu,\nu) := \frac{\mu!}{\nu!}\frac{\rho^{\mu-\nu}|\psi|^{\mu-\nu}}{(\mu-\nu)!}, \end{equation} (3.8) and the Cauchy–Schwarz inequality for the sum over $$S_\mu $$, it follows that \begin{equation} {\sigma_k}\leqslant \int_D \sum_{\mu\in \varLambda_k} \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})| |\rho^\mu\nabla\partial^\mu u(\boldsymbol{y})| \,{\textrm{d}}x \end{equation} (3.9) \begin{equation} \leqslant \int_D \sum_{\mu\in \varLambda_k} \left(\sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})|^2 \right)^{\frac{1}{2}} \left(\sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\mu\nabla\partial^\mu u(\boldsymbol{y})|^2 \right)^{\frac{1}{2}} \,{\textrm{d}}x. \end{equation} (3.10) Let $$ S_{\mu,\ell}:=\{\nu\in S_\mu\mid |\mu-\nu|=\ell\}.$$ Then for $$\mu \in \varLambda _k$$ we have $$ S_{\mu}=\{\nu\in\mathcal{F}\mid\nu\leqslant \mu,\ \nu\neq\mu\}=\bigcup\limits_{\ell=1}^{|\mu|}\{\nu\in\mathcal{F}\mid\nu\leqslant \mu,\ |\mu-\nu|=\ell \} = \bigcup\limits_{\ell=1}^{|\mu|}S_{\mu,\ell} ,$$ and further, from $$|\mu |=k$$, we have \begin{equation} \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) = \sum_{\ell=1}^{k} \sum_{\nu\in S_{\mu,\ell}} \varepsilon(\mu,\nu) = \sum_{\ell=1}^{k} \sum_{\nu\in S_{\mu,\ell}} \frac{\mu!}{\nu!}\frac{\rho^{\mu-\nu}|\psi|^{\mu-\nu}}{(\mu-\nu)!}.
\end{equation} (3.11) Since $$\nu \in S_{\mu ,\ell }$$ implies $$\sum _{j\in \operatorname{supp}\mu }(\mu _j-\nu _j)=\ell $$, there are ℓ factors in $$\frac{\mu !}{\nu !}=\prod _{j\in \operatorname{supp}\mu }\mu _j(\mu _j-1)\dotsb (\nu _j+1)$$. From $$\mu _j\leqslant r$$ ($$j\in \operatorname{supp}\mu $$), each of the factors is at most r. Thus, $$ \frac{\mu!}{\nu!} \leqslant r^\ell\quad\textrm{ for }\mu\in \varLambda_k,\ \nu\in S_{\mu,\ell}. $$ Therefore, from the multinomial theorem, for each x ∈ D it follows from (3.11) that \begin{equation} \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) \leqslant \sum_{\ell=1}^{k} r^\ell \sum_{\nu\in S_{\mu,\ell}} \frac{\rho^{\mu-\nu}|\psi|^{\mu-\nu}}{(\mu-\nu)!} \leqslant \sum_{\ell=1}^{k} r^\ell \sum_{|\tau|=\ell} \frac{\rho^{\tau}|\psi|^{\tau}}{\tau!} = \sum_{\ell=1}^{k} r^\ell \frac{1}{\ell!} \sum_{|\tau|=\ell} \frac{\ell!}{\tau!} \rho^{\tau}|\psi|^{\tau} \end{equation} (3.12) \begin{equation} = \sum_{\ell=1}^{k} r^\ell \frac{1}{\ell!} \left(\sum_{j=1}^\infty \rho_j|\psi_j|\right)^\ell \leqslant \sum_{\ell=1}^{k} r^\ell \frac{1}{\ell!} \kappa^\ell \leqslant e^{r\kappa}-1\leqslant e^{{\ln 2}}-1=1. \end{equation} (3.13) Inserting into (3.10), we have \begin{equation} \sum_{\mu\in \varLambda_k}\!\!\rho^{2\mu} \left\| \partial^\mu u(\boldsymbol{y}) \right\|_{a(\boldsymbol{y})}^2 \leqslant \int_D \sum_{\mu\in \varLambda_k} \left( \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})|^2 \right)^{\frac{1}{2}} \left( a(\boldsymbol{y}) |\rho^\mu\nabla\partial^\mu u(\boldsymbol{y})|^2 \right)^{\frac{1}{2}} \,{\textrm{d}}x. \end{equation} (3.14) Again applying the Cauchy–Schwarz inequality to the summation over $$\varLambda _k$$ and then to the integral, we have \begin{align*} \begin{split} \sigma_k &\leqslant \int_D \left( \sum_{\mu\in \varLambda_k} \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})|^2 \right)^{\frac{1}{2}}\!\! 
\left( \sum_{\mu\in \varLambda_k} a(\boldsymbol{y}) |\rho^\mu\nabla\partial^\mu u(\boldsymbol{y})|^2 \right)^{\frac{1}{2}} \,{\textrm{d}}x\nonumber \end{split} \\ &\leqslant \left(\int_D \sum_{\mu\in \varLambda_k} \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})|^2 \,{\textrm{d}}x \right)^{\frac{1}{2}} \sigma_k^{\frac{1}{2}}, \end{align*} and hence \begin{equation} {\sigma_k\leqslant \int_D \sum_{\mu\in \varLambda_k} \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})|^2 \,{\textrm{d}}x.} \end{equation} (3.15) Now, for any k ⩾ 1 and any $$\nu \in \varLambda _\ell =\{\nu \in \mathcal{F}\mid |\nu |=\ell ,\ \left \| \nu \right \|_\infty \leqslant r\}$$ with ℓ ⩽ k − 1, let $$ R_{\nu,\ell,k}:= \{\mu\in\varLambda_k\mid \nu\in S_\mu \} = \{\mu\in\mathcal{F}\mid |\mu|=k,\ \left\| \mu \right\|_\infty\leqslant r,\ \mu\geqslant \nu,\ \mu\neq\nu \}. $$ Then for fixed k ⩾ 1 we can write \begin{equation} \bigcup_{\mu\in\varLambda_k} \bigcup_{\nu\in S_\mu} (\mu,\nu) = \bigcup_{\ell=0}^{k-1} \bigcup_{\nu\in \varLambda_\ell} \bigcup_{\mu\in R_{\nu,\ell,k}} (\mu,\nu). \end{equation} (3.16) Thus, we have \begin{equation} \sum_{\mu\in \varLambda_k} \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})|^2 = \sum_{\ell=0}^{k-1} \sum_{\nu\in \varLambda_\ell} a(\boldsymbol{y})|\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})|^2 \sum_{\mu\in R_{\nu,\ell,k}} \varepsilon(\mu,\nu). \end{equation} (3.17) Now, note that $$k-\ell ={\sum _{j\in \operatorname{supp} \mu }\mu _j -\sum _{j\in \operatorname{supp} \mu }\nu _j=} |\mu -\nu |$$. Thus, we have $$\frac{\mu !}{\nu !}\leqslant r^{k-\ell }$$. 
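As a brief aside (this numerical illustration is ours, not part of the proof), the elementary bound $$\frac{\mu!}{\nu!}\leqslant r^{|\mu-\nu|}$$ for $$\left\| \mu \right\|_\infty\leqslant r$$ and $$\nu\leqslant\mu$$ componentwise can be verified exhaustively over small multi-indices:

```python
from math import factorial
from itertools import product

def multi_factorial_ratio(mu, nu):
    # mu!/nu! for multi-indices with nu <= mu componentwise.
    out = 1
    for m, n in zip(mu, nu):
        out *= factorial(m)//factorial(n)
    return out

r = 3
# Exhaustive check of mu!/nu! <= r^{|mu - nu|} over all multi-indices mu
# with ||mu||_inf <= r (three components) and all nu <= mu componentwise.
for mu in product(range(r + 1), repeat=3):
    for nu in product(*(range(m + 1) for m in mu)):
        ell = sum(m - n for m, n in zip(mu, nu))
        assert multi_factorial_ratio(mu, nu) <= r**ell
```

The inner loop ranges over exactly the multi-indices $$\nu\leqslant\mu$$, mirroring the definition of $$S_\mu$$.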
It follows that \begin{equation} \sum_{\mu\in R_{\nu,\ell,k}} \varepsilon(\mu,\nu) = \sum_{\mu\in R_{\nu,\ell,k}} \frac{\mu!}{\nu!}\frac{\rho^{\mu-\nu}|\psi|^{\mu-\nu}}{(\mu-\nu)!} \leqslant r^{k-\ell} \sum_{\mu\in R_{\nu,\ell,k}} \frac{\rho^{\mu-\nu}|\psi|^{\mu-\nu}}{(\mu-\nu)!} \end{equation} (3.18) \begin{equation} \leqslant r^{k-\ell} \sum_{|\tau|=k-\ell} \frac{\rho^{\tau}|\psi|^{\tau}}{\tau!} \leqslant r^{k-\ell}\frac1{(k-\ell)!}\kappa^{k-\ell}. \end{equation} (3.19) Then substituting (3.19) into (3.17), we obtain from (3.15) \begin{equation} \sigma_k \leqslant \sum_{\ell=0}^{k-1} \frac1{(k-\ell)!} (r\kappa)^{k-\ell} \sigma_\ell. \end{equation} (3.20) From the assumption we have $$\kappa <{\frac{\ln 2}{r}}$$. Thus, we can take $$\delta =\delta (r)<1$$ such that $$\kappa <\delta{\frac{\ln 2}r}.$$ We show $$\sigma _k\leqslant \sigma _0\delta ^k$$ for all k ⩾ 0 by induction. This is clearly true for k = 0. Suppose $$\sigma _{\ell }\leqslant \sigma _0\delta ^{\ell }$$ holds for $$\ell =0,\dotsc ,k-1$$. Then we have \begin{equation} \sigma_k \leqslant \sum_{\ell=0}^{k-1} \frac1{(k-\ell)!} (r\kappa)^{k-\ell} \sigma_\ell \leqslant \sum_{\ell=0}^{k-1} \frac1{(k-\ell)!} (r\kappa)^{k-\ell} \sigma_0\delta^\ell \leqslant \sum_{\ell=0}^{k-1} \frac1{(k-\ell)!} (\delta{\ln 2})^{k-\ell} \sigma_0\delta^\ell \end{equation} (3.21) \begin{equation} = \sigma_0\delta^k \sum_{\ell=0}^{k-1} \frac1{(k-\ell)!} ({\ln 2})^{k-\ell} \leqslant\sigma_0\delta^k(e^{{\ln 2}}-1)=\sigma_0\delta^k, \end{equation} (3.22) which completes the proof. With the notation \begin{equation} \check{a}(\boldsymbol{y}):=\mathop{\textrm{ess}\inf}_{x\in D} a(x,\boldsymbol{y})\quad\textrm{and}\quad \hat{a}(\boldsymbol{y}):=\mathop{\textrm{ess}\sup}_{x\in D} a(x,\boldsymbol{y}), \end{equation} (3.23) we have the following corollary, where here and from now on we set r = 1. Corollary 3.2 Suppose $$(\psi _j)$$ satisfies Assumption 2.2 with a positive sequence $$(\rho _j)$$.
Then for $$C_0=C_0(1)$$ as in Proposition 3.1, for any $${\mathfrak{u}}\subset{\mathbb{N}}$$ of finite cardinality we have \begin{equation} \left\| \frac{\partial^{|{\mathfrak{u}}|} u(\boldsymbol{y})}{\partial \boldsymbol{y}_{{\mathfrak{u}}}} \right\|_{V} \leqslant \sqrt{C_0} \frac{\left\| f \right\|_{{V^{^{\prime}}}}}{{\check{a}(\boldsymbol{y})}} \prod_{j\in{\mathfrak{u}}}\frac{1}{\rho_j}<\infty, \quad\text{almost surely,} \end{equation} (3.24) where $$\left \| \cdot \right \|_{{V^{^{\prime}}}}$$ is the norm in the dual space $${V^{^{\prime}}}$$. The same bound holds also for $$\left \| \frac{\partial ^{|{\mathfrak{u}}|} u^s}{\partial \boldsymbol{y}_{{\mathfrak{u}}}} \right \|_{V}$$, with $$\boldsymbol{y}=(y_1,\dotsc ,y_s,0,0,\dotsc )$$. Proof. First, if $$\boldsymbol{y}\in{\mathbb{R}}^{\mathbb{N}}$$ satisfies $$\|\sum _{j\geqslant 1}y_j\psi _j\|_{L^{\infty }(D)}<\infty $$, then we have $$\frac{1}{(\check{a}(\boldsymbol{y}))}<\infty $$: \begin{equation} {\check{a}(\boldsymbol{y})} \geqslant \big(\inf_{x\in D}a_0(x)\big) \exp\left(-\mathop{\textrm{ess}\sup}_{x\in D}\Big|\sum_{j\geqslant1}y_j\psi_j(x)\Big|\right) , \end{equation} (3.25) and thus \begin{equation} \frac1{\check{a}(\boldsymbol{y})} \leqslant \frac1{\big(\inf_{x\in D}a_0(x)\big)} \exp\left(\mathop{\textrm{ess}\sup}_{x\in D}\big|\sum_{j\geqslant1}y_j\psi_j(x)\big|\right). \end{equation} (3.26) Now, from $$(1/\rho _j)\in \ell ^{q}$$ for some $$q\in (0,\infty )$$, in view of Bachmayr et al. (2017a, Remark 2.2) we have $$ \mathbb{E} \left[ \exp\left( k \bigg\| \sum_{j\geqslant1}y_j\psi_j \bigg\|_{L^{\infty}(D)} \right) \right]<\infty $$ for any $$0\leqslant k\ {<}\ \infty $$. Thus, $$\|\sum _{j\geqslant 1}y_j\psi _j\|_{L^{\infty }(D)}<\infty $$, and the right-hand side of (3.24) is bounded with full (Gaussian) measure. 
We remark that the $${\mathcal{B}({\mathbb{R}}^{{\mathbb{N}}})}/\mathcal{B}({\mathbb{R}})$$-measurability of the mapping $$\boldsymbol{y}\mapsto \big \| \sum _{j\geqslant 1}y_j\psi _j \big \|_{L^{\infty }(D)}$$ is not an issue. See Bachmayr et al. (2017a, Remark 2.2) noting the continuity of norms, together with, for example, Reed & Simon (1980, Appendix to IV. 5). Now, recalling the standard argument regarding the continuous dependence of the solution of the variational problem (1.6) on f, we have $$\int _{D} a(\boldsymbol{y})|\nabla (u(\boldsymbol{y}))|^2\,{\textrm{d}}x\leqslant \frac{\left \| f \right \|_{{V^{^{\prime}}}}^2}{\check{a}(\boldsymbol{y})}$$. Then the claim follows from Proposition 3.1, noting that for any $${\mathfrak{u}}\subset{\mathbb{N}}$$ of finite cardinality we have \begin{equation} \check{a}(\boldsymbol{y})\int_{D} \Big|\nabla\left(\frac{\partial^{|{\mathfrak{u}}|} u}{\partial \boldsymbol{y}_{{\mathfrak{u}}}}\right)\Big|^2 \,{\textrm{d}}x \leqslant \sum_{\substack{\mu\in\mathcal{F}\\\left\| \mu \right\|_\infty\leqslant 1}} {\rho^{2\mu}} \int_{D} a(\boldsymbol{y})|\nabla(\partial^\mu u(\boldsymbol{y}))|^2\,{\textrm{d}}x. \end{equation} (3.27) Remark 3.3 We note that following a similar discussion to the above, $$\hat{a}(\boldsymbol{y})$$ can be bounded almost surely. Thus, under Assumption 2.2, the well-posedness of problem (1.6) readily follows almost surely. Further, Assumption 2.2 implies the measurability of the mapping $${\mathbb{R}}^s\ni \boldsymbol{y}\mapsto \mathcal{G}(u^s(\cdot ,\boldsymbol{y}))\in{\mathbb{R}}$$. See Bachmayr et al. (2017a, Corollary 2.1, Remark 2.2) noting $$\mathcal{G}\in{V^{^{\prime}}}$$, together with the fact that a strongly $${\mathscr{F}}$$-measurable V-valued mapping is weakly $${\mathscr{F}}$$-measurable. For more details on the measurability of vector-valued functions, see, for example, Reed & Simon (1980); Yosida (1995). 4. 
QMC integration error with product weights Based on the bound on mixed derivatives obtained in the previous section, now we derive a QMC convergence rate with product weights. We first introduce some notation. Let \begin{equation} \varsigma_j(\lambda) := 2 \left( \frac{ \sqrt{2\pi}\exp(\alpha_j^2/\varLambda^{*}) }{ \pi^{2-2\varLambda^{*}}(1-\varLambda^{*})\varLambda^{*} } \right)^\lambda \zeta\left(\lambda+\frac12\right), \end{equation} (4.1) where $$\varLambda ^{*}:=\frac{2\lambda -1}{4\lambda }$$, and $$\zeta (x):=\sum _{k=1}^\infty k^{-x}$$ denotes the Riemann zeta function. We record the following result by Graham et al. (2015). Theorem 4.1 (Graham et al., 2015, Theorem 15). Let $$F\in \mathcal{W}^s$$. Given s, $$n\in \mathbb{N}$$ with $$2\leqslant n\leqslant 10^{30}$$, weights $$\boldsymbol{\gamma }=(\gamma _{{\mathfrak{u}}})_{{\mathfrak{u}}\subset \mathbb{N}}$$, and the standard normal density function $$\phi $$, a randomly shifted lattice rule with n points in s dimensions can be constructed by a CBC algorithm such that, for all $$\lambda \in (1/2,1]$$, \begin{equation} \sqrt{ \mathbb{E}^{\boldsymbol{\varDelta}} \big| I_s(F) - \mathcal{Q}_{s,n}(\boldsymbol{\varDelta};F) \big|^2 } \leqslant{9} \left( \sum_{\emptyset\neq{\mathfrak{u}}\subseteq\{1:s\}} \gamma_{{\mathfrak{u}}}^\lambda \prod\limits_{j\in{\mathfrak{u}}}\varsigma_j(\lambda) \right)^{\frac1{2\lambda}} n^{-\frac{1}{2\lambda}} \left\| F \right\|_{\mathcal{W}^s}. \end{equation} (4.2) For the weight function (3.2) we assume that the $$\alpha _j$$ satisfy for some constants $$0<\alpha _{\min }<\alpha _{\max }<\infty $$, \begin{equation} \max\left\{\frac{\ln 2}{\rho_j},\alpha_{\min}\right\}<\alpha_j\leqslant \alpha_{\max},\qquad j\in\mathbb{N}. \end{equation} (4.3) For example, under Assumption 2.2 letting $$\alpha _j:=1+\frac{\ln 2}{\rho _j}$$ satisfies (4.3) with $$\alpha _{\min }:=1$$ and $$\alpha _{\max }:=1+\sup _{j\geqslant 1}\frac{\ln 2}{\rho _j}$$. 
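For intuition (the following numerical sketch is ours, and the sequence $$\rho_j$$ used in it is purely an illustrative assumption), the quantity $$\varsigma_j(\lambda)$$ in (4.1) can be evaluated directly; the Riemann zeta value is approximated by a partial sum with an Euler–Maclaurin tail correction:

```python
import math

def zeta(s, N=10_000):
    # Partial sum of the Riemann zeta function plus an Euler-Maclaurin
    # tail correction; accurate for real s > 1.
    return sum(k**-s for k in range(1, N)) + N**(1 - s)/(s - 1) + 0.5*N**-s

def varsigma(lam, alpha):
    # Equation (4.1) with Lambda* = (2*lam - 1)/(4*lam); requires lam in (1/2, 1].
    Ls = (2*lam - 1)/(4*lam)
    base = math.sqrt(2*math.pi)*math.exp(alpha**2/Ls) \
        / (math.pi**(2 - 2*Ls)*(1 - Ls)*Ls)
    return 2*base**lam*zeta(lam + 0.5)

# Illustrative choice rho_j = 2^j with alpha_j = 1 + ln(2)/rho_j, as below (4.3).
rho = [2.0**j for j in range(1, 6)]
alphas = [1 + math.log(2)/r for r in rho]
assert all(math.log(2)/r < a <= max(alphas) for r, a in zip(rho, alphas))  # (4.3)
assert all(varsigma(1.0, a) > 0 for a in alphas)
```

Note that $$\varsigma_j(\lambda)$$ grows with $$\alpha_j$$ through the factor $$\exp(\alpha_j^2/\varLambda^{*})$$, which is why the weights (4.14) below compensate for it.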
We have the following bound on $$\left \| F \right \|_{\mathcal{W}^s}^2$$. The argument is essentially by Graham et al. (2015, Theorem 16). Proposition 4.2 Suppose Assumption 2.2 is satisfied with a positive sequence $$(\rho _j)$$ such that \begin{equation} (1/\rho_j)\in\ell^{1}. \end{equation} (4.4) Then we have \begin{equation} \left\| F \right\|_{\mathcal{W}^s}^2 \leqslant (C^{\ast})^2\sum_{{\mathfrak{u}}\subseteq\{1:s\}}\! \frac1{\gamma_{{\mathfrak{u}}}} \left(\frac1{\prod\limits_{j\in{\mathfrak{u}}}\rho_j}\right)^{\!\!2} \prod\limits_{j\in{\mathfrak{u}}} \frac{1}{\alpha_j-({\ln 2})/\rho_j}, \end{equation} (4.5) with a positive constant $$ C^{\ast }:= \frac{\left \| f \right \|_{V^{^{\prime}}}\left \| \mathcal{G} \right \|_{V^{^{\prime}}}\sqrt{C_0}}{\inf _{x\in{D}}a_0(x)} \left[\exp \left( \frac 12\sum _{j\geqslant 1}\frac{(\ln 2)^2}{\rho _j^2}+\frac 2{\sqrt{2\pi }}\sum _{j\geqslant 1}\frac{\ln 2}{\rho _j} \right)\right ]<\infty . $$ Proof. In this proof we abuse the notation slightly and y always denotes $$(y_1,\dotsc ,y_s,0,0,\dotsc )\in \mathbb{R}^{\mathbb{N}}$$. From (2.7) and (4.4), in view of Corollary 3.2 for $$\mathbb{P}_Y$$-almost every y we have \begin{equation} \left| \frac{\partial^{|{\mathfrak{u}}|} F}{\partial \boldsymbol{y}_{{\mathfrak{u}}}} \right| \leqslant \left\| \mathcal{G} \right\|_{{V^{^{\prime}}}}\left\| \frac{\partial^{|{\mathfrak{u}}|} u^s}{\partial \boldsymbol{y}_{{\mathfrak{u}}}} \right\|_V \leqslant \left\| \mathcal{G} \right\|_{{V^{^{\prime}}}}\sqrt{{C_0}}\frac1{\prod\limits_{j\in{\mathfrak{u}}}\rho_j} \frac{\left\| f \right\|_{{V^{^{\prime}}}}}{\check{a}(\boldsymbol{y})}. 
\end{equation} (4.6) Since $$ \sup_{x\in D}\sum_{j\geqslant1}|y_j||\psi_j(x)| \leqslant \left( \sup_{j\geqslant1}\frac{|y_j|}{\rho_j} \right) \sup_{x\in D}\sum_{j\geqslant1}\rho_j|\psi_j(x)|\leqslant \left( \sum_{j\geqslant1}\frac{|y_j|}{\rho_j} \right) \sup_{x\in D}\sum_{j\geqslant1}\rho_j|\psi_j(x)|,$$ condition (2.7) and equations (4.6) and (3.26) together with $$y_j=0$$ for j > s, imply \begin{equation} \left| \frac{\partial^{|{\mathfrak{u}}|} F}{\partial \boldsymbol{y}_{{\mathfrak{u}}}} \right| \leqslant \frac{K^{\ast}}{\prod\limits_{j\in{\mathfrak{u}}}\rho_j} \prod_{j\in\{1:s\}}\exp\left( \frac{\ln2}{\rho_j}|y_j| \right), \end{equation} (4.7) where $$K^{\ast }:=\frac{\left \| f \right \|_{V^{^{\prime}}}\left \| \mathcal{G} \right \|_{V^{^{\prime}}}\sqrt{{C_0}}} {\inf _{x\in{D}} a_0(x)} $$. Then it follows from (3.1) that \begin{equation} \left\| F \right\|_{\mathcal{W}^s}^2 {=} \sum_{{\mathfrak{u}}\subseteq\{1:s\}} \frac1{\gamma_{{\mathfrak{u}}}} \int_{\mathbb{R}^{|{\mathfrak{u}}|}}\!\!\! \left( \int_{\mathbb{R}^{s-|{\mathfrak{u}}|}} \left| \frac{\partial^{|{\mathfrak{u}}|} F}{\partial \boldsymbol{y}_{{\mathfrak{u}}}}(\boldsymbol{y}_{\mathfrak{u}};\boldsymbol{y}_{\{1:s\}\setminus{\mathfrak{u}}}) \right| \prod\limits_{j\in\{1:s\}\setminus{\mathfrak{u}}} \phi(y_j)\,{\textrm{d}}\boldsymbol{y}_{\{1:s\}\setminus{\mathfrak{u}}} \right)^{\!\!\!2} \prod\limits_{j\in{\mathfrak{u}}} w_j^2(y_j)\,{\textrm{d}}\boldsymbol{y}_{\mathfrak{u}} \end{equation} (4.8) \begin{equation} \leqslant \sum_{{\mathfrak{u}}\subseteq\{1:s\}}\! \frac1{\gamma_{{\mathfrak{u}}}} \int_{\mathbb{R}^{|{\mathfrak{u}}|}}\!\!\! \left( \int_{\mathbb{R}^{s-|{\mathfrak{u}}|}} \frac{K^{\ast}}{\prod\limits_{j\in{\mathfrak{u}}}\rho_j} \prod_{j\in\{1:s\}}\exp\!\left( \frac{\ln2}{\rho_j}|y_j| \right)\! 
\prod\limits_{j\in\{1:s\}\setminus{\mathfrak{u}}} \phi(y_j)\,{\textrm{d}}\boldsymbol{y}_{\{1:s\}\setminus{\mathfrak{u}}} \right)^{\!\!\!2} \prod\limits_{j\in{\mathfrak{u}}} w_j^2(y_j)\,{\textrm{d}}\boldsymbol{y}_{\mathfrak{u}} \end{equation} (4.9) \begin{align} =& \, (K^{\ast})^2\sum_{{\mathfrak{u}}\subseteq\{1:s\}}\! \frac1{\gamma_{{\mathfrak{u}}}} \left(\frac1{\prod\limits_{j\in{\mathfrak{u}}}\rho_j}\right)^{\!\!2}\nonumber\\ &\times \left( \int_{\mathbb{R}^{s-|{\mathfrak{u}}|}} \prod_{j\in\{1:s\}\setminus{\mathfrak{u}}}\exp\left( \frac{\ln2}{\rho_j}|y_j| \right) \prod\limits_{j\in\{1:s\}\setminus{\mathfrak{u}}} \phi(y_j)\,{\textrm{d}}\boldsymbol{y}_{\{1:s\}\setminus{\mathfrak{u}}} \right)^{\!\!2}\nonumber\\ &\times \int_{\mathbb{R}^{|{\mathfrak{u}}|}} \prod_{j\in{\mathfrak{u}}}\exp\left( \frac{2\ln2}{\rho_j}|y_j| \right) \prod\limits_{j\in{\mathfrak{u}}} w_j^2(y_j)\,{\textrm{d}}\boldsymbol{y}_{\mathfrak{u}}. \end{align} (4.10) Note that this takes essentially the same form as Graham et al. (2015, (4.14)). Thus, the rest of the proof parallels that of Graham et al. (2015, Theorem 16). Noting that $$\frac{2\ln 2}{\rho _j}-2\alpha _j<0$$ by (4.3), and following the same argument as in Graham et al. (2015, (4.15)–(4.17)), we have \begin{equation} \left\| F \right\|_{\mathcal{W}^s}^2 \leqslant (K^{\ast})^2\sum_{{\mathfrak{u}}\subseteq\{1:s\}}\! \frac1{\gamma_{{\mathfrak{u}}}} \left(\frac1{\prod\limits_{j\in{\mathfrak{u}}}\rho_j}\right)^{\!\!2} \left( \prod\limits_{j\in\{1:s\}\setminus{\mathfrak{u}}} 2\exp\left(\frac{(\ln2)^2}{2\rho_j^2}\right)\varPhi\left(\frac{\ln2}{\rho_j}\right) \right)^{\!\!2} \prod_{j\in{\mathfrak{u}}}\frac1{\alpha_j-\frac{\ln2}{\rho_j}}, \end{equation} (4.11) with $$\varPhi (\cdot )$$ denoting the cumulative standard normal distribution function. Comparing this to Graham et al. (2015, (4.17)), the statement follows from the rest of the proof of Graham et al. (2015, Theorem 16). As in Graham et al.
(2015, Theorem 17), from Theorem 4.1 and Proposition 4.2 we have the following. Proposition 4.3 For each j ⩾ 1, let $$w_j(t)=\exp (-2\alpha _j|t|)$$ ($$t\in \mathbb{R}$$) with $$\alpha _j$$ satisfying (4.3). Given s, $$n\in \mathbb{N}$$ with $$2\leqslant n\leqslant 10^{30}$$, weights $$\boldsymbol{\gamma }=(\gamma _{{\mathfrak{u}}})_{{\mathfrak{u}}\subset \mathbb{N}}$$ and the standard normal density function $$\phi $$, a randomly shifted lattice rule with n points in s dimensions can be constructed by a CBC algorithm such that, for all $$\lambda \in (1/2,1]$$, \begin{equation} \sqrt{ \mathbb{E}^{\boldsymbol{\varDelta}} \left| I_s(F) - \mathcal{Q}_{s,n}(\boldsymbol{\varDelta};F) \right|{}^2 } \leqslant 9C^{\ast}C_{\boldsymbol{\gamma},s}(\lambda)n^{-\frac{1}{2\lambda}}, \end{equation} (4.12) with \begin{equation} C_{\boldsymbol{\gamma},s}(\lambda):= \left( \sum_{\emptyset\neq{\mathfrak{u}}\subseteq\{1:s\}} \gamma_{{\mathfrak{u}}}^\lambda \prod\limits_{j\in{\mathfrak{u}}}\varsigma_j(\lambda) \right)^{\frac{1}{\!\!2\lambda}} \left( \sum_{{\mathfrak{u}}\subseteq\{1:s\}} \frac1{\gamma_{{\mathfrak{u}}}} \bigg(\frac1{\prod\limits_{j\in{\mathfrak{u}}}\rho_j}\bigg)^{\!\!2} \prod\limits_{j\in{\mathfrak{u}}} \frac{1}{[\alpha_j-{\ln2}/{\rho_j}]} \right)^{\!\!\frac{1}{2}}, \end{equation} (4.13) and $$C^{\ast }$$ defined as in Proposition 4.2. We choose weights of the product form \begin{equation} \gamma_{\mathfrak{u}}=\gamma^{\ast}_{\mathfrak{u}}(\lambda) := \left[ \bigg(\frac1{\prod\limits_{j\in{\mathfrak{u}}}\rho_j}\bigg)^{\!\!2} \prod\limits_{j\in{\mathfrak{u}}} \frac{1} {\varsigma_j(\lambda)[\alpha_j-\ln 2/\rho_j]} \right]^{\frac{1}{1+\lambda}}. \end{equation} (4.14) In particular, with $$\alpha _j:=1+\ln 2/\rho _j$$ we have \begin{equation} \gamma_{\mathfrak{u}}= \prod\limits_{j\in{\mathfrak{u}}} \left( \frac{1} {\rho_j^2\varsigma_j(\lambda)} \right)^{\frac{1}{1+\lambda}}. 
\end{equation} (4.15) Then it turns out that under a suitable value of $$\lambda $$ the constant (4.13) can be bounded independently of s, and we have the QMC error bound as follows. Theorem 4.4 For each j ⩾ 1, let $$w_j(t)=\exp (-2\alpha _j|t|)$$ ($$t\in \mathbb{R}$$) with $$\alpha _j$$ satisfying (4.3). Let $$\varsigma _{\max }(\lambda )$$ be $$\varsigma _j$$ defined by (4.1) but $$\alpha _j$$ replaced by $$\alpha _{\max }$$. Suppose $$(\psi _j)$$ satisfies Assumption 2.1. Suppose further that we choose $$\lambda $$ as \begin{equation} \lambda= \begin{cases} \frac1{2-2\delta}\textrm{ for arbitrary }\ \delta\in(0,\frac12] &\textrm{ when }q\in(0,\frac23],\\ \frac{q}{2-q} &\textrm{ when }q\in(\frac23,1], \end{cases} \end{equation} (4.16) and choose the weights$$\ \gamma _{\mathfrak{u}}$$ as in (4.14). Then given s, $$n\in \mathbb{N}$$ with $$n\leqslant 10^{30}$$, and the standard normal density function $$\phi $$, a randomly shifted lattice rule with n points in s dimensions can be constructed by a CBC algorithm such that \begin{equation} \sqrt{ \mathbb{E}^{\boldsymbol{\varDelta}} \left| I_s(F) - \mathcal{Q}_{s,n}(\boldsymbol{\varDelta};F) \right|{}^2 } \leqslant \begin{cases} 9C_{\rho,q,\delta}C^{\ast} n^{-(1-\delta)} &\textrm{when } 0<q\leqslant\frac23,\\ 9C_{\rho,q}C^{\ast} n^{-\frac{2-q}{2q}} &\textrm{when } \frac23<q\leqslant1, \end{cases} \end{equation} (4.17) where the constants $$C_{\rho ,q,\delta }$$, (resp. $$C_{\rho ,q}$$) are independent of s but depend on $$\rho :=(\rho _j)$$, q and $$\delta $$ (resp. $$\rho $$ and q), and $$C^{\ast }$$ is defined as in Proposition 4.2. 
In particular, with $$\alpha _j:=1+\ln 2/\rho _j$$ the finite constants $$C_{\rho ,q,\delta }$$ and $$C_{\rho ,q}$$ are both given by $$ {C_{\rho,q,\delta}=C_{\rho,q}=} \left( \prod_{j=1}^{\infty} \left( 1+ \left( \frac{\varsigma_j(\lambda)}{\rho_j^{2\lambda}} \right)^{\!\!\frac{1}{1+\lambda}} \right) -1 \right)^{\!\!\frac{1}{2\lambda}} \left( \prod_{j=1}^{\infty} \left( 1+ \left( \frac{\varsigma_j(\lambda)}{\rho_j^{2\lambda}} \right)^{\!\!\frac{1}{1+\lambda}} \right) \right)^{\!\!\frac{1}{2}}, $$ with $$\lambda $$ given by (4.16). Proof. Let $$\beta _j(\lambda ):=( \frac{(\varsigma _j(\lambda ))^{\frac 1{\lambda }}} {\rho _j^2[\alpha _j-\ln 2/\rho _j]}) ^{\frac{\lambda }{1+\lambda }}$$. Observe that with the choice of weights (4.14) we have \begin{equation} C_{\boldsymbol{\gamma},s}(\lambda) = \left( \sum\limits_{\emptyset\neq{\mathfrak{u}}\subseteq\{1:s\}} \prod\limits_{j\in{\mathfrak{u}}} \beta_j(\lambda) \right)^{\frac{1}{2\lambda}} \left( \sum\limits_{{\mathfrak{u}}\subseteq\{1:s\}} \prod\limits_{j\in{\mathfrak{u}}} \beta_j(\lambda) \right)^{\frac{1}{2}} \end{equation} (4.18) \begin{equation} = \left( \bigg( \prod\limits_{j=1}^{s} (1+\beta_j(\lambda)) \bigg) -1 \right)^{\frac{1}{2\lambda}} \left( \prod\limits_{j=1}^{s} (1+\beta_j(\lambda)) \right)^{\frac{1}{2}}. \end{equation} (4.19) Now, let $$\mathcal{J}:=\inf _{j\geqslant 1}(\alpha _j-\ln 2/\rho _j)$$, which is a positive value from (4.3). Further, note that $$\varsigma _j(\lambda )\leqslant \varsigma _{\max }(\lambda )$$ for j ⩾ 1. Then from $$\beta _j(\lambda )\geqslant 0$$ we have \begin{equation} \prod\limits_{j=1}^{s} (1+\beta_j(\lambda)) \leqslant \prod\limits_{j=1}^{s} \exp({\beta_j(\lambda)}) \leqslant \exp\left(\sum_{j\geqslant1}{\beta_j(\lambda)}\right) \leqslant \exp\left( \left[ \frac{[\varsigma_{\max}(\lambda)]^{\frac{1}{\lambda}}}{\mathcal{J}} \right]^{\frac{\lambda}{1+\lambda}} \sum_{j\geqslant1} \left[ \frac{1}{\rho_j} \right]^{\frac{2\lambda}{1+\lambda}} \right). 
\end{equation} (4.20) Thus, if $$ \sum _{j\geqslant 1}[ \frac{1}{\rho _j}]^{\frac{2\lambda }{1+\lambda }} <\infty $$ we can conclude that $$C_{\boldsymbol{\gamma },s}(\lambda )$$ is bounded independently of s. We discuss the relation between q and the exponent $${\frac{2\lambda }{1+\lambda }}$$. First note that from $$\lambda \in (\frac 12,1]$$, we have $$ \frac 23<{\frac{2\lambda }{1+\lambda }}\leqslant 1. $$ Suppose $$0<q\leqslant \frac 23$$. In this case, we always have $$q<{\frac{2\lambda }{1+\lambda }}$$, and thus $$(1/\rho _j)\in \ell ^{\frac{2\lambda }{1+\lambda }}$$. Thus, $$\sum _{j\geqslant 1} [ \frac{1}{\rho _j}]^{\frac{2\lambda }{1+\lambda }} <\infty $$ follows. Letting $$ \lambda :=\frac 1{2-2\delta } $$ with an arbitrary $$\delta \in (0,\frac 12]$$, we obtain the result for $$q\in (0,\frac 23]$$. Next consider the case $$\frac 23<q\leqslant 1$$. Then letting $$\lambda :=\lambda (q)=\frac{q}{2-q}$$, we have $$\lambda \in (1/2,1]$$ and \begin{equation} \frac{2\lambda}{1+\lambda} = \frac{2\frac{q}{2-q}}{1+\frac{q}{2-q}} = \frac{2{q}}{2-q+q}=q, \end{equation} (4.21) and thus $$\sum _{j\geqslant 1} \left [ \frac{1}{\rho _j} \right ]^{\frac{2\lambda }{1+\lambda }} <\infty $$. 4.1 On the estimate of the constant Estimate (4.17) gives the same convergence rate as the one obtained by Herrmann & Schwab (2016, Theorem 13). The weights used there are simpler than (4.15). See Herrmann & Schwab (2016, Equation (24)). The essential difference is that we incorporate the function $$1/{\varsigma _j(\lambda )}$$ into the weights as in (4.14) and (4.15). An advantage of this is that, roughly speaking, when the magnitude of $$\{\sup _{x\in D}|\psi _{j}|\}$$ is large, our estimate gives a smaller constant, as shown in Proposition 4.6 below. To make a comparison, following Herrmann & Schwab (2016) we let $$a_*\equiv 0$$, and $$a_0\equiv 1$$. 
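The bookkeeping between q, $$\lambda$$ and the resulting convergence rate is easy to get wrong, so we record a small numerical self-check (our own illustration, with $$\delta$$ fixed at 1/4 for the first branch) of the choice (4.16):

```python
def lam_of_q(q, delta=0.25):
    # Choice (4.16): lambda = 1/(2 - 2*delta) when q <= 2/3,
    # and lambda = q/(2 - q) when 2/3 < q <= 1.
    return 1/(2 - 2*delta) if q <= 2/3 else q/(2 - q)

def rmse_rate(lam):
    # The root-mean-square QMC error in (4.17) decays like n^{-1/(2*lam)}.
    return 1/(2*lam)

for q in [0.5, 2/3, 0.7, 0.9, 1.0]:
    lam = lam_of_q(q)
    assert 1/2 < lam <= 1                          # lambda lies in (1/2, 1]
    if q > 2/3:
        assert abs(2*lam/(1 + lam) - q) < 1e-12    # identity (4.21)
        assert abs(rmse_rate(lam) - (2 - q)/(2*q)) < 1e-12
    else:
        assert abs(rmse_rate(lam) - (1 - 0.25)) < 1e-12  # rate n^{-(1-delta)}
```

Any $$\delta\in(0,\frac12]$$ works in the first branch; as $$q\downarrow 0$$ the rate saturates at $$\approx 1$$, consistent with Theorem 4.4.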
Suppose the sequence $$\{\rho _j \}$$ that satisfies Assumption 2.1 is given by $$\rho _{j}=c_b \frac 1{b_j}$$ with a constant $$c_b>0$$ and a sequence $$\{b_j\}$$, and let \begin{equation} K_{\mathrm{HS}}:=\sup_{x\in D} \sum_{j\geqslant 1}\frac{|\psi_{j}(x)|}{b_j}=\frac{\kappa}{c_b} < \frac{\ln 2}{c_b}. \end{equation} (HS-A1) This is essentially the same assumption as Herrmann & Schwab (2016, Assumption (A1)). We quote the following result. Theorem 4.5 (Herrmann & Schwab, 2016, Theorem 13). Suppose $$(\psi _j)$$ satisfies Assumption 2.1 with a sequence $$\{\rho _j\}$$ that is of the form $$\rho _{j}=c_b \frac 1{b_j}$$ with a constant $$c_b>0$$ and a sequence $$\{b_j\}$$. Let $$w_j(t)=\exp (-2\alpha |t|)$$ ($$t\in \mathbb{R}$$) with a parameter $$\alpha>\frac{\kappa }{c_b} \sup _{j\geqslant 1}\{b_j\}$$. Let $$\varsigma _{\mathrm{HS}}(\lambda )$$ be $$\varsigma _j$$ defined by (4.1) but with $$\alpha _j$$ replaced by $$\alpha $$. Suppose further that $$\lambda $$ is chosen as in (4.16), and that the weights $$\gamma _{\mathfrak{u}}$$ are chosen as $$\gamma _{\mathfrak{u}}:=\prod _{j\in{\mathfrak{u}}}b_j^{\frac{2}{1+\lambda }}$$. 
Then given s, $$n\in \mathbb{N}$$ with $$n\leqslant 10^{30}$$, and the standard normal density function $$\phi $$, a randomly shifted lattice rule with n points in s dimensions can be constructed by a CBC algorithm such that \begin{equation} \sqrt{ \mathbb{E}^{\boldsymbol{\varDelta}} \left| I_s(F) - \mathcal{Q}_{s,n}(\boldsymbol{\varDelta};F) \right|^2 } \leqslant 9 {\left\| f \right\|_{V^{^{\prime}}}\left\| \mathcal{G} \right\|_{V^{^{\prime}}}\sqrt{C_0}} C_{\mathrm{HS},1} C_{\mathrm{HS},2} C_{\mathrm{HS},3} n^{-\frac{1}{2\lambda}}, \end{equation} (4.22) with \begin{equation} C_{\mathrm{HS},1} :=\exp\left( \sum_{j\geqslant 1}\left( (K_{\mathrm{HS}} b_j)^2+\frac{2}{\sqrt{2\pi}}K_{\mathrm{HS}} b_j \right) \right), \end{equation} (4.23) \begin{equation} C_{\mathrm{HS},2} :=\exp\left( \frac1{2\lambda}\sum_{j\geqslant 1} b_j^{q} \varsigma_{\mathrm{HS}}(\lambda) \right) \quad\textrm{ and }\quad C_{\mathrm{HS},3} :=\exp\left( \frac12\sum_{j\geqslant 1} \frac{b_j^{q}/c^2}{\alpha/2 - 2 K_{\mathrm{HS}} b_j} \right), \end{equation} (4.24) with an arbitrarily fixed constant $$c\in (0,\ln 2/K_{\mathrm{HS}})$$, where $$C_0$$ is defined in Proposition 3.1. 
To compare, we note that (4.17) can be further bounded as \begin{equation} \sqrt{ \mathbb{E}^{\boldsymbol{\varDelta}} \left| I_s(F) - \mathcal{Q}_{s,n}(\boldsymbol{\varDelta};F) \right|^2 } \leqslant 9 {\left\| f \right\|_{V^{^{\prime}}}\left\| \mathcal{G} \right\|_{V^{^{\prime}}}\sqrt{C_0}} C_{{}1} C_{{}2} n^{-\frac{1}{2\lambda}}, \end{equation} (4.25) with \begin{equation} C_{{}1} :=\exp\left( \sum_{j\geqslant 1}\left( \frac12\left(\frac{\kappa}{\rho_j} \right)^2 + \frac{2}{\sqrt{2\pi}}\frac{\kappa}{\rho_j} \right) \right) \end{equation} (4.26) and \begin{equation} C_{{}2} := \exp\left( \left( \frac1{2\lambda}+\frac12 \right) \left[ \varsigma_{\max}(\lambda) \right]^{\frac{1}{1+\lambda}} \sum_{j\geqslant1} \left[ \frac{1}{\rho_j} \right]^{\frac{2\lambda}{1+\lambda}} \right), \end{equation} (4.27) with the choice $$\alpha _j:=1 + \frac{\kappa }{\rho _j}$$. Note that the scalar $$\ln 2$$ in (4.3) and (4.17) can be replaced by $$\kappa $$, which is defined as in (2.7). We have the following result on the comparison of the constants. Proposition 4.6 Fix $$\varepsilon _{\mathrm{HS}}>0$$ arbitrarily. Let the assumptions of Theorem 4.5 hold with $$\alpha :=\varepsilon _{\mathrm{HS}}+\kappa \sup _{j}\{{1}/{\rho _{j}}\}$$. Then we have $$C_{{}1}<C_{\mathrm{HS},1}$$ and $$1<C_{\mathrm{HS},3}$$. Further, for $$\lambda \in (1/2,1]$$ suppose $$ \kappa \sup_{j\geqslant1}\frac{1}{\rho_{j}}\geqslant \frac{\sqrt{1+\lambda}+1}{\lambda} $$ holds. Then we have $$C_{{}2}< C_{\mathrm{HS},2}$$, and therefore $$C_{{}1}C_{{}2}< C_{\mathrm{HS},1}C_{\mathrm{HS},2}C_{\mathrm{HS},3}$$. Proof. Clearly, we have $$1<C_{\mathrm{HS},3}$$. The equations (HS-A1) and $$\rho _{j}=c_b \frac 1{b_j}$$ imply $$K_{\mathrm{HS}}b_j = \frac{\kappa }{\rho _j}$$, and thus $$C_{{}1}<C_{\mathrm{HS},1}$$ follows. 
To show $$C_{{}2}< C_{\mathrm{HS},2}$$, it suffices to show \begin{align} &\left( \frac1{2\lambda}+\frac12 \right) \left( \frac{\sqrt{2\pi}}{ \pi^{2-2\varLambda^{\ast}}(1-\varLambda^{\ast})\varLambda^{\ast}} \right)^{\frac{\lambda}{1+\lambda}} \big[ 2 \exp({\lambda \widetilde{\alpha}_{\max }^2/\varLambda^{\ast} }) \big]^{\frac{1}{1+\lambda}} \sum_{j\geqslant1} \left[ \frac{1}{\rho_j} \right]^{\frac{2\lambda}{1+\lambda}} \zeta(\lambda+1/2)^{\frac{1}{1+\lambda}}\nonumber\\ & \phantom{blahblahblah} < \frac{1}{\lambda} \left( \frac{\sqrt{2\pi}}{ \pi^{2-2\varLambda^{\ast}}(1-\varLambda^{\ast})\varLambda^{\ast}} \right)^{\lambda} \exp({\lambda\alpha^2}/{\varLambda^{\ast}}) \sum_{j\geqslant 1} b_j^{\frac{2\lambda}{1+\lambda}} \zeta(\lambda+1/2). \end{align} (4.28) For $$\lambda \in (1/2,1]$$, we have $$\varLambda ^{\ast }=\varLambda ^{\ast }(\lambda )=\frac{2\lambda -1}{4\lambda }\in (0,1/4]$$, and thus $$ 1<\frac{\sqrt{2\pi}}{\pi^{2}}\frac{16}{3}=\frac{\sqrt{2\pi}}{\pi^{2-2\varLambda^{\ast}(1/2)} (1-\varLambda^{\ast}(1))\varLambda^{\ast}(1) } \leqslant \frac{\sqrt{2\pi}}{ \pi^{2-2\varLambda^{\ast}(\lambda)}(1-\varLambda^{\ast}(\lambda))\varLambda^{\ast}(\lambda)}. $$ Hence, we have \begin{equation} \left(\frac{\sqrt{2\pi}}{ \pi^{2-2\varLambda^{\ast}}(1-\varLambda^{\ast})\varLambda^{\ast}}\right)^{\frac{\lambda}{1+\lambda}}< \left(\frac{\sqrt{2\pi}}{ \pi^{2-2\varLambda^{\ast}}(1-\varLambda^{\ast})\varLambda^{\ast}} \right)^{\lambda}. \end{equation} (4.29) Further, from $$(1/(2\lambda )+1/2)2^{\frac{1}{1+\lambda }} \leqslant 2^{\frac{\lambda }{1+\lambda }}/\lambda $$, noting $$2<\zeta (3/2)\leqslant \zeta (\lambda +1/2)$$ we have \begin{equation} \left(\frac{1}{2\lambda}+\frac{1}{2}\right)2^{\frac{1}{1+\lambda}} \zeta(\lambda+1/2)^{\frac{1}{1+\lambda}} \leqslant \frac{2^{\frac{\lambda}{1+\lambda}}}{\lambda} \zeta(\lambda+1/2)^{\frac{1}{1+\lambda}} \leqslant \frac{\zeta(\lambda+1/2)}{\lambda}. 
\end{equation} (4.30) We now show $$\frac 1{1+\lambda }\widetilde{\alpha }_{\max }^2< \alpha ^2$$, where $$\widetilde{\alpha }_{\max }:=1+\sup _{j\geqslant 1}\frac{\kappa }{\rho _j}$$. The assumption $$\kappa \sup _{j}\frac{1}{\rho _{j}} \geqslant \frac{\sqrt{1+\lambda }+1}{\lambda } = \frac{1}{\sqrt{1+\lambda }-1} $$ implies $$ \frac{ (1+\kappa \sup _{j}\{{1}/{\rho _{j}}\})^2 } {(\kappa \sup _{j}\{{1}/{\rho _{j}}\})^2} \leqslant 1+\lambda $$, and thus \begin{equation} \frac{1}{1+\lambda}\left(1+\kappa \sup_{j}\{{1}/{\rho_{j}}\}\right)^2 <\left(\varepsilon_{\mathrm{HS}}+\kappa \sup_{j}\{{1}/{\rho_{j}}\}\right)^2. \end{equation} (4.31) Hence, from the above together with (4.29) and (4.30) we conclude that (4.28) holds, which is the desired result. 5. Application to a wavelet stochastic model Cioica et al. (2012) considered a stochastic model in which the user can choose the smoothness at will. In this section, we consider the Gaussian case and show that the theory developed in Section 4 is applicable to this model for a wide range of smoothness. 5.1 Stochastic model For simplicity we assume $$D\,{\subset \mathbb{R}^{d}}$$ is a bounded convex polygonal domain. Consider a wavelet system $$(\varphi _{\xi })_{\xi \in \nabla }$$ that is a Riesz basis for $$L^2(D)$$. We now explain the notation and outline the standard properties we assume. Each index $$\xi \in \nabla $$ typically encodes the scale, often denoted by $$|\xi |$$, as well as the spatial location and the type of the wavelet. Since our analysis does not rely on the choice of wavelet type, we often use the notation $$\xi =(\ell ,k)$$, and $$\nabla =\{(\ell ,k)\mid \ell \geqslant \ell _0,k\in \nabla _{\ell }\}$$, where $$\nabla _{\ell }$$ is some countable index set. The scale level ℓ of $$\varphi _\xi $$ is denoted by $$|\xi |=|(\ell ,k)|=\ell $$.
Furthermore, $$(\widetilde{\varphi }_{\xi })_{\xi \in \nabla }$$ denotes the dual wavelet basis, i.e., $$\langle{\varphi }_{\xi }, \widetilde{\varphi }_{\xi ^{\prime}} \rangle _{L^2(D)}=\delta _{\xi \xi ^{\prime}}$$, $$\xi ,\xi ^{\prime}\in \nabla $$. In the following, $$\alpha \lesssim \beta $$ means that $$\alpha $$ can be bounded by some constant times $$\beta $$ uniformly with respect to any parameters on which $$\alpha $$ and $$\beta $$ may depend. Further, $$\alpha \sim \beta $$ means that $$\alpha \lesssim \beta $$ and $$\beta \lesssim \alpha $$. We list the assumption on wavelets: (W1) The wavelets $$(\varphi _{\xi })_{\xi \in \nabla }$$ form a Riesz basis for $$L^2(D)$$. (W2) The cardinality of the index set $${\nabla _{\ell }}$$ satisfies $$\#{\nabla _{\ell }}=C_{\nabla } 2^{\ell d}$$ for some constant $$C_{\nabla }>0$$, where d is the spatial dimension of D. (W3) The wavelets are local. That is, the supports of $$\varphi _{{\ell ,k}}$$ are contained in balls of diameter $$\sim{2^{-\ell }}$$ and do not overlap too much in the following sense: there exists a constant M > 0 independent of ℓ such that for each given ℓ for any x ∈ D, \begin{equation} \#\{ k\in\nabla_{\ell} \mid \varphi_{\ell,k} (x)\neq 0 \} \leqslant M. \end{equation} (5.1) (W4) The wavelets satisfy the cancellation property \begin{equation*} |\langle v,\varphi_{\xi}\rangle_{L^2(D)}| \lesssim 2^{-|\xi|(\frac{d}{2}+\tilde{m})}|v|_{W^{\tilde{m},\infty}(\operatorname{supp}(\varphi_{\xi}))} \end{equation*} for $$|\xi |\geqslant \ell _0$$ with some parameter $$\tilde{m}\in{\mathbb{N}}$$, where $$|\cdot |_{W^{\tilde{m},\infty }}$$ denotes the usual Sobolev semi-norm. That is, the inner product is small when the function v is smooth on the support $$\operatorname{supp}(\varphi _{\xi })$$. 
(W5) The wavelet basis induces characterisations of Besov spaces $$B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))$$ for $${1 \leqslant }{{\bar{\texttt{p}}}},{{\bar{\texttt{q}}}}<\infty $$ and all t with $$d\max \{1/{{\bar{\texttt{p}}}}-1,0\}<t<t_*$$ for some parameter $$t_*>0$$: \begin{equation} \left\| v \right\|_{B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))} := \left( \sum_{\ell=\ell_0}^{\infty} 2^{\ell\big( t+d\big(\frac12-\frac{1}{{{\bar{\texttt{p}}}}}\big) \big){{\bar{\texttt{q}}}}} \left( \sum_{k\in\nabla_{\ell}} |\langle v, \tilde{\varphi}_{\ell,k}\rangle_{L^2(D)}|^{{{\bar{\texttt{p}}}}} \right)^{\frac{{{\bar{\texttt{q}}}}}{{{\bar{\texttt{p}}}}}} \right)^{\frac1{{{\bar{\texttt{q}}}}}}. \end{equation} (5.2) The upper bound $$t_*$$ depends on the choice of wavelet basis. Since the values of t we consider are typically small, here for simplicity we take (5.2) as the definition of the Besov norm. (W6) The wavelets satisfy \begin{equation} \sup_{x\in D}|\varphi_{\ell,k}(x)|=C_{\varphi} 2^{\frac{\beta_0 d}{2}\ell} \qquad \textrm{ with some }{\beta_0}\in\mathbb{R_{+}}, \end{equation} (5.3) for some constant $$C_{\varphi }>0$$. Typically we have $$\varphi _{\ell ,k}(x)\sim 2^{\frac{d}2\ell }\psi (2^{\ell }(x-x_{\ell ,k}))$$ for some bounded function $$\psi $$. In this case we have $$\beta _0=1.$$ See Cioica et al. (2012, Section 2.1) and references therein for further details. See also Cohen (2003); DeVore (1998); Urban (2002). We now investigate a stochastic model expanded in the wavelet basis described above. Let $$\{Y_{\ell ,k}\}$$ be a collection of independent standard normal random variables on a suitable probability space $$(\varOmega ^{\prime},{\mathscr{F}}^{\prime},\mathbb{P}^{\prime})$$.
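To make (W2), (W3) and (W6) concrete, here is a minimal sketch using the one-dimensional Haar system on D = (0, 1) (our own example; the analysis above does not fix a particular wavelet family), for which $$C_{\nabla}=1$$, M = 1 and $$\beta_0=1$$:

```python
def haar(ell, k, x):
    # Haar wavelet phi_{ell,k}(x) = 2^(ell/2) h(2^ell x - k) on D = (0, 1),
    # with mother wavelet h = 1 on [0, 1/2) and h = -1 on [1/2, 1).
    t = 2**ell*x - k
    if 0 <= t < 0.5:
        return 2**(ell/2)
    if 0.5 <= t < 1:
        return -2**(ell/2)
    return 0.0

ell, x = 4, 0.3
# (W2): exactly 2^{ell d} wavelets per level (here d = 1, C_nabla = 1).
level_k = range(2**ell)
# (W3): supports [k 2^-ell, (k+1) 2^-ell) are disjoint with diameter 2^-ell,
# so at most M = 1 wavelet per level is nonzero at any given x.
assert len([k for k in level_k if haar(ell, k, x) != 0.0]) <= 1
# (W6): sup_x |phi_{ell,k}(x)| = 2^{ell/2}, i.e. C_phi = 1 and beta_0 = 1.
assert max(abs(haar(ell, 0, t)) for t in [2.0**-ell/4, 3*2.0**-ell/4]) == 2**(ell/2)
```

With the scaling $$\sigma_{\ell}=2^{-\beta_1 d\ell/2}$$ introduced in (5.5) below, the condition $$\beta_1>1$$ then delivers the level-wise summability used there.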
We assume the random field (1.2) is given with T such that \begin{equation} T(x,{\boldsymbol{y}}^{\prime}) = \sum_{\ell=\ell_0}^\infty\sum_{k\in\nabla_{\ell}} Y_{\ell,k}({\boldsymbol{y}}^{\prime}) \sigma_{\ell} \varphi_{\ell,k}(x), \end{equation} (5.4) where \begin{equation} \sigma_{\ell}:=2^{-\frac{\beta_1d }2\ell}\textrm{ with }\beta_1>1. \end{equation} (5.5) Thanks to the decaying factor $$\sigma _{\ell }$$, in view of (W5) the series (5.4) converges $$\mathbb{P}^{\prime}$$-almost surely in $$L^2(D)$$: $$\mathbb{E}_{\mathbb{P}^{\prime}} \big( \sum _{\ell =\ell _0}^{\infty } \sum _{k\in \nabla _{\ell }} Y_{\ell ,k}({\boldsymbol{y}}^{\prime})^2 \sigma _{\ell }^2\big) =C_{\nabla }\sum _{\ell =\ell _0}^{\infty } 2^{-(\beta _1-1)d \ell } <\infty $$. Further, the factor $$\sigma _{\ell }$$ will ensure that the system $$\{\sigma _{\ell } \varphi _{\ell ,k}\}$$ satisfies condition (2.7). To replace (1.2), we consider the following log-normal stochastic model: \begin{equation} a(x,{\boldsymbol{y}}^{\prime})=a_*(x) + {a_0(x)} \exp \left( \sum_{\ell=\ell_0}^\infty \sum_{k\in\nabla_{\ell}} Y_{\ell,k}({\boldsymbol{y}}^{\prime})\sigma_{\ell} \varphi_{\ell,k}(x)\right). \end{equation} (5.6) In the following, we argue that we can reorder $$\sigma _{\ell } \varphi _{\ell ,k}$$ lexicographically as $$\sigma _{j} \varphi _{j}$$ and view it as $$\psi _j$$, while keeping the law. Throughout this section, we assume that the parameters $$\beta _0$$ and $$\beta _1$$ satisfy \begin{equation} {0}<{\beta_1} -{\beta_0}, \end{equation} (5.7) and that the point evaluation $$\varphi _{\ell ,k}(x)$$ ((ℓ, k) ∈ ∇) is well defined for any x ∈ D. Under this assumption, reordering $$(Y_{\ell ,k}\sigma _{\ell } \varphi _{\ell ,k})$$ lexicographically does not change the law of (5.4) on $${\mathbb{R}}^{D}$$. To see this, by Gaussianity it suffices to show that the covariance function $$\mathbb{E}_{\mathbb{P}^{\prime}}[T(\cdot )T(\cdot )]\colon D\times D\to{\mathbb{R}}$$ is invariant under the reordering.
Fix x ∈ D arbitrarily. For any L, L′ (L>L′), from the independence of $$\{Y_{\ell ,k}\}$$ we have \begin{equation} \mathbb{E}_{\mathbb{P}^{\prime}} \left( \sum_{\ell=\ell_0}^{L} \sum_{k\in\nabla_{\ell}} Y_{\ell,k}({\boldsymbol{y}}^{\prime}) \sigma_{\ell} \varphi_{\ell,k}(x) - \sum_{\ell=\ell_0}^{L^{\prime}} \sum_{k\in\nabla_{\ell}} Y_{\ell,k}({\boldsymbol{y}}^{\prime}) \sigma_{\ell} \varphi_{\ell,k}(x) \right)^2= \sum_{\ell=L^{\prime}+1}^L \sum_{k\in\nabla_{\ell}} {\sigma_{\ell}^2} \varphi_{\ell,k}^2(x) \end{equation} (5.8) \begin{equation} \leqslant{C_{\varphi}^2M \sum_{\ell=L^{\prime}+1}^L 2^{-(\beta_1-\beta_0 )d\ell}}, \end{equation} (5.9) which tends to 0 as $$L^{\prime}\to \infty $$ since $$\beta _1>\beta _0$$. Hence, the sequence $$\big \{\sum _{\ell =\ell _0}^{L} \sum _{k\in \nabla _{\ell }} Y_{\ell ,k}({\boldsymbol{y}}^{\prime}) \sigma _{\ell } \varphi _{\ell ,k}(x)\big \}_L$$ is convergent in $$L^2(\varOmega ^{\prime},\mathbb{P}^{\prime})$$. The continuity of the inner product $$\mathbb{E}_{\mathbb{P}^{\prime}}[\cdot ,\cdot ]$$ on $$L^2(\varOmega ^{\prime})$$ in each variable yields \begin{equation} \mathbb{E}_{\mathbb{P}^{\prime}}[T(x_1)T(x_2)] = \sum_{\ell=\ell_0}^{\infty} \sum_{k\in\nabla_{\ell}} \sum_{\ell^{\prime}=\ell_0}^{\infty} \sum_{k^{\prime}\in\nabla_{\ell^{\prime}}} \mathbb{E}_{\mathbb{P}^{\prime}}[ Y_{\ell,k}({\boldsymbol{y}}^{\prime}) \sigma_{\ell} \varphi_{\ell,k}(x_1) Y_{\ell^{\prime},k^{\prime}}({\boldsymbol{y}}^{\prime}) \sigma_{\ell^{\prime}} \varphi_{\ell^{\prime},k^{\prime}}(x_2) ] \end{equation} (5.10) \begin{equation} =\sum_{\ell=\ell_0}^{\infty} \sum_{k\in\nabla_{\ell}} \sigma_{\ell}^2 \varphi_{\ell,k}(x_1)\varphi_{\ell,k}(x_2) \qquad \text{for any {$x_1$}, {$x_2\in D$}.} \end{equation} (5.11) Moreover, we have $$\sum _{\ell =\ell _0}^{\infty } \sum _{k\in \nabla _{\ell }} \sigma _{\ell }^2 |\varphi _{\ell ,k}(x_1)\varphi _{\ell ,k}(x_2)| \leqslant{C_{\varphi }^2M \sum _{\ell =\ell _0}^{\infty } 2^{-(\beta _1-\beta _0 )d\ell }}<\infty $$, so this series converges absolutely and is invariant under reordering.
Hence $$ \mathbb{E}_{\mathbb{P}^{\prime}}[T(x_1)T(x_2)]=\sum_{j\geqslant1}\sigma_{j}^2\varphi_{j}(x_1)\varphi_{j}(x_2), \qquad x_1,\,x_2\in D. $$ By a similar argument, the reordered series $$\sum _{j\geqslant 1} y_j\sigma _{j}\varphi _{j}(x)$$ converges in $$L^2(\varOmega )$$ for each x ∈ D and has the covariance function $$\sum _{\ell =\ell _0}^{\infty } \sum _{k\in \nabla _{\ell }} \sigma _{\ell }^2 \varphi _{\ell ,k}(x_1)\varphi _{\ell ,k}(x_2)$$. Hence the law on $${\mathbb{R}}^{D}$$ is the same. Thus, abusing the notation slightly we write T(⋅, y) := T(⋅, y′), $$y_{\ell ,k}:=Y_{\ell ,k}({\boldsymbol{y}}^{\prime})$$, $$\varOmega ={\mathbb{R}}^{{\mathbb{N}}}:=\varOmega ^{\prime}$$, $${\mathscr{F}}:={\mathscr{F}}^{\prime}$$, $$\mathbb{P}_{Y}:=\mathbb{P}^{\prime}$$ and $$\mathbb{E}[\cdot ]:=\mathbb{E}_{\mathbb{P}^{\prime}}[\cdot ]$$. Remark 5.1 Our theory at present is restricted to Gaussian random fields with covariance functions of the form (5.11). Although there are attempts to represent a given Gaussian random field with wavelet-like functions (see Bachmayr et al., 2017b and references therein), it does not seem that arbitrary covariance functions, in particular Matérn covariance functions, are representable as in (5.11) with wavelets satisfying (W1)–(W6). Next we discuss the applicability of the theory developed in Section 4 to the wavelet stochastic model above. We need to check Assumption 2.1. Take $$\theta \in (0,\frac{d}2({\beta _1}-{\beta _0}))$$ and for $$\xi =(\ell ,k)$$ let \begin{equation} \rho_{\xi}:=c2^{\theta |\xi|}=c2^{\theta \ell} \end{equation} (5.12) with some constant $$0<c<{\ln 2}\big (M{C_{\varphi }}\sum _{\ell =\ell _0}^\infty 2^{\ell (\theta -\frac{d}2({\beta _1}-{\beta _0}))}\big )^{-1}$$.
Then by virtue of the locality property (5.1) we have (2.7): \begin{equation} \sup_{x\in D}\sum_{\xi} \rho_{\xi} |\sigma_{\xi}\varphi_{\xi}(x)| \leqslant \sum_{\ell=\ell_0}^\infty \rho_{\ell} \sup_{x\in D} \sum_{k\in\nabla_{\ell}} \left|2^{-\frac{\beta_1d\ell}2}\varphi_{\ell,k}(x)\right| \leqslant cM{C_{\varphi}}\sum_{\ell=\ell_0}^\infty 2^{\theta \ell} 2^{-\frac{\beta_1d\ell}2} 2^{\frac{\beta_0 d}{2}\ell}<\ln2. \end{equation} (5.13) Further, we note that after reordering, for sufficiently large j we have \begin{equation} \sup_{x\in D}|\sigma_{j}\varphi_{j}(x)| \sim j^{-\frac12({\beta_1} -{\beta_0 })}. \end{equation} (5.14) To see this, first recall that there are $$\mathcal{O}(2^{{\ell d}})$$ wavelets at level ℓ. Thus, for an arbitrary but sufficiently large j we have $$ 2^{\ell _jd}\lesssim j \lesssim 2^{(\ell _j+1)d} $$ for some $$\ell _j\geqslant \ell _0$$, which is equivalent to $$ 2^{-(\ell_j+1)d}\lesssim j^{-1} \lesssim 2^{-\ell_jd}.$$ Now, let $$\xi _j\in \nabla _{\ell _j}$$ be the index corresponding to j. Since $$|\xi _j| = \ell _j$$, noting $$\beta _1-\beta _0>0$$ we have \begin{equation} \sup_{x\in D} |\sigma_{j} \varphi_{j} (x)| = \sup_{x\in D} |\sigma_{\ell_j} \varphi_{\xi_j} (x)| {=} C_{\varphi} 2^{-\frac{\beta_1 d}2 \ell_j} 2^{\frac{\beta_0 d}{2} \ell_j} \lesssim C_{\varphi}{2^{\frac{d}2\beta^{\ast}}} j^{-\frac12({\beta_1} -{\beta_0} )}\quad{\textrm{for any }\beta^{\ast}\geqslant \beta_1-\beta_0.} \end{equation} (5.15) The opposite direction can be derived as \begin{equation} j^{-\frac12({\beta_1} -{\beta_0} )} \lesssim 2^{{-\ell_j d(\frac12(\beta_1-\beta_0))}} =\frac{1}{C_{\varphi}} \sup_{x\in D}|\sigma_{j} \varphi_{j}(x)|. \end{equation} (5.16) Similarly, we have \begin{equation} \rho_{j}\sim j^{\frac{\theta}{d}}.
\end{equation} (5.17) Thus, in order to have $$\sum _{j\geqslant 1} \frac 1{\rho _j}<\infty $$, the weakest summability condition on $$(1/\rho _j)$$ required by Assumption 2.1, it is necessary and sufficient to have $$\theta> d$$. The following theorem summarises the discussion above. Theorem 5.2 Suppose the random coefficient (1.2) is given by T as in (5.4) with $$(\varphi _{\ell ,k})$$ that satisfies (5.3), and non-negative numbers $$(\sigma _{\ell })$$ that satisfy (5.5). Let $$(\rho _{\xi })$$ be defined by (5.12). Further, assume $${\beta _0}$$ and $$\beta _1$$ satisfy \begin{equation} \frac{2}{q} <{\beta_1} -{\beta_0 } \end{equation} (5.18) for some q ∈ (0, 1]. Then the reordered system $$(\sigma _{j}\varphi _{j})$$ with the reordered $$(\rho _j)$$ satisfies Assumption 2.1, and under the same conditions on $$w_j(t)$$, $$\alpha _j$$ and $$\varsigma _j$$ as in Theorem 4.4 we have the QMC error bound (4.17): \begin{equation*} \sqrt{ \mathbb{E}^{\boldsymbol{\varDelta}} \left| I_s(F) - \mathcal{Q\!}_{s,n}(\boldsymbol{\varDelta};F) \right|{}^2 } = \begin{cases} \mathcal{O}\!(n^{-(1-\delta)}) &\textrm{when } 0<q\leqslant\frac23,\\ \mathcal{O}\!(n^{-\frac{2-q}{2q}}) &\textrm{when } \frac23<q\leqslant1, \end{cases} \end{equation*} where $$\delta \in (0,1/2]$$ is arbitrary, and the implied constants are as in Theorem 4.4. Proof. Take $$\theta \in (\frac{d}q,\frac{d}2(\beta _1-\beta _0))$$, define $$(\rho _{\xi })$$ as in (5.12), reorder the components lexicographically and denote the reordered $$(\rho _{\xi })$$ by $$(\rho _j)$$. Then we have (2.8), \begin{equation} \sum_{j\geqslant 1} \left( \frac1{\rho_j}\right)^{q} \lesssim \sum_{j\geqslant 1} \left( \frac1{j}\right)^{ \frac{{q}\theta}d }<\infty. \end{equation} (5.19) Further, from $${\theta -\frac{\beta _1d}{2}+\frac{\beta _0 d}{2}}<0$$ we have (5.13), and thus (2.7) holds. Hence, from the discussion in this section, Assumption 2.1 is satisfied, and thus in view of Theorem 4.4 we have (4.17).
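The reordering argument above is easy to check numerically. The following Python sketch is our own illustration (not part of the paper): it assumes a hypothetical d = 1 system with $$\beta _0=1$$, $$C_{\varphi }=1$$ and $$\#\nabla _{\ell }=2^{\ell }$$, builds the lexicographic ordering, and verifies the decay (5.14), the growth (5.17) and the summability (5.19).

```python
# Hypothetical d = 1 setting (all parameter choices are ours, for illustration):
# beta0 = 1, #nabla_l = 2^l, C_phi = 1; theta is taken inside (d/q, (d/2)(beta1-beta0)).
d, beta0, beta1, q = 1, 1.0, 4.0, 1.0
theta, c = 1.2, 0.1                      # d/q = 1 < theta = 1.2 < (d/2)(beta1-beta0) = 1.5

sup_sigma_phi, rho = [], []
for level in range(0, 16):               # lexicographic order: level by level, k within a level
    for k in range(2 ** level):          # #nabla_l = 2^l wavelets on level l
        # sup_x |sigma_l phi_{l,k}(x)| = 2^{-(beta1-beta0) d l / 2}, by (5.3) and (5.5)
        sup_sigma_phi.append(2.0 ** (-0.5 * (beta1 - beta0) * d * level))
        rho.append(c * 2.0 ** (theta * level))                  # (5.12)

# (5.14): sup|sigma_j phi_j| ~ j^{-(beta1-beta0)/2};  (5.17): rho_j ~ j^{theta/d}
ratios_decay = [s * j ** (0.5 * (beta1 - beta0)) for j, s in enumerate(sup_sigma_phi, 1)]
ratios_rho = [r / j ** (theta / d) for j, r in enumerate(rho, 1)]

# (5.19): sum_j rho_j^{-q} is finite since q*theta/d > 1 (geometric over levels)
tail = sum(r ** (-q) for r in rho)
```

The two ratio lists stay bounded away from 0 and infinity, which is exactly the meaning of $$\sim $$ in (5.14) and (5.17); the partial sums of $$\rho _j^{-q}$$ remain below the closed-form geometric bound.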
5.2 Smoothness of the stochastic model

5.2.1 Hölder smoothness of the realisations

Random fields T whose realisations are not smooth are often of interest. In this section, we see that the stochastic model (5.6) allows reasonably rough random fields (in the sense of Hölder smoothness) for d = 1, 2. The result is obtained via Sobolev embedding theorems. We provide a necessary and sufficient condition for T to have a specified Sobolev smoothness (Theorem 5.3). Such embedding results are in general optimal (see, for example, Adams & Fournier, 2003, 4.12, 4.40–4.44), and in this sense we have a sharp condition for our model to have Hölder smoothness. A building block is a Besov characterisation of the realisations that is essentially due to Cioica et al. (2012, Theorem 6). Here we define $$s:=s(L):=\sum _{\ell =\ell _0}^{L}\#(\nabla _{\ell })$$, that is, the truncation is considered in terms of the level L. Theorem 5.3 (Cioica et al., 2012, Theorem 6). Let $${{\bar{\texttt{p}}}},{{\bar{\texttt{q}}}}\in [1,\infty )$$, and $$t\in (d\max \{1/{{\bar{\texttt{p}}}}-1,0\},t_*)$$, where $$t_*$$ is the parameter in (W5). Then \begin{equation} t< d\left( \frac{\beta_1 - 1}2 \right) \end{equation} (5.20) if and only if $$T\in B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))$$ a.s. Further, if (5.20) is satisfied, then the stochastic model (5.6) satisfies $$\mathbb{E}[\left \| T^{s(L)} \right \|_{B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))}^{{{\bar{\texttt{q}}}}}]\leqslant \mathbb{E}[\left \| T \right \|_{B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))}^{{{\bar{\texttt{q}}}}}]<\infty $$ for all $$L\in{\mathbb{N}}$$. Proof. First, from the proof of Cioica et al. (2012, Theorem 6), we see that $$T\in B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))$$ a.s.
is equivalent to $$ \sum_{\ell=\ell_0}^{\infty} 2^{\ell(t+d(1/2-1/{{{\bar{\texttt{p}}}}}) ){{\bar{\texttt{q}}}}} \sigma_{\ell}^{{{\bar{\texttt{q}}}}} ( \# \nabla_{\ell} )^{{{\bar{\texttt{q}}}}/{{\bar{\texttt{p}}}}} \sim \sum_{\ell=\ell_0}^{\infty} 2^{\ell{{\bar{\texttt{q}}}} (t-\frac{d}2(\beta_1-1) )}<\infty, $$ which holds by the assumption $$t< d\big ( \frac{\beta _1 - 1}2 \big )$$. Similarly, from the proof of Cioica et al. (2012, Theorem 6) we have $$ \mathbb{E}[\left\| T \right\|_{B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))}^{{{\bar{\texttt{q}}}}}]\,\,{\lesssim} \sum_{\ell=\ell_0}^{\infty} 2^{\ell(t+d(1/2-1/{{{\bar{\texttt{p}}}}}) ){{\bar{\texttt{q}}}}} \sigma_{\ell}^{{{\bar{\texttt{q}}}}} ( \# \nabla_{\ell} )^{{{\bar{\texttt{q}}}}/{{\bar{\texttt{p}}}}}<\infty. $$ Finally, from (W5) we have $$\mathbb{E}[\left \| T^{s(L)} \right \|_{B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))}^{{{\bar{\texttt{q}}}}}] {= \sum _{\ell =\ell _0}^{L} 2^{\ell (t+d(1/2-1/{{{\bar{\texttt{p}}}}}) ){{\bar{\texttt{q}}}}} \sigma _{\ell }^{{{\bar{\texttt{q}}}}}\, \mathbb{E}[( \sum _{k\in \nabla _{\ell }}|Y_{\ell ,k}|^{{\bar{\texttt{p}}}})^{{{\bar{\texttt{q}}}}/{{\bar{\texttt{p}}}}} ]} \leqslant \mathbb{E}[\left \| T \right \|_{B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))}^{{{\bar{\texttt{q}}}}}]$$, completing the proof. To establish Hölder smoothness we employ embedding results. Before invoking them, we first establish that the realisations are continuous: we need measurability, and we must preserve the law of T on $${\mathbb{R}}^{D}$$. Since the Hölder norm involves a supremum over the uncountable set D, it is not immediately clear whether the resulting function $$\varOmega \ni \boldsymbol{y}\mapsto \left \| T(\cdot ,\boldsymbol{y}) \right \|_{C^{t_1}(\overline{D})}\in{\mathbb{R}}$$, where $$t_1\in (0,1]$$ is a Hölder exponent, is an $${\mathbb{R}}$$-valued random variable. We will see that continuity ensures measurability.
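The convergence criterion driving Theorem 5.3 is a geometric series over the levels; the short sketch below (our own check, with arbitrary illustrative parameters) shows that it converges exactly when $$t<\frac{d}2(\beta _1-1)$$.

```python
def level_sum(t, d=1, beta1=3.0, qbar=2, L=200, l0=0):
    """Partial sum of sum_l 2^{l*qbar*(t - (d/2)(beta1-1))}, the series in the proof
    of Theorem 5.3 (illustrative parameter defaults are our own)."""
    return sum(2.0 ** (l * qbar * (t - 0.5 * d * (beta1 - 1))) for l in range(l0, L + 1))

threshold = 0.5 * 1 * (3.0 - 1.0)        # (d/2)(beta1 - 1) = 1.0 for d = 1, beta1 = 3
converged = level_sum(0.5)               # t < threshold: geometric with ratio 2^{-1}, limit 2
diverging = level_sum(1.5, L=50)         # t > threshold: terms grow like 2^{l}
```

Below the threshold the partial sums stabilise at the geometric limit; above it they blow up, mirroring the "if and only if" in (5.20).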
Sobolev embeddings are achieved by choosing a suitable representative, i.e., by changing the values of a function on measure-zero subsets of D. This change could affect the law on $${\mathbb{R}}^{D}$$, since that law is determined by the laws of arbitrary finitely many random variables $$(T(x_1),\dotsc ,T(x_m))$$ ($$\{x_i\}_{i=1,\dotsc ,m}\subset D$$) on $${\mathbb{R}}^{m}$$. To avoid this, we establish the existence of a continuous modification, thereby selecting from the outset the continuous representative of a Besov function that respects the law of T. We want realisations of T to have continuous paths. Now, suppose that there exist positive constants $$\iota _1$$, $$C_{\mathrm{KT}}$$ and $$\iota _2(>d)$$ satisfying \begin{equation} \mathbb{E}[|T(x_1)-T(x_2)|^{\iota_1}]\leqslant C_{\mathrm{KT}}\left\| x_1 - x_2 \right\|_2^{\iota_2} \qquad \textrm{ for any }x_1,x_2\in D. \end{equation} (5.21) Then by virtue of Kolmogorov–Totoki’s theorem (Kunita, 2004, Theorem 4.1), T has a continuous modification. Further, the continuous modification is uniformly continuous on D and can be extended to the closure $$\overline{D}$$. Thus, we want T to satisfy (5.21). Hölder smoothness of $$(\varphi _{\ell ,k})$$ is sufficient for (5.21) to hold. Proposition 5.4 Suppose that $$(\sigma _{\ell })$$ satisfies (5.5). Further, suppose that the functions $$\varphi _{\ell ,k}$$, (ℓ, k) ∈ ∇, are $$t_0$$-Hölder continuous on D for some $$t_0\in (0,1]$$, with Hölder constants bounded uniformly in (ℓ, k). Then (5.21) holds; in particular, T has a modification that is uniformly continuous on D and can be extended to the closure $$\overline{D}$$. Proof. It suffices to show that (5.21) holds. Fix $$x_1,x_2\in D$$ arbitrarily.
First note that \begin{equation} {\sigma_*^2}:=\mathbb{E}[|T(x_1)-T(x_2)|^{2}] =\sum_{\ell=\ell_0}^{\infty} \sum_{k\in\nabla_{\ell}} \sigma_{\ell}^2( \varphi_{\ell,k}(x_1) - \varphi_{\ell,k}(x_2))^2 \end{equation} (5.22) \begin{equation} {\leqslant C} \left\| x_1 - x_2 \right\|_2^{2t_0} \sum_{\ell=\ell_0}^{\infty} \sum_{k\in\nabla_{\ell}} \sigma_{\ell}^2<\infty, \end{equation} (5.23) where C is a uniform $$t_0$$-Hölder constant for the system $$(\varphi _{\ell ,k})$$. Then, since $$T(x_1)-T(x_2)\sim \mathcal{N}(0,{\sigma _*^2})$$, with $$X_{\mathrm{std}}\sim \mathcal{N}(0,1)$$ we observe that \begin{equation} \mathbb{E}[|T(x_1)-T(x_2)|^{2m}] =\mathbb{E}[|X_{\mathrm{std}} \sigma_*|^{2m}] =\sigma_*^{2m} \mathbb{E}[|X_{\mathrm{std}}|^{2m}] \end{equation} (5.24) \begin{equation} {\leqslant C^m}\left\| x_1 - x_2 \right\|_2^{2t_0 m} \left( \sum_{\ell=\ell_0}^{\infty} \sum_{k\in\nabla_{\ell}} \sigma_{\ell}^2 \right)^{{m}} \mathbb{E}[|X_{\mathrm{std}}|^{2m}]\,\,\textrm{for any }m\in{\mathbb{N}}. \end{equation} (5.25) Taking $$m>\frac{d}{2t_0}$$, we have (5.21) with $$\iota _1:=2m$$, $$C_{\mathrm{KT}}:= C^m \big ( \sum _{\ell =\ell _0}^{\infty } \sum _{k\in \nabla _{\ell }} \sigma _{\ell }^2 \big )^{m} \mathbb{E}[|X_{\mathrm{std}}|^{2m}]$$ and $$\iota _2:=2t_0m(>d)$$, and thus the statement follows. In the following, we assume $$\varphi _{\ell ,k}$$ is $$t_0$$-Hölder continuous on D for some $$t_0\in (0,1]$$. Note that under this assumption, $$\varphi _{\ell ,k}$$ extends to a continuous function on $$\overline{D}$$. Using the fact that $$T(\cdot ,\boldsymbol{y})\in B^t_{2}(L_{2}(D))=H^{t}(D)$$ a.s., we now establish the expected Hölder smoothness of the random coefficient a. This implies spatial regularity of the solution u, given suitable regularity of D and f. In turn, for example, the convergence rate of the finite element method with piecewise linear elements is readily obtained, under a certain condition on the output functional $$\mathcal{G}$$. See Teckentrup et al. (2013, Lemma 3.3) or Graham et al.
(2015, Theorem 6). First, we argue that to analyse the Hölder smoothness of the realisations of a, without loss of generality we may assume $$a_*\equiv 0$$ and $$a_0\equiv 1$$. To see this, suppose $$a_*$$, $$a_0$$ in (5.6) satisfy $$a_*,a_0\in C^{t_1}(\overline{D})$$ for some $$t_1\in (0,1]$$. By virtue of \begin{equation} |\textit{e}^{a} - \textit{e}^{b}|=\bigg|\int_{a}^{b}\textit{e}^{r}{\,{\textrm{d}}r} \bigg| \leqslant \max\{\textit{e}^{a},\textit{e}^{b}\}|b-a|\leqslant (\textit{e}^{a}+\textit{e}^{b})|b-a| \quad \textrm{for all}\,\,a,b\in{\mathbb{R}}, \end{equation} (5.26) for any $$x_0,x_1,x_2\in \overline{D}$$ ($$x_1\neq x_2$$) we have \begin{equation} {\big|\textit{e}^{T(x_0)}\big|+\frac{\big|\textit{e}^{T(x_1)}-\textit{e}^{T(x_2)}\big|}{\left\| x_1-x_2 \right\|_2^{t_1}} \leqslant \left(\sup_{x\in\overline{D}}\big|\textit{e}^{T(x)}\big|\right) \left(1+2 \frac{|T(x_1) - T(x_2)|}{\left\| x_1-x_2 \right\|_2^{t_1}} \right).} \end{equation} (5.27) Noting that $$ \| a_0 \textit{e}^{T} \|_{C^{t_1}(\overline{D})}\leqslant C_{t_1} \left \| a_0 \right \|_{C^{t_1}(\overline{D})} \| \textit{e}^{T} \|_{C^{t_1}(\overline{D})} $$ (see, for example, Gilbarg & Trudinger, 1983, p. 53) we have \begin{equation} \left\| a \right\|_{C^{t_1}(\overline{D})} \leqslant{ \left\| a_* \right\|_{C^{t_1}(\overline{D})} + C_{t_1}\left\| a_0 \right\|_{C^{t_1}(\overline{D})} \left(\sup_{x\in\overline{D}}|\textit{e}^{T(x)}|\right)} \left(1+2 \left\| T \right\|_{C^{t_1}(\overline{D})} \right). \end{equation} (5.28) Thus, given $$a_*,a_0\in C^{t_1}(\overline{D})$$, it suffices to show $$(\sup _{x\in \overline{D}}|\textit{e}^{T(x)}|) (1+2 \left \| T \right \|_{C^{t_1}(\overline{D})})<\infty $$ for the Hölder smoothness of the realisations of a. Therefore, in the rest of this subsection, for simplicity we assume $$a_*\equiv 0$$ and $$a_0\equiv 1$$.
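The chain of bounds (5.26)–(5.28) can be sanity-checked on a grid. The sketch below is our own illustration: it uses an arbitrary smooth stand-in for one realisation of T (with $$a_*\equiv 0$$, $$a_0\equiv 1$$) and verifies the discrete analogue of $$\|\textit{e}^{T}\|_{C^{t_1}}\leqslant (\sup \textit{e}^{T})(1+2\|T\|_{C^{t_1}})$$.

```python
import numpy as np

def holder_norm(f, x, t1):
    """Discrete C^{t1} norm on a grid: sup|f| + sup_{i != j} |f_i - f_j| / |x_i - x_j|^{t1}."""
    dif = np.abs(f[:, None] - f[None, :])          # all pairwise differences of values
    dx = np.abs(x[:, None] - x[None, :]) ** t1     # pairwise distances to the power t1
    np.fill_diagonal(dx, np.inf)                   # exclude i == j (0/inf = 0 below)
    return np.max(np.abs(f)) + np.max(dif / dx)

x = np.linspace(0.0, 1.0, 200)
T = np.sin(3 * x) + 0.3 * np.cos(7 * x)            # smooth stand-in for one realisation of T
t1 = 0.5                                           # illustrative Hoelder exponent

lhs = holder_norm(np.exp(T), x, t1)                # ||e^T||_{C^{t1}} on the grid
rhs = np.exp(T).max() * (1 + 2 * holder_norm(T, x, t1))
```

The inequality `lhs <= rhs` holds pointwise on the grid for any sample path, because the mean value theorem bounds each difference quotient of $$\textit{e}^{T}$$ by $$\sup \textit{e}^{T}$$ times the corresponding quotient of T.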
In order to invoke embedding results we assume $$t_*$$ satisfies $$\frac{d}2<\lfloor t_*\rfloor $$, and that we can take $$t\in (0,\frac{d}2(\beta _1-1))$$ such that $$\frac{d}2<\lfloor t\rfloor $$. For the latter to hold, it suffices to take $$\beta _1\geqslant 3$$, which implies $$\frac{d}2<\lfloor \frac{d}2(\beta _1-1)\rfloor $$; this always holds whenever the presented QMC theory is applicable. See Section 5.2.2. Now, take $$t_1\in (0,1]\cap (0,\lfloor t\rfloor -\frac{d}2]$$. Then from $$B^t_{2}(L_{2}(D))=H^{t}(D)$$ and the Sobolev embedding (for example, Adams & Fournier, 2003, Theorem 4.12) we have \begin{equation} \left\| a \right\|_{C^{t_1}(\overline{D})}\lesssim \left(\sup_{x\in\overline{D}}|a(x)|\right) \left(1+2 \left\| T \right\|_{B^t_{2}(L_{2}(D))} \right). \end{equation} (5.29) Similarly, we have $$\left \| a^{s} \right \|_{C^{t_1}(\overline{D})} \lesssim (\sup _{x\in \overline{D}}|a^s(x)|)(1+2 \left \| T^s \right \|_{B^t_{2}(L_{2}(D))} )$$. We want to take the expectation of $$\left \| a \right \|_{C^{t_1}(\overline{D})}$$. To do this we establish the $${\mathscr{F}}/\mathcal{B}({\mathbb{R}})$$-measurability of $$\boldsymbol{y}\mapsto \left \| a(\cdot ,\boldsymbol{y}) \right \|_{C^{t_1}(\overline{D})}$$. Taking continuous modifications of T if necessary, we may assume that the paths of a are continuous on $$\overline{D}$$. Then from the continuity of the mapping $$ \{(x_1,x_2)\in\overline{D}\times\overline{D}\mid x_1\neq x_2 \}\ni (x_1,x_2)\mapsto \frac{|a(x_1)-a(x_2)|}{\left\| x_1-x_2 \right\|_2^{t_1}}\in{\mathbb{R}}, $$ with a countable set G that is dense in $$\{(x_1,x_2)\in \overline{D}\times \overline{D}\mid x_1\neq x_2 \}\subset{\mathbb{R}}^d\times{\mathbb{R}}^d$$ we have \begin{equation} \sup_{x_1,x_2\in \overline{D},\,x_1\neq x_2} \frac{|a(x_1)-a(x_2)|}{\left\| x_1-x_2 \right\|_2^{t_1}} = \sup_{(x_1,x_2)\in G} \frac{|a(x_1)-a(x_2)|}{\left\| x_1-x_2 \right\|_2^{t_1}}.
\end{equation} (5.30) Thus, $$\boldsymbol{y}\mapsto \left \| a(\cdot ,\boldsymbol{y}) \right \|_{C^{t_1}(\overline{D})}$$, and by the same argument, $$\boldsymbol{y}\mapsto \left \| a^s(\cdot ,\boldsymbol{y}) \right \|_{C^{t_1}(\overline{D})}$$, are $$\mathcal{B}({\mathbb{R}}^{\mathbb{N}})/\mathcal{B}(\overline{{\mathbb{R}}})$$-measurable, where $$\overline{{\mathbb{R}}}:={\mathbb{R}}\cup \{-\infty \}\cup \{\infty \}$$. From $$\mathbb{E}[\left \| T^s \right \|_{C(\overline{D})}]\lesssim \mathbb{E}[\left \| T^s \right \|_{ B^t_{2}(L_{2}(D)) }]\leqslant \mathbb{E}[\left \| T \right \|_{ B^t_{2}(L_{2}(D)) }] \lesssim{(\sum _{\ell =\ell _0}^{\infty } 2^{\ell (2t-{d}(\beta _1-1) )} )^{1/2}}<\infty $$ independently of s, and $$\mathbb{E}[\left \| T \right \|_{C(\overline{D})}]\lesssim{(\sum _{\ell =\ell _0}^{\infty } 2^{\ell (2t-d(\beta _1-1) )} )^{1/2}}<\infty $$, following the discussion by Charrier (2012, Proof of Proposition 3.10) utilising Fernique’s theorem, there exists a constant $$M_p>0$$ independent of s such that \begin{align} \max\Big\{\mathbb{E}[\exp(p\|{T^s(\cdot,\boldsymbol{y})}\|_{C(\overline{D})})] , \mathbb{E}[\exp(p\|{T(\cdot,\boldsymbol{y})}\|_{C(\overline{D})})] \Big\}<M_{p}, \end{align} (5.31) for any $$p\in (0,\infty )$$. Together with $$\sup _{x\in \overline{D}}|a(x)|\leqslant \exp (\sup _{x\in \overline{D}}|T(x)|)$$ we have $$ \max\left\{\mathbb{E}[(\sup_{x\in\overline{D}}|a^s(x)|)^{ 2p }], \mathbb{E}[(\sup_{x\in\overline{D}}|a(x)|)^{ 2p }]\right\}<M_{2p} \quad \text{for any {$p\in(0,\infty)$}.}$$ Hence, from (5.29) we conclude that \begin{equation} \mathbb{E}[\left\| a \right\|_{C^{t_1}(\overline{D})}^{ p }] \leqslant \max\{1,2^{p-1/2}\}\sqrt{\mathbb{E}\left[ \left(\sup_{x\in\overline{D}}|a(x)|\right)^{ 2p } \right]} \sqrt{1+4^{ p } \mathbb{E}\left[ \left\| T \right\|_{B^t_{2}(L_{2}(D))}^{2p} \right]} <\infty.
\end{equation} (5.32) Similarly, we have $$ \mathbb{E}[\left\| a^s \right\|_{C^{t_1}(\overline{D})}^{ p }] \leqslant \max\{1,2^{p-1/2}\}\sqrt{\mathbb{E}\left[ \left(\sup_{x\in\overline{D}}|a^s(x)|\right)^{ 2p } \right]} \sqrt{1+4^{ p } \mathbb{E}\left[ \left\| T^s \right\|_{B^t_{2}(L_{2}(D))}^{2p} \right]}<\infty, $$ where the right-hand side can be bounded independently of s.

5.2.2 On the smoothness of the realisations our theory can treat

We now discuss the smoothness of the realisations that the currently developed theory permits. From the conditions imposed on the basis functions, e.g., the summability conditions, random fields with smooth realisations are easily within the scope of the QMC theory applied to PDEs. Here, the capability of accommodating reasonably rough random fields is of interest. Thus, we are interested in the smallest $$\beta _1$$ our theory allows us to take: given $$\beta _0$$, in view of Theorem 5.3 the smaller the decay rate $$\beta _1$$ of $$\sigma _\ell $$ is, the rougher the realisations of (5.4) are. Typically, $$L^2$$ wavelet Riesz bases have growth rate $$\beta _0=1$$. Then the condition $$2 < \beta _1 - \beta _0 $$, the weakest condition on $$\beta _1$$ in Theorem 5.2, is equivalent to \begin{equation} \beta_1=3+{\frac{2}{d}}\varepsilon\quad\textrm{ for some }\varepsilon>0, \end{equation} (A1) where the factor $$\frac{2}{d}$$ is introduced to simplify the notation in the following discussion. In the following, we let $$\beta _0=1$$, take $$\beta _1$$ as in (A1) and discuss the smoothness of (5.4) achieved by taking small $$\varepsilon>0$$, i.e., the smallest $$\beta _1$$ possible. Our discussion will be based on Sobolev embedding results. We first note that from $$B^t_{2}(L^{2}(D))=H^{t}(D)$$, in view of Theorem 5.3, $$T(\cdot , \boldsymbol{y})\in H^{t}(D)$$ a.s. if and only if condition (5.20) holds. In applications, d = 1, 2, 3 are of interest. We recall the following embedding results. See, for example, Adams & Fournier (2003, p.
85). For d = 1, 2 and 3 respectively, with $$\beta _1=3+{{2\varepsilon }/d}$$ the condition (5.20) reads $$t<1 + \varepsilon $$, $$t<2 + \varepsilon $$ and $$t<3 + \varepsilon $$. See Table 1, which summarises condition (5.20) with (A1).

Table 1. For $$\beta _0=1$$, the upper bound on the exponent t for realisations of T to have $$H^{t}$$-smoothness is $$\frac{d}2(\beta _1-1)$$. Column 2 shows this upper bound as the spatial dimension d varies. Column 3 shows the smallest bound on t allowed by the presented QMC theory, i.e., the case $$\beta _1=3+2\varepsilon /d$$ for small $$\varepsilon>0$$.

d = 1: $$t<(\beta _1 - 1)/2$$; with (A1): $$t<1 + \varepsilon $$
d = 2: $$t<(\beta _1 - 1)$$; with (A1): $$t<2 + \varepsilon $$
d = 3: $$t<\frac{3}2(\beta _1 - 1)$$; with (A1): $$t<3+\varepsilon $$

For d = 1 and d = 2, the realisations allowed by $$t<1+\varepsilon $$ and $$t<2+\varepsilon $$, respectively, seem to be rough enough. For d = 1, $$H^1(D)$$ is characterised as a space of absolutely continuous functions. Since in practice we employ a suitable numerical method to solve PDEs, the validity of point evaluations demands a(⋅, y) ∈ C(D). For d = 2, we know $$H^2(D)$$ can be embedded into $$C^{0,t}(\overline{D})$$ (t ∈ (0, 1)). This is a standard assumption for the convergence of the FEM with hat-function elements on polygonal domains. For d = 3, we have $$t<3+\varepsilon $$. We know $$H^3(D) = H^{1+2}(D)$$ can be embedded into $$C^{1,t}(\overline{D})$$ ($$t\in (0,2 - \frac 32] = (0, \frac 12]$$). In practice, we employ quadrature rules to compute the integrals in the bilinear form. That $$a \in C^{1,t}(\overline{D})$$ ($$t \in (0,{\frac 12}]$$) is a reasonable assumption to obtain a convergence rate for the FEM with quadrature. As a matter of fact, we want $$a(\cdot ,\boldsymbol{y}) \in C^{2r}(\overline{D})$$ to have the $$\mathcal{O}(H^{2r})$$ convergence of the expected $$L^{p}(\varOmega )$$-moment of the $$L^2(D)$$-error even for $$C^2$$-bounded domains. See Charrier et al. (2013, Remark 3.14) and Teckentrup et al. (2013, Remark 3.2).
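The bounds in Table 1 all come from one formula; a trivial helper (our own, purely for illustration) makes the dependence on d and $$\beta _1$$ explicit.

```python
def t_bound(d, beta1):
    """Upper bound in (5.20): realisations of T lie in H^t(D) iff t < (d/2)*(beta1 - 1)."""
    return 0.5 * d * (beta1 - 1)

def t_bound_minimal(d, eps):
    """The same bound at the smallest admissible beta1 = 3 + 2*eps/d from (A1).

    Simplifies to d + eps, reproducing column 3 of Table 1 for d = 1, 2, 3.
    """
    return t_bound(d, 3 + 2 * eps / d)
```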
Finally, we note that these embedding results are in general optimal (see, for example, Adams & Fournier, 2003, 4.12, 4.40–4.44), and in this sense, together with the characterisation (Theorem 5.3), the condition for our model to have Hölder smoothness is sharp.

5.3 Dimension truncation error

In this section we estimate the truncation error $$ \mathbb{E} \left \| u - u^s \right \|_V $$. As in the previous section, the truncation is considered in terms of the level L and we let $$s=s(L)=\sum _{\ell =\ell _0}^{L}\#(\nabla _{\ell })$$. Let $$a^s$$ be a(x, y) with $$y_j=0$$ for j > s, and define $$\check{a}^s(\boldsymbol{y})$$, $$\hat{a}^s(\boldsymbol{y})$$ accordingly. Proposition 5.5 Let u be the solution of the variational problem (1.6) with the coefficient given by the stochastic model (5.6) defined with (5.4) and (5.5). Let $$u^{s(L)}$$ be the solution of the same problem but with $$y_j:=0$$ for j > s(L). Then we have \begin{equation} \mathbb{E}\left[\big\| u - u^{s(L)}\big\|_V\right] \lesssim \left(\sum_{\ell=L+1}^{\infty} 2^{\ell ({-d(\beta_1-1)+\epsilon^{\prime}})}\right)^{\frac12} \end{equation} (5.33) for any $$\epsilon ^{\prime}\in (0,2\min \{t_*,d/2({\beta _1 - 1})\})$$. Proof. By a variant of Strang’s lemma, we have \begin{equation} \left\| u - u^s \right\|_V \leqslant \left\| a-a^s \right\|_{L^\infty(D)}\frac{\left\| f \right\|_{{V^{^{\prime}}}}}{\check{a}(\boldsymbol{y})\check{a}^s(\boldsymbol{y})} \end{equation} (5.34) for y such that $$\check{a}(\boldsymbol{y})$$, $$\check{a}^s(\boldsymbol{y})>0$$. We first derive an estimate on $$\left \| a-a^s \right \|_{L^\infty (D)}$$. Fix $$t\in (0,\min \{t_*,d/2({\beta _1 - 1})\})$$ arbitrarily, where $$t_*$$ is the parameter in (W5), and choose $${{\bar{\texttt{p}}}}_0\in [1,\infty )$$ such that $$\frac{d}{{{\bar{\texttt{p}}}}_0}\leqslant t$$ so that we can invoke the Besov embedding results.
Since $$\max \{d(\frac{1}{{{\bar{\texttt{p}}}}_0}-1),0\}<t$$, from Theorem 5.3 there exists a set $$\varOmega _0\subset \varOmega $$ such that $$\mathbb{P}(\varOmega _0)=1$$ and $$T(\cdot ,\boldsymbol{y})\in B^t_{{{\bar{\texttt{q}}}}}(L^{{{\bar{\texttt{p}}}}_0}(D))$$ for all $$\boldsymbol{y}\in \varOmega _0$$ with any $${{\bar{\texttt{q}}}}\in [1,\infty )$$. Then letting $$T^L(x,\boldsymbol{y}):=\sum _{\ell =\ell _0}^{L} \sum _{k\in \nabla _{\ell }} y_{\ell ,k} \sigma _{\ell } \varphi _{\ell ,k}(x)$$, from the embedding result of Besov spaces (Adams & Fournier, 2003, Chapter 7), and the characterisation by wavelets (W5) for any L, L′ ⩾ 1 (L ⩾ L′) we have \begin{equation} \left\| T^L(\cdot,\boldsymbol{y})-T^{L^{\prime}}(\cdot,\boldsymbol{y}) \right\|_{L^{\infty}(D)} \lesssim \left\| T^L(\cdot,\boldsymbol{y})-T^{L^{\prime}}(\cdot,\boldsymbol{y}) \right\|_{B^t_{{{\bar{\texttt{q}}}}}(L^{{{\bar{\texttt{p}}}}_0}(D))} \end{equation} (5.35) \begin{equation} \sim \left( \sum_{\ell=L^{\prime}+1}^L 2^{\ell(t+d(1/2-1/{{{\bar{\texttt{p}}}}_0})) {{\bar{\texttt{q}}}} } \left( \sum_{k\in\nabla_{\ell}} |\sigma_{\ell} y_{\ell,k} |^{{{\bar{\texttt{p}}}}_0} \right)^{{{\bar{\texttt{q}}}}/{{\bar{\texttt{p}}}}_0} \right)^{1/{{\bar{\texttt{q}}}}}<\infty \end{equation} (5.36) for all $$\boldsymbol{y}\in \varOmega _0$$. Thus, the sequence $$\{T^{L}(\cdot ,\boldsymbol{y})\}_L$$ ($$\boldsymbol{y}\in \varOmega _0$$) is Cauchy, and thus convergent in $$L^{\infty }(D)$$. 
Hence, we obtain \begin{equation} \left\| T(\cdot,\boldsymbol{y})-T^{L}(\cdot,\boldsymbol{y}) \right\|_{L^{\infty}(D)}^{{{\bar{\texttt{q}}}}} \lesssim \sum_{\ell=L+1}^{\infty} 2^{\ell(t+d(1/2-1/{{{\bar{\texttt{p}}}}}) ){{\bar{\texttt{q}}}}} \left( \sum_{k\in\nabla_{\ell}} |\sigma_{\ell} y_{\ell,k} |^{{{\bar{\texttt{p}}}}} \right)^{{{\bar{\texttt{q}}}}/{{\bar{\texttt{p}}}}}\qquad \text{a.s.,} \end{equation} (5.37) for all $${{\bar{\texttt{p}}}}\in [1,\infty )$$ such that $$\frac{d}{{{\bar{\texttt{p}}}}}\leqslant t$$, and any $${{\bar{\texttt{q}}}}\in [1,\infty )$$. For such $${{\bar{\texttt{p}}}}$$ and $${{\bar{\texttt{q}}}}$$, from Cioica et al. (2012, Proof of Theorem 6) we have \begin{equation} \mathbb{E}\left[ \big\| T(\cdot,\boldsymbol{y})-T^{L}(\cdot,\boldsymbol{y}) \big\|_{L^{\infty}(D)}^{{{\bar{\texttt{q}}}}} \right]\lesssim \sum_{\ell=L+1}^{\infty} 2^{\ell(t+d(1/2-1/{{{\bar{\texttt{p}}}}}) ){{\bar{\texttt{q}}}}} \sigma_{\ell}^{{{\bar{\texttt{q}}}}} ( \# \nabla_{\ell} )^{{{\bar{\texttt{q}}}}/{{\bar{\texttt{p}}}}} \sim \sum_{\ell=L+1}^{\infty} 2^{\ell{{\bar{\texttt{q}}}} (t-\frac{d}2(\beta_1-1) )}<\infty. \end{equation} (5.38) Further, from (5.26) we have \begin{align} &\mathbb{E}\Big[\left\| a(x,\boldsymbol{y})-a^{s(L)}(x,\boldsymbol{y}) \right\|_{L^{\infty}(D)}^2\Big]\nonumber\\ &\qquad\leqslant (\sup_{x\in D}|a_0(x)|^2) \mathbb{E}[\exp(2\left\| T(\cdot,\boldsymbol{y}) \right\|_{L^{\infty}(D)}) + \exp(2\|T^L(\cdot,\boldsymbol{y})\|_{L^{\infty}(D)})]\,\, \mathbb{E}\left[\big\| T-T^L \big\|_{L^{\infty}(D)}^2\right]. \end{align} (5.39) The sequence $$(\rho _\xi )$$ defined by (5.12), when reordered, satisfies $$(1/\rho _j)\in \ell ^{\frac{d}{\theta }+\varepsilon }$$ for any $$\varepsilon>0$$. Thus, from the proof of Corollary 3.2, as in Bachmayr et al. 
(2017a, Remark 2.2), we have \begin{equation} \max\left\{ \mathbb{E}[\exp(2\left\| T(\cdot,\boldsymbol{y}) \right\|_{L^{\infty}(D)})], \mathbb{E}[ \exp(2\|T^L(\cdot,\boldsymbol{y})\|_{L^{\infty}(D)})] \right\}<M_2, \end{equation} (5.40) where the constant $$M_2>0$$ is independent of L. Together with (5.34), we have \begin{equation} \mathbb{E}[\left\| u - u^s \right\|_V] \leqslant \left\| f \right\|_{{V^{^{\prime}}}} \mathbb{E}\left[ \frac{1}{(\check{a}(\boldsymbol{y}))^4}\right]^{\frac14} \mathbb{E}\left[ \frac{1}{(\check{a}^s(\boldsymbol{y}))^4}\right]^{\frac14} \mathbb{E}\left[\left\| a-a^s \right\|_{L^\infty(D)}^2\right]^{\frac12}<\infty, \end{equation} (5.41) where the Cauchy–Schwarz inequality is employed on the right-hand side of (5.34). To see the finiteness of the right-hand side of (5.41), note that $$ \frac1{\check{a}(\boldsymbol{y})}\leqslant \frac1{\inf_{x\in D} a_0(x)}\exp(\|T\|_{L^{\infty}(D)}),\quad\ \frac1{\check{a}^s(\boldsymbol{y})}\leqslant\frac1{\inf_{x\in D} a_0(x)} \exp(\|T^L\|_{L^{\infty}(D)}),$$ and further, by the same argument as above we have \begin{equation} \max\left\{ \mathbb{E}[\exp(4\left\| T(\cdot,\boldsymbol{y}) \right\|_{L^{\infty}(D)})], \mathbb{E}[ \exp(4\|T^L(\cdot,\boldsymbol{y})\|_{L^{\infty}(D)})] \right\}<M_4, \end{equation} (5.42) where the constant $$M_4>0$$ is independent of L. Therefore, from (5.38), (5.39) and (5.41) we obtain \begin{equation} \mathbb{E}\big[\big\| u - u^{s(L)}\big\|_V\big] \lesssim \mathbb{E}\left[\big\| T-T^L \big\|_{L^{\infty}(D)}^2\right]^{\frac12} \lesssim \left(\sum_{\ell=L+1}^{\infty} 2^{\ell (2t-d(\beta_1-1) )}\right)^{\frac12}. \end{equation} (5.43) Letting $$\varepsilon ^{\prime}:=2t$$ completes the proof. We conclude this section with a remark on other examples to which the currently developed QMC theory is applicable. Bachmayr et al. (2017c) considered systems $$(\psi _j)$$ of functions with finitely overlapping supports, for example, indicator functions of a partition of the domain D. 
It is easy to find a positive sequence $$(\rho _j)$$ such that Assumption 2.1 holds, and thus Theorem 4.4 readily follows. However, due to the lack of smoothness of these examples, a meaningful analysis along the lines given above does not seem easy to obtain, and thus we forgo elaborating on them. 6. Concluding remark We considered a QMC theory for a class of elliptic PDEs with a log-normal random coefficient. Using an estimate on the partial derivative with respect to the parameter $$y_{{\mathfrak{u}}}$$ that is of product form, we established a convergence rate ≈ 1 for randomly shifted lattice rules. Further, we considered a stochastic model with wavelets, and analysed the smoothness of the realisations and the truncation errors. In closing, we note that the currently developed theory works well for $$(\psi _j)$$ with local supports, such as the wavelets described above, but does not work so well for functions with arbitrary supports. In fact, under the same summability condition $$ \sum_{j\geqslant1}(\sup_{x\in D}|\psi_j(x)|)^p<\infty\quad\textrm{for some}\,\, p\in(0,1] $$ considered by Graham et al. (2015), letting $$\rho _j:=c(\sup _{x\in D}|\psi _j(x)|)^{p-1}$$ with a suitable constant c > 0, one can apply Theorem 4.4 with $$q:=q(p):=\frac{p}{1-p}$$. Consequently, one gets a convergence rate ≈ 1 for $$p\in (0,\frac 25+\varepsilon ]$$ for small $$\varepsilon $$. Under a weaker summability condition than this—similarly to the uniform case (Kuo et al., 2012, p. 3368), for $$p\in (0,\frac 12]$$—the rate ≈ 1 with product weights already follows from the results by Graham et al. (2015); we are grateful to Frances Y. Kuo for bringing this point to our attention. Another point related to the above concerns the cost of the CBC construction. Suppose that we can represent a given random field by two systems of spatial functions: one with local supports and one with global supports. 
Let s(L) be the truncation degree as in Section 5.2.1 for the local-support representation, and $$\tilde{s}$$ that for the global-support representation as in Graham et al. (2015). We mentioned that the generating vector for the lattice rule can be constructed at cost $$\mathcal{O}(s(L) n \log n)$$ via the CBC construction algorithm. In Graham et al. (2015), the POD weights led to the cost $$\mathcal{O}(\tilde{s}n\log n + \tilde{s}^2 n)$$. Given a target error, it is not clear which cost is larger: we might require $$s(L)\gg \tilde{s}$$ to achieve the desired truncation error. Acknowledgements I would like to express my sincere gratitude to Frances Y. Kuo, Klaus Ritter, Ian H. Sloan and anonymous reviewers for their stimulating comments. References Adams, R. A. & Fournier, J. J. (2003) Sobolev Spaces, vol. 140. Amsterdam: Academic Press. Bachmayr, M., Cohen, A., DeVore, R. & Migliorati, G. (2017a) Sparse polynomial approximation of parametric elliptic PDEs. Part II: lognormal coefficients. ESAIM Math. Model. Numer. Anal., 51, 341–363. Bachmayr, M., Cohen, A. & Migliorati, G. (2017b) Representations of Gaussian random fields and approximation of elliptic PDEs with lognormal coefficients. J. Fourier Anal. Appl. Bachmayr, M., Cohen, A. & Migliorati, G. (2017c) Sparse polynomial approximation of parametric elliptic PDEs. Part I: affine coefficients. ESAIM Math. Model. Numer. Anal., 51, 321–339. Charrier, J. (2012) Strong and weak error estimates for elliptic partial differential equations with random coefficients. SIAM J. Numer. Anal., 50, 216–246. Charrier, J., Scheichl, R. & Teckentrup, A. L. (2013) Finite element error analysis of elliptic PDEs with random coefficients and its application to multilevel Monte Carlo methods. SIAM J. Numer. Anal., 51, 322–352. Cioica, P. A.
, Dahlke, S., Döhring, N., Kinzel, S., Lindner, F., Raasch, T., Ritter, K. & Schilling, R. L. (2012) Adaptive wavelet methods for the stochastic Poisson equation. BIT Numer. Math., 52, 589–614. Cohen, A. (2003) Numerical Analysis of Wavelet Methods. Studies in Mathematics and its Applications, vol. 32. Amsterdam: Elsevier (North-Holland Publishing). Cohen, A. & DeVore, R. (2015) Approximation of high-dimensional parametric PDEs. Acta Numer., 24, 1–159. Dagan, G. (1984) Solute transport in heterogeneous porous formations. J. Fluid Mech., 145, 151–177. DeVore, R. A. (1998) Nonlinear approximation. Acta Numer., 7, 51–150. Dick, J., Kuo, F. Y., Le Gia, Q. T., Nuyens, D. & Schwab, C. (2014) Higher order QMC Petrov–Galerkin discretization for affine parametric operator equations with random field inputs. SIAM J. Numer. Anal., 52, 2676–2702. Dick, J., Kuo, F. Y. & Sloan, I. H. (2013) High-dimensional integration: the quasi-Monte Carlo way. Acta Numer., 22, 133–288. Gantner, R. N., Herrmann, L. & Schwab, C. (2018) Quasi-Monte Carlo integration for affine-parametric, elliptic PDEs: local supports imply product weights. SIAM J. Numer. Anal., 56, 111–135. Gilbarg, D. & Trudinger, N. S. (1983) Elliptic Partial Differential Equations of Second Order. Berlin, Heidelberg: Springer. Graham, I. G., Kuo, F. Y., Nichols, J. A., Scheichl, R., Schwab, C. & Sloan, I. H. (2015) Quasi-Monte Carlo finite element methods for elliptic PDEs with lognormal random coefficients. Numer. Math., 131, 329–368. Graham, I. G., Kuo, F. Y., Nuyens, D., Scheichl, R.
& Sloan, I. H. (2011) Quasi-Monte Carlo methods for elliptic PDEs with random coefficients and applications. J. Comput. Phys., 230, 3668–3694. Herrmann, L. & Schwab, C. (2016) Quasi-Monte Carlo integration for lognormal-parametric, elliptic PDEs: local supports imply product weights. Seminar for Applied Mathematics, ETH Zürich Research Report No. 2016-39. Itô, K. (1984) Introduction to Probability Theory. Cambridge: Cambridge University Press. Kunita, H. (2004) Stochastic differential equations based on Lévy processes and stochastic flows of diffeomorphisms. Real and Stochastic Analysis, Trends Math. (M. M. Rao ed.). Boston: Birkhäuser, pp. 305–373. Kuo, F. Y. & Nuyens, D. (2016) Application of quasi-Monte Carlo methods to elliptic PDEs with random diffusion coefficients: a survey of analysis and implementation. Foundations Comput. Math. (in press). Kuo, F. Y., Schwab, C. & Sloan, I. H. (2012) Quasi-Monte Carlo finite element methods for a class of elliptic partial differential equations with random coefficients. SIAM J. Numer. Anal., 50, 3351–3374. Naff, R. L., Haley, D. F. & Sudicky, E. A. (1998a) High-resolution Monte Carlo simulation of flow and conservative transport in heterogeneous porous media: 1. Methodology and flow results. Water Resour. Res., 34, 663–677. Naff, R. L., Haley, D. F. & Sudicky, E. A. (1998b) High-resolution Monte Carlo simulation of flow and conservative transport in heterogeneous porous media: 2. Transport results. Water Resour. Res., 34, 679–697. Reed, M. & Simon, B. (1980) Methods of Modern Mathematical Physics. I, 2nd edn. New York: Academic Press. Scheichl, R., Stuart, A. M. & Teckentrup, A. L.
(2017) Quasi-Monte Carlo and multilevel Monte Carlo methods for computing posterior expectations in elliptic inverse problems. SIAM/ASA J. Uncertain. Quantif., 5, 493–518. Schwab, C. & Gittelson, C. J. (2011) Sparse tensor discretizations of high-dimensional parametric and stochastic PDEs. Acta Numer., 20, 291–467. Teckentrup, A., Scheichl, R., Giles, M. & Ullmann, E. (2013) Further analysis of multilevel Monte Carlo methods for elliptic PDEs with random coefficients. Numer. Math., 125, 569–600. Urban, K. (2002) Wavelets in Numerical Simulation. Lecture Notes in Computational Science and Engineering, vol. 22. Berlin, Heidelberg: Springer. Yosida, K. (1995) Functional Analysis. Classics in Mathematics. Berlin: Springer. © The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. IMA Journal of Numerical Analysis, Oxford University Press.

1. 
Introduction This paper is concerned with quasi–Monte Carlo (QMC) integration of output functionals of solutions of the diffusion problem with a random coefficient of the form \begin{equation} -\nabla\cdot (a(x,{\boldsymbol{y}} )\nabla u(x,{\boldsymbol{y}} )) = f(x)\quad \textrm{in } D\subset{\mathbb{R}}^d, \qquad u=0\ \textrm{ on }\partial{D}, \end{equation} (1.1) where $${\boldsymbol{y}}\in \varOmega $$ is an element of a suitable probability space $$(\varOmega ,{\mathscr{F}},\mathbb{P})$$ (clarified below), and $$D\subset{\mathbb{R}}^d$$ is a bounded domain with Lipschitz boundary. Our interest is in the log-normal case, that is, $$a(\cdot ,\cdot )\colon{D}\times \varOmega \to{\mathbb{R}}$$ is assumed to have the form \begin{equation} a(x,{\boldsymbol{y}} )=a_*(x)+a_0(x)\exp(T(x,{\boldsymbol{y}} )) \end{equation} (1.2) with continuous functions $$a_*{(x)}\geqslant 0$$, $$a_0{(x)}>0$$ on $$\overline{D}$$, and Gaussian random field $$T(\cdot ,\cdot )\colon D\times \varOmega \to{\mathbb{R}}$$ represented by a series expansion \begin{equation} T(x,{\boldsymbol{y}})=\sum_{j= 1}^\infty Y_j ({\boldsymbol{y}}) \psi_{j}(x),\quad x\in D, \end{equation} (1.3) where $$\{Y_j\}$$ is a collection of independent standard normal random variables on $$(\varOmega ,{\mathscr{F}},\mathbb{P})$$, and $$(\psi _{j})_{j\geqslant 1}$$ is a suitable system of real-valued measurable functions on D. To handle a wide class of a and f, we consider the weak formulation of the problem (1.1). By V we denote the zero-trace Sobolev space $$H^1_0(D)$$ endowed with the norm \begin{equation} \left\| v \right\|_V := \left( \int_{D} |\nabla v(x)|^2\,{\textrm{d}}x\right)^{\frac12}, \end{equation} (1.4) and by $$V^{\prime}:=H^{-1}(D)$$ the topological dual space of V. 
For the given random coefficient a(x, y), we define the bilinear form $$\mathscr{A}({\boldsymbol{y}};\cdot ,\cdot )\colon V\times V\to{\mathbb{R}}$$ by \begin{equation} \varOmega\ni{\boldsymbol{y}} \mapsto \mathscr{A}({\boldsymbol{y}};v,w) := \int_{D} a(x,{\boldsymbol{y}} ) \nabla v(x)\cdot \nabla w(x)\,{\textrm{d}}x\ \textrm{ for all }v,w\in V. \end{equation} (1.5) Then for any $${\boldsymbol{y}}\in \varOmega $$ the weak formulation of (1.1) reads as follows: find u(⋅, y) ∈ V such that \begin{equation} \mathscr{A}({\boldsymbol{y}};u(\cdot,{\boldsymbol{y}}),v) =\langle f, v \rangle\quad \textrm{ for all }v\in V, \end{equation} (1.6) where f is assumed to be in V′, and ⟨⋅, ⋅⟩ denotes the duality pairing between V′ and V. We impose further conditions to ensure the well-posedness of the problem, which we will discuss later. The ultimate goal is to compute $$\mathbb{E}[\mathcal{G}(u(\cdot ))]$$, the expected value of $$\mathcal{G}(u(\cdot ,{\boldsymbol{y}}))$$, where $$\mathcal{G}$$ is a bounded linear functional on V. The problem (1.1), and that of computing $$\mathbb{E}[\mathcal{G}(u(\cdot ))]$$, arises in many applications such as hydrology (Dagan, 1984; Naff et al., 1998a,b) and has attracted attention in computational uncertainty quantification (UQ). See, for example, Schwab & Gittelson (2011); Cohen & DeVore (2015); Kuo & Nuyens (2016) and references therein. Two major ways to tackle this problem are function approximation and quadrature, in particular QMC methods. Our interest is in QMC. It is now well known that QMC methods beat plain-vanilla Monte Carlo methods in various settings when applied to the problem of computing $$\mathbb{E}[\mathcal{G}(u(\cdot ))]$$ (Kuo et al., 2012; Graham et al., 2015; Kuo & Nuyens, 2016). Among the QMC methods, the algorithm we consider is randomly shifted lattice rules. Graham et al. 
(2015) showed that when randomly shifted lattice rules are applied to the class of partial differential equations (PDEs) we consider, a QMC convergence rate ≈ 1, measured in the root-mean-square sense, is achievable, which is known to be optimal for lattice rules in the function space they consider. More precisely, they showed that quadrature points for randomly shifted lattice rules that achieve such a rate can be constructed using an algorithm called component-by-component (CBC) construction. The algorithm takes as an input weights, which represent the relative importance of subsets of the variables of the integrand, and its cost depends on the type of weights. The weights considered by Graham et al. (2015) are so-called product-order-dependent (POD) weights, which were determined by minimising an error bound. For POD weights, the CBC construction takes $$\mathcal{O}(s n\log n+s^2n)$$ operations, where n is the number of quadrature points and s is the truncation dimension in $$\sum _{j= 1}^{s} Y_j ({\boldsymbol{y}}) \psi _{j}(x)$$. The contributions of the current paper are twofold: a proof of a convergence rate ≈ 1 with product weights, and an application to a stochastic model with wavelets. In more detail, we show that for the problem considered here the generating vector can be obtained by the CBC construction with so-called product weights while achieving the optimal rate ≈ 1 in the function space we consider, and further, we show that the developed theory can be applied to a stochastic model that covers a wide class of wavelet bases. Often in practice we want to approximate the random coefficients well, and consequently s has to be taken large, in which case the second term of $$\mathcal{O}(s n\log n+s^2n)$$ becomes dominant. The use of the POD weights originates from the summability condition imposed on $$(\psi _j)$$ by Graham et al. (2015). We consider a different condition, the one proposed by Bachmayr et al. 
(2017c) to utilise the locality of the supports of $$(\psi _j)$$ in the context of polynomial approximations applied to PDEs with random coefficients. We show that under this condition, the shifted lattice rule for the PDE problem can be constructed with a CBC algorithm with computational cost $$\mathcal{O}(s n\log n)$$, the cost associated with product weights as shown by Dick et al. (2014). Further, the stochastic model we consider broadens the range of applicability of QMC methods to PDEs with log-normal coefficients. One concern about the conditions imposed in Graham et al. (2015), in particular the summability condition on $$(\psi _j)$$, is that they are so strong that only random coefficients with smooth realisations are within the scope of the theory. We show that, at least for d = 1, 2, rougher random coefficients (e.g., with realisations possessing only some Hölder smoothness) can be treated. Upon finalising this paper, we learnt about the paper by Herrmann & Schwab (2016). Our works share the same spirit in that we are all inspired by the work of Bachmayr et al. (2017a). We provide a different, arguably simpler, proof of the same convergence rate with the exponential weight function, and we discuss the roughness of the realisations that can be considered. Herrmann & Schwab (2016) develop a theory under a setting essentially the same as ours. In contrast to our paper, they treat the truncation error in a general setting, and as for the QMC integration error, they consider both the exponential weight function and the Gaussian weight function for the weighted Sobolev space. As for the exponential weight function, the current paper and Herrmann & Schwab (2016) impose essentially the same assumptions (Assumption 2.1 below) and show the same convergence rate. However, our proof strategy is different, which turns out to result in different (product) weights. 
This can lead to a smaller implied constant in the estimate, especially when the field's fluctuation is large, as we discuss later. Further, in contrast to Herrmann & Schwab (2016), we provide a discussion of the roughness of the realisations of random coefficients as mentioned above. In Section 5, we provide a discussion via the Besov characterisation of the realisations of the random coefficients and the embedding results. We now give a remark on the uniform case. One of the keys in the current paper, which deals with the log-normal case, is the estimate of the derivative given in Corollary 3.2. This result essentially follows from the bounds obtained by Bachmayr et al. (2017a). An argument similar to that employed in the current paper is applicable to randomly shifted lattice rules applied to PDEs with uniform random coefficients as considered by Kuo et al. (2012), using the derivative bounds for the uniform case obtained by Bachmayr et al. (2017c). For this, we refer to Gantner et al. (2018), in which not only randomly shifted lattice rules but also higher-order QMC rules were considered. The outline of the rest of the paper is as follows. In Section 2 we describe the problem we consider in detail. In Section 3 we discuss bounds on mixed derivatives. Then in Section 4, we develop a QMC theory applied to a PDE problem with log-normal coefficients using the product weights. Section 5 provides an application of the theory: we consider a stochastic model represented by a wavelet Riesz basis. We close the paper with concluding remarks in Section 6. 2. Setting We assume that the Gaussian random field T admits a series representation as in (1.3). 
For simplicity we fix $$(\varOmega ,{\mathscr{F}},\mathbb{P}):=({\mathbb{R}}^{{\mathbb{N}}},\mathcal{B}({\mathbb{R}}^{{\mathbb{N}}}),\mathbb{P}_{Y})$$, where $${\mathbb{N}}:=\{1,2,\dotsc \}$$, $$\mathcal{B}({\mathbb{R}}^{{\mathbb{N}}})$$ is the Borel $$\sigma $$-algebra generated by the product topology in $${\mathbb{R}}^{{\mathbb{N}}}$$ and $$\mathbb{P}_{Y}:=\prod _{j=1}^{\infty }\mathbb{P}_{Y_j}$$ is the product measure on $$({\mathbb{R}}^{{\mathbb{N}}},\mathcal{B}({\mathbb{R}}^{{\mathbb{N}}}))$$ defined by the standard normal distributions $$\{\mathbb{P}_{Y_j} \}_{j\in{\mathbb{N}}}$$ on $${\mathbb{R}}$$ (see, for example, Itô, 1984, Chapter 2 for details). Then for each $$\boldsymbol{y}\in \varOmega $$ we may regard $$Y_j(\boldsymbol{y})$$ ($$j\in{\mathbb{N}}$$) as given by the projection (or the canonical coordinate function) $$ \varOmega={\mathbb{R}}^{{\mathbb{N}}}\ni \boldsymbol{y}\mapsto Y_j(\boldsymbol{y})=:y_j\in{\mathbb{R}}. $$ Note in particular that from the continuity of the projection, the mapping $$\boldsymbol{y}\mapsto y_j$$ is $$\mathcal{B}({\mathbb{R}}^{{\mathbb{N}}})/\mathcal{B}({\mathbb{R}})$$-measurable. In the following, we write T above as \begin{equation} T(x,\boldsymbol{y})=\sum_{j= 1}^\infty y_j \psi_{j}(x),\quad x\in D \end{equation} (2.1) and view it as a deterministically parametrised function on D. We will impose a condition considered by Bachmayr et al. (2017c) on $$(\psi _j)$$ (see Assumption 2.1 below), which is particularly suitable for $$\psi _j$$ with local supports. To ensure that the law on $${\mathbb{R}}^D$$ is well defined, we suppose \begin{equation} \sum_{j= 1}^\infty \psi_{j}(x)^2<\infty\quad\textrm{for all } x\in D, \end{equation} (2.2) so that the covariance function $$\mathbb{E}[T(x_1)T(x_2)]=\sum _{j\geqslant 1}\psi _{j}(x_1)\psi _{j}(x_2)$$ ($$x_1,x_2\in D$$) is well defined. We consider the parametrised elliptic PDE (1.1). To prove well-posedness of the variational problem (1.6), we use the Lax–Milgram lemma. 
Conditions that ensure that the bilinear form $$\mathscr{A}(\boldsymbol{y};\cdot ,\cdot )$$ defined by the diffusion coefficient a is coercive and bounded are discussed later. Motivated by UQ applications, we are interested in expected values of bounded linear functionals of the solution of the above PDEs. We note that linearity is assumed for theoretical convenience. A theoretical treatment of nonlinear functionals would require suitable smoothness and mild growth of the relevant derivatives, but this is largely unexplored, one exception being an attempt by Scheichl et al. (2017). Given a continuous linear functional $$\mathcal{G}\in{V^{^{\prime}}}$$ we wish to compute $$\mathbb{E}[\mathcal{G}(u(\cdot ))]:=\int _{{\mathbb{R}}^{\mathbb{N}}}\mathcal{G}(u(\cdot ,\boldsymbol{y})) \mathrm{d}\mathbb{P}_{Y}(\boldsymbol{y})$$, where the measurability of the integrands will be discussed later. To compute $$\mathbb{E}[\mathcal{G}(u(\cdot ))]$$ we use a sampling method: generate realisations of a(x, y), which yield the solution u(x, y) via the PDE (1.1), and from these compute $$\mathbb{E}[\mathcal{G}(u(\cdot ))]$$. In practice, these operations cannot be performed exactly, and numerical methods need to be employed. This paper gives an analysis of the error incurred by the method outlined as follows. We truncate the series (2.1) to s terms for some integer s ⩾ 1, which results in the coefficient $$a(x,(y_1,\dots ,y_s,0,0,0,\dots ))$$ of the PDE (1.1). Further, the expectation of the random variable obtained by applying the linear functional $$\mathcal{G}$$ to the solution of the corresponding truncated PDE is approximated by a QMC method. Let $$u^s(x)=u^s(x,\boldsymbol{y})$$ be the solution of (1.1) with $$\boldsymbol{y}=(y_1,\dotsc ,y_s,0,0,0,\dotsc )$$, that is, of the following problem: find $$u^s\in V$$ such that \begin{equation} -{\nabla} \cdot(a(x,(y_1,\dotsc,y_s,0,0,\cdots))\nabla u^s(x)) = f(x) \ \ \textrm{in } D, \quad u^s = 0 \textrm{ on } \partial D. 
\end{equation} (2.3) Here, even though the dependence of $$u^s$$ on y is only through $$(y_1,\dotsc ,y_s)$$, we abuse the notation slightly by writing $$u^s(x,\boldsymbol{y}):= u^s(x,(y_1,\dotsc ,y_s,0,0,0,\dotsc ))$$. Let $$\varPhi _s^{-1}\colon [0,1]^s\ni \boldsymbol{v}\mapsto \varPhi _s^{-1}(\boldsymbol{v})\in{\mathbb{R}}^s$$ be the inverse of the cumulative normal distribution function applied to each entry of v. We write $$F(\boldsymbol{y}):=F(y_1,\dotsc ,y_s)=\mathcal{G}(u^s(\cdot ,\boldsymbol{y}))$$ and \begin{equation} I_s(F) := \int_{{\boldsymbol{v}\in(0,1)^s}} F(\varPhi_s^{-1}(\boldsymbol{v})) \mathrm{\ d}{\boldsymbol{v}} =\int_{{\boldsymbol{y}\in\mathbb{R}^s}} \mathcal{G}(u^s(\cdot,\boldsymbol{y})) \prod\limits_{j=1}^{s}\phi(y_j)\mathrm{\ d}\boldsymbol{y} =\mathbb{E}[\mathcal{G}(u^s)], \end{equation} (2.4) where $$\phi $$ is the probability density function of the standard normal distribution. The measurability of the mapping $${\mathbb{R}}^s\ni \boldsymbol{y}\mapsto \mathcal{G}(u^s(\cdot ,\boldsymbol{y}))\in{\mathbb{R}}$$ will be discussed later. In order to approximate $$I_s(F)$$, we employ a QMC method called a randomly shifted lattice rule. This is an equal-weight quadrature rule of the form $$ \mathcal{Q}_{s,n}(\boldsymbol{\varDelta};F) :=\frac{1}n\sum_{i=1}^nF\left(\varPhi^{-1}_s\left(\mathrm{frac}\left(\frac{i\boldsymbol{z}}{n}+\boldsymbol{\varDelta}\right)\right)\right), $$ where the function $$\mathrm{frac}(\cdot )\colon{\mathbb{R}}^s\ni \boldsymbol{y}\mapsto \mathrm{frac}(\boldsymbol{y})\in [0,1)^s$$ takes the fractional part of each component of y. Here, $$\boldsymbol{z}\in{\mathbb{N}}^s$$ is a carefully chosen point called the (deterministic) generating vector and $$\boldsymbol{\varDelta }\in [0,1]^s$$ is the random shift. We assume the random shift $$\boldsymbol{\varDelta }$$ is a $$[0,1]^s$$-valued uniform random variable defined on a suitable probability space different from $$(\varOmega ,\mathscr{F},\mathbb{P})$$. 
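The rule $$\mathcal{Q}_{s,n}(\boldsymbol{\varDelta };F)$$ above can be sketched in a few lines of code. In the following minimal sketch the generating vector z is a placeholder for illustration only (in practice it comes from a CBC construction, as discussed in later sections), and the toy integrand is an assumed test function whose exact mean under the standard normal product measure is known.

```python
# Minimal sketch of a randomly shifted lattice rule Q_{s,n}(Delta; F) mapped
# to R^s via the inverse normal CDF, as in (2.4).  The generating vector z
# below is a placeholder, not a CBC-constructed vector.
import math
import random
from statistics import NormalDist

def shifted_lattice_rule(F, z, n, shift):
    """Equal-weight rule (1/n) sum_i F(Phi^{-1}(frac(i*z/n + shift)))."""
    inv_cdf = NormalDist().inv_cdf
    total = 0.0
    for i in range(1, n + 1):
        # frac(i*z/n + Delta), componentwise.
        v = [math.modf(i * zj / n + dj)[0] for zj, dj in zip(z, shift)]
        # Clamp away from 0 and 1 so that inv_cdf stays finite.
        y = [inv_cdf(min(max(vj, 1e-12), 1.0 - 1e-12)) for vj in v]
        total += F(y)
    return total / n

# Toy integrand with known mean: E[exp(sum_j y_j / 4)] = exp(s/32)
# for i.i.d. standard normal y_j.
s, n = 4, 257
z = [1, 110, 53, 200]                      # placeholder generating vector
random.seed(0)
shift = [random.random() for _ in range(s)]
F = lambda y: math.exp(sum(y) / 4.0)
est = shifted_lattice_rule(F, z, n, shift)
assert abs(est - math.exp(s / 32.0)) < 0.1
```

In practice one averages the rule over several independent random shifts, which also yields a practical error estimate for (2.5).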
For further details of randomly shifted lattice rules, we refer to the surveys Dick et al. (2013); Kuo & Nuyens (2016) and references therein. We want to evaluate the root-mean-square error \begin{equation} \sqrt{ \mathbb{E}^{\boldsymbol{\varDelta}} \left[ \big( \mathbb{E}[\mathcal{G}(u)] -\mathcal{Q}_{s,n}(\boldsymbol{\varDelta};F) \big)^2 \right] } \end{equation} (2.5) where $$\mathbb{E}^{\boldsymbol{\varDelta }}$$ is the expectation with respect to the random shift. Note that in practice the solution $$u^s$$ needs to be approximated by some numerical scheme $$\widetilde{u}^s$$, such as the finite element method as considered in Graham et al. (2015); Kuo & Nuyens (2016), which results in computing $$\widetilde{F}(\boldsymbol{y}):=\mathcal{G}(\widetilde{u}^s(\boldsymbol{y}))$$. Thus, the error $$e_{s,n}:=\sqrt{ \mathbb{E}^{\boldsymbol{\varDelta }} [ ( \mathbb{E}[\mathcal{G}(u)] -\mathcal{Q}_{s,n}(\boldsymbol{\varDelta };\widetilde{F}) )^2 ] }$$ is what we need to evaluate in practice. Via the trivial decomposition, and using $$\mathbb{E}^{\boldsymbol{\varDelta }}[\mathcal{Q}_{s,n}(\boldsymbol{\varDelta };\widetilde{F})] ={\mathbb{E}[\mathcal{G}(\widetilde{u}^s)]}$$ (see, for example, Dick et al., 2013), we have \begin{equation} e_{s,n}^2 ={ (\mathbb{E}[\mathcal{G}(u)-\mathcal{G}(\widetilde{u}^s)])^2 + \mathbb{E}^{\boldsymbol{\varDelta}} \left[ \big( \mathbb{E}[\mathcal{G}(\widetilde{u}^s)] -\mathcal{Q}_{s,n}(\boldsymbol{\varDelta};\widetilde{F}) \big)^2 \right].} \end{equation} (2.6) For the sake of simplicity, we forgo the discussion of the numerical approximation of the solution of the PDE. Instead, we discuss the smoothness of the realisations of the random coefficient. Given suitable smoothness of the boundary ∂D, the convergence rate of $$\mathbb{E}[\mathcal{G}(u)-\mathcal{G}(\widetilde{u}^s)]$$ is typically obtained from the smoothness of the realisations of the coefficients a(⋅, y), via the regularity of the solution u. See Graham et al. 
(2015); Kuo & Nuyens (2016). Therefore, in the following we concentrate on the truncation error, the quadrature error and the realisations of a. In the course of the error analyses, we assume that $$(\psi _j)$$ satisfies the following assumption. Assumption 2.1 The system $$(\psi _j)$$ satisfies the following. There exists a positive sequence $$(\rho _j)$$ such that \begin{equation} \sup_{x\in{D} }\sum_{j\geqslant 1}\rho_j|\psi_j(x)|=:\kappa <\ln2, \end{equation} (2.7) and further, \begin{equation} (1/\rho_j)\in\ell^{q}\qquad\textrm{ for some }{q}\in(0,1]. \end{equation} (2.8) We also use the following weaker assumption. Assumption 2.2 This is the same as Assumption 2.1, only with the condition (2.8) replaced by \begin{equation} (1/\rho_j)\in\ell^{q}\qquad\textrm{ for some }q\in(0,\infty). \end{equation} (2.9) We note that (2.9), and thus also (2.8), implies $$\rho _j\to \infty $$ as $$j\to \infty $$. Some remarks on the assumptions are in order. First note that Assumption 2.1 implies $$\sum _{j\geqslant 1}|\psi_j (x)|<\infty $$ for any x ∈ D, and hence (2.2). Assumption 2.2 is used to obtain an estimate on the mixed derivative with respect to the random parameter $$y_j$$, and further, ensures the almost sure well-posedness of the problem (1.6)—see Corollary 3.2 and Remark 3.3. Assumption 2.1 is used to obtain a dimension-independent QMC error estimate—see Theorems 4.4 and 5.2. The stronger the condition (2.8) satisfied by the system $$(\psi _j)$$, that is, the smaller q is, the smoother the realisations of the random coefficient become. In Section 5.2.1, we discuss the smoothness of realisations allowed by these conditions. 3. Bounds on mixed derivatives In this section, we discuss bounds on mixed derivatives. In order to motivate the discussion in this section, we first explain how the derivative bounds come into play in the QMC analysis developed in the next section. 
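Before doing so, we illustrate Assumption 2.1 with a hedged numerical sketch for a hypothetical Haar-type system on D = (0, 1): the functions at level ℓ have disjoint supports and amplitude $$\sigma _\ell = c\,2^{-t\ell }$$, so at every x only one function per level is active and the supremum in (2.7) collapses to a single sum over levels. The constants c, t and the choice $$\rho _\ell =2^{t\ell /2}$$ are illustrative assumptions, not values prescribed by the analysis above.

```python
# Hedged numerical check of conditions (2.7) and (2.8) for an assumed
# Haar-type system with disjoint supports within each level.
import math

t, c, L = 4.0, 0.05, 12                            # illustrative parameters
sigma = [c * 2.0 ** (-t * l) for l in range(L)]    # sup_x |psi_{l,k}(x)| at level l
rho = [2.0 ** (t * l / 2.0) for l in range(L)]     # level-dependent weights rho_j

# Local supports: at each x exactly one function per level is nonzero, so
# kappa = sup_x sum_j rho_j |psi_j(x)| = sum_l rho_l * sigma_l.
kappa = sum(r * s_ for r, s_ in zip(rho, sigma))
assert kappa < math.log(2.0)                       # condition (2.7): kappa < ln 2

# Condition (2.8): level l contributes 2^l indices j, each with
# 1/rho_j = 2^{-t*l/2}, so sum_j (1/rho_j)^q = sum_l 2^l * 2^{-t*l*q/2},
# a convergent geometric series whenever t*q > 2.
q = 1.0
tail = sum(2.0 ** l * 2.0 ** (-t * l * q / 2.0) for l in range(L, 10 * L))
assert tail < 1e-3                                 # the l^q tail is negligible
```

For these assumed parameters one finds κ ≈ 0.067 < ln 2 ≈ 0.693, so both conditions hold with q = 1; the locality of the supports is what keeps κ small despite the growing number of functions per level.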
Application of QMC methods to elliptic PDEs with log-normal random coefficients was initiated with computational results by Graham et al. (2011), and an analysis followed in Graham et al. (2015). Following the discussion by Graham et al. (2015), we assume that the integrand F is in the so-called weighted unanchored Sobolev space $$\mathcal{W}^s$$, consisting of measurable functions $$F\colon{\mathbb{R}}^s\to{\mathbb{R}}$$ such that \begin{equation} \left\| F \right\|_{\mathcal{W}^s}^2 = \sum_{{\mathfrak{u}}\subseteq\{1:s\}} \frac1{\gamma_{{\mathfrak{u}}}} \int_{\mathbb{R}^{|{\mathfrak{u}}|}}\!\!\! \left( \int_{\mathbb{R}^{s-|{\mathfrak{u}}|}} \frac{\partial^{|{\mathfrak{u}}|} F}{\partial y_{{\mathfrak{u}}}}(\boldsymbol{y}_{\mathfrak{u}};\boldsymbol{y}_{\{1:s\}\setminus{\mathfrak{u}}}) \prod\limits_{j\in\{1:s\}\setminus{\mathfrak{u}}} \phi(y_j)\,{\textrm{d}}\boldsymbol{y}_{\{1:s\}\setminus{\mathfrak{u}}} \right)^2\!\! \prod\limits_{j\in{\mathfrak{u}}} w_j^2(y_j)\,{\textrm{d}}\boldsymbol{y}_{\mathfrak{u}}<\infty, \end{equation} (3.1) where we assume, similarly to Graham et al. (2015), that \begin{equation} w_j^2(y_j)=\exp(-2\alpha_j|y_j|) \end{equation} (3.2) for some $$\alpha _j>0$$. Here, {1 : s} is shorthand notation for the set $$\{1,\dotsc ,s\}$$, $$\frac{\partial ^{|{\mathfrak{u}}|} F}{\partial y_{{\mathfrak{u}}}}$$ denotes the mixed first derivative with respect to each of the ‘active’ variables $$y_j$$ with $$j\in{\mathfrak{u}}\subseteq \{1:s\}$$ and $$\boldsymbol{y}_{\{1:s\}\setminus{\mathfrak{u}}}$$ denotes the ‘inactive’ variables $$y_j$$ with $$j\not \in{\mathfrak{u}}$$. Further, weights $$({\gamma _{{\mathfrak{u}}}})$$ describe the relative importance of the variables $$\{y_j\}_{j\in{\mathfrak{u}}}$$. Note that the measures $$\int _{\cdot }\,{\textrm{d}}y_{{\mathfrak{u}}}$$ and $$\int _{\cdot }\frac 1{\gamma _{{\mathfrak{u}}}}\,{\textrm{d}}y_{{\mathfrak{u}}}$$ differ by at most a constant factor depending on $${\mathfrak{u}}$$.
Weights $$({\gamma _{{\mathfrak{u}}}})$$ play an important role in deriving error estimates independently of the dimension s, and further, in obtaining the generating vector z for the lattice rule via the CBC algorithm. Depending on the problem, different types of weights have been considered to derive error estimates. For the randomly shifted lattice rules, ‘POD weights’ and ‘product weights’ have been considered (Dick et al., 2013; Kuo & Nuyens, 2016). When applied to the PDE parametrised with log-normal coefficients, the result in Graham et al. (2015) suggests the use of POD weights for the problem. We wish to develop a theory on the applicability of product weights, which have an advantage in terms of computational cost. The computational cost of the CBC construction is $$\mathcal{O}(sn\log n+ns^2)$$ in the case of POD weights, compared to $$\mathcal{O}(sn\log n)$$ for product weights (Dick et al., 2014). Since we often want to approximate the random field well, which necessitates a large s, the applicability of product weights is of clear interest. Estimates of derivatives of the integrand F(y) with respect to the parameter y, that is, the variable over which F(y) is integrated, are one of the keys in the error analysis of QMC. In Graham et al. (2015), it was derivative estimates of ‘POD form’ that led their theory to POD weights. Under an assumption on the system $$(\psi _j)$$, which is different from that in Graham et al. (2015), we show that the derivative estimates turn out to be of ‘product form’, and further that, under a suitable summability assumption, the same error convergence rate close to 1 is achieved with product weights. Now we derive an estimate of the product form.
Let $$ \mathcal{F}:=\{\mu=(\mu_1,\mu_2,\dotsc)\in\mathbb{N}_0^{\mathbb{N}}\mid \textrm{ all but a finite number of components of }\mu\textrm{ are zero} \}.$$ For $$\mu \in \mathcal{F}$$ we use the notation $$|\mu |=\sum _{j\geqslant 1}\mu _j$$, $${\mu }!=\prod \limits _{j\geqslant 1}\mu _j!$$, $$\rho ^{\mu }=\prod \limits _{j\geqslant 1}\rho _j^{\mu _j}$$ for $$\rho =(\rho _j)_{j\geqslant 1}\in \mathbb{R}^{\mathbb{N}}$$, and \begin{equation} \partial^\mu u= \frac{\partial^{|\mu|}}{\partial y_{j(1)}^{\mu_{j(1)}}\dotsb \partial y_{j(k)}^{\mu_{j(k)}}}u, \end{equation} (3.3) where $$k=\#\{j\mid \mu _j\neq 0\}$$. We have the following bound on mixed derivatives of order r ⩾ 1 (although in our application we will need only r = 1). The proof follows essentially the same argument as the proof of Bachmayr et al. (2017c, Theorem 4.1). Here, we show a tighter bound by changing the condition from $$\frac{\ln 2}{\sqrt{r}}$$ to $$\frac{\ln 2}{{r}}$$ in Bachmayr et al. (2017c, (91)), and we have $$\rho ^{2\mu }$$ in (3.4) in place of $$\frac{\rho ^{2\mu }}{\mu !}$$ in the left-hand side of Bachmayr et al. (2017c, (92)). Proposition 3.1 Let r ⩾ 1 be an integer. Suppose $$(\psi _j)$$ satisfies condition (2.7) with $$\ln 2$$ replaced by $$\frac{\ln 2}{r}$$, with a positive sequence $$(\rho _j)$$. Then there exists a constant $$C_0=C_0(r)$$ that depends on $$\kappa $$ and r, such that \begin{equation} \sum_{\substack{\mu\in\mathcal{F}\\ \left\| \mu \right\|_\infty\leqslant r}} {\rho^{2\mu}} \int_{D} a(\boldsymbol{y})|\nabla(\partial^\mu u(\boldsymbol{y}))|^2\,{\textrm{d}}x \leqslant C_0 \int_{D} a(\boldsymbol{y})|\nabla u(\boldsymbol{y})|^2 \,{\textrm{d}}x \end{equation} (3.4) for all y that satisfy $$ \|\sum _{j\geqslant 1}y_j\psi _j\|_{L^{\infty }(D)}<\infty $$, where u(y) is the solution of (1.6) for such y. The same bound holds also for $$u^s(\boldsymbol{y})$$, the solution of (1.6) with $$\boldsymbol{y}=(y_1,\dotsc ,y_s,0,0,\dotsc )$$. Proof.
Let \begin{equation*} \varLambda_{k}:= \{\mu\in\mathcal{F}\mid|\mu|=k\textrm{ and }\left\| \mu \right\|_{\ell_\infty}\leqslant r\},\ \, \textrm{and}\ \, S_{\mu}:= \{\nu\in\mathcal{F}\mid\nu\leqslant \mu\textrm{ and }\nu\neq\mu\}\textrm{ for }\mu\in\mathcal{F}, \end{equation*} with ⩽ denoting the componentwise partial order between multi-indices. Let us introduce the notation $$ \left \| v \right \|_{a(\boldsymbol{y})}^2:=\int _{D} a(\boldsymbol{y})|\nabla v|^2 \,{\textrm{d}}x $$ for all v ∈ V, and let $$ \sigma_k:=\sum_{\mu\in \varLambda_k}\rho^{2\mu} \left\| \partial^\mu u(\boldsymbol{y}) \right\|_{a(\boldsymbol{y})}^2.$$ We show below that we can choose $$\delta =\delta (r)<1$$ such that \begin{equation} \sigma_k\leqslant \sigma_0\delta^k\qquad\textrm{ for all }k\geqslant 0. \end{equation} (3.5) Note that if this holds then we have \begin{equation} \sum_{\left\| \mu \right\|_\infty\leqslant r}{\rho^{2\mu}}\left\| \partial^\mu u(\boldsymbol{y}) \right\|_{a(\boldsymbol{y})}^2 = \sum_{k=0}^\infty \sum_{\mu\in\varLambda_k}\rho^{2\mu}\left\| \partial^\mu u(\boldsymbol{y}) \right\|_{a(\boldsymbol{y})}^2= \sum_{k=0}^\infty \sigma_k\leqslant \sigma_0\sum_{k=0}^\infty \delta^k<\infty, \end{equation} (3.6) and the statement will follow with $$C_0=C_0(r)=\sum _{k=0}^\infty \delta (r)^k$$. We now show $$\sigma _k\leqslant \sigma _0\delta ^k$$. Note that from the assumption $$\|\sum _{j\geqslant 1}y_j\psi _j\|_{L^{\infty }(D)}<\infty $$, in view of Bachmayr et al. (2017c, Lemma 3.2), we have $$\partial ^\mu u\in V$$ for any $$\mu \in \mathcal{F}$$. Thus, by taking $$v:=\partial ^\mu u$$ ($$\mu \in \varLambda _k$$) in Bachmayr et al. (2017c, (74)), we have \begin{align} {\sigma_k}&=\sum_{\mu\in \varLambda_k}\rho^{2\mu}\!\!\! \int_D\!\! 
a(\boldsymbol{y})|\nabla\partial^\mu u(\boldsymbol{y})|^2\,{\textrm{d}}x\nonumber\\ &\leqslant \sum_{\mu\in \varLambda_k} \sum_{\nu\in S_\mu} \left(\prod\limits_{j\geqslant 1} \frac{\mu_j!\rho_j^{\mu_j-\nu_j}\rho_j^{\mu_j}\rho_j^{\nu_j}} {\nu_j!(\mu_j-\nu_j)!} \right) \int_D a(\boldsymbol{y}) \left( \prod\limits_{j\geqslant 1}|\psi_j|^{\mu_j-\nu_j} \right) |\nabla\partial^\nu u(\boldsymbol{y})| |\nabla\partial^\mu u(\boldsymbol{y})| \,{\textrm{d}}x. \end{align} (3.7) Using the notation \begin{equation} \varepsilon(\mu,\nu)(x) := \varepsilon(\mu,\nu) := \frac{\mu!}{\nu!}\frac{\rho^{\mu-\nu}|\psi|^{\mu-\nu}}{(\mu-\nu)!}, \end{equation} (3.8) and the Cauchy–Schwarz inequality for the sum over $$S_\mu $$, it follows that \begin{equation} {\sigma_k}\leqslant \int_D \sum_{\mu\in \varLambda_k} \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})| |\rho^\mu\nabla\partial^\mu u(\boldsymbol{y})| \,{\textrm{d}}x \end{equation} (3.9) \begin{equation} \leqslant \int_D \sum_{\mu\in \varLambda_k} \left(\sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})|^2 \right)^{\frac{1}{2}} \left(\sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\mu\nabla\partial^\mu u(\boldsymbol{y})|^2 \right)^{\frac{1}{2}} \,{\textrm{d}}x. \end{equation} (3.10) Let $$ S_{\mu,\ell}:=\{\nu\in S_\mu\mid |\mu-\nu|=\ell\}.$$ Then for $$\mu \in \varLambda _k$$ we have $$ S_{\mu}=\{\nu\in\mathcal{F}\mid\nu\leqslant \mu,\ \nu\neq\mu\}=\bigcup\limits_{\ell=1}^{|\mu|}\{\nu\in\mathcal{F}\mid\nu\leqslant \mu,\ |\mu-\nu|=\ell \} = \bigcup\limits_{\ell=1}^{|\mu|}S_{\mu,\ell} ,$$ and further, from $$|\mu |=k$$, we have \begin{equation} \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) = \sum_{\ell=1}^{k} \sum_{\nu\in S_{\mu,\ell}} \varepsilon(\mu,\nu) = \sum_{\ell=1}^{k} \sum_{\nu\in S_{\mu,\ell}} \frac{\mu!}{\nu!}\frac{\rho^{\mu-\nu}|\psi|^{\mu-\nu}}{(\mu-\nu)!}.
\end{equation} (3.11) Since $$\nu \in S_{\mu ,\ell }$$ implies $$\sum _{j\in \operatorname{supp}\mu }(\mu _j-\nu _j)=\ell $$, there are ℓ factors in $$\frac{\mu !}{\nu !}=\prod _{j\in \operatorname{supp}\mu }\mu _j(\mu _j-1)\dotsb (\nu _j+1)$$. From $$\mu _j\leqslant r$$ ($$j\in \operatorname{supp}\mu $$), each of the factors is at most r. Thus, $$ \frac{\mu!}{\nu!} \leqslant r^\ell\quad\textrm{ for }\mu\in \varLambda_k,\ \nu\in S_{\mu,\ell}. $$ Therefore, from the multinomial theorem, for each x ∈ D it follows from (3.11) that \begin{equation} \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) \leqslant \sum_{\ell=1}^{k} r^\ell \sum_{\nu\in S_{\mu,\ell}} \frac{\rho^{\mu-\nu}|\psi|^{\mu-\nu}}{(\mu-\nu)!} \leqslant \sum_{\ell=1}^{k} r^\ell \sum_{|\tau|=\ell} \frac{\rho^{\tau}|\psi|^{\tau}}{\tau!} = \sum_{\ell=1}^{k} r^\ell \frac{1}{\ell!} \sum_{|\tau|=\ell} \frac{\ell!}{\tau!} \rho^{\tau}|\psi|^{\tau} \end{equation} (3.12) \begin{equation} = \sum_{\ell=1}^{k} r^\ell \frac{1}{\ell!} \left(\sum_{j=1}^\infty \rho_j|\psi_j|\right)^\ell \leqslant \sum_{\ell=1}^{k} r^\ell \frac{1}{\ell!} \kappa^\ell \leqslant e^{r\kappa}-1\leqslant e^{{\ln 2}}-1=1. \end{equation} (3.13) Inserting into (3.10), we have \begin{equation} \sum_{\mu\in \varLambda_k}\!\!\rho^{2\mu} \left\| \partial^\mu u(\boldsymbol{y}) \right\|_{a(\boldsymbol{y})}^2 \leqslant \int_D \sum_{\mu\in \varLambda_k} \left( \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})|^2 \right)^{\frac{1}{2}} \left( a(\boldsymbol{y}) |\rho^\mu\nabla\partial^\mu u(\boldsymbol{y})|^2 \right)^{\frac{1}{2}} \,{\textrm{d}}x. \end{equation} (3.14) Again applying the Cauchy–Schwarz inequality to the summation over $$\varLambda _k$$ and then to the integral, we have \begin{align*} \begin{split} \sigma_k &\leqslant \int_D \left( \sum_{\mu\in \varLambda_k} \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})|^2 \right)^{\frac{1}{2}}\!\! 
\left( \sum_{\mu\in \varLambda_k} a(\boldsymbol{y}) |\rho^\mu\nabla\partial^\mu u(\boldsymbol{y})|^2 \right)^{\frac{1}{2}} \,{\textrm{d}}x\nonumber \end{split} \\ &\leqslant \left(\int_D \sum_{\mu\in \varLambda_k} \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})|^2 \,{\textrm{d}}x \right)^{\frac{1}{2}} \sigma_k^{\frac{1}{2}}, \end{align*} and hence \begin{equation} {\sigma_k\leqslant \int_D \sum_{\mu\in \varLambda_k} \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})|^2 \,{\textrm{d}}x.} \end{equation} (3.15) Now, for any k ⩾ 1 and any $$\nu \in \varLambda _\ell =\{\nu \in \mathcal{F}\mid |\nu |=\ell ,\ \left \| \nu \right \|_\infty \leqslant r\}$$ with ℓ ⩽ k − 1, let $$ R_{\nu,\ell,k}:= \{\mu\in\varLambda_k\mid \nu\in S_\mu \} = \{\mu\in\mathcal{F}\mid |\mu|=k,\ \left\| \mu \right\|_\infty\leqslant r,\ \mu\geqslant \nu,\ \mu\neq\nu \}. $$ Then for fixed k ⩾ 1 we can write \begin{equation} \bigcup_{\mu\in\varLambda_k} \bigcup_{\nu\in S_\mu} (\mu,\nu) = \bigcup_{\ell=0}^{k-1} \bigcup_{\nu\in \varLambda_\ell} \bigcup_{\mu\in R_{\nu,\ell,k}} (\mu,\nu). \end{equation} (3.16) Thus, we have \begin{equation} \sum_{\mu\in \varLambda_k} \sum_{\nu\in S_\mu} \varepsilon(\mu,\nu) a(\boldsymbol{y}) |\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})|^2 = \sum_{\ell=0}^{k-1} \sum_{\nu\in \varLambda_\ell} a(\boldsymbol{y})|\rho^\nu\nabla\partial^\nu u(\boldsymbol{y})|^2 \sum_{\mu\in R_{\nu,\ell,k}} \varepsilon(\mu,\nu). \end{equation} (3.17) Now, note that $$k-\ell ={\sum _{j\in \operatorname{supp} \mu }\mu _j -\sum _{j\in \operatorname{supp} \mu }\nu _j=} |\mu -\nu |$$. Thus, we have $$\frac{\mu !}{\nu !}\leqslant r^{k-\ell }$$. 
It follows that \begin{equation} \sum_{\mu\in R_{\nu,\ell,k}} \varepsilon(\mu,\nu) = \sum_{ \mu\in R_{\nu,\ell,k} } \frac{\mu!}{\nu!}\frac{\rho^{\mu-\nu}|\psi|^{\mu-\nu}}{(\mu-\nu)!} \leqslant r^{k-\ell} \sum_{ \mu\in R_{\nu,\ell,k} } \frac{\rho^{\mu-\nu}|\psi|^{\mu-\nu}}{(\mu-\nu)!} \end{equation} (3.18) \begin{equation} \leqslant r^{k-\ell} \sum_{|\tau|=k-\ell} \frac{\rho^{\tau}|\psi|^{\tau}}{\tau!} \leqslant r^{k-\ell}\frac1{(k-\ell)!}\kappa^{k-\ell}. \end{equation} (3.19) Then substituting (3.19) into (3.17) we obtain from (3.15), \begin{equation} \sigma_k \leqslant \sum_{\ell=0}^{k-1} \frac1{(k-\ell)!} (r\kappa)^{k-\ell} \sigma_\ell. \end{equation} (3.20) From the assumption we have $$\kappa <{\frac{\ln 2}{r}}$$. Thus, we can take $$\delta =\delta (r)<1$$ such that $$\kappa <\delta{\frac{\ln 2}r}.$$ We show $$\sigma _k\leqslant \sigma _0\delta ^k$$ for all k ⩾ 0 by induction. This is clearly true for k = 0. Suppose $$\sigma _{\ell }\leqslant \sigma _0\delta ^{\ell }$$ holds for $$\ell =0,\dotsc ,k-1$$. Then we have \begin{equation} \sigma_k \leqslant \sum_{\ell=0}^{k-1} \frac1{(k-\ell)!} (r\kappa)^{k-\ell} \sigma_\ell \leqslant \sum_{\ell=0}^{k-1} \frac1{(k-\ell)!} (r\kappa)^{k-\ell} \sigma_0\delta^\ell \leqslant \sum_{\ell=0}^{k-1} \frac1{(k-\ell)!} (\delta{\ln 2})^{k-\ell} \sigma_0\delta^\ell \end{equation} (3.21) \begin{equation} = \sigma_0\delta^k \sum_{\ell=0}^{k-1} \frac1{(k-\ell)!} ({\ln 2})^{k-\ell} \leqslant\sigma_0\delta^k(e^{{\ln 2}}-1)=\sigma_0\delta^k, \end{equation} (3.22) which completes the proof. With the notation \begin{equation} \check{a}(\boldsymbol{y}):=\mathop{\textrm{ess}\inf}_{x\in D} a(x,\boldsymbol{y})\quad\textrm{and}\quad \hat{a}(\boldsymbol{y}):=\mathop{\textrm{ess}\sup}_{x\in D} a(x,\boldsymbol{y}), \end{equation} (3.23) we have the following corollary, where here and from now on we set r = 1. Corollary 3.2 Suppose $$(\psi _j)$$ satisfies Assumption 2.2 with a positive sequence $$(\rho _j)$$.
Then for $$C_0=C_0(1)$$ as in Proposition 3.1, for any $${\mathfrak{u}}\subset{\mathbb{N}}$$ of finite cardinality we have \begin{equation} \left\| \frac{\partial^{|{\mathfrak{u}}|} u(\boldsymbol{y})}{\partial \boldsymbol{y}_{{\mathfrak{u}}}} \right\|_{V} \leqslant \sqrt{C_0} \frac{\left\| f \right\|_{{V^{^{\prime}}}}}{{\check{a}(\boldsymbol{y})}} \prod_{j\in{\mathfrak{u}}}\frac{1}{\rho_j}<\infty, \quad\text{almost surely,} \end{equation} (3.24) where $$\left \| \cdot \right \|_{{V^{^{\prime}}}}$$ is the norm in the dual space $${V^{^{\prime}}}$$. The same bound holds also for $$\left \| \frac{\partial ^{|{\mathfrak{u}}|} u^s}{\partial \boldsymbol{y}_{{\mathfrak{u}}}} \right \|_{V}$$, with $$\boldsymbol{y}=(y_1,\dotsc ,y_s,0,0,\dotsc )$$. Proof. First, if $$\boldsymbol{y}\in{\mathbb{R}}^{\mathbb{N}}$$ satisfies $$\|\sum _{j\geqslant 1}y_j\psi _j\|_{L^{\infty }(D)}<\infty $$, then we have $$\frac{1}{\check{a}(\boldsymbol{y})}<\infty $$: \begin{equation} {\check{a}(\boldsymbol{y})} \geqslant \big(\inf_{x\in D}a_0(x)\big) \exp\left(-\mathop{\textrm{ess}\sup}_{x\in D}\Big|\sum_{j\geqslant1}y_j\psi_j(x)\Big|\right) , \end{equation} (3.25) and thus \begin{equation} \frac1{\check{a}(\boldsymbol{y})} \leqslant \frac1{\big(\inf_{x\in D}a_0(x)\big)} \exp\left(\mathop{\textrm{ess}\sup}_{x\in D}\big|\sum_{j\geqslant1}y_j\psi_j(x)\big|\right). \end{equation} (3.26) Now, from $$(1/\rho _j)\in \ell ^{q}$$ for some $$q\in (0,\infty )$$, in view of Bachmayr et al. (2017a, Remark 2.2) we have $$ \mathbb{E} \left[ \exp\left( k \bigg\| \sum_{j\geqslant1}y_j\psi_j \bigg\|_{L^{\infty}(D)} \right) \right]<\infty $$ for any $$0\leqslant k<\infty $$. Thus, $$\|\sum _{j\geqslant 1}y_j\psi _j\|_{L^{\infty }(D)}<\infty $$ almost surely, and the right-hand side of (3.24) is bounded on a set of full (Gaussian) measure.
We remark that the $${\mathcal{B}({\mathbb{R}}^{{\mathbb{N}}})}/\mathcal{B}({\mathbb{R}})$$-measurability of the mapping $$\boldsymbol{y}\mapsto \big \| \sum _{j\geqslant 1}y_j\psi _j \big \|_{L^{\infty }(D)}$$ is not an issue. See Bachmayr et al. (2017a, Remark 2.2) noting the continuity of norms, together with, for example, Reed & Simon (1980, Appendix to IV. 5). Now, recalling the standard argument regarding the continuous dependence of the solution of the variational problem (1.6) on f, we have $$\int _{D} a(\boldsymbol{y})|\nabla (u(\boldsymbol{y}))|^2\,{\textrm{d}}x\leqslant \frac{\left \| f \right \|_{{V^{^{\prime}}}}^2}{\check{a}(\boldsymbol{y})}$$. Then the claim follows from Proposition 3.1, noting that for any $${\mathfrak{u}}\subset{\mathbb{N}}$$ of finite cardinality we have \begin{equation} \check{a}(\boldsymbol{y})\int_{D} \Big|\nabla\left(\frac{\partial^{|{\mathfrak{u}}|} u}{\partial \boldsymbol{y}_{{\mathfrak{u}}}}\right)\Big|^2 \,{\textrm{d}}x \leqslant \sum_{\substack{\mu\in\mathcal{F}\\\left\| \mu \right\|_\infty\leqslant 1}} {\rho^{2\mu}} \int_{D} a(\boldsymbol{y})|\nabla(\partial^\mu u(\boldsymbol{y}))|^2\,{\textrm{d}}x. \end{equation} (3.27) Remark 3.3 We note that following a similar discussion to the above, $$\hat{a}(\boldsymbol{y})$$ can be bounded almost surely. Thus, under Assumption 2.2, the well-posedness of problem (1.6) readily follows almost surely. Further, Assumption 2.2 implies the measurability of the mapping $${\mathbb{R}}^s\ni \boldsymbol{y}\mapsto \mathcal{G}(u^s(\cdot ,\boldsymbol{y}))\in{\mathbb{R}}$$. See Bachmayr et al. (2017a, Corollary 2.1, Remark 2.2) noting $$\mathcal{G}\in{V^{^{\prime}}}$$, together with the fact that a strongly $${\mathscr{F}}$$-measurable V-valued mapping is weakly $${\mathscr{F}}$$-measurable. For more details on the measurability of vector-valued functions, see, for example, Reed & Simon (1980); Yosida (1995). 4. 
QMC integration error with product weights Based on the bound on mixed derivatives obtained in the previous section, we now derive a QMC convergence rate with product weights. We first introduce some notation. Let \begin{equation} \varsigma_j(\lambda) := 2 \left( \frac{ \sqrt{2\pi}\exp(\alpha_j^2/\varLambda^{*}) }{ \pi^{2-2\varLambda^{*}}(1-\varLambda^{*})\varLambda^{*} } \right)^\lambda \zeta\left(\lambda+\frac12\right), \end{equation} (4.1) where $$\varLambda ^{*}:=\frac{2\lambda -1}{4\lambda }$$, and $$\zeta (x):=\sum _{k=1}^\infty k^{-x}$$ denotes the Riemann zeta function. We record the following result by Graham et al. (2015). Theorem 4.1 (Graham et al., 2015, Theorem 15). Let $$F\in \mathcal{W}^s$$. Given s, $$n\in \mathbb{N}$$ with $$2\leqslant n\leqslant 10^{30}$$, weights $$\boldsymbol{\gamma }=(\gamma _{{\mathfrak{u}}})_{{\mathfrak{u}}\subset \mathbb{N}}$$, and the standard normal density function $$\phi $$, a randomly shifted lattice rule with n points in s dimensions can be constructed by a CBC algorithm such that, for all $$\lambda \in (1/2,1]$$, \begin{equation} \sqrt{ \mathbb{E}^{\boldsymbol{\varDelta}} \big| I_s(F) - \mathcal{Q}_{s,n}(\boldsymbol{\varDelta};F) \big|^2 } \leqslant{9} \left( \sum_{\emptyset\neq{\mathfrak{u}}\subseteq\{1:s\}} \gamma_{{\mathfrak{u}}}^\lambda \prod\limits_{j\in{\mathfrak{u}}}\varsigma_j(\lambda) \right)^{\frac1{2\lambda}} n^{-\frac{1}{2\lambda}} \left\| F \right\|_{\mathcal{W}^s}. \end{equation} (4.2) For the weight function (3.2) we assume that the $$\alpha _j$$ satisfy, for some constants $$0<\alpha _{\min }<\alpha _{\max }<\infty $$, \begin{equation} \max\left\{\frac{\ln 2}{\rho_j},\alpha_{\min}\right\}<\alpha_j\leqslant \alpha_{\max},\qquad j\in\mathbb{N}. \end{equation} (4.3) For example, under Assumption 2.2 the choice $$\alpha _j:=1+\frac{\ln 2}{\rho _j}$$ satisfies (4.3) with $$\alpha _{\min }:=1$$ and $$\alpha _{\max }:=1+\sup _{j\geqslant 1}\frac{\ln 2}{\rho _j}$$.
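The quantity (4.1) is elementary to evaluate. A minimal sketch, assuming $$\lambda\in(1/2,1]$$ so that $$\lambda+\frac12>1$$, and using a crude partial-sum approximation of the zeta function (the truncation level and the sample arguments are illustrative):

```python
import math

def zeta(s, n=10_000):
    """Crude approximation of the Riemann zeta function for s > 1:
    a partial sum plus the integral estimate of the tail."""
    return sum(k ** -s for k in range(1, n + 1)) + n ** (1 - s) / (s - 1)

def varsigma(alpha_j, lam):
    """varsigma_j(lambda) as in (4.1); requires lam in (1/2, 1] so that
    Lambda^* = (2 lam - 1) / (4 lam) lies in (0, 1/4]."""
    Lam = (2 * lam - 1) / (4 * lam)
    base = (math.sqrt(2 * math.pi) * math.exp(alpha_j ** 2 / Lam)
            / (math.pi ** (2 - 2 * Lam) * (1 - Lam) * Lam))
    return 2 * base ** lam * zeta(lam + 0.5)
```

Note that $$\varsigma_j(\lambda)$$ grows rapidly with $$\alpha_j$$ through the factor $$\exp(\lambda\alpha_j^2/\varLambda^{*})$$, which is one reason the uniform bound $$\alpha_j\leqslant\alpha_{\max}$$ in (4.3) matters.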
We have the following bound on $$\left \| F \right \|_{\mathcal{W}^s}^2$$. The argument is essentially by Graham et al. (2015, Theorem 16). Proposition 4.2 Suppose Assumption 2.2 is satisfied with a positive sequence $$(\rho _j)$$ such that \begin{equation} (1/\rho_j)\in\ell^{1}. \end{equation} (4.4) Then we have \begin{equation} \left\| F \right\|_{\mathcal{W}^s}^2 \leqslant (C^{\ast})^2\sum_{{\mathfrak{u}}\subseteq\{1:s\}}\! \frac1{\gamma_{{\mathfrak{u}}}} \left(\frac1{\prod\limits_{j\in{\mathfrak{u}}}\rho_j}\right)^{\!\!2} \prod\limits_{j\in{\mathfrak{u}}} \frac{1}{\alpha_j-({\ln 2})/\rho_j}, \end{equation} (4.5) with a positive constant $$ C^{\ast }:= \frac{\left \| f \right \|_{V^{^{\prime}}}\left \| \mathcal{G} \right \|_{V^{^{\prime}}}\sqrt{C_0}}{\inf _{x\in{D}}a_0(x)} \left[\exp \left( \frac 12\sum _{j\geqslant 1}\frac{(\ln 2)^2}{\rho _j^2}+\frac 2{\sqrt{2\pi }}\sum _{j\geqslant 1}\frac{\ln 2}{\rho _j} \right)\right ]<\infty . $$ Proof. In this proof we abuse the notation slightly and y always denotes $$(y_1,\dotsc ,y_s,0,0,\dotsc )\in \mathbb{R}^{\mathbb{N}}$$. From (2.7) and (4.4), in view of Corollary 3.2 for $$\mathbb{P}_Y$$-almost every y we have \begin{equation} \left| \frac{\partial^{|{\mathfrak{u}}|} F}{\partial \boldsymbol{y}_{{\mathfrak{u}}}} \right| \leqslant \left\| \mathcal{G} \right\|_{{V^{^{\prime}}}}\left\| \frac{\partial^{|{\mathfrak{u}}|} u^s}{\partial \boldsymbol{y}_{{\mathfrak{u}}}} \right\|_V \leqslant \left\| \mathcal{G} \right\|_{{V^{^{\prime}}}}\sqrt{{C_0}}\frac1{\prod\limits_{j\in{\mathfrak{u}}}\rho_j} \frac{\left\| f \right\|_{{V^{^{\prime}}}}}{\check{a}(\boldsymbol{y})}. 
\end{equation} (4.6) Since $$ \sup_{x\in D}\sum_{j\geqslant1}|y_j||\psi_j(x)| \leqslant \left( \sup_{j\geqslant1}\frac{|y_j|}{\rho_j} \right) \sup_{x\in D}\sum_{j\geqslant1}\rho_j|\psi_j(x)|\leqslant \left( \sum_{j\geqslant1}\frac{|y_j|}{\rho_j} \right) \sup_{x\in D}\sum_{j\geqslant1}\rho_j|\psi_j(x)|,$$ condition (2.7) and equations (4.6) and (3.26) together with $$y_j=0$$ for j > s, imply \begin{equation} \left| \frac{\partial^{|{\mathfrak{u}}|} F}{\partial \boldsymbol{y}_{{\mathfrak{u}}}} \right| \leqslant \frac{K^{\ast}}{\prod\limits_{j\in{\mathfrak{u}}}\rho_j} \prod_{j\in\{1:s\}}\exp\left( \frac{\ln2}{\rho_j}|y_j| \right), \end{equation} (4.7) where $$K^{\ast }:=\frac{\left \| f \right \|_{V^{^{\prime}}}\left \| \mathcal{G} \right \|_{V^{^{\prime}}}\sqrt{{C_0}}} {\inf _{x\in{D}} a_0(x)} $$. Then it follows from (3.1) that \begin{equation} \left\| F \right\|_{\mathcal{W}^s}^2 {=} \sum_{{\mathfrak{u}}\subseteq\{1:s\}} \frac1{\gamma_{{\mathfrak{u}}}} \int_{\mathbb{R}^{|{\mathfrak{u}}|}}\!\!\! \left( \int_{\mathbb{R}^{s-|{\mathfrak{u}}|}} \left| \frac{\partial^{|{\mathfrak{u}}|} F}{\partial \boldsymbol{y}_{{\mathfrak{u}}}}(\boldsymbol{y}_{\mathfrak{u}};\boldsymbol{y}_{\{1:s\}\setminus{\mathfrak{u}}}) \right| \prod\limits_{j\in\{1:s\}\setminus{\mathfrak{u}}} \phi(y_j)\,{\textrm{d}}\boldsymbol{y}_{\{1:s\}\setminus{\mathfrak{u}}} \right)^{\!\!\!2} \prod\limits_{j\in{\mathfrak{u}}} w_j^2(y_j)\,{\textrm{d}}\boldsymbol{y}_{\mathfrak{u}} \end{equation} (4.8) \begin{equation} \leqslant \sum_{{\mathfrak{u}}\subseteq\{1:s\}}\! \frac1{\gamma_{{\mathfrak{u}}}} \int_{\mathbb{R}^{|{\mathfrak{u}}|}}\!\!\! \left( \int_{\mathbb{R}^{s-|{\mathfrak{u}}|}} \frac{K^{\ast}}{\prod\limits_{j\in{\mathfrak{u}}}\rho_j} \prod_{j\in\{1:s\}}\exp\!\left( \frac{\ln2}{\rho_j}|y_j| \right)\! 
\prod\limits_{j\in\{1:s\}\setminus{\mathfrak{u}}} \phi(y_j)\,{\textrm{d}}\boldsymbol{y}_{\{1:s\}\setminus{\mathfrak{u}}} \right)^{\!\!\!2} \prod\limits_{j\in{\mathfrak{u}}} w_j^2(y_j)\,{\textrm{d}}\boldsymbol{y}_{\mathfrak{u}} \end{equation} (4.9) \begin{align} =& \, (K^{\ast})^2\sum_{{\mathfrak{u}}\subseteq\{1:s\}}\! \frac1{\gamma_{{\mathfrak{u}}}} \left(\frac1{\prod\limits_{j\in{\mathfrak{u}}}\rho_j}\right)^{\!\!2}\nonumber\\ &\times \left( \int_{\mathbb{R}^{s-|{\mathfrak{u}}|}} \prod_{j\in\{1:s\}\setminus{\mathfrak{u}}}\exp\left( \frac{\ln2}{\rho_j}|y_j| \right) \prod\limits_{j\in\{1:s\}\setminus{\mathfrak{u}}} \phi(y_j)\,{\textrm{d}}\boldsymbol{y}_{\{1:s\}\setminus{\mathfrak{u}}} \right)^{\!\!2}\nonumber\\ &\times \int_{\mathbb{R}^{|{\mathfrak{u}}|}} \prod_{j\in{\mathfrak{u}}}\exp\left( \frac{2\ln2}{\rho_j}|y_j| \right) \prod\limits_{j\in{\mathfrak{u}}} w_j^2(y_j)\,{\textrm{d}}\boldsymbol{y}_{\mathfrak{u}}. \end{align} (4.10) Note that this takes essentially the same form as Graham et al. (2015, (4.14)). Thus, the rest of the proof parallels that of Graham et al. (2015, Theorem 16). Noting that $$\frac{2\ln 2}{\rho _j}-2\alpha _j<0$$, and following the same argument as in Graham et al. (2015, (4.15)–(4.17)), we have \begin{equation} \left\| F \right\|_{\mathcal{W}^s}^2 \leqslant (K^{\ast})^2\sum_{{\mathfrak{u}}\subseteq\{1:s\}}\! \frac1{\gamma_{{\mathfrak{u}}}} \left(\frac1{\prod\limits_{j\in{\mathfrak{u}}}\rho_j}\right)^{\!\!2} \left( \prod\limits_{j\in\{1:s\}\setminus{\mathfrak{u}}} 2\exp\left(\frac{(\ln2)^2}{2\rho_j^2}\right)\varPhi\left(\frac{\ln2}{\rho_j}\right) \right)^{\!\!2} \prod_{j\in{\mathfrak{u}}}\frac1{\alpha_j-\frac{\ln2}{\rho_j}}, \end{equation} (4.11) with $$\varPhi (\cdot )$$ denoting the cumulative standard normal distribution function. Comparing this to Graham et al. (2015, (4.17)), the statement follows from the rest of the proof of Graham et al. (2015, Theorem 16). As in Graham et al.
(2015, Theorem 17), from Theorem 4.1 and Proposition 4.2 we have the following. Proposition 4.3 For each j ⩾ 1, let $$w_j^2(t)=\exp (-2\alpha _j|t|)$$ ($$t\in \mathbb{R}$$) with $$\alpha _j$$ satisfying (4.3). Given s, $$n\in \mathbb{N}$$ with $$2\leqslant n\leqslant 10^{30}$$, weights $$\boldsymbol{\gamma }=(\gamma _{{\mathfrak{u}}})_{{\mathfrak{u}}\subset \mathbb{N}}$$ and the standard normal density function $$\phi $$, a randomly shifted lattice rule with n points in s dimensions can be constructed by a CBC algorithm such that, for all $$\lambda \in (1/2,1]$$, \begin{equation} \sqrt{ \mathbb{E}^{\boldsymbol{\varDelta}} \left| I_s(F) - \mathcal{Q}_{s,n}(\boldsymbol{\varDelta};F) \right|{}^2 } \leqslant 9C^{\ast}C_{\boldsymbol{\gamma},s}(\lambda)n^{-\frac{1}{2\lambda}}, \end{equation} (4.12) with \begin{equation} C_{\boldsymbol{\gamma},s}(\lambda):= \left( \sum_{\emptyset\neq{\mathfrak{u}}\subseteq\{1:s\}} \gamma_{{\mathfrak{u}}}^\lambda \prod\limits_{j\in{\mathfrak{u}}}\varsigma_j(\lambda) \right)^{\frac{1}{2\lambda}} \left( \sum_{{\mathfrak{u}}\subseteq\{1:s\}} \frac1{\gamma_{{\mathfrak{u}}}} \bigg(\frac1{\prod\limits_{j\in{\mathfrak{u}}}\rho_j}\bigg)^{\!\!2} \prod\limits_{j\in{\mathfrak{u}}} \frac{1}{[\alpha_j-{\ln2}/{\rho_j}]} \right)^{\!\!\frac{1}{2}}, \end{equation} (4.13) and $$C^{\ast }$$ defined as in Proposition 4.2. We choose weights of the product form \begin{equation} \gamma_{\mathfrak{u}}=\gamma^{\ast}_{\mathfrak{u}}(\lambda) := \left[ \bigg(\frac1{\prod\limits_{j\in{\mathfrak{u}}}\rho_j}\bigg)^{\!\!2} \prod\limits_{j\in{\mathfrak{u}}} \frac{1} {\varsigma_j(\lambda)[\alpha_j-\ln 2/\rho_j]} \right]^{\frac{1}{1+\lambda}}. \end{equation} (4.14) In particular, with $$\alpha _j:=1+\ln 2/\rho _j$$ we have \begin{equation} \gamma_{\mathfrak{u}}= \prod\limits_{j\in{\mathfrak{u}}} \left( \frac{1} {\rho_j^2\varsigma_j(\lambda)} \right)^{\frac{1}{1+\lambda}}.
\end{equation} (4.15) It then turns out that, for a suitable value of $$\lambda $$, the constant (4.13) can be bounded independently of s, and we have the following QMC error bound. Theorem 4.4 For each j ⩾ 1, let $$w_j^2(t)=\exp (-2\alpha _j|t|)$$ ($$t\in \mathbb{R}$$) with $$\alpha _j$$ satisfying (4.3). Let $$\varsigma _{\max }(\lambda )$$ be $$\varsigma _j$$ defined by (4.1) but with $$\alpha _j$$ replaced by $$\alpha _{\max }$$. Suppose $$(\psi _j)$$ satisfies Assumption 2.1. Suppose further that we choose $$\lambda $$ as \begin{equation} \lambda= \begin{cases} \frac1{2-2\delta}\textrm{ for arbitrary }\ \delta\in(0,\frac12] &\textrm{ when }q\in(0,\frac23],\\ \frac{q}{2-q} &\textrm{ when }q\in(\frac23,1], \end{cases} \end{equation} (4.16) and choose the weights $$\gamma _{\mathfrak{u}}$$ as in (4.14). Then given s, $$n\in \mathbb{N}$$ with $$n\leqslant 10^{30}$$, and the standard normal density function $$\phi $$, a randomly shifted lattice rule with n points in s dimensions can be constructed by a CBC algorithm such that \begin{equation} \sqrt{ \mathbb{E}^{\boldsymbol{\varDelta}} \left| I_s(F) - \mathcal{Q}_{s,n}(\boldsymbol{\varDelta};F) \right|{}^2 } \leqslant \begin{cases} 9C_{\rho,q,\delta}C^{\ast} n^{-(1-\delta)} &\textrm{when } 0<q\leqslant\frac23,\\ 9C_{\rho,q}C^{\ast} n^{-\frac{2-q}{2q}} &\textrm{when } \frac23<q\leqslant1, \end{cases} \end{equation} (4.17) where the constants $$C_{\rho ,q,\delta }$$ (resp. $$C_{\rho ,q}$$) are independent of s but depend on $$\rho :=(\rho _j)$$, q and $$\delta $$ (resp. $$\rho $$ and q), and $$C^{\ast }$$ is defined as in Proposition 4.2.
In particular, with $$\alpha _j:=1+\ln 2/\rho _j$$ the finite constants $$C_{\rho ,q,\delta }$$ and $$C_{\rho ,q}$$ are both given by $$ {C_{\rho,q,\delta}=C_{\rho,q}=} \left( \prod_{j=1}^{\infty} \left( 1+ \left( \frac{\varsigma_j(\lambda)}{\rho_j^{2\lambda}} \right)^{\!\!\frac{1}{1+\lambda}} \right) -1 \right)^{\!\!\frac{1}{2\lambda}} \left( \prod_{j=1}^{\infty} \left( 1+ \left( \frac{\varsigma_j(\lambda)}{\rho_j^{2\lambda}} \right)^{\!\!\frac{1}{1+\lambda}} \right) \right)^{\!\!\frac{1}{2}}, $$ with $$\lambda $$ given by (4.16). Proof. Let $$\beta _j(\lambda ):=( \frac{(\varsigma _j(\lambda ))^{\frac 1{\lambda }}} {\rho _j^2[\alpha _j-\ln 2/\rho _j]}) ^{\frac{\lambda }{1+\lambda }}$$. Observe that with the choice of weights (4.14) we have \begin{equation} C_{\boldsymbol{\gamma},s}(\lambda) = \left( \sum\limits_{\emptyset\neq{\mathfrak{u}}\subseteq\{1:s\}} \prod\limits_{j\in{\mathfrak{u}}} \beta_j(\lambda) \right)^{\frac{1}{2\lambda}} \left( \sum\limits_{{\mathfrak{u}}\subseteq\{1:s\}} \prod\limits_{j\in{\mathfrak{u}}} \beta_j(\lambda) \right)^{\frac{1}{2}} \end{equation} (4.18) \begin{equation} = \left( \bigg( \prod\limits_{j=1}^{s} (1+\beta_j(\lambda)) \bigg) -1 \right)^{\frac{1}{2\lambda}} \left( \prod\limits_{j=1}^{s} (1+\beta_j(\lambda)) \right)^{\frac{1}{2}}. \end{equation} (4.19) Now, let $$\mathcal{J}:=\inf _{j\geqslant 1}(\alpha _j-\ln 2/\rho _j)$$, which is a positive value from (4.3). Further, note that $$\varsigma _j(\lambda )\leqslant \varsigma _{\max }(\lambda )$$ for j ⩾ 1. Then from $$\beta _j(\lambda )\geqslant 0$$ we have \begin{equation} \prod\limits_{j=1}^{s} (1+\beta_j(\lambda)) \leqslant \prod\limits_{j=1}^{s} \exp({\beta_j(\lambda)}) \leqslant \exp\left(\sum_{j\geqslant1}{\beta_j(\lambda)}\right) \leqslant \exp\left( \left[ \frac{[\varsigma_{\max}(\lambda)]^{\frac{1}{\lambda}}}{\mathcal{J}} \right]^{\frac{\lambda}{1+\lambda}} \sum_{j\geqslant1} \left[ \frac{1}{\rho_j} \right]^{\frac{2\lambda}{1+\lambda}} \right). 
\end{equation} (4.20) Thus, if $$ \sum _{j\geqslant 1}[ \frac{1}{\rho _j}]^{\frac{2\lambda }{1+\lambda }} <\infty $$ we can conclude that $$C_{\boldsymbol{\gamma },s}(\lambda )$$ is bounded independently of s. We discuss the relation between q and the exponent $${\frac{2\lambda }{1+\lambda }}$$. First note that from $$\lambda \in (\frac 12,1]$$, we have $$ \frac 23<{\frac{2\lambda }{1+\lambda }}\leqslant 1. $$ Suppose $$0<q\leqslant \frac 23$$. In this case, we always have $$q<{\frac{2\lambda }{1+\lambda }}$$, and thus $$(1/\rho _j)\in \ell ^{\frac{2\lambda }{1+\lambda }}$$. Thus, $$\sum _{j\geqslant 1} [ \frac{1}{\rho _j}]^{\frac{2\lambda }{1+\lambda }} <\infty $$ follows. Letting $$ \lambda :=\frac 1{2-2\delta } $$ with an arbitrary $$\delta \in (0,\frac 12]$$, we obtain the result for $$q\in (0,\frac 23]$$. Next consider the case $$\frac 23<q\leqslant 1$$. Then letting $$\lambda :=\lambda (q)=\frac{q}{2-q}$$, we have $$\lambda \in (1/2,1]$$ and \begin{equation} \frac{2\lambda}{1+\lambda} = \frac{2\frac{q}{2-q}}{1+\frac{q}{2-q}} = \frac{2{q}}{2-q+q}=q, \end{equation} (4.21) and thus $$\sum _{j\geqslant 1} \left [ \frac{1}{\rho _j} \right ]^{\frac{2\lambda }{1+\lambda }} <\infty $$. 4.1 On the estimate of the constant Estimate (4.17) gives the same convergence rate as the one obtained by Herrmann & Schwab (2016, Theorem 13). The weights used there are simpler than (4.15). See Herrmann & Schwab (2016, Equation (24)). The essential difference is that we incorporate the factor $$1/{\varsigma _j(\lambda )}$$ into the weights as in (4.14) and (4.15). An advantage of this is that, roughly speaking, when the magnitude of $$\{\sup _{x\in D}|\psi _{j}(x)|\}$$ is large, our estimate gives a smaller constant, as shown in Proposition 4.6 below. To make a comparison, following Herrmann & Schwab (2016) we let $$a_*\equiv 0$$ and $$a_0\equiv 1$$.
Suppose the sequence $$\{\rho _j \}$$ that satisfies Assumption 2.1 is given by $$\rho _{j}=c_b \frac 1{b_j}$$ with a constant $$c_b>0$$ and a sequence $$\{b_j\}$$, and let \begin{equation} K_{\mathrm{HS}}:=\sup_{x\in D} \sum_{j\geqslant 1}\frac{|\psi_{j}(x)|}{b_j}=\frac{\kappa}{c_b} < \frac{\ln 2}{c_b}. \end{equation} (HS-A1) This is essentially the same assumption as Herrmann & Schwab (2016, Assumption (A1)). We quote the following result. Theorem 4.5 (Herrmann & Schwab, 2016, Theorem 13). Suppose $$(\psi _j)$$ satisfies Assumption 2.1 with a sequence $$\{\rho _j\}$$ that is of the form $$\rho _{j}=c_b \frac 1{b_j}$$ with a constant $$c_b>0$$ and a sequence $$\{b_j\}$$. Let $$w_j(t)=\exp (-2\alpha |t|)$$ ($$t\in \mathbb{R}$$) with a parameter $$\alpha>\frac{\kappa }{c_b} \sup _{j\geqslant 1}\{b_j\}$$. Let $$\varsigma _{\mathrm{HS}}(\lambda )$$ be $$\varsigma _j$$ defined by (4.1) but with $$\alpha _j$$ replaced by $$\alpha $$. Suppose further that $$\lambda $$ is chosen as in (4.16), and that the weights $$\gamma _{\mathfrak{u}}$$ are chosen as $$\gamma _{\mathfrak{u}}:=\prod _{j\in{\mathfrak{u}}}b_j^{\frac{2}{1+\lambda }}$$. 
Then given s, $$n\in \mathbb{N}$$ with $$n\leqslant 10^{30}$$, and the standard normal density function $$\phi $$, a randomly shifted lattice rule with n points in s dimensions can be constructed by a CBC algorithm such that \begin{equation} \sqrt{ \mathbb{E}^{\boldsymbol{\varDelta}} \left| I_s(F) - \mathcal{Q}_{s,n}(\boldsymbol{\varDelta};F) \right|^2 } \leqslant 9 {\left\| f \right\|_{V^{^{\prime}}}\left\| \mathcal{G} \right\|_{V^{^{\prime}}}\sqrt{C_0}} C_{\mathrm{HS},1} C_{\mathrm{HS},2} C_{\mathrm{HS},3} n^{-\frac{1}{2\lambda}}, \end{equation} (4.22) with \begin{equation} C_{\mathrm{HS},1} :=\exp\left( \sum_{j\geqslant 1}\left( (K_{\mathrm{HS}} b_j)^2+\frac{2}{\sqrt{2\pi}}K_{\mathrm{HS}} b_j \right) \right), \end{equation} (4.23) \begin{equation} C_{\mathrm{HS},2} :=\exp\left( \frac1{2\lambda}\sum_{j\geqslant 1} b_j^{q} \varsigma_{\mathrm{HS}}(\lambda) \right) \quad\textrm{ and }\quad C_{\mathrm{HS},3} :=\exp\left( \frac12\sum_{j\geqslant 1} \frac{b_j^{q}/c^2}{\alpha/2 - 2 K_{\mathrm{HS}} b_j} \right), \end{equation} (4.24) with an arbitrarily fixed constant $$c\in (0,\ln 2/K_{\mathrm{HS}})$$, where $$C_0$$ is defined in Proposition 3.1. 
To compare, we note that (4.17) can be further bounded as \begin{equation} \sqrt{ \mathbb{E}^{\boldsymbol{\varDelta}} \left| I_s(F) - \mathcal{Q}_{s,n}(\boldsymbol{\varDelta};F) \right|^2 } \leqslant 9 {\left\| f \right\|_{V^{^{\prime}}}\left\| \mathcal{G} \right\|_{V^{^{\prime}}}\sqrt{C_0}} C_{{}1} C_{{}2} n^{-\frac{1}{2\lambda}}, \end{equation} (4.25) with \begin{equation} C_{{}1} :=\exp\left( \sum_{j\geqslant 1}\left( \frac12\left(\frac{\kappa}{\rho_j} \right)^2 + \frac{2}{\sqrt{2\pi}}\frac{\kappa}{\rho_j} \right) \right) \end{equation} (4.26) and \begin{equation} C_{{}2} := \exp\left( \left( \frac1{2\lambda}+\frac12 \right) \left[ \varsigma_{\max}(\lambda) \right]^{\frac{1}{1+\lambda}} \sum_{j\geqslant1} \left[ \frac{1}{\rho_j} \right]^{\frac{2\lambda}{1+\lambda}} \right), \end{equation} (4.27) with the choice $$\alpha _j:=1 + \frac{\kappa }{\rho _j}$$. Note that the scalar $$\ln 2$$ in (4.3) and (4.17) can be replaced by $$\kappa $$, which is defined as in (2.7). We have the following result on the comparison of the constants. Proposition 4.6 Fix $$\varepsilon _{\mathrm{HS}}>0$$ arbitrarily. Let the assumptions of Theorem 4.5 hold with $$\alpha :=\varepsilon _{\mathrm{HS}}+\kappa \sup _{j}\{{1}/{\rho _{j}}\}$$. Then we have $$C_{{}1}<C_{\mathrm{HS},1}$$ and $$1<C_{\mathrm{HS},3}$$. Further, for $$\lambda \in (1/2,1]$$ suppose $$ \kappa \sup_{j\geqslant1}\frac{1}{\rho_{j}}\geqslant \frac{\sqrt{1+\lambda}+1}{\lambda} $$ holds. Then we have $$C_{{}2}< C_{\mathrm{HS},2}$$, and therefore $$C_{{}1}C_{{}2}< C_{\mathrm{HS},1}C_{\mathrm{HS},2}C_{\mathrm{HS},3}$$. Proof. Clearly, we have $$1<C_{\mathrm{HS},3}$$. The equations (HS-A1) and $$\rho _{j}=c_b \frac 1{b_j}$$ imply $$K_{\mathrm{HS}}b_j = \frac{\kappa }{\rho _j}$$, and thus $$C_{{}1}<C_{\mathrm{HS},1}$$ follows. 
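The claim $$C_{1}<C_{\mathrm{HS},1}$$ is visible term by term: writing $$x_j:=\kappa/\rho_j=K_{\mathrm{HS}}b_j$$, each summand $$\frac12 x_j^2+\frac{2}{\sqrt{2\pi}}x_j$$ in (4.26) is strictly smaller than the corresponding summand $$x_j^2+\frac{2}{\sqrt{2\pi}}x_j$$ in (4.23) whenever $$x_j>0$$. A quick numerical illustration with hypothetical values of $$x_j$$ (not from the paper):

```python
import math

c = 2 / math.sqrt(2 * math.pi)
xs = [1.5, 0.8, 0.4, 0.2, 0.1]  # hypothetical values of x_j = kappa/rho_j

# exponents of C_1 (cf. (4.26)) and C_HS,1 (cf. (4.23))
C1 = math.exp(sum(0.5 * x ** 2 + c * x for x in xs))
CHS1 = math.exp(sum(x ** 2 + c * x for x in xs))
assert C1 < CHS1  # each exponent term is strictly smaller for x_j > 0
```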
To show $$C_{{}2}< C_{\mathrm{HS},2}$$, it suffices to show \begin{align} &\left( \frac1{2\lambda}+\frac12 \right) \left( \frac{\sqrt{2\pi}}{ \pi^{2-2\varLambda^{\ast}}(1-\varLambda^{\ast})\varLambda^{\ast}} \right)^{\frac{\lambda}{1+\lambda}} \big[ 2 \exp({\lambda \widetilde{\alpha}_{\max }^2/\varLambda^{\ast} }) \big]^{\frac{1}{1+\lambda}} \sum_{j\geqslant1} \left[ \frac{1}{\rho_j} \right]^{\frac{2\lambda}{1+\lambda}} \zeta(\lambda+1/2)^{\frac{1}{1+\lambda}}\nonumber\\ & \phantom{blahblahblah} < \frac{1}{\lambda} \left( \frac{\sqrt{2\pi}}{ \pi^{2-2\varLambda^{\ast}}(1-\varLambda^{\ast})\varLambda^{\ast}} \right)^{\lambda} \exp({\lambda\alpha^2}/{\varLambda^{\ast}}) \sum_{j\geqslant 1} b_j^{\frac{2\lambda}{1+\lambda}} \zeta(\lambda+1/2). \end{align} (4.28) For $$\lambda \in (1/2,1]$$, we have $$\varLambda ^{\ast }=\varLambda ^{\ast }(\lambda )=\frac{2\lambda -1}{4\lambda }\in (0,1/4]$$, and thus $$ 1<\frac{\sqrt{2\pi}}{\pi^{2}}\frac{16}{3}=\frac{\sqrt{2\pi}}{\pi^{2-2\varLambda^{\ast}(1/2)} (1-\varLambda^{\ast}(1))\varLambda^{\ast}(1) } \leqslant \frac{\sqrt{2\pi}}{ \pi^{2-2\varLambda^{\ast}(\lambda)}(1-\varLambda^{\ast}(\lambda))\varLambda^{\ast}(\lambda)}. $$ Hence, we have \begin{equation} \left(\frac{\sqrt{2\pi}}{ \pi^{2-2\varLambda^{\ast}}(1-\varLambda^{\ast})\varLambda^{\ast}}\right)^{\frac{\lambda}{1+\lambda}}< \left(\frac{\sqrt{2\pi}}{ \pi^{2-2\varLambda^{\ast}}(1-\varLambda^{\ast})\varLambda^{\ast}} \right)^{\lambda}. \end{equation} (4.29) Further, from $$(1/(2\lambda )+1/2)2^{\frac{1}{1+\lambda }} \leqslant 2^{\frac{\lambda }{1+\lambda }}/\lambda $$, noting $$2<\zeta (3/2)\leqslant \zeta (\lambda +1/2)$$ we have \begin{equation} \left(\frac{1}{2\lambda}+\frac{1}{2}\right)2^{\frac{1}{1+\lambda}} \zeta(\lambda+1/2)^{\frac{1}{1+\lambda}} \leqslant \frac{2^{\frac{\lambda}{1+\lambda}}}{\lambda} \zeta(\lambda+1/2)^{\frac{1}{1+\lambda}} \leqslant \frac{\zeta(\lambda+1/2)}{\lambda}. 
\end{equation} (4.30) We now show $$\frac{1}{1+\lambda }\widetilde{\alpha }_{\max }^2<\alpha ^2$$, where $$\widetilde{\alpha }_{\max }=1+\sup _{j\geqslant 1}\frac{\kappa }{\rho _j}$$. The assumption $$\kappa \sup _{j}\frac{1}{\rho _{j}} \geqslant \frac{\sqrt{1+\lambda }+1}{\lambda } = \frac{1}{\sqrt{1+\lambda }-1} $$ implies $$ \frac{ (1+\kappa \sup _{j}\{{1}/{\rho _{j}}\})^2 } {(\kappa \sup _{j}\{{1}/{\rho _{j}}\})^2} \leqslant 1+\lambda $$, and thus \begin{equation} \frac{1}{1+\lambda}\left(1+\kappa \sup_{j}\{{1}/{\rho_{j}}\}\right)^2 <\left(\varepsilon_{\mathrm{HS}}+\kappa \sup_{j}\{{1}/{\rho_{j}}\}\right)^2. \end{equation} (4.31) Hence, from the above together with (4.29) and (4.30) we conclude that (4.28) holds, which is the desired result. 5. Application to a wavelet stochastic model Cioica et al. (2012) considered a stochastic model in which the user can choose the smoothness freely. In this section, we consider the Gaussian case and show that the theory developed in Section 4 is applicable to this model for a wide range of smoothness. 5.1 Stochastic model For simplicity we assume $$D\,{\subset \mathbb{R}^{d}}$$ is a bounded convex polygonal domain. Consider a wavelet system $$(\varphi _{\xi })_{\xi \in \nabla }$$ that is a Riesz basis for the space $$L^2(D)$$. We explain the notation and outline the standard properties we assume as follows. The index $$\xi \in \nabla $$ typically encodes the scale, often denoted by $$|\xi |$$, as well as the spatial location and the type of the wavelet. Since our analysis does not rely on the choice of the type of wavelet, we often use the notation $$\xi =(\ell ,k)$$, and $$\nabla =\{(\ell ,k)\mid \ell \geqslant \ell _0,k\in \nabla _{\ell }\}$$, where $$\nabla _{\ell }$$ is some countable index set. The scale level ℓ of $$\varphi _\xi $$ is denoted by $$|\xi |=|(\ell ,k)|=\ell $$.
Furthermore, $$(\widetilde{\varphi }_{\xi })_{\xi \in \nabla }$$ denotes the dual wavelet basis, i.e., $$\langle{\varphi }_{\xi }, \widetilde{\varphi }_{\xi ^{\prime}} \rangle _{L^2(D)}=\delta _{\xi \xi ^{\prime}}$$, $$\xi ,\xi ^{\prime}\in \nabla $$. In the following, $$\alpha \lesssim \beta $$ means that $$\alpha $$ can be bounded by some constant times $$\beta $$ uniformly with respect to any parameters on which $$\alpha $$ and $$\beta $$ may depend. Further, $$\alpha \sim \beta $$ means that $$\alpha \lesssim \beta $$ and $$\beta \lesssim \alpha $$. We list the assumptions on the wavelets: (W1) The wavelets $$(\varphi _{\xi })_{\xi \in \nabla }$$ form a Riesz basis for $$L^2(D)$$. (W2) The cardinality of the index set $${\nabla _{\ell }}$$ satisfies $$\#{\nabla _{\ell }}=C_{\nabla } 2^{\ell d}$$ for some constant $$C_{\nabla }>0$$, where d is the spatial dimension of D. (W3) The wavelets are local. That is, the supports of $$\varphi _{{\ell ,k}}$$ are contained in balls of diameter $$\sim{2^{-\ell }}$$ and do not overlap too much in the following sense: there exists a constant M > 0, independent of ℓ, such that for each ℓ and any x ∈ D, \begin{equation} \#\{ k\in\nabla_{\ell} \mid \varphi_{\ell,k} (x)\neq 0 \} \leqslant M. \end{equation} (5.1) (W4) The wavelets satisfy the cancellation property \begin{equation*} |\langle v,\varphi_{\xi}\rangle_{L^2(D)}| \lesssim 2^{-|\xi|(\frac{d}{2}+\tilde{m})}|v|_{W^{\tilde{m},\infty}(\operatorname{supp}(\varphi_{\xi}))} \end{equation*} for $$|\xi |\geqslant \ell _0$$ with some parameter $$\tilde{m}\in{\mathbb{N}}$$, where $$|\cdot |_{W^{\tilde{m},\infty }}$$ denotes the usual Sobolev semi-norm. That is, the inner product is small when the function v is smooth on the support $$\operatorname{supp}(\varphi _{\xi })$$.
(W5) The wavelet basis induces characterisations of Besov spaces $$B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))$$ for $${1 \leqslant }{{\bar{\texttt{p}}}},{{\bar{\texttt{q}}}}<\infty $$ and all t with $$d\max \{1/{{\bar{\texttt{p}}}}-1,0\}<t<t_*$$ for some parameter $$t_*>0$$: \begin{equation} \left\| v \right\|_{B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))} := \left( \sum_{\ell=\ell_0}^{\infty} 2^{\ell\big( t+d\big(\frac12-\frac{1}{{{\bar{\texttt{p}}}}}\big) \big){{\bar{\texttt{q}}}}} \left( \sum_{k\in\nabla_{\ell}} |\langle v, \tilde{\varphi}_{\ell,k}\rangle_{L^2(D)}|^{{{\bar{\texttt{p}}}}} \right)^{\frac{{{\bar{\texttt{q}}}}}{{{\bar{\texttt{p}}}}}} \right)^{\frac1{{{\bar{\texttt{q}}}}}}. \end{equation} (5.2) The upper bound $$t_*$$ depends on the choice of wavelet basis. Since the values of t we consider are typically small, for simplicity we define the Besov norm as above. (W6) The wavelets satisfy \begin{equation} \sup_{x\in D}|\varphi_{\ell,k}(x)|=C_{\varphi} 2^{\frac{\beta_0 d}{2}\ell} \qquad \textrm{ with some }{\beta_0}\in\mathbb{R}_{+}, \end{equation} (5.3) for some constant $$C_{\varphi }>0$$. Typically we have $$\varphi _{\ell ,k}(x)\sim 2^{\frac{d}2\ell }\psi (2^{\ell }(x-x_{\ell ,k}))$$ for some bounded function $$\psi $$. In this case we have $$\beta _0=1$$. See Cioica et al. (2012, Section 2.1) and references therein for further details. See also Cohen (2003); DeVore (1998); Urban (2002). We now investigate a stochastic model expanded in the wavelet basis described above. Let $$\{Y_{\ell ,k}\}$$ be a collection of independent standard normal random variables on a suitable probability space $$(\varOmega ^{\prime},{\mathscr{F}}^{\prime},\mathbb{P}^{\prime})$$.
We assume the random field (1.2) is given with T of the form \begin{equation} T(x,{\boldsymbol{y}}^{\prime}) = \sum_{\ell=\ell_0}^\infty\sum_{k\in\nabla_{\ell}} Y_{\ell,k}({\boldsymbol{y}}^{\prime}) \sigma_{\ell} \varphi_{\ell,k}(x), \end{equation} (5.4) where \begin{equation} \sigma_{\ell}:=2^{-\frac{\beta_1d }2\ell}\textrm{ with }\beta_1>1. \end{equation} (5.5) Thanks to the decaying factor $$\sigma _{\ell }$$, in view of (W5) the series (5.4) converges $$\mathbb{P}^{\prime}$$-almost surely in $$L^2(D)$$: $$\mathbb{E}_{\mathbb{P}^{\prime}} \big( \sum _{\ell =\ell _0}^{\infty } \sum _{k\in \nabla _{\ell }} Y_{\ell ,k}({\boldsymbol{y}}^{\prime})^2 \sigma _{\ell }^2\big) =C_{\nabla }\sum _{\ell =\ell _0}^{\infty } 2^{-(\beta _1-1)d \ell } <\infty $$. Further, the factor $$\sigma _{\ell }$$ will be used to ensure that $$\{\sigma _{\ell } \varphi _{\ell ,k}\}$$ satisfies condition (2.7). To replace (1.2), we consider the following log-normal stochastic model: \begin{equation} a(x,{\boldsymbol{y}}^{\prime})=a_*(x) + {a_0(x)} \exp \left( \sum_{\ell=\ell_0}^\infty \sum_{k\in\nabla_{\ell}} Y_{\ell,k}({\boldsymbol{y}}^{\prime})\sigma_{\ell} \varphi_{\ell,k}(x)\right). \end{equation} (5.6) In the following, we argue that we can reorder $$\sigma _{\ell } \varphi _{\ell ,k}$$ lexicographically as $$\sigma _{j} \varphi _{j}$$ and regard it as $$\psi _j$$, while preserving the law. Throughout this section, we assume that the parameters $$\beta _0$$ and $$\beta _1$$ satisfy \begin{equation} {0}<{\beta_1} -{\beta_0}, \end{equation} (5.7) and that point evaluation $$\varphi _{\ell ,k}(x)$$ ((ℓ, k) ∈ ∇) is well defined for any x ∈ D. Under this assumption, reordering $$(Y_{\ell ,k}\sigma _{\ell } \varphi _{\ell ,k})$$ lexicographically does not change the law of (5.4) on $${\mathbb{R}}^{D}$$. To see this, from the Gaussianity it suffices to show that the covariance function $$\mathbb{E}_{\mathbb{P}^{\prime}}[T(\cdot )T(\cdot )]\colon D\times D\to{\mathbb{R}}$$ is invariant under the reordering.
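As a concrete instance of (5.4), the following sketch samples a truncated realisation of T on $$D=(0,1)$$ using $$L^2$$-normalised Haar wavelets, so that $$d=1$$, $$\beta_0=1$$ and $$M=1$$ in (W3). The truncation level L, the value of $$\beta_1$$, the grid and the omission of the scaling-function part are all illustrative simplifications, not choices prescribed by the paper.

```python
import math
import random

def haar(x):
    """Haar mother wavelet on [0, 1)."""
    if 0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1:
        return -1.0
    return 0.0

def sample_T(xs, L=8, beta1=3.5, ell0=0, seed=0):
    """Sample a truncation of (5.4) with Haar wavelets on D = (0, 1).

    phi_{l,k}(x) = 2^{l/2} haar(2^l x - k), k = 0, ..., 2^l - 1 (so beta0 = 1),
    sigma_l = 2^{-beta1 * l / 2} as in (5.5) with d = 1."""
    rng = random.Random(seed)
    # one standard normal Y_{l,k} per index, cf. the i.i.d. assumption on {Y_{l,k}}
    Y = {(l, k): rng.gauss(0.0, 1.0)
         for l in range(ell0, L + 1) for k in range(2 ** l)}
    out = []
    for x in xs:
        t = 0.0
        for l in range(ell0, L + 1):
            sigma = 2.0 ** (-beta1 * l / 2)
            # locality (W3): exactly one Haar wavelet per level is nonzero at x
            k = min(int(2 ** l * x), 2 ** l - 1)
            t += Y[(l, k)] * sigma * 2.0 ** (l / 2) * haar(2 ** l * x - k)
        out.append(t)
    return out

xs = [i / 200 for i in range(200)]
Ts = sample_T(xs)
assert all(math.isfinite(t) for t in Ts)
```

Because at most one wavelet per level is active at each x, a point evaluation of the level-L truncation costs $$\mathcal{O}(L)$$ rather than $$\mathcal{O}(2^L)$$, which is exactly the locality that Assumption 2.1 with product weights exploits.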
Fix x ∈ D arbitrarily. For any L, L′ (L>L′), from the independence of $$\{Y_{\ell ,k}\}$$ we have \begin{equation} \mathbb{E}_{\mathbb{P}^{\prime}} \left( \sum_{\ell=\ell_0}^{L} \sum_{k\in\nabla_{\ell}} Y_{\ell,k}({\boldsymbol{y}}^{\prime}) \sigma_{\ell} \varphi_{\ell,k}(x) - \sum_{\ell=\ell_0}^{L^{\prime}} \sum_{k\in\nabla_{\ell}} Y_{\ell,k}({\boldsymbol{y}}^{\prime}) \sigma_{\ell} \varphi_{\ell,k}(x) \right)^2= \sum_{\ell=L^{\prime}+1}^L \sum_{k\in\nabla_{\ell}} {\sigma_{\ell}^2} \varphi_{\ell,k}^2(x) \end{equation} (5.8) \begin{equation} \leqslant{C_{\varphi}^2M \sum_{\ell=L^{\prime}+1}^L 2^{-(\beta_1-\beta_0 )d\ell}}\to 0 \quad\textrm{ as }L^{\prime},L\to\infty. \end{equation} (5.9) Hence, the sequence $$\big \{\sum _{\ell =\ell _0}^{L} \sum _{k\in \nabla _{\ell }} Y_{\ell ,k}({\boldsymbol{y}}^{\prime}) \sigma _{\ell } \varphi _{\ell ,k}(x)\big \}_L$$ is convergent in $$L^2(\varOmega ^{\prime},\mathbb{P}^{\prime})$$. The continuity of the inner product $$\mathbb{E}_{\mathbb{P}^{\prime}}[\cdot ,\cdot ]$$ on $$L^2(\varOmega ^{\prime})$$ in each variable yields \begin{equation} \mathbb{E}_{\mathbb{P}^{\prime}}[T(x_1)T(x_2)] = \sum_{\ell=\ell_0}^{\infty} \sum_{k\in\nabla_{\ell}} \sum_{\ell^{\prime}=\ell_0}^{\infty} \sum_{k^{\prime}\in\nabla_{\ell^{\prime}}} \mathbb{E}_{\mathbb{P}^{\prime}}[ Y_{\ell,k}({\boldsymbol{y}}^{\prime}) \sigma_{\ell} \varphi_{\ell,k}(x_1) Y_{\ell^{\prime},k^{\prime}}({\boldsymbol{y}}^{\prime}) \sigma_{\ell^{\prime}} \varphi_{\ell^{\prime},k^{\prime}}(x_2) ] \end{equation} (5.10) \begin{equation} =\sum_{\ell=\ell_0}^{\infty} \sum_{k\in\nabla_{\ell}} \sigma_{\ell}^2 \varphi_{\ell,k}(x_1)\varphi_{\ell,k}(x_2) \qquad \text{for any {$x_1$}, {$x_2\in D$}.} \end{equation} (5.11) But we have $$\sum _{\ell =\ell _0}^{\infty } \sum _{k\in \nabla _{\ell }} \sigma _{\ell }^2 |\varphi _{\ell ,k}(x_1)\varphi _{\ell ,k}(x_2)| \leqslant{C_{\varphi }^2M \sum _{\ell =\ell _0}^{\infty } 2^{-(\beta _1-\beta _0 )d\ell }}<\infty $$, so the series (5.11) converges absolutely and may be rearranged freely.
Hence $$ \mathbb{E}_{\mathbb{P}^{\prime}}[T(x_1)T(x_2)]=\sum_{j\geqslant1}\sigma_{j}^2\varphi_{j}(x_1)\varphi_{j}(x_2), \qquad x_1,\,x_2\in D. $$ Following a similar discussion, we see that the series $$\sum _{j\geqslant 1}\sigma _{j} y_j\varphi _{j}(x)$$ converges in $$L^2(\varOmega ^{\prime})$$ for each x ∈ D and has the covariance function $$\sum _{\ell =\ell _0}^{\infty } \sum _{k\in \nabla _{\ell }} \sigma _{\ell }^2 \varphi _{\ell ,k}(x_1)\varphi _{\ell ,k}(x_2)$$. Hence the law on $${\mathbb{R}}^{D}$$ is the same. Thus, abusing the notation slightly we write T(⋅, y) := T(⋅, y′), $$y_{\ell ,k}:=Y_{\ell ,k}({\boldsymbol{y}}^{\prime})$$, $$\varOmega ={\mathbb{R}}^{{\mathbb{N}}}:=\varOmega ^{\prime}$$, $${\mathscr{F}}:={\mathscr{F}}^{\prime}$$, $$\mathbb{P}_{Y}:=\mathbb{P}^{\prime}$$ and $$\mathbb{E}[\cdot ]:=\mathbb{E}_{\mathbb{P}^{\prime}}[\cdot ]$$. Remark 5.1 Our theory at present is restricted to Gaussian random fields with covariance functions of the form (5.11). Although there are attempts to represent a given Gaussian random field with wavelet-like functions (see Bachmayr et al., 2017b and references therein), unfortunately it does not seem to be the case that arbitrary covariance functions, in particular Matérn covariance functions, are representable as in (5.11) with wavelets having the properties (W1)–(W6). Next we discuss the applicability of the theory developed in Section 4 to the wavelet stochastic model above. We need to check Assumption 2.1. Take $$\theta \in (0,\frac{d}2({\beta _1}-{\beta _0}))$$ and for $$\xi =(\ell ,k)$$ let \begin{equation} \rho_{\xi}:=c2^{\theta |\xi|}=c2^{\theta \ell} \end{equation} (5.12) with some constant $$0<c<{\ln 2}\big (M{C_{\varphi }}\sum _{\ell =\ell _0}^\infty 2^{\ell (\theta -\frac{d}2({\beta _1}-{\beta _0}))}\big )^{-1}$$.
Then by virtue of the locality property (5.1) we have (2.7): \begin{equation} \sup_{x\in D}\sum_{\xi} \rho_{\xi} |\sigma_{\xi}\varphi_{\xi}(x)| \leqslant \sum_{\ell=\ell_0}^\infty \rho_{\ell} \sup_{x\in D} \sum_{k\in\nabla_{\ell}} \left|2^{-\frac{\beta_1d\ell}2}\varphi_{\ell,k}(x)\right| \leqslant cM{C_{\varphi}}\sum_{\ell=\ell_0}^\infty 2^{\theta \ell} 2^{-\frac{\beta_1d\ell}2} 2^{\frac{\beta_0 d}{2}\ell}<\ln2. \end{equation} (5.13) Further, we note that after reordering, for sufficiently large j, we have \begin{equation} \sup_{x\in D}|\sigma_{j}\varphi_{j}(x)| \sim j^{-\frac12({\beta_1} -{\beta_0})}. \end{equation} (5.14) To see this, first recall that there are $$\mathcal{O}(2^{{\ell d}})$$ wavelets at level ℓ. Thus, for an arbitrary but sufficiently large j we have $$ 2^{\ell _jd}\lesssim j \lesssim 2^{(\ell _j+1)d} $$ for some $$\ell _j\geqslant \ell _0$$, which is equivalent to $$ 2^{-(\ell_j+1)d}\lesssim j^{-1} \lesssim 2^{-\ell_jd}.$$ Now, let $$\xi _j\in \nabla _{\ell _j}$$ be the index corresponding to j. Since $$|\xi _j| = \ell _j$$, noting $$\beta _1-\beta _0>0$$ we have \begin{equation} \sup_{x\in D} |\sigma_{j} \varphi_{j} (x)| = \sup_{x\in D} |\sigma_{\ell_j} \varphi_{\xi_j} (x)| {=} C_{\varphi} 2^{-\frac{\beta_1 d}2 \ell_j} 2^{\frac{\beta_0 d}{2} \ell_j} \lesssim C_{\varphi}{2^{\frac{d}2\beta^{\ast}}} j^{-\frac12({\beta_1} -{\beta_0} )}\quad{\textrm{for any }\beta^{\ast}\geqslant \beta_1-\beta_0.} \end{equation} (5.15) The opposite direction can be derived as \begin{equation} j^{-\frac12({\beta_1} -{\beta_0} )} \lesssim 2^{{-\ell_j d(\frac12(\beta_1-\beta_0))}} =\frac{1}{C_{\varphi}} \sup_{x\in D}|\sigma_{j} \varphi_{j}(x)|. \end{equation} (5.16) Similarly, we have \begin{equation} \rho_{j}\sim j^{\frac{\theta}{d}}.
\end{equation} (5.17) Thus, to have $$\sum _{j\geqslant 1} \frac 1{\rho _j}<\infty $$, the weakest summability condition on $$(1/\rho _j)$$ for Assumption 2.1 to be satisfied, it is necessary (and sufficient) to have $$\theta> d$$. The following theorem summarises the discussion above. Theorem 5.2 Suppose the random coefficient (1.2) is given by T as in (5.4) with $$(\varphi _{\ell ,k})$$ that satisfies (5.3), and non-negative numbers $$(\sigma _{\ell })$$ that satisfy (5.5). Let $$(\rho _{\xi })$$ be defined by (5.12). Further, assume $${\beta _0}$$ and $$\beta _1$$ satisfy \begin{equation} \frac{2}{q} <{\beta_1} -{\beta_0 } \end{equation} (5.18) for some q ∈ (0, 1]. Then the reordered system $$(\sigma _{j}\varphi _{j})$$ with the reordered $$(\rho _j)$$ satisfies Assumption 2.1, and under the same conditions on $$w_j(t)$$, $$\alpha _j$$ and $$\varsigma _j$$ as in Theorem 4.4 we have the QMC error bound (4.17): \begin{equation*} \sqrt{ \mathbb{E}^{\boldsymbol{\varDelta}} \left| I_s(F) - \mathcal{Q}_{s,n}(\boldsymbol{\varDelta};F) \right|^2 } = \begin{cases} \mathcal{O}\!(n^{-(1-\delta)}) &\textrm{when } 0<q\leqslant\frac23,\\ \mathcal{O}\!(n^{-\frac{2-q}{2q}}) &\textrm{when } \frac23<q\leqslant1, \end{cases} \end{equation*} where $$\delta \in (0,1/2]$$ is arbitrary, and the implied constants are as in Theorem 4.4. Proof. Take $$\theta \in (\frac{d}q,\frac{d}2(\beta _1-\beta _0))$$, and define $$(\rho _{\xi })$$ as in (5.12), reorder the components lexicographically and denote the reordered $$(\rho _{\xi })$$ by $$(\rho _j)$$. Then we have (2.8), \begin{equation} \sum_{j\geqslant 1} \left( \frac1{\rho_j}\right)^{q} \lesssim \sum_{j\geqslant 1} \left( \frac1{j}\right)^{ \frac{{q}\theta}d }<\infty. \end{equation} (5.19) Further, from $${\theta -\frac{\beta _1d}{2}+\frac{\beta _0 d}{2}}<0$$ we have (5.13), and thus (2.7) holds. Hence, from the discussion in this section, Assumption 2.1 is satisfied, and thus in view of Theorem 4.4 we have (4.17).
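The two regimes in the error bound of Theorem 5.2 can be summarised by a small helper that returns the exponent r for which the root-mean-square error is $$\mathcal{O}(n^{-r})$$; this is only a restatement of the theorem's case split, with $$\delta$$ an illustrative user choice.

```python
def qmc_rate(q, delta=0.01):
    """Error exponent r from Theorem 5.2: RMS error is O(n^{-r}).

    q is the summability exponent in (5.18); delta in (0, 1/2] is the
    arbitrary slack allowed in the regime 0 < q <= 2/3."""
    assert 0 < q <= 1 and 0 < delta <= 0.5
    if q <= 2 / 3:
        return 1 - delta
    return (2 - q) / (2 * q)

assert abs(qmc_rate(0.5) - 0.99) < 1e-12          # rate approx 1 for small q
assert qmc_rate(1.0) == 0.5                        # q = 1 gives the rate 1/2
assert abs(qmc_rate(2 / 3 + 1e-9) - 1.0) < 1e-6   # the regimes match at q = 2/3
```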
5.2 Smoothness of the stochastic model 5.2.1 Hölder smoothness of the realisations Random fields T whose realisations are not smooth are often of interest. In this section, we see that the stochastic model we consider, (5.6), allows reasonably rough random fields (in the sense of Hölder smoothness) for d = 1, 2. The result is shown via Sobolev embedding theorems. We provide a necessary and sufficient condition to have specified Sobolev smoothness (Theorem 5.3). Recall that embedding results are in general optimal (see, for example, Adams & Fournier, 2003, 4.12, 4.40–4.44), and in this sense we have a sharp condition for our model to have Hölder smoothness. A building block is a Besov characterisation of the realisations, which is essentially due to Cioica et al. (2012, Theorem 6). Here we define $$s:=s(L):=\sum _{\ell =\ell _0}^{L}\#(\nabla _{\ell })$$, that is, the truncation is considered in terms of the level L. Theorem 5.3 (Cioica et al., 2012, Theorem 6). Let $${{\bar{\texttt{p}}}},{{\bar{\texttt{q}}}}\in [1,\infty )$$, and $$t\in (d\max \{1/{{\bar{\texttt{p}}}}-1,0\},t_*)$$, where $$t_*$$ is the parameter in (W5). Then \begin{equation} t< d\left( \frac{\beta_1 - 1}2 \right) \end{equation} (5.20) if and only if $$T\in B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))$$ a.s. Further, if (5.20) is satisfied, then the stochastic model (5.6) satisfies $$\mathbb{E}[\left \| T^{s(L)} \right \|_{B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))}^{{{\bar{\texttt{q}}}}}]\leqslant \mathbb{E}[\left \| T \right \|_{B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))}^{{{\bar{\texttt{q}}}}}]<\infty $$ for all $$L\in{\mathbb{N}}$$. Proof. First, from the proof of Cioica et al. (2012, Theorem 6), we see that $$T\in B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))$$ a.s.
is equivalent to $$ \sum_{\ell=\ell_0}^{\infty} 2^{\ell(t+d(1/2-1/{{{\bar{\texttt{p}}}}}) ){{\bar{\texttt{q}}}}} \sigma_{\ell}^{{{\bar{\texttt{q}}}}} ( \# \nabla_{\ell} )^{{{\bar{\texttt{q}}}}/{{\bar{\texttt{p}}}}} \sim \sum_{\ell=\ell_0}^{\infty} 2^{\ell{{\bar{\texttt{q}}}} (t-\frac{d}2(\beta_1-1) )}<\infty, $$ which holds from the assumption $$t< d\big ( \frac{\beta _1 - 1}2 \big )$$. Similarly, from the proof of Cioica et al. (2012, Theorem 6) we have $$ \mathbb{E}[\left\| T \right\|_{B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))}^{{{\bar{\texttt{q}}}}}]\,\,{\lesssim} \sum_{\ell=\ell_0}^{\infty} 2^{\ell(t+d(1/2-1/{{{\bar{\texttt{p}}}}}) ){{\bar{\texttt{q}}}}} \sigma_{\ell}^{{{\bar{\texttt{q}}}}} ( \# \nabla_{\ell} )^{{{\bar{\texttt{q}}}}/{{\bar{\texttt{p}}}}}<\infty. $$ Finally, from (W5) we have $$\mathbb{E}[\left \| T^{s(L)} \right \|_{B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))}^{{{\bar{\texttt{q}}}}}] {= \sum _{\ell =\ell _0}^{L} 2^{\ell (t+d(1/2-1/{{{\bar{\texttt{p}}}}}) ){{\bar{\texttt{q}}}}} \sigma _{\ell }^{{{\bar{\texttt{q}}}}} \mathbb{E}[( \sum _{k\in \nabla _{\ell }}|Y_{\ell ,k}|^{{\bar{\texttt{p}}}})^{{{\bar{\texttt{q}}}}/{{\bar{\texttt{p}}}}} ]} \leqslant \mathbb{E}[\left \| T \right \|_{B^t_{{{\bar{\texttt{q}}}}}(L_{{{\bar{\texttt{p}}}}}(D))}^{{{\bar{\texttt{q}}}}}]$$, completing the proof. To establish Hölder smoothness we employ embedding results. To invoke them, we first establish that the realisations are continuous: we need measurability while keeping the law of T on $${\mathbb{R}}^{D}$$ unchanged. The Hölder norm involves taking a supremum over the uncountable set D, and thus it is not immediately clear whether the resulting function $$\varOmega \ni \boldsymbol{y}\mapsto \left \| T(\cdot ,\boldsymbol{y}) \right \|_{C^{t_1}(\overline{D})}\in{\mathbb{R}}$$, where $$t_1\in (0,1]$$ is a Hölder exponent, is an $${\mathbb{R}}$$-valued random variable. We will see that the continuity of the realisations preserves the measurability.
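The dichotomy in Theorem 5.3 reduces to a geometric series: by (5.5) and (W2), the Besov-norm series has ratio $$2^{{{\bar{\texttt{q}}}}(t-\frac{d}{2}(\beta_1-1))}$$, which is smaller than 1 exactly under condition (5.20). A minimal check, with illustrative parameter values:

```python
def besov_ratio(t, d, beta1, qbar):
    """Ratio of the geometric series arising in the proof of Theorem 5.3:
    2^{qbar * (t - d*(beta1 - 1)/2)}; the series converges iff this is < 1."""
    return 2.0 ** (qbar * (t - d * (beta1 - 1) / 2))

d, beta1, qbar = 2, 3.5, 2.0                 # illustrative values
threshold = d * (beta1 - 1) / 2              # right-hand side of (5.20); here 2.5
assert besov_ratio(threshold - 0.1, d, beta1, qbar) < 1  # t below threshold: converges
assert besov_ratio(threshold, d, beta1, qbar) == 1       # at the threshold: diverges
assert besov_ratio(threshold + 0.1, d, beta1, qbar) > 1
```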
Sobolev embeddings are realised by choosing a suitable representative, obtained by changing the values of a function on sets of measure zero in D. This change could affect the law on $${\mathbb{R}}^{D}$$, since the law is determined by the laws of arbitrary finitely many random variables $$(T(x_1),\dotsc ,T(x_m))$$ ($$\{x_i\}_{i=1,\dotsc ,m}\subset D$$) on $${\mathbb{R}}^{m}$$. To avoid this, we establish the existence of a continuous modification, so that from the outset we work with the continuous element of the Besov equivalence class, which respects the law of T. We want realisations of T to have continuous paths. Now, suppose that there exist positive constants $$\iota _1$$, $$C_{\mathrm{KT}}$$ and $$\iota _2(>d)$$ satisfying \begin{equation} \mathbb{E}[|T(x_1)-T(x_2)|^{\iota_1}]\leqslant C_{\mathrm{KT}}\left\| x_1 - x_2 \right\|_2^{\iota_2} \qquad \textrm{ for any }x_1,x_2\in D. \end{equation} (5.21) Then by virtue of Kolmogorov–Totoki’s theorem (Kunita, 2004, Theorem 4.1), T has a continuous modification. Further, the continuous modification is uniformly continuous on D and can be extended to the closure $$\overline{D}$$. Thus, we want T to satisfy (5.21). Hölder continuity of $$(\varphi _{\ell ,k})$$ is sufficient for (5.21) to hold. Proposition 5.4 Suppose that $$(\sigma _{\ell })$$ satisfies (5.5). Further, suppose that for each (ℓ,k) ∈ ∇, the function $$\varphi _{\ell ,k}$$ is $$t_0$$-Hölder continuous on D for some $$t_0\in (0,1]$$, with a Hölder constant that can be taken uniformly in (ℓ, k). Then (5.21) holds, in particular, T has a modification that is uniformly continuous on D and can be extended to the closure $$\overline{D}$$. Proof. It suffices to show that (5.21) holds. Fix $$x_1,x_2\in D$$ arbitrarily.
First note that \begin{equation} {\sigma_*^2}:=\mathbb{E}[|T(x_1)-T(x_2)|^{2}] =\sum_{\ell=\ell_0}^{\infty} \sum_{k\in\nabla_{\ell}} \sigma_{\ell}^2( \varphi_{\ell,k}(x_1) - \varphi_{\ell,k}(x_2))^2 \end{equation} (5.22) \begin{equation} {\leqslant C} \left\| x_1 - x_2 \right\|_2^{2t_0} \sum_{\ell=\ell_0}^{\infty} \sum_{k\in\nabla_{\ell}} \sigma_{\ell}^2<\infty, \end{equation} (5.23) where C is the square of the $$t_0$$-Hölder constant (taken uniform in (ℓ, k)). Then, since $$T(x_1)-T(x_2)\sim \mathcal{N}(0,{\sigma _*^2})$$, with $$X_{\mathrm{std}}\sim \mathcal{N}(0,1)$$ we observe that \begin{equation} \mathbb{E}[|T(x_1)-T(x_2)|^{2m}] =\mathbb{E}[|X_{\mathrm{std}} \sigma_*|^{2m}] =\sigma_*^{2m} \mathbb{E}[|X_{\mathrm{std}}|^{2m}] \end{equation} (5.24) \begin{equation} {\leqslant C^m}\left\| x_1 - x_2 \right\|_2^{2t_0 m} \left( \sum_{\ell=\ell_0}^{\infty} \sum_{k\in\nabla_{\ell}} \sigma_{\ell}^2 \right)^{{m}} \mathbb{E}[|X_{\mathrm{std}}|^{2m}]\,\,\textrm{for any }m\in{\mathbb{N}}. \end{equation} (5.25) Taking $$m>\frac{d}{2t_0}$$, we have (5.21) with $$\iota _1:=2m$$, $$C_{\mathrm{KT}}:= C^m \big ( \sum _{\ell =\ell _0}^{\infty } \sum _{k\in \nabla _{\ell }} \sigma _{\ell }^2 \big )^{m} \mathbb{E}[|X_{\mathrm{std}}|^{2m}]$$ and $$\iota _2:=2t_0m(>d)$$, and thus the statement follows. In the following, we assume $$\varphi _{\ell ,k}$$ is $$t_0$$-Hölder continuous on D for some $$t_0\in (0,1]$$. Note that under this assumption, we may take $$\varphi _{\ell ,k}$$ to be continuous on $$\overline{D}$$. Using the fact that $$T(\cdot ,\boldsymbol{y})\in B^t_{2}(L_{2}(D))=H^{t}(D)$$ a.s., we now establish the expected Hölder smoothness of the random coefficient a. This implies spatial regularity of the solution u, given a suitable regularity of D and f. In turn, for example, the convergence rate of the finite element method using piecewise linear functions is readily obtained, under a certain condition on the output functional $$\mathcal{G}$$. See Teckentrup et al. (2013, Lemma 3.3) or Graham et al.
(2015, Theorem 6). First, we argue that to analyse the Hölder smoothness of the realisations of a, without loss of generality we may assume $$a_*\equiv 0$$ and $$a_0\equiv 1$$. To see this, suppose $$a_*$$, $$a_0$$ in (5.6) satisfy $$a_*,a_0\in C^{t_1}(\overline{D})$$ for some $$t_1\in (0,1]$$. By virtue of \begin{equation} |\textit{e}^{a} - \textit{e}^{b}|=\bigg|\int_{a}^{b}\textit{e}^{r}{\,{\textrm{d}}r} \bigg| \leqslant \max\{\textit{e}^{a},\textit{e}^{b}\}|b-a|\leqslant (\textit{e}^{a}+\textit{e}^{b})|b-a| \quad \textrm{for all}\,\,a,b\in{\mathbb{R}}, \end{equation} (5.26) for any $$x_0,x_1,x_2\in \overline{D}$$ ($$x_1\neq x_2$$) we have \begin{equation} {\big|\textit{e}^{T(x_0)}\big|+\frac{\big|\textit{e}^{T(x_1)}-\textit{e}^{T(x_2)}\big|}{\left\| x_1-x_2 \right\|_2^{t_1}} \leqslant \left(\sup_{x\in\overline{D}}\big|\textit{e}^{T(x)}\big|\right) \left(1+2 \frac{|T(x_1) - T(x_2)|}{\left\| x_1-x_2 \right\|_2^{t_1}} \right).} \end{equation} (5.27) Noting that $$ \| a_0 \textit{e}^{T} \|_{C^{t_1}(\overline{D})}\leqslant C_{t_1} \left \| a_0 \right \|_{C^{t_1}(\overline{D})} \| \textit{e}^{T} \|_{C^{t_1}(\overline{D})} $$ (see, for example, Gilbarg & Trudinger, 1983, p. 53) we have \begin{equation} \left\| a \right\|_{C^{t_1}(\overline{D})} \leqslant{ \left\| a_* \right\|_{C^{t_1}(\overline{D})} + C_{t_1}\left\| a_0 \right\|_{C^{t_1}(\overline{D})} \left(\sup_{x\in\overline{D}}|\textit{e}^{T(x)}|\right)} \left(1+2 \left\| T \right\|_{C^{t_1}(\overline{D})} \right). \end{equation} (5.28) Thus, given $$a_*,a_0\in C^{t_1}(\overline{D})$$, it suffices to show $$(\sup _{x\in \overline{D}}|\textit{e}^{T(x)}|) (1+2 \left \| T \right \|_{C^{t_1}(\overline{D})})<\infty $$ for the Hölder smoothness of the realisations of a. Therefore, in the rest of this subsection, for simplicity we assume $$a_*\equiv 0$$ and $$a_0\equiv 1$$.
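The proof of Proposition 5.4 rests on the Gaussian even moments $$\mathbb{E}|X_{\mathrm{std}}|^{2m}$$ and on choosing an integer $$m>\frac{d}{2t_0}$$ so that $$\iota_2=2t_0m>d$$. Both ingredients can be checked numerically; the sketch below uses the standard closed form $$\mathbb{E}|X_{\mathrm{std}}|^{p}=2^{p/2}\varGamma((p+1)/2)/\sqrt{\pi}$$, which for even p = 2m equals the double factorial $$(2m-1)!!$$ (a standard fact, not stated in the paper), and illustrative values of d and $$t_0$$.

```python
import math

def abs_moment(p):
    """E|X|^p for X ~ N(0, 1): 2^{p/2} * Gamma((p+1)/2) / sqrt(pi)."""
    return 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)

def double_factorial_odd(n):
    """(2m - 1)!! for odd n = 2m - 1."""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

# E|X_std|^{2m} = (2m - 1)!!, the moment entering (5.24)-(5.25)
for m in range(1, 8):
    assert abs(abs_moment(2 * m) / double_factorial_odd(2 * m - 1) - 1) < 1e-12

def smallest_m(d, t0):
    """Smallest integer m with 2 * t0 * m > d, as required in the proof."""
    m = math.floor(d / (2 * t0)) + 1
    assert 2 * t0 * m > d  # so iota_2 = 2*t0*m exceeds d (Kolmogorov-Totoki)
    return m

assert smallest_m(2, 0.5) == 3  # d/(2 t0) = 2, so m = 3 and iota_2 = 3 > 2
```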
In order to invoke embedding results we assume $$t_*$$ satisfies $$\frac{d}2<\lfloor t_*\rfloor $$, and that we can take $$t\in (0,\frac{d}2(\beta _1-1))$$ such that $$\frac{d}2<\lfloor t\rfloor $$. For the latter to hold, taking $$\beta _1\geqslant 3$$, implying $$\frac{d}2<\lfloor \frac{d}2(\beta _1-1)\rfloor $$, is sufficient, which is always satisfied for the presented QMC theory to be applicable. See Section 5.2.2. Now, take $$t_1\in (0,1]\cap (0,\lfloor t\rfloor -\frac{d}2]$$. Then from $$B^t_{2}(L_{2}(D))=H^{t}(D)$$ and the Sobolev embedding (for example, Adams & Fournier, 2003, Theorem 4.12) we have \begin{equation} \left\| a \right\|_{C^{t_1}(\overline{D})}\lesssim \left(\sup_{x\in\overline{D}}|a(x)|\right) \left(1+2 \left\| T \right\|_{B^t_{2}(L_{2}(D))} \right) \end{equation} (5.29) Similarly, we have $$\left \| a^{s} \right \|_{C^{t_1}(\overline{D})} \lesssim (\sup _{x\in \overline{D}}|a^s(x)|)(1+2 \left \| T^s \right \|_{B^t_{2}(L_{2}(D))} )$$. We want to take the expectation of $$\left \| a \right \|_{C^{t_1}(\overline{D})}$$. To do this we establish the $${\mathscr{F}}/\mathcal{B}({\mathbb{R}})$$-measurability of $$\boldsymbol{y}\mapsto \left \| a(\cdot ,\boldsymbol{y}) \right \|_{C^{t_1}(\overline{D})}$$. Taking continuous modifications of T if necessary, we may assume paths of a are continuous on $$\overline{D}$$. Then from the continuity of the mapping $$ \{(x_1,x_2)\in\overline{D}\times\overline{D}\mid x_1\neq x_2 \}\ni (x_1,x_2)\mapsto \frac{|a(x_1)-a(x_2)|}{\left\| x_1-x_2 \right\|_2^{t_1}}\in{\mathbb{R}}, $$ with a countable set G that is dense in $$\{(x_1,x_2)\in \overline{D}\times \overline{D}\mid x_1\neq x_2 \}\subset{\mathbb{R}}^d\times{\mathbb{R}}^d$$ we have \begin{equation} \sup_{x_1,x_2\in \overline{D},\,x_1\neq x_2} \frac{|a(x_1)-a(x_2)|}{\left\| x_1-x_2 \right\|_2^{t_1}} = \sup_{(x_1,x_2)\in G} \frac{|a(x_1)-a(x_2)|}{\left\| x_1-x_2 \right\|_2^{t_1}}. 
\end{equation} (5.30) Thus, $$\boldsymbol{y}\mapsto \left \| a(\cdot ,\boldsymbol{y}) \right \|_{C^{t_1}(\overline{D})}$$, and by the same argument, $$\boldsymbol{y}\mapsto \left \| a^s(\cdot ,\boldsymbol{y}) \right \|_{C^{t_1}(\overline{D})}$$, are $$\mathcal{B}({\mathbb{R}}^{\mathbb{N}})/\mathcal{B}(\overline{{\mathbb{R}}})$$-measurable, where $$\overline{{\mathbb{R}}}:={\mathbb{R}}\cup \{-\infty \}\cup \{\infty \}$$. From $$\mathbb{E}[\left \| T^s \right \|_{C(\overline{D})}]\lesssim \mathbb{E}[\left \| T^s \right \|_{ B^t_{2}(L_{2}(D)) }]\leqslant \mathbb{E}[\left \| T \right \|_{ B^t_{2}(L_{2}(D)) }] \lesssim{(\sum _{\ell =\ell _0}^{\infty } 2^{\ell (2t-{d}(\beta _1-1) )} )^{1/2}}<\infty $$ independently of s, and $$\mathbb{E}[\left \| T \right \|_{C(\overline{D})}]\lesssim{(\sum _{\ell =\ell _0}^{\infty } 2^{\ell (2t-d(\beta _1-1) )} )^{1/2}}<\infty $$, following the discussion by Charrier (2012, Proof of Proposition 3.10) utilising Fernique’s theorem, there exists a constant $$M_p>0$$ independent of s such that \begin{align} \max\Big\{\mathbb{E}[\exp(p\|{T^s(\cdot,\boldsymbol{y})}\|_{C(\overline{D})})] , \mathbb{E}[\exp(p\|{T(\cdot,\boldsymbol{y})}\|_{C(\overline{D})})] \Big\}<M_{p}, \end{align} (5.31) for any $$p\in (0,\infty )$$. Together with $$\sup _{x\in \overline{D}}|a(x)|\leqslant \exp (\sup _{x\in \overline{D}}|T(x)|)$$ we have $$ \max\left\{\mathbb{E}[(\sup_{x\in\overline{D}}|a^s(x)|)^{ 2p }], \mathbb{E}[(\sup_{x\in\overline{D}}|a(x)|)^{ 2p }]\right\}<M_{2p} \quad \text{for any {$p\in(0,\infty)$}.}$$ Hence, from (5.29) we conclude that \begin{equation} \mathbb{E}[\left\| a \right\|_{C^{t_1}(\overline{D})}^{ p }] \leqslant \max\{1,2^{p-1/2}\}\sqrt{\mathbb{E}\left[ \left(\sup_{x\in\overline{D}}|a(x)|\right)^{ 2p } \right]} \sqrt{1+4^{ p } \mathbb{E}\left[ \left\| T \right\|_{B^t_{2}(L_{2}(D))}^{2p} \right]} <\infty.
\end{equation} (5.32) Similarly, we have $$ \mathbb{E}[\left\| a^s \right\|_{C^{t_1}(\overline{D})}^{ p }] \leqslant \max\{1,2^{p-1/2}\}\sqrt{\mathbb{E}\left[ \left(\sup_{x\in\overline{D}}|a^s(x)|\right)^{ 2p } \right]} \sqrt{1+4^{ p } \mathbb{E}\left[ \left\| T^s \right\|_{B^t_{2}(L_{2}(D))}^{2p} \right]}<\infty, $$ where the right-hand side can be bounded independently of s.

5.2.2 On the smoothness of the realisations our theory can treat

We now discuss the smoothness of the realisations that the currently developed theory permits. Under the conditions imposed on the basis functions, e.g., the summability conditions, random fields with smooth realisations are easily within the scope of the QMC theory applied to PDEs; what is of interest here is the capability of treating reasonably rough random fields. Thus, we are interested in the smallest $$\beta _1$$ our theory allows us to take: given $$\beta _0$$, in view of Theorem 5.3, the smaller the decay rate $$\beta _1$$ of $$\sigma _\ell $$, the rougher the realisations of (5.4). Typically, $$L^2$$ wavelet Riesz bases have growth rate $$\beta _0=1$$. Then the condition $$2 < \beta _1 - \beta _0 $$, the weakest condition on $$\beta _1$$ in Theorem 5.2, is equivalent to \begin{equation} \beta_1=3+{\frac{2}{d}}\varepsilon\quad\textrm{ for some }\varepsilon>0, \end{equation} (A1) where the factor $$\frac{2}{d}$$ is introduced to simplify the notation in the following discussion. In what follows we let $$\beta _0=1$$, take $$\beta _1$$ as in (A1) and discuss the smoothness of (5.4) achieved by taking small $$\varepsilon>0$$, i.e., the smallest $$\beta _1$$ possible. Our discussion is based on Sobolev embedding results. We first note that from $$B^t_{2}(L^{2}(D))=H^{t}(D)$$, in view of Theorem 5.3, $$T(\cdot , \boldsymbol{y})\in H^{t}(D)$$ a.s. if and only if condition (5.20) holds. In applications, d = 1, 2, 3 are of interest. We recall the following embedding results; see, for example, Adams & Fournier (2003, p.
85). For d = 1, 2 and 3 respectively, with $$\beta _1=3+{{2\varepsilon }/d}$$ the condition (5.20) reads $$t<1 + \varepsilon $$, $$t<2 + \varepsilon $$ and $$t<3 + \varepsilon $$. Table 1 summarises condition (5.20) under (A1).

Table 1. For $$\beta _0=1$$, the upper bound on the exponent t for realisations of T to have $$H^{t}$$-smoothness is $$t<\frac{d}2(\beta _1-1)$$. Column 2 shows this upper bound for each spatial dimension d. Column 3 shows the smallest bound on t allowed by the presented QMC theory, i.e., the case $$\beta _1=3+2\varepsilon /d$$ for small $$\varepsilon>0$$.

d = 1: $$t<(\beta _1 - 1)/2$$; with (A1), $$t<1 + \varepsilon $$
d = 2: $$t<(\beta _1 - 1)$$; with (A1), $$t<2 + \varepsilon $$
d = 3: $$t<\frac{3}2(\beta _1 - 1)$$; with (A1), $$t<3+\varepsilon $$

For d = 1 and d = 2, the realisations allowed by $$t<1+\varepsilon $$ and $$t<2+\varepsilon $$, respectively, appear to be rough enough. For d = 1, $$H^1(D)$$ is characterised as a space of absolutely continuous functions; since in practice we employ a suitable numerical method to solve the PDEs, the validity of point evaluations demands $$a(\cdot ,\boldsymbol{y}) \in C(D)$$. For d = 2, $$H^2(D)$$ embeds into $$C^{0,t}(\overline{D})$$ for t ∈ (0, 1); this is a standard assumption for the convergence of the FEM with hat-function elements on polygonal domains. For d = 3, we have $$t<3+\varepsilon $$, and $$H^3(D) = H^{1+2}(D)$$ embeds into $$C^{1,t}(\overline{D})$$ for $$t\in (0,2 - \frac 32] = (0, \frac 12]$$. In practice, we employ quadrature rules to compute the integrals in the bilinear form, and $$a \in C^{1,t}(\overline{D})$$ ($$t \in (0,{\frac 12}]$$) is a reasonable assumption under which to obtain the convergence rate of the FEM with quadrature. As a matter of fact, we want $$a(\cdot ,\boldsymbol{y}) \in C^{2r}(\overline{D})$$ to have the $$\mathcal{O}(H^{2r})$$ convergence of the expected $$L^{p}(\varOmega )$$-moment of the $$L^2(D)$$-error even for $$C^2$$-bounded domains; see Charrier et al. (2013, Remark 3.14) and Teckentrup et al. (2013, Remark 3.2).
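The thresholds above amount to simple arithmetic. The following sketch (illustrative only; the function names are ours, not from the paper) evaluates the bound $$t<\frac{d}2(\beta _1-1)$$ under (A1), and checks numerically that the geometric series $$\sum _{\ell } 2^{\ell (2t-d(\beta _1-1) )}$$ appearing in the moment bounds of Section 5.2.1 converges precisely when t lies below this threshold:

```python
# Illustrative sketch (function names are ours, not from the paper).

def smoothness_bound(d: int, eps: float) -> float:
    """Upper bound t < (d/2)*(beta1 - 1) on the Sobolev exponent t,
    evaluated at beta1 = 3 + 2*eps/d as in (A1); simplifies to d + eps."""
    beta1 = 3.0 + 2.0 * eps / d
    return 0.5 * d * (beta1 - 1.0)

def tail(t: float, d: int, beta1: float, l0: int = 0, terms: int = 200) -> float:
    """Partial sum of sum_l 2**(l*(2*t - d*(beta1 - 1))): a geometric series,
    convergent iff 2*t - d*(beta1 - 1) < 0, i.e. iff t < (d/2)*(beta1 - 1)."""
    alpha = 2.0 * t - d * (beta1 - 1.0)
    return sum(2.0 ** (l * alpha) for l in range(l0, l0 + terms))

for d in (1, 2, 3):
    print(d, smoothness_bound(d, eps=0.01))  # prints d + 0.01 for d = 1, 2, 3
```

For t below the threshold, the partial sums of `tail` agree with the closed form $$2^{\ell _0\alpha }/(1-2^{\alpha })$$ of a geometric series with ratio $$2^{\alpha }<1$$; above the threshold they grow without bound, consistent with column 2 of Table 1.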
Finally, we note that these embedding results are in general optimal (see, for example, Adams & Fournier, 2003, 4.12, 4.40–4.44), and in this sense, together with the characterisation (Theorem 5.3), the condition for our model to have Hölder smoothness is sharp.

5.3 Dimension truncation error

In this section we estimate the truncation error $$ \mathbb{E} \left \| u - u^s \right \|_V $$. As in the previous section, the truncation is considered in terms of the level L and we let $$s=s(L)=\sum _{\ell =\ell _0}^{L}\#(\nabla _{\ell })$$. Let $$a^s$$ be a(x, y) with $$y_j=0$$ for j > s, and define $$\check{a}^s(\boldsymbol{y})$$ and $$\hat{a}^s(\boldsymbol{y})$$ accordingly. Proposition 5.5 Let u be the solution of the variational problem (1.6) with the coefficient given by the stochastic model (5.6) defined with (5.4) and (5.5). Let $$u^{s(L)}$$ be the solution of the same problem but with $$y_j:=0$$ for j > s(L). Then we have \begin{equation} \mathbb{E}\left[\big\| u - u^{s(L)}\big\|_V\right] \lesssim \left(\sum_{\ell=L+1}^{\infty} 2^{\ell ({-d(\beta_1-1)+\epsilon^{\prime}})}\right)^{\frac12} \end{equation} (5.33) for any $$\epsilon ^{\prime}\in (0,2\min \{t_*,d/2({\beta _1 - 1})\})$$. Proof. By a variant of Strang’s lemma, we have \begin{equation} \left\| u - u^s \right\|_V \leqslant \left\| a-a^s \right\|_{L^\infty(D)}\frac{\left\| f \right\|_{{V^{^{\prime}}}}}{\check{a}(\boldsymbol{y})\check{a}^s(\boldsymbol{y})} \end{equation} (5.34) for y such that $$\check{a}(\boldsymbol{y}),\check{a}^s(\boldsymbol{y})>0$$. We first derive an estimate of $$\left \| a-a^s \right \|_{L^\infty (D)}$$. Fix $$t\in (0,\min \{t_*,d/2({\beta _1 - 1})\})$$ arbitrarily, where $$t_*$$ is the parameter in (W5). Since $$t\in (0,\frac{d}2(\beta _1-1))$$, we may choose $${{\bar{\texttt{p}}}}_0\in [1,\infty )$$ such that $$\frac{d}{{{\bar{\texttt{p}}}}_0}\leqslant t$$ so that we can invoke the Besov embedding results.
Since $$\max \{d(\frac{1}{{{\bar{\texttt{p}}}}_0}-1),0\}<t$$, from Theorem 5.3 there exists a set $$\varOmega _0\subset \varOmega $$ with $$\mathbb{P}(\varOmega _0)=1$$ such that $$T(\cdot ,\boldsymbol{y})\in B^t_{{{\bar{\texttt{q}}}}}(L^{{{\bar{\texttt{p}}}}_0}(D))$$ for all $$\boldsymbol{y}\in \varOmega _0$$ and any $${{\bar{\texttt{q}}}}\in [1,\infty )$$. Then, letting $$T^L(x,\boldsymbol{y}):=\sum _{\ell =\ell _0}^{L} \sum _{k\in \nabla _{\ell }} y_{\ell ,k} \sigma _{\ell } \varphi _{\ell ,k}(x)$$, from the Besov embedding results (Adams & Fournier, 2003, Chapter 7) and the characterisation by wavelets (W5), for any $$L\geqslant L^{\prime}\geqslant 1$$ we have \begin{equation} \left\| T^L(\cdot,\boldsymbol{y})-T^{L^{\prime}}(\cdot,\boldsymbol{y}) \right\|_{L^{\infty}(D)} \lesssim \left\| T^L(\cdot,\boldsymbol{y})-T^{L^{\prime}}(\cdot,\boldsymbol{y}) \right\|_{B^t_{{{\bar{\texttt{q}}}}}(L^{{{\bar{\texttt{p}}}}_0}(D))} \end{equation} (5.35) \begin{equation} \sim \left( \sum_{\ell=L^{\prime}+1}^L 2^{\ell(t+d(1/2-1/{{{\bar{\texttt{p}}}}_0})) {{\bar{\texttt{q}}}} } \left( \sum_{k\in\nabla_{\ell}} |\sigma_{\ell} y_{\ell,k} |^{{{\bar{\texttt{p}}}}_0} \right)^{{{\bar{\texttt{q}}}}/{{\bar{\texttt{p}}}}_0} \right)^{1/{{\bar{\texttt{q}}}}}<\infty \end{equation} (5.36) for all $$\boldsymbol{y}\in \varOmega _0$$. Thus, the sequence $$\{T^{L}(\cdot ,\boldsymbol{y})\}_L$$ ($$\boldsymbol{y}\in \varOmega _0$$) is Cauchy, and hence convergent, in $$L^{\infty }(D)$$.
Hence, we obtain \begin{equation} \left\| T(\cdot,\boldsymbol{y})-T^{L}(\cdot,\boldsymbol{y}) \right\|_{L^{\infty}(D)}^{{{\bar{\texttt{q}}}}} \lesssim \sum_{\ell=L+1}^{\infty} 2^{\ell(t+d(1/2-1/{{{\bar{\texttt{p}}}}}) ){{\bar{\texttt{q}}}}} \left( \sum_{k\in\nabla_{\ell}} |\sigma_{\ell} y_{\ell,k} |^{{{\bar{\texttt{p}}}}} \right)^{{{\bar{\texttt{q}}}}/{{\bar{\texttt{p}}}}}\qquad \text{a.s.,} \end{equation} (5.37) for all $${{\bar{\texttt{p}}}}\in [1,\infty )$$ such that $$\frac{d}{{{\bar{\texttt{p}}}}}\leqslant t$$, and any $${{\bar{\texttt{q}}}}\in [1,\infty )$$. For such $${{\bar{\texttt{p}}}}$$ and $${{\bar{\texttt{q}}}}$$, from Cioica et al. (2012, Proof of Theorem 6) we have \begin{equation} \mathbb{E}\left[ \big\| T(\cdot,\boldsymbol{y})-T^{L}(\cdot,\boldsymbol{y}) \big\|_{L^{\infty}(D)}^{{{\bar{\texttt{q}}}}} \right]\lesssim \sum_{\ell=L+1}^{\infty} 2^{\ell(t+d(1/2-1/{{{\bar{\texttt{p}}}}}) ){{\bar{\texttt{q}}}}} \sigma_{\ell}^{{{\bar{\texttt{q}}}}} ( \# \nabla_{\ell} )^{{{\bar{\texttt{q}}}}/{{\bar{\texttt{p}}}}} \sim \sum_{\ell=L+1}^{\infty} 2^{\ell{{\bar{\texttt{q}}}} (t-\frac{d}2(\beta_1-1) )}<\infty. \end{equation} (5.38) Further, from (5.26) we have \begin{align} &\mathbb{E}\Big[\left\| a(x,\boldsymbol{y})-a^{s(L)}(x,\boldsymbol{y}) \right\|_{L^{\infty}(D)}^2\Big]\nonumber\\ &\qquad\leqslant (\sup_{x\in D}|a_0(x)|^2) \mathbb{E}[\exp(2\left\| T(\cdot,\boldsymbol{y}) \right\|_{L^{\infty}(D)}) + \exp(2\|T^L(\cdot,\boldsymbol{y})\|_{L^{\infty}(D)})]\,\, \mathbb{E}\left[\big\| T-T^L \big\|_{L^{\infty}(D)}^2\right]. \end{align} (5.39) The sequence $$(\rho _\xi )$$ defined by (5.12), when reordered, satisfies $$(1/\rho _j)\in \ell ^{\frac{d}{\theta }+\varepsilon }$$ for any $$\varepsilon>0$$. Thus, from the proof of Corollary 3.2, as in Bachmayr et al. 
(2017a, Remark 2.2), we have \begin{equation} \max\left\{ \mathbb{E}[\exp(2\left\| T(\cdot,\boldsymbol{y}) \right\|_{L^{\infty}(D)})], \mathbb{E}[ \exp(2\|T^L(\cdot,\boldsymbol{y})\|_{L^{\infty}(D)})] \right\}<M_2, \end{equation} (5.40) where the constant $$M_2>0$$ is independent of L. Together with (5.34), we have \begin{equation} \mathbb{E}[\left\| u - u^s \right\|_V] \leqslant \left\| f \right\|_{{V^{^{\prime}}}} \mathbb{E}\left[ \frac{1}{(\check{a}(\boldsymbol{y}))^4}\right]^{\frac14} \mathbb{E}\left[ \frac{1}{(\check{a}^s(\boldsymbol{y}))^4}\right]^{\frac14} \mathbb{E}\left[\left\| a-a^s \right\|_{L^\infty(D)}^2\right]^{\frac12}<\infty, \end{equation} (5.41) where the Cauchy–Schwarz inequality is applied to the right-hand side of (5.34). To see the finiteness of the right-hand side of (5.41), note that $$ \frac1{\check{a}(\boldsymbol{y})}\leqslant \frac1{\inf_{x\in D} a_0(x)}\exp(\|T\|_{L^{\infty}(D)}),\quad\ \frac1{\check{a}^s(\boldsymbol{y})}\leqslant\frac1{\inf_{x\in D} a_0(x)} \exp(\|T^L\|_{L^{\infty}(D)}),$$ and further, by the same argument as above, we have \begin{equation} \max\left\{ \mathbb{E}[\exp(4\left\| T(\cdot,\boldsymbol{y}) \right\|_{L^{\infty}(D)})], \mathbb{E}[ \exp(4\|T^L(\cdot,\boldsymbol{y})\|_{L^{\infty}(D)})] \right\}<M_4, \end{equation} (5.42) where the constant $$M_4>0$$ is independent of L. Therefore, from (5.38), (5.39) and (5.41) we obtain \begin{equation} \mathbb{E}\big[\big\| u - u^{s(L)}\big\|_V\big] \lesssim \mathbb{E}\left[\big\| T-T^L \big\|_{L^{\infty}(D)}^2\right]^{\frac12} \lesssim \left(\sum_{\ell=L+1}^{\infty} 2^{\ell (2t-d(\beta_1-1) )}\right)^{\frac12}. \end{equation} (5.43) Letting $$\epsilon ^{\prime}:=2t$$ completes the proof. We conclude this section with a remark on other examples to which the currently developed QMC theory is applicable. Bachmayr et al. (2017c) considered functions $$(\psi _j)$$ with so-called finitely overlapping supports, for example, indicator functions of a partition of the domain D.
It is easy to find a positive sequence $$(\rho _j)$$ such that Assumption 2.1 holds, and thus Theorem 4.4 readily follows. However, for these examples, due to the lack of smoothness it does not seem easy to obtain a meaningful analysis such as the one given above, and thus we forgo elaborating on them.

6. Concluding remark

We considered a QMC theory for a class of elliptic PDEs with a log-normal random coefficient. Using an estimate of the partial derivative with respect to the parameter $$y_{{\mathfrak{u}}}$$ that is of product form, we established a convergence rate ≈ 1 for randomly shifted lattice rules. Further, we considered a stochastic model with wavelets, and analysed the smoothness of the realisations and the truncation errors. In closing we note that the currently developed theory works well for $$(\psi _j)$$ with local supports, such as the wavelets described above, but not so well for functions with arbitrary supports. In fact, under the same summability condition $$ \sum_{j\geqslant1}(\sup_{x\in D}|\psi_j(x)|)^p<\infty\quad\textrm{for some}\,\, p\in(0,1] $$ considered by Graham et al. (2015), letting $$\rho _j:=c(\sup _{x\in D}|\psi _j(x)|)^{p-1}$$ with a suitable constant c > 0, one can apply Theorem 4.4 with $$q:=q(p):=\frac{p}{1-p}$$. Consequently, one obtains a convergence rate ≈ 1 for $$p\in (0,\frac 25+\varepsilon ]$$ for small $$\varepsilon $$. Under a weaker summability condition than this, namely $$p\in (0,\frac 12]$$, similarly to the uniform case (Kuo et al., 2012, p. 3368), the rate ≈ 1 with product weights already follows from the results of Graham et al. (2015); we are grateful to Frances Y. Kuo for bringing this point to our attention. Another point related to the above concerns the cost of the CBC construction. Suppose that we can represent a given random field with two representations: spatial functions with local support and with global support.
Let s(L) be the truncation degree as in Section 5.2.1 for the local-support representation, and $$\tilde{s}$$ be that for the global-support representation as in Graham et al. (2015). We mentioned that the generating vector for the lattice rule can be constructed at cost $$\mathcal{O}(s(L) n \log n)$$ with the CBC construction algorithm. In Graham et al. (2015), the POD weights led to the cost $$\mathcal{O}(\tilde{s}n\log n + \tilde{s}^2 n)$$. Given a target error, it is not clear which cost is larger: we might require $$s(L)\gg \tilde{s}$$ to achieve the desired truncation error.

Acknowledgements

I would like to express my sincere gratitude to Frances Y. Kuo, Klaus Ritter, Ian H. Sloan and the anonymous reviewers for their stimulating comments.

References

Adams, R. A. & Fournier, J. J. (2003) Sobolev Spaces, vol. 140. Amsterdam: Academic Press.
Bachmayr, M., Cohen, A., DeVore, R. & Migliorati, G. (2017a) Sparse polynomial approximation of parametric elliptic PDEs. Part II: lognormal coefficients. ESAIM Math. Model. Numer. Anal., 51, 341–363.
Bachmayr, M., Cohen, A. & Migliorati, G. (2017b) Representations of Gaussian random fields and approximation of elliptic PDEs with lognormal coefficients. J. Fourier Anal. Appl.
Bachmayr, M., Cohen, A. & Migliorati, G. (2017c) Sparse polynomial approximation of parametric elliptic PDEs. Part I: affine coefficients. ESAIM Math. Model. Numer. Anal., 51, 321–339.
Charrier, J. (2012) Strong and weak error estimates for elliptic partial differential equations with random coefficients. SIAM J. Numer. Anal., 50, 216–246.
Charrier, J., Scheichl, R. & Teckentrup, A. L. (2013) Finite element error analysis of elliptic PDEs with random coefficients and its application to multilevel Monte Carlo methods. SIAM J. Numer. Anal., 51, 322–352.
Cioica, P. A.
, Dahlke, S., Döhring, N., Kinzel, S., Lindner, F., Raasch, T., Ritter, K. & Schilling, R. L. (2012) Adaptive wavelet methods for the stochastic Poisson equation. BIT Numer. Math., 52, 589–614.
Cohen, A. (2003) Numerical Analysis of Wavelet Methods. Studies in Mathematics and its Applications, vol. 32. Amsterdam: Elsevier (North-Holland Publishing).
Cohen, A. & DeVore, R. (2015) Approximation of high-dimensional parametric PDEs. Acta Numer., 24, 1–159.
Dagan, G. (1984) Solute transport in heterogeneous porous formations. J. Fluid Mech., 145, 151–177.
DeVore, R. A. (1998) Nonlinear approximation. Acta Numer., 7, 51–150.
Dick, J., Kuo, F. Y., Le Gia, Q. T., Nuyens, D. & Schwab, C. (2014) Higher order QMC Petrov–Galerkin discretization for affine parametric operator equations with random field inputs. SIAM J. Numer. Anal., 52, 2676–2702.
Dick, J., Kuo, F. Y. & Sloan, I. H. (2013) High-dimensional integration: the quasi-Monte Carlo way. Acta Numer., 22, 133–288.
Gantner, R. N., Herrmann, L. & Schwab, C. (2018) Quasi-Monte Carlo integration for affine-parametric, elliptic PDEs: local supports imply product weights. SIAM J. Numer. Anal., 56, 111–135.
Gilbarg, D. & Trudinger, N. S. (1983) Elliptic Partial Differential Equations of Second Order. Berlin, Heidelberg: Springer.
Graham, I. G., Kuo, F. Y., Nichols, J. A., Scheichl, R., Schwab, C. & Sloan, I. H. (2015) Quasi-Monte Carlo finite element methods for elliptic PDEs with lognormal random coefficients. Numer. Math., 131, 329–368.
Graham, I. G., Kuo, F. Y., Nuyens, D., Scheichl, R.
& Sloan, I. H. (2011) Quasi-Monte Carlo methods for elliptic PDEs with random coefficients and applications. J. Comput. Phys., 230, 3668–3694.
Herrmann, L. & Schwab, C. (2016) Quasi-Monte Carlo integration for lognormal-parametric, elliptic PDEs: local supports imply product weights. Seminar for Applied Mathematics, ETH Zürich, Research Report No. 2016-39.
Itô, K. (1984) Introduction to Probability Theory. Cambridge: Cambridge University Press.
Kunita, H. (2004) Stochastic differential equations based on Lévy processes and stochastic flows of diffeomorphisms. Real and Stochastic Analysis, Trends Math. (M. M. Rao ed.). Boston: Birkhäuser, pp. 305–373.
Kuo, F. Y. & Nuyens, D. (2016) Application of quasi-Monte Carlo methods to elliptic PDEs with random diffusion coefficients: a survey of analysis and implementation. Found. Comput. Math. (in press).
Kuo, F. Y., Schwab, C. & Sloan, I. H. (2012) Quasi-Monte Carlo finite element methods for a class of elliptic partial differential equations with random coefficients. SIAM J. Numer. Anal., 50, 3351–3374.
Naff, R. L., Haley, D. F. & Sudicky, E. A. (1998a) High-resolution Monte Carlo simulation of flow and conservative transport in heterogeneous porous media: 1. Methodology and flow results. Water Resour. Res., 34, 663–677.
Naff, R. L., Haley, D. F. & Sudicky, E. A. (1998b) High-resolution Monte Carlo simulation of flow and conservative transport in heterogeneous porous media: 2. Transport results. Water Resour. Res., 34, 679–697.
Reed, M. & Simon, B. (1980) Methods of Modern Mathematical Physics. I, 2nd edn. New York: Academic Press.
Scheichl, R., Stuart, A. M. & Teckentrup, A. L.
(2017) Quasi-Monte Carlo and multilevel Monte Carlo methods for computing posterior expectations in elliptic inverse problems. SIAM/ASA J. Uncertain. Quantif., 5, 493–518.
Schwab, C. & Gittelson, C. J. (2011) Sparse tensor discretizations of high-dimensional parametric and stochastic PDEs. Acta Numer., 20, 291–467.
Teckentrup, A., Scheichl, R., Giles, M. & Ullmann, E. (2013) Further analysis of multilevel Monte Carlo methods for elliptic PDEs with random coefficients. Numer. Math., 125, 569–600.
Urban, K. (2002) Wavelets in Numerical Simulation. Lecture Notes in Computational Science and Engineering, vol. 22. Berlin, Heidelberg: Springer.
Yosida, K. (1995) Functional Analysis. Classics in Mathematics. Berlin: Springer.

© The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices). For permissions, please e-mail: journals.permissions@oup.com.

Journal: IMA Journal of Numerical Analysis (Oxford University Press). Published: May 23, 2018.
