# Construction and implementation of asymptotic expansions for Laguerre-type orthogonal polynomials

## Abstract

Laguerre and Laguerre-type polynomials are orthogonal polynomials on the interval $$[0,\infty)$$ with respect to a weight function of the form \begin{equation*} w(x) = x^{\alpha} e^{-Q(x)}, \quad Q(x) = \sum_{k=0}^m q_k x^k, \quad \alpha > -1, \quad q_m > 0. \end{equation*} The classical Laguerre polynomials correspond to $$Q(x)=x$$. The computation of higher order terms of the asymptotic expansions of these polynomials for large degree becomes quite complicated, and a full description seems to be lacking in the literature. However, this information is implicitly available in the work of Vanlessen (2007, Strong asymptotics of Laguerre-type orthogonal polynomials and applications in random matrix theory. Constr. Approx., 25, 125–175), based on a nonlinear steepest descent analysis of an associated Riemann–Hilbert problem. We extend this work and show how to efficiently compute an arbitrary number of higher order terms in the asymptotic expansions of Laguerre and Laguerre-type polynomials. This effort is similar to the case of Jacobi and Jacobi-type polynomials in a previous article. We supply an implementation with explicit expansions in four different regions of the complex plane. These expansions can also be extended to Hermite-type weights of the form $$\exp(-\sum_{k=0}^m q_k x^{2k})$$ on $$(-\infty,\infty)$$ and to general nonpolynomial functions $$Q(x)$$ using contour integrals. The expansions may be used, e.g., to compute Gauss–Laguerre quadrature rules with lower computational complexity than methods based on the recurrence relation, and with improved accuracy for large degree. They are also of interest in random matrix theory.
## 1. Introduction

We determine asymptotic approximations as $$n \rightarrow \infty$$ of the orthonormal polynomials $$p_n(x)$$ on $$[0,\infty)$$ with positive leading coefficient, with the weight function $w(x) = x^{\alpha} e^{-Q(x)}, \quad Q(x) = \sum_{k=0}^m q_k x^k, \quad \alpha > -1, \quad q_m > 0.$ The classical Laguerre polynomials correspond to $$Q(x)=x$$, but we aim to provide formulas and results for general real-valued functions $$Q(x)$$. The choice $$Q(x)=x^m$$ corresponds to so-called Freud-type polynomials (see Levin & Lubinsky, 2001). The asymptotic expansions in this article are commonly referred to as Plancherel–Rotach-type expansions, named after the initial study of Hermite polynomials by Plancherel & Rotach (1929). The procedure in Vanlessen (2007) gives four types of asymptotic expansions: (I) inner asymptotics for $$x$$ near the bulk of the zeros, but away from the extreme zeros; (II) outer asymptotics valid for $$x$$ away from the zeros of $$p_n(x)$$; (III) boundary asymptotics near the so-called soft edge, valid for $$x$$ near the largest zeros; and (IV) boundary asymptotics near the so-called hard edge, for $$x$$ near $$0$$. We also provide asymptotic expansions for associated quantities such as leading term coefficients, as well as the recurrence coefficients $$a_n$$ and $$b_n$$ of the three-term recurrence relation $$\label{recur} b_n p_{n+1}(x) = (x-a_n)p_{n}(x) - b_{n-1}p_{n-1}(x).$$ (1.1) The methodology of Vanlessen (2007) is based on the nonlinear steepest descent method of Deift & Zhou (1993) for a $$2 \times 2$$ Riemann–Hilbert problem that is generically associated with orthogonal polynomials by Fokas et al. (1992). This is further detailed in Section 2.1. The general strategy is to apply an explicit and invertible sequence of transformations $$Y(z)\mapsto T(z) \mapsto S(z) \mapsto R(z)$$, such that the final matrix-valued function $$R(z)$$ is asymptotically close to the identity matrix as $$n$$ or $$z$$ tends to $$\infty$$.
The asymptotic result for $$Y(z)$$, and subsequently for the polynomials, is obtained by inverting these transformations. The transformations involve a normalization of behaviour at infinity, the so-called ʻopening of a lensʼ around an interval of interest, and the introduction of local parametrices in disks around special points like endpoints, which are matched to global parametrices elsewhere in the complex plane. These transformations split $$\mathbb{C}$$ into different regions, where different formulas for the asymptotics are valid. In our case of Laguerre-type polynomials, one also first needs an $$n$$-dependent rescaling of the $$x$$ axis using the so-called MRS numbers (defined further on in this article). After this step, the roots of the rescaled polynomials accumulate in a fixed and finite interval. We provide an algorithm to obtain an arbitrary number of terms in the expansions, where we set up series expansions using many convolutions that follow the chain of transformations and their inverses in this steepest descent method. While doing this, we keep computational efficiency in mind, as well as the use of the correct branch cuts in the complex plane. The strategy outlined above and in Section 2.1 was also followed in our earlier article about asymptotic expansions of Jacobi-type polynomials, Deaño et al. (2016), which was based on the mathematical analysis in Kuijlaars et al. (2004). Here, we base our results on the analysis in the work of Vanlessen (2007). The main differences in this article compared with Deaño et al. (2016) are the following:

- The analysis of Laguerre-type polynomials on the half-line $$[0,\infty)$$ has an extra step that involves a rescaling via the Mhaskar–Rakhmanov–Saff (MRS) numbers $$\beta_n$$ (see Section 3). This leads to fractional powers of $$n$$ in the expansions.
- There is a new type of behaviour near the largest zero (often referred to as a soft edge), captured by the Airy function. This leads to higher order poles in the derivations, and thus also to longer formulas for the higher order terms in the expansions.
- The behaviour near the hard edge at $$x=0$$ involves Bessel functions, as in the Jacobi case near the endpoints $$\pm 1$$.
- We obtain more explicit results for polynomial $$Q(x)$$. We will frequently distinguish in this article between three cases: monomial $$Q$$, general polynomial $$Q$$ and a more general analytic function $$Q$$.

Asymptotic expansions can be useful in computations for several reasons. The use of the recurrence relation (1.1) in applications involving Laguerre polynomials results in accumulating roundoff errors and a computation time that is linear in $$n$$. In contrast, asymptotic expansions become increasingly accurate as the degree $$n$$ becomes large, and their computing time is essentially independent of $$n$$. Another motivation is the partition function for Laguerre ensembles of random matrices, as mentioned in Vanlessen (2007) and studied in Zhao et al. (2014). This gives the eigenvalue distribution of products of random matrices of a certain type, which could, for example, arise in stochastic processes (Markov chains) or quantum mechanics. In this article, some leading order terms are given explicitly, and we detail the derivation of higher order terms. The analysis only requires elementary numerical techniques: in particular, there is no need for the evaluation of special functions besides the Airy and Bessel functions. The formulas are implemented in Sage and Matlab and are available on the software web page of our research group (see Opsomer, 2016).
The expressions are also implemented in the Chebfun package in Matlab for computing with functions (The University of Oxford & the Chebfun Developers, 2016, lagpts.m) and in the Julia package FastGaussQuadrature (Townsend, 2016, gausslaguerre.jl), in both cases for the purpose of constructing Gauss–Laguerre quadrature rules with a high number of points in linear complexity. This approach is comparable to a number of modern numerical methods for other types of Gaussian quadrature, which are often based on asymptotics and also lead to linear complexity (see Glaser et al., 2007; Bogaert et al., 2012; Hale & Townsend, 2013; Bogaert, 2014). A recent article (Bremer, 2017) achieves competitive performance in a general way via nonoscillatory phase functions. Potential improvements to our code in light of this result, and a more thorough discussion of the contributions to the computation of Gaussian quadrature rules, are future research topics. We do remark here that the construction of Gaussian quadrature rules requires the derivative of the associated orthonormal Laguerre polynomial with positive leading coefficient. This can be obtained from our expansions and the identity $${\rm d} p_n^{(\alpha)}(x)/{\rm d}x = \sqrt{n}p_{n-1}^{(\alpha+1)}(x)$$, which follows from (NIST, 2016, 18.9.23) after accounting for normalization. Although technical, the expansions can readily be differentiated for general $$Q(x)$$. This article affirmatively answers the questions raised in the conclusions of Townsend et al. (2015), namely whether the Riemann–Hilbert approach can be applied to the fast computation of quadrature rules with generalized Laguerre weights (for the Jacobi case, see Deaño et al., 2016), and whether higher order asymptotic expansions can be computed effectively. As mentioned, the standard Laguerre polynomials correspond to $$Q(x) = x$$, or equivalently $$q_1=1$$ and $$q_k = 0$$ for all $$k \neq 1$$.
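As a quick sanity check of the derivative identity above, the following Python sketch compares a central finite difference of $$p_n^{(\alpha)}$$ with $$\sqrt{n}\,p_{n-1}^{(\alpha+1)}$$, using the classical three-term recurrence for $$L_n^{(\alpha)}$$ and the normalization $$p_n^{(\alpha)} = (-1)^n\sqrt{n!/{\it\Gamma}(n+\alpha+1)}\,L_n^{(\alpha)}$$; the helper names and parameter values are ours, not part of the implementation in Opsomer (2016):

```python
import math

def laguerre(n, alpha, x):
    """Generalized Laguerre L_n^{(alpha)}(x) via its three-term recurrence."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 + alpha - x
    for k in range(1, n):
        prev, cur = cur, ((2*k + 1 + alpha - x)*cur - (k + alpha)*prev) / (k + 1)
    return cur

def p(n, alpha, x):
    """Orthonormal Laguerre polynomial with positive leading coefficient."""
    norm = math.exp(0.5*(math.lgamma(n + 1) - math.lgamma(n + alpha + 1)))
    return (-1)**n * norm * laguerre(n, alpha, x)

n, alpha, x, h = 10, 0.5, 3.0, 1e-6
lhs = (p(n, alpha, x + h) - p(n, alpha, x - h)) / (2*h)   # d p_n^{(alpha)}/dx
rhs = math.sqrt(n) * p(n - 1, alpha + 1, x)
assert abs(lhs - rhs) < 1e-5 * max(1.0, abs(rhs))
```

The finite-difference error is of size $$\mathcal{O}(h^2)$$, so both sides agree to far better than the tolerance used here.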
For this case, asymptotic expansions are given in Deaño et al. (2013) with explicit expressions for the first terms. We refer the reader to Temme (1990), López & Temme (2004) and references therein for more results on asymptotics for the standard Laguerre polynomials. A recent scheme for the numerical evaluation of Laguerre polynomials of any degree is described in Gil et al. (2017). As an example, the expansions in this article take the following form. For the monic Laguerre polynomial $$\pi_n$$ of degree $$n$$, we obtain: \begin{align} \pi_n(x) = \gamma_n^{-1} p_n(4nz) &= \frac{(4n)^{n}\, e^{n (2z-1-2\log(2))} }{z^{1/4}(1-z)^{1/4} z^{\alpha/2} } \begin{pmatrix} 1 \\ 0 \end{pmatrix}^{\rm T} R^{{\mathrm{outer}}}(z) \nonumber \\ &\quad{} \begin{pmatrix} 2^{-\alpha} \cos(\arccos(2z-1)[1/2+\alpha/2] - n[2\sqrt{z}\sqrt{1-z}-2\arccos(\sqrt{z}) ] -\pi /4) \\[6pt] -i 2^{\alpha} \cos(\arccos(2z-1)[\alpha/2-1/2] - n[2\sqrt{z}\sqrt{1-z}-2\arccos(\sqrt{z}) ] -\pi /4) \end{pmatrix}, \qquad z \in (0,1). \nonumber \end{align} This is expression (4.1) of the article, specified to the standard associated Laguerre weight $$w(x)=x^{\alpha}e^{-x}$$. It is valid for $$x$$ in a complex neighbourhood of $$(0,4n)$$, where $$x$$ is related to $$z$$ through the MRS number $$\beta_n = 4n$$, i.e., $$x=4nz$$. The expansion for $$\pi_n(x)$$ itself follows from substituting the expansion of the $$2 \times 2$$ matrix function $$R^{{\mathrm{outer}}}(z)$$. The leading order term is obtained from the identity matrix $$R^{{\mathrm{outer}}}(z)=I$$, and further terms are listed explicitly in the appendix. We provide formulas and their implementation for an arbitrary number of terms in the asymptotic expansions and compute up to 50 terms in 32 s on the architecture mentioned in Section 7.4. These formulas can also be applied to obtain asymptotic expansions of orthogonal polynomials with Hermite-type weights of the form $$\exp(-\sum_{k=0}^m q_k x^{2k})$$ on $$(-\infty,\infty)$$.
In Section 7.2, we show that they can be given in terms of asymptotics of Laguerre-type polynomials with $$\alpha = \pm 1/2$$, evaluated at $$x^2$$. We aim for a general nonpolynomial weight function $$Q(x)$$, though our results in this case are thus far not rigorously valid. In particular, we do not provide estimates for the remainder term. We do provide numerical indications that the expansions converge at the expected rate for increasing $$n$$. Inspired by the requirements for the Jacobi case in Deaño et al. (2016) and the technical conditions on $$Q(x)$$ in Levin & Lubinsky (2001, Section 1), we conjecture that the expansions in this article are valid as long as $$Q(x)$$ is analytic within the contours defined further on and $$Q(x)$$ grows faster than powers of $$\log(x)$$ for $$x\rightarrow \infty$$. The structure of the article is as follows. In Section 2, we connect the Riemann–Hilbert problem for orthogonal polynomials that is analysed in Vanlessen (2007) with the expansion of a $$2 \times 2$$ matrix-valued function $$R$$ and introduce some notation. We detail the Mhaskar–Rakhmanov–Saff (MRS) numbers $$\beta_n$$ and their asymptotic expansions for large $$n$$ in Section 3. The formulas for the asymptotic expansions of the polynomials in the different regions of the complex plane are stated in Section 4. We explain the computation of higher order terms of $$R$$ in Section 5 and provide a nonrecursive definition for $$R$$. Details on obtaining explicit expressions for higher order terms are provided in Section 6. We conclude the article with a number of examples and numerical results in Section 7.

## 2. Asymptotic expansions for Laguerre-type polynomials

The largest root of a Laguerre-type polynomial $$p_n(x)$$ grows with the degree $$n$$. For example, it asymptotically behaves as $$4n+2\alpha+2 + 2^{2/3}a_1(4n+2\alpha+2)^{1/3}$$ for the standard associated Laguerre polynomials, with $$a_1$$ the (negative) zero of the Airy function closest to zero (see Szegő, 1967, (6.32.4); Olver et al., 2010, (18.16.14); NIST, 2016).
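The quality of this two-term approximation of the largest root is easy to check directly. The Python sketch below (the bracket and tolerances are our own choices, not taken from the analysis) locates the largest zero of the classical Laguerre polynomial of degree 100 by bisection on the three-term recurrence and compares it with the formula above, using $$a_1 \approx -2.33811$$:

```python
import math

def laguerre(n, x):
    """Classical Laguerre L_n(x) via (k+1)L_{k+1} = (2k+1-x)L_k - k L_{k-1}."""
    prev, cur = 1.0, 1.0 - x
    for k in range(1, n):
        prev, cur = cur, ((2*k + 1 - x)*cur - k*prev) / (k + 1)
    return cur

n, alpha = 100, 0
a1 = -2.338107410459767          # Airy zero closest to the origin
nu = 4*n + 2*alpha + 2
prediction = nu + 2**(2/3) * a1 * nu**(1/3)

# L_n > 0 beyond its largest zero (positive leading coefficient for even n);
# the bracket [360, 400] straddles only the largest zero for n = 100.
lo, hi = 360.0, 400.0
for _ in range(80):
    mid = 0.5*(lo + hi)
    if laguerre(n, mid) < 0:
        lo = mid
    else:
        hi = mid
largest_root = 0.5*(lo + hi)
assert abs(largest_root - prediction) < 1.0
```

The two-term soft-edge prediction is accurate to a few tenths for $$n=100$$, consistent with the cited references.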
The first step in the description of the asymptotics is to rescale the polynomials, such that the support of the zero-counting measure maps to the interval $$[0,1]$$. The scaling is linear but $$n$$-dependent, and given by $$\label{eq:scaling} x = \beta_n z,$$ (2.1) where $$\beta_n$$ is the MRS number (see Levin & Lubinsky, 2001) defined further on in (3.1). There is a distinction between several regions in the complex $$z$$-plane, shown in Fig. 1:

Fig. 1. Regions of the complex plane in which the polynomials have different asymptotic expansions, after rescaling the support of the zero-counting measure to the interval $$[0,1]$$: the lens (I, a complex neighbourhood of the interval $$(0,1)$$ excluding the endpoints), the outer region (II, the remainder of the complex plane) and the right and left disks (III and IV, two disks around the endpoints $$0$$ and $$1$$).

- A complex neighbourhood of the interval $$(0,1)$$ excluding the endpoints, subsequently called the ʻlensʼ (region I).
- Two disks around the endpoints $$1$$ and $$0$$, called the right and left disk (regions III and IV).
- The remainder of the complex plane, the ʻouter regionʼ (region II).

### 2.1 Riemann–Hilbert formulation and steepest descent analysis

In this section, we briefly summarize the main features of the derivation in Vanlessen (2007). The approach is based on the Riemann–Hilbert formulation for orthogonal polynomials in Fokas et al. (1992): we seek a $$2\times 2$$ complex matrix-valued function $$Y(z)$$ that satisfies the following Riemann–Hilbert problem, cf.
(Vanlessen, 2007, Section 3):

(a) $$Y : \mathbb{C}\setminus[0,\infty) \rightarrow \mathbb{C}^{2 \times 2}$$ is analytic.

(b) $$Y(z)$$ has continuous boundary values $$Y_{\pm}(x) = \lim_{\epsilon \downarrow 0} Y(x \pm i\epsilon)$$ for $$x \in (0,\infty)$$. These boundary values are related via a jump matrix: $$Y_+(x) = Y_-(x) \begin{pmatrix}1 & x^\alpha e^{-Q(x)} \\ 0 & 1\end{pmatrix} \quad {\rm for}\ x \in (0,\infty).\nonumber$$

(c) $$Y(z) \begin{pmatrix} z^{-n} & 0 \\ 0 & z^{n} \end{pmatrix} = I + \mathcal{O}(1/z)$$ as $$z \rightarrow \infty$$.

(d) All entries are $$\mathcal{O}(1)$$ as $$z \rightarrow 0$$, except that both entries in the second column are $$\mathcal{O}(z^\alpha)$$ if $$\alpha < 0$$ and $$\mathcal{O}(\log z)$$ if $$\alpha =0$$.

It is proved in Fokas et al. (1992) and Kuijlaars (2003) that the unique solution of this Riemann–Hilbert problem is $$Y(x) = \begin{pmatrix} Y_{11} & Y_{12} \\ Y_{21} & Y_{22} \end{pmatrix} = \begin{pmatrix} p_n(x)/\gamma_n & \frac{1}{2\pi i\gamma_n}\int_{0}^\infty \frac{p_n(y) w(y)}{y-x}\, {\rm d}y \\ -2\pi i \gamma_{n-1} p_{n-1}(x) & -\gamma_{n-1}\int_{0}^\infty \frac{p_{n-1}(y) w(y)}{y-x}\, {\rm d}y \end{pmatrix}\!. \nonumber$$ Here, the $$Y_{11}$$ entry is the monic orthogonal polynomial, because $$\gamma_n$$ is the leading order coefficient of $$p_n(x)$$. The $$Y_{21}$$ entry relates to the polynomial of degree $$n-1$$, whereas the second column contains the Cauchy transforms of both of these polynomials. Note that the weight function of the orthogonal polynomials enters through the jump condition in (b). To obtain the large $$n$$ asymptotic behaviour of $$p_n(x)$$, the Riemann–Hilbert formulation is combined with the Deift–Zhou steepest descent method for Riemann–Hilbert problems (see Deift & Zhou, 1992, 1993); more specific results for orthogonal polynomials with exponential weights are given in Deift et al. (1999a,b). In our case, the steepest descent analysis presented in Vanlessen (2007) consists of the following sequence of (explicit and invertible) transformations: $$Y(z)\mapsto T(z) \mapsto S(z) \mapsto R(z). 
\nonumber$$ These steps have a well-defined interpretation:

- The first step is a normalization at infinity, such that $$T(z) = I+\mathcal{O}(1/z) \qquad z\rightarrow \infty. \label{ETinf}$$ This step comes at the cost of introducing rapidly oscillating entries in the new jump matrix for the Riemann–Hilbert problem for $$T$$.
- The second step is the opening of the so-called lens around $$[0,1]$$: it factorizes the previous jump matrix such that $$S(z) = T(z)$$ outside of the lens in Fig. 1, whereas $$S(z)$$ is exponentially close to $$T(z)$$ in $$n$$ in the upper and lower part of the lens. The shape of the lens is such that the oscillating entries on the diagonal in the jump matrix are transformed into exponentially decaying off-diagonal entries.
- Finally, the last transformation $$S(z) \mapsto R(z)$$ gives rise to the disks in Fig. 1 and is defined as $$R(z)=S(z) \begin{cases} P_n(z)^{-1}, \qquad \text{for}\,\,z\,\,\text{close to }1, \\ \tilde{P}_n(z)^{-1}, \qquad \text{for}\,\,z\,\,\text{close to }0, \\ P^{(\infty)}(z)^{-1}, \qquad \text{elsewhere.} \end{cases} \label{EjumpR}$$ (2.2)

Here, a global parametrix $$P^{(\infty)}(z)$$ is introduced and constructed using the Szegő function $$z^{\alpha/2}\varphi(z)^{-\alpha/2}$$ (to be defined below). $$P_n(z)$$ and $$\tilde{P}_n(z)$$ are local parametrices that follow from a rather involved local analysis around the endpoints, and they should not be confused with the orthonormal polynomials $$p_n(x)$$. We omit the details, but we note that $$\tilde{P}_n(z)$$ is given explicitly in terms of standard Bessel and Hankel functions and their derivatives.
The precise choice of these functions is made in such a way that $$\tilde{P}_n(z)$$ satisfies an asymptotic matching condition with the global parametrix on the boundary of the left disk, namely $$\tilde{P}_n(z)\left[P^{(\infty)}(z)\right]^{-1} \sim I + {\it{\Delta}}^{{\mathrm{left}}}(z) \quad {\rm for}\ n \rightarrow \infty, \nonumber$$ where $$z$$ is on the boundary of the left disk IV in Fig. 1 and $${\it{\Delta}}^{{\mathrm{left}}}(z)$$ will be defined in Section 5.1 as the jump matrix for $$R(z)$$. A similar construction yields explicit expressions for $$P_n(z)$$ in terms of the Airy function and its derivative. Because we know $$P_n(z)$$, $$\tilde{P}_n(z)$$ and $$P^{(\infty)}(z)$$ explicitly, we can determine the asymptotic expansion of $${\it{\Delta}}^{{\mathrm{right}}/{\mathrm{left}}}(z)$$ in a closed formula. The key idea is that the Riemann–Hilbert problem for $$R(z)$$ can be solved explicitly in an asymptotic sense for large $$n$$: it can be deduced that the matrix $$R(z)$$ is itself close to the identity $$R(z) = I+\mathcal{O}\left(\frac{1}{n}\right) \qquad{\rm for}\ n\to\infty, \nonumber$$ uniformly for $$z \in \mathbb{C} \setminus {\it{\Sigma}}_R$$. Here, $${\it{\Sigma}}_R$$ is a contour that results from the sequence of transformations outlined before and consists of the boundaries of the regions in Fig. 1. If we match all powers of $$z$$ and $$n$$ in (2.2) via $${\it{\Delta}}_k^{{\mathrm{right}}/{\mathrm{left}}}(z)$$, we obtain higher order terms in the asymptotic expansions, which is exactly the technique outlined in Section 5. Finally, reversing these transformations (since the different Riemann–Hilbert problems are equivalent), one can obtain asymptotic information for $$Y(z)$$ as $$n\to\infty$$ in different sectors of the complex plane and in particular of its $$(1,1)$$ entry. 
### 2.2 The function $$R(z)$$ in the complex plane

The function $$R(z)$$ is a $$2\times 2$$ complex matrix-valued function, analytic (element-wise) in $$\mathbb{C}\setminus{\it{\Sigma}}_R$$, where $${\it{\Sigma}}_R$$ consists of the boundaries of the regions in Fig. 1. Also, $$R(z) - I = \mathcal{O}(1/z)$$ for $$z \to \infty$$. Finally, as $$n\to\infty$$ for polynomial $$Q(x)$$ with degree $$m$$ independent of $$n$$, there exist functions $$R_k(z)$$ such that the function $$R(z)$$ admits an asymptotic expansion of the form $$\label{asympRn} R(z)\sim I + \sum_{k=1}^{\infty} \frac{R_k(z)}{n^{\tfrac{k-1}{m}+1} } \quad\text{for}\ n \rightarrow \infty.$$ (2.3) We obtain different expressions for $$R_k(z)$$ depending on the region in which $$z$$ lies. We will write $$R_k^{{\mathrm{right}}}(z)$$ and $$R_k^{{\mathrm{left}}}(z)$$ to refer to the coefficients for $$z$$ near $$1$$ and $$0$$, respectively, and $$R_k^{{\mathrm{outer}}}(z)$$ to indicate the coefficients for $$z$$ outside these two disks, so $$R_k^{{\mathrm{outer}}}(z) = \mathcal{O}(1/z)$$ for $$z \to \infty$$. Our formulas for the asymptotic expansions are written in terms of these functions. One may simply substitute $$R(z)=I$$ to obtain the leading order behaviour of the expansion. Higher order expansions are obtained via recursive computation of the $$R_k(z)$$ in Section 5 or, alternatively, using the explicit expressions listed in the appendix.

### 2.3 Auxiliary functions

We recall some terminology and notation from Vanlessen (2007).
In the formulation of our results, we use the third Pauli matrix $$\sigma_3$$ and define for $$a \in \mathbb{C} \setminus \{0\}$$: $a^{\sigma_3} = \left[ \begin{array}{cc} a & 0 \\ 0 & a^{-1} \end{array}\right]\!.$ The following values $$A_k$$ arise in the recursive construction of the MRS numbers: $A_k = \int_0^1 \frac{x^{k-1/2}}{\pi\sqrt{1-x}} \,{\rm d}x = \prod_{j=1}^k \frac{2j-1}{2j} = \frac{{\it{\Gamma}}(k+1/2)}{\sqrt{\pi}\,{\it{\Gamma}}(k+1)} = 4^{-k} {2k \choose k},$ and we will also use the Pochhammer symbol or rising factorial $(n)_j = n(n+1)\cdots(n+j-1). \nonumber$ We will define the MRS number $$\beta_n$$ and the associated quantities $$h_n(z), H_n(z)$$ and $$l_n$$ in Section 3.2. The corresponding scaling (2.1) gives rise to the rescaled field $$V_n(z)$$: $V_n(z) = Q(\beta_n z)/n.$ We also define the following functions: \begin{align} \theta(z) &= \begin{cases} 1,& \quad \arg(z-1) > 0, \\ -1, & \quad \arg(z-1)\leq 0, \end{cases} \nonumber \\ \xi_n(z) &= -i \left(H_n(z)\sqrt{z}\sqrt{1-z}/2 -2\arccos(\sqrt{z})\right), \label{Exin} \end{align} (2.4) \begin{align} f_n(z) &= n^{2/3}(z-1)\left[\frac{-3\theta(z)\xi_n(z)}{2(z-1)^{3/2}}\right]^{2/3}, \nonumber \\ \bar{\phi}_n(z) &= \xi_n(z)/2 -\pi i/2. \label{Ephibar} \end{align} (2.5) The function $$\theta(z)$$ is nonstandard in the literature on asymptotics, but it is introduced here because it allows the statement of analytic continuations of some functions using principal branches. We assume principal branches of all analytic functions in this article, such that the formulas are easily implemented. One may call $$\xi_n(z)$$, $$f_n(z)$$ and $$\bar{\phi}_n(z)$$ phase functions for the orthonormal polynomials. They specify the oscillatory behaviour of $$p_n(x)$$ for $$z$$ away from the endpoints, near $$1$$ and near $$0$$, respectively. Here, too, one can avoid specifying certain branch cuts by not simplifying the definition of $$f_n(z)$$.
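The chain of equalities defining $$A_k$$ is easy to verify numerically; in the following Python sketch (names ours), the integral form is evaluated with the substitution $$x=\sin^2 t$$, which gives $$A_k = (2/\pi)\int_0^{\pi/2}\sin^{2k}t\,{\rm d}t$$:

```python
import math

def A(k):
    """A_k as the product prod_{j=1}^k (2j-1)/(2j)."""
    out = 1.0
    for j in range(1, k + 1):
        out *= (2*j - 1) / (2*j)
    return out

for k in range(8):
    gamma_form = math.gamma(k + 0.5) / (math.sqrt(math.pi) * math.gamma(k + 1))
    binom_form = 4.0**(-k) * math.comb(2*k, k)
    # midpoint rule for (2/pi) * int_0^{pi/2} sin(t)^{2k} dt
    N = 2000
    integral = sum(math.sin((j + 0.5) * math.pi / (2*N))**(2*k)
                   for j in range(N)) * (math.pi / (2*N)) * 2 / math.pi
    assert abs(A(k) - gamma_form) < 1e-12
    assert abs(A(k) - binom_form) < 1e-12
    assert abs(A(k) - integral) < 1e-10
```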
The function $$\bar{\phi}_n(z)$$ corresponds to $$\sqrt{\tilde{\varphi}_n(z)}$$ in Vanlessen (2007) and is used for the analytic continuation of the polynomial in the left disk. The conformal map $$\varphi(z)$$ from $$\mathbb{C} \setminus [0,1]$$ onto the exterior of the unit circle is used in the global parametrix $$P^{(\infty)}(z)$$, which determines the behaviour of $$p_n(x)$$ away from $$x=0$$ and $$\beta_n$$: \begin{align} \varphi(z) &= 2z-1+2\sqrt{z}\sqrt{z-1} \qquad = \exp(i \theta(z) \arccos(2z-1)), \label{Evarphi} \\ P^{(\infty)}(z) &= \frac{2^{-\alpha\sigma_3} }{2 z^{1/4}(z-1)^{1/4}} \begin{pmatrix} \sqrt{\varphi(z) } & \frac{i}{\sqrt{\varphi(z) } } \\ \frac{-i}{\sqrt{\varphi(z) } } & \sqrt{\varphi(z) } \end{pmatrix} \left(\frac{z^{\alpha/2}}{(\varphi(z))^{\alpha/2}}\right)^{-\sigma_3 }. \nonumber \end{align} (2.6) Finally, the coefficients $$\nu_k$$ and $$(\alpha,m)$$ appear in asymptotics of Airy and modified Bessel functions in the local parametrices: \begin{align} \nu_k &= \left(1-\frac{6k+1}{6k-1}\right) \frac{{\it{\Gamma}}(3k+1/2)}{54^k (k!) {\it{\Gamma}}(k+1/2)} \quad = \frac{-{\it{\Gamma}}(3k-1/2)2^k}{2k 27^k \sqrt{\pi}{\it{\Gamma}}(2k)}, \nonumber \\ (\alpha,m) &= 2^{-2m} (m!)^{-1} \prod_{n=1}^{m}(4 \alpha^2-(2n-1)^2). \nonumber \end{align}

## 3. MRS numbers and related functions

The MRS numbers $$\beta_n$$ satisfy (Vanlessen, 2007, (3.6)) $$2\pi n = \int_0^{\beta_n} Q'(x)\sqrt{\frac{x}{\beta_n-x}}\,{\rm d}x. \label{EbetaInt}$$ (3.1) We will explain how to compute these numbers, and quantities depending on them, for various types of $$Q(x)$$: monomials, more general polynomials and more general analytic functions.

### 3.1 Constant plus monomial $$Q(x)$$

In this case, $$Q(x) = q_0 + q_m x^m$$, although the constant only scales $$\gamma_n$$.
The MRS number is given explicitly by $\beta_n = n^{1/m}\left(m q_m A_m /2\right)^{-1/m}.$ We also recall from Vanlessen (2007) the coefficients $$l_n$$ and polynomials $$H_n$$, with a slight adjustment for $$l_n$$: \begin{align} l_n = & -2/m-4\log(2) -q_0/n, \nonumber \\ H_n(z) = & \frac{4}{2m-1} {}_2F_1(1,1-m;3/2-m;z) = \frac{2}{mA_m}\sum_{k=0}^{m-1} A_{m-1-k}z^k. \label{EHn} \end{align} (3.2) In the classical Laguerre case where $$w(x) = x^\alpha e^{-x}$$, we have \begin{align} H_n(z) &= 4, \nonumber \\ \beta_n &= 4n. \nonumber \end{align} The latter value for $$\beta_n$$ is well known, and it implies that the largest root of the Laguerre polynomial of degree $$n$$ grows approximately like $$4n$$.

### 3.2 General polynomial $$Q(x)$$

For general polynomial $$Q(x)$$, $$\beta_n$$ has an asymptotic expansion with fractional powers, $$\beta_n \sim n^{1/m} \sum_{k=0}^\infty \beta^{1,k}n^{-k/m} \quad \text{as } n \rightarrow \infty. \label{EbetaExpa}$$ (3.3) To compute the coefficients $$\beta^{1,k}$$, we start from the equation at the end of the proof of Vanlessen (2007, Proposition 3.4): $$\sum_{k=0}^m \tfrac k2 q_k A_k \left. \frac{d^j}{d\epsilon^j}(\beta(\epsilon)^k\epsilon^{m-k})\right|_{\epsilon =0}\, = 0 =\, \sum_{k=\max(m-j,1)}^m k q_k A_k \beta^{k,j-m+k}, \label{EstartBeta}$$ (3.4) where we have defined $$\beta(\epsilon)^k \sim \left(\sum_{l=0}^\infty \beta^{1,l} \epsilon^l \right)^k \sim \sum_{l=0}^\infty \beta^{k,l} \epsilon^{l} \nonumber$$ as $$\epsilon$$ approaches zero, where the coefficients can be calculated as $$\beta^{k,l} = \sum_{i=0}^l \beta^{k-1,i} \beta^{1,l-i}. \label{Ebetaa}$$ (3.5) One uses the result (Vanlessen, 2007, (3.8)) $$\beta^{1,0} = (m q_m A_m/2)^{-1/m}, \label{Ebeta0}$$ (3.6) and then recursively computes (3.5) for $$l\leq j$$, starting with $$j=0$$.
Next, (3.4) leads to \begin{align} \beta^{1,j} = & -\left\{ m q_m A_m\left(\left[\left(\beta^{1,0}\right)^{m-2} \sum_{i=1}^{j-1} \beta^{1,j-i} \beta^{1,i}\right] + \sum_{i=0}^{m-3} (\beta^{1,0})^i \sum_{k=1}^{j-1} \beta^{1,j-k} \beta^{m-1-i,k} \right) \right. \nonumber \\ & \left. + \sum_{k=\max(m-j,1)}^{m-1} k q_k A_k \beta^{k,j-m+k} \right\} \left(m^2 q_m A_m [\beta^{1,0}]^{m-1}\right)^{-1} \nonumber \end{align} for $$j=1$$ and so on. We see that $$q_0$$ does not influence the MRS number, since it only rescales the weight function. The construction of $$\beta_n$$ in this section satisfies the condition (3.1) asymptotically up to the correct order, and also the following explicit result from (Vanlessen, 2007, (3.8)): $$\beta^{1,1} = \frac{-2(m-1)q_{m-1}}{m(2m-1)q_m}. \label{Ebeta1}$$ (3.7) With these results, we can compute the polynomials $$H_n(x)$$ and the coefficients $$l_n$$ as (Vanlessen, 2007, Section 3): \begin{align} H_n(z) = & \sum_{k=0}^{m-1} z^k\sum_{j=k+1}^m \frac{q_j}{n} \beta_n^j A_{j-k-1} \qquad \sim \sum_{k=0}^{m-1} z^k\sum_{j=k+1}^m q_j A_{j-k-1} n^{j/m-1} \sum_{i=0}^\infty \beta^{j,i} n^{-i/m} \quad \text{as } n \rightarrow \infty,\nonumber\\ l_n = & -4\log(2) -\sum_{k=0}^m \frac{q_k}{n}\beta_n^k A_k.\label{EHn_nonmon} \end{align} (3.8)

### 3.3 General function $$Q(x)$$

For the calculation of the functions in case $$Q(x)$$ is not a polynomial, we introduce a numerical method. We provide an initial guess $$\beta_n^{(0)} = Q^{-1}(n)$$, which satisfies $$n=Q(\beta_n^{(0)})$$, to an iterative numerical procedure. The procedure finds a $$\beta_n$$ that approximately satisfies (3.1), where the integral is computed by numerical integration. This procedure can be a Newton method, for example, and if needed, $$Q^{-1}(n)$$ and $$Q^\prime(x)$$ can be approximated numerically as well. Care has to be taken when $$Q(x)$$ is nonconvex, since then there may be multiple solutions $$\beta_n$$, which may invalidate our asymptotic results.
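A minimal Python sketch of such a procedure follows, assuming a convex $$Q$$ and using bisection instead of Newton for brevity; the substitution $$x = \beta\sin^2 t$$ removes the endpoint singularity of the integrand in (3.1). The helper names and the crude upper bound for the bracket are ours:

```python
import math

def mrs_residual(beta, n, Qprime):
    """F(beta) = int_0^beta Q'(x) sqrt(x/(beta-x)) dx - 2*pi*n,
    computed with the substitution x = beta*sin(t)^2 (midpoint rule)."""
    N = 2000
    h = math.pi / (2*N)
    total = 0.0
    for j in range(N):
        s = math.sin((j + 0.5)*h)**2
        total += 2.0*beta*Qprime(beta*s)*s
    return total*h - 2.0*math.pi*n

def mrs_number(n, Qprime, lo=1e-8, hi=None):
    """Solve (3.1) for beta_n by bisection; Newton would also do."""
    if hi is None:
        hi = 8.0*n + 8.0            # crude upper bound, enough for our examples
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if mrs_residual(mid, n, Qprime) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

# Q(x) = x gives beta_n = 4n; Q(x) = x^2 gives beta_n = sqrt(8n/3).
assert abs(mrs_number(10, lambda x: 1.0) - 40.0) < 1e-6
assert abs(mrs_number(10, lambda x: 2.0*x) - math.sqrt(80.0/3.0)) < 1e-6
```

Both test values agree with the explicit monomial formula of Section 3.1.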
The other functions are given by the integrals (Vanlessen, 2007, (3.11), (3.16), (3.38), (3.40) and (3.24)) \begin{align} h_n(z) &= \frac{1}{2\pi i}\oint_{{\it{\Gamma}}_z} \frac{\sqrt{y}V_n^\prime(y)\, {\rm d}y}{\sqrt{y-1}(y-z)}, \nonumber \\ \xi_n(z) &= \frac{-1}{2} \int_1^z \frac{\sqrt{y-1}}{\sqrt{y}} h_n(y)\, {\rm d}y, \nonumber \\ l_n &= 2\int_0^1 \frac{\log(|1/2-y|)\sqrt{1-y}}{2\pi\sqrt{y}}h_n(y)\, {\rm d}y -\frac{Q(\beta_n/2)}{n}. \nonumber \end{align} The contour $${\it{\Gamma}}_z$$ for $$h_n$$ should enclose the interval $$[0,1]$$ and the point $$z$$. We choose $${\it{\Gamma}}_z$$ to be a circle with its centre halfway between the interval $$[0,1]$$ and $$z$$, while still enclosing $$z$$ and the interval. The integrals for $$\xi_n(z)$$ and $$l_n$$ are also calculated numerically; since $$h_n$$ itself is defined by an integral, the former amounts to a double numerical integral.

Remark 3.1 These expressions are also valid for polynomial $$Q$$. However, following the reasoning in this subsection only leads to a numerical value of $$\beta_n$$ for a given $$n$$, as opposed to a full asymptotic expansion of $$\beta_n$$ in fractional powers of $$n$$. The same observation holds for the functions defined above. In this case, the powers $$n^{-1/m}$$ are implicitly present in all quantities that involve $$\beta_n$$, whereas the results for polynomial $$Q$$ are more explicit. We compare both approaches further in Remark 6.1 and Section 7.4.

### 3.4 Explicit expressions satisfied by the MRS numbers

Thus far, we have obtained either asymptotic expansions of $$\beta_n$$ or a numerical estimate. The cases in which explicit expressions can be derived are limited, but we provide some helpful expressions in this section.
Inspired by (2.6), we invert that conformal map by changing coordinates to $$x=\beta_n(\varphi+1)^2/(4\varphi)$$ in the integral (3.1): $$\frac{-8\pi i n}{\beta_n} = \int_{\it{\Upsilon}} Q^\prime(x)\frac{(\varphi+1)^2}{\varphi^2} \,{\rm d}\varphi.\nonumber$$ The contour $${\it{\Upsilon}}$$ is half the unit circle, starting at $$\varphi = -1$$ through $$i$$ to $$1$$. Note that $$x$$ is real valued for $$\varphi$$ on this half-circle in the upper half of the complex plane. Hence, it is also real valued when we take the complex conjugate of $$\varphi$$, corresponding to $$\varphi$$ on the half-circle in the lower half of the complex plane. If we assume that $$Q^\prime$$ is real for real arguments, then the integral over the negative half-circle, i.e., from $$-1$$ through $$-i$$ to $$1$$, is the complex conjugate of the integral above. Combining both, we find that $\frac{16\pi i n}{\beta_n} = \int_{\it{\Xi}} Q^\prime(x)\frac{(\varphi+1)^2}{\varphi^2} \,{\rm d}\varphi,$ where $${\it{\Xi}}$$ is a circle enclosing the origin $$\varphi=0$$ in the counterclockwise direction. The described change of variables maps a point $$\varphi$$ in the interior of the unit circle to $$x \in \mathbb{C} \setminus [0,\beta_n]$$. If $$Q$$ is not entire, we may have to subtract additional residues; otherwise, the contour $${\it{\Xi}}$$ encloses a single pole at the origin. In the case where $$Q(x)$$ is a polynomial of degree $$m$$, we obtain from the residue theorem that $$\beta_n$$ is the root of a polynomial of degree $$m$$: \begin{align} \frac{8n}{\beta_n} &= \text{Res} \left( \frac{(\varphi+1)^2}{\varphi^2} \sum_{k=1}^m k q_k \left(\frac{(\varphi+1)^2\beta_n}{4\varphi}\right)^{k-1}, \quad \varphi=0 \right), \nonumber \\ \frac{8n}{\beta_n} &= \left[\sum_{k=2}^{m} k q_k \left(\frac{\beta_n}{4}\right)^{k-1} {2k-2 \choose k-2}\right] + 2\left[\!\sum_{k=1}^m k q_k \left(\frac{\beta_n}{4}\right)^{k-1}\!\! 
{2k-2 \choose k-1} \!\right] + \left[\!\sum_{k=2}^m k q_k \left(\frac{\beta_n}{4}\right)^{k-1} {2k-2 \choose k} \!\!\right]\!, \nonumber \\ 8n &= 2q_1\beta_n + 4\sum_{k=2}^{m} k {2k \choose k} q_k \left(\frac{\beta_n}{4}\right)^k. \label{EexplBeta} \end{align} (3.9) Note that (3.3) gives the asymptotic expansion for large $$n$$ of the relevant zero of the $$m$$th degree polynomial (3.9) in $$\beta_n$$, in which $$n$$ enters only as a factor in the constant coefficient. Exact solutions for $$\beta_n$$ are only available up to $$m=4$$. For $$m=1$$, this boils down to the standard associated Laguerre case $$\beta_n=4n/q_1$$. For $$m=2$$, we take the positive solution that also corresponds to (3.6) and (3.7): $$\beta_n = \frac{-q_1+\sqrt{q_1^2+24q_2 n}}{3q_2} \sim \sqrt{\frac{8n}{3q_2}} -\frac{q_1}{3q_2} +\sqrt{\frac{q_1^4}{864q_2^3 n}} + \mathcal{O}(n^{-3/2}). \nonumber$$ We do find an explicit result for the nonpolynomial function $$Q(x) = \exp(x)$$. In that case, we have \begin{align} 8n &\sim \beta_n \text{Res} \left( \frac{1}{\varphi} \sum_{k=0}^\infty \frac{1}{k!} \left(\frac{\beta_n}{4}\right)^k \left(\varphi^{-1} +2 +\varphi \right)^{k+1}, \varphi=0 \right) \nonumber \\ 8n &\sim \beta_n \sum_{k=0}^\infty \frac{1}{k!} \left(\frac{\beta_n}{4}\right)^k {2k+2 \choose k+1} \nonumber \\ 8n &= 2\beta_n \exp\left(\frac{\beta_n}{2}\right)\left[I_0\left(\frac{\beta_n}{2}\right) + I_1\left(\frac{\beta_n}{2}\right) \right] \label{EexponQbeta} \end{align} (3.10) \begin{align} 4n &\sim e^{\beta_n} \sqrt{\frac{\beta_n}{\pi}} \left[2-\frac{1}{2\beta_n}\right] {\Rightarrow} \beta_n \sim \frac{W(8\pi n^2)}{2} \sim \log(n)-\frac{\log\left(\log(8\pi n^2)\right)}{2}+\frac{\log(8\pi)}{2}, \label{EbetaExp} \end{align} (3.11) where $$W$$ denotes the Lambert-W function, and $$I_0$$ and $$I_1$$ are modified Bessel functions. For general $$Q(x)$$, a similar technique may allow one to find an explicit expression like (3.10) without integrals that is satisfied by $$\beta_n$$. 
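For $$Q(x)=\exp(x)$$, the Bessel-type equation (3.10) can be solved numerically and compared with the Lambert-W approximation in (3.11). A sketch, assuming SciPy's `iv`, `lambertw` and `brentq` (the bracket $$[10^{-8}, 50]$$ is an ad hoc choice that suffices for moderate $$n$$):

```python
import numpy as np
from scipy.special import iv, lambertw
from scipy.optimize import brentq

def beta_exp(n):
    """Solve 8n = 2*b*exp(b/2)*(I0(b/2) + I1(b/2)) for the MRS number b."""
    f = lambda b: 2 * b * np.exp(b / 2) * (iv(0, b / 2) + iv(1, b / 2)) - 8 * n
    return brentq(f, 1e-8, 50.0)

n = 100
b = beta_exp(n)
approx = lambertw(8 * np.pi * n**2).real / 2     # leading Lambert-W approximation
resid = 2 * b * np.exp(b / 2) * (iv(0, b / 2) + iv(1, b / 2)) - 8 * n
```

For $$n=100$$ the root and the Lambert-W approximation agree to within a few percent, consistent with the slowly decaying logarithmic error terms in (3.11).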
Solving that expression numerically avoids having to evaluate the integral (3.1). However, it might become quite involved to derive higher order terms as explicitly as in Sections 6.3 and 6.4 from the resulting expansion of $$\beta_n$$ as $$n \rightarrow \infty$$. 4. Asymptotics of orthonormal polynomials $$\boldsymbol{p_n(x)}$$ and related coefficients The expansion for $$p_n(x)$$ follows from substituting the expansion of the function $$R(z)$$ described in Section 2.2 into the following expressions. Multiple regions of the complex plane are identified in (2.2), leading to three different expressions: $$R^{{\mathrm{outer}}}(z)$$ (see the appendix; combine it with (5.6), or combine (2.3) with (5.9)); $$R^{{\mathrm{right}}}(z)$$, see (5.7) or (5.8); and $$R^{{\mathrm{left}}}(z)$$, see (5.7) or (5.8). 4.1 Lens I Putting together the consecutive transformations in Vanlessen (2007) for $$z \in {\mathrm{I}}$$ in Fig. 1, and noting that $$x$$ and $$z$$ are related by $$x = \beta_n z$$, we obtain \begin{align} p_n(\beta_n z) &= \frac{\beta_n^{n}\gamma_n\, e^{n (V_n(z)+l_n)/2} }{z^{1/4}(1-z)^{1/4} z^{\alpha/2} } \begin{pmatrix} 1 \\ 0 \end{pmatrix}^{\rm T} R^{{\mathrm{outer}}}(z) \label{EpiInt} \\ & \begin{pmatrix} 2^{-\alpha} \cos(\arccos(2z-1)[1/2+\alpha/2] + n\xi_n(z)/i -\pi /4) \\ -i 2^{\alpha} \cos(\arccos(2z-1)[\alpha/2-1/2] + n\xi_n(z)/i -\pi /4) \end{pmatrix}. \nonumber \end{align} (4.1) The asymptotics of $$\gamma_n$$ are given in Section 4.5. The full asymptotic expansion of $$p_n(x)$$ is obtained by substituting the expansion for $$R^{{\mathrm{outer}}}(z)$$ that we derive later on. The asymptotic expansions of the orthonormal polynomials all separate two oscillatory terms (where phase functions are multiplied by $$n$$) from the nonoscillatory higher order terms. For polynomial $$Q(x)$$ of degree $$m$$, the asymptotic expansion truncated after $$T$$ terms corresponds to a relative error of size $$\mathcal{O}(n^{-T/m})$$. 
In the special case of a (constant plus) monomial $$Q(x) = q_0 + q_m x^m$$, the relative error improves to $$\mathcal{O}(n^{-T})$$. 4.2 Outer region II For $$z \in$$ II, the asymptotic expansion is \begin{align} p_n(\beta_n z) &= \frac{\beta_n^{n}\gamma_n\, e^{n (V_n(z)/2+\theta(z)\xi_n(z)+l_n/2)} \exp(i\theta(z)\arccos(2z-1)\alpha/2) }{2 z^{1/4}(z-1)^{1/4}z^{\alpha/2}}\begin{pmatrix} 1 \\ 0 \end{pmatrix}^{\rm T} R^{{\mathrm{outer}}}(z) \label{EpiOut} \\ & \begin{pmatrix} 2^{-\alpha} \exp(i\theta(z)\arccos(2z-1)/2) \\ -i 2^{\alpha}\exp(-i\theta(z)\arccos(2z-1)/2) \end{pmatrix}. \nonumber \end{align} (4.2) It may appear to be problematic that $$\exp(Q(\beta_n z)/2) = \exp(n V_n(z)/2)$$ appears in the asymptotic expansions of the polynomials in the complex plane, especially for this region, since this factor grows very quickly. However, one may verify that this exponential behaviour is cancelled out with other terms. More specifically, the term $$\exp(n\xi_n(z))$$ in (4.2) ensures that $$p_n(x) = \mathcal{O}(x^n)$$, $$x \rightarrow \infty$$. 4.3 Right disk III The polynomials behave like an Airy function near the right endpoint $$z=1$$ ($$z \in {\mathrm{III}}$$). This is typical asymptotic behaviour near a so-called ʻsoft edgeʼ, in the language of random matrix theory. 
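The soft-edge (Airy) regime governs the location of the largest zeros. For the classical Laguerre polynomials, the largest zero approaches the Airy-type estimate discussed below in this subsection; a quick numerical check, assuming SciPy's `roots_genlaguerre` and `ai_zeros`:

```python
import numpy as np
from scipy.special import roots_genlaguerre, ai_zeros

n, alpha = 100, 0.0
nodes, _ = roots_genlaguerre(n, alpha)   # zeros of L_n^alpha = Gauss-Laguerre nodes
largest = nodes.max()

a1 = ai_zeros(1)[0][0]                   # Airy zero closest to the origin, ~ -2.33811
nu = 4 * n + 2 * alpha + 2
# Soft-edge estimate for the largest zero: nu + 2^(2/3) a1 nu^(1/3)
estimate = nu + 2 ** (2 / 3) * a1 * nu ** (1 / 3)
rel = abs(largest - estimate) / largest
```

The agreement is already at the sub-percent level for $$n=100$$; the estimate sits a distance $$\mathcal{O}(n^{1/3})$$ below the edge $$4n+2\alpha+2$$, which is the scale of the Airy transition region.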
Note that the $$\theta(z)$$ in the following expression removes the branch cut, so that it can be used throughout $$\mathbb{C}$$, away from $$z=0$$: \begin{align} p_n(\beta_n z) &= \gamma_n \beta_n^{n} \frac{z^{-\alpha/2} \sqrt{\pi}}{z^{1/4} (z-1)^{1/4} }\, e^{n(V_n(z) + l_n)/2} \begin{pmatrix} 1 \\ 0 \end{pmatrix}^{\rm T} R^{{\mathrm{right}}}(z) \label{Epiboun} \\ & \begin{pmatrix} 2^{-\alpha}\left\{\cos\left[\frac{(\alpha+1)\arccos(2z-1)}{2} \right] Ai(f_n(z)) f_n(z)^{1/4} -i\sin\left[\frac{(\alpha+1)\arccos(2z-1)}{2} \right] Ai'(f_n(z))f_n(z)^{-1/4} \theta(z)\right\} \\ 2^\alpha \left\{-i\cos\left[\frac{(\alpha-1)\arccos(2z-1)}{2} \right] Ai(f_n(z)) f_n(z)^{1/4} -\sin\left[\frac{(\alpha-1)\arccos(2z-1)}{2}\right] Ai'(f_n(z)) f_n(z)^{-1/4} \theta(z) \right\}\end{pmatrix}. \nonumber \end{align} (4.3) We would like to note a possible issue when computing the zeros of this expression, as one would do for Gaussian quadrature. The largest root for the standard associated Laguerre polynomials asymptotically behaves as $$4n+2\alpha+2 + 2^{2/3}a_1(4n+2\alpha+2)^{1/3}$$, with $$a_1$$ the (negative) zero of the Airy function closest to zero (see Szegő (1967, (6.32.4)); Olver et al. (2010, (18.16.14)); NIST (2016)). For a fixed but relatively high $$\alpha$$, this point may lie outside the support of the equilibrium measure $$(0,4n)$$ (Vanlessen, 2007, Remark 3.8). There will always be a larger $$n$$ for which the point lies inside. Still, for large $$\alpha$$ one may want to pursue a different kind of asymptotic expansion, for example, using asymptotics with a varying weight $$x^{\alpha(n)}\exp(-Q(x))$$, as studied in, for example, Bosbach & Gawronski (1998), and apply it to the fixed $$\alpha$$. 4.4 Left disk IV The polynomials behave like a Bessel function of order $$\alpha$$ near the left endpoint $$z=0$$. 
For $$z \in$$ IV, we obtain \begin{align} & p_n(\beta_n z) = \gamma_n \beta_n^{n} \frac{(-1)^n \left(in\bar{\phi}_n(z)\pi\right)^{1/2} }{z^{1/4} (1-z)^{1/4}} z^{-\alpha/2} \,e^{n(V_n(z) + l_n)/2} \begin{pmatrix} 1 \\ 0 \end{pmatrix}^{\rm T} R^{{\mathrm{left}}}(z) \label{Epilboun} \\ & \begin{pmatrix} 2^{-\alpha} \left\{ \sin\left[\frac{(\alpha+1)\arccos(2z-1)}{2} -\frac{\pi\alpha}{2}\right] J_\alpha\left(2in \bar{\phi}_n(z)\right) + \cos\left[\frac{(\alpha+1)\arccos(2z-1)}{2} -\frac{\pi\alpha}{2}\right] J_\alpha'\left(2in \bar{\phi}_n(z)\right) \right\} \\ -i 2^\alpha \left\{ \sin\left[ \frac{(\alpha-1)\arccos(2z-1)}{2} -\frac{\pi\alpha}{2}\right] J_\alpha\left(2in \bar{\phi}_n(z)\right) + \cos\left[\frac{(\alpha-1)\arccos(2z-1)}{2} -\frac{\pi\alpha}{2}\right] J_\alpha'\left(2in \bar{\phi}_n(z)\right) \right\} \end{pmatrix}. \nonumber \end{align} (4.4) It is not immediately obvious that the expansions (4.3) and (4.4) are analytic in the points $$z=1$$ and $$z=0$$, respectively. This will follow from the expression (5.7) for $$R^{{\mathrm{left}}/{\mathrm{right}}}(z)$$ and by also making a series expansion of the other terms at those points. For numerical purposes, it may be better to use those series expansions when evaluating close to (or at) $$z=0$$ and $$z=1$$. 4.5 Asymptotics of leading order coefficients The leading order coefficient of the orthonormal polynomials is $$\gamma_n$$, i.e., we have $p_n(x) = \gamma_n\pi_n(x),$ where $$\pi_n(x)$$ is the monic orthogonal polynomial of degree $$n$$. For a monomial or more general function $$Q(x)$$, the asymptotic expansion of $$\gamma_n$$ is \begin{align} \gamma_n & \sim \frac{\beta_n^{-n-\alpha/2-1/2}\exp(-n l_n/2) \sqrt{\frac{2}{\pi}} 2^{\alpha} } {\sqrt{1-4i4^{\alpha} \sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2} } {n^k} } } \quad \text{as } n \rightarrow \infty. 
\label{Egamma} \end{align} (4.5) The quantities $$U_{k,1}^{{\mathrm{right}}/{\mathrm{left}}}$$ are defined and extensively described in Section 5. They are the constant $$2\times 2$$ matrices that multiply $$(z - 1)^{-1} n^{-k}$$ and $$z^{-1}n^{-k}$$, respectively, in the expansion for $$R(z)$$, of which we use the upper right elements here. Explicit expressions for these matrices up to $$k=3$$ are given in the appendix. The constant coefficient $$q_0$$ only changes the scaling of the weight function and influences neither $$\beta_n$$ nor the matrices. However, it does influence $$\gamma_n$$ through the coefficient $$l_n$$, contributing a multiplicative factor $$\exp(q_0/2)$$ to $$\gamma_n$$. For general polynomial $$Q(x)$$, the power of $$n$$ changes from $$k$$ to $$(k-1)/m+1$$: \begin{align} \gamma_n & \sim \frac{\beta_n^{-n-\alpha/2-1/2}\exp(-n l_n/2) \sqrt{\frac{2}{\pi}} 2^{\alpha} } {\sqrt{1-4i4^{\alpha} \sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2} } {n^{(k-1)/m+1} } } } \quad \text{as } n \rightarrow \infty. \nonumber \end{align} This reflects the more accurate asymptotic information about $$\beta_n$$ available for polynomial $$Q$$. It is understood here that one substitutes the asymptotic expansion of $$\beta_n$$ in this formula. We retain this formulation here to show the analogy with (4.5). 
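The leading term of (4.5) can be checked against the classical Laguerre weight, for which $$\gamma_n = 1/\sqrt{n!\,\Gamma(n+\alpha+1)}$$ exactly. The sketch below assumes $$\beta_n = 4n$$ and the value $$l_n = -2-4\log 2$$, which we obtain from the integral for $$l_n$$ with $$h_n \equiv 4$$ (this value is our derivation, not stated in the text); the leading term then matches with a relative error of $$\mathcal{O}(1/n)$$.

```python
import math

def gamma_exact(n, alpha):
    # Orthonormal Laguerre leading coefficient: 1 / sqrt(n! * Gamma(n+alpha+1))
    return math.exp(-0.5 * (math.lgamma(n + 1) + math.lgamma(n + alpha + 1)))

def gamma_leading(n, alpha):
    # Leading term of (4.5) for Q(x) = x: beta_n = 4n, l_n = -2 - 4 log 2 (assumed)
    beta, l = 4.0 * n, -2.0 - 4.0 * math.log(2.0)
    logg = (-(n + alpha / 2 + 0.5) * math.log(beta) - n * l / 2
            + 0.5 * math.log(2.0 / math.pi) + alpha * math.log(2.0))
    return math.exp(logg)

n, alpha = 100, 0.5
rel = abs(gamma_leading(n, alpha) / gamma_exact(n, alpha) - 1.0)
```

Working with logarithms throughout avoids overflow: both quantities are of the order $$10^{-159}$$ for $$n=100$$, yet their ratio is computed stably.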
4.6 Asymptotics of recurrence coefficients In the three-term recurrence relation (1.1), the recurrence coefficients have the following large $$n$$ asymptotic expansion \begin{align} & a_n \sim \beta_n \left[\frac{-\alpha}{4} + \vphantom{\frac{ \frac{4^{-\alpha}i(\alpha+2)}{16} + \left(\sum_{k=3}^\infty\frac{U_{k,2}^{{\mathrm{left}}}|_{1,2} }{n^k} \right) + \left(\sum_{k=1}^\infty\frac{(U_{k,2}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{right}}} )|_{1,2} }{n^k} \right) + \left(\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,1} 4^{-\alpha-1}i + (U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2}\alpha/4 }{n^k} \right) } {4^{-\alpha-1}i + \left(\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2} }{n^k}\right) }} \left\{\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,1} }{n^k}\right\} \right.\nonumber \\ &+ \left. \frac{ \frac{4^{-\alpha}i(\alpha+2)}{16} + \left(\!\sum_{k=3}^\infty\frac{U_{k,2}^{{\mathrm{left}}}|_{1,2} }{n^k} \!\right) + \left(\!\sum_{k=1}^\infty\frac{(U_{k,2}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{right}}} )|_{1,2} }{n^k} \!\right) + \left(\!\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,1} 4^{-\alpha-1}i + (U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2}\alpha/4 }{n^k} \! \right) } {4^{-\alpha-1}i + \left(\!\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2} }{n^k}\!\right)}\! \right] \nonumber \end{align} and \begin{align}\nonumber b_{n-1} \sim \frac{\beta_n}{4} & \left[1+4i\left(\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{2,1}4^{-\alpha} -4^\alpha (U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2} }{n^k}\right) \right. \nonumber \\ & \quad{}+\left. 
16\left(\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{2,1} }{n^k}\right) \left(\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2} }{n^k}\right) \right]^{1/2}. \nonumber \end{align} The quantities $$U_{k,1}^{{\mathrm{right}}/{\mathrm{left}}}$$ in these expressions are the same as those appearing in (4.5) above. For general polynomial $$Q(x)$$, the powers of $$n$$ again change from $$k$$ to $$(k-1)/m+1$$ and the expression is otherwise unchanged. 5. Computation of higher order terms The literature on Riemann–Hilbert problems suggests a way to compute the asymptotic expansion of $$R(z)$$. In principle, it is clear how expressions can be obtained, but this involves many algebraic manipulations, summations and recursions. An important contribution of Deaño et al. (2016) was to identify a set of simplifications that significantly improve the efficiency of numerical evaluation of the expressions. Similar simplifications can be performed in the current setting of Laguerre-type polynomials, though the expressions are of course very different. In this section, as well as in the next one, we will use the symbol $$\sim$$ interchangeably for Poincaré, Taylor and Laurent series expansions and their combinations. Moreover, as it should be clear from the context that $$n \rightarrow \infty, w \rightarrow 0$$ and $$z \rightarrow 1$$ or $$0$$, this limit is often omitted for the sake of brevity. 5.1 Jumps of $$R(z)$$ Section 1 indicated that higher order terms in the asymptotic expansion for $$\pi_n(z)$$ are of practical interest, and the previous section showed that these can be obtained by computing the higher order terms $$R_k(z)$$ in (2.3). To that end, we recall that $$R$$ satisfies a Riemann–Hilbert problem with jumps across the contours shown in Fig. 1. We proceed as in Deaño et al. (2016) by writing the jump matrix for $$R(z)$$ as a perturbation of the identity matrix, $$I+{\it{\Delta}}(z)$$. 
Starting from Vanlessen (2007, (3.108)) we have, for $$z$$ on the boundary $${\it{\Sigma}}_R$$ of one of the disks as shown in Section 2, $$\label{Ejump2} R^{{\mathrm{outer}}}(z) = R^{{\mathrm{right}}/{\mathrm{left}}}(z)(I+{\it{\Delta}}^{{\mathrm{right}}/{\mathrm{left}}}(z)), \qquad z \in {\it{\Sigma}}_R.$$ (5.1) We then consider a full asymptotic expansion in powers of $$1/n$$ for $${\it{\Delta}}(z)$$: $$\nonumber {\it{\Delta}}(z) \sim \sum_{k=1}^{\infty} \frac{{\it{\Delta}}_k(z)}{n^k}, \qquad n\to\infty.$$ On the boundary of the disks, the terms $${\it{\Delta}}_k(z)$$ can be written explicitly as $${\it{\Delta}}_k^{{\mathrm{right}}/{\mathrm{left}}}(z)$$ (Vanlessen, 2007, (3.76), (3.98)): \begin{align} {\it{\Delta}}_k^{{\mathrm{right}}}(z) & = \frac{P^{(\infty)}(z)z^{\alpha/2\sigma_3}}{2\left(- \xi_n(z) \right)^k} \begin{pmatrix} (-1)^k\nu_k & -6k i\nu_k \\ 6k i(-1)^k\nu_k & \nu_k \end{pmatrix} z^{-\alpha/2\sigma_3} P^{(\infty)}(z)^{-1}, \quad 0 < |z-1| < \delta_2, \label{EdefDR} \\ \end{align} (5.2) \begin{align} {\it{\Delta}}_k^{{\mathrm{left}}}(z) & = \frac{(\alpha,k{-}1)}{\left(4\bar{\phi}_n(z) \right)^k}P^{(\infty)}(z) (-z)^{\alpha/2\sigma_3} \begin{pmatrix} \tfrac{(-1)^k}{k}(\alpha^2{+}\tfrac{k}{2} {-} \tfrac{1}{4}) & \left(k{-}\tfrac{1}{2}\right)i \\ (-1)^{k+1}\left(k-\tfrac{1}{2}\right)i & \tfrac{1}{k}(\alpha^2{+}\tfrac{k}{2} {-} \tfrac{1}{4}) \end{pmatrix} (-z)^{-\alpha/2\sigma_3} P^{(\infty)}(z)^{-1}, \label{EdefDL} \end{align} (5.3) with $$0 < |z| < \delta_3$$ for some sufficiently small $$\delta_2$$ and $$\delta_3 > 0$$. However, for a general polynomial or function $$Q(x)$$, the $${\it{\Delta}}_k^{{\mathrm{right}}/{\mathrm{left}}}(z)$$ also depend on $$n$$. We will extract the $$n$$-dependence explicitly for general polynomial $$Q(x)$$ and use contour integrals for each required $$n$$ otherwise. 
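The Laurent coefficients needed in the nonpolynomial case can be extracted from such contour integrals numerically. A generic sketch (the function `laurent_coeffs` is ours), using the trapezoidal rule on a small circle and checked on a function with a known Laurent expansion:

```python
import numpy as np

def laurent_coeffs(f, center, radius, imin, imax, N=256):
    """Laurent coefficients c_i = (1/2*pi*i) * integral of f(z) (z-center)^(-i-1) dz,
    computed with the trapezoidal rule (equivalently an FFT) on a circle."""
    t = 2 * np.pi * np.arange(N) / N
    w = radius * np.exp(1j * t)                  # z - center on the circle
    vals = f(center + w)
    return {i: np.mean(vals * w ** (-i)) for i in range(imin, imax + 1)}

# Toy check: f(z) = exp(z)/z^2 has c_{-2} = 1, c_{-1} = 1, c_0 = 1/2, c_1 = 1/6.
c = laurent_coeffs(lambda z: np.exp(z) / z**2, 0.0, 0.5, -2, 2)
```

Since the integrand is analytic in an annulus around the circle, the trapezoidal rule again converges exponentially, and the aliasing error of the discrete transform is far below machine precision here.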
$${\it{\Delta}}_k^{{\mathrm{left}}}(z)$$ has poles of order at most $$\lceil k/2 \rceil$$ at $$z=0$$ (Vanlessen, 2007, Remark 3.29) as in the Jacobi case in Deaño et al. (2016), but $${\it{\Delta}}_k^{{\mathrm{right}}}(z)$$ has poles of order at most $$\lceil 3k/2 \rceil$$ at $$z=1$$ (Vanlessen, 2007, Remark 3.22). The $${\it{\Delta}}_k(z)$$ are identically $$0$$ on the other boundaries of the regions in Fig. 1. Remark 5.1 If $$\alpha^2 = 1/4$$ as in the Hermite case (see Section 7.2), then $${\it{\Delta}}^{{\mathrm{left}}}_k(z)$$ and $$s^{{\mathrm{left}}}_k(z)$$ are zero matrices for $$k > 1$$ and $${\it{\Delta}}^{{\mathrm{left}}}_1(z)$$ and $$s^{{\mathrm{left}}}_1(z)$$ have a Taylor series starting with $$\mathcal{O}(1)$$ near $$z=0$$. So, all $$U_{k,m}^{{\mathrm{left}}}$$ are zero matrices and can be left out of the calculation of higher order terms, which is still needed as the $$U_{k,m}^{{\mathrm{right}}}$$ are not zero. 5.2 Recursive computation of $$R_k(z)$$ for monomial $$Q(x)$$ In this case, there are no fractional powers of $$n$$ involved, and we can renumber (2.3) to simplify the formulas: $$\label{asympRnmon} R(z)\sim I + \sum_{k=1}^{\infty} \frac{R_k(z)}{n^{k}}, \qquad n \rightarrow \infty.$$ (5.4) By expanding the jump relation (5.1) and collecting the terms with equal order in $$n$$, we obtain a link between the terms $$R_k(z)$$ in the expansion (2.3) and the $${\it{\Delta}}_k$$. For $$z$$ on the boundary of the disks in Fig. 1, we have $$\label{RHPforRk} R_k^{{\mathrm{outer}}}(z)=R_k^{{\mathrm{right}}/{\mathrm{left}}}(z)+\sum_{j=1}^k R^{{\mathrm{right}}/{\mathrm{left}}}_{k-j}(z){\it{\Delta}}^{{\mathrm{right}}/{\mathrm{left}}}_j(z)$$ (5.5) with $$R_0^{{\mathrm{right}}/{\mathrm{left}}}(z)=I$$. One can solve the additive Riemann–Hilbert problem as follows: Expand the sum in (5.5) in a Laurent series around $$z=0$$ and $$1$$. Define $$R_k^{{\mathrm{outer}}}(z)$$ as the sum of all the terms containing strictly negative powers of $$z$$ and $$(z- 1)$$. 
Since $$R_k(z) = \mathcal{O}(1/z)$$ as $$z \rightarrow \infty$$, nonnegative powers do not contribute to $$R_k^{{\mathrm{outer}}}(z)$$. Define $$R^{{\mathrm{right}}}_k(z)$$ as the remainder after subtracting those poles. This construction ensures that $$R_k^{{\mathrm{outer}}}$$ is analytic outside the disk, $$R^{{\mathrm{right}}}_k$$ is analytic inside and (5.5) holds, as required. According to Vanlessen (2007, Remarks 3.22 and 3.29), we may write $$\nonumber {\it{\Delta}}_k^{{\mathrm{right}}/{\mathrm{left}}}(z) \sim \sum_{i=-\lceil 3k/2 \rceil }^\infty V_{k,i}^{{\mathrm{right}}/{\mathrm{left}}} (z -1/2 \mp 1/2)^i,$$ with $$V_{k,p}^{{\mathrm{left}}} \equiv 0$$ for all $$p < -\lceil k/2 \rceil$$. Note that $$V_{k,i}^{{\mathrm{right}}/{\mathrm{left}}}$$ are the Laurent coefficients of $${\it{\Delta}}_k^{{\mathrm{right}}/{\mathrm{left}}}$$ around $$z=1$$ and $$z=0$$, respectively. With $$U_{k,p}^{{\mathrm{left}}} \equiv 0$$ for all $$p > \lceil k/2 \rceil$$, this yields $$\label{ERpl} R_{k}^{{\mathrm{outer}}}(z) =\sum_{p=1}^{\lceil 3k/2 \rceil } \left(\frac{U_{k,p}^{{\mathrm{right}}} }{(z-1)^p} + \frac{U_{k,p}^{{\mathrm{left}}} }{z^p} \right).$$ (5.6) At the same time, since $$R^{{\mathrm{right}}/{\mathrm{left}}}_k(z)$$ are analytic at $$z=1$$ (respectively, $$0$$), $$R^{{\mathrm{right}}}_k(z) \sim \sum_{n=0}^\infty Q_{k,n}^{{\mathrm{right}}} (z-1)^n, \quad R^{{\mathrm{left}}}_k(z) \sim \sum_{n=0}^\infty Q_{k,n}^{{\mathrm{left}}} z^n, \label{ERrlseries}$$ (5.7) with some coefficients $$Q^{{\mathrm{right}}/{\mathrm{left}}}_{k,n}$$ that can be determined as well, for example, via symbolic differentiation. 
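The splitting into principal parts at $$z=0$$ and $$z=1$$ plus an analytic remainder can be illustrated on a scalar toy function (the function `principal_part` and the example jump term below are ours, not the actual $${\it{\Delta}}_k$$):

```python
import numpy as np

def principal_part(f, pole, order, radius=0.3, N=256):
    """Coefficients U_p of sum_{p=1}^{order} U_p/(z-pole)^p via contour integrals."""
    t = 2 * np.pi * np.arange(N) / N
    w = radius * np.exp(1j * t)
    vals = f(pole + w)
    return [np.mean(vals * w**p) for p in range(1, order + 1)]

# Toy jump term with a double pole at 0 and a simple pole at 1:
D = lambda z: np.exp(z) / z**2 + 2.0 / (z - 1)
U0 = principal_part(D, 0.0, 2)   # expect [1, 1]: e^z/z^2 = 1/z^2 + 1/z + 1/2 + ...
U1 = principal_part(D, 1.0, 1)   # expect [2]
Rout = lambda z: U0[0] / z + U0[1] / z**2 + U1[0] / (z - 1)
# D - Rout is analytic at 0 and 1; near 0 it tends to the Taylor value 1/2:
remainder_near_0 = D(1e-3) - Rout(1e-3)
```

Here `Rout` plays the role of $$R_k^{{\mathrm{outer}}}$$ (all strictly negative powers at both points) and `D - Rout` the role of the analytic pieces $$R_k^{{\mathrm{right}}/{\mathrm{left}}}$$.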
It follows from the additive jump relation (5.5) that \begin{align} U_{k,p}^{{\mathrm{right}}/{\mathrm{left}}} = & \hspace{1.5mm} V_{k,-p}^{{\mathrm{right}}/{\mathrm{left}}} + \sum_{j=1}^{k-1}\sum_{l=0}^{\lceil 3j/2 \rceil -p} Q_{k-j,l}^{{\mathrm{right}}/{\mathrm{left}}} V_{j,-p-l}^{{\mathrm{right}}/{\mathrm{left}}}, \nonumber \\ Q_{k,n}^{{\mathrm{right}}/{\mathrm{left}}} = & \left(\sum_{i=1}^{\lceil 3k/2 \rceil} {-i \choose n} (\pm 1)^{i+n} U_{k,i}^{{\mathrm{left}}/{\mathrm{right}}} \right) \nonumber \\ & -V_{k,n}^{{\mathrm{right}}/{\mathrm{left}}} -\sum_{j=1}^{k-1}\sum_{l=0}^{\lceil 3j/2 \rceil +n} Q_{k-j,l}^{{\mathrm{right}}/{\mathrm{left}}} V_{j,n-l}^{{\mathrm{right}}/{\mathrm{left}}}. \nonumber \end{align} Here, the lower sign applies to the left case: the term $$(- 1)^{i+n} U_{k,i}^{{\mathrm{right}}}$$ appears in the formula for $$Q_{k,n}^{{\mathrm{left}}}$$. In Section 5.4, we will explore an alternative way to compute the matrices $$U_{k,m}^{{\mathrm{right}}/{\mathrm{left}}}$$. 5.3 Recursive computation of $$R_k(z)$$ for general polynomial $$Q(x)$$ For general polynomial $$Q(x)$$, the $${\it{\Delta}}_k(z)$$ also depend on $$n$$, so we need fractional powers of $$n$$. We introduce Laurent coefficients with an extra index, indicating the power of $$n^{-1/m}$$, $$\nonumber {\it{\Delta}}^{{\mathrm{right}}/{\mathrm{left}}}_k(z) \sim \sum_{l=0}^\infty \left[\sum_{i=-\lceil 3k/2 \rceil}^\infty V_{k,i,l}^{{\mathrm{right}}/{\mathrm{left}}}(z -1/2 \mp 1/2)^{i} \right] n^{-l/m}.$$ In this case, we do have a general expansion for $$R$$ in terms of fractional powers, given earlier by (2.3). We arrive at Taylor series and Laurent expansions of the form: \begin{align} R^{{\mathrm{right}}/{\mathrm{left}}}_k(z) &\sim \sum_{n=0}^\infty Q_{k,n}^{{\mathrm{right}}/{\mathrm{left}}} (z -1/2 \mp 1/2)^n, \label{ERQnonmon} \end{align} (5.8) \begin{align} R^{{\mathrm{outer}}}_k(z) & = \sum_{p=1}^{\lceil 3/2 \lceil k/m\rceil \rceil } \left(\frac{U_{k,p}^{{\mathrm{right}}} }{(z-1)^p} + \frac{U_{k,p}^{{\mathrm{left}}} }{z^p} \right). 
\label{ERUnonmon} \end{align} (5.9) Here, $$p$$ corresponds to the order of the pole and must be $$\leq \lceil 3/2 \lceil k/m\rceil \rceil$$. This is because that is the highest order of the pole of the $${\it{\Delta}}^{{\mathrm{left}}/{\mathrm{right}}}_q(z)$$ matrices which appear in the expansion up to $$\mathcal{O}\left(n^{\tfrac{k-1}{m}+1}\right)$$ of the jump relation \begin{align} I + & \sum_{k=1}^{\infty} \frac{R_k^{{\mathrm{outer}}}(z)}{n^{\tfrac{k-1}{m}+1} } \sim \left(I + \sum_{k=1}^{\infty} \frac{R_k^{{\mathrm{right}}/{\mathrm{left}}}(z)}{n^{\tfrac{k-1}{m}+1} }\right) \left(I+\sum_{q=1}^{\infty} \frac{{\it{\Delta}}_q(z)}{n^q}\right) \quad \text{as } n \rightarrow \infty. \nonumber \end{align} Expanding near $$z=0$$ or $$1$$ and collecting terms with the same (fractional) power of $$n$$ in the jump relation (5.1), we obtain, after some more algebraic manipulations, \begin{align} U_{k,p}^{{\mathrm{right}}/{\mathrm{left}}} = & \left[\sum_{q=\lceil (2p-1)/(2\pm 1) \rceil}^{(k-1)/m +1} V_{q,-p,k-1+m-qm}^{{\mathrm{right}}/{\mathrm{left}}} \right] + \left[ \sum_{q=1}^{(k-1)/m} \sum_{l=0}^{k-1-mq} \sum_{i=-\lceil (2\pm 1)q/2 \rceil}^{-p} Q_{k-l-mq,-i-p}^{{\mathrm{right}}/{\mathrm{left}}} V_{q,i,l}^{{\mathrm{right}}/{\mathrm{left}}} \right]\!, \nonumber \\ Q_{j,n}^{{\mathrm{right}}/{\mathrm{left}}} = & \left[\sum_{p=1}^{\lceil (2 \mp 1)/2 \lceil j/m\rceil \rceil } U_{j,p}^{{\mathrm{left}}/{\mathrm{right}}} {-p \choose n} (\pm 1)^{n-p} \right] - \left[ \sum_{q=1}^{(j-1)/m+1} V_{q,n,j-1+m+qm}^{{\mathrm{right}}/{\mathrm{left}}} \right] \nonumber \\ & -\left[ \sum_{q=1}^{(j-1)/m} \sum_{l=0}^{j-1-qm} \sum_{i=-\lceil (2 \pm 1) q/2 \rceil}^n Q_{j-l-qm,n-i}^{{\mathrm{right}}/{\mathrm{left}}} V_{q,i,l}^{{\mathrm{right}}/{\mathrm{left}}} \right]\!. \nonumber \end{align} 5.4 Simplifications We start by writing the jump relation (5.5) using the coefficients $$R^{{\mathrm{outer}}}_{k-m}(z)$$ instead of $$R^{{\mathrm{right}}/{\mathrm{left}}}_{k-m}(z)$$. 
Proposition 5.2 The jump relation (5.5) can be written as follows: $$\label{E:simplifiedjump} R^{{\mathrm{right}}/{\mathrm{left}}}_k(z) = R_k^{{\mathrm{outer}}}(z) - \sum_{l=1}^k R_{k-l}^{{\mathrm{outer}}}(z)s^{{\mathrm{right}}/{\mathrm{left}}}_l(z)$$ (5.10) with $$R_{0}^{{\mathrm{right}}/{\mathrm{left}}}(z) = I$$ and with $$\label{EcheckS} s^{{\mathrm{right}}/{\mathrm{left}}}_l(z) = {\it{\Delta}}^{{\mathrm{right}}/{\mathrm{left}}}_l(z) - \sum_{j=1}^{l-1} s^{{\mathrm{right}}/{\mathrm{left}}}_j(z){\it{\Delta}}^{{\mathrm{right}}/{\mathrm{left}}}_{l-j}(z).$$ (5.11) Proof. This can be proven by induction as in Deaño et al. (2016, Section 4.1). Equivalently, if we define $$s(z) = \sum_{i=1}^\infty s_i(z) n^{-i}$$, the relation between the sets of coefficient matrices expressed by (5.11) is simply that $$I+{\it{\Delta}}(z) = [I-s(z)]^{-1}$$. Indeed, (5.10) expresses the jump relation $$R^{{\mathrm{right}}/{\mathrm{left}}}(z) = R^{{\mathrm{outer}}}(z) [I-s(z)]$$, just as (5.5) expresses the jump relation (5.1). The result also follows by matching equal powers of $$n$$ in this relation. □ This formulation has two advantages: The jump term in (5.10) is written in terms of $$R_{k-l}^{{\mathrm{outer}}}$$ rather than $$R^{{\mathrm{right}}}_{k-l}$$, and the former has a simple and nonrecursive expression (5.6). The definition of the coefficients $$s_l^{{\mathrm{right}}/{\mathrm{left}}}$$ can be greatly simplified to a nonrecursive expression too, involving just the $${\it{\Delta}}_k^{{\mathrm{right}}/{\mathrm{left}}}$$’s. The latter property follows rather generically from the structure of the $${\it{\Delta}}_k^{{\mathrm{right}}/{\mathrm{left}}}$$’s. 
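The series relation $$I+{\it{\Delta}}(z)=[I-s(z)]^{-1}$$ behind Proposition 5.2 is easy to test numerically: generate arbitrary coefficient matrices $${\it{\Delta}}_k$$ (random stand-ins, not the actual $${\it{\Delta}}_k^{{\mathrm{right}}/{\mathrm{left}}}$$), run the recursion (5.11), and compare order by order with the Neumann series of $$[I+{\it{\Delta}}]^{-1}$$:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 6
Delta = [rng.standard_normal((2, 2)) for _ in range(K)]  # stand-ins for Delta_1..Delta_K

# Recursion (5.11): s_l = Delta_l - sum_{j=1}^{l-1} s_j Delta_{l-j}
s = []
for l in range(1, K + 1):
    acc = Delta[l - 1].copy()
    for j in range(1, l):
        acc = acc - s[j - 1] @ Delta[l - j - 1]
    s.append(acc)

def conv(A, B):
    """Order-l coefficients (l = 1..K) of the product of two series with no order-0 term."""
    return [sum((A[i - 1] @ B[l - i - 1] for i in range(1, l)), np.zeros((2, 2)))
            for l in range(1, K + 1)]

# s = I - (I+Delta)^{-1} = Delta - Delta^2 + Delta^3 - ..., collected order by order
power, s_check, sign = Delta, [np.zeros((2, 2)) for _ in range(K)], 1.0
for _ in range(K):
    s_check = [sc + sign * p for sc, p in zip(s_check, power)]
    power, sign = conv(power, Delta), -sign

err = max(np.abs(a - b).max() for a, b in zip(s, s_check))
```

Both constructions agree to machine precision, confirming that (5.11) is just the order-by-order inversion of $$I+{\it{\Delta}}(z)$$.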
Lemma 5.3 For any invertible matrix function $$X(z)$$, scalar functions $$a_k(z)$$ and coefficients $$b_k$$ and $$c_k$$ such that $$a_{2k}(z) b_{2k} = (-1)^k a_k(z)^2 \left[b_k^2 + c_k^2 \right] + \sum_{j=1}^{k-1} (-1)^j a_j(z) a_{2k-j}(z) \left[b_j b_{2k-j} + c_j c_{2k-j}\right]\!, \label{Esumabc}$$ (5.12) the following holds \begin{align} s^{\text{gen}}_m(z) = {\it{\Delta}}^{\text{gen}}_m(z) - \sum_{j=1}^{m-1} s^{\text{gen}}_j(z){\it{\Delta}}^{\text{gen}}_{m-j}(z) = {\it{\Delta}}^{\text{gen}}_m(z) + \begin{cases} 0, & \text{for odd } m \\ -2b_m a_m(z) I, & \text{for even } m, \end{cases} \label{EsumSl} \end{align} (5.13) where $${\it{\Delta}}^{\text{gen}}_k(z) = a_k(z) X(z) \begin{pmatrix} (-1)^k b_k & c_k \\ (-1)^{k+1} c_k & b_k \end{pmatrix} X^{-1}(z). \label{EformD}$$ (5.14) Proof. The result (5.13) is equivalent to (5.12) by rewriting the sum, as a direct computation shows that \begin{align*} C_{j,m-j} = s^{\text{gen}}_j(z){\it{\Delta}}^{\text{gen}}_{m-j}(z) + s^{\text{gen}}_{m-j}(z){\it{\Delta}}^{\text{gen}}_{j}(z) = \begin{cases} 0, & \text{for odd } m \\ -2 (-1)^j a_j(z) a_{m-j}(z) \left[b_j b_{m-j} + c_j c_{m-j}\right] I, & \text{for even } m. \end{cases} \end{align*} □ In our case, (5.2) and (5.3) are clearly of the form (5.14), and we verify condition (5.12) via the Gosper–Zeilberger algorithm in the proof of the following proposition: Proposition 5.4 The terms $$s^{{\mathrm{right}}/{\mathrm{left}}}_l(z)$$ defined by (5.11) satisfy $$s^{{\mathrm{left}}/{\mathrm{right}}}_l(z) = {\it{\Delta}}^{{\mathrm{left}}/{\mathrm{right}}}_l(z) \nonumber$$ for odd $$l$$ and \begin{align} s^{{\mathrm{left}}}_l(z) = & \hspace{1.5mm} {\it{\Delta}}^{{\mathrm{left}}}_l(z) -\frac{4\alpha^2+2l-1}{(4\bar{\phi}_n(z))^l}\frac{(\alpha,l-1)}{2l} I, \nonumber \\ s^{{\mathrm{right}}}_l(z) = & \hspace{1.5mm} {\it{\Delta}}^{{\mathrm{right}}}_l(z) -\frac{\nu_l}{(-\xi_n(z))^l}I, \nonumber \end{align} for even $$l$$, where $$\xi_n$$ and $$\bar{\phi}_n$$ are as defined in Section 2.3. Proof. 
This can be proven again by mathematical induction, completely analogous to Deaño et al. (2016, Section B) for the left case. We should note that gcd$$(q_n,r_{n+j})$$ is not necessarily one, which is needed in Gosper (1978, under (10d)), but that the suggested change in variables can eliminate the common factor. For the right case, the proof is also analogous, but with $$\lambda_{j,l-j} = a_j = (-1)^j[36(l-j)j -1]\nu_{l-j}\nu_j/2$$, which is $$-2 (-1)^j a_j(z) a_{m-j}(z) \left[b_j b_{m-j} + c_j c_{m-j}\right]$$ in the previous proposition. In the Gosper–Zeilberger algorithm (see Gosper, 1978; Koornwinder, 1993), we have $$p_j = 1-36j(l-j), q_j = -(6j-5)(6j-7) (l-j+1)$$ and $$f_j=1/l$$. □ In the next equations, the coefficients $$W_{k,i}^{{\mathrm{right}}/{\mathrm{left}}}$$ are the Laurent coefficients of $$s_k^{{\mathrm{right}}/{\mathrm{left}}}$$ for monomial $$Q$$, \begin{align} \label{Wkm} s_k^{{\mathrm{right}}/{\mathrm{left}}}(z) \sim \sum_{i=-\lceil 3k/2 \rceil}^\infty W_{k,i}^{{\mathrm{right}}/{\mathrm{left}}} (z -1/2 \mp 1/2)^i. \end{align} (5.15) They are used to compute $$U_{k,p}^{{\mathrm{right}}/{\mathrm{left}}}$$ directly, based on (5.10). This has the advantage of needing less memory, as the $$Q_{k,n}^{{\mathrm{right}}/{\mathrm{left}}}$$ are not needed any more. Still for monomial $$Q(x)$$, that leads us to \begin{align} U_{k,p}^{{\mathrm{right}}/{\mathrm{left}}} &= \hspace{1.5mm} W_{k,-p}^{{\mathrm{right}}/{\mathrm{left}}} + \sum_{j=1}^{k-1}\sum_{l=\max(p-\lceil (2 \pm 1)j/2 \rceil,1) }^{\lceil (2 \pm 1) (k-j)/2 \rceil} U_{k-j,l}^{{\mathrm{right}}/{\mathrm{left}}} W_{j,l-p}^{{\mathrm{right}}/{\mathrm{left}}} \nonumber \\ & \hspace{1.5mm} +\sum_{j=1}^{k-1}\sum_{n=0}^{\lceil (2 \pm 1) j/2 \rceil -p} \left(\sum_{i=1}^{\lceil (2 \mp 1) (k-j)/2 \rceil} (\pm 1)^{i+n} { -i \choose n} U_{k-j,i}^{{\mathrm{left}}/{\mathrm{right}}} \right) W_{j,-n-p}^{{\mathrm{right}}/{\mathrm{left}}}. 
\nonumber \end{align} It might be necessary to approximate the orthonormal polynomials near $$z=0$$ and $$1$$, for which it is inaccurate and computationally expensive to use (5.10) and certainly the recursive application of (5.5). So optionally, one can still compute the series expansion of $$R^{{\mathrm{right}}/{\mathrm{left}}}_k(z)$$ afterwards, using \begin{align} Q_{k,n}^{{\mathrm{right}}/{\mathrm{left}}} = & \hspace{1.5mm} \left(\sum_{i=1}^{\lceil (2 \mp 1)k/2 \rceil} {-i \choose n} (\pm 1)^{-i-n} U_{k,i}^{{\mathrm{left}}/{\mathrm{right}}} \right)-W_{k,n}^{{\mathrm{right}}/{\mathrm{left}}} -\sum_{j=1}^{k-1}\sum_{i=n+1}^{\lceil (2 \pm 1)(k-j)/2 \rceil +n} U_{k-j,i-n}^{{\mathrm{right}}/{\mathrm{left}}} W_{j,i}^{{\mathrm{right}}/{\mathrm{left}}} \nonumber \\ & -\sum_{j=1}^{k-1}\sum_{i=-\lceil (2\pm1) j/2 \rceil}^{n} \sum_{l=1}^{\lceil (2 \mp 1)(k-j)/2 \rceil} { -l \choose n-i} (\pm 1)^{i-n+l} U_{k-j,l}^{{\mathrm{left}}/{\mathrm{right}}} W_{j,i}^{{\mathrm{right}}/{\mathrm{left}}}. \nonumber \end{align} For general polynomial $$Q(x)$$, we have the more general expansion $$\label{Wkmnonmon} s_k^{{\mathrm{right}}/{\mathrm{left}}}(z) \sim \sum_{l=0}^\infty \sum_{i=-\lceil 3k/2 \rceil}^\infty W_{k,i,l}^{{\mathrm{right}}/{\mathrm{left}}} (z -1/2 \mp 1/2)^i n^{-l/m},$$ (5.16) which leads to \begin{align} U_{k,q}^{{\mathrm{right}}} = & \left[\sum_{j=\lfloor 2q/3\rfloor}^{\lfloor (k-1)/m \rfloor} W_{j+1,-q,k-1-jm}^{{\mathrm{right}}}\right] + \sum_{l=0}^{k-1-m} \sum_{j=0}^{\lfloor (k-1-l)/m-1 \rfloor} \sum_{i=\max(-\lceil 3j/2 \rceil,1-q)}^{\lceil 3l/2 \rceil-q} U_{l,q+i}^{{\mathrm{right}}} W_{j+1,i,k-1-l-jm-m}^{{\mathrm{right}}} \nonumber \\ & + \sum_{l=0}^{k-1-m} \sum_{p=1}^{\lceil l/2 \rceil } U_{l,p}^{{\mathrm{left}}} \sum_{j=0}^{\lfloor (k-1-l)/m-1 \rfloor} \sum_{i=-\lceil 3j/2 \rceil}^{-q} W_{j+1,i,k-1-l-jm-m}^{{\mathrm{right}}} {-p \choose -q-i}, \nonumber \\ U_{k,q}^{{\mathrm{left}}} = & \left[\sum_{j=2q}^{\lfloor (k-1)/m \rfloor} 
W_{j+1,-q,k-1-jm}^{{\mathrm{left}}} \right] + \sum_{l=0}^{k-1-m} \sum_{j=0}^{\lfloor (k-1-l)/m-1 \rfloor} \sum_{i=\max(-\lceil j/2 \rceil,1-q)}^{\lceil l/2 \rceil -q } U_{l,q+i}^{{\mathrm{left}}} W_{j+1,i,k-1-l-jm-m}^{{\mathrm{left}}} \nonumber \\ & \quad + \sum_{l=0}^{k-1-m} \sum_{p=1}^{\lceil 3l/2 \rceil} U_{l,p}^{{\mathrm{right}}} \sum_{j=0}^{\lfloor (k-1-l)/m-1 \rfloor} \sum_{i=-\lceil j/2 \rceil}^{-q} W_{j+1,i,k-1-l-jm-m}^{{\mathrm{left}}} (-1)^{q+i+p} {-p \choose -q-i}. \nonumber \end{align} 6. Explicit series expansions for $$\boldsymbol{s_k^{{\mathrm{left}}/{\mathrm{right}}}(z)}$$ and $$\boldsymbol{{\it{\Delta}}_k^{{\mathrm{left}}/{\mathrm{right}}}(z)}$$ In this section, we derive fully explicit expressions for the coefficients $$W_{k,i}^{{\mathrm{right}}/{\mathrm{left}}}$$, defined by (5.15) or (5.16) for a monomial, general polynomial and general function $$Q$$, respectively. These expressions are amenable to implementation without further symbolic manipulations. The process and terminology of symbols mimics that used in Deaño et al. (2016) for Jacobi polynomials. We aim to be concise yet complete (thus needing Russian characters) and expand $$s_k^{{\mathrm{left}}/{\mathrm{right}}}(z)$$ together with $${\it{\Delta}}_k^{{\mathrm{left}}/{\mathrm{right}}}(z)$$ in power series, where the coefficients are computed using convolutions. 6.1 Left disk with monomial $$Q(x)$$ First, we consider $$s_k^{{\mathrm{left}}}(z)$$, where we know that $$W_{k,i}^{{\mathrm{left}}} \equiv 0$$ for all $$i < -\lceil k/2 \rceil$$. 
We have $$\label{EDeltak_Tk} {\it{\Delta}}_k^{{\mathrm{left}}}(z) = \frac{(\alpha,k-1)}{4^k \bar{\phi}_n(z)^k} 2^{-\alpha\sigma_3} G_k(z) 2^{\alpha\sigma_3},$$ (6.1) where $$G_k$$ is defined for odd and even $$k$$ as \begin{align} G_k^{\text{odd}}(z) &= \frac{\alpha^2 + \frac{k}{2}-\frac{1}{4}}{4k z^{1/2}(z-1)^{1/2}} \begin{pmatrix} -4z+2 & 2i \\ 2i & 4z-2 \end{pmatrix} + \frac{\left(k-\frac{1}{2}\right)i}{4 z^{1/2}(z-1)^{1/2}} \begin{pmatrix} -2\cos(y_{\alpha}) & -2i\cos(y_{\alpha+1})\\ -2i\cos(y_{\alpha-1}) & 2\cos(y_{\alpha}) \end{pmatrix}, \nonumber \\ G_k^{\text{even}}(z) &= \frac{\alpha^2 + \frac{k}{2}-\frac{1}{4} }{4 k z^{1/2}(z-1)^{1/2}} \begin{pmatrix} 4\sqrt{z}\sqrt{z-1} & 0\\ 0 & 4z\sqrt{z}\sqrt{z-1} \end{pmatrix}\nonumber\\ &\quad{} + \frac{\left(k-\frac{1}{2}\right)i}{4 z^{1/2}(z-1)^{1/2}} \begin{pmatrix} -2\sin(y_{\alpha}) & -2i\sin(y_{\alpha+1})\\ -2i\sin(y_{\alpha-1}) & 2\sin(y_{\alpha}) \end{pmatrix}. \nonumber \end{align} One should remark that $$(-\varphi(z))^\alpha \neq \varphi(z)^\alpha(-1)^\alpha$$ with the principal branch when deriving this formula. Also, we have used $$\varphi(z) = \exp(i\theta(z)\arccos(2z-1))$$. The functions $$y_\gamma$$ above depend on $$z$$ by \begin{align} y_\gamma & = \theta(z) \gamma \left( \arccos(2z-1) - \pi \right), & & & \nonumber \\ y_{\gamma} & \sim -\theta(z) 2\gamma\sqrt{z} \sum_{j=0}^{\infty}\rho_{1,j} z^j & \rho_{1,j} & = \frac{(1/2)_j}{j!(1+2j)}, & \rho_{0,j} & = \delta_{j,0}, \nonumber \\ y_\gamma^k & \sim (-\theta(z) 2\gamma\sqrt{z})^k \sum_{j=0}^\infty \rho_{k,j} z^j & \rho_{k,j} & = \sum_{l=0}^j \rho_{k-1,l} \rho_{1,j-l}, & & \nonumber \end{align} where we have used the Kronecker delta $$\delta_{i,j}$$. Note that with the principal branch for the powers, $$y_{\gamma}$$ is real on the interval $$[0,1]$$. 
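The convolution recursion for $$\rho_{k,j}$$ is straightforward to implement. A minimal Python sketch (the function name is ours), which can be checked against the closed form $$\sum_j \rho_{1,j} z^j = \arcsin(\sqrt{z})/\sqrt{z}$$, since $$\arccos(2z-1)-\pi = -2\arcsin(\sqrt{z})$$:

```python
def rho_coefficients(kmax, jmax):
    """rho[k][j] with rho_{0,j} = delta_{j,0},
    rho_{1,j} = (1/2)_j / (j! (1+2j)), and rho_{k,.} the
    convolution of rho_{k-1,.} with rho_{1,.}."""
    rho = [[0.0] * (jmax + 1) for _ in range(kmax + 1)]
    rho[0][0] = 1.0
    poch, fact = 1.0, 1.0              # (1/2)_j and j!
    for j in range(jmax + 1):
        if j > 0:
            poch *= j - 0.5            # (1/2)_j = (1/2)(3/2)...(j-1/2)
            fact *= j
        rho[1][j] = poch / (fact * (1 + 2 * j))
    for k in range(2, kmax + 1):
        for j in range(jmax + 1):
            rho[k][j] = sum(rho[k - 1][l] * rho[1][j - l]
                            for l in range(j + 1))
    return rho
```

Since $$y_\gamma^k \sim (-2\theta\gamma\sqrt{z})^k \sum_j \rho_{k,j} z^j$$, the $$k$$th row reproduces the series of $$(\arcsin(\sqrt{z})/\sqrt{z})^k$$.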
We expand all terms appearing in the definitions of $$G_k$$: \begin{align*} \frac{\cos(y_{\gamma})}{2 z^{1/2}(z-1)^{1/2}} & \sim \frac{-i}{2\sqrt{z}} \sum_{m=0}^{\infty} z^m \sum_{j=0}^m {-1/2 \choose j} (-1)^{j} \sum_{l=0}^{m-j} \frac{(-1)^l}{(2l)!} (-2\gamma)^{2l} \rho_{2l,m-j-l}, \nonumber \\ \frac{\sin(y_{\gamma})}{2 z^{1/2}(z-1)^{1/2}} & \sim \gamma i \sum_{m=0}^{\infty} z^m \sum_{j=0}^m {-1/2 \choose j} (-1)^{j} \sum_{l=0}^{m-j} \frac{(-1)^l}{(2l+1)!} (-2\gamma)^{2l} \rho_{2l+1,m-j-l}, \nonumber \\ \frac{4z-2}{4z^{1/2}(z-1)^{1/2}} & \sim \frac{i}{2\sqrt{z}} + \frac{i}{2\sqrt{z}} \sum_{j=1}^{\infty} \left(2{-1/2 \choose j-1} +{-1/2 \choose j}\right)(-1)^{j} z^j. \end{align*} We still need to expand the power $$\bar{\phi}_n(z)^{-k}$$ in (6.1) as $$z \to 0$$. To that end, we can combine (2.4), (2.5) and (3.2). It is quite standard, but increasingly tedious, for series expansions to involve convolutions whose coefficients can be found recursively. That is the origin of the coefficients $$f_j$$ and $$g_{k,m}$$ below, and we will use this pattern several times more in the remainder of this section: \begin{align*} \bar{\phi}_n(z) & \sim \theta(z) i \sqrt{z} \sum_{j=0}^\infty f_j z^j, & f_j & = -\frac{(1/2)_j }{j!(1+2j)} - \frac{1}{2 m A_m}\sum_{k=0}^{\min(m-1,j)} (-1)^{j-k} {1/2 \choose j-k} A_{m-k-1},\nonumber\\ (\bar{\phi}_n(z))^{-1} & \sim (-z)^{-1/2} \sum_{m=0}^{\infty} g_{1,m} z^{m} & g_{1,m} & = \frac{-1}{f_0} \left(-\delta_{m,0} + \sum_{j=0}^{m-1} g_{1,j} f_{m-j} \right), \nonumber \\ (\bar{\phi}_n(z))^{-k} & \sim (-z)^{-k/2} \sum_{m=0}^{\infty} g_{k,m} z^{m} & g_{k,m} & = \sum_{l=0}^{m} g_{k-1,l} g_{1,m-l}.
\end{align*} The coefficients $$W_{k,i}^{{\mathrm{left}}}$$ in expansion (5.15) for the functions $$s_k^{{\mathrm{left}}}(z)$$ are given by \begin{align} W_{k,i}^{{\mathrm{left}}} & = \frac{(\alpha,k-1)}{-(-1)^{\lceil k/2 \rceil +1}4^k} 2^{-\alpha\sigma_3}\sum_{j=0}^{i+(k+1)/2} g_{k,j} G_{k,i+(k+1)/2-j}^{\operatorname{odd} } 2^{\alpha\sigma_3}, \nonumber \\ W_{k,i}^{{\mathrm{left}}} & = \frac{(\alpha,k-1)}{-(-1)^{\lceil k/2 \rceil +1}4^k} 2^{-\alpha\sigma_3}\left( \sum_{j=0}^{i+k/2} g_{k,j} G_{k,i+k/2-j}^{\operatorname{even}} \right) 2^{\alpha\sigma_3} - \frac{(\alpha,k-1)(4\alpha^2+2k-1)g_{k,i+k/2}}{2 k 4^k}I, \nonumber \end{align} respectively, for odd and even $$k$$. The $$V_{k,i}^{{\mathrm{left}}}$$ can be obtained by leaving out the term with $$I$$. 6.2 Right disk with monomial $$Q(x)$$ Unlike in the Jacobi case, the expressions for the left and right disks are not symmetric, since they correspond to qualitatively different behaviour of the polynomials near a hard edge and near a soft edge. With $$w = z-1$$ and $$\tau_\gamma = -\theta(w+1)\gamma \left(\arccos(2w+1) \right)$$, we have \begin{align} {\it{\Delta}}_k^{{\mathrm{right}}}(z) = & \frac{2^{-\alpha\sigma_3}}{2\left(- \xi_n(z) \right)^k} {\it{\Omega}}_k(z) 2^{\alpha\sigma_3}, \nonumber \\ {\it{\Omega}}_k^{\text{odd}}(z) = & \frac{\nu_k}{4\sqrt{w}\sqrt{w+1}}\begin{pmatrix} -4w-2 & 2i \\ 2i & 4w+2 \end{pmatrix} + \frac{-6k\nu_k}{4\sqrt{w}\sqrt{w+1}} \begin{pmatrix} -2\cos(\tau_{\alpha}) & 2i\cos(\tau_{\alpha+1})\\ 2i\cos(\tau_{\alpha-1}) & 2\cos(\tau_{\alpha}) \end{pmatrix},\nonumber \\ {\it{\Omega}}_k^{\text{even}}(z) = & \nu_k I + \frac{-6k\nu_k}{4\sqrt{w}\sqrt{w+1}} \begin{pmatrix} -2i\sin(\tau_{\alpha}) & -2\sin(\tau_{\alpha+1})\\ -2\sin(\tau_{\alpha-1}) & 2i\sin(\tau_{\alpha}) \end{pmatrix}. \nonumber \end{align} The entries of $${\it{\Omega}}_k(z)$$ can be expanded as in Section 6.1, now in powers of $$w = z-1$$. With these expressions in hand, we focus again on the phase function.
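The recursions for $$g_{1,m}$$ and $$g_{k,m}$$ above are instances of two standard series operations: inverting a power series with nonzero leading coefficient, and taking Cauchy products. A generic Python sketch of both patterns (helper names are ours, not tied to the specific $$f_j$$ of this section):

```python
def series_reciprocal(f, M):
    """Coefficients g_0..g_M of 1/(f_0 + f_1 x + ...), via
    g_0 = 1/f_0 and g_m = -(1/f_0) * sum_{j<m} g_j f_{m-j}."""
    g = [0.0] * (M + 1)
    g[0] = 1.0 / f[0]
    for m in range(1, M + 1):
        conv = sum(g[j] * f[m - j] for j in range(m) if m - j < len(f))
        g[m] = -conv / f[0]
    return g

def series_product(a, b, M):
    """Cauchy product up to order M; this is the same convolution
    that defines g_{k,m} from g_{k-1,.} and g_{1,.}."""
    return [sum(a[l] * b[m - l] for l in range(m + 1)
                if l < len(a) and m - l < len(b))
            for m in range(M + 1)]
```

For example, inverting $$1+x$$ returns the alternating coefficients of $$1-x+x^2-\dots$$, and multiplying back yields the identity series.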
We construct the power series of $$\xi_n(1+w)^{-k}$$ as follows: \begin{align} [\sqrt{1+w}-1]^m & \sim w^m \sum_{l=0}^\infty u_{m,l} w^l & u_{1,l} & = {1/2 \choose l}, \qquad \qquad \qquad u_{m,l} = \sum_{j=0}^l u_{m-1,j} u_{1,l-j}, \nonumber \\ \sqrt{2-2\sqrt{1+w}} &\sim \sqrt{-2w} \sum_{k=0}^\infty r_k w^k & r_k & = \sum_{l=0}^k {1/2 \choose k-l} v_{k-l,l} 2^{k-l}, \qquad \qquad v_{1,j} = {1/2 \choose j+2}, \nonumber \\ v_{m,l} & = \sum_{j=0}^l v_{m-1,j} v_{1,l-j} & q_m & = \sum_{l=0}^m \frac{(1/2)_{m-l} u_{m-l,l} }{(-2)^{m-l}((m-l)! (1+2m-2l))}, \nonumber \end{align} \begin{align} \xi_n(1+w) \sim \sqrt{w} \sum_{j=1}^\infty f_j w^j\quad f_j & = 2\sum_{l=0}^j q_l r_{j-l} - \frac{1}{m A_m}\sum_{k=0}^{\min(m-1,j)} (-1)^{m-k-1} {1/2 \choose j-k} \frac{{\it{\Gamma}}(-1/2-k)}{{\it{\Gamma}}(1/2-m){\it{\Gamma}}(m-k)}, \label{Efxin} \end{align} (6.2) \begin{align} \left(-\xi_n(1+w)\right)^{-1} & \sim w^{-3/2} \sum_{m=0}^{\infty} g_{1,m} w^{m} & g_{1,m} & = \frac{-1}{f_1} \left(\delta_{m,0} + \sum_{j=1}^{m} f_{j+1} g_{1,m-j}\right), \nonumber \\ \left(-\xi_n(1+w)\right)^{-k} &\sim w^{-3k/2} \sum_{m=0}^{\infty} g_{k,m} w^m & g_{k,m} & = \sum_{l=0}^{m} g_{k-1,l} g_{1,m-l}. \nonumber \end{align} The phase function $$\xi_n(z)$$ should be $$\mathcal{O}\left( (z-1)^{3/2} \right)$$ (Vanlessen, 2007, Remark 3.22) and (6.2) indeed indicates that $$f_0 = -2+2q_0r_0$$ is zero, so we have started the indices in the expansion of $$\xi_n(z)$$ from $$j=1$$. Keeping in mind the Kronecker delta $$\delta_{i,j}$$ and that $${-1/2 \choose -1} = 0$$, we arrive at the final result, to which we add $$-\nu_k g_{k,i+3k/2}I$$ when $$k$$ is even. 6.3 Left disk with general polynomial $$Q(x)$$ We can reuse the expansion of $$G_k(z)$$. However, in this case $$H_n(z)$$ has an expansion in $$n$$, given by (3.8).
Hence, we have to determine the expansion of $$\bar{\phi}_n(z)$$ (2.5) near $$z=0$$ anew, but afterwards one continues as in Section 6.1 to get the $$W_{k,i,l}^{{\mathrm{left}}}$$ from (5.16): \begin{align} \bar{\phi}_n(z) &\sim -\theta(z) i \sqrt{z} \sum_{l=0}^\infty \sum_{j=0}^\infty f_j^l z^j n^{-l/m}, \nonumber \\ f_j^0 &= \frac{(1/2)_j }{j!(1+2j)} + \frac{1}{2mA_m}\sum_{k=0}^{\min(m-1,j)} (-1)^{j-k} {1/2 \choose j-k} A_{m-k-1}, \nonumber \\ f_j^l &= \frac{1}{2} \sum_{i=0}^{\min(j,m-1)} (-1)^{j-i} {1/2 \choose j-i} \sum_{r=\max(i+1,m-l)}^m q_r A_{r-i-1} \beta^{r,r+l-m} \qquad {\rm for}~l > 0, \nonumber \\ (\bar{\phi}_n(z))^{-1} &\sim (-z)^{-1/2} \sum_{l=0}^\infty \sum_{j=0}^{\infty} g_{1,j}^l z^j n^{-l/m}, \qquad \qquad g_{1,n}^0 = \frac{-1}{f_0^0} \left( -\delta_{n,0} + \sum_{i=0}^{n-1} g_{1,i}^0 f_{n-i}^0\right), \nonumber \\ g_{1,i}^l &= \sum_{y=0}^i \sum_{p=0}^{i-y} g_{1,i-y-p}^0 \sum_{q=0}^{l-1} g_{1,y}^q f_p^{l-q}, \qquad \qquad g_{k,j}^0 = \sum_{l=0}^j g_{k-1,l}^0 g_{1,j-l}^0, \nonumber \\ (\bar{\phi}_n(z))^{-k} & \sim (-z)^{-k/2} \sum_{l=0}^\infty \sum_{j=0}^{\infty} g_{k,j}^l z^j n^{-l/m}, \qquad \qquad g_{k,n}^l = \sum_{j=0}^l \sum_{i=0}^n g_{k-1,i}^{j} g_{1,n-i}^{l-j}. \nonumber \end{align} This gives a pole of order $$\mathcal{O}(z^{-\lceil k/2 \rceil})$$ in $${\it{\Delta}}^{{\mathrm{left}}}_k(z)$$, as it should (Vanlessen, 2007, Remark 3.29). 6.4 Right disk with general polynomial $$Q(x)$$ We can also reuse the expansion of $${\it{\Omega}}_k(z)$$ and the expansion of $$\xi_n(z)$$ up to the definition of $$f_j$$, including the expansion of $$\arccos(\sqrt{z})$$ from Section 6.2.
Expanding (2.4), we get \begin{align} \xi_n(1+w) &\sim \sqrt{w} \sum_{l=0}^\infty \sum_{j=1}^\infty f_j^l w^j n^{-l/m}, \nonumber \\ f_j^0 &= \left[2\sum_{l=0}^{\min(j,m-1)} q_l r_{j-l}\right] -\frac{1}{m A_m}\left[\sum_{i=0}^{\min(j,m-1)} {1/2 \choose j-i} (-1)^{m-i-1} \frac{{\it{\Gamma}}(-1/2-i)}{{\it{\Gamma}}(-1/2-m){\it{\Gamma}}(m-i)} \right]\!, \nonumber \\ f_j^l &= \frac{-1}{2} \sum_{k=0}^{m-1} \left[ \sum_{r=\max(k+1,m-l)}^m q_r A_{r-k-1} \beta^{r,r+l-m} \right] \left\{\sum_{i=0}^{\min(k,j)} {1/2 \choose j-i} {k \choose i}\right\}. \label{EfxinNonmon} \end{align} (6.3) It follows by the construction in Section 3.2 that $$f_0^l =0$$. Thus, $$\left(-\xi_n(1+w) \right)^{-k}$$ again has a pole of order $$\mathcal{O}(w^{- 3k/2 })$$: \begin{align} (-\xi_n(1+w))^{-1} & \sim w^{-3/2} \sum_{l=0}^\infty \sum_{j=0}^\infty g_{1,j}^l w^j n^{-l/m}, & g_{1,i}^0 & = \frac{-1}{f_1^0} \left(\delta_{i,0} + \sum_{j=1}^i f_{j+1}^{0} g_{1,i-j}^{0}\right), \nonumber\\ g_{1,i}^l & = \sum_{y=0}^{i-1} \sum_{p=1}^{i-y} g_{1,i-y-p}^0 \sum_{q=0}^{l-1} g_{1,y}^q f_{p+1}^{l-q}, & g_{k,j}^0 & = \sum_{l=0}^j g_{k-1,l}^0 g_{1,j-l}^0 , \nonumber \\ \left(-\xi_n(1+w) \right)^{-k} & \sim w^{-3k/2} \sum_{l=0}^\infty \sum_{j=0}^{\infty} g_{k,j}^l w^j n^{-l/m}, & g_{k,n}^l & = \sum_{j=0}^l \sum_{i=0}^n g_{k-1,i}^{j} g_{1,n-i}^{l-j}. \nonumber \end{align} 6.5 Left disk with general function $$Q(x)$$ For general $$Q$$, the MRS number $$\beta_n$$ depends on $$n$$ in a way that is not easy to predict, see (3.11), for example. As a result, so does $$V_n(x)$$. This means that the series expansions that form the result of this section also depend on $$n$$. Hence, strictly speaking, they are not the true asymptotic expansions. However, for any given $$n$$, they can still be useful in computations and require computational time independent of $$n$$.
We proceed by expanding the function $$h_n$$ in a Taylor series, using contour integrals for the coefficients (see also Section 3.3): $$\label{Edn} h_n(z) \sim \sum_{l=0}^\infty d_l(n) z^l \qquad {\rm with}~d_l(n) = \frac{1}{2\pi i} \oint_{{\it{\Gamma}}_z} \frac{\sqrt{y}V_n^\prime(y) \,{\rm d}y}{\sqrt{y-1}y^{l+1}}.$$ (6.4) Continuing the analysis as before, we find an expansion for $$\bar{\phi}_n$$, where we used (2.5) and the power series of an integral. We plug this into Section 6.3 and continue with the other equations in Section 6.1 to get the expansion of $$s_k^{{\mathrm{left}}}(z)$$ for general $$Q(x)$$. The resulting matrices are also treated as for monomial $$Q(x)$$ in Sections 5.2 and 5.4 to obtain the $$U$$- and $$Q$$-matrices. 6.6 Right disk with general function $$Q(x)$$ We proceed as in the previous section by expanding $$h_n(z)$$ using contour integrals: $$\label{Ecn} h_n(z) \sim \sum_{l=0}^\infty c_l(n) (z-1)^l \quad {\rm with}\ c_l(n) = \frac{1}{2\pi i} \oint_{{\it{\Gamma}}_z} \frac{\sqrt{y}V_n^\prime(y) \,{\rm d}y}{\sqrt{y-1}(y-1)^{l+1}}.$$ (6.5) We can again continue with the phase function. Remark 6.1 Note that in order to use these expressions, one only needs to (numerically) compute $$\beta_n$$ and the contour integrals. In contrast, we need to define $$m$$ times more coefficients in the case where $$Q(x)$$ is a general polynomial to compute (6.3), even though we can also just use those same contour integrals. A brief numerical comparison is given in Section 7.4; note that the expressions that we have specifically derived for general polynomial $$Q$$ are fully explicit. 7. Examples and numerical results 7.1 Monomial $$Q(x)$$ A case of specific interest in the context of Gaussian quadrature is the standard Laguerre polynomial. We illustrate the accuracy of the asymptotic expansion in the left disk using our simplifications ((5.10) and Proposition 5.4) in the left part of Fig. 2.
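The coefficients (6.4) and (6.5) above are Cauchy integrals, so they can be approximated by the trapezoidal rule on a circle around the expansion point, which converges exponentially for analytic integrands. A hedged Python sketch with an illustrative analytic integrand (not the actual $$\sqrt{y}\,V_n'(y)/\sqrt{y-1}$$; the function name is ours):

```python
import cmath

def taylor_coeffs_contour(F, a, radius, L, N=256):
    """Approximate c_l = (1/(2 pi i)) oint F(y)/(y-a)^(l+1) dy,
    l = 0..L, over the circle |y - a| = radius, by the N-point
    trapezoidal rule; the parametrization cancels the 2*pi*i."""
    cs = []
    for l in range(L + 1):
        s = 0j
        for k in range(N):
            e = cmath.exp(2j * cmath.pi * k / N)
            s += F(a + radius * e) * (radius * e) ** (-l)
        cs.append(s / N)
    return cs
```

With $$F=\exp$$ and $$a=0$$ this recovers $$1/l!$$ to machine precision; for (6.4) the radius must keep the circle inside the region of analyticity, i.e., away from $$y=1$$.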
The values we compare with are computed using a recurrence relation for orthonormal polynomials with exact coefficients, and calculations are performed in double precision. We evaluate at a point close to the normalized origin. The errors decrease as $$\mathcal{O}(n^{-T})$$ with $$T$$ the number of terms, as expected. For small $$n$$, the expansions may diverge with increasing $$T$$, and the errors saturate at about $$10^{-14}$$. Fig. 2. Relative error of the asymptotic expansion in the left boundary region as a function of the degree, for $$w(x) = e^{-x}, x=4n/1000$$ (left) and $$w(x) = x^{2.8} \exp(-0.7x^3-3/2), x = \beta_n(1 - i)/100$$ (right). In the right part of Fig. 2, we show results for another monomial $$Q(x)$$, where the higher order terms are now calculated with (5.7) and where the summation index $$n$$ ranges from $$0$$ to $$11$$. Here, we have used high-precision arithmetic to compute the reference solution using standard methods. In the continuous Lanczos algorithm (Trefethen & Bau, 1997, Algorithm 37.1) for the computation of the recurrence coefficients, we have to evaluate integrals such as $$a_n = \int_0^\infty x^{\alpha+1} \exp(-Q(x)) p_{n-1}^2(x) \,{\rm d}x$$. To obtain sufficiently accurate ‘exact’ results using the recurrence relation, we had to evaluate the recurrence coefficients with 26 digits of accuracy, a computation that we performed in Julia. All computations with the asymptotic expansions were performed in standard floating point double precision. The errors again decrease like $$\mathcal{O}(n^{-T})$$ as expected, hence we conclude that the higher order terms are computed correctly.
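For the classical Laguerre weight the recurrence coefficients in (1.1) are known in closed form, $$a_n = 2n+\alpha+1$$ and $$b_n = \sqrt{(n+1)(n+\alpha+1)}$$, so the reference values can be generated as in the following Python sketch (the function name is ours; in double precision, so accuracy degrades for large $$n$$, as noted above):

```python
from math import sqrt, gamma

def orthonormal_laguerre(n, x, alpha=0.0):
    """Evaluate p_n(x) for the weight x^alpha * exp(-x), via
    b_k p_{k+1} = (x - a_k) p_k - b_{k-1} p_{k-1}, with
    a_k = 2k + alpha + 1 and b_k = sqrt((k+1)(k+alpha+1))."""
    p_prev, p = 0.0, 1.0 / sqrt(gamma(alpha + 1.0))   # p_{-1}, p_0
    b_prev = 0.0
    for k in range(n):
        a_k = 2 * k + alpha + 1
        b_k = sqrt((k + 1) * (k + alpha + 1))
        p_prev, p = p, ((x - a_k) * p - b_prev * p_prev) / b_k
        b_prev = b_k
    return p
```

For $$\alpha=0$$ this gives $$p_n = (-1)^n L_n$$ with positive leading coefficient, e.g., $$p_2(x) = (x^2-4x+2)/2$$.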
The asymptotic expansions of the coefficients $$\gamma_n$$, $$\alpha_n$$ and $$\beta_n$$, and of the polynomials in the other regions and for other values of $$x$$ exhibit similar behaviour. 7.2 Connection with Hermite polynomials NIST (2016, 18.7.17) states that $$H_{2n}(x) = (-1)^n 2^{2n}n!L_n^{(-1/2)}(x^2)$$, but as these easily overflow numerically, we construct normalized Hermite polynomials by $$H_0^{\text{norm}}(x) = \pi^{-1/4}, \quad H_{-1}^{\text{norm}}(x) = 0, \quad H_j^{\text{norm}}(x) = \frac{2x H_{j-1}^{\text{norm}}(x)}{\sqrt{2j}} -\frac{(j-1)H_{j-2}^{\text{norm}}(x)}{\sqrt{j(j-1)} }. \nonumber$$ As $$(2n)!\sqrt{\pi} = n! 4^n {\it{\Gamma}}(n+1/2)$$, we have that $$H_{2n}^{\text{norm}}(x) = L_n^{(-1/2),\text{norm}}(x^2)$$, the normalized associated Laguerre polynomial with positive leading coefficient. In the left part of Fig. 3, we see that the asymptotic expansion in the right disk (4.3) (for the summation index $$n$$ from $$0$$ to $$10$$) converges as expected to $$H_{2n}^{\text{norm}}(x)$$ as a function of $$n$$, evaluated at $$x^2 = 0.97(4n)$$ using (5.7), $$Q(x) = x$$ and $$\alpha = -1/2$$. Fig. 3. Relative error on the normalized Hermite polynomials as a function of the degree of the associated Laguerre polynomial $$n$$, for $$w(x) = \exp(-x^2), H_{2n}(x), x=\sqrt{3.88n}$$ (left) and $$w(x) = \exp(-x^4+3x^2), H_{2n+1}(x), x = \sqrt{0.31\beta_n}$$ (right). All calculations (also the recurrence coefficients) were performed in double precision.
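The normalized recurrence above is easy to transcribe; a Python sketch (the function name is ours), which can be checked against $$H_2^{\text{norm}}(x) = (2x^2-1)/(\sqrt{2}\,\pi^{1/4})$$:

```python
from math import pi, sqrt

def normalized_hermite(j, x):
    """H_j^norm(x) via the normalized recurrence
    H_j^norm = 2x H_{j-1}^norm / sqrt(2j)
               - (j-1) H_{j-2}^norm / sqrt(j(j-1)),
    which avoids the overflow of the unnormalized H_j."""
    h_prev, h = 0.0, pi ** (-0.25)          # H_{-1}^norm, H_0^norm
    for k in range(1, j + 1):
        second = (k - 1) * h_prev / sqrt(k * (k - 1)) if k > 1 else 0.0
        h_prev, h = h, 2 * x * h / sqrt(2 * k) - second
    return h
```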
For odd degrees, a similar reasoning gives $$H_{2n+1}^{\text{norm}}(x) = x L_n^{(1/2),\text{norm}}(x^2)$$. Now, we explore the connection of a generalized weight $$\exp(-x^4+3x^2)$$ on $$(-\infty, \infty)$$ with a weight $$\exp(-x^2+3x)$$ on $$[0, \infty)$$, using the expansion in the lens (4.1). Although the right panel of Fig. 3 shows higher errors, we do get the $$\mathcal{O}(n^{-T/m})$$ convergence we expect, with $$T$$ the number of terms and $$m=2$$. It also illustrates that taking more terms is not always advantageous, as the asymptotic expansions diverge when increasing $$T$$ for a fixed $$n$$. This effect is more pronounced when $$n$$ is low. In both cases, $$\alpha^2 = 1/4$$ and the expansion in the Bessel region exhibits trigonometric behaviour as in the lens. Unlike in the Jacobi case (Deaño et al., 2016, Section 2.6), they are not exactly equal for the same number of terms, but they agree increasingly well as $$n$$ and/or $$T$$ increases. However, the computation of higher order terms can be improved in this case as mentioned in Remark 5.1. One could also go through Deift et al. (1999a) to obtain asymptotics of Hermite-type polynomials, and there are indeed many analogies between both approaches, as Vanlessen (2007) was inspired by it. One advantage of exploiting the connection with Laguerre-type polynomials is that the $$U_{k,m}^{{\mathrm{left}}}$$ matrices are zero, so their computation can be omitted, while the other approach computes $$U$$-matrices near both soft edges when straightforwardly implemented in an analogous way. 7.3 General function $$Q(x)$$ In this section, we provide numerical results for our claim that the expansions can also be used for general functions $$Q(x)$$, for the case $$Q(x) = \exp(x)$$. We have been able to verify that $$\frac{1}{2\pi}\int_0^\beta \exp(x)\sqrt{x/(\beta-x)} \,{\rm d}x$$ agrees numerically with $$\beta\exp(\beta/2) [I_0(\beta/2)+I_1(\beta/2)] /4$$, see (3.10), as long as these do not overflow.
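This check is easy to reproduce. A Python sketch (helper names are ours; $$I_\nu$$ is evaluated by its power series, and the substitution $$x=\beta\sin^2 t$$ removes the endpoint singularity of the integrand):

```python
from math import exp, sin, pi, factorial

def bessel_i(nu, z, terms=40):
    """Modified Bessel function I_nu(z) for integer nu >= 0,
    via the power series sum_k (z/2)^(2k+nu) / (k! (k+nu)!)."""
    return sum((z / 2) ** (2 * k + nu) / (factorial(k) * factorial(k + nu))
               for k in range(terms))

def mrs_lhs(beta, N=5000):
    """(1/(2 pi)) * int_0^beta exp(x) sqrt(x/(beta-x)) dx; after
    x = beta*sin(t)^2 the integrand is the smooth function
    2 beta sin(t)^2 exp(beta sin(t)^2), handled by the midpoint rule."""
    h = (pi / 2) / N
    s = sum(2 * beta * sin((k + 0.5) * h) ** 2
            * exp(beta * sin((k + 0.5) * h) ** 2) for k in range(N))
    return s * h / (2 * pi)

def mrs_rhs(beta):
    # right-hand side of the identity, cf. (3.10)
    return beta * exp(beta / 2) * (bessel_i(0, beta / 2)
                                   + bessel_i(1, beta / 2)) / 4
```

Both sides agree to high accuracy for moderate $$\beta$$, e.g., at $$\beta=1$$ both evaluate to about $$0.5446$$.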
Additionally, the corresponding expansion of $$\beta_n$$ (3.11) converges with the expected rate, as can be seen in Fig. 4. We also show that we can approximate this special orthonormal polynomial in the bulk of its spectrum using (4.1) in the right side of Fig. 4. The reference results were again computed using recurrence coefficients with 26 digits. The errors agree with the expected orders, which are only negative integer powers of $$n$$. This is because other types of dependencies on $$n$$ arising from $$\beta_n$$ (e.g., the $$\mathcal{O}(n^{-1/m})$$ for general polynomial $$Q(x)$$) were eliminated by numerically computing $$\beta_n$$ and the contour integrals (6.4) and (6.5) for each $$n$$. Fig. 4. Error of the asymptotic expansion of $$\beta_n$$ (left) and $$p_{n}(0.6\beta_n)$$ (right) for $$w(x) = x^{-1/2}\exp(-\exp(x))$$. 7.4 General polynomial $$Q(x)$$ also used as a general function For this experiment, we recall Remarks 3.1 and 6.1, which state that one can use the procedure for general functions also for polynomial $$Q(x)$$. Figure 5 provides a comparison of the accuracy obtained for $$Q(x) = x^6 -2.1x^5 + 3x^3 -6x^2 +9$$. The procedure for general polynomials would need about $$m=6$$ times more terms to achieve the same order in $$n$$ of the asymptotic expansion as the procedure for general functions. However, where the number of terms is too low in the left part of the figure, we appear to see a divergence. This is because the accuracies of the phase function $$f_n(z)$$ and the MRS number $$\beta_n$$ are too low to cancel out the exponential behaviour $$\exp(n(V_n(z) + l_n)/2)$$ in (4.3) when calculated with too few terms.
The reference recurrence coefficients were again computed using 26 digits, but the right panel of Fig. 5 appears to show an error growing like $$\mathcal{O}(n)$$ once the errors are small. This means that we would need even more digits for this more difficult weight function if we wanted to observe convergence of the expansions with more than three terms for all $$n$$. However, computing all recurrence coefficients is a very time-consuming operation, whereas using asymptotics can be more accurate and orders of magnitude faster. Fig. 5. Relative error of the asymptotic expansion of $$p_{n}((0.99+0.02i)\beta_n)$$ in the right disk using the $$Q$$-matrices for $$w(x) = x^{-1.1}\exp(-x^6 +2.1x^5 - 3x^3 +6x^2 -9)$$ using the procedure for general polynomials $$Q(x)$$ (left) and general functions $$Q(x)$$ (right). Note that these are not directly comparable, as each term gives an $$\mathcal{O}(n^{-T/m})$$ error in the left part, while this is $$\mathcal{O}(n^{-T})$$ for the procedure for general functions. The procedure for general functions thus provides a higher accuracy (for the same number of terms), also because $$\beta_n$$ is computed up to an accuracy independent of $$n$$. Table 1 shows that the precomputations (computing the $$U$$- and $$Q$$-matrices as in Sections 5 and 6) are also faster, using Matlab 2016b on a $$64$$-bit laptop with $$7.7$$ GB memory and $$4$$ Intel(R) Core(TM) i7-3540M CPUs at $$3.0$$ GHz.
However, the mean time over the values of $$n$$ in Fig. 5 needed for evaluating the polynomial is much higher, because the procedure for general functions computes double numerical integrals, as mentioned in Section 3.3. Thus, the procedure for general polynomial $$Q(x)$$ can be preferable when many evaluations of the polynomial are needed. The first row of Table 1 also shows that the time to evaluate the polynomial grows with the number of terms: the complexity is $$\mathcal{O}(T^2)$$ through (5.8) and (2.3) if both indices $$n$$ and $$k$$ are proportional to $$T$$. The precomputations require $$\mathcal{O}(T^5)$$ operations for general polynomials due to the double summation for $$g_{k,n}^l$$ in Section 6.3. For general functions, however, that derivation of higher order terms has to be repeated for each value of $$n$$.

Table 1. Time required for different operations in seconds: mean time over seven values of $$n$$ of precomputations for seven terms in the expansion (only once for the procedure for general polynomials) and evaluating the asymptotic expansion for different numbers of terms $$T$$.

Time (s)              Precomputations   $$T=1$$    $$T=4$$    $$T=7$$
General polynomials   5.73e0            4.69e-3    6.95e-3    8.15e-3
General functions     1.96e-1           2.24e-1    2.24e-1    2.24e-1
Funding FWO (Fonds Wetenschappelijk Onderzoek, Research Foundation—Flanders, Belgium), through FWO research projects (G.0617.10, G.0641.11 and G.A004.14 to D.H. and P.O.). Acknowledgements The authors would like to thank Alfredo Deaño, Arno Kuijlaars, Alex Townsend, Walter Van Assche and Marcus Webb for useful discussions on the topic of this paper. The authors are also grateful to the anonymous reviewers for their constructive remarks and for suggesting the alternative, insightful proof of Theorem 5.1 via the jump relation. References Bogaert, I., Michiels, B. & Fostier, J. (2012) $$\mathcal{O}(1)$$ computation of Legendre polynomials and Gauss–Legendre nodes and weights for parallel computing. SIAM J. Sci. Comput., 34, C83–C101. Bogaert, I. (2014) Iteration-free computation of Gauss–Legendre quadrature nodes and weights. SIAM J. Sci. Comput., 36, A1008–A1026. Bosbach, C. & Gawronski, W. (1998) Strong asymptotics for Laguerre polynomials with varying weights. J. Comput. Appl. Math., 99, 77–89. Bremer, J. (2017) On the numerical calculation of the roots of special functions satisfying second order ordinary differential equations. SIAM J. Sci. Comput., 39, A55–A82. Deaño, A., Huertas, E. J. & Marcellán, F. (2013) Strong and ratio asymptotics for Laguerre polynomials revisited. J. Math. Anal. Appl., 403, 477–486. Deaño, A., Huybrechs, D. & Opsomer, P. (2016) Construction and implementation of asymptotic expansions for Jacobi-type orthogonal polynomials. Adv. Comput. Math., 42, 791–822. Deift, P., Kriecherbauer, T., McLaughlin, K. T.-R., Venakides, S. & Zhou, X.
(1999a) Strong asymptotics of orthogonal polynomials with respect to exponential weights. Commun. Pure Appl. Math., 52, 1491–1552. Deift, P., Kriecherbauer, T., McLaughlin, K., Venakides, S. & Zhou, X. (1999b) Uniform asymptotics for polynomials orthogonal with respect to varying exponential weights and applications to universality questions in random matrix theory. Commun. Pure Appl. Math., 52, 1335–1425. Deift, P. & Zhou, X. (1992) A steepest descent method for oscillatory Riemann–Hilbert problems. Bull. Amer. Math. Soc., 26, 119–124. Deift, P. & Zhou, X. (1993) A steepest descent method for oscillatory Riemann–Hilbert problems. Asymptotics for the MKdV equation. Ann. Math., 137, 295–368. Fokas, A., Its, A. & Kitaev, A. (1992) The isomonodromy approach to matrix models in 2d quantum gravity. Commun. Math. Phys., 147, 395–430. Gil, A., Segura, J. & Temme, N. (2017) Efficient computation of Laguerre polynomials. Comput. Phys. Commun., 210, 124–131. Glaser, A., Liu, X. & Rokhlin, V. (2007) A fast algorithm for the calculation of the roots of special functions. SIAM J. Sci. Comput., 29, 1420–1438. Gosper, R. W. (1978) Decision procedure for indefinite hypergeometric summation. Proc. Natl. Acad. Sci. USA, 75, 40–42. Hale, N. & Townsend, A. (2013) Fast and accurate computation of Gauss–Legendre and Gauss–Jacobi quadrature nodes and weights. SIAM J. Sci. Comput., 35, A652–A672. Koornwinder, T. H. (1993) On Zeilberger’s algorithm and its $$q$$-analogue. J. Comput. Appl. Math., 48, 91–111. Kuijlaars, A. (2003) Riemann–Hilbert analysis for orthogonal polynomials.
Orthogonal Polynomials and Special Functions (Koelink, E. & Van Assche, W. eds). Vol. 1817. New York: Springer-Verlag, pp. 167–210. Kuijlaars, A. B. J., McLaughlin, K. T.-R., Van Assche, W. & Vanlessen, M. (2004) The Riemann–Hilbert approach to strong asymptotics of orthogonal polynomials on $$[-1,1]$$. Adv. Math., 188, 337–398. Levin, E. & Lubinsky, D. (2001) Orthogonal Polynomials for Exponential Weights. New York: Springer, pp. xiv+417. López, J. L. & Temme, N. M. (2004) Convergent asymptotic expansions of Charlier, Laguerre and Jacobi polynomials. Proc. R. Soc. Edinb. Sect. A: Math., 134, 537–555. NIST (2016) NIST Digital Library of Mathematical Functions. Available at http://dlmf.nist.gov/. Online companion to Olver et al. (2010). Accessed 16 May 2017. Olver, F. W. J., Lozier, D. W., Boisvert, R. F. & Clark, C. W. (eds) (2010) NIST Handbook of Mathematical Functions. New York, NY: Cambridge University Press. Print companion to NIST (2016). Opsomer, P. (2016) Laguerre - Asymptotic expansions of generalized Laguerre polynomials. Available at http://nines.cs.kuleuven.be/software/LAGUERRE. Accessed 16 May 2017. Plancherel, M. & Rotach, W. (1929) Sur les valeurs asymptotiques des polynômes d’Hermite $$H_n(x) \,{=} (-1)^n e^{\frac{x^2}{2}} \frac{\rm{d}^n}{\rm{d} x^n} \left( e^{- \frac{x^2}{2}} \right)$$. Comment. Math. Helv., 1, 227–257. Szegő, G. (1967) Orthogonal Polynomials: American Mathematical Society Colloquium Publications Vol. XXIII, 3rd edn. Providence, Rhode Island: American Mathematical Society. Temme, N. M. (1990) Asymptotic estimates for Laguerre polynomials. J. Appl. Math. Phys. (ZAMP), 41, 114–126.
The University of Oxford & The Chebfun Developers (2016) Chebfun - Numerical computations with functions. Available at http://www.chebfun.org/. Accessed 16 May 2017. Townsend, A., Trogdon, T. & Olver, S. (2016) Fast computation of Gauss quadrature nodes and weights on the whole real line. IMA J. Numer. Anal., 36, 337–358. Townsend, A. (2016) FastGaussQuadrature.jl - Gauss quadrature nodes and weights in Julia. Available at https://github.com/ajt60gaibb/FastGaussQuadrature.jl. Accessed 16 May 2017. Trefethen, L. N. & Bau, D. (1997) Numerical Linear Algebra. Philadelphia: Society for Industrial and Applied Mathematics. Vanlessen, M. (2007) Strong asymptotics of Laguerre-type orthogonal polynomials and applications in random matrix theory. Constr. Approx., 25, 125–175. Zhao, Y., Cao, L. & Dai, D. (2014) Asymptotics of the partition function of a Laguerre-type random matrix model. J. Approx. Theory, 178, 64–90. Appendix: Explicit formulas for the first higher order terms The recursive computation of $$R_k$$ can give an arbitrary number of terms, but we provide the first few terms explicitly here for $$z$$ outside the two disks, ignoring the procedure for general polynomials. Expressions for $$R^{{\mathrm{left}}/{\mathrm{right}}}(z)$$ can straightforwardly be obtained by identifying $$R_k^{{\mathrm{outer}}}(z)$$ from the following and using (5.5) and (5.4).
We have: \begin{align*} R^{{\mathrm{outer}}}(z) & = I + \frac{1}{n}\left(\frac{U_{1,1}^{{\mathrm{right}}} }{z-1}+\frac{U_{1,2}^{{\mathrm{right}}} }{(z-1)^2}+\frac{U_{1,1}^{{\mathrm{left}}}}{z} \right) + \frac{1}{n^2}\left(\frac{U_{2,1}^{{\mathrm{right}}} }{z-1}+\frac{U_{2,2}^{{\mathrm{right}}} }{(z-1)^2}+\frac{U_{2,3}^{{\mathrm{right}}} }{(z-1)^3}+\frac{U_{2,1}^{{\mathrm{left}}}}{z} \right) \nonumber \\ & + \frac{1}{n^3}\left(\frac{U_{3,1}^{{\mathrm{right}}} }{z-1} + \frac{U_{3,2}^{{\mathrm{right}}} }{(z-1)^2} + \frac{U_{3,3}^{{\mathrm{right}}} }{(z-1)^3} + \frac{U_{3,4}^{{\mathrm{right}}} }{(z-1)^4} + \frac{U_{3,5}^{{\mathrm{right}}} }{(z-1)^5} +\frac{U_{3,1}^{{\mathrm{left}}}}{z} + \frac{U_{3,2}^{{\mathrm{left}}}}{z^2} \right) +\mathcal{O}\left(\frac{1}{n^4}\right), \nonumber \end{align*} The first terms for general $$Q(x)$$ agree with results in Vanlessen (2007): (4.11) in that work equals $$U_{1,1}^{{\mathrm{right}}}+U_{1,1}^{{\mathrm{left}}}$$ and (4.12) equals $$\left.U_{1,1}^{{\mathrm{right}}}+U_{1,2}^{{\mathrm{right}}}\right|_{2,1}$$. For $$w(x) = x^\alpha \exp(-x)$$, one can use $$c_0 = 4 = d_0$$ and $$c_1 = 0 = c_2$$ in these expressions to obtain the next higher order term. © The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

IMA Journal of Numerical Analysis, Oxford University Press, Advance Article, 29 Jul 2017, 34 pages.
ISSN 0272-4979. eISSN 1464-3642. DOI: 10.1093/imanum/drx030.

The asymptotic result for $$Y(z)$$, and subsequently for the polynomials, is obtained by inverting these transformations. The transformations involve a normalization of behaviour at infinity, the so-called ʻopening of a lensʼ around an interval of interest, and the introduction of local parametrices in disks around special points like endpoints, which are matched to global parametrices elsewhere in the complex plane. These transformations split $$\mathbb{C}$$ into different regions, where different formulas for the asymptotics are valid. In our case of Laguerre-type polynomials, one also first needs an $$n$$-dependent rescaling of the $$x$$ axis using the so-called MRS numbers (defined further on in this article). After this step, the roots of the rescaled polynomials accumulate in a fixed and finite interval. We provide an algorithm to obtain an arbitrary number of terms in the expansions, where we set up series expansions using many convolutions that follow the chain of transformations and their inverses in this steepest descent method. While doing this, we keep computational efficiency in mind, as well as the use of the correct branch cuts in the complex plane. The strategy outlined above and in Section 2.1 was also followed in our earlier article about asymptotic expansions of Jacobi-type polynomials: Deaño et al. (2016), which was based on the mathematical analysis in Kuijlaars et al. (2004). Here, we base our results on the analysis in the work of Vanlessen (2007). The main differences in this article compared with Deaño et al. (2016) are the following: The analysis of Laguerre-type polynomials on the halfline $$[0,\infty)$$ has an extra step that involves a rescaling via the Mhaskar–Rakhmanov–Saff (MRS) numbers $$\beta_n$$ (see Section 3). This leads to fractional powers of $$n$$ in the expansions. There is also a new type of behaviour near the largest zero (often referred to as a soft edge), captured by the Airy function. 
This leads to higher order poles in the derivations, and thus also to longer formulas for the higher order terms in the expansions. The behaviour near the hard edge at $$x=0$$ involves Bessel functions, like in the Jacobi case near the endpoints $$\pm 1$$. We obtain more explicit results for polynomials $$Q(x)$$. We will frequently distinguish in this article between three cases: monomial $$Q$$, general polynomial $$Q$$ and a more general analytic function $$Q$$. Asymptotic expansions can be useful in computations for several reasons. The use of the recurrence relation (1.1) in applications involving Laguerre polynomials results in accumulating roundoff errors and a computation time that is linear in $$n$$. In contrast, asymptotic expansions become increasingly accurate as the degree $$n$$ becomes large, and the computing time is essentially independent of $$n$$. Another motivation is the partition function for Laguerre ensembles of random matrices as mentioned in Vanlessen (2007) and studied in Zhao et al. (2014). This gives the eigenvalue distribution of products of random matrices of a certain type, which could, for example, arise in stochastic processes (Markov chains) or quantum mechanics. In this article, some leading order terms are given explicitly, and we detail the derivation of higher order terms. The analysis only requires elementary numerical techniques: in particular, there is no need for the evaluation of special functions besides the Airy and Bessel functions. The formulas are implemented in Sage and Matlab and are available on the software web page of our research group (see Opsomer, 2016). 
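To make the cost comparison concrete, the following small sketch (ours, in Python for concreteness, and using the classical normalization $$L_n^{(\alpha)}$$ rather than the orthonormal one) evaluates a Laguerre polynomial through the standard three-term recurrence, the $$\mathcal{O}(n)$$ approach that the asymptotic expansions avoid:

```python
def laguerre_rec(n, alpha, x):
    """Evaluate the classical Laguerre polynomial L_n^(alpha)(x) with the
    standard three-term recurrence; cost and roundoff both grow with n."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, 1.0 + alpha - x                   # L_0 and L_1
    for k in range(1, n):
        # (k+1) L_{k+1} = (2k+1+alpha-x) L_k - (k+alpha) L_{k-1}
        p_prev, p = p, ((2*k + 1 + alpha - x)*p - (k + alpha)*p_prev) / (k + 1)
    return p
```

In contrast, evaluating a truncated asymptotic expansion has a cost that is essentially independent of $$n$$, and its accuracy improves as $$n$$ grows.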
The expressions are also implemented in the Chebfun package in Matlab for computing with functions (The University of Oxford & the Chebfun Developers, 2016, lagpts.m) and into the Julia package FastGaussQuadrature (Townsend, 2016, gausslaguerre.jl), in both cases for the purpose of constructing Gauss–Laguerre quadrature rules with a high number of points in linear complexity. This approach is comparable to a number of modern numerical methods for other types of Gaussian quadratures, that are often based on asymptotics and that lead to a linear complexity as well (see Glaser et al., 2007; Bogaert et al., 2012; Hale & Townsend, 2013; Bogaert, 2014). A recent article (Bremer 2017) achieves competitive performance in a general way via nonoscillatory phase functions. Potential improvements to our code in light of this result and a more thorough discussion of the contributions to the computation of Gaussian quadrature rules are future research topics. We do remark here that the construction of Gaussian quadrature rules requires the derivative of the associated orthonormal Laguerre polynomial with positive leading coefficient. This can be obtained from our expansions and the identity $${\rm d} p_n^{(\alpha)}(x)/{\rm d}x = \sqrt{n}p_{n-1}^{(\alpha+1)}(x)$$, which follows from (NIST, 2016, 18.9.23) after accounting for normalization. Although technical, the expansions can readily be differentiated for general $$Q(x)$$. This article affirmatively answers the questions raised in the conclusions of Townsend et al. (2015), namely whether the Riemann–Hilbert approach can be applied to the fast computation of quadrature rules with generalized Laguerre weights (for the Jacobi case, see Deaño et al., 2016), and whether higher order asymptotic expansions can be computed effectively. As mentioned, the standard Laguerre polynomials correspond to $$Q(x) = x$$, or equivalently $$q_k \equiv 0$$, $$\forall k \neq 1$$ and $$q_1=1$$. 
For this case, asymptotic expansions are given in Deaño et al. (2013) with explicit expressions for the first terms. We refer the reader to Temme (1990), López & Temme (2004) and references therein for more results on asymptotics for the standard Laguerre polynomials. A recent scheme for the numerical evaluation of Laguerre polynomials of any degree is described in Gil et al. (2017). As an example, the expansions in this article take the following form. For the monic Laguerre polynomial $$\pi_n$$ of degree $$n$$, we obtain: \begin{align} & \pi_n(x) = \gamma_n^{-1} p_n(4nz) = \frac{(4n)^{n}\, e^{n (2z-1-2\log(2))} }{z^{1/4}(1-z)^{1/4} z^{\alpha/2} } \begin{pmatrix} 1 \\ 0 \end{pmatrix}^{\rm T} R^{{\mathrm{outer}}}(z) \nonumber \\ &\quad{} \begin{pmatrix} 2^{-\alpha} \cos(\arccos(2z-1)[1/2+\alpha/2] - n[2\sqrt{z}\sqrt{1-z}-2\arccos(\sqrt{z}) ] -\pi /4)\\[6pt] -i 2^{\alpha} \cos(\arccos(2z-1)[\alpha/2-1/2] - n[2\sqrt{z}\sqrt{1-z}-2\arccos(\sqrt{z}) ] -\pi /4) \end{pmatrix} \qquad z \in (0,1). \nonumber \end{align} This is expression (4.1), specified to the standard associated Laguerre weight $$w(x)=x^{\alpha}e^{-x}$$. It is valid for $$x$$ in a complex neighbourhood of $$(0,4n)$$, where $$x$$ is related to $$z$$ through the MRS number $$\beta_n = 4n$$, i.e., $$x=4nz$$. The expansion for $$\pi_n(x)$$ itself follows from substituting the expansion of the $$2 \times 2$$ matrix function $$R^{{\mathrm{outer}}}(z)$$. The leading order term is obtained by substituting the identity matrix $$R^{{\mathrm{outer}}}(z)=I$$, and further terms are listed explicitly in the appendix. We provide formulas and their implementation for an arbitrary number of terms in the asymptotic expansions and compute up to 50 terms in 32 s on the architecture mentioned in Section 7.4. These formulas can also be applied to obtain asymptotic expansions of orthogonal polynomials with Hermite-type weights of the form $$\exp(-\sum_{k=0}^m q_k x^{2k})$$ on $$(-\infty,\infty)$$.
In Section 7.2, we show that they can be given in terms of asymptotics of Laguerre-type polynomials with $$\alpha = \pm 1/2$$, evaluated in $$x^2$$. We aim for a general nonpolynomial function $$Q(x)$$, though our results in this case are thus far not rigorously valid. In particular, we do not provide estimates for the remainder term. We do provide numerical indications that the expansions converge at the expected rate for increasing $$n$$. Inspired by the requirements for the Jacobi case in Deaño et al. (2016) and the technical conditions on $$Q(x)$$ in Levin & Lubinsky (2001, Section 1), we conjecture that the expansions in this article are valid as long as $$Q(x)$$ is analytic within the contours defined further on and $$Q(x)$$ grows faster than powers of $$\log(x)$$ for $$x\rightarrow \infty$$. The structure of the article is as follows. In Section 2, we connect the Riemann–Hilbert problem for orthogonal polynomials that is analysed in Vanlessen (2007) with the expansion of a $$2 \times 2$$ matrix-valued function $$R$$ and introduce some notation. We detail the Mhaskar–Rakhmanov–Saff (MRS) numbers $$\beta_n$$ and their asymptotic expansions for large $$n$$ in Section 3. The formulas for the asymptotic expansions of the polynomials in the different regions of the complex plane are stated in Section 4. We explain the computation of higher order terms of $$R$$ in Section 5 and provide a nonrecursive definition for $$R$$. Details on obtaining explicit expressions for higher order terms are provided in Section 6. We conclude the article with a number of examples and numerical results in Section 7. 2. Asymptotic expansions for Laguerre-type polynomials The largest root of a Laguerre-type polynomial $$p_n(x)$$ grows with the degree $$n$$. For example, it asymptotically behaves as $$4n+2\alpha+2 + 2^{2/3}a_1(4n+2\alpha+2)^{1/3}$$ for the standard associated Laguerre polynomials, with $$a_1$$ the (negative) zero of the Airy function closest to zero (see Szegő, 1967, (6.32.4); Olver et al., 2010, (18.16.14); NIST, 2016).
The first step in the description of the asymptotics is to rescale the polynomials, such that the support of the zero-counting measure maps to the interval $$[0,1]$$. The scaling is linear but $$n$$-dependent, and given by $$\label{eq:scaling} x = \beta_n z,$$ (2.1) where $$\beta_n$$ is the MRS number (see Levin & Lubinsky, 2001) defined further on in (3.1). There is a distinction between several regions in the complex $$z$$-plane, shown in Fig. 1: Fig. 1. Regions of the complex plane in which the polynomials have different asymptotic expansions, after rescaling the support of the zero-counting measure to the interval $$[0,1]$$: the lens (I, a complex neighbourhood of the interval $$(0,1)$$ excluding the endpoints), the outer region (II, the remainder of the complex plane) and the right and left disks (III and IV, two disks around the endpoints 0 and 1). A complex neighbourhood of the interval $$(0,1)$$ excluding the endpoints, subsequently called the ʻlensʼ (region I). Two disks around the endpoints 1 and 0, called the right and left disk (regions III and IV). The remainder of the complex plane, the ʻouter regionʼ (region II). 2.1 Riemann–Hilbert formulation and steepest descent analysis In this section, we briefly summarize the main features of the derivation in Vanlessen (2007). The approach is based on the Riemann–Hilbert formulation for orthogonal polynomials in Fokas et al. (1992): we seek a $$2\times 2$$ complex matrix-valued function $$Y(z)$$ that satisfies the following Riemann–Hilbert problem, cf.
(Vanlessen, 2007, Section 3): (a) $$Y : \mathbb{C}\setminus[0,\infty) \rightarrow \mathbb{C}^{2 \times 2}$$ is analytic. (b) $$Y(z)$$ has continuous boundary values $$Y_{\pm}(x) = \lim_{\epsilon \downarrow 0} Y(x \pm i\epsilon)$$ for $$x \in (0,\infty)$$. These boundary values are related via a jump matrix: $$Y_+(x) = Y_-(x) \begin{pmatrix}1 & x^\alpha e^{-Q(x)} \\ 0 & 1\end{pmatrix} \quad {\rm for}\ x \in (0,\infty).\nonumber$$ (c) $$Y(z) = \left(I + \mathcal{O}(1/z)\right)\begin{pmatrix} z^{n} & 0 \\ 0 & z^{-n} \end{pmatrix}$$ as $$z \rightarrow \infty$$. (d) All entries are $$\mathcal{O}(1)$$ as $$z \rightarrow 0$$, except that both entries in the second column are $$\mathcal{O}(z^\alpha)$$ if $$\alpha < 0$$ and $$\mathcal{O}(\log z)$$ if $$\alpha =0$$. It is proved in Fokas et al. (1992) and Kuijlaars (2003) that the unique solution of this Riemann–Hilbert problem is $$Y(x) = \begin{pmatrix} Y_{11} & Y_{12} \\ Y_{21} & Y_{22} \end{pmatrix} = \begin{pmatrix} p_n(x)/\gamma_n & \frac{1}{2\pi i\gamma_n}\int_{0}^\infty \frac{p_n(y) w(y)}{y-x}\, {\rm d}y \\ -2\pi i \gamma_{n-1} p_{n-1}(x) & -\gamma_{n-1}\int_{0}^\infty \frac{p_{n-1}(y) w(y)}{y-x}\, {\rm d}y \end{pmatrix}\!. \nonumber$$ Here, the $$Y_{11}$$ entry is the monic orthogonal polynomial because $$\gamma_n$$ is the leading order coefficient of $$p_n(x)$$. The $$Y_{21}$$ entry relates to the polynomial of degree $$n-1$$, whereas the second column contains the Cauchy transforms of both of these polynomials. Note that the weight function of the orthogonal polynomials enters through the jump condition in (b). To obtain the large $$n$$ asymptotic behaviour of $$p_n(x)$$, the Riemann–Hilbert formulation is combined with the Deift–Zhou steepest descent method for Riemann–Hilbert problems (see Deift & Zhou, 1992, 1993), and more specific results for orthogonal polynomials with exponential weights are given in Deift et al. (1999a,b). In our case, the steepest descent analysis presented in Vanlessen (2007) consists of the following sequence of (explicit and invertible) transformations: $$Y(z)\mapsto T(z) \mapsto S(z) \mapsto R(z).
\nonumber$$ These steps have a well-defined interpretation: The first step is a normalization at infinity, such that $$T(z) = I+\mathcal{O}(1/z) \qquad z\rightarrow \infty. \label{ETinf}$$ This step comes at the cost of introducing rapidly oscillating entries in the new jump matrix for the Riemann–Hilbert problem for $$T$$. The second step is the opening of the so-called lens around $$[0,1]$$: it factorizes the previous jump matrix such that $$S(z) = T(z)$$ outside of the lens in Fig. 1, whereas $$S(z)$$ is exponentially close to $$T(z)$$ in $$n$$ in the upper and lower part of the lens. The shape of the lens is such that the oscillating entries on the diagonal in the jump matrix are transformed into exponentially decaying off-diagonal entries. Finally, the last transformation $$S(z) \mapsto R(z)$$ gives rise to the disks in Fig. 1 and is defined as $$R(z)=S(z) \begin{cases} P_n(z)^{-1}, \qquad \text{for}\,\,z\,\,\text{close to }1, \\ \tilde{P}_n(z)^{-1}, \qquad \text{for}\,\,z\,\,\text{close to }0, \\ P^{(\infty)}(z)^{-1}, \qquad \text{elsewhere.} \end{cases} \label{EjumpR}$$ (2.2) Here, a global parametrix $$P^{(\infty)}(z)$$ is introduced and constructed using the Szegő function $$z^{\alpha/2}\varphi(z)^{-\alpha/2}$$ (to be defined below). $$P_n(z)$$ and $$\tilde{P}_n(z)$$ are local parametrices that follow from a rather involved local analysis around the endpoints, and they should not be confused with the orthonormal polynomials $$p_n(x)$$. We omit the details, but we note that $$\tilde{P}_n(z)$$ is given explicitly in terms of standard Bessel and Hankel functions and their derivatives.
The precise choice of these functions is made in such a way that $$\tilde{P}_n(z)$$ satisfies an asymptotic matching condition with the global parametrix on the boundary of the left disk, namely $$\tilde{P}_n(z)\left[P^{(\infty)}(z)\right]^{-1} \sim I + {\it{\Delta}}^{{\mathrm{left}}}(z) \quad {\rm for}\ n \rightarrow \infty, \nonumber$$ where $$z$$ is on the boundary of the left disk IV in Fig. 1 and $${\it{\Delta}}^{{\mathrm{left}}}(z)$$ will be defined in Section 5.1 as the jump matrix for $$R(z)$$. A similar construction yields explicit expressions for $$P_n(z)$$ in terms of the Airy function and its derivative. Because we know $$P_n(z)$$, $$\tilde{P}_n(z)$$ and $$P^{(\infty)}(z)$$ explicitly, we can determine the asymptotic expansion of $${\it{\Delta}}^{{\mathrm{right}}/{\mathrm{left}}}(z)$$ in a closed formula. The key idea is that the Riemann–Hilbert problem for $$R(z)$$ can be solved explicitly in an asymptotic sense for large $$n$$: it can be deduced that the matrix $$R(z)$$ is itself close to the identity $$R(z) = I+\mathcal{O}\left(\frac{1}{n}\right) \qquad{\rm for}\ n\to\infty, \nonumber$$ uniformly for $$z \in \mathbb{C} \setminus {\it{\Sigma}}_R$$. Here, $${\it{\Sigma}}_R$$ is a contour that results from the sequence of transformations outlined before and consists of the boundaries of the regions in Fig. 1. If we match all powers of $$z$$ and $$n$$ in (2.2) via $${\it{\Delta}}_k^{{\mathrm{right}}/{\mathrm{left}}}(z)$$, we obtain higher order terms in the asymptotic expansions, which is exactly the technique outlined in Section 5. Finally, reversing these transformations (since the different Riemann–Hilbert problems are equivalent), one can obtain asymptotic information for $$Y(z)$$ as $$n\to\infty$$ in different sectors of the complex plane and in particular of its $$(1,1)$$ entry. 
2.2 The function $$R(z)$$ in the complex plane The function $$R(z)$$ is a $$2\times 2$$ complex matrix-valued function, analytic (elementwise) in $$\mathbb{C}\setminus{\it{\Sigma}}_R$$, where $${\it{\Sigma}}_R$$ consists of the boundaries of the regions in Fig. 1. Also, $$R(z) - I = \mathcal{O}(1/z)$$ for $$z \to \infty$$. Finally, as $$n\to\infty$$ for polynomial $$Q(x)$$ of degree $$m$$, independent of $$n$$, there exist functions $$R_k(z)$$ such that the function $$R(z)$$ admits an asymptotic expansion of the form $$\label{asympRn} R(z)\sim I + \sum_{k=1}^{\infty} \frac{R_k(z)}{n^{\tfrac{k-1}{m}+1} } \quad\text{for}\ n \rightarrow \infty.$$ (2.3) We obtain different expressions for $$R_k(z)$$ depending on the region in which $$z$$ lies. We will write $$R_k^{{\mathrm{right}}}(z)$$ and $$R_k^{{\mathrm{left}}}(z)$$ to refer to the coefficients for $$z$$ near $$1$$ and $$0$$, respectively, and $$R_k^{{\mathrm{outer}}}(z)$$ to indicate the coefficients for $$z$$ outside these two disks, so that $$R_k^{{\mathrm{outer}}}(z) = \mathcal{O}(1/z)$$ for $$z \to \infty$$. Our formulas for the asymptotic expansions are written in terms of these functions. One may simply substitute $$R(z)=I$$ to obtain the leading order behaviour of the expansion. Higher order expansions are obtained via recursive computation of the $$R_k(z)$$ in Section 5 or, alternatively, using the explicit expressions listed in the appendix. 2.3 Auxiliary functions We recall some terminology and notation from Vanlessen (2007).
In the formulation of our results, we use the third Pauli matrix and define for $$a \in \mathbb{C} \setminus \{0\}$$: $a^{\sigma_3} = \left[ \begin{array}{cc} a & 0 \\ 0 & a^{-1} \end{array}\right]\!.$ The following values $$A_k$$ arise in the recursive construction of the MRS numbers: $A_k = \int_0^1 \frac{x^{k-1/2}}{\pi\sqrt{1-x}} \,{\rm d}x = \prod_{j=1}^k \frac{2j-1}{2j} = \frac{{\it{\Gamma}}(k+1/2)}{\sqrt{\pi}{\it{\Gamma}}(k+1)} = 4^{-k} {2k \choose k},$ and we will also use the Pochhammer symbol or rising factorial $(n)_j = n(n+1)\cdots(n+j-1). \nonumber$ We will define the MRS number $$\beta_n$$ and the associated quantities $$h_n(z), H_n(z)$$ and $$l_n$$ in Section 3.2. The corresponding scaling (2.1) gives rise to the rescaled field $$V_n(z)$$: $V_n(z) = Q(\beta_n z)/n.$ We also define the following functions: \begin{align} \theta(z) &= \begin{cases} 1,& \quad \arg(z-1) > 0, \\ -1, & \quad \arg(z-1)\leq 0, \end{cases} \nonumber \\ \xi_n(z) &= -i \left(H_n(z)\sqrt{z}\sqrt{1-z}/2 -2\arccos(\sqrt{z})\right), \label{Exin} \\ \end{align} (2.4) \begin{align} f_n(z) &= n^{2/3}(z-1)\left[\frac{-3\theta(z)\xi_n(z)}{2(z-1)^{3/2}}\right]^{2/3}, \nonumber \\ \bar{\phi}_n(z) &= \xi_n(z)/2 -\pi i/2. \label{Ephibar} \end{align} (2.5) The function $$\theta(z)$$ is nonstandard in the literature on asymptotics, but it is introduced here because it allows the statement of analytic continuations of some functions using principal branches. We assume principal branches of all analytic functions in this article, such that the formulas are easily implemented. One may call $$\xi_n(z)$$, $$f_n(z)$$ and $$\bar{\phi}_n(z)$$ phase functions for the orthonormal polynomials. They specify the oscillatory behaviour of $$p_n(x)$$ for $$z$$ away from the endpoints, near $$1$$ and near $$0$$, respectively. Here, too, one can avoid specifying certain branch cuts by not simplifying the definition of $$f_n(z)$$.
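The equivalent expressions for $$A_k$$ above can be cross-checked numerically; a small sketch (ours), convenient when implementing the recursive construction of the MRS numbers:

```python
import math

def A_product(k):
    # A_k as the product prod_{j=1}^k (2j-1)/(2j)
    out = 1.0
    for j in range(1, k + 1):
        out *= (2*j - 1) / (2*j)
    return out

def A_gamma(k):
    # A_k = Gamma(k+1/2) / (sqrt(pi) * Gamma(k+1))
    return math.gamma(k + 0.5) / (math.sqrt(math.pi) * math.gamma(k + 1))

def A_binom(k):
    # A_k = 4^{-k} * binomial(2k, k)
    return math.comb(2*k, k) / 4**k
```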
The function $$\bar{\phi}_n(z)$$ corresponds to $$\sqrt{\tilde{\varphi}_n(z)}$$ in Vanlessen (2007) and is used for the analytic continuation of the polynomial in the left disk. The conformal map $$\varphi(z)$$ from $$\mathbb{C} \setminus [0,1]$$ onto the exterior of the unit circle is used in the global parametrix $$P^{(\infty)}(z)$$, which determines the behaviour of $$p_n(x)$$ away from $$x=0$$ and $$\beta_n$$: \begin{align} \varphi(z) &= 2z-1+2\sqrt{z}\sqrt{z-1} \qquad = \exp(i \theta(z) \arccos(2z-1)), \label{Evarphi} \\ P^{(\infty)}(z) &= \frac{2^{-\alpha\sigma_3} }{2 z^{1/4}(z-1)^{1/4}} \begin{pmatrix} \sqrt{\varphi(z) } & \frac{i}{\sqrt{\varphi(z) } } \\ \frac{-i}{\sqrt{\varphi(z) } } & \sqrt{\varphi(z) } \end{pmatrix} \left(\frac{z^{\alpha/2}}{(\varphi(z))^{\alpha/2}}\right)^{-\sigma_3 }. \nonumber \end{align} (2.6) Finally, the coefficients $$\nu_k$$ and $$(\alpha,m)$$ appear in asymptotics of Airy and modified Bessel functions in the local parametrices: \begin{align} \nu_k &= \left(1-\frac{6k+1}{6k-1}\right) \frac{{\it{\Gamma}}(3k+1/2)}{54^k (k!) {\it{\Gamma}}(k+1/2)} \quad = \frac{-{\it{\Gamma}}(3k-1/2)2^k}{2k 27^k \sqrt{\pi}{\it{\Gamma}}(2k)}, \nonumber \\ (\alpha,m) &= 2^{-2m} (m!)^{-1} \prod_{n=1}^{m}(4 \alpha^2-(2n-1)^2). \nonumber \end{align} 3. MRS numbers and related functions The MRS numbers $$\beta_n$$ satisfy (Vanlessen, 2007, (3.6)) $$2\pi n = \int_0^{\beta_n} Q'(x)\sqrt{\frac{x}{\beta_n-x}}\,{\rm d}x. \label{EbetaInt}$$ (3.1) We will explain how to compute these and quantities depending on them for various types of $$Q(x)$$: monomials, more general polynomials and more general analytic functions. 3.1 Constant plus monomial $$Q(x)$$ In this case, $$Q(x) = q_0 + q_m x^m$$, although the constant only scales $$\gamma_n$$. 
The MRS number is given explicitly by $\beta_n = n^{1/m}\left(m q_m A_m /2\right)^{-1/m}.$ We also recall from Vanlessen (2007) the coefficients $$l_n$$ and polynomials $$H_n$$ with a slight adjustment for $$l_n$$: \begin{align} l_n = & -2/m-4\ln(2) -q_0/n, \nonumber \\ H_n(z) = & \frac{4}{2m-1} {}_2F_1(1,1-m;3/2-m;z) \quad = \frac{2}{mA_m}\sum_{k=0}^{m-1} A_{m-1-k}z^k. \label{EHn} \end{align} (3.2) In the classical Laguerre case where $$w(x) = x^\alpha e^{-x}$$, we have \begin{align} H_n(z) &= 4, \nonumber \\ \beta_n &= 4n. \nonumber \end{align} The latter value for $$\beta_n$$ is well known, and it implies that the largest root of the Laguerre polynomial of degree $$n$$ grows approximately like $$4n$$. 3.2 General polynomial $$Q(x)$$ For general polynomial $$Q(x)$$, $$\beta_n$$ has an asymptotic expansion with fractional powers, $$\beta_n \sim n^{1/m} \sum_{k=0}^\infty \beta^{1,k}n^{-k/m} \quad \text{as } n \rightarrow \infty. \label{EbetaExpa}$$ (3.3) To compute the coefficients $$\beta^{1,k}$$, we start from the equation at the end of the proof of Vanlessen (2007, Proposition 3.4): $$\sum_{k=0}^m \tfrac k2 q_k A_k \left. \frac{d^j}{d\epsilon^j}(\beta(\epsilon)^k\epsilon^{m-k})\right|_{\epsilon =0}\, = 0 =\, \sum_{k=\max(m-j,1)}^m k q_k A_k \beta^{k,j-m+k}, \label{EstartBeta}$$ (3.4) where we have defined $$\beta(\epsilon)^k \sim \left(\sum_{l=0}^\infty \beta^{1,l} \epsilon^l \right)^k \sim \sum_{l=0}^\infty \beta^{k,l} \epsilon^{l} \nonumber$$ as $$\epsilon$$ approaches zero, where the coefficients can be calculated as $$\beta^{k,l} = \sum_{i=0}^l \beta^{k-1,i} \beta^{1,l-i}. \label{Ebetaa}$$ (3.5) One uses the result (Vanlessen, 2007, (3.8)) $$\beta^{1,0} = (m q_m A_m/2)^{-1/m}, \label{Ebeta0}$$ (3.6) and then recursively computes (3.5) for $$l\leq j=0$$. 
Next, (3.4) leads to \begin{align} \beta^{1,j} = & -\left\{ m q_m A_m\left(\left[\left(\beta^{1,0}\right)^{m-2} \sum_{i=1}^{j-1} \beta^{1,j-i} \beta^{1,i}\right] + \sum_{i=0}^{m-3} (\beta^{1,0})^i \sum_{k=1}^{j-1} \beta^{1,j-k} \beta^{m-1-i,k} \right) \right. \nonumber \\ & \left. + \sum_{k=\max(m-j,1)}^{m-1} k q_k A_k \beta^{k,j-m+k} \right\} \left(m^2 q_m A_m [\beta^{1,0}]^{m-1}\right)^{-1} \nonumber \end{align} for $$j=1$$ and so on. We see that $$q_0$$ does not influence the MRS number, since it only rescales the weight function. The construction of $$\beta_n$$ from this Section satisfies the condition (3.1) asymptotically up to the correct order, and also the following explicit result from (Vanlessen, 2007, (3.8)): $$\beta^{1,1} = \frac{-2(m-1)q_{m-1}}{m(2m-1)q_m}. \label{Ebeta1}$$ (3.7) With these results, we can compute the polynomials $$H_n(x)$$ and the coefficients $$l_n$$ as (Vanlessen, 2007, Section 3): \begin{align} H_n(z) = & \sum_{k=0}^{m-1} z^k\sum_{j=k+1}^m \frac{q_j}{n} \beta_n^j A_{j-k-1} \qquad \sim \sum_{k=0}^{m-1} z^k\sum_{j=k+1}^m q_j A_{j-k-1} n^{j/m-1} \sum_{i=0}^\infty \beta^{j,i} n^{-i/m} \quad \text{as } n \rightarrow \infty,\nonumber\\ l_n = & -4\log(2) -\sum_{k=0}^m \frac{q_k}{n}\beta_n^k A_k.\label{EHn_nonmon} \end{align} (3.8) 3.3 General function $$Q(x)$$ For the calculation of the functions in case $$Q(x)$$ is not a polynomial, we introduce a numerical method. We provide an initial guess $$\beta_n^{(0)}$$ that satisfies $$n=Q(\beta_n^{(0)})$$ to an iterative numerical procedure, so $$\beta_n^{(0)} = Q^{-1}(n)$$. The procedure finds a $$\beta_n$$ that approximately satisfies (3.1), where the integral is computed by numerical integration. This procedure can be a Newton method, for example, and if needed, $$Q^{-1}(n)$$ and $$Q^\prime(x)$$ can be approximated numerically as well. Care has to be taken when $$Q(x)$$ is nonconvex since there may be multiple values of $$\beta_n$$, which may invalidate our asymptotic results. 
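A minimal sketch (ours) of this numerical procedure: the substitution $$x = \beta \sin^2\theta$$ turns (3.1) into the smooth integral $$2\pi n = 2\beta \int_0^{\pi/2} Q'(\beta \sin^2\theta)\,\sin^2\theta \,{\rm d}\theta$$, which we approximate by the midpoint rule; for simplicity we use bisection instead of Newton, which is safe for convex $$Q$$ since the right-hand side is then increasing in $$\beta$$.

```python
import math

def mrs_number(Qprime, n, beta_hi, M=2000):
    """Numerically solve (3.1) for the MRS number beta_n.

    The substitution x = beta*sin(theta)**2 gives the smooth integral
        2*pi*n = 2*beta * int_0^{pi/2} Qprime(beta*sin(theta)**2) * sin(theta)**2 dtheta,
    evaluated by the midpoint rule; the root in beta is found by bisection
    on [0, beta_hi], valid when the left-hand side is monotone in beta."""
    h = (math.pi / 2) / M
    def F(beta):
        s = sum(Qprime(beta * math.sin(t)**2) * math.sin(t)**2
                for t in (h * (j + 0.5) for j in range(M)))
        return 2 * beta * h * s - 2 * math.pi * n
    lo, hi = 0.0, beta_hi
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if F(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $$Q(x)=x$$ this reproduces $$\beta_n = 4n$$, and for monomial $$Q(x)=q_m x^m$$ it matches the closed form $$\beta_n = \left(2n/(m q_m A_m)\right)^{1/m}$$ of Section 3.1.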
The other functions are given by the integrals (Vanlessen, 2007, (3.11), (3.16), (3.38), (3.40), (3.24)) \begin{align} h_n(z) &= \frac{1}{2\pi i}\oint_{{\it{\Gamma}}_z} \frac{\sqrt{y}V_n^\prime(y)\, {\rm d}y}{\sqrt{y-1}(y-z)}, \nonumber \\ \xi_n(z) &= \frac{-1}{2} \int_1^z \frac{\sqrt{y-1}}{\sqrt{y}} h_n(y)\, {\rm d}y, \nonumber \\ l_n &= 2\int_0^1 \frac{\log(|1/2-y|)\sqrt{1-y}}{2\pi\sqrt{y}}h_n(y)\, {\rm d}y -\frac{Q(\beta_n/2)}{n}. \nonumber \end{align} The contour $${\it{\Gamma}}_z$$ for $$h_n$$ should enclose the interval $$[0,1]$$ and the point $$z$$. We choose $${\it{\Gamma}}_z$$ to be a circle with centre halfway between the interval $$[0,1]$$ and $$z$$, large enough to include both $$z$$ and the interval. The integrals for $$\xi_n(z)$$ and $$l_n$$ are also calculated numerically, so the former is computed as a double numerical integral. Remark 3.1 These expressions are also valid for polynomial $$Q$$. However, following the reasoning in this subsection only leads to a numerical value of $$\beta_n$$ for a given $$n$$, as opposed to a full asymptotic expansion of $$\beta_n$$ in fractional powers of $$n$$. The same observation holds for the functions defined above. In this case, the powers $$n^{-1/m}$$ are implicitly present in all quantities that involve $$\beta_n$$, whereas the results for polynomial $$Q$$ are more explicit. We compare both approaches further in Remark 6.1 and Section 7.4. 3.4 Explicit expressions satisfied by the MRS numbers Thus far, we have obtained either asymptotic expansions of $$\beta_n$$ or a numerical estimate. The cases in which explicit expressions can be derived are limited, but we provide some helpful expressions in this section.
Inspired by (2.6), we invert that conformal map by the change of coordinates $$x=\beta_n(\varphi+1)^2/(4\varphi)$$ in the integral (3.1): $$\frac{-8\pi i n}{\beta_n} = \int_{\it{\Upsilon}} Q^\prime(x)\frac{(\varphi+1)^2}{\varphi^2} \,{\rm d}\varphi.\nonumber$$ The contour $${\it{\Upsilon}}$$ is half the unit circle, starting at $$\varphi = -1$$ through $$i$$ to $$1$$. Note that $$x$$ is real valued for $$\varphi$$ on this halfcircle in the upper half of the complex plane. Hence, it is also real valued when we take the complex conjugate of $$\varphi$$, corresponding to $$\varphi$$ on the halfcircle in the lower half of the complex plane. If we assume that $$Q^\prime$$ is real for real arguments, then the integral on the negative halfcircle, i.e., from $$-1$$ through $$-i$$ to $$1$$, is the complex conjugate of the integral above. Combining both, we find that $\frac{16\pi i n}{\beta_n} = \int_{\it{\Xi}} Q^\prime(x)\frac{(\varphi+1)^2}{\varphi^2} \,{\rm d}\varphi,$ where $${\it{\Xi}}$$ is a circle enclosing the origin $$\varphi=0$$ in the counterclockwise direction. The described change of variables maps a point $$\varphi$$ in the interior of the unit circle to $$x \in \mathbb{C} \setminus [0,\beta_n]$$. If $$Q$$ is not entire, we may have to subtract additional residues; otherwise, the contour $${\it{\Xi}}$$ encloses a single pole at the origin. In the case where $$Q(x)$$ is a polynomial of degree $$m$$, we obtain from the residue theorem that $$\beta_n$$ is a root of a polynomial of degree $$m$$: \begin{align} \frac{8n}{\beta_n} &= \text{Res} \left( \frac{(\varphi+1)^2}{\varphi^2} \sum_{k=1}^m k q_k \left(\frac{(\varphi+1)^2\beta_n}{4\varphi}\right)^{k-1}, \quad \varphi=0 \right), \nonumber \\ \frac{8n}{\beta_n} &= \left[\sum_{k=2}^{m} k q_k \left(\frac{\beta_n}{4}\right)^{k-1} {2k-2 \choose k-2}\right] + 2\left[\!\sum_{k=1}^m k q_k \left(\frac{\beta_n}{4}\right)^{k-1}\!\!
{2k-2 \choose k-1} \!\right] + \left[\!\sum_{k=2}^m k q_k \left(\frac{\beta_n}{4}\right)^{k-1} {2k-2 \choose k} \!\!\right]\!, \nonumber \\ 8n &= 2q_1\beta_n + 4\sum_{k=2}^{m} k {2k \choose k} q_k \left(\frac{\beta_n}{4}\right)^k. \label{EexplBeta} \end{align} (3.9) Note that (3.3) provides the asymptotic expansion of the relevant root of the $$m$$th degree polynomial (3.9) in $$\beta_n$$, in which $$n$$ enters only through the constant coefficient. Exact solutions for $$\beta_n$$ are only available up to $$m=4$$. For $$m=1$$, this boils down to the standard associated Laguerre case $$\beta_n=4n/q_1$$. For $$m=2$$, we take the positive solution that also corresponds to (3.6) and (3.7): $$\beta_n = \frac{-q_1+\sqrt{q_1^2+24q_2 n}}{3q_2} \sim \sqrt{\frac{8n}{3q_2}} -\frac{q_1}{3q_2} +\sqrt{\frac{q_1^4}{864q_2^3 n}} + \mathcal{O}(n^{-3/2}). \nonumber$$ We do find an explicit result for the nonpolynomial function $$Q(x) = \exp(x)$$. In that case, we have \begin{align} 8n &\sim \beta_n \text{Res} \left( \frac{1}{\varphi} \sum_{k=0}^\infty \frac{1}{k!} \left(\frac{\beta_n}{4}\right)^k \left(\varphi^{-1} +2 +\varphi \right)^{k+1}, \varphi=0 \right) \nonumber \\ 8n &\sim \beta_n \sum_{k=0}^\infty \frac{1}{k!} \left(\frac{\beta_n}{4}\right)^k {2k+2 \choose k+1} \nonumber \\ 8n &= 2\beta_n \exp\left(\frac{\beta_n}{2}\right)\left[I_0\left(\frac{\beta_n}{2}\right) + I_1\left(\frac{\beta_n}{2}\right) \right] \label{EexponQbeta} \end{align} (3.10) \begin{align} 4n &\sim e^{\beta_n} \sqrt{\frac{\beta_n}{\pi}} \left[2-\frac{1}{2\beta_n}\right] {\Rightarrow} \beta_n \sim \frac{W(8\pi n^2)}{2} \sim \log(n)-\frac{\log(\log(8\pi n^2))}{2}+\frac{\log(8\pi)}{2}, \label{EbetaExp} \end{align} (3.11) where $$W$$ denotes the Lambert-W function, and $$I_0$$ and $$I_1$$ are modified Bessel functions. For general $$Q(x)$$, a similar technique may allow one to find an explicit expression like (3.10) without integrals that is satisfied by $$\beta_n$$.
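For a given $$n$$, equation (3.9) is straightforward to solve numerically. A minimal Python sketch (the bisection setup is our own; it assumes the right-hand side of (3.9) is increasing in $$\beta_n > 0$$, which holds, e.g., for nonnegative $$q_k$$ with $$q_m > 0$$), checked against the explicit solutions for $$m=1$$ and $$m=2$$ above:

```python
from math import comb, sqrt

def mrs_residual(beta, n, q):
    # 2*q_1*beta + 4*sum_{k=2}^m k*binom(2k,k)*q_k*(beta/4)^k - 8n, cf. (3.9)
    m = len(q) - 1                      # q = [q_0, q_1, ..., q_m]
    rhs = 2*q[1]*beta + 4*sum(k*comb(2*k, k)*q[k]*(beta/4)**k
                              for k in range(2, m + 1))
    return rhs - 8*n

def mrs_number(n, q):
    # Bisection for the unique positive root of (3.9).
    lo, hi = 0.0, 1.0
    while mrs_residual(hi, n, q) < 0:
        hi *= 2
    for _ in range(100):
        mid = 0.5*(lo + hi)
        if mrs_residual(mid, n, q) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

# m = 1: classical Laguerre, beta_n = 4n/q_1.
assert abs(mrs_number(10, [0.0, 1.0]) - 40.0) < 1e-8
# m = 2: agrees with the closed form (-q_1 + sqrt(q_1^2 + 24 q_2 n))/(3 q_2).
n, q1, q2 = 5, 1.0, 2.0
assert abs(mrs_number(n, [0.0, q1, q2])
           - (-q1 + sqrt(q1*q1 + 24*q2*n))/(3*q2)) < 1e-8
```

For coefficient vectors where monotonicity fails, a polynomial root finder applied to (3.9) can be used instead; only the root consistent with the asymptotic expansion (3.3) is relevant.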
Solving that expression numerically avoids having to evaluate the integral (3.1). However, it might become quite involved to derive higher order terms as explicitly as in Sections 6.3 and 6.4 from the resulting expansion of $$\beta_n$$ as $$n \rightarrow \infty$$. 4. Asymptotics of orthonormal polynomials $$\boldsymbol{p_n(x)}$$ and related coefficients The expansion for $$p_n(x)$$ follows from substituting the expansion of the function $$R(z)$$ described in Section 2.2 into the following expressions. Multiple regions of the complex plane are identified in (2.2), leading to three different expressions: $$R^{{\mathrm{outer}}}(z)$$ (see the appendix, combine the appendix with (5.6), or combine (2.3) with (5.9)); $$R^{{\mathrm{right}}}(z)$$ (see (5.7) or (5.8)); and $$R^{{\mathrm{left}}}(z)$$ (see (5.7) or (5.8)). 4.1 Lens I Putting together the consecutive transformations in Vanlessen (2007) for $$z \in {\mathrm{I}}$$ in Fig. 1, and noting that $$x$$ and $$z$$ are related by $$x = \beta_n z$$, we obtain \begin{align} p_n(\beta_n z) &= \frac{\beta_n^{n}\gamma_n\, e^{n (V_n(z)+l_n)/2} }{z^{1/4}(1-z)^{1/4} z^{\alpha/2} } \begin{pmatrix} 1 \\ 0 \end{pmatrix}^{\rm T} R^{{\mathrm{outer}}}(z) \label{EpiInt} \\ & \begin{pmatrix} 2^{-\alpha} \cos(\arccos(2z-1)[1/2+\alpha/2] + n\xi_n(z)/i -\pi /4) \\ -i 2^{\alpha} \cos(\arccos(2z-1)[\alpha/2-1/2] + n\xi_n(z)/i -\pi /4) \end{pmatrix}. \nonumber \end{align} (4.1) The asymptotics of $$\gamma_n$$ are given in Section 4.5. The full asymptotic expansion of $$p_n(x)$$ is obtained by substituting the expansion for $$R^{{\mathrm{outer}}}(z)$$ that we derive later on. The asymptotic expansions of the orthonormal polynomials all separate two oscillatory terms (where phase functions are multiplied by $$n$$) from the nonoscillatory higher order terms. For polynomial $$Q(x)$$ of degree $$m$$, the asymptotic expansion truncated after $$T$$ terms corresponds to a relative error of size $$\mathcal{O}(n^{-T/m})$$.
In the special case of a (constant plus) monomial $$Q(x) = q_0 + q_m x^m$$, the relative error improves to $$\mathcal{O}(n^{-T})$$. 4.2 Outer region II For $$z \in$$ II, the asymptotic expansion is \begin{align} p_n(\beta_n z) &= \frac{\beta_n^{n}\gamma_n\, e^{n (V_n(z)/2+\theta(z)\xi_n(z)+l_n/2)} \exp(i\theta(z)\arccos(2z-1)\alpha/2) }{2 z^{1/4}(z-1)^{1/4}z^{\alpha/2}}\begin{pmatrix} 1 \\ 0 \end{pmatrix}^{\rm T} R^{{\mathrm{outer}}}(z) \label{EpiOut} \\ & \begin{pmatrix} 2^{-\alpha} \exp(i\theta(z)\arccos(2z-1)/2) \\ -i 2^{\alpha}\exp(-i\theta(z)\arccos(2z-1)/2) \end{pmatrix}. \nonumber \end{align} (4.2) It may appear to be problematic that $$\exp(Q(\beta_n z)/2) = \exp(n V_n(z)/2)$$ appears in the asymptotic expansions of the polynomials in the complex plane, especially for this region, since this factor grows very quickly. However, one may verify that this exponential behaviour is cancelled by other terms. More specifically, the term $$\exp(n\xi_n(z))$$ in (4.2) ensures that $$p_n(x) = \mathcal{O}(x^n)$$, $$x \rightarrow \infty$$. 4.3 Right disk III The polynomials behave like an Airy function near the right endpoint $$z=1$$ ($$z \in {\mathrm{III}}$$). This is typical asymptotic behaviour near a so-called 'soft edge', in the language of random matrix theory.
Note that the $$\theta(z)$$ in the following expression removes the branch cut, so that it can be used throughout $$\mathbb{C}$$, away from $$z=0$$: \begin{align} p_n(\beta_n z) &= \gamma_n \beta_n^{n} \frac{z^{-\alpha/2} \sqrt{\pi}}{z^{1/4} (z-1)^{1/4} }\, e^{n(V_n(z) + l_n)/2} \begin{pmatrix} 1 \\ 0 \end{pmatrix}^{\rm T} R^{{\mathrm{right}}}(z) \label{Epiboun} \\ & \begin{pmatrix} 2^{-\alpha}\left\{\cos\left[\frac{(\alpha+1)\arccos(2z-1)}{2} \right] Ai(f_n(z)) f_n(z)^{1/4} -i\sin\left[\frac{(\alpha+1)\arccos(2z-1)}{2} \right] Ai'(f_n(z))f_n(z)^{-1/4} \theta(z)\right\} \\ 2^\alpha \left\{-i\cos\left[\frac{(\alpha-1)\arccos(2z-1)}{2} \right] Ai(f_n(z)) f_n(z)^{1/4} -\sin\left[\frac{(\alpha-1)\arccos(2z-1)}{2}\right] Ai'(f_n(z)) f_n(z)^{-1/4} \theta(z) \right\}\end{pmatrix}. \nonumber \end{align} (4.3) We would like to note a possible issue when computing the zeros of this expression, as one would do for Gaussian quadrature. The largest root for the standard associated Laguerre polynomials asymptotically behaves as $$4n+2\alpha+2 + 2^{2/3}a_1(4n+2\alpha+2)^{1/3}$$, with $$a_1$$ the (negative) zero of the Airy function closest to zero (see Szegő (1967, (6.32.4)); Olver et al. (2010, (18.16.14)); NIST (2016)). For a fixed but relatively high $$\alpha$$, this point may lie outside the support of the equilibrium measure $$(0,4n)$$ (Vanlessen, 2007, Remark 3.8). For any fixed $$\alpha$$, however, the point lies inside for sufficiently large $$n$$. Still, for large $$\alpha$$ one may want to pursue a different kind of asymptotic expansion, for example, using asymptotics with a varying weight $$x^{\alpha(n)}\exp(-Q(x))$$, as studied in, for example, Bosbach & Gawronski (1998), and apply it to the fixed $$\alpha$$. 4.4 Left disk IV The polynomials behave like a Bessel function of order $$\alpha$$ near the left endpoint $$z=0$$.
For $$z \in$$ IV, we obtain \begin{align} & p_n(\beta_n z) = \gamma_n \beta_n^{n} \frac{(-1)^n \left(in\bar{\phi}_n(z)\pi\right)^{1/2} }{z^{1/4} (1-z)^{1/4}} z^{-\alpha/2} \,e^{n(V_n(z) + l_n)/2} \begin{pmatrix} 1 \\ 0 \end{pmatrix}^{\rm T} R^{{\mathrm{left}}}(z) \label{Epilboun} \\ & \begin{pmatrix} 2^{-\alpha} \left\{ \sin\left[\frac{(\alpha+1)\arccos(2z-1)}{2} -\frac{\pi\alpha}{2}\right] J_\alpha\left(2in \bar{\phi}_n(z)\right) + \cos\left[\frac{(\alpha+1)\arccos(2z-1)}{2} -\frac{\pi\alpha}{2}\right] J_\alpha'\left(2in \bar{\phi}_n(z)\right) \right\} \\ -i 2^\alpha \left\{ \sin\left[ \frac{(\alpha-1)\arccos(2z-1)}{2} -\frac{\pi\alpha}{2}\right] J_\alpha\left(2in \bar{\phi}_n(z)\right) + \cos\left[\frac{(\alpha-1)\arccos(2z-1)}{2} -\frac{\pi\alpha}{2}\right] J_\alpha'\left(2in \bar{\phi}_n(z)\right) \right\} \end{pmatrix}. \nonumber \end{align} (4.4) It is not immediately obvious that the expansions (4.3) and (4.4) are analytic in the points $$z=1$$ and $$z=0$$, respectively. This will follow from the expression (5.7) for $$R^{{\mathrm{left}}/{\mathrm{right}}}(z)$$ and by also making a series expansion of the other terms at those points. For numerical purposes, it may be better to use those series expansions when evaluating close to (or at) $$z=0$$ and $$z=1$$. 4.5 Asymptotics of leading order coefficients The leading order coefficient of the orthonormal polynomials is $$\gamma_n$$, i.e., we have $p_n(x) = \gamma_n\pi_n(x),$ where $$\pi_n(x)$$ is the monic orthogonal polynomial of degree $$n$$. For a monomial or more general function $$Q(x)$$, the asymptotic expansion of $$\gamma_n$$ is \begin{align} \gamma_n & \sim \frac{\beta_n^{-n-\alpha/2-1/2}\exp(-n l_n/2) \sqrt{\frac{2}{\pi}} 2^{\alpha} } {\sqrt{1-4i4^{\alpha} \sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2} } {n^k} } } \quad \text{as } n \rightarrow \infty. 
\label{Egamma} \end{align} (4.5) The quantities $$U_{k,1}^{{\mathrm{right}}/{\mathrm{left}}}$$ are defined and extensively described in Section 5. They are the constant $$2\times 2$$ matrices that multiply $$(z - 1)^{-1} n^{-k}$$ and $$z^{-1}n^{-k}$$, respectively, in the expansion for $$R(z)$$, of which we use the upper right elements here. Explicit expressions for these matrices up to $$k=3$$ are given in the appendix. The constant coefficient $$q_0$$ only changes the scaling of the weight function and does not influence $$\beta_n$$ or the matrices. However, it does influence $$\gamma_n$$ through the coefficient $$l_n$$, contributing a factor $$\exp(q_0/2)$$ to $$\gamma_n$$. For general polynomial $$Q(x)$$, the power of $$n$$ changes from $$k$$ to $$(k-1)/m+1$$: \begin{align} \gamma_n & \sim \frac{\beta_n^{-n-\alpha/2-1/2}\exp(-n l_n/2) \sqrt{\frac{2}{\pi}} 2^{\alpha} } {\sqrt{1-4i4^{\alpha} \sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2} } {n^{(k-1)/m+1} } } } \quad \text{as } n \rightarrow \infty. \nonumber \end{align} This reflects the more accurate asymptotic information about $$\beta_n$$ available for polynomial $$Q$$. It is understood here that one substitutes the asymptotic expansion of $$\beta_n$$ in this formula. We retain this formulation here to show the analogy with (4.5).
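As a sanity check on (4.5), consider the classical Laguerre weight ($$\alpha=0$$, $$Q(x)=x$$), for which $$\beta_n = 4n$$ and the integral for $$l_n$$ with $$h_n \equiv 4$$ evaluates to $$l_n = -2-4\log 2$$ (a value we assume here). Dropping the $$\mathcal{O}(1/n)$$ sum in (4.5) should then reproduce the exact $$\gamma_n = 1/\sqrt{n!\,{\it\Gamma}(n+\alpha+1)}$$ up to a relative error of order $$1/n$$; a Python sketch:

```python
from math import lgamma, log, pi

# Leading-order check of (4.5) for the classical Laguerre weight
# (alpha = 0, Q(x) = x): beta_n = 4n and l_n = -2 - 4*log(2), a value
# assumed here (it follows from the integral for l_n with h_n = 4).
n, alpha = 50, 0.0
beta = 4.0*n
l_n = -2.0 - 4.0*log(2.0)

# log of (4.5) with the O(1/n) sum dropped:
log_gamma_asym = (-(n + alpha/2 + 0.5)*log(beta) - n*l_n/2
                  + 0.5*log(2/pi) + alpha*log(2))
# Exact orthonormal leading coefficient: 1/sqrt(n! * Gamma(n+alpha+1)).
log_gamma_exact = -0.5*(lgamma(n + 1) + lgamma(n + alpha + 1))
assert abs(log_gamma_asym - log_gamma_exact) < 1e-2
```

For $$n=50$$, the two logarithms agree to roughly three decimal places, consistent with the claimed $$\mathcal{O}(1/n)$$ accuracy of the truncated expansion.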
4.6 Asymptotics of recurrence coefficients In the three-term recurrence relation (1.1), the recurrence coefficients have the following large $$n$$ asymptotic expansion \begin{align} & a_n \sim \beta_n \left[\frac{-\alpha}{4} + \vphantom{\frac{ \frac{4^{-\alpha}i(\alpha+2)}{16} + \left(\sum_{k=3}^\infty\frac{U_{k,2}^{{\mathrm{left}}}|_{1,2} }{n^k} \right) + \left(\sum_{k=1}^\infty\frac{(U_{k,2}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{right}}} )|_{1,2} }{n^k} \right) + \left(\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,1} 4^{-\alpha-1}i + (U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2}\alpha/4 }{n^k} \right) } {4^{-\alpha-1}i + \left(\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2} }{n^k}\right) }} \left\{\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,1} }{n^k}\right\} \right.\nonumber \\ &+ \left. \frac{ \frac{4^{-\alpha}i(\alpha+2)}{16} + \left(\!\sum_{k=3}^\infty\frac{U_{k,2}^{{\mathrm{left}}}|_{1,2} }{n^k} \!\right) + \left(\!\sum_{k=1}^\infty\frac{(U_{k,2}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{right}}} )|_{1,2} }{n^k} \!\right) + \left(\!\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,1} 4^{-\alpha-1}i + (U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2}\alpha/4 }{n^k} \! \right) } {4^{-\alpha-1}i + \left(\!\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2} }{n^k}\!\right)}\! \right] \nonumber \end{align} and \begin{align}\nonumber b_{n-1} \sim \frac{\beta_n}{4} & \left[1+4i\left(\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{2,1}4^{-\alpha} -4^\alpha (U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2} }{n^k}\right) \right. \nonumber \\ & \quad{}+\left. 
16\left(\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{2,1} }{n^k}\right) \left(\sum_{k=1}^\infty \frac{(U_{k,1}^{{\mathrm{right}}}+U_{k,1}^{{\mathrm{left}}} )|_{1,2} }{n^k}\right) \right]^{1/2}. \nonumber \end{align} The quantities $$U_{k,1}^{{\mathrm{right}}/{\mathrm{left}}}$$ in these expressions are the same as those appearing in (4.5) above. For general polynomial $$Q(x)$$, the powers of $$n$$ again change from $$k$$ to $$(k-1)/m+1$$ and the expression is otherwise unchanged. 5. Computation of higher order terms The literature on Riemann–Hilbert problems suggests a way to compute the asymptotic expansion of $$R(z)$$. In principle, it is clear how expressions can be obtained, but this involves many algebraic manipulations, summations and recursions. An important contribution of Deaño et al. (2016) was to identify a set of simplifications that significantly improve the efficiency of numerical evaluation of the expressions. Similar simplifications can be performed in the current setting of Laguerre-type polynomials, though the expressions are of course very different. In this section, as well as in the next one, we will use the symbol $$\sim$$ interchangeably for Poincaré, Taylor and Laurent series expansions and their combinations. Moreover, as it should be clear from the context that $$n \rightarrow \infty, w \rightarrow 0$$ and $$z \rightarrow 1$$ or $$0$$, this limit is often omitted for the sake of brevity. 5.1 Jumps of $$R(z)$$ Section 1 indicated that higher order terms in the asymptotic expansion for $$\pi_n(z)$$ are of practical interest, and the previous section showed that these can be obtained by computing the higher order terms $$R_k(z)$$ in (2.3). To that end, we recall that $$R$$ satisfies a Riemann–Hilbert problem with jumps across the contours shown in Fig. 1. We proceed as in Deaño et al. (2016) by writing the jump matrix for $$R(z)$$ as a perturbation of the identity matrix, $$I+{\it{\Delta}}(z)$$. 
Starting from Vanlessen (2007, (3.108)) we have, for $$z$$ on the boundary $${\it{\Sigma}}_R$$ of one of the disks as shown in Section 2, $$\label{Ejump2} R^{{\mathrm{outer}}}(z) = R^{{\mathrm{right}}/{\mathrm{left}}}(z)(I+{\it{\Delta}}^{{\mathrm{right}}/{\mathrm{left}}}(z)), \qquad z \in {\it{\Sigma}}_R.$$ (5.1) We then consider a full asymptotic expansion in powers of $$1/n$$ for $${\it{\Delta}}(z)$$: $$\nonumber {\it{\Delta}}(z) \sim \sum_{k=1}^{\infty} \frac{{\it{\Delta}}_k(z)}{n^k}, \qquad n\to\infty.$$ On the boundary of the disks, the terms $${\it{\Delta}}_k(z)$$ can be written explicitly as $${\it{\Delta}}_k^{{\mathrm{right}}/{\mathrm{left}}}(z)$$ (Vanlessen, 2007, (3.76), (3.98)): \begin{align} {\it{\Delta}}_k^{{\mathrm{right}}}(z) & = \frac{P^{(\infty)}(z)z^{\alpha/2\sigma_3}}{2\left(- \xi_n(z) \right)^k} \begin{pmatrix} (-1)^k\nu_k & -6k i\nu_k \\ 6k i(-1)^k\nu_k & \nu_k \end{pmatrix} z^{-\alpha/2\sigma_3} P^{(\infty)}(z)^{-1}, \quad 0 < |z-1| < \delta_2, \label{EdefDR} \\ \end{align} (5.2) \begin{align} {\it{\Delta}}_k^{{\mathrm{left}}}(z) & = \frac{(\alpha,k{-}1)}{\left(4\bar{\phi}_n(z) \right)^k}P^{(\infty)}(z) (-z)^{\alpha/2\sigma_3} \begin{pmatrix} \tfrac{(-1)^k}{k}(\alpha^2{+}\tfrac{k}{2} {-} \tfrac{1}{4}) & \left(k{-}\tfrac{1}{2}\right)i \\ (-1)^{k+1}\left(k-\tfrac{1}{2}\right)i & \tfrac{1}{k}(\alpha^2{+}\tfrac{k}{2} {-} \tfrac{1}{4}) \end{pmatrix} (-z)^{-\alpha/2\sigma_3} P^{(\infty)}(z)^{-1}, \label{EdefDL} \end{align} (5.3) with $$0 < |z| < \delta_3$$ for some sufficiently small $$\delta_2$$ and $$\delta_3 > 0$$. However, for a general polynomial or function $$Q(x)$$, the $${\it{\Delta}}_k^{{\mathrm{right}}/{\mathrm{left}}}(z)$$ also depend on $$n$$. We will extract the $$n$$-dependence explicitly for general polynomial $$Q(x)$$ and use contour integrals for each required $$n$$ otherwise. 
$${\it{\Delta}}_k^{{\mathrm{left}}}(z)$$ has poles of order at most $$\lceil k/2 \rceil$$ at $$z=0$$ (Vanlessen, 2007, Remark 3.29) as in the Jacobi case in Deaño et al. (2016), but $${\it{\Delta}}_k^{{\mathrm{right}}}(z)$$ has poles of order at most $$\lceil 3k/2 \rceil$$ at $$z=1$$ (Vanlessen, 2007, Remark 3.22). The $${\it{\Delta}}_k(z)$$ are identically $$0$$ on the other boundaries of the regions in Fig. 1. Remark 5.1 If $$\alpha^2 = 1/4$$ as in the Hermite case (see Section 7.2), then $${\it{\Delta}}^{{\mathrm{left}}}_k(z)$$ and $$s^{{\mathrm{left}}}_k(z)$$ are zero matrices for $$k > 1$$ and $${\it{\Delta}}^{{\mathrm{left}}}_1(z)$$ and $$s^{{\mathrm{left}}}_1(z)$$ have a Taylor series starting with $$\mathcal{O}(1)$$ near $$z=0$$. So, all $$U_{k,m}^{{\mathrm{left}}}$$ are zero matrices and can be left out of the calculation of higher order terms; that calculation is still needed, as the $$U_{k,m}^{{\mathrm{right}}}$$ are not zero. 5.2 Recursive computation of $$R_k(z)$$ for monomial $$Q(x)$$ In this case, there are no fractional powers of $$n$$ involved, and we can renumber (2.3) to simplify the formulas: $$\label{asympRnmon} R(z)\sim I + \sum_{k=1}^{\infty} \frac{R_k(z)}{n^{k}}, \qquad n \rightarrow \infty.$$ (5.4) By expanding the jump relation (5.1) and collecting the terms with equal order in $$n$$, we obtain a link between the terms $$R_k(z)$$ in the expansion (2.3) and the $${\it{\Delta}}_k$$. For $$z$$ on the boundary of the disks in Fig. 1, we have $$\label{RHPforRk} R_k^{{\mathrm{outer}}}(z)=R_k^{{\mathrm{right}}/{\mathrm{left}}}(z)+\sum_{j=1}^k R^{{\mathrm{right}}/{\mathrm{left}}}_{k-j}(z){\it{\Delta}}^{{\mathrm{right}}/{\mathrm{left}}}_j(z)$$ (5.5) with $$R_0^{{\mathrm{right}}/{\mathrm{left}}}(z)=I$$. One can solve the additive Riemann–Hilbert problem as follows: Expand the sum in (5.5) in a Laurent series around $$z=0$$ and $$1$$. Define $$R_k^{{\mathrm{outer}}}(z)$$ as the sum of all the terms containing strictly negative powers of $$z$$ and $$(z- 1)$$.
Since $$R_k(z) = \mathcal{O}(1/z)$$ as $$z \rightarrow \infty$$, positive powers do not contribute to $$R_k^{{\mathrm{outer}}}(z)$$. Define $$R^{{\mathrm{right}}/{\mathrm{left}}}_k(z)$$ as the remainder after subtracting those poles. This construction ensures that $$R_k^{{\mathrm{outer}}}$$ is analytic outside the disk, $$R^{{\mathrm{right}}/{\mathrm{left}}}_k$$ is analytic inside and (5.5) holds, as required. According to Vanlessen (2007, Remarks 3.22 and 3.29), we may write $$\nonumber {\it{\Delta}}_k^{{\mathrm{right}}/{\mathrm{left}}}(z) \sim \sum_{i=-\lceil 3k/2 \rceil }^\infty V_{k,i}^{{\mathrm{right}}/{\mathrm{left}}} (z -1/2 \mp 1/2)^i,$$ with $$V_{k,p}^{{\mathrm{left}}} \equiv 0$$ for all $$p < -\lceil k/2 \rceil$$. Note that $$V_{k,i}^{{\mathrm{right}}/{\mathrm{left}}}$$ are the Laurent coefficients of $${\it{\Delta}}_k^{{\mathrm{right}}/{\mathrm{left}}}$$ around $$z=1$$ and $$z=0$$, respectively. With $$U_{k,p}^{{\mathrm{left}}} \equiv 0$$ for all $$p > \lceil k/2 \rceil$$, this yields $$\label{ERpl} R_{k}^{{\mathrm{outer}}}(z) =\sum_{p=1}^{\lceil 3k/2 \rceil } \left(\frac{U_{k,p}^{{\mathrm{right}}} }{(z-1)^p} + \frac{U_{k,p}^{{\mathrm{left}}} }{z^p} \right).$$ (5.6) At the same time, since $$R^{{\mathrm{right}}/{\mathrm{left}}}_k(z)$$ are analytic at $$z=1$$ (respectively, $$0$$), $$R^{{\mathrm{right}}}_k(z) \sim \sum_{n=0}^\infty Q_{k,n}^{{\mathrm{right}}} (z-1)^n, \quad R^{{\mathrm{left}}}_k(z) \sim \sum_{n=0}^\infty Q_{k,n}^{{\mathrm{left}}} z^n, \label{ERrlseries}$$ (5.7) with some coefficients $$Q^{{\mathrm{right}}/{\mathrm{left}}}_{k,n}$$ that can be determined as well, for example, via symbolic differentiation.
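The additive splitting just described can be sketched on a scalar toy example (the Laurent data below are hypothetical, not an actual $${\it{\Delta}}_1$$): for $$k=1$$, the jump relation (5.5) reads $$R_1^{{\mathrm{outer}}} = R_1^{{\mathrm{right}}} + {\it{\Delta}}_1$$, so $$R_1^{{\mathrm{outer}}}$$ collects the strictly negative powers and $$R_1^{{\mathrm{right}}}$$ is minus the analytic part:

```python
def eval_laurent(coeffs, w):
    # coeffs: {power: coefficient} of an expansion in w = z - 1 (or w = z)
    return sum(c * w**p for p, c in coeffs.items())

delta1 = {-2: 0.3, -1: 2.0, 0: 1.0, 1: 0.5}           # hypothetical Laurent data
outer  = {p: c for p, c in delta1.items() if p < 0}    # pole part -> R_1^outer
right  = {p: -c for p, c in delta1.items() if p >= 0}  # minus analytic part -> R_1^right

w = 0.17  # a point on the boundary of the disk
lhs = eval_laurent(outer, w)
rhs = eval_laurent(right, w) + eval_laurent(delta1, w)
assert abs(lhs - rhs) < 1e-10
```

For $$k>1$$, the same splitting is applied to the full sum in (5.5), with the convolutions handled by the coefficient recursions that follow.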
It follows from the additive jump relation (5.5) that \begin{align} U_{k,p}^{{\mathrm{right}}/{\mathrm{left}}} = & \hspace{1.5mm} V_{k,-p}^{{\mathrm{right}}/{\mathrm{left}}} + \sum_{j=1}^{k-1}\sum_{l=0}^{\lceil 3j/2 \rceil -p} Q_{k-j,l}^{{\mathrm{right}}/{\mathrm{left}}} V_{j,-p-l}^{{\mathrm{right}}/{\mathrm{left}}}, \nonumber \\ Q_{k,n}^{{\mathrm{right}}/{\mathrm{left}}} = & \left(\sum_{i=1}^{\lceil 3k/2 \rceil} {-i \choose n} (\pm 1)^{i+n} U_{k,i}^{{\mathrm{left}}/{\mathrm{right}}} \right) \nonumber \\ & -V_{k,n}^{{\mathrm{right}}/{\mathrm{left}}} -\sum_{j=1}^{k-1}\sum_{l=0}^{\lceil 3j/2 \rceil +n} Q_{k-j,l}^{{\mathrm{right}}/{\mathrm{left}}} V_{j,n-l}^{{\mathrm{right}}/{\mathrm{left}}}. \nonumber \end{align} Here, the choice $$(- 1)^{i+n} U_{k,i}^{{\mathrm{right}}}$$ in the $$\pm$$ and left/right notation corresponds to $$Q_{k,n}^{{\mathrm{left}}}$$. In Section 5.4, we will explore an alternative way to compute the matrices $$U_{k,m}^{{\mathrm{right}}/{\mathrm{left}}}$$. 5.3 Recursive computation of $$R_k(z)$$ for general polynomial $$Q(x)$$ For general polynomial $$Q(x)$$, the $${\it{\Delta}}_k(z)$$ also depend on $$n$$, so we need fractional powers of $$n$$. We introduce Laurent coefficients with an extra index, indicating the power of $$n^{-1/m}$$, $$\nonumber {\it{\Delta}}^{{\mathrm{right}}/{\mathrm{left}}}_k(z) \sim \sum_{l=0}^\infty \left[\sum_{i=-\lceil 3k/2 \rceil}^\infty V_{k,i,l}^{{\mathrm{right}}/{\mathrm{left}}}(z -1/2 \mp 1/2)^{i} \right] n^{-l/m}.$$ In this case, we do have a general expansion for $$R$$ in terms of fractional powers, given earlier by (2.3). We arrive at Taylor series and Laurent expansions of the form: \begin{align} R^{{\mathrm{right}}/{\mathrm{left}}}_k(z) &\sim \sum_{n=0}^\infty Q_{k,n}^{{\mathrm{right}}/{\mathrm{left}}} (z -1/2 \mp 1/2)^n, \label{ERQnonmon} \end{align} (5.8) \begin{align} R^{{\mathrm{outer}}}_k(z) & = \sum_{p=1}^{\lceil 3/2 \lceil k/m\rceil \rceil } \left(\frac{U_{k,p}^{{\mathrm{right}}} }{(z-1)^p} + \frac{U_{k,p}^{{\mathrm{left}}} }{z^p} \right).
\label{ERUnonmon} \end{align} (5.9) Here, $$p$$ corresponds to the order of the pole and must be $$\leq \lceil 3/2 \lceil k/m\rceil \rceil$$. This is because that is the highest order of the pole of the $${\it{\Delta}}^{{\mathrm{left}}/{\mathrm{right}}}_q(z)$$ matrices which appear in the expansion up to $$\mathcal{O}\left(n^{-\tfrac{k-1}{m}-1}\right)$$ of the jump relation \begin{align} I + & \sum_{k=1}^{\infty} \frac{R_k^{{\mathrm{outer}}}(z)}{n^{\tfrac{k-1}{m}+1} } \sim \left(I + \sum_{k=1}^{\infty} \frac{R_k^{{\mathrm{right}}/{\mathrm{left}}}(z)}{n^{\tfrac{k-1}{m}+1} }\right) \left(I+\sum_{q=1}^{\infty} \frac{{\it{\Delta}}_q(z)}{n^q}\right) \quad \text{as } n \rightarrow \infty. \nonumber \end{align} Expanding near $$z=0$$ or $$1$$ and collecting terms with the same (fractional) power of $$n$$ in the jump relation (5.1), we obtain, after some more algebraic manipulations, \begin{align} U_{k,p}^{{\mathrm{right}}/{\mathrm{left}}} = & \left[\sum_{q=\lceil (2p-1)/(2\pm 1) \rceil}^{(k-1)/m +1} V_{q,-p,k-1+m-qm}^{{\mathrm{right}}/{\mathrm{left}}} \right] + \left[ \sum_{q=1}^{(k-1)/m} \sum_{l=0}^{k-1-mq} \sum_{i=-\lceil (2\pm 1)q/2 \rceil}^{-p} Q_{k-l-mq,-i-p}^{{\mathrm{right}}/{\mathrm{left}}} V_{q,i,l}^{{\mathrm{right}}/{\mathrm{left}}} \right]\!, \nonumber \\ Q_{j,n}^{{\mathrm{right}}/{\mathrm{left}}} = & \left[\sum_{p=1}^{\lceil (2 \mp 1)/2 \lceil j/m\rceil \rceil } U_{j,p}^{{\mathrm{left}}/{\mathrm{right}}} {-p \choose n} (\pm 1)^{n-p} \right] - \left[ \sum_{q=1}^{(j-1)/m+1} V_{q,n,j-1+m+qm}^{{\mathrm{right}}/{\mathrm{left}}} \right] \nonumber \\ & -\left[ \sum_{q=1}^{(j-1)/m} \sum_{l=0}^{j-1-qm} \sum_{i=-\lceil (2 \pm 1) q/2 \rceil}^n Q_{j-l-qm,n-i}^{{\mathrm{right}}/{\mathrm{left}}} V_{q,i,l}^{{\mathrm{right}}/{\mathrm{left}}} \right]\!. \nonumber \end{align} 5.4 Simplifications We start by writing the jump relation (5.5) using the coefficients $$R^{{\mathrm{outer}}}_{k-l}(z)$$ instead of $$R^{{\mathrm{right}}/{\mathrm{left}}}_{k-l}(z)$$.
Proposition 5.2 The jump relation (5.5) can be written as follows: $$\label{E:simplifiedjump} R^{{\mathrm{right}}/{\mathrm{left}}}_k(z) = R_k^{{\mathrm{outer}}}(z) - \sum_{l=1}^k R_{k-l}^{{\mathrm{outer}}}(z)s^{{\mathrm{right}}/{\mathrm{left}}}_l(z)$$ (5.10) with $$R_{0}^{{\mathrm{right}}/{\mathrm{left}}}(z) = I$$ and with $$\label{EcheckS} s^{{\mathrm{right}}/{\mathrm{left}}}_l(z) = {\it{\Delta}}^{{\mathrm{right}}/{\mathrm{left}}}_l(z) - \sum_{j=1}^{l-1} s^{{\mathrm{right}}/{\mathrm{left}}}_j(z){\it{\Delta}}^{{\mathrm{right}}/{\mathrm{left}}}_{l-j}(z).$$ (5.11) Proof. This can be proven by induction as in Deaño et al. (2016, Section 4.1). Equivalently, if we define $$s(z) = \sum_{i=1}^\infty s_i(z) n^{-i}$$, the relation between the sets of coefficient matrices expressed by (5.11) is simply that $$I+{\it{\Delta}}(z) = [I-s(z)]^{-1}$$. Indeed, (5.10) expresses the jump relation $$R^{{\mathrm{right}}/{\mathrm{left}}}(z) = R^{{\mathrm{outer}}}(z) [I-s(z)]$$, just as (5.5) expresses the jump relation (5.1). The result also follows by matching equal powers of $$n$$ in this relation. □ This formulation has two advantages: The jump term in (5.10) is written in terms of $$R_{k-l}^{{\mathrm{outer}}}$$ rather than $$R^{{\mathrm{right}}}_{k-l}$$, and the former has a simple and nonrecursive expression (5.6). The definition of the coefficients $$s_l^{{\mathrm{right}}/{\mathrm{left}}}$$ can be greatly simplified to a nonrecursive expression too, involving just the $${\it{\Delta}}_k^{{\mathrm{right}}/{\mathrm{left}}}$$’s. The latter property follows rather generically from the structure of the $${\it{\Delta}}_k^{{\mathrm{right}}/{\mathrm{left}}}$$’s. 
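The identity $$I+{\it{\Delta}}(z) = [I-s(z)]^{-1}$$ can be illustrated numerically with toy data (random $$2\times 2$$ coefficient matrices standing in for the actual $${\it{\Delta}}_k$$): building the $$s_l$$ from the $${\it{\Delta}}_l$$ via (5.11) and evaluating both truncated series at a small value of $$1/n$$, the product $$[I-s][I+{\it{\Delta}}]$$ equals $$I$$ up to the truncation order. A Python sketch:

```python
import random
random.seed(1)

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

# Toy data: random 2x2 matrices standing in for Delta_1..Delta_K.
K = 6
Delta = [None] + [[[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
                  for _ in range(K)]

# The recursion (5.11): s_l = Delta_l - sum_{j<l} s_j Delta_{l-j}.
s = [None]*(K + 1)
for l in range(1, K + 1):
    s[l] = Delta[l]
    for j in range(1, l):
        s[l] = sub(s[l], mul(s[j], Delta[l - j]))

# Evaluate the truncated series at nu = 1/n and multiply:
nu = 0.01
I2 = [[1.0, 0.0], [0.0, 1.0]]
D = [[I2[i][j] + sum(Delta[l][i][j]*nu**l for l in range(1, K + 1))
      for j in range(2)] for i in range(2)]
S = [[I2[i][j] - sum(s[l][i][j]*nu**l for l in range(1, K + 1))
      for j in range(2)] for i in range(2)]
P = mul(S, D)  # should be I up to O(nu^(K+1))
assert all(abs(P[i][j] - I2[i][j]) < 1e-9 for i in range(2) for j in range(2))
```

The coefficients of $$\nu^1,\dots,\nu^K$$ in the product vanish by construction, so the deviation from the identity is of order $$\nu^{K+1}$$ only.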
Lemma 5.3 For any invertible matrix function $$X(z)$$, scalar functions $$a_k(z)$$ and coefficients $$b_k$$ and $$c_k$$ such that $$a_{2k}(z) b_{2k} = (-1)^k a_k(z)^2 \left[b_k^2 + c_k^2 \right] + \sum_{j=1}^{k-1} (-1)^j a_j(z) a_{2k-j}(z) \left[b_j b_{2k-j} + c_j c_{2k-j}\right]\!, \label{Esumabc}$$ (5.12) the following holds \begin{align} s^{\text{gen}}_m(z) = {\it{\Delta}}^{\text{gen}}_m(z) - \sum_{j=1}^{m-1} s^{\text{gen}}_j(z){\it{\Delta}}^{\text{gen}}_{m-j}(z) = {\it{\Delta}}^{\text{gen}}_m(z) + \begin{cases} 0, & \text{for odd } m \\ -2b_m a_m(z) I, & \text{for even } m, \end{cases} \label{EsumSl} \end{align} (5.13) where $${\it{\Delta}}^{\text{gen}}_k(z) = a_k(z) X(z) \begin{pmatrix} (-1)^k b_k & c_k \\ (-1)^{k+1} c_k & b_k \end{pmatrix} X^{-1}(z). \label{EformD}$$ (5.14) Proof. The result (5.13) is equivalent to (5.12) by rewriting the sum, as a direct computation shows that \begin{align*} C_{j,m-j} = s^{\text{gen}}_j(z){\it{\Delta}}^{\text{gen}}_{m-j}(z) + s^{\text{gen}}_{m-j}(z){\it{\Delta}}^{\text{gen}}_{j}(z) = \begin{cases} 0, & \text{for odd } m \\ -2 (-1)^j a_j(z) a_{m-j}(z) \left[b_j b_{m-j} + c_j c_{m-j}\right] I, & \text{for even } m. \end{cases} \end{align*} □ In our case, (5.2) and (5.3) are clearly of the form (5.14), and we verify (5.12) via the Gosper–Zeilberger algorithm in the proof of the following proposition: Proposition 5.4 The terms $$s^{{\mathrm{right}}/{\mathrm{left}}}_l(z)$$ defined by (5.11) satisfy $$s^{{\mathrm{left}}/{\mathrm{right}}}_l(z) = {\it{\Delta}}^{{\mathrm{left}}/{\mathrm{right}}}_l(z) \nonumber$$ for odd $$l$$ and \begin{align} s^{{\mathrm{left}}}_l(z) = & \hspace{1.5mm} {\it{\Delta}}^{{\mathrm{left}}}_l(z) -\frac{4\alpha^2+2l-1}{(4\bar{\phi}_n(z))^l}\frac{(\alpha,l-1)}{2l} I, \nonumber \\ s^{{\mathrm{right}}}_l(z) = & \hspace{1.5mm} {\it{\Delta}}^{{\mathrm{right}}}_l(z) -\frac{\nu_l}{(-\xi_n(z))^l}I, \nonumber \end{align} for even $$l$$, where $$\xi_n$$ and $$\bar{\phi}_n$$ are as defined in Section 2.3. Proof.
This can be proven again by mathematical induction, completely analogous to Deaño et al. (2016, Section B) for the left case. We should note that gcd$$(q_n,r_{n+j})$$ is not necessarily one, which is needed in Gosper (1978, under (10d)), but that the suggested change in variables can eliminate the common factor. For the right case, the proof is also analogous, but with $$\lambda_{j,l-j} = a_j = (-1)^j[36(l-j)j -1]\nu_{l-j}\nu_j/2$$, which is $$-2 (-1)^j a_j(z) a_{m-j}(z) \left[b_j b_{m-j} + c_j c_{m-j}\right]$$ in the previous proposition. In the Gosper–Zeilberger algorithm (see Gosper, 1978; Koornwinder, 1993), we have $$p_j = 1-36j(l-j), q_j = -(6j-5)(6j-7) (l-j+1)$$ and $$f_j=1/l$$. □ In the next equations, the coefficients $$W_{k,i}^{{\mathrm{right}}/{\mathrm{left}}}$$ are the Laurent coefficients of $$s_k^{{\mathrm{right}}/{\mathrm{left}}}$$ for monomial $$Q$$, \begin{align} \label{Wkm} s_k^{{\mathrm{right}}/{\mathrm{left}}}(z) \sim \sum_{i=-\lceil 3k/2 \rceil}^\infty W_{k,i}^{{\mathrm{right}}/{\mathrm{left}}} (z -1/2 \mp 1/2)^i. \end{align} (5.15) They are used to compute $$U_{k,p}^{{\mathrm{right}}/{\mathrm{left}}}$$ directly, based on (5.10). This has the advantage of needing less memory, as the $$Q_{k,n}^{{\mathrm{right}}/{\mathrm{left}}}$$ are not needed any more. Still for monomial $$Q(x)$$, that leads us to \begin{align} U_{k,p}^{{\mathrm{right}}/{\mathrm{left}}} &= \hspace{1.5mm} W_{k,-p}^{{\mathrm{right}}/{\mathrm{left}}} + \sum_{j=1}^{k-1}\sum_{l=\max(p-\lceil (2 \pm 1)j/2 \rceil,1) }^{\lceil (2 \pm 1) (k-j)/2 \rceil} U_{k-j,l}^{{\mathrm{right}}/{\mathrm{left}}} W_{j,l-p}^{{\mathrm{right}}/{\mathrm{left}}} \nonumber \\ & \hspace{1.5mm} +\sum_{j=1}^{k-1}\sum_{n=0}^{\lceil (2 \pm 1) j/2 \rceil -p} \left(\sum_{i=1}^{\lceil (2 \mp 1) (k-j)/2 \rceil} (\pm 1)^{i+n} { -i \choose n} U_{k-j,i}^{{\mathrm{left}}/{\mathrm{right}}} \right) W_{j,-n-p}^{{\mathrm{right}}/{\mathrm{left}}}. 
\nonumber \end{align} It might be necessary to approximate the orthonormal polynomials near $$z=0$$ and $$1$$, for which it is inaccurate and computationally expensive to use (5.10) and certainly the recursive application of (5.5). So optionally, one can still compute the series expansion of $$R^{{\mathrm{right}}/{\mathrm{left}}}_k(z)$$ afterwards, using \begin{align} Q_{k,n}^{{\mathrm{right}}/{\mathrm{left}}} = & \hspace{1.5mm} \left(\sum_{i=1}^{\lceil (2 \mp 1)k/2 \rceil} {-i \choose n} (\pm 1)^{-i-n} U_{k,i}^{{\mathrm{left}}/{\mathrm{right}}} \right)-W_{k,n}^{{\mathrm{right}}/{\mathrm{left}}} -\sum_{j=1}^{k-1}\sum_{i=n+1}^{\lceil (2 \pm 1)(k-j)/2 \rceil +n} U_{k-j,i-n}^{{\mathrm{right}}/{\mathrm{left}}} W_{j,i}^{{\mathrm{right}}/{\mathrm{left}}} \nonumber \\ & -\sum_{j=1}^{k-1}\sum_{i=-\lceil (2\pm1) j/2 \rceil}^{n} \sum_{l=1}^{\lceil (2 \mp 1)(k-j)/2 \rceil} { -l \choose n-i} (\pm 1)^{i-n+l} U_{k-j,l}^{{\mathrm{left}}/{\mathrm{right}}} W_{j,i}^{{\mathrm{right}}/{\mathrm{left}}}. \nonumber \end{align} For general polynomial $$Q(x)$$, we have the more general expansion $$\label{Wkmnonmon} s_k^{{\mathrm{right}}/{\mathrm{left}}}(z) \sim \sum_{l=0}^\infty \sum_{i=-\lceil 3k/2 \rceil}^\infty W_{k,i,l}^{{\mathrm{right}}/{\mathrm{left}}} (z -1/2 \mp 1/2)^i n^{-l/m},$$ (5.16) which leads to \begin{align} U_{k,q}^{{\mathrm{right}}} = & \left[\sum_{j=\lfloor 2q/3\rfloor}^{\lfloor (k-1)/m \rfloor} W_{j+1,-q,k-1-jm}^{{\mathrm{right}}}\right] + \sum_{l=0}^{k-1-m} \sum_{j=0}^{\lfloor (k-1-l)/m-1 \rfloor} \sum_{i=\max(-\lceil 3j/2 \rceil,1-q)}^{\lceil 3l/2 \rceil-q} U_{l,q+i}^{{\mathrm{right}}} W_{j+1,i,k-1-l-jm-m}^{{\mathrm{right}}} \nonumber \\ & + \sum_{l=0}^{k-1-m} \sum_{p=1}^{\lceil l/2 \rceil } U_{l,p}^{{\mathrm{left}}} \sum_{j=0}^{\lfloor (k-1-l)/m-1 \rfloor} \sum_{i=-\lceil 3j/2 \rceil}^{-q} W_{j+1,i,k-1-l-jm-m}^{{\mathrm{right}}} {-p \choose -q-i}, \nonumber \\ U_{k,q}^{{\mathrm{left}}} = & \left[\sum_{j=2q}^{\lfloor (k-1)/m \rfloor} 
W_{j+1,-q,k-1-jm}^{{\mathrm{left}}} \right] + \sum_{l=0}^{k-1-m} \sum_{j=0}^{\lfloor (k-1-l)/m-1 \rfloor} \sum_{i=\max(-\lceil j/2 \rceil,1-q)}^{\lceil l/2 \rceil -q } U_{l,q+i}^{{\mathrm{left}}} W_{j+1,i,k-1-l-jm-m}^{{\mathrm{left}}} \nonumber \\ & \quad + \sum_{l=0}^{k-1-m} \sum_{p=1}^{\lceil 3l/2 \rceil} U_{l,p}^{{\mathrm{right}}} \sum_{j=0}^{\lfloor (k-1-l)/m-1 \rfloor} \sum_{i=-\lceil j/2 \rceil}^{-q} W_{j+1,i,k-1-l-jm-m}^{{\mathrm{left}}} (-1)^{q+i+p} {-p \choose -q-i}. \nonumber \end{align} 6. Explicit series expansions for $$\boldsymbol{s_k^{{\mathrm{left}}/{\mathrm{right}}}(z)}$$ and $$\boldsymbol{{\it{\Delta}}_k^{{\mathrm{left}}/{\mathrm{right}}}(z)}$$ In this section, we derive fully explicit expressions for the coefficients $$W_{k,i}^{{\mathrm{right}}/{\mathrm{left}}}$$, defined by (5.15) or (5.16) for a monomial, general polynomial and general function $$Q$$, respectively. These expressions are amenable to implementation without further symbolic manipulations. The process and terminology of symbols mimics that used in Deaño et al. (2016) for Jacobi polynomials. We aim to be concise yet complete (thus needing Russian characters) and expand $$s_k^{{\mathrm{left}}/{\mathrm{right}}}(z)$$ together with $${\it{\Delta}}_k^{{\mathrm{left}}/{\mathrm{right}}}(z)$$ in power series, where the coefficients are computed using convolutions. 6.1 Left disk with monomial $$Q(x)$$ First, we consider $$s_k^{{\mathrm{left}}}(z)$$, where we know that $$W_{k,i}^{{\mathrm{left}}} \equiv 0$$ for all $$i < -\lceil k/2 \rceil$$.
We have $$\label{EDeltak_Tk} {\it{\Delta}}_k^{{\mathrm{left}}}(z) = \frac{(\alpha,k-1)}{4^k \bar{\phi}_n(z)^k} 2^{-\alpha\sigma_3} G_k(z) 2^{\alpha\sigma_3},$$ (6.1) where $$G_k$$ is defined for odd and even $$k$$ as \begin{align} G_k^{\text{odd}}(z) &= \frac{\alpha^2 + \frac{k}{2}-\frac{1}{4}}{4k z^{1/2}(z-1)^{1/2}} \begin{pmatrix} -4z+2 & 2i \\ 2i & 4z-2 \end{pmatrix} + \frac{\left(k-\frac{1}{2}\right)i}{4 z^{1/2}(z-1)^{1/2}} \begin{pmatrix} -2\cos(y_{\alpha}) & -2i\cos(y_{\alpha+1})\\ -2i\cos(y_{\alpha-1}) & 2\cos(y_{\alpha}) \end{pmatrix}, \nonumber \\ G_k^{\text{even}}(z) &= \frac{\alpha^2 + \frac{k}{2}-\frac{1}{4} }{4 k z^{1/2}(z-1)^{1/2}} \begin{pmatrix} 4\sqrt{z}\sqrt{z-1} & 0\\ 0 & 4\sqrt{z}\sqrt{z-1} \end{pmatrix}\nonumber\\ &\quad{} + \frac{\left(k-\frac{1}{2}\right)i}{4 z^{1/2}(z-1)^{1/2}} \begin{pmatrix} -2\sin(y_{\alpha}) & -2i\sin(y_{\alpha+1})\\ -2i\sin(y_{\alpha-1}) & 2\sin(y_{\alpha}) \end{pmatrix}. \nonumber \end{align} One should remark that $$(-\varphi(z))^\alpha \neq \varphi(z)^\alpha(-1)^\alpha$$ with the principal branch when deriving this formula. Also, we have used $$\varphi(z) = \exp(i\theta(z)\arccos(2z-1))$$. The functions $$y_\gamma$$ above depend on $$z$$ by \begin{align} y_\gamma & = \theta(z) \gamma \left( \arccos(2z-1) - \pi \right), & & & \nonumber \\ y_{\gamma} & \sim -\theta(z) 2\gamma\sqrt{z} \sum_{j=0}^{\infty}\rho_{1,j} z^j & \rho_{1,j} & = \frac{(1/2)_j}{j!(1+2j)}, & \rho_{0,j} & = \delta_{j,0}, \nonumber \\ y_\gamma^k & \sim (-\theta(z) 2\gamma\sqrt{z})^k \sum_{j=0}^\infty \rho_{k,j} z^j & \rho_{k,j} & = \sum_{l=0}^j \rho_{k-1,l} \rho_{1,j-l}, & & \nonumber \end{align} where we have used the Kronecker delta $$\delta_{i,j}$$. Note that with the principal branch for the powers, $$y_{\gamma}$$ is real on the interval $$[0,1]$$.
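Since $$\arccos(2z-1) - \pi = -2\arcsin(\sqrt{z})$$, the coefficients $$\rho_{1,j}$$ above are just the Taylor coefficients of $$\arcsin(x)/x$$, and $$\rho_{k,j}$$ those of its $$k$$-th power. The convolution recursion is then immediate to implement; a minimal sketch in exact rational arithmetic (the function name is ours):

```python
from fractions import Fraction
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def rho(k, j):
    # rho_{0,j} = delta_{j,0};  rho_{1,j} = (1/2)_j / (j! (1+2j));
    # rho_{k,j} = sum_{l=0}^{j} rho_{k-1,l} rho_{1,j-l}.
    if k == 0:
        return Fraction(int(j == 0))
    if k == 1:
        poch = Fraction(1)  # Pochhammer symbol (1/2)_j
        for i in range(j):
            poch *= Fraction(1, 2) + i
        return poch / (factorial(j) * (1 + 2 * j))
    return sum(rho(k - 1, l) * rho(1, j - l) for l in range(j + 1))
```

As a check, the coefficient of $$x^4$$ in $$(\arcsin(x)/x)^2$$ is $$\rho_{2,2} = 8/45$$.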
We expand all terms appearing in the definitions of $$G_k$$: \begin{align*} \frac{\cos(y_{\gamma})}{2 z^{1/2}(z-1)^{1/2}} & \sim \frac{-i}{2\sqrt{z}} \sum_{m=0}^{\infty} z^m \sum_{j=0}^m {-1/2 \choose j} (-1)^{j} \sum_{l=0}^{m-j} \frac{(-1)^l}{(2l)!} (-2\gamma)^{2l} \rho_{2l,m-j-l}, \nonumber \\ \frac{\sin(y_{\gamma})}{2 z^{1/2}(z-1)^{1/2}} & \sim \gamma i \sum_{m=0}^{\infty} z^m \sum_{j=0}^m {-1/2 \choose j} (-1)^{j} \sum_{l=0}^{m-j} \frac{(-1)^l}{(2l+1)!} (-2\gamma)^{2l} \rho_{2l+1,m-j-l}, \nonumber \\ \frac{4z-2}{4z^{1/2}(z-1)^{1/2}} & \sim \frac{i}{2\sqrt{z}} + \frac{i}{2\sqrt{z}} \sum_{n=1}^{\infty} \left(2{-1/2 \choose n-1} +{-1/2 \choose n}\right)(-1)^{n} z^n. \end{align*} We still need to expand the power $$\bar{\phi}_n(z)^{-k}$$ in (6.1) as $$z \to 0$$. To that end, we can combine (2.4), (2.5) and (3.2). It is quite standard, but increasingly tedious, for series expansions to involve convolutions whose coefficients can be found recursively. That is the origin of the coefficients $$f_j$$ and $$g_{k,m}$$ below, and we will use this pattern several times more in the remainder of this section: \begin{align*} \bar{\phi}_n(z) & \sim \theta(z) i \sqrt{z} \sum_{j=0}^\infty f_j z^j, & f_j & = -\frac{(1/2)_j }{j!(1+2j)} - \frac{1}{2 m A_m}\sum_{k=0}^{\min(m-1,j)} (-1)^{j-k} {1/2 \choose j-k} A_{m-k-1},\nonumber\\ (\bar{\phi}_n(z))^{-1} & \sim (-z)^{-1/2} \sum_{m=0}^{\infty} g_{1,m} z^{m} & g_{1,m} & = \frac{-1}{f_0} \left(-\delta_{m,0} + \sum_{j=0}^{m-1} g_{1,j} f_{m-j} \right), \nonumber \\ (\bar{\phi}_n(z))^{-k} & \sim (-z)^{-k/2} \sum_{m=0}^{\infty} g_{k,m} z^{m} & g_{k,m} & = \sum_{l=0}^{m} g_{k-1,l} g_{1,m-l}.
\end{align*} The coefficients $$W_{k,i}^{{\mathrm{left}}}$$ in expansion (5.15) for the functions $$s_k^{{\mathrm{left}}}(z)$$ are given by \begin{align} W_{k,i}^{{\mathrm{left}}} & = \frac{(\alpha,k-1)}{-(-1)^{\lceil k/2 \rceil +1}4^k} 2^{-\alpha\sigma_3}\sum_{j=0}^{i+(k+1)/2} g_{k,j} G_{k,i+(k+1)/2-j}^{\operatorname{odd} } 2^{\alpha\sigma_3}, \nonumber \\ W_{k,i}^{{\mathrm{left}}} & = \frac{(\alpha,k-1)}{-(-1)^{\lceil k/2 \rceil +1}4^k} 2^{-\alpha\sigma_3}\left( \sum_{j=0}^{i+k/2} g_{k,j} G_{k,i+k/2-j}^{\operatorname{even}} \right) 2^{\alpha\sigma_3} - \frac{(\alpha,k-1)(4\alpha^2+2k-1)g_{k,i+k/2}}{2 k 4^k}I, \nonumber \end{align} respectively, for odd and even $$k$$. The $$V_{k,i}^{{\mathrm{left}}}$$ can be obtained by leaving out the term with $$I$$. 6.2 Right disk with monomial $$Q(x)$$ Unlike in the Jacobi case, the expressions for the left and right disks are not symmetric, since they correspond to qualitatively different behaviour of the polynomials near a hard edge and near a soft edge. With $$w = z-1$$ and $$\tau_\gamma = -\theta(w+1)\gamma \left(\arccos(2w+1) \right)$$, we have \begin{align} {\it{\Delta}}_k^{{\mathrm{right}}}(z) = & \frac{2^{-\alpha\sigma_3}}{2\left(- \xi_n(z) \right)^k} {\it{\Omega}}_k(z) 2^{\alpha\sigma_3}, \nonumber \\ {\it{\Omega}}_k^{\text{odd}}(z) = & \frac{\nu_k}{4\sqrt{w}\sqrt{w+1}}\begin{pmatrix} -4w-2 & 2i \\ 2i & 4w+2 \end{pmatrix} + \frac{-6k\nu_k}{4\sqrt{w}\sqrt{w+1}} \begin{pmatrix} -2\cos(\tau_{\alpha}) & 2i\cos(\tau_{\alpha+1})\\ 2i\cos(\tau_{\alpha-1}) & 2\cos(\tau_{\alpha}) \end{pmatrix},\nonumber \\ {\it{\Omega}}_k^{\text{even}}(z) = & \nu_k I + \frac{-6k\nu_k}{4\sqrt{w}\sqrt{w+1}} \begin{pmatrix} -2i\sin(\tau_{\alpha}) & -2\sin(\tau_{\alpha+1})\\ -2\sin(\tau_{\alpha-1}) & 2i\sin(\tau_{\alpha}) \end{pmatrix}. \nonumber \end{align} The entries of $${\it{\Omega}}_k(z)$$ are expanded analogously to the entries of $$G_k(z)$$ in Section 6.1. With these expressions in hand, we focus again on the phase function.
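The reciprocal-series coefficients $$g_{1,m}$$ and their convolution powers $$g_{k,m}$$, which recur throughout this section, amount to standard power-series inversion. A minimal sketch under that reading (function names are ours), for a series with nonvanishing leading coefficient:

```python
from fractions import Fraction

def invert_series(f, N):
    # g with (sum_j f_j w^j) * (sum_m g_m w^m) = 1 + O(w^N), assuming f[0] != 0:
    # g_m = -(-delta_{m,0} + sum_{j=0}^{m-1} g_j f_{m-j}) / f_0.
    g = []
    for m in range(N):
        acc = Fraction(-1) if m == 0 else Fraction(0)
        acc += sum(g[j] * f[m - j] for j in range(m) if m - j < len(f))
        g.append(-acc / f[0])
    return g

def inverse_power_series(f, k, N):
    # Coefficients of (sum_j f_j w^j)^{-k}, built by repeated convolution with
    # the reciprocal, mirroring g_{k,m} = sum_{l=0}^m g_{k-1,l} g_{1,m-l}.
    g1 = invert_series(f, N)
    g = g1
    for _ in range(k - 1):
        g = [sum(g[l] * g1[m - l] for l in range(m + 1)) for m in range(N)]
    return g
```

For example, with $$f = 1 - w$$ this reproduces $$(1-w)^{-2} = \sum_m (m+1) w^m$$.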
We construct the power series of $$\xi_n(1+w)^{-k}$$ as follows: \begin{align} [\sqrt{1+w}-1]^m & \sim w^m \sum_{l=0}^\infty u_{m,l} w^l & u_{1,l} & = {1/2 \choose l}, \qquad \qquad \qquad u_{m,l} = \sum_{j=0}^l u_{m-1,j} u_{1,l-j}, \nonumber \\ \sqrt{2-2\sqrt{1+w}} &\sim \sqrt{-2w} \sum_{k=0}^\infty r_k w^k & r_k & = \sum_{l=0}^k {1/2 \choose k-l} v_{k-l,l} 2^{k-l}, \qquad \qquad v_{1,j} = {1/2 \choose j+2}, \nonumber \\ v_{m,l} & = \sum_{j=0}^l v_{m-1,j} v_{1,l-j} & q_m & = \sum_{l=0}^m \frac{(1/2)_{m-l} u_{m-l,l} }{(-2)^{m-l}((m-l)! (1+2m-2l))}, \nonumber \end{align} \begin{align} \xi_n(1+w) \sim \sqrt{w} \sum_{j=1}^\infty f_j w^j\quad f_j & = 2\sum_{l=0}^j q_l r_{j-l} - \frac{1}{m A_m}\sum_{k=0}^{\min(m-1,j)} (-1)^{m-k-1} {1/2 \choose j-k} \frac{{\it{\Gamma}}(-1/2-k)}{{\it{\Gamma}}(1/2-m){\it{\Gamma}}(m-k)}, \label{Efxin} \end{align} (6.2) \begin{align} \left(-\xi_n(1+w)\right)^{-1} & \sim w^{-3/2} \sum_{m=0}^{\infty} g_{1,m} w^{m} & g_{1,m} & = \frac{-1}{f_1} \left(\delta_{m,0} + \sum_{j=1}^{m} f_{j+1} g_{1,m-j}\right), \nonumber \\ \left(-\xi_n(1+w)\right)^{-k} &\sim w^{-3k/2} \sum_{m=0}^{\infty} g_{k,m} w^m & g_{k,m} & = \sum_{l=0}^{m} g_{k-1,l} g_{1,m-l}. \nonumber \end{align} The phase function $$\xi_n(z)$$ should be $$\mathcal{O}\left( (z-1)^{3/2} \right)$$ (Vanlessen, 2007, Remark 3.22) and (6.2) indeed indicates that $$f_0 = -2+2q_0r_0$$ is zero, so we have started the indices in the expansion of $$\xi_n(z)$$ from $$j=1$$. Keeping in mind the Kronecker delta $$\delta_{i,j}$$ and that $${-1/2 \choose -1} = 0$$, we arrive at the final result, to which we add $$-\nu_k g_{k,i+3k/2}I$$ when $$k$$ is even. 6.3 Left disk with general polynomial $$Q(x)$$ We can reuse the expansion of $$G_k(z)$$. However, in this case $$H_n(z)$$ has an expansion in $$n$$, given by (3.8).
Hence, we have to determine the expansion of $$\bar{\phi}_n(z)$$ (2.5) near $$z=0$$ anew, but afterwards one continues as in Section 6.1 to get the $$W_{k,i,l}^{{\mathrm{left}}}$$ from (5.16): \begin{align} \bar{\phi}_n(z) &\sim -\theta(z) i \sqrt{z} \sum_{l=0}^\infty \sum_{j=0}^\infty f_j^l z^j n^{-l/m}, \nonumber \\ f_j^0 &= \frac{(1/2)_j }{j!(1+2j)} + \frac{1}{2mA_m}\sum_{k=0}^{\min(m-1,j)} (-1)^{j-k} {1/2 \choose j-k} A_{m-k-1}, \nonumber \\ f_j^l &= \frac{1}{2} \sum_{i=0}^{\min(j,m-1)} (-1)^{j-i} {1/2 \choose j-i} \sum_{n=\max(i+1,m-l)}^m q_n A_{n-i-1} \beta^{n,n+l-m} \qquad {\rm for}~l > 0, \nonumber \\ (\bar{\phi}_n(z))^{-1} &\sim (-z)^{-1/2} \sum_{l=0}^\infty \sum_{j=0}^{\infty} g_{1,j}^l z^j n^{-l/m}, \qquad \qquad g_{1,n}^0 = \frac{-1}{f_0^0} \left( -\delta_{n,0} + \sum_{i=0}^{n-1} g_{1,i}^0 f_{n-i}^0\right), \nonumber \\ g_{1,i}^l &= \sum_{y=0}^i \sum_{p=0}^{i-y} g_{1,i-y-p}^0 \sum_{q=0}^{l-1} g_{1,y}^q f_p^{l-q}, \qquad \qquad g_{k,j}^0 = \sum_{l=0}^j g_{k-1,l}^0 g_{1,j-l}^0, \nonumber \\ (\bar{\phi}_n(z))^{-k} & \sim (-z)^{-k/2} \sum_{l=0}^\infty \sum_{j=0}^{\infty} g_{k,j}^l z^j n^{-l/m}, \qquad \qquad g_{k,n}^l = \sum_{j=0}^l \sum_{i=0}^n g_{k-1,i}^{j} g_{1,n-i}^{l-j}. \nonumber \end{align} This gives a pole of order $$\mathcal{O}(z^{-\lceil k/2 \rceil})$$ in $${\it{\Delta}}^{{\mathrm{left}}}_k(z)$$, as it should (Vanlessen, 2007, Remark 3.29). 6.4 Right disk with general polynomial $$Q(x)$$ We can also reuse the expansion of $${\it{\Omega}}_k(z)$$ and the expansion of $$\xi_n(z)$$ up to the definition of $$f_j$$, including the expansion of $$\arccos(\sqrt{z})$$ from Section 6.2.
Expanding (2.4), we get \begin{align} \xi_n(1+w) &\sim \sqrt{w} \sum_{l=0}^\infty \sum_{j=1}^\infty f_j^l w^j n^{-l/m}, \nonumber \\ f_j^0 &= \left[2\sum_{l=0}^{\min(j,m-1)} q_l r_{j-l}\right] -\frac{1}{m A_m}\left[\sum_{i=0}^{\min(j,m-1)} {1/2 \choose j-i} (-1)^{m-i-1} \frac{{\it{\Gamma}}(-1/2-i)}{{\it{\Gamma}}(-1/2-m){\it{\Gamma}}(m-i)} \right]\!, \nonumber \\ f_j^l &= \frac{-1}{2} \sum_{k=0}^{m-1} \left[ \sum_{s=\max(k+1,m-l)}^m q_s A_{s-k-1} \beta^{s,s+l-m} \right] \left\{\sum_{i=0}^{\min(k,j)} {1/2 \choose j-i} {k \choose i}\right\}. \label{EfxinNonmon} \end{align} (6.3) It follows by the construction in Section 3.2 that $$f_0^l = 0$$. Thus, $$\left(-\xi_n(1+w) \right)^{-k}$$ again has a pole of order $$\mathcal{O}(w^{- 3k/2 })$$: \begin{align} (-\xi_n(1+w))^{-1} & \sim w^{-3/2} \sum_{l=0}^\infty \sum_{j=0}^\infty g_{1,j}^l w^j n^{-l/m}, & g_{1,i}^0 & = \frac{-1}{f_1^0} \left(\delta_{i,0} + \sum_{j=1}^i f_{j+1}^{0} g_{1,i-j}^{0}\right), \nonumber\\ g_{1,i}^l & = \sum_{y=0}^{i-1} \sum_{p=1}^{i-y} g_{1,i-y-p}^0 \sum_{q=0}^{l-1} g_{1,y}^q f_{p+1}^{l-q}, & g_{k,j}^0 & = \sum_{l=0}^j g_{k-1,l}^0 g_{1,j-l}^0 , \nonumber \\ \left(-\xi_n(1+w) \right)^{-k} & \sim w^{-3k/2} \sum_{l=0}^\infty \sum_{j=0}^{\infty} g_{k,j}^l w^j n^{-l/m}, & g_{k,n}^l & = \sum_{j=0}^l \sum_{i=0}^n g_{k-1,i}^{j} g_{1,n-i}^{l-j}. \nonumber \end{align} 6.5 Left disk with general function $$Q(x)$$ For general $$Q$$, the MRS number $$\beta_n$$ depends on $$n$$ in a way that is not easy to predict, see (3.11), for example. As a result, so does $$V_n(x)$$. This means that the series expansions that form the result of this section also depend on $$n$$. Hence, strictly speaking, they are not the true asymptotic expansions. However, for any given $$n$$, they can still be useful in computations and require computational time independent of $$n$$.
We proceed by expanding the function $$h_n$$ in a Taylor series, using contour integrals for the coefficients (see also Section 3.3): $$\label{Edn} h_n(z) \sim \sum_{l=0}^\infty d_l(n) z^l \qquad {\rm with}~d_l(n) = \frac{1}{2\pi i} \oint_{{\it{\Gamma}}_z} \frac{\sqrt{y}V_n^\prime(y) \,{\rm d}y}{\sqrt{y-1}y^{l+1}}.$$ (6.4) Continuing the analysis as before, we find an expansion for $$\bar{\phi}_n$$ using (2.5) and the power series of an integral. We plug this into Section 6.3 and continue with the other equations in Section 6.1 to get the expansion of $$s_k^{{\mathrm{left}}}(z)$$ for general $$Q(x)$$. The resulting matrices are also treated as for monomial $$Q(x)$$ in Sections 5.2 and 5.4 to obtain the $$U$$- and $$Q$$-matrices. 6.6 Right disk with general function $$Q(x)$$ We proceed as in the previous section by expanding $$h_n(z)$$ using contour integrals: $$\label{Ecn} h_n(z) \sim \sum_{l=0}^\infty c_l(n) (z-1)^l \quad {\rm with}\ c_l(n) = \frac{1}{2\pi i} \oint_{{\it{\Gamma}}_z} \frac{\sqrt{y}V_n^\prime(y) \,{\rm d}y}{\sqrt{y-1}(y-1)^{l+1}}.$$ (6.5) We can again continue with the phase function as before. Remark 6.1 Note that in order to use these expressions, one only needs to compute $$\beta_n$$ and the contour integrals numerically. In contrast, when $$Q(x)$$ is a general polynomial, we need to define $$m$$ times more coefficients to compute (6.3), even though we could also just use those same contour integrals. A brief numerical comparison is given in Section 7.4; note that the expressions that we have specifically derived for general polynomial $$Q$$ are fully explicit. 7. Examples and numerical results 7.1 Monomial $$Q(x)$$ A case of specific interest in the context of Gaussian quadrature is the standard Laguerre polynomial. We illustrate the accuracy of the asymptotic expansion in the left disk using our simplifications ((5.10) and Proposition 5.4) in the left part of Fig. 2.
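For the classical Laguerre weight, reference values for such comparisons can be generated directly from (1.1), since the recurrence coefficients are known exactly: $$a_n = 2n+\alpha+1$$ and $$b_n = \sqrt{(n+1)(n+\alpha+1)}$$. A minimal sketch (the function name is ours; for $$\alpha=0$$ the result equals $$(-1)^n L_n(x)$$):

```python
import math

def laguerre_orthonormal(n, x, alpha=0.0):
    # Evaluate the orthonormal Laguerre polynomial p_n(x) with positive leading
    # coefficient from (1.1): b_n p_{n+1} = (x - a_n) p_n - b_{n-1} p_{n-1},
    # with a_n = 2n + alpha + 1 and b_n = sqrt((n+1)(n+alpha+1)).
    p_prev = 0.0
    p = 1.0 / math.sqrt(math.gamma(alpha + 1))  # p_0 normalizes x^alpha e^{-x}
    for k in range(n):
        a_k = 2 * k + alpha + 1
        b_k = math.sqrt((k + 1) * (k + alpha + 1))
        b_km1 = math.sqrt(k * (k + alpha)) if k > 0 else 0.0
        p_prev, p = p, ((x - a_k) * p - b_km1 * p_prev) / b_k
    return p
```

For $$\alpha = 0$$, $$p_3(x) = -L_3(x) = -(1 - 3x + \tfrac{3}{2}x^2 - \tfrac{1}{6}x^3)$$, so $$p_3(2) = 1/3$$.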
The values we compare with are computed using a recurrence relation for orthonormal polynomials with exact coefficients, and calculations are performed in double precision. We evaluate at a point close to the normalized origin. The errors decrease as $$\mathcal{O}(n^{-T})$$ with $$T$$ the number of terms, as expected. For small $$n$$, the expansions may diverge with increasing $$T$$, and the errors saturate at about $$10^{-14}$$. Fig. 2. Relative error of the asymptotic expansion in the left boundary region as a function of the degree, for $$w(x) = e^{-x}, x=4n/1000$$ (left) and $$w(x) = x^{2.8} \exp(-0.7x^3-3/2), x = \beta_n(1 - i)/100$$ (right). In the right part of Fig. 2, we show results for another monomial $$Q(x)$$, where the higher order terms are now calculated with (5.7) and where the summation index $$n$$ ranges from $$0$$ to $$11$$. Here, we have used high-precision arithmetic to compute the reference solution using standard methods. In the continuous Lanczos algorithm (Trefethen & Bau, 1997, Algorithm 37.1) for the computation of the recurrence coefficients, we have to evaluate integrals such as $$a_n = \int_0^\infty x^{\alpha+1} \exp(-Q(x)) p_{n-1}^2(x) \,{\rm d}x$$. To obtain sufficiently accurate ‘exact’ results using the recurrence relation, we had to evaluate the recurrence coefficients with 26 digits of accuracy, a computation that we performed in Julia. All computations with the asymptotic expansions were performed in standard floating point double precision. The errors again decrease like $$\mathcal{O}(n^{-T})$$ as we expect, hence we conclude that the higher order terms are computed correctly.
The asymptotic expansions of the coefficients $$\gamma_n$$, $$\alpha_n$$ and $$\beta_n$$, and of the polynomials in the other regions and for other values of $$x$$ exhibit similar behaviour. 7.2 Connection with Hermite polynomials NIST (2016, 18.7.17) states that $$H_{2n}(x) = (-1)^n 2^{2n}n!L_n^{(-1/2)}(x^2)$$, but as these easily overflow numerically, we construct normalized Hermite polynomials by $$H_0^{\text{norm}}(x) = \pi^{-1/4}, \quad H_{-1}^{\text{norm}}(x) = 0, \quad H_j^{\text{norm}}(x) = \frac{2x H_{j-1}^{\text{norm}}(x)}{\sqrt{2j}} -\frac{(j-1)H_{j-2}^{\text{norm}}(x)}{\sqrt{j(j-1)} }. \nonumber$$ As $$(2n)!\sqrt{\pi} = n! 4^n {\it{\Gamma}}(n+1/2)$$, we have that $$H_{2n}^{\text{norm}}(x) = L_n^{(-1/2),\text{norm}}(x^2)$$, the normalized associated Laguerre polynomial with positive leading coefficient. In the left part of Fig. 3, we see that the asymptotic expansion in the right disk (4.3) (for the summation index $$n$$ from $$0$$ to $$10$$) converges as expected to $$H_{2n}^{\text{norm}}(x)$$ as a function of $$n$$, evaluated at $$x^2 = 0.97(4n)$$ using (5.7), $$Q(x) = x$$ and $$\alpha = -1/2$$. Fig. 3. Relative error on the normalized Hermite polynomials as a function of the degree of the associated Laguerre polynomial $$n$$, for $$w(x) = \exp(-x^2), H_{2n}(x), x=\sqrt{3.88n}$$ (left) and $$w(x) = \exp(-x^4+3x^2), H_{2n+1}(x), x = \sqrt{0.31\beta_n}$$ (right). All calculations (also the recurrence coefficients) were performed in double precision.
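The normalized recurrence above is immediate to implement and avoids the overflow of $$2^{2n} n!$$; a small sketch (the function name is ours), which can be checked against $$H_j(x)/\sqrt{2^j j! \sqrt{\pi}}$$ for small $$j$$:

```python
import math

def hermite_normalized(j, x):
    # H_j^norm(x) = H_j(x) / sqrt(2^j j! sqrt(pi)), via the stabilized
    # recurrence H_j^norm = 2x H_{j-1}^norm / sqrt(2j)
    #                       - (j-1) H_{j-2}^norm / sqrt(j(j-1)).
    h_prev = 0.0              # H_{-1}^norm
    h = math.pi ** -0.25      # H_0^norm
    for i in range(1, j + 1):
        down = (i - 1) * h_prev / math.sqrt(i * (i - 1)) if i > 1 else 0.0
        h_prev, h = h, 2 * x * h / math.sqrt(2 * i) - down
    return h
```

For instance, $$H_2^{\text{norm}}(x) = (4x^2-2)/(2\sqrt{2}\,\pi^{1/4})$$.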
For odd degrees, a similar reasoning gives $$H_{2n+1}^{\text{norm}}(x) = x L_n^{(1/2),\text{norm}}(x^2)$$. Now, we explore the connection of a generalized weight $$\exp(-x^4+3x^2)$$ on $$(-\infty, \infty)$$ with a weight $$\exp(-x^2+3x)$$ on $$[0, \infty)$$, using the expansion in the lens (4.1). Although the right panel of Fig. 3 shows higher errors, we do get the $$\mathcal{O}(n^{-T/m})$$ convergence we expect, with $$T$$ the number of terms and $$m=2$$. It also illustrates that taking more terms is not always advantageous, as the asymptotic expansions diverge when increasing $$T$$ for a fixed $$n$$. This effect is more pronounced when $$n$$ is low. In both cases, $$\alpha^2 = 1/4$$ and the expansion in the Bessel region exhibits trigonometric behaviour as in the lens. Unlike in the Jacobi case (Deaño et al., 2016, Section 2.6), they are not exactly equal for the same number of terms, but they agree more and more as $$n$$ and/or $$T$$ increase. However, the computation of higher order terms can be improved in this case, as mentioned in Remark 5.1. One could also go through Deift et al. (1999a) to obtain asymptotics of Hermite-type polynomials, and there are indeed many analogies between both approaches, as Vanlessen (2007) was inspired by it. One advantage of exploiting the connection with Laguerre-type polynomials is that the $$U_{k,m}^{{\mathrm{left}}}$$ matrices are zero, so their computation can be omitted, while the other approach computes $$U$$-matrices near both soft edges when straightforwardly implemented in an analogous way. 7.3 General function $$Q(x)$$ In this section, we provide numerical results for our claim that the expansions can also be used for general functions $$Q(x)$$, for the case $$Q(x) = \exp(x)$$. We have been able to verify that $$\int_0^\beta \exp(x)\sqrt{x/(\beta-x)} \,{\rm d}x/(2\pi)$$ agrees numerically with $$\beta\exp(\beta/2) [I_0(\beta/2)+I_1(\beta/2)] /4$$, see (3.10), as long as these do not overflow.
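This check is straightforward to reproduce; the substitution $$x = \beta\sin^2 t$$ removes the endpoint singularity of the integrand. Assuming the standard MRS condition $$\frac{1}{2\pi}\int_0^{\beta_n} Q'(x)\sqrt{x/(\beta_n-x)}\,{\rm d}x = n$$ (which for $$Q(x)=x$$ gives $$\beta_n = 4n$$), the closed form can also be fed to a root finder to obtain $$\beta_n$$ numerically. A sketch with our own function names, using only the standard library:

```python
import math

def bessel_i(nu, z, terms=60):
    # Modified Bessel function I_nu(z), integer nu >= 0, via its power series.
    return sum((z / 2) ** (2 * k + nu) / (math.factorial(k) * math.factorial(k + nu))
               for k in range(terms))

def mrs_integral(beta, n_quad=20000):
    # (1/2pi) int_0^beta e^x sqrt(x/(beta-x)) dx; substituting x = beta sin^2(t)
    # gives the smooth integrand (beta/pi) int_0^{pi/2} e^{beta sin^2 t} sin^2 t dt,
    # here approximated by the midpoint rule.
    h = (math.pi / 2) / n_quad
    s = sum(math.exp(beta * math.sin(t) ** 2) * math.sin(t) ** 2
            for t in (h * (j + 0.5) for j in range(n_quad)))
    return beta / math.pi * s * h

def mrs_closed_form(beta):
    # Closed form of the same quantity, cf. (3.10):
    # beta e^{beta/2} [I_0(beta/2) + I_1(beta/2)] / 4.
    return beta * math.exp(beta / 2) * (bessel_i(0, beta / 2) + bessel_i(1, beta / 2)) / 4

def mrs_number(n, lo=1e-9, hi=60.0, iters=200):
    # Solve mrs_closed_form(beta) = n for beta_n by bisection; the left-hand
    # side is increasing in beta.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mrs_closed_form(mid) < n:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The bisection is a deliberately simple choice here; any scalar root finder serves equally well once the closed form is available.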
Additionally, the corresponding expansion of $$\beta_n$$ (3.11) converges with the expected rate, as can be seen in Fig. 4. We also show that we can approximate this special orthonormal polynomial in the bulk of its spectrum using (4.1) in the right side of Fig. 4. The reference results were again computed using recurrence coefficients with 26 digits. The errors agree with the expected orders, which are only negative integer powers of $$n$$. This is because other types of dependencies on $$n$$ arising from $$\beta_n$$ (e.g., the $$\mathcal{O}(n^{-1/m})$$ for general polynomial $$Q(x)$$) were eliminated by numerically computing $$\beta_n$$ and the contour integrals (6.4) and (6.5) for each $$n$$. Fig. 4. Error of the asymptotic expansion of $$\beta_n$$ (left) and $$p_{n}(0.6\beta_n)$$ (right) for $$w(x) = x^{-1/2}\exp(-\exp(x))$$. 7.4 General polynomial $$Q(x)$$ also used as a general function For this experiment, we recall Remarks 3.1 and 6.1, which state that one can also use the procedure for general functions for polynomial $$Q(x)$$. Figure 5 provides a comparison of the accuracy obtained for $$Q(x) = x^6 -2.1x^5 + 3x^3 -6x^2 +9$$. The procedure for general polynomials would need about $$m=6$$ times more terms to achieve the same order in $$n$$ of the asymptotic expansion than the procedure for general functions. However, where the number of terms is too low in the left part of the figure, we appear to see divergence. This is because the accuracies of the phase function $$f_n(z)$$ and the MRS number $$\beta_n$$ are too low to cancel out the exponential behaviour $$\exp(n(V_n(z) + l_n)/2)$$ in (4.3) when calculated with too few terms.
The reference recurrence coefficients were again computed using 26 digits, but at low error levels we appear to see an error growing like $$\mathcal{O}(n)$$ in the right panel of Fig. 5. This means that we would need even more digits for this more difficult weight function in order to see the convergence of the expansions with more than three terms for all $$n$$. However, computing all recurrence coefficients is a very time-consuming operation, whereas using asymptotics can be more accurate and orders of magnitude faster. Fig. 5. Relative error of the asymptotic expansion of $$p_{n}((0.99+0.02i)\beta_n)$$ in the right disk using the $$Q$$-matrices for $$w(x) = x^{-1.1}\exp(-x^6 +2.1x^5 - 3x^3 +6x^2 -9)$$ using the procedure for general polynomials $$Q(x)$$ (left) and general functions $$Q(x)$$ (right). Note that these are not directly comparable, as each term gives an $$\mathcal{O}(n^{-T/m})$$ error in the left part, while this is $$\mathcal{O}(n^{-T})$$ for the procedure for general functions. The procedure for general functions thus provides higher accuracy (for the same number of terms), also because $$\beta_n$$ is computed up to an accuracy independent of $$n$$. Table 1 shows that the precomputations (computing the $$U$$- and $$Q$$-matrices as in Sections 5 and 6) are also faster, using Matlab R2016b on a $$64$$-bit laptop with $$7.7$$ GB memory and $$4$$ Intel(R) Core(TM) i7-3540M CPUs at $$3.0$$ GHz.
However, the mean time over the values of $$n$$ in Fig. 5 needed for evaluating the polynomial is much higher, because the procedure for general functions computes double numerical integrals, as mentioned in Section 3.3. Thus, the procedure for general polynomial $$Q(x)$$ can be preferable when many evaluations of the polynomial are needed. The first row of Table 1 also shows that the time to evaluate the polynomial grows with the number of terms: the complexity is $$\mathcal{O}(T^2)$$ through (5.8) and (2.3) if both indices $$n$$ and $$k$$ are proportional to $$T$$. The precomputations require $$\mathcal{O}(T^5)$$ operations for general polynomials due to the double summation for $$g_{k,n}^l$$ in Section 6.3. For general functions, however, the derivation of higher order terms has to be repeated for each value of $$n$$.

Table 1. Time required for different operations in seconds: mean time over seven values of $$n$$ of precomputations for seven terms in the expansion (only once for the procedure for general polynomials), and of evaluating the asymptotic expansion for different numbers of terms $$T$$.

| Time (s) | Precomputations | $$T=1$$ | $$T=4$$ | $$T=7$$ |
| --- | --- | --- | --- | --- |
| General polynomials | 5.73e0 | 4.69e-3 | 6.95e-3 | 8.15e-3 |
| General functions | 1.96e-1 | 2.24e-1 | 2.24e-1 | 2.24e-1 |
Funding FWO (Fonds Wetenschappelijk Onderzoek, Research Foundation—Flanders, Belgium), through FWO research projects (G.0617.10, G.0641.11 and G.A004.14 to D.H. and P.O.).

Acknowledgements The authors would like to thank Alfredo Deaño, Arno Kuijlaars, Alex Townsend, Walter Van Assche and Marcus Webb for useful discussions on the topic of this paper. The authors are also grateful to the anonymous reviewers for their constructive remarks and for suggesting the alternative, insightful proof of Theorem 5.1 via the jump relation.

References

Bogaert, I., Michiels, B. & Fostier, J. (2012) $$\mathcal{O}(1)$$ computation of Legendre polynomials and Gauss–Legendre nodes and weights for parallel computing. SIAM J. Sci. Comput., 34, C83–C101.
Bogaert, I. (2014) Iteration-free computation of Gauss–Legendre quadrature nodes and weights. SIAM J. Sci. Comput., 36, A1008–A1026.
Bosbach, C. & Gawronski, W. (1998) Strong asymptotics for Laguerre polynomials with varying weights. J. Comput. Appl. Math., 99, 77–89.
Bremer, J. (2017) On the numerical calculation of the roots of special functions satisfying second order ordinary differential equations. SIAM J. Sci. Comput., 39, A55–A82.
Deaño, A., Huertas, E. J. & Marcellán, F. (2013) Strong and ratio asymptotics for Laguerre polynomials revisited. J. Math. Anal. Appl., 403, 477–486.
Deaño, A., Huybrechs, D. & Opsomer, P. (2016) Construction and implementation of asymptotic expansions for Jacobi-type orthogonal polynomials. Adv. Comput. Math., 42, 791–822.
Deift, P., Kriecherbauer, T., McLaughlin, K. T.-R., Venakides, S. & Zhou, X.
(1999a) Strong asymptotics of orthogonal polynomials with respect to exponential weights. Commun. Pure Appl. Math., 52, 1491–1552.
Deift, P., Kriecherbauer, T., McLaughlin, K., Venakides, S. & Zhou, X. (1999b) Uniform asymptotics for polynomials orthogonal with respect to varying exponential weights and applications to universality questions in random matrix theory. Commun. Pure Appl. Math., 52, 1335–1425.
Deift, P. & Zhou, X. (1992) A steepest descent method for oscillatory Riemann–Hilbert problems. Bull. Amer. Math. Soc., 26, 119–124.
Deift, P. & Zhou, X. (1993) A steepest descent method for oscillatory Riemann–Hilbert problems. Asymptotics for the MKdV equation. Ann. Math., 137, 295–368.
Fokas, A., Its, A. & Kitaev, A. (1992) The isomonodromy approach to matrix models in 2d quantum gravity. Commun. Math. Phys., 147, 395–430.
Gil, A., Segura, J. & Temme, N. (2017) Efficient computation of Laguerre polynomials. Comput. Phys. Commun., 210, 124–131.
Glaser, A., Liu, X. & Rokhlin, V. (2007) A fast algorithm for the calculation of the roots of special functions. SIAM J. Sci. Comput., 29, 1420–1438.
Gosper, R. W. (1978) Decision procedure for indefinite hypergeometric summation. Proc. Natl. Acad. Sci. USA, 75, 40–42.
Hale, N. & Townsend, A. (2013) Fast and accurate computation of Gauss–Legendre and Gauss–Jacobi quadrature nodes and weights. SIAM J. Sci. Comput., 35, A652–A672.
Koornwinder, T. H. (1993) On Zeilberger’s algorithm and its $$q$$-analogue. J. Comput. Appl. Math., 48, 91–111.
Kuijlaars, A. (2003) Riemann–Hilbert analysis for orthogonal polynomials.
Orthogonal Polynomials and Special Functions (Koelink, E. & Van Assche, W. eds). Vol. 1817. New York: Springer-Verlag, pp. 167–210.
Kuijlaars, A. B. J., McLaughlin, K. T.-R., Van Assche, W. & Vanlessen, M. (2004) The Riemann–Hilbert approach to strong asymptotics of orthogonal polynomials on $$[-1,1]$$. Adv. Math., 188, 337–398.
Levin, E. & Lubinsky, D. (2001) Orthogonal Polynomials for Exponential Weights. New York: Springer, pp. xiv+417.
López, J. L. & Temme, N. M. (2004) Convergent asymptotic expansions of Charlier, Laguerre and Jacobi polynomials. Proc. R. Soc. Edinb. Sect. A: Math., 134, 537–555.
NIST (2016) NIST Digital Library of Mathematical Functions. Available at http://dlmf.nist.gov/. Online companion to Olver et al. (2010). Accessed 16 May 2017.
Olver, F. W. J., Lozier, D. W., Boisvert, R. F. & Clark, C. W. (eds) (2010) NIST Handbook of Mathematical Functions. New York, NY: Cambridge University Press. Print companion to NIST (2016).
Opsomer, P. (2016) Laguerre - Asymptotic expansions of generalized Laguerre polynomials. Available at http://nines.cs.kuleuven.be/software/LAGUERRE. Accessed 16 May 2017.
Plancherel, M. & Rotach, W. (1929) Sur les valeurs asymptotiques des polynômes d’Hermite $$H_n(x) = (-1)^n e^{\frac{x^2}{2}} \frac{{\rm d}^n}{{\rm d} x^n} \left( e^{- \frac{x^2}{2}} \right)$$. Comment. Math. Helv., 1, 227–257.
Szegő, G. (1967) Orthogonal Polynomials: American Mathematical Society Colloquium Publications, Vol. XXIII, 3rd edn. Providence, Rhode Island: American Mathematical Society.
Temme, N. M. (1990) Asymptotic estimates for Laguerre polynomials. J. Appl. Math. Phys. (ZAMP), 41, 114–126.
The University of Oxford & The Chebfun Developers (2016) Chebfun - Numerical computations with functions. Available at http://www.chebfun.org/. Accessed 16 May 2017.
Townsend, A., Trogdon, T. & Olver, S. (2016) Fast computation of Gauss quadrature nodes and weights on the whole real line. IMA J. Numer. Anal., 36, 337–358.
Townsend, A. (2016) FastGaussQuadrature.jl - Gauss quadrature nodes and weights in Julia. Available at https://github.com/ajt60gaibb/FastGaussQuadrature.jl. Accessed 16 May 2017.
Trefethen, L. N. & Bau, D. (1997) Numerical Linear Algebra. Philadelphia: Society for Industrial and Applied Mathematics.
Vanlessen, M. (2007) Strong asymptotics of Laguerre-type orthogonal polynomials and applications in random matrix theory. Constr. Approx., 25, 125–175.
Zhao, Y., Cao, L. & Dai, D. (2014) Asymptotics of the partition function of a Laguerre-type random matrix model. J. Approx. Theory, 178, 64–90.

Appendix: Explicit formulas for the first higher order terms

The recursive computation of $$R_k$$ can give an arbitrary number of terms, but we provide the first few terms explicitly here for $$z$$ outside the two disks, ignoring the procedure for general polynomials. Expressions for $$R^{{\mathrm{left}}/{\mathrm{right}}}(z)$$ can straightforwardly be obtained by identifying $$R_k^{{\mathrm{outer}}}(z)$$ from the following and using (5.5) and (5.4).
We have: \begin{align*} R^{{\mathrm{outer}}}(z) & = I + \frac{1}{n}\left(\frac{U_{1,1}^{{\mathrm{right}}} }{z-1}+\frac{U_{1,2}^{{\mathrm{right}}} }{(z-1)^2}+\frac{U_{1,1}^{{\mathrm{left}}}}{z} \right) + \frac{1}{n^2}\left(\frac{U_{2,1}^{{\mathrm{right}}} }{z-1}+\frac{U_{2,2}^{{\mathrm{right}}} }{(z-1)^2}+\frac{U_{2,3}^{{\mathrm{right}}} }{(z-1)^3}+\frac{U_{2,1}^{{\mathrm{left}}}}{z} \right) \nonumber \\ & + \frac{1}{n^3}\left(\frac{U_{3,1}^{{\mathrm{right}}} }{z-1} + \frac{U_{3,2}^{{\mathrm{right}}} }{(z-1)^2} + \frac{U_{3,3}^{{\mathrm{right}}} }{(z-1)^3} + \frac{U_{3,4}^{{\mathrm{right}}} }{(z-1)^4} + \frac{U_{3,5}^{{\mathrm{right}}} }{(z-1)^5} +\frac{U_{3,1}^{{\mathrm{left}}}}{z} + \frac{U_{3,2}^{{\mathrm{left}}}}{z^2} \right) +\mathcal{O}\left(\frac{1}{n^4}\right), \nonumber \end{align*} with, for general $$Q(x)$$, explicit expressions for the $$U$$-matrices. This agrees with results in Vanlessen (2007), as (4.11) there equals $$U_{1,1}^{{\mathrm{right}}}+U_{1,1}^{{\mathrm{left}}}$$ and (4.12) equals $$\left.U_{1,1}^{{\mathrm{right}}}+U_{1,2}^{{\mathrm{right}}}\right|_{2,1}$$. For $$w(x) = x^\alpha \exp(-x)$$, one can use $$c_0 = 4 = d_0$$, $$c_1 = 0 = c_2$$, and the next higher order term follows analogously. © The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

Journal: IMA Journal of Numerical Analysis, Oxford University Press. Published: Jul 29, 2017.
