# Quadratures with multiple nodes for Fourier–Chebyshev coefficients

**Abstract.** Gaussian quadrature formulas with multiple nodes, relative to the Chebyshev weight functions, and their optimal extensions for computing the Fourier coefficients in expansions of functions with respect to a given system of orthogonal polynomials are considered. The existence and uniqueness of such quadratures are proved. One of them is a generalization of the well-known Micchelli–Rivlin quadrature formula; the others are new. A numerically stable construction of these quadratures is proposed. By computing the absolute value of the difference between these Gaussian quadratures with multiple nodes for the Fourier–Chebyshev coefficients and their corresponding optimal extensions, we obtain the well-known method for estimating their error. Numerical results are included. These results are a continuation of the recent ones in Bojanov & Petrova (2009, J. Comput. Appl. Math., 231, 378–391) and Milovanović & Spalević (2014, Math. Comput., 83, 1207–1231).

## 1. Introduction

Let $$\{P_k\}_{k=0}^\infty$$ be a system of orthonormal polynomials on a (bounded or unbounded) real interval $$[a,b]$$ with respect to a weight function $$\omega$$, i.e., an integrable, non-negative function on $$[a,b]$$ that vanishes only at isolated points. The approximation of $$f$$ by the partial sums of its series expansion   $S_n(f)=\sum_{k=0}^n a_k(f)P_k(x)$ with respect to the system $$\{P_k\}_{k=0}^\infty$$ is a classical way to approximate functions. The numerical calculation of the coefficients $$a_k(f)$$ in $$S_n(f)$$ is the main task in such a procedure. The computation of $$a_k(f)$$,   $$a_k(f)=\int_a^b \omega(t) P_k(t)f(t)\,{\mathrm{d}}t,$$ (1.1) requires the use of a quadrature formula. A straightforward application of the Gauss quadrature formula based on $$n$$ values of the integrand $$P_k(t)f(t)$$ (with $$k<2n-1$$) gives the exact result for all polynomials $$f$$ of degree at most $$2n-k-1$$.
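The straightforward approach just described is easy to state concretely for the Chebyshev weight of the first kind; the following sketch (Python with NumPy; the routine name is ours, not from the cited papers) approximates $$a_k(f)$$ in (1.1) by the $$n$$-point Gauss–Chebyshev rule applied to the full integrand $$T_k(t)f(t)$$.

```python
import numpy as np

def chebyshev_fourier_coeff(f, k, n):
    """Approximate a_k(f) = int_{-1}^1 f(t) T_k(t)/sqrt(1-t^2) dt by the
    n-point Gauss-Chebyshev rule applied to the integrand T_k(t) f(t)."""
    t = np.cos((2*np.arange(1, n + 1) - 1) * np.pi / (2*n))  # zeros of T_n
    Tk = np.cos(k * np.arccos(t))                            # T_k at the nodes
    return np.pi / n * np.sum(Tk * f(t))

# f(t) = t^2, k = 0: a_0 = int t^2/sqrt(1-t^2) dt = pi/2,
# and deg f = 2 <= 2n - k - 1 for n = 3, so the rule is exact here.
print(chebyshev_fourier_coeff(lambda t: t**2, 0, 3))  # ≈ 1.5707963
```

The result is exact only while $$\deg f\le 2n-k-1$$, which is precisely the limitation discussed next.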
But it is well known that, especially for large values of $$k$$, the highly oscillating character of the integrand $$P_k(t)f(t)$$ involved in the computation of the Fourier coefficients (1.1) often implies that the usual Gauss-type quadrature formulas (with simple nodes) do not perform well, in the sense that they are numerically unstable. This is the main reason to consider quadrature formulas with multiple nodes for this purpose. In this article, following Bojanov & Petrova (2009) (see also Milovanović & Spalević, 2014) and using the same notation, we consider quadrature formulas with multiple nodes of the type   $$\int_a^b \omega(t)P_k(t)f(t)\,{\mathrm{d}}t\approx\sum_{j=1}^n\sum_{i=0}^{\nu_j-1}c_{ji}f^{(i)}(x_j),\quad a<x_1<\cdots<x_n<b,$$ (1.2) where the $$\nu_j$$ are given natural numbers (multiplicities) and $$P_k(t)$$ is a monic polynomial of degree $$k$$. The number $$\ell$$ is the algebraic degree of precision ($$\mathrm{ADP}$$) of (1.2) if (1.2) is exact for all polynomials of degree $$\ell$$ and there is a polynomial of degree $$\ell+1$$ for which the formula is not exact.

The outline of this article is as follows. In Section 2, a brief overview of quadrature formulas with multiple nodes is given and their utility in approximating Fourier coefficients is revised. The problem of estimating the quadrature errors by means of optimal extensions of the proposed quadratures (in the sense of the well-known Kronrod approach) is analysed in Section 3. Section 4 is devoted to the study of quadrature formulas with multiple nodes for estimating Fourier coefficients for the four Chebyshev weights, some of them being new; this is one of the main contributions of this article. Finally, we also present a numerically stable construction of such formulas, and some numerical experiments are displayed in Section 5.

## 2. Gauss-type quadrature formulas with multiple nodes for computing Fourier coefficients

Quadrature formulas with multiple nodes were introduced more than 100 years after the classical quadratures. Turán (1950) was the first to introduce quadrature formulas with multiple nodes of Gauss type, in such a way that now all such quadrature rules are commonly referred to as Gauss–Turán quadrature formulas. By $$\mathcal{P}_m$$ we denote the set of all algebraic polynomials of degree at most $$m$$. More generally, Chakalov (1954) proved the existence of Gauss-type quadratures with multiple nodes, and then Ghizzetti & Ossicini (1975) characterized their nodes as zeros of a polynomial determined by certain orthogonality relations, as shown in the following result.

**Theorem 2.1** For any given set of odd multiplicities $$\nu_1,\dots,\nu_n$$ $$(\nu_j=2s_j+1,\ s_j\in\mathbb{N}_0,\ j=1,\dots,n)$$, there exists a unique quadrature formula of the form   $$\int_a^b \omega(t)f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^n\sum_{i=0}^{\nu_j-1}a_{ji}f^{(i)}(x_j),\quad a\le x_1<\dots<x_n\le b,$$ (2.1) of $$\mathrm{ADP}=\nu_1+\dots+\nu_n+n-1$$, which is well known as the Chakalov–Popoviciu quadrature formula. The nodes $$x_1,\dots,x_n$$ of this quadrature are determined uniquely by the orthogonality conditions   $(\forall\, Q\in\mathcal{P}_{n-1})\qquad \int_a^b \omega(t)\prod_{k=1}^n(t-x_k)^{\nu_k}Q(t)\,{\mathrm{d}}t=0.$

The corresponding (monic) orthogonal polynomial $$\prod_{k=1}^n(t-x_k)$$ is known as a $$\sigma$$-orthogonal polynomial, with $$\sigma=\sigma_n=(s_1,\dots,s_n)$$. In the first examples considered by Turán (1950), quadrature formulas of type (2.1) with equal multiplicities $$\nu_1=\dots=\nu_n=\nu$$, $$\nu$$ being an odd number ($$\nu=2s+1,\ s\in\mathbb{N}$$), were studied; the corresponding (monic) orthogonal polynomial $$\prod_{k=1}^n(t-x_k)$$ is called an $$s$$-orthogonal polynomial.
On the other hand, in this article we are mainly concerned (Section 4 below) with the Chebyshev weight functions and their corresponding $$s$$-orthogonal polynomials. In this sense, along with the classical Chebyshev weight function   $$\omega_1(t)=(1-t^2)^{-1/2},\quad t\in[-1,1],$$ (2.2) we consider the following generalized Chebyshev weight functions:   $$\omega_2(t)=(1-t^2)^{1/2+s},\;\omega_3(t)=(1-t)^{-1/2}(1+t)^{1/2+s},\;\omega_4(t)=(1-t)^{1/2+s}(1+t)^{-1/2},$$ (2.3) for $$s\ge 0$$. It is well known that the Chebyshev polynomials $$T_n$$ are $$s$$-orthogonal on $$(-1,1)$$ with respect to $$\omega_1$$ for each $$s\ge 0$$ (see Bernstein, 1930). Ossicini & Rosati (1975) found three other weight functions $$\omega_i(t)$$ $$(i=2,3,4)$$ for which the $$s$$-orthogonal polynomials can be identified as the Chebyshev polynomials of the second, third and fourth kinds, $$U_n$$, $$V_n$$ and $$W_n$$, which are defined by   $U_n(t)=\frac{\sin(n+1)\theta}{\sin\theta},\ V_n(t)=\frac{\cos(n+\frac 12)\theta}{\cos(\theta/2)},\ W_n(t)=\frac{\sin(n+\frac 12)\theta }{\sin(\theta/2)},$ where $$t=\cos\theta$$. However, these weight functions depend on $$s$$ (see (2.3)). It is easy to see that $$W_n(-t)=(-1)^nV_n(t)$$, so that it is sufficient to study $$\omega_1(t)$$, $$\omega_2(t)$$ and one of $$\omega_3(t)$$ and $$\omega_4(t)$$. It is also noteworthy that for each $$n\in\mathbb{N}$$, Gori & Micchelli (1996) introduced an interesting class of weight functions defined on $$[-1,1]$$ for which explicit Gauss–Turán quadrature formulas of all orders can be found. In other words, these classes of weight functions have the peculiarity that the corresponding $$s$$-orthogonal polynomials, $$s\in\mathbb{N}$$, of the same degree are independent of $$s$$.
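The $$s$$-orthogonality of $$T_n$$ with respect to $$\omega_1$$ can be checked numerically: the moments $$\int_{-1}^1 \omega_1(t)\,\widehat T_n(t)^{2s+1}t^k\,{\mathrm{d}}t$$ must vanish for all $$k<n$$. Below is a minimal sketch (Python with NumPy; the function names are ours) that evaluates these moments exactly with a Gauss–Chebyshev rule of sufficiently high order.

```python
import numpy as np

def gauss_chebyshev1(m):
    """Nodes/weights of the m-point Gauss-Chebyshev rule for w(t) = 1/sqrt(1-t^2)."""
    nodes = np.cos((2*np.arange(1, m + 1) - 1) * np.pi / (2*m))
    weights = np.full(m, np.pi / m)
    return nodes, weights

def s_orthogonality_residual(n, s, k):
    """Integral of w1(t) * That_n(t)^(2s+1) * t^k; vanishes for k < n."""
    deg = n*(2*s + 1) + k        # degree of the polynomial integrand
    m = deg // 2 + 1             # an m-point rule is exact up to degree 2m-1
    t, w = gauss_chebyshev1(m)
    That_n = np.cos(n*np.arccos(t)) / 2**(n - 1)   # monic Chebyshev polynomial
    return np.sum(w * That_n**(2*s + 1) * t**k)

# For n = 4, s = 2, all moments against t^k, k = 0..3, vanish.
print([abs(s_orthogonality_residual(4, 2, k)) < 1e-12 for k in range(4)])
```

The same check applied with $$k\ge n$$ generally gives a nonzero value, reflecting that the orthogonality is only against $$\mathcal{P}_{n-1}$$.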
This class includes certain generalized Jacobi weight functions $$\omega_{n,\mu}(t)=\vert U_{n-1}(t)/n\vert^{2\mu+1}(1-t^2)^\mu$$, where $$U_{n-1}(\cos\theta)=\sin {n\theta}/\sin \theta$$ (the Chebyshev polynomial of the second kind) and $$\mu>-1$$. In this case, the Chebyshev polynomials $$T_n$$ appear as $$s$$-orthogonal polynomials, $$s\in\mathbb{N}$$. As for the real purpose of this article, i.e., the application of quadrature formulas with multiple nodes to the estimation of Fourier coefficients (1.1), and following Bojanov & Petrova (2009), the connection between quadratures with multiple nodes and formulas of type (1.2) may be described as follows. For the system of nodes $${\bf x}:=(x_1,\dots,x_n)$$ with corresponding multiplicities $$\bar\nu:=(\nu_1,\dots,\nu_n)$$, they define the polynomials   ${\it{\Lambda}}^{\bar\nu}(t;{\bf x}):=\prod_{m=1}^n(t-x_m)^{\nu_m}.$ Setting $$x_j^{\nu_j}:= (x_j,\dots,x_j)$$ [$$x_j$$ repeated $$\nu_j$$ times], $$j=1,\dots,n$$, they state and prove the following important theorem, which reveals the relation between the standard quadratures and the quadratures for Fourier coefficients.

**Theorem 2.2** For any given sets of multiplicities $$\bar\mu:=(\mu_1,\dots,\mu_k)$$ and $$\bar\nu:=(\nu_1,\dots,\nu_n)$$, and nodes $$y_1<\cdots<y_k,\;x_1<\cdots<x_n$$, there exists a quadrature formula of the form   $$\int_a^b \omega(t){\it{\Lambda}}^{\bar\mu}(t;{\bf y})f(t)\,{\mathrm{d}}t\approx\sum_{j=1}^n\sum_{i=0}^{\nu_j-1}c_{ji}f^{(i)}(x_j),$$ (2.4) with $$\mathrm{ADP}=N$$ if and only if there exists a quadrature formula of the form   $$\int_a^b \omega(t)f(t)\,{\mathrm{d}}t\approx\sum_{m=1}^k\sum_{\lambda=0}^{\mu_m-1}b_{m\lambda}f^{(\lambda)}(y_m)+ \sum_{j=1}^n\sum_{i=0}^{\nu_j-1}a_{ji}f^{(i)}(x_j),$$ (2.5) which has degree of precision $$N+\mu_1+\cdots+\mu_k$$.
In the case $$y_m=x_j$$ for some $$m$$ and $$j$$, the corresponding terms in both sums combine in one term of the form   $\sum_{\lambda=0}^{\mu_m+\nu_j-1}d_{m\lambda}f^{(\lambda)}(y_m).$ Observe that the actual strength of this result relies on the freedom in choosing the nodes and multiplicities in polynomial $${\it{\Lambda}}^{\overline\mu}$$. This utility will be shown repeatedly in Section 4 below. Regarding the computation of the weight coefficients, let us suppose that the coefficients $$a_{ji}\ (j=1,\dots,n;\ i=0,1,\dots,\nu_j-1)$$ in (2.5) are known. Proceeding as in the first part of the proof of Bojanov & Petrova (2009, Theorem 2.1), we can determine the coefficients $$c_{ji}\ (j=1,\dots,n;\ i=0,1,\dots,\nu_j-1)$$ in (2.4), namely, applying (2.5) to the polynomial $${\it{\Lambda}}^{\bar\mu}(\cdot;{\bf y})f$$, where $$f\in\mathcal{P}_N$$, the first sum in (2.5) vanishes and we can obtain (see Bojanov & Petrova, 2009, Equation (2.4))   $\int_a^b \omega(t){\it{\Lambda}}^{\bar\mu}(t;{\bf y})f(t)\,{\mathrm{d}}t=\sum_{j=1}^n\left(\sum_{i=0}^{\nu_j-1}a_{ji}\left.\left[{\it{\Lambda}}^{\bar\mu}(t;{\bf y})f(t)\right]^{(i)}\right\vert_{t=x_j}\right) =\sum_{j=1}^n\sum_{i=0}^{\nu_j-1}c_{ji}f^{(i)}(x_j),$ where   $$\quad c_{ji}=\sum_{s=i}^{\nu_j-1}a_{js}{s\choose i}\left.\left[{\it{\Lambda}}^{\bar\mu}(t;{\bf y})\right]^{(s-i)}\right\vert_{t=x_j}\ \ (j=1,2,\dots,n;\ i=0,1,\dots,\nu_j-1).$$ (2.6) On the other hand, the following questions arise in a natural way: Is it possible to construct a formula based on $$n$$ evaluations of $$f$$ or its derivatives, which gives the exact value of the coefficients $$a_k(f)$$ in (1.1) for polynomials $$f$$ of higher degree? What is the highest degree of precision that can be attained by a formula based on $$n$$ evaluations? 
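Before turning to these questions, note that the transformation (2.6) is purely mechanical once the $$a_{ji}$$ are known; the following sketch (Python with NumPy; the function name is ours) implements it with $${\it\Lambda}^{\bar\mu}(\cdot;{\bf y})$$ supplied as a `numpy.poly1d` object.

```python
import numpy as np
from math import comb

def fourier_coeffs_from_standard(a, x, nu, Lambda):
    """Apply (2.6): c[j][i] = sum_{r=i}^{nu_j-1} a[j][r] * C(r,i) * Lambda^(r-i)(x_j).

    a      : list of lists; a[j][r] are the weights of the standard rule (2.5)
    x      : nodes x_j
    nu     : multiplicities nu_j
    Lambda : numpy.poly1d representing Lambda^{mu-bar}(t; y)
    """
    derivs = [Lambda]                       # successive derivatives of Lambda
    for _ in range(max(nu)):
        derivs.append(derivs[-1].deriv())
    c = []
    for j, (xj, nj) in enumerate(zip(x, nu)):
        c.append([sum(a[j][r] * comb(r, i) * derivs[r - i](xj)
                      for r in range(i, nj)) for i in range(nj)])
    return c

# Sanity check: with Lambda ≡ 1 the transformation returns the a[j][i] unchanged.
a = [[1.0, 2.0, 3.0], [0.5, -1.0, 0.25]]
c = fourier_coeffs_from_standard(a, [-0.3, 0.7], [3, 3], np.poly1d([1.0]))
print(c)  # same as a
```

For instance, with $${\it\Lambda}(t)=t-y$$ the formula collapses to $$c_{ji}=a_{ji}(x_j-y)+(i+1)a_{j,i+1}$$, which is easy to verify by hand.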
When dealing with these questions for the coefficients $$a_k(f)$$ of $$f$$ with respect to the system of Chebyshev polynomials of the first kind $$\{T_k\}_{k=0}^\infty$$, orthogonal on $$[-1,1]$$ with weight $$\omega(t)=1/\sqrt{1-t^2}$$,   $T_k(t)=\cos(k\arccos\, t)={2^{k-1}}\,(t-\xi_1)\cdots(t-\xi_k)=2^{k-1}\widehat T_k(t),\quad t\in(-1,1),$ where $$\widehat T_k=2^{1-k}T_k$$ denotes the monic polynomial of degree $$k$$, Micchelli & Rivlin (1972) discovered the remarkable fact that the quadrature   $$\int_{-1}^1\frac{1}{\sqrt{1-t^2}}\, T_n(t)f(t)\,{\mathrm{d}}t\approx\frac{\pi}{n2^n}f'[\xi_1,\dots,\xi_n]$$ (2.7) is exact for all algebraic polynomials of degree $$\le 3n-1$$. Here, $$g[x_1,\dots,x_m]$$ denotes the divided difference of $$g$$ at the points $$x_1,\dots,x_m$$; thus, formula (2.7) uses the $$n$$ values $$f'(\xi_1),\dots,f'(\xi_n)$$ of the derivative $$f'$$. It is clear that there is no formula of the form   $$\int_{-1}^1\frac{1}{\sqrt{1-t^2}}\, T_n(t)f(t)\,{\mathrm{d}}t\approx\sum_{k=1}^na_kf(x_k)+\sum_{k=1}^n b_kf'(x_k),$$ (2.8) which is exact for all polynomials of degree $$3n$$; the polynomial $$f(t)=T_n(t)(t-x_1)^2\cdots(t-x_n)^2$$ is a standard counterexample. Thus, the aforementioned Micchelli–Rivlin formula is of the highest degree of precision among all formulas of type (2.8). The question of uniqueness of this quadrature formula reduces to the following problem, which is also of independent interest: prove that if $$Q$$ is a polynomial of degree $$n$$ with $$n$$ zeros in $$[-1,1]$$ and such that $$\left|Q(\eta_j)\right|=1$$ at the extremal points $$\eta_j=\cos(j\pi/n),\ j=0,1,\dots,n$$, of the Chebyshev polynomial $$T_n$$, then $$Q\equiv \pm T_n$$. This property was proved in DeVore (1974) and, thus, the uniqueness of the Micchelli–Rivlin quadrature formula was settled (see Micchelli & Rivlin, 1974). For more details on this subject, see Bojanov & Petrova (2009) and Milovanović & Spalević (2014).
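Formula (2.7) is straightforward to implement once a divided-difference routine is available; the following sketch (Python with NumPy; routine names are ours) evaluates the right-hand side of (2.7) and can be checked against exactly computable moments.

```python
import numpy as np

def divided_difference(g, x):
    """Divided difference g[x_0, ..., x_{m-1}] for pairwise distinct points x."""
    d = np.array([g(xi) for xi in x], dtype=float)
    m = len(x)
    for k in range(1, m):
        d[:m-k] = (d[1:m-k+1] - d[:m-k]) / (x[k:] - x[:m-k])
    return d[0]

def micchelli_rivlin(fprime, n):
    """Right-hand side of (2.7): (pi/(n*2^n)) * f'[xi_1, ..., xi_n]."""
    xi = np.cos((2*np.arange(1, n + 1) - 1) * np.pi / (2*n))  # zeros of T_n
    return np.pi / (n * 2**n) * divided_difference(fprime, xi)

# f(t) = t^2, n = 2: the exact value of the integral is
#   int_{-1}^1 t^2 T_2(t)/sqrt(1-t^2) dt = pi/4.
print(micchelli_rivlin(lambda t: 2*t, 2))  # ≈ pi/4
```

Since $$\deg f=2\le 3n-1=5$$, the rule reproduces the exact value; degree-$$5$$ integrands such as $$f(t)=t^5$$ are likewise integrated exactly for $$n=2$$.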
Finally, let us remark that numerically stable methods for constructing the nodes $$x_j$$ and coefficients $$a_{ji}$$ in Gaussian quadrature formulas with simple and multiple nodes and their optimal (Kronrod) extensions with simple and multiple nodes can be found in Gautschi & Milovanović (1997), Milovanović et al. (2004), Shi & Xu (2007), Gautschi (2014), Calvetti et al. (2000), Laurie (1997), Cvetković & Spalević (2014) and Spalević & Cvetković (2016) (see also Gautschi, 2001, 2004; Cvetković & Milovanović, 2004; Milovanović & Cvetković, 2012). For the asymptotic representation of the coefficients $$a_{ji}$$, see Peherstorfer (2009). More concerning this theory and its applications can be found in Milovanović (2001) and references therein, and in Ghizzetti & Ossicini (1970). Error bounds for these quadratures in the case of analytic integrands have been considered in several articles (see, e.g., Spalević, 2014, and references therein); for the Micchelli–Rivlin and Micchelli–Sharma (cf. (4.9) below) quadratures in particular, see Pejčev & Spalević (2013, 2014).

## 3. Estimating the error: optimal extensions of the quadrature formulas

As for other numerical methods, it is crucial to have an efficient estimation of the quadrature error. This is not a trivial problem, and its study has given birth to the so-called stopping functionals, which allow us to decide when to stop the algorithm, provided a sufficiently small error is guaranteed. It is well known that for Gaussian quadratures, the best known (and most commonly used) stopping functional comes from the seminal Kronrod idea (see, e.g., Gautschi, 1987). Essentially, this involves the construction of a higher-order quadrature formula, using the nodes of the previous one and some new ones, and taking the difference between both quadrature formulas as an estimation of the error (see also Monegato, 1982, 2001; Li, 1994).
This higher-order quadrature formula, which has the maximal possible ADP due to Kronrod’s idea and is used for testing the quadrature error, is usually referred to as an optimal extension of the given one. In this sense, Milovanović & Spalević (2014), for a formula of type   $$\int_a^b \omega(t) f(t)\,{\mathrm{d}}t\approx \sum_{\nu=1}^n {\sum_{i=0}^{2s_\nu} a_{\nu i} f^{(i)}(x_\nu)},$$ (3.1) where $$a\le x_1<x_2<\cdots<x_n\le b$$, studied its extension to the interpolatory quadrature formula   $$\int_a^b \omega(t) f(t)\,{\mathrm{d}}t\approx\sum_{\nu=1}^n {\sum_{i=0}^{2s_\nu} b_{\nu i} f^{(i)}(x_\nu)}+\sum_{\mu=1}^m {\sum_{j=0}^{2s_\mu^*} c_{\mu j}^* f^{(j)}(x_\mu^*)},$$ (3.2) where $$x_\nu$$ are the same nodes as in (3.1), and the new nodes $$x_\mu^*$$ and new weights $$b_{\nu i},c_{\mu j}^*$$ are chosen to maximize the degree of precision of (3.2), which is greater than or equal to   $$\sum_{\nu=1}^n(2s_\nu+1)+\sum_{\mu=1}^m(2s_\mu^*+1)+m-1= 2\left(\sum_{\nu=1}^ns_\nu+\sum_{\mu=1}^m s_\mu^*\right)+n+2m-1.$$ The interpolatory quadrature formula (3.2) has in general $$\mathrm{ADP}=\sum_{\nu=1}^n(2s_\nu+1)+\sum_{\mu=1}^m(2s_\mu^*+1)-1$$, which is higher than the ADP of the quadrature formula (3.1), i.e., $$\sum_{\nu=1}^n(2s_\nu+1)+n-1$$, if   $2\sum_{\mu=1}^m s_\mu^*+m>n.$ The last inequality represents a necessary condition for the construction of optimal extensions of type (3.2). Observe that it does not depend on $$s_\nu,\ \nu=1,\dots,n$$. In the case when all $$s_\mu^*$$ are equal to $$0$$, the last condition reduces to $$m>n$$, i.e., $$m\ge n+1$$. In the case $$m=n+1$$, we refer to the optimal extensions (3.2) as Kronrod extensions, which have $$2n+1$$ nodes, since they are generalizations of the well-known Gauss–Kronrod quadrature formulas, which are optimal extensions of the Gauss quadrature formulas of (3.1) with $$s_\nu=0,\ \nu=1,\dots,n$$ ($$s_\mu^*=0,\ \mu=1,\dots,m=n+1$$). We say that the quadrature formula (3.2) is a Chakalov–Popoviciu–Kronrod quadrature formula.
A particular case of this formula is the Gauss–Turán–Kronrod quadrature formula, if $$s_1=s_2=\cdots=s_n=s$$. When $$s_1=s_2=\cdots=s_n=0,\ s_1^*= s_2^*=\cdots=s_m^*=0$$ and $$m=n+1$$, the well-known Gauss–Kronrod quadrature formula is obtained as a particular case of both quadrature formulae just mentioned. In the theory of Gauss–Kronrod quadrature formulas, the Stieltjes polynomials $$E_{n+1}(t)$$, whose zeros are the nodes $$x_\mu^*$$, namely $$E_{n+1}(t)\equiv E_{n+1}(t,\omega):=\prod_{\mu=1}^{n+1}(t-x_\mu^*)$$, play an important role. Also, of foremost interest are weight functions for which the Gauss–Kronrod quadrature formula has the property that (i) all $$n+1$$ nodes $$x_\mu^*$$ are in $$(a,b)$$ and are simple (i.e., all the zeros of the Stieltjes polynomial $$E_{n+1}(t)$$ are in $$(a,b)$$ and are simple). It is also desirable to work with weight functions for which the following additional properties are fulfilled: (ii) The interlacing property. Namely, the nodes $$x_\mu^*$$ and $$x_\nu$$ separate each other (i.e., the $$n+1$$ zeros of $$E_{n+1}(t)$$ separate the $$n$$ zeros of the orthogonal polynomial $$\prod_{\nu=1}^n(t-x_\nu)$$) and (iii) all the quadrature weights are positive. On the basis of the above facts, it seems natural to consider Chakalov–Popoviciu–Kronrod quadratures (3.2) in which $$m=n+1$$, i.e.,   $$\int_a^b \omega(t) f(t)\,{\mathrm{d}}t\approx\sum_{\nu=1}^n {\sum_{i=0}^{2s_\nu} b_{\nu i} f^{(i)}(x_\nu)}+\sum_{\mu=1}^{n+1} {\sum_{j=0}^{2s_\mu^*} c_{\mu j}^* f^{(j)}(x_\mu^*)}.$$ (3.3) We know that in the general case of quadratures with multiple nodes, not all the quadrature weights have to be positive (see, e.g., the examples displayed in Section 5 below). Therefore, for Kronrod extensions of Gaussian quadrature formulas with multiple nodes, it does not seem natural to consider property (iii) as desirable. 
Anyway, we continue asking for the first two properties, in the sense that the nodes $$x_\mu^*$$ should all be real and simple and the interlacing property with the original nodes $$x_\nu$$ should be satisfied. In Shi (2000), the density of the zeros of $$\sigma$$-orthogonal polynomials on bounded intervals, i.e., the nodes of the quadrature formulas (3.1), is studied, extending the well-known results for the zeros of ordinary orthogonal polynomials, i.e., the nodes of the $$n$$-point Gauss quadrature formulas (cf. Szegő, 1975). Because of that property, it is very natural to consider extensions of these quadratures in the form (3.3), looking for new nodes that satisfy the interlacing property with respect to the old nodes. In this way, we take into account the influence of the associated quadrature formula and get information on the integrand and its derivatives uniformly over the whole interval of integration. As stated above, one of the pioneering works on quadrature formulas with multiple nodes was Turán (1950). In that study, the author proposed an interpolatory quadrature formula of the type   $$\int_{-1}^1 f(t)\,{\mathrm{d}}t\approx\sum_{\nu=1}^n\sum_{i=0}^{2s}A_{i,\nu}f^{(i)}(\tau_\nu)\quad(s\in{\Bbb N_0}),$$ (3.4) which has the highest possible $$\mathrm{ADP}$$. For our purposes, it is natural to consider a generalization of formula (3.4), in the sense of   $$\int_a^b \omega(t) f(t)\,{\mathrm{d}}t\approx\sum_{\nu=1}^n\sum_{i=0}^{2s}A_{i,\nu}f^{(i)}(\tau_\nu)\quad(s\in{\Bbb N_0}).$$ (3.5) Because of this highest degree of precision, it is natural to call (3.5) a Gauss–Turán quadrature formula. Note that in (3.5), $$\tau_\nu$$ are the zeros of a polynomial $$\pi_n$$ of degree $$n$$, known as the $$s$$-orthogonal polynomial, which satisfies the orthogonality relation   $$(\forall\, p\in \mathcal{P}_{n-1})\qquad\int_a^b \omega(t) \pi_n^{2s+1}(t)p(t)\,{\mathrm{d}}t=0,$$ (3.6) and $$A_{i,\nu}$$ are determined through interpolation.
If $$\tau_\nu$$ and $$A_{i,\nu}$$ are chosen in this way, the $$\mathrm{ADP}$$ of (3.5) is $$2(s+1)n-1$$. The weight coefficients $$A_{i,\nu}$$ in the Gauss–Turán quadrature formula (3.5) are not all positive in general. Following Kronrod’s idea, Li (1994) considered an extension of the formula (3.5) to   $$\int_a^b \omega(t) f(t)\,{\mathrm{d}}t\approx\sum_{\nu=1}^n\sum_{i=0}^{2s}B_{i,\nu}f^{(i)} (\tau_\nu)+\sum_{j=1}^{n+1}C_jf(\hat\tau_j)\quad(s\in{\Bbb N_0}),$$ (3.7) where $$\tau_\nu$$ are the same nodes as in (3.5), and the new nodes $$\hat\tau_j$$ and new weights $$B_{i,\nu},C_j$$ are chosen to maximize the $$\mathrm{ADP}$$ of (3.7). It is shown in Li (1994) that when $$\omega$$ is any weight function on $$[a,b]$$, we can always obtain the maximum degree $$2n(s+1)+n+1$$ by taking $$\hat\tau_j$$ to be the zeros of the polynomial $$\hat\pi_{n+1}$$ satisfying the orthogonality property   $(\forall\, p\in \mathcal{P}_n)\qquad \int_a^b \omega(t) \hat\pi_{n+1}(t)\pi_n^{2s+1}(t)p(t)\,{\mathrm{d}}t=0.$ At the same time, it is shown that $$\hat\pi_{n+1}$$ always exists and is unique up to a multiplicative constant. In the special case when $$\omega(t)=(1-t^2)^{-1/2}$$, Li (1994) determined $$\hat\pi_{n+1}$$ explicitly and obtained the weights in (3.7) for $$s=1$$ and $$s=2$$. The weights in the remaining cases $$s\ge3$$ were obtained later in Shi (1996).

## 4. Quadrature formulas with multiple nodes for Fourier coefficients corresponding to Chebyshev weight functions

As stated in Section 2, our main interest here is the use of quadrature formulas with multiple nodes in estimating the Fourier coefficients for the four Chebyshev weights $$\omega_i$$, $$i=1,\ldots,4$$, in (2.2)–(2.3). For these four weights, optimal extensions of the above Chakalov–Popoviciu quadratures will be considered as efficient tools to estimate the errors of quadrature.
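For $$\omega_1$$ the $$s$$-orthogonal polynomial is $$T_n$$ for every $$s$$, so the nodes of the Gauss–Turán formula (3.5) are known and the weights $$A_{i,\nu}$$ follow from the interpolation conditions. The sketch below (Python with NumPy; routine names are ours, and exactness is imposed on monomial moments) computes these weights and illustrates that the resulting rule is then automatically exact up to degree $$2(s+1)n-1$$.

```python
import numpy as np
from math import comb, factorial

def cheb1_moment(m):
    """Moment int_{-1}^1 t^m / sqrt(1-t^2) dt of the weight w1."""
    return 0.0 if m % 2 else np.pi * comb(m, m // 2) / 2**m

def gauss_turan_cheb1(n, s):
    """Weights of the Gauss-Turan rule (3.5) for w1; nodes are the zeros of T_n.

    The n*(2s+1) weights A_{i,nu} are fixed by exactness on t^m for
    m = 0, ..., n*(2s+1)-1; since the nodes are s-orthogonal, the rule is
    then automatically exact up to degree 2(s+1)n - 1."""
    tau = np.cos((2*np.arange(1, n + 1) - 1) * np.pi / (2*n))
    N = n * (2*s + 1)
    M = np.zeros((N, N))
    for m in range(N):
        col = 0
        for nu in range(n):
            for i in range(2*s + 1):
                # (d/dt)^i t^m evaluated at tau_nu
                M[m, col] = (factorial(m) // factorial(m - i)) * tau[nu]**(m - i) if i <= m else 0.0
                col += 1
    mu = np.array([cheb1_moment(m) for m in range(N)])
    return tau, np.linalg.solve(M, mu)

def apply_rule(tau, A, s, m):
    """Apply the rule with weights A (ordered as above) to f(t) = t^m."""
    val, col = 0.0, 0
    for nu in range(len(tau)):
        for i in range(2*s + 1):
            if i <= m:
                val += A[col] * (factorial(m) // factorial(m - i)) * tau[nu]**(m - i)
            col += 1
    return val

tau, A = gauss_turan_cheb1(3, 1)
# Exact for all t^m with m <= 2(s+1)n - 1 = 11, although only m <= 8 was imposed.
```

This moment-matching construction is only a didactic sketch; for large $$n$$ or $$s$$ the numerically stable methods cited at the end of Section 2 should be preferred.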
Throughout this section, for calculating the Fourier coefficients (1.1) by a quadrature formula and estimating the corresponding error, we use the method based on Theorem 2.2; namely, if there exist unique quadrature formulas (3.1), (3.2), then Theorem 2.2 implies that there exist unique quadratures for calculating the integrals   $$\int_a^b \omega(t) f(t)\pi_{n,\sigma}(t)\,{\mathrm{d}}t\approx \sum_{\nu=1}^n {\sum_{i=0}^{2s_\nu-1} \widehat a_{\nu i} f^{(i)}(x_\nu)},$$ (4.1) and   $$\int_a^b \omega(t) f(t)\pi_{n,\sigma}(t)\,{\mathrm{d}}t\approx\sum_{\nu=1}^n {\sum_{i=0}^{2s_\nu-1} \widehat b_{\nu i} f^{(i)}(x_\nu)}+\sum_{\mu=1}^m {\sum_{j=0}^{2s_\mu^*} \widehat c_{\mu j}^* f^{(j)}(x_\mu^*)},$$ (4.2) which represent the Fourier coefficients if the given $$\sigma$$-orthogonal polynomial $$\pi_{n,\sigma}$$ agrees with the corresponding ordinary orthogonal polynomial $$P_n$$ with respect to the weight function $$\omega$$, i.e., $$\pi_{n,\sigma}(t)\equiv P_n(t)$$ on $$[a,b]$$. Then, the error in (4.1) can be estimated by the well-known method of computing the absolute value of the difference of the quadrature sums in (4.2) and (4.1). First, we are concerned with the Chebyshev weight function of the first kind.

### 4.1 A generalization of the Micchelli–Rivlin quadrature formula for Fourier–Chebyshev coefficients

Using the above-presented method (see (4.1), (4.2)) for the case $$\omega(t)=1/\sqrt{1-t^2}$$, $$t\in[-1,1]$$, we have thus proved the following statement.

**Theorem 4.1** Let $$n,s\in\mathbb{N}$$ and $$\omega(t)=1/\sqrt{1-t^2}$$, $$t\in[-1,1]$$.
Then, there exists a unique quadrature formula with multiple nodes for calculating the corresponding Fourier–Chebyshev coefficients $$a_n(f)=\int_{-1}^1 f(t)T_n(t)/\sqrt{1-t^2}\,{\mathrm{d}}t$$,   $$\int_{-1}^1 \frac{f(t)T_n(t)}{\sqrt{1-t^2}}\,{\mathrm{d}}t\approx\sum_{\nu=1}^n\sum_{i=0}^{2s-1}\widehat A_{i,\nu}f^{(i)}(\tau_\nu),$$ (4.3) with $$\mathrm{ADP}=2sn+n-1$$, as well as its Kronrod extension   $$\int_{-1}^1 \frac{f(t)T_n(t)}{\sqrt{1-t^2}}\,{\mathrm{d}}t\approx\sum_{\nu=1}^n\sum_{i=0}^{2s-1} \widehat B_{i,\nu}f^{(i)}(\tau_\nu)+\sum_{j=1}^{n+1}\widehat C_jf(\hat\tau_j),$$ (4.4) with $$\mathrm{ADP}=2sn+2n+1$$.

In the special case when $$s=1$$, the quadrature formula (4.3) becomes the well-known Micchelli–Rivlin quadrature formula (2.7) or, what is the same, (2.8). It is noteworthy that the precision in calculating $$a_n(f)$$ ($$n$$ fixed) increases with increasing $$s$$ in the quadrature formulas (4.3), (4.4). Now, for the sake of completeness, an alternative way to prove Theorem 4.1 for $$n\ge2$$ is presented.

**New proof of Theorem 4.1.** Let $$n,s\in\mathbb{N}$$ ($$n\ge2$$) be fixed, and consider the new weight function (cf. Engels, 1980)   $\omega^{n,s}(t)=\frac{\hat T_n^{2s}(t)}{\sqrt{1-t^2}}.$ Recently, Cvetković et al. (2016, Theorem 2.1, Equation (2.2)) obtained analytically, in closed form, the coefficients of the three-term recurrence relation for the corresponding orthogonal polynomials $$p_k^{n,s}$$ with respect to the modified Chebyshev weight function of the first kind $$\omega^{n,s}$$ on $$[-1,1]$$,   $p_{k+1}^{n,s}(t)=tp_{k}^{n,s}(t)-\beta^{n,s}_kp_{k-1}^{n,s}(t),\quad k\in\mathbb{N}_0,$ where $$p_{0}^{n,s}(t)=1$$, $$p_{-1}^{n,s}(t)=0$$. The Jacobi matrix, on which the well-known construction of the Gauss quadrature formula with $$2n+1$$ nodes with respect to $$\omega^{n,s}$$ is based (cf. Golub & Welsch, 1969), is formed by the following coefficients of the three-term recurrence relation (cf.
Cvetković et al., 2016, Theorem 2.1, Equation (2.2)):   $$\alpha_k^{n,s}=0\ \ (k=0,1,\dots);\qquad \beta_1^{n,s}=\frac 12,\quad \beta_n^{n,s}=\frac 14\,\frac{1+2s}{1+s},\quad \beta_{n+1}^{n,s}=\frac 14\,\frac{1}{1+s},\quad \beta_{2n}^{n,s}=\frac 14\,\frac{2}{1+s};\qquad \beta_k^{n,s}=\frac 14\ \ \mbox{otherwise};$$ (4.5) with $$\beta_0^{n,s}=\frac{\pi}{2^{2ns}}{2s\choose s}$$. For such a Gauss quadrature formula with $$n$$ nodes with respect to $$\omega^{n,s}$$, the associated generalized averaged Gaussian quadrature formula with $$2n+1$$ nodes (see Spalević, 2007) is constructed from the Jacobi matrix that consists of the same coefficients as in the three-term recurrence relation (4.5), except that $$\beta_{2n}^{n,s}=\frac 14\frac{2}{1+s}$$ is replaced by $$\beta_{2n}^{n,s}=\frac 12$$, and whose nodes are the zeros of the polynomial   $t_{2n+1}\equiv p_{n}^{n,s}\cdot F_{n+1},$ where   $F_{n+1}=p_{n+1}^{n,s}-\frac{1}{4(1+s)}\hat T_{n-1}=\hat T_{n+1}+\frac 14\left(1-\frac{1+2s}{1+s}\right)\hat T_{n-1}-\frac{1}{4(1+s)}\hat T_{n-1}=\hat T_{n+1}-\frac 14 \hat T_{n-1}=\frac{1}{2^n}(T_{n+1}-T_{n-1})=\frac{1}{2^{n-1}}(t^2-1)U_{n-1}.$ The last equality holds on the basis of Shi (1996, Equation (2.4)), by putting $$k=1$$ there. Since all entries (i.e., the coefficients of the three-term recurrence relation) of the Jacobi matrix for the generalized averaged Gaussian quadrature formula with $$2n+1$$ nodes agree with the corresponding entries of the Jacobi matrix for the Gauss quadrature formula with $$2n+1$$ nodes (the zeros of $$p_n^{n,s}\cdot F_{n+1}$$), up to the entry $$\sqrt{\beta_{2n}^{n,s}}$$, we conclude that the ADP of the given generalized averaged Gaussian quadrature formula with $$2n+1$$ nodes is $$2(2n+1)-1-2=4n-1\ge 3n+1$$, for $$n\ge2$$.
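The closed-form simplification of $$F_{n+1}$$ can be verified on the coefficient level; the following sketch (Python with NumPy's polynomial module; routine names are ours) builds $$p_{n+1}^{n,s}$$ from the recurrence (4.5), using $$p_k^{n,s}=\hat T_k$$ for $$k\le n$$, and checks that $$F_{n+1}=\hat T_{n+1}-\frac14\hat T_{n-1}=(t^2-1)U_{n-1}/2^{n-1}$$.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from numpy.polynomial import chebyshev as C

def cheb_T(k):
    """Power-basis coefficients (ascending) of T_k."""
    return C.cheb2poly([0.0]*k + [1.0])

def monic_cheb_T(k):
    """Monic Chebyshev polynomial That_k = 2^(1-k) T_k (That_0 = 1)."""
    return cheb_T(k) / (2.0**(k - 1) if k >= 1 else 1.0)

def F_next(n, s):
    """F_{n+1} = p_{n+1}^{n,s} - That_{n-1} / (4(1+s)).

    By (4.5), p_k^{n,s} = That_k for k <= n, and
    p_{n+1} = t p_n - beta_n p_{n-1} with beta_n = (1+2s)/(4(1+s))."""
    beta_n = (1 + 2*s) / (4*(1 + s))
    p_next = P.polysub(P.polymulx(monic_cheb_T(n)), beta_n * monic_cheb_T(n - 1))
    return P.polysub(p_next, monic_cheb_T(n - 1) / (4*(1 + s)))

n, s = 5, 2
lhs = F_next(n, s)
U_nm1 = P.polyder(cheb_T(n)) / n                          # T_n' = n U_{n-1}
rhs = P.polymul([-1.0, 0.0, 1.0], U_nm1) / 2**(n - 1)     # (t^2-1) U_{n-1} / 2^(n-1)
print(np.allclose(lhs, rhs))  # True
```

Note that $$s$$ cancels in the sum of the two $$\hat T_{n-1}$$ terms, which is why $$F_{n+1}$$ does not depend on $$s$$.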
This means that the given generalized averaged Gaussian quadrature formula with $$2n+1$$ nodes is in fact the Gauss–Kronrod quadrature formula and $$F_{n+1}\equiv E_{n+1}$$, where $$E_{n+1}$$ is the Stieltjes polynomial corresponding to $$p_n^{n,s}$$. It is well known that the latter quadrature exists uniquely. Now, by applying Theorem 2.2 for the weight function $$\omega(t)=1/\sqrt{1-t^2}$$ on $$[-1,1]$$, we deduce that there uniquely exist, first, the quadrature formula (3.7) with $$\mathrm{ADP}=2sn+3n+1$$ and then (4.4) with $$\mathrm{ADP}=2sn+2n+1$$. Furthermore, there uniquely exists a Gauss quadrature formula with $$n$$ nodes for the weight function $$\omega^{n,s}$$. By applying Theorem 2.2 again, for the weight function $$\omega(t)=1/\sqrt{1-t^2}$$ on $$[-1,1]$$, we deduce that there uniquely exist, first, the quadrature formula of Gaussian type (3.5) with $$\mathrm{ADP}=2sn+2n-1$$ and then (4.3) with $$\mathrm{ADP}=2sn+n-1$$.

**Remark 4.2** The quadrature formula (4.3) was briefly mentioned in Bojanov & Petrova (2009, p. 383). For calculating the nodes and weight coefficients in the standard Gauss–Turán and Chakalov–Popoviciu quadrature formulas, we can use the general numerically stable methods presented in the papers cited at the end of Section 2. Then, the weight coefficients of the corresponding quadratures for Fourier coefficients can be computed using (2.6). In Section 5, we describe this step in more detail. For $$\omega(t)=1/\sqrt{1-t^2}$$, explicit expressions for the weight coefficients of the generalized standard Gaussian quadrature formulas of a general form on the Chebyshev nodes (of the first and second kind) are given in Shi (1998). Explicit expressions for the Fourier–Chebyshev coefficients $$a_n(f)=\int_{-1}^1 f(t)T_n(t)/\sqrt{1-t^2}\,{\mathrm{d}}t$$ are derived in Yang & Wang (2003).
However, to find them, we need to calculate divided differences with repeated nodes, which might not be a simple task, especially when $$n,s$$ increase (see Pop & Bărbosu, 2009). A more general case, with the Gori–Micchelli weight functions $$\omega=\omega_{n,\mu}$$, was treated in a similar fashion in Yang (2005).

### 4.2 Quadratures for Fourier coefficients for Chebyshev weight functions of the second kind

Let $$\eta_j$$, $$j=1,\dots,n-1$$, be the zeros of the Chebyshev polynomial of the second kind $$U_{n-1}$$ of degree $$n-1$$. It is well known that the Gauss–Turán quadrature formula   $$\int_{-1}^1(1-t^2)^{s+1/2} f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n-1}\sum_{i=0}^{2s}\alpha_{ji}f^{(i)}(\eta_j),\quad s\in\mathbb{N}$$ (4.6) exists uniquely and has $$\mathrm{ADP}=2(n-1)(s+1)-1=2n(s+1)-2s-3$$. Thus, from (4.6), by Theorem 2.2 we get the quadrature formula   $$\int_{-1}^1 \sqrt{1-t^2} f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n-1}\sum_{i=0}^{2s}\hat\alpha_{ji}f^{(i)}(\eta_j)+\sum_{i=0}^{s-1}[\beta_i f^{(i)}(-1)+ \gamma_i f^{(i)}(1)],\quad s\in\mathbb{N},$$ (4.7) which exists uniquely and has $$\mathrm{ADP}=2n(s+1)-3$$. Since the nodes of the quadrature formula (4.7) are known, we can calculate its weight coefficients (cf. Milovanović et al., 2004). Using (4.7) again, Theorem 2.2 provides the Gaussian quadrature formula of Lobatto type for the Fourier–Chebyshev coefficients   $$\int_{-1}^1 \sqrt{1-t^2} f(t) U_{n-1}(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n-1}\sum_{i=0}^{2s-1}\widetilde\alpha_{ji}f^{(i)}(\eta_j)+\sum_{i=0}^{s-1}[\widetilde\beta_i f^{(i)}(-1)+ \widetilde\gamma_i f^{(i)}(1)],\quad s\in\mathbb{N},$$ (4.8) which exists uniquely and has $$\mathrm{ADP}=2ns+n-3$$. Since the nodes of the quadrature formula (4.8) are known (they are the same as in (4.7)), we can calculate its weight coefficients by (2.6), knowing the weight coefficients of (4.7).
On the other hand, Micchelli & Sharma (1983), for every $$s\in\mathbb{N}$$, constructed a formula of the form   $$\int_{-1}^1 \frac{1}{\sqrt{1-t^2}}T_n(t) f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n-1}\sum_{i=0}^{2s}a_{ji}f^{(i)}(\eta_j)+\sum_{i=0}^{s}[A_i f^{(i)}(-1)+ B_i f^{(i)}(1)],$$ (4.9) with $$\mathrm{ADP}=(2s+3)n-1$$, which has the highest possible degree of precision. Since the nodes of their formula are located at the extremal points $$-1,\eta_1,\dots,\eta_{n-1},1$$ of the Chebyshev polynomial $$T_n$$ (note that $$\{\eta_j\}_{j=1}^{n-1}$$ are also the zeros of the Chebyshev polynomial of the second kind $$U_{n-1}$$), it can be considered an extension of the simple node formula   $\frac 2\pi\int_{-1}^1 \frac{1}{\sqrt{1-t^2}}T_n(t) f(t)\,{\mathrm{d}}t\approx 2^{1-n}f[-1,\eta_1,\dots,\eta_{n-1},1]$ of $$\mathrm{ADP}=3n-1$$, established earlier in Micchelli & Rivlin (1972). The uniqueness of the Micchelli–Sharma multiple node quadrature formula with the highest degree of precision (4.9) is proved in Bojanov & Petrova (2009, Theorem 2.6). From (4.9) and by Theorem 2.2, again, we get the Gaussian quadrature formula of Lobatto type,   $$\int_{-1}^1 \sqrt{1-t^2} f(t)\,{\mathrm{d}}t \approx \sum_{j=1}^{n-1}\sum_{i=0}^{2s}\hat a_{ji}f^{(i)}(\eta_j)+\sum_{i=0}^{s-1}[\hat A_i f^{(i)}(-1)+ \hat B_i f^{(i)}(1)] + \sum_{j=1}^n\lambda_jf(\xi_j),$$ (4.10) which uniquely exists and has $$\mathrm{ADP}=(2s+3)n-1-2+n=2(s+2)n-3$$. Since the nodes of the quadrature formula (4.10) are known, we can calculate its weight coefficients (cf. Milovanović et al., 2004). Note that $$\xi_j$$, $$j=1,\dots,n$$ are the zeros of the $$n$$-degree Chebyshev polynomial of the first kind $$T_n$$. 
Having in mind (4.10), and using again Theorem 2.2, we get the Gaussian quadrature formula of Lobatto type for the Fourier–Chebyshev coefficients   $$\int_{-1}^1 \sqrt{1-t^2} f(t) U_{n-1}(t)\,{\mathrm{d}}t \approx \sum_{j=1}^{n-1}\sum_{i=0}^{2s-1}\tilde a_{ji}f^{(i)}(\eta_j)+\sum_{i=0}^{s-1}[\tilde A_i f^{(i)}(-1)+ \tilde B_i f^{(i)}(1)] + \sum_{j=1}^n\tilde\lambda_jf(\xi_j),$$ (4.11) which exists uniquely and has $$\mathrm{ADP}=(2s+3)n-2$$. Since the nodes of the quadrature formula (4.11) are known, as above, we can calculate its weight coefficients by (2.6), from the weight coefficients of (4.10). In this way, we have found two quadrature formulas with multiple nodes, (4.8) and its modified Kronrod extension (4.11), which is the Gaussian quadrature formula of Lobatto type, for calculating the Fourier–Chebyshev coefficients relative to Chebyshev weight functions of the second kind. We have just proved the following theorem. Theorem 4.3 Let $$n,s\in\mathbb{N}$$ and $$\omega(t)=\sqrt{1-t^2}$$, $$t\in[-1,1]$$. Then, there exists a unique quadrature formula with multiple nodes for calculating the corresponding Fourier–Chebyshev coefficients   $a_{n-1}(f)=\int_{-1}^1 f(t)U_{n-1}(t)\sqrt{1-t^2}\,{\mathrm{d}}t,$ namely the quadrature formula (4.8) with $$\mathrm{ADP}=2ns+n-3$$, as well as its modified Kronrod extension (4.11) with $$\mathrm{ADP}=(2s+3)n-2$$. We call (4.11) the modified Kronrod extension of (4.8) in the following sense. We apply Kronrod’s idea to the quadrature formula (4.8) to obtain a quadrature formula of type (4.11) in which the nodes $$\eta_j,\ j=1,\dots,n-1$$ (and $$-1,1$$) are fixed, while the additional $$n$$ nodes and all weight coefficients are chosen to maximize the ADP of the extended formula. Such an extension must have $$\mathrm{ADP}\ge 2n(s+1)-1$$, which is indeed achieved by the quadrature (4.11), since $$(2s+3)n-2\ge 2n(s+1)-1$$ for $$n\ge1$$.
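Since all nodes of (4.8) and of its extension (4.11) are known explicitly, the node pattern is easy to inspect numerically: each added zero $$\xi_j$$ of $$T_n$$ falls strictly between consecutive members of the fixed node set $$\{-1,\eta_1,\dots,\eta_{n-1},1\}$$. A minimal check (Python/NumPy, illustrative only):

```python
import numpy as np

n = 12
eta = np.sort(np.cos(np.arange(1, n) * np.pi / n))      # zeros of U_{n-1}
fixed = np.concatenate(([-1.0], eta, [1.0]))            # fixed nodes of (4.11)
xi = np.sort(np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n)))  # zeros of T_n

# each added node xi_j lies strictly between consecutive fixed nodes
interlaced = all(fixed[j] < xi[j] < fixed[j + 1] for j in range(n))
```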
4.3 Quadratures for Fourier coefficients for Chebyshev weight functions of the third and fourth kinds Let $$\sigma_n=(s,\ldots,s)$$ and $$\omega(t)\equiv \omega_4(t)=(1-t)^{1/2+s}(1+t)^{-1/2}$$. Let $$\{x_j\,,\, j=1,\dots,n\}$$, be the zeros of the Chebyshev polynomial of the fourth kind $$P_n^{(1/2,-1/2)}$$ of degree $$n$$, with respect to the Chebyshev weight function of the fourth kind $$(1-t)^{1/2}(1+t)^{-1/2}$$, which is $$s$$-orthogonal with respect to $$\omega_4$$ (see Ossicini & Rosati, 1975). It is well known that the Gauss–Turán quadrature formula   $$\int_{-1}^1 \omega_4(t) f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n}\sum_{i=0}^{2s}\alpha_{ji}f^{(i)}(x_j),\quad s\in\mathbb{N}$$ (4.12) uniquely exists, and has $$\mathrm{ADP}=2n(s+1)-1$$. From (4.12) and Theorem 2.2 we get the quadrature formula   $$\int_{-1}^1 \sqrt{\frac{1-t}{1+t}} f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n}\sum_{i=0}^{2s}\hat \alpha_{ji}f^{(i)}(x_j)+\sum_{i=0}^{s-1}\beta_{i}f^{(i)}(1),\quad s\in\mathbb{N},$$ (4.13) which is unique and has $$\mathrm{ADP}=2n(s+1)+s-1=(2n+1)s+2n-1$$. Since the nodes of the quadrature formula (4.13) are known, we can calculate its weight coefficients (cf. Milovanović et al., 2004). Now, from (4.13) and Theorem 2.2, the following Gaussian quadrature formula of Radau type for the Fourier–Chebyshev coefficients is obtained:   $$\int_{-1}^1 \sqrt{\frac{1-t}{1+t}} f(t)P_n^{(1/2,-1/2)}(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n}\sum_{i=0}^{2s-1}\widetilde \alpha_{ji}f^{(i)}(x_j)+\sum_{i=0}^{s-1}\widetilde\beta_{i}f^{(i)}(1),\quad s\in\mathbb{N},$$ (4.14) which uniquely exists and has $$\mathrm{ADP}=(2n+1)s+n-1$$. Since the nodes of the quadrature formula (4.14) are known (they are the same as in (4.13)), we can calculate its weight coefficients by (2.6), using the weight coefficients of (4.13). 
On the other hand, let $$x_\mu^*$$, $$\mu=2,\dots,n+1$$, be the zeros of the Chebyshev polynomial of the third kind $$P_n^{(-1/2,1/2)}$$ of degree $$n$$ with respect to the Chebyshev weight function of the third kind $$(1+t)^{1/2}(1-t)^{-1/2}$$. A Kronrod extension with $$\mathrm{ADP}=n(4s+3)+s+1$$, i.e.,   $$\int_{-1}^1f(t)\omega_4(t)\,{\mathrm{d}}t \approx \sum_{\nu=1}^n \sum_{i=0}^{2s} b_{\nu i} f^{(i)}(x_\nu)+\sum_{\mu=2}^{n+1} \sum_{j=0}^{2s} c_{\mu j}^* f^{(j)}(x_\mu^*) +\sum_{j=0}^{s}c_{1,j}^* f^{(j)}(-1),$$ (4.15) of the Gauss–Turán quadrature formula (4.12) was proposed by Milovanović & Spalević (2014, Equation (2.23)). The free nodes $$x_\mu^*,\ \mu=2,\dots,n+1$$ are of the same multiplicity $$2s+1$$ as the fixed nodes $$x_\nu,\ \nu=1,\dots,n$$, and we need the node at $$-1$$ of multiplicity $$s+1$$, since in that case the corresponding orthogonality conditions reduce to the conditions   $\int_{-1}^1 t^k\,[U_{2n}(t)]^{2s+1}(1-t^2)^{1/2+s}\,\,{\rm d}t=0,\quad k=0,1,\dots,2n-1,$ which are fulfilled since $$P_n^{(1/2,-1/2)}(t)P_n^{(-1/2,1/2)}(t)={\rm const} \cdot U_{2n}(t)$$ (cf. Monegato, 1982, Equation (33), p. 147). For more details, see Milovanović & Spalević (2014, pp. 1217–1218). Then, from (4.15) and Theorem 2.2, we get the following quadrature formula, which is a modified Kronrod extension of (4.13):   $$\begin{array}{r@{\;}l} \displaystyle \int_{-1}^1f(t)\sqrt{\frac{1-t}{1+t}}\,{\mathrm{d}}t &\approx \displaystyle\sum_{\nu=1}^n \sum_{i=0}^{2s} \widehat b_{\nu i} f^{(i)}(x_\nu)+\sum_{j=0}^{s-1}\widehat c_{n+1,j}^* f^{(j)}(1)\\[0.1in] &\quad{}+\displaystyle\sum_{\mu=2}^{n+1} \sum_{j=0}^{2s} \widehat c_{\mu j}^* f^{(j)} (x_\mu^*) +\sum_{j=0}^{s}\widehat c_{1,j}^* f^{(j)}(-1), \end{array}$$ (4.16) which exists uniquely and has $$\mathrm{ADP}=n(4s+3)+2s+1$$. Since the nodes of the quadrature formula (4.16) are known, we can calculate, again, its weight coefficients (cf. Milovanović et al., 2004).
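The identity $$P_n^{(1/2,-1/2)}(t)P_n^{(-1/2,1/2)}(t)={\rm const}\cdot U_{2n}(t)$$ underlying the orthogonality argument can be verified from the trigonometric forms of the third- and fourth-kind polynomials, $$V_n(\cos\theta)=\cos((n+\tfrac12)\theta)/\cos(\theta/2)$$ and $$W_n(\cos\theta)=\sin((n+\tfrac12)\theta)/\sin(\theta/2)$$; with these normalizations the constant is $$1$$. A quick numerical confirmation (Python/NumPy sketch, not from the paper):

```python
import numpy as np

n = 7
theta = np.linspace(0.1, np.pi - 0.1, 200)

V = np.cos((n + 0.5) * theta) / np.cos(theta / 2)   # third kind V_n(cos theta)
W = np.sin((n + 0.5) * theta) / np.sin(theta / 2)   # fourth kind W_n(cos theta)
U2n = np.sin((2 * n + 1) * theta) / np.sin(theta)   # second kind U_{2n}(cos theta)

# V_n * W_n = sin((2n+1)theta) / (2 sin(theta/2) cos(theta/2)) = U_{2n}
max_err = np.max(np.abs(V * W - U2n))
```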
In a similar way, from (4.16) and Theorem 2.2, we get the Gaussian quadrature formula for the Fourier–Chebyshev coefficients, which is a modified Kronrod extension of (4.14),   $$\begin{array}{r@{\;}l} \displaystyle \int_{-1}^1\sqrt{\frac{1-t}{1+t}}f(t)P_n^{(1/2,-1/2)}(t)\,{\mathrm{d}}t &\approx \displaystyle\sum_{\nu=1}^n \sum_{i=0}^{2s-1} \widetilde b_{\nu i} f^{(i)}(x_\nu)+\sum_{j=0}^{s-1}\widetilde c_{n+1,j}^* f^{(j)}(1)\\[0.1in] &\quad{}+\displaystyle\sum_{\mu=2}^{n+1} \sum_{j=0}^{2s} \widetilde c_{\mu j}^* f^{(j)}(x_\mu^*) +\sum_{j=0}^{s}\widetilde c_{1,j}^* f^{(j)}(-1), \end{array}$$ (4.17) which exists uniquely and has $$\mathrm{ADP}=2n(2s+1)+2s+1$$. The knowledge of the nodes in (4.17) allows us, again, to compute its weight coefficients by (2.6), using the corresponding ones of (4.16). Therefore, two quadrature formulas with multiple nodes, (4.14) and its modified Kronrod extension (4.17), for calculating the Fourier–Chebyshev coefficients relative to Chebyshev weight functions of the fourth kind have been found. Indeed, we have just proved the following theorem. Theorem 4.4 Let $$n,s\in\mathbb{N}$$ and $$\omega(t)=\sqrt{(1-t)/(1+t)}$$, $$t\in[-1,1]$$. Then, there exists a unique quadrature formula with multiple nodes for calculating the corresponding Fourier–Chebyshev coefficients   $a_{n}(f)=\int_{-1}^1 \sqrt{\frac{1-t}{1+t}}f(t)P_n^{(1/2,-1/2)}(t)\,{\mathrm{d}}t,$ namely, formula (4.14) with $$\mathrm{ADP}=(2n+1)s+n-1$$, and its modified Kronrod extension (4.17) with $$\mathrm{ADP}=2n(2s+1)+2s+1$$. Now, in a similar fashion, the following result about quadrature formulas with multiple nodes for estimating Fourier coefficients for the Chebyshev weight function of the third kind $$\omega_3$$, as well as its optimal extension of Kronrod type, may be obtained. Indeed, using the same notation as above, we have the following result: Theorem 4.5 Let $$n,s\in\mathbb{N}$$ and $$\omega(t)=\sqrt{(1+t)/(1-t)}$$, $$t\in[-1,1]$$.
Then there exists a unique quadrature formula with multiple nodes for calculating the corresponding Fourier–Chebyshev coefficients   $a_{n}(f)=\int_{-1}^1 \sqrt{\frac{1+t}{1-t}}f(t)P_n^{(-1/2,1/2)}(t)\,{\mathrm{d}}t,$ namely, formula (4.18) below, with $$\mathrm{ADP}=(2n+1)s+n-1$$, and its modified Kronrod extension (see (4.19) below) with $$\mathrm{ADP}=2n(2s+1)+2s+1$$. The above quadrature formulas are given by, respectively,   $$\int_{-1}^1 \sqrt{\frac{1+t}{1-t}} f(t)P_n^{(-1/2,1/2)}(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n}\sum_{i=0}^{2s-1}\widetilde \alpha_{ji}f^{(i)}(x_j)+\sum_{i=0}^{s-1}\widetilde\beta_{i}f^{(i)}(-1),\quad s\in\mathbb{N}$$ (4.18) and its modified Kronrod extension,   $$\begin{array}{r@{\;}l} \displaystyle \int_{-1}^1\sqrt{\frac{1+t}{1-t}}f(t)P_n^{(-1/2,1/2)}(t)\,{\mathrm{d}}t &\approx \displaystyle\sum_{\nu=1}^n \sum_{i=0}^{2s-1} \widetilde b_{\nu i} f^{(i)}(x_\nu)+\sum_{j=0}^{s-1}\widetilde c_{1,j}^* f^{(j)}(-1)\\[0.1in] &\quad{}+\displaystyle\sum_{\mu=2}^{n+1} \sum_{j=0}^{2s} \widetilde c_{\mu j}^* f^{(j)}(x_\mu^*) +\sum_{j=0}^{s}\widetilde c_{n+1,j}^* f^{(j)}(1), \end{array}$$ (4.19) where the nodes $$\{x_\nu\}$$ and $$\{x_\mu^*\}$$ have the same meaning as in Theorem 4.4. Remark 4.6 It is clear that in all the quoted cases, the nodes in the Gaussian quadrature formulas and their Kronrod (or modified Kronrod) extensions interlace (cf. Milovanović & Spalević, 2014). 5. Numerical construction In this section, the numerical feasibility and efficiency of the quadrature formulas considered in the previous section are analysed. First, we present a way of computing the weight coefficients in the quadratures for Fourier–Chebyshev coefficients considered above. Let   $$\int_a^b \omega(t)f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^n\sum_{i=0}^{2s}a_{ji}f^{(i)}(x_j)$$ (5.1) represent a Gauss–Turán quadrature formula with respect to the weight function $$\omega(t)$$, $$t\in[a,b]$$, which has $$\mathrm{ADP}(\text{5.1})=2n(s+1)-1$$.
If the corresponding monic $$s$$-orthogonal polynomial $$\pi_{n,s}$$ agrees with the monic ordinary orthogonal polynomial $$P_n$$, based on the zeros $$\{x_j\}_{j=1}^n$$, then the quadrature formula for the Fourier coefficient $$a_n(f)=\int_a^b \omega(t) P_n(t)f(t)\,{\mathrm{d}}t$$ of Gaussian type, obtained from (5.1), has the form   $$\int_a^b \omega(t)P_n(t)f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^n\sum_{i=0}^{2s-1}\widehat a_{ji}f^{(i)}(x_j)=:\mathcal{T}_{n,s}(f),$$ (5.2) and $$\mathrm{ADP}(\text{5.2})=(2s+1)n-1$$. The previous quadrature (4.3) is the particular case of quadrature (5.2) for $$\omega(t)=1/\sqrt{1-t^2}$$, $$t\in[-1,1]$$. If we know the weight coefficients $$a_{ji}$$ in (5.1), then we can find the weight coefficients $$\widehat a_{ji}$$ in (5.2) as follows. Substituting $$f$$ in (5.1) by $$fP_n$$, where $$f\in\mathcal{P}_{n(2s+1)-1}$$ and $$P_n(x_j)=0$$ $$(j=1,\ldots,n)$$, we get \begin{eqnarray*} \int_a^b\omega(t) f(t)P_n(t)\,{\mathrm{d}}t&=&\sum_{j=1}^n\sum_{i=0}^{2s}a_{ji}\left.\left[f(t)P_n(t)\right]^{(i)}\right|_{t=x_j}\\ &=&\sum_{j=1}^n\sum_{i=1}^{2s}a_{ji}\left.\left[f(t)P_n(t)\right]^{(i)}\right|_{t=x_j}\\ &=&\sum_{j=1}^n\sum_{i=0}^{2s-1}a_{j,i+1}\left.\left[f(t)P_n(t)\right]^{(i+1)}\right|_{t=x_j}\\ &=&\sum_{j=1}^n\sum_{i=0}^{2s-1}a_{j,i+1}\sum_{k=0}^{i}\binom{i+1}{k}P_n^{(i+1-k)}(x_j)f^{(k)}(x_j), \end{eqnarray*} where the term with $$i=0$$ is dropped because $$f(t)P_n(t)$$ vanishes at $$t=x_j$$, and in the final Leibniz expansion the term with $$k=i+1$$ drops out since $$P_n(x_j)=0$$. If, for fixed $$j$$, we define   $h_{ik}=a_{j,i+1}\binom{i+1}{k}P_n^{(i+1-k)}(x_j)f^{(k)}(x_j),$ then we have   $\sum_{i=0}^{2s-1}\sum_{k=0}^{i}h_{ik}=\sum_{k=0}^{2s-1}\sum_{i=k}^{2s-1}h_{ik}=\sum_{i=0}^{2s-1}\sum_{k=i}^{2s-1}h_{ki},$ and, therefore, we conclude that   $$\int_a^b\omega(t) f(t)P_n(t)\,{\mathrm{d}}t=\sum_{j=1}^n\sum_{i=0}^{2s-1}\left(\sum_{k=i}^{2s-1}a_{j,k+1}\binom{k+1}{i}P_n^{(k+1-i)}(x_j)\right)f^{(i)}(x_j)$$ (5.3) for each $$f\in\mathcal{P}_{n(2s+1)-1}$$.
This means that (5.3) gives (5.2), where   $$\widehat a_{ji}=\sum_{k=i}^{2s-1}a_{j,k+1}\binom{k+1}{i}P_n^{(k-i+1)}(x_j),\quad j=1,2,\dots,n;\ i=0,1,\dots,2s-1.$$ (5.4) Now, let   $$\int_a^b \omega(t)f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^n\sum_{i=0}^{2s}\lambda_{ji}f^{(i)}(x_j)+\sum_{j=1}^{n+1}\gamma_jf(\tau_j)$$ (5.5) represent a Kronrod extension of the Gauss–Turán quadrature formula (5.1) of type (3.7) with respect to the weight function $$\omega(t)$$, $$t\in[a,b]$$, which has $$\mathrm{ADP}(\text{5.5})=2n(s+1)+n+1$$. If the corresponding monic $$s$$-orthogonal polynomial $$\pi_{n,s}$$ agrees with the monic ordinary orthogonal polynomial $$P_n$$, based on the zeros $$\{x_j\}_{j=1}^n$$, then the quadrature formula for the Fourier coefficient $$a_n(f)=\int_a^b \omega(t) P_n(t)f(t)\,{\mathrm{d}}t$$, obtained from (5.5), has the form   $$\int_a^b \omega(t)f(t)P_n(t)\,{\mathrm{d}}t\approx \sum_{j=1}^n\sum_{i=0}^{2s-1}\widehat\lambda_{ji}f^{(i)}(x_j)+\sum_{j=1}^{n+1}\widehat\gamma_jf(\tau_j)=:\mathcal{TK}_{n,s}(f)$$ (5.6) and $$\mathrm{ADP}(\text{5.6})=2n(s+1)+1$$. It is a Kronrod extension of the quadrature formula (5.2). The quadrature (4.4) is the particular case of quadrature (5.6) for $$\omega(t)=1/\sqrt{1-t^2}$$, $$t\in[-1,1]$$. If we know the weight coefficients $$\lambda_{ji},\gamma_j$$ in (5.5), then we can find the weight coefficients $$\widehat \lambda_{ji},\widehat\gamma_j$$ in (5.6) in a similar way to the standard quadratures in the first part of this section.
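Formulas (5.4) and (5.7) are purely algebraic consequences of $$P_n(x_j)=0$$, so they can be unit-tested with arbitrary stand-in data: any distinct nodes, any weights $$a_{ji}$$ and any $$f\in\mathcal{P}_{n(2s+1)-1}$$ must make both sides of (5.3) agree. A sketch (Python/NumPy; the random weights are placeholders, not actual Gauss–Turán weights):

```python
import numpy as np
from numpy.polynomial import Polynomial
from math import comb

def dval(p, i, t):
    """i-th derivative of polynomial p evaluated at t."""
    return p(t) if i == 0 else p.deriv(i)(t)

rng = np.random.default_rng(1)
n, s = 4, 2

x = np.sort(rng.uniform(-1.0, 1.0, n))       # stand-in distinct nodes
Pn = Polynomial.fromroots(x)                 # monic polynomial vanishing at the nodes
a = rng.standard_normal((n, 2 * s + 1))      # stand-in weights a_{ji}

# hat a_{ji} computed from (5.4)
a_hat = np.array([[sum(a[j, k + 1] * comb(k + 1, i) * dval(Pn, k + 1 - i, x[j])
                       for k in range(i, 2 * s))
                   for i in range(2 * s)] for j in range(n)])

f = Polynomial(rng.standard_normal(n * (2 * s + 1)))   # random f of degree n(2s+1)-1
g = f * Pn
lhs = sum(a[j, i] * dval(g, i, x[j]) for j in range(n) for i in range(2 * s + 1))
rhs = sum(a_hat[j, i] * dval(f, i, x[j]) for j in range(n) for i in range(2 * s))
```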
Therefore,   $$\widehat \lambda_{ji}=\sum_{k=i}^{2s-1}\lambda_{j,k+1}\binom{k+1}{i}P_n^{(k+1-i)}(x_j),\quad j=1,2,\dots,n;\ i=0,1,\dots,2s-1,$$ (5.7) and   $$\widehat \gamma_{j}=\gamma_jP_n(\tau_j),\quad j=1,2,\dots,n+1.$$ (5.8) For instance, acting in a similar way, we can get the coefficients $$\widetilde c_{\mu j}^*$$ in (4.17) from the coefficients $$\widehat c_{\mu k}^*$$ in (4.16), namely,   $$\widetilde c_{\mu j}^*=\sum_{k=j}^{2s}\widehat c_{\mu k}^*\binom{k}{j}\frac{{\mathrm{d}}^{k-j}}{{\mathrm{d}}t^{k-j}}\left[P_n^{(1/2,-1/2)}(t)\right]_{t=x_\mu^*};\quad \mu=2,\dots,n+1;\ j=0,1,\dots,2s.$$ (5.9) For computing the coefficients in (5.4) and (5.7), we need to determine the derivatives $$P_n^{(k)}(x_j)$$. We now describe a method for this. First, denoting by $$P_n$$ the corresponding monic orthogonal polynomial, set   $P_n(t)=\prod_{i=1}^n(t-x_i)=(t-x_j)\cdot h_j(t),$ where   $h_j(t)=\prod_{i=1,i\ne j}^n(t-x_i).$ Since   $P_n^{(k)}(t)=\sum_{l=0}^k\binom{k}{l}(t-x_j)^{(l)}h_j^{(k-l)}(t) =(t-x_j)h_j^{(k)}(t)+kh_j^{(k-1)}(t)\,,$ only the terms with $$l=0,1$$ surviving, we get   $P_n^{(k)}(x_j)=kh_j^{(k-1)}(x_j).$ Therefore,   $$P_n^{(k)}(x_j)=\left\{ \begin{array}{ll} 0,&\quad k=0,\\ 0,&\quad k>n,\\ n!,&\quad k=n,\\ k\cdot h_j^{(k-1)}(x_j),& \quad k=1,\dots,n-1. \end{array} \right.$$ (5.10) Let $$t\in(x_{j-1},x_{j+1})$$.
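Relation (5.10) is straightforward to confirm by differentiating $$P_n$$ and $$h_j$$ directly; a minimal check (Python/NumPy, with Chebyshev-type nodes chosen only for concreteness):

```python
import numpy as np
from numpy.polynomial import Polynomial

n, j = 6, 2                                    # degree and a (0-based) node index
x = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))   # any distinct nodes work
Pn = Polynomial.fromroots(x)                   # P_n(t) = prod_i (t - x_i)
hj = Polynomial.fromroots(np.delete(x, j))     # h_j(t) = prod_{i != j} (t - x_i)

# P_n^{(k)}(x_j) = k * h_j^{(k-1)}(x_j) for k = 1, ..., n-1, and P_n(x_j) = 0
checks = [np.isclose(Pn.deriv(k)(x[j]),
                     k * (hj(x[j]) if k == 1 else hj.deriv(k - 1)(x[j])))
          for k in range(1, n)]
ok = all(checks) and np.isclose(Pn(x[j]), 0.0)
```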
We have $$h_j(t)=(-1)^{n-j}\,g_j(t)$$, where   $g_j(t)=(t-x_1)\cdots(t-x_{j-1})(x_{j+1}-t)\cdots(x_n-t)\quad (g_j(t)>0\ \mbox{for}\ t\in(x_{j-1},x_{j+1})),$ and   $h_j^{(k-1)}(x_j)=(-1)^{n-j}g_j^{(k-1)}(x_j)=(-1)^{n-j}\left[e^{Q_j(t)}\right]^{(k-1)}_{t=x_j},$ where   $Q_j(t)=\sum_{i=1,i\ne j}^n q_i(t),\quad q_i(t)=\log |t-x_i|.$ Using Milovanović & Spalević (1998, Lemma 2.1), we have   $$\bigl(e^{Q_j(x_j)}\bigr)^{(0)}=e^{Q_j(x_j)},\quad \left(e^{Q_j}\right)^{(k-1)}_{t=x_j}=\sum_{l=1}^{k-1}\binom{k-2}{l-1}Q_j^{(l)}(x_j)\left(e^{Q_j}\right)^{(k-l-1)}_{t=x_j}\ \ (k\ge2).$$ (5.11) Finally,   $Q_j^{(l)}(x_j)=\sum_{i=1,i\ne j}^n q_i^{(l)}(x_j),$ where $$q_i^{(0)}(x_j)=\log |x_j-x_i|$$ and   $q_i^{(l)}(x_j)=(-1)^{l-1}\frac{(l-1)!}{(x_i-x_j)^l},\quad l\in\mathbb{N}, \quad i=1,\dots,n,$ which is useful in (5.11). But if we have to compute the coefficients $$\widetilde c_{\mu j}^*$$ at a point $$x_\mu^*$$ in the interval $$[-1,1]$$ that is not a zero of the corresponding monic orthogonal polynomial $$P_n$$ (as, e.g., in (5.9)), we can proceed as follows. Since   \begin{equation*} P_n^{(k)}(x_\mu^*)=\left\{ \begin{array}{ll} 0,&\quad k>n,\\ n!,&\quad k=n,\\ \prod_{i=1}^n(x_\mu^*-x_i),& \quad k=0, \end{array} \right. \end{equation*} it only remains to determine $$P_n^{(k)}(x_\mu^*)$$, for $$k=1,2,\dots,n-1$$. In all the cases considered, we have $$x_\mu^*\in[-1,1]$$. Let us define $$x_0=-1$$ and $$x_{n+1}=1$$. The point $$x_\mu^*$$ can be located in one of the intervals $$I_0=[x_0,x_1)$$, $$I_1=(x_1,x_2)$$, $$\dots$$, $$I_{n-1}=(x_{n-1},x_n)$$, $$I_n=(x_n,1]$$. Suppose $$x_\mu^*\in I_j$$ for some $$j\in\{0,1,\dots,n\}$$, and take $$t\in I_j$$. Then, $$P_n(t)=(-1)^{n-j}g_j(t)$$, where $$g_j(t)=\prod_{i=1}^n|t-x_i|>0$$, i.e., $$g_j(t)=\prod_{i=1}^j(t-x_i)\cdot \prod_{i=j+1}^n(x_i-t)$$, with the conventions $$\prod_{i=1}^{0}=\prod_{i=n+1}^{n}=1$$.
Now,   $P_n^{(k)}(x_\mu^*)=(-1)^{n-j}g_j^{(k)}(x_\mu^*)=(-1)^{n-j}\left[e^{Q_j(t)}\right]^{(k)}_{t=x_\mu^*},$ where   $Q_j(t)=\sum_{i=1}^n q_i(t),\quad q_i(t)=\log |t-x_i|.$ Similarly to above, for $$k\ge1$$, we have   $$\left(e^{Q_j(x_\mu^*)}\right)^{(0)}=e^{Q_j(x_\mu^*)},\quad \left(e^{Q_j}\right)^{(k)}_{t=x_\mu^*}=\sum_{l=1}^{k}\binom{k-1}{l-1}Q_j^{(l)}(x_\mu^*)\left(e^{Q_j}\right)^{(k-l)}_{t=x_\mu^*}.$$ (5.12) Finally,   $Q_j^{(l)}(x_\mu^*)=\sum_{i=1}^n q_i^{(l)}(x_\mu^*),$ where   $q_i^{(0)}(x_\mu^*)=\log |x_\mu^*-x_i|,\quad q_i^{(l)}(x_\mu^*)=(-1)^{l-1}\frac{(l-1)!}{(x_\mu^*-x_i)^l},\quad l\in\mathbb{N}, \quad i=1,\dots,n,$ which is what we need in (5.12). Example 5.1 We now display some numerical results about the computation of weights and the estimation of the error of quadrature formulas with multiple nodes for computing Fourier coefficients. For this purpose, let us consider the calculation of the integrals   $$I_n=\frac{1}{2^{n-1}}\int_{-1}^{1}\frac{e^{10t}\,T_n(t)}{\sqrt{1-t^2}}\,{\mathrm{d}}t=\int_{-1}^{1}\frac{e^{10t}\,\widehat T_n(t)}{\sqrt{1-t^2}}\,{\mathrm{d}}t,\quad n\in\mathbb{N}_0,$$ (5.13) by the quadrature formula (5.2) of Gaussian type, where $$P_n\equiv\widehat T_n$$. In the case $$s=1$$, it reduces to the well-known Micchelli–Rivlin quadrature formula (2.8). We also calculate the quadrature formula (5.6), with $$P_n\equiv\widehat T_n$$, which is the Kronrod extension of (5.2), to estimate its quadrature error.
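The recurrence (5.12), together with the closed forms for $$Q_j^{(l)}$$, can be implemented in a few lines; the sketch below (Python/NumPy, illustrative; the evaluation point is an arbitrary non-node) reproduces the derivatives $$P_n^{(k)}(x_\mu^*)$$ obtained by direct polynomial differentiation:

```python
import numpy as np
from numpy.polynomial import Polynomial
from math import comb, factorial, log

def pn_derivs_via_logs(x_nodes, x_star, kmax):
    """P_n^{(k)}(x*), k = 0..kmax, for P_n(t) = prod_i (t - x_i), via (5.12)."""
    sign = np.prod(np.sign(x_star - x_nodes))          # the factor (-1)^{n-j}
    Q = [sum(log(abs(x_star - xi)) for xi in x_nodes)] # Q_j(x*)
    for l in range(1, kmax + 1):                       # closed form for Q_j^{(l)}(x*)
        Q.append(sum((-1) ** (l - 1) * factorial(l - 1) / (x_star - xi) ** l
                     for xi in x_nodes))
    g = [np.exp(Q[0])]                                 # g = e^{Q_j}, so g^{(0)} = |P_n(x*)|
    for k in range(1, kmax + 1):                       # the recurrence (5.12)
        g.append(sum(comb(k - 1, l - 1) * Q[l] * g[k - l] for l in range(1, k + 1)))
    return [sign * gk for gk in g]

n = 8
x = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))  # nodes (Chebyshev zeros here)
x_star = 0.375                                               # a point that is not a node
vals = pn_derivs_via_logs(x, x_star, n - 1)

# direct differentiation of the monic polynomial for comparison
Pn = Polynomial.fromroots(x)
direct = [Pn(x_star)] + [Pn.deriv(k)(x_star) for k in range(1, n)]
scale = max(abs(v) for v in direct)
max_err = max(abs(u - v) for u, v in zip(vals, direct)) / scale
```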
For example, if we denote the quadrature sums in (5.2) and (5.6) by $$Q(\text{5.2})$$ and $$Q(\text{5.6})$$, respectively, we can use the well-known method to estimate the error in (5.2) by means of the difference   ${\rm{Err}}_{n,s}=\left|Q(\text{5.2})-Q(\text{5.6})\right|.$ Recall that the quadrature nodes $$x_j=\xi_j$$ and $$\tau_j=\eta_j$$ in (5.2) and (5.6) are given by   $x_j=-\cos \frac{(2j-1)\pi}{2n} \quad (j=1,\dots,n)\quad \mbox{and}\quad \tau_j=-\cos \frac{(j-1)\pi}{n}\quad (j=1,2,\dots,n+1).$ For computing $$\widehat T_n^{(p)}(x_j)=T_n^{(p)}(x_j)/2^{n-1}$$, $$p\in\mathbb{N}$$, we can use, in this special case, the following method. Using Szegő (1975, Equation (4.21.7)), we have the values of $$T_n'(x_j)$$; then, differentiating the differential equation from Szegő (1975, Equation (4.2.1)) $$k$$ times yields   $(1-t^2)T_n^{(k+2)}(t)-(2k+1)t T_n^{(k+1)}(t)+(n^2-k^2)T_n^{(k)}(t)=0,$ and, hence, it allows us to obtain $$T_n^{(k+2)}(x_j)$$ (for $$j=1,2,\dots,n$$), namely,   $T^{(k+2)}_n(x_j)=\frac{(2k+1)x_j\, T^{(k+1)}_n(x_j)-(n^2-k^2) T^{(k)}_n(x_j)}{1-x_j^2},\quad 1\le k\le n-3,$ where   $T'_n(x_j)=nU_{n-1}(x_j)=\frac{n(-1)^{n-j}}{\sin \dfrac{(2j-1)\pi}{2n}}\quad\mbox{and}\quad T''_n(x_j)=\frac{x_j}{1-x_j^2} T'_n(x_j).$ It is clear that $$\widehat T^{(n)}_n(x_j)=n!$$ and $$\widehat T^{(k)}_n(x_j)=0$$, for $$k>n$$. Knowing the weight coefficients $$a_{ji}$$ in (5.1) and $$\lambda_{ji}$$, $$\gamma_j$$ in (5.5), and using (5.4), (5.7), (5.8), we can compute the weight coefficients $$\widehat a_{ji}$$ in (5.2) and $$\widehat \lambda_{ji}$$, $$\widehat \gamma_j$$ in (5.6). For $$n=12$$ and $$s=2$$, the weight coefficients $$\widehat a_{ji}$$ (Tables 1 and 2) and $$\widehat \lambda_{ji}$$ (Tables 3 and 4) are displayed (recall the well-known phenomenon of the nonpositivity of the weights).
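The three-term differential recurrence for $$T_n^{(k)}(x_j)$$ is easy to code and to cross-check against a general-purpose Chebyshev differentiation routine; a sketch (Python/NumPy, not from the paper):

```python
import numpy as np

def chebyshev_derivs_at_zero(n, j):
    """T_n^{(k)}(x_j), k = 0..n-1, at the zero x_j = -cos((2j-1)pi/(2n)) of T_n."""
    theta = (2 * j - 1) * np.pi / (2 * n)
    xj = -np.cos(theta)
    d = np.zeros(n)                                     # d[0] = T_n(x_j) = 0
    d[1] = n * (-1) ** (n - j) / np.sin(theta)          # T_n'(x_j) = n U_{n-1}(x_j)
    if n >= 3:
        d[2] = xj / (1 - xj ** 2) * d[1]                # from the differential equation
    for k in range(1, n - 2):                           # the recurrence, 1 <= k <= n-3
        d[k + 2] = ((2 * k + 1) * xj * d[k + 1]
                    - (n ** 2 - k ** 2) * d[k]) / (1 - xj ** 2)
    return d

n, j = 12, 3
d = chebyshev_derivs_at_zero(n, j)

# cross-check against numpy's Chebyshev differentiation
Tn = np.polynomial.Chebyshev.basis(n)
xj = -np.cos((2 * j - 1) * np.pi / (2 * n))
ref = np.array([Tn(xj)] + [Tn.deriv(k)(xj) for k in range(1, n)])
max_rel_err = np.max(np.abs(d - ref)) / np.max(np.abs(ref))
```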
Table 1 The weight coefficients $$\widehat a_{ji}$$ in (5.2) for $$n=12$$, $$s=2$$

 $$j$$   $$\widehat a_{j0}$$   $$\widehat a_{j1}$$
  1     0.0    -7.815241282671094e-07
  2     0.0     2.291312806989612e-06
  3     0.0    -3.644952304489222e-06
  4     0.0     4.750194326006378e-06
  5     0.0    -5.531718454273488e-06
  6     0.0     5.936265111478834e-06
  7     0.0    -5.936265111478834e-06
  8     0.0     5.531718454273488e-06
  9     0.0    -4.750194326006378e-06
 10     0.0     3.644952304489222e-06
 11     0.0    -2.291312806989612e-06
 12     0.0     7.815241282671094e-07

Table 2 The weight coefficients $$\widehat a_{ji}$$ in (5.2) for $$n=12$$, $$s=2$$

 $$j$$   $$\widehat a_{j2}$$   $$\widehat a_{j3}$$
  1    -1.794991693459627e-09   -1.028177177832354e-11
  2     4.904008505695800e-09    2.591158236863608e-10
  3    -6.699000199155427e-09   -1.043076922624359e-09
  4     6.699000199155427e-09    2.308739415256665e-09
  5    -4.904008505695800e-09   -3.646036326218161e-09
  6     1.794991693459627e-09    4.505890692801156e-09
  7     1.794991693459627e-09   -4.505890692801156e-09
  8    -4.904008505695800e-09    3.646036326218161e-09
  9     6.699000199155427e-09   -2.308739415256665e-09
 10    -6.699000199155427e-09    1.043076922624359e-09
 11     4.904008505695800e-09   -2.591158236863608e-10
 12    -1.794991693459627e-09    1.028177177832354e-11

Table 3 The weight coefficients $$\widehat \lambda_{ji}$$ in (5.6) for $$n=12$$, $$s=2$$

 $$j$$   $$\widehat \lambda_{j0}$$   $$\widehat \lambda_{j1}$$
  1     0.0    -2.750924698597869e-07
  2     0.0     8.065303123702174e-07
  3     0.0    -1.283004446946978e-06
  4     0.0     1.672043948729401e-06
  5     0.0    -1.947136418589188e-06
  6     0.0     2.089534759317196e-06
  7     0.0    -2.089534759317196e-06
  8     0.0     1.947136418589188e-06
  9     0.0    -1.672043948729401e-06
 10     0.0     1.283004446946978e-06
 11     0.0    -8.065303123702174e-07
 12     0.0     2.750924698597869e-07

Table 4 The weight coefficients $$\widehat \lambda_{ji}$$ in (5.6) for $$n=12$$, $$s=2$$

 $$j$$   $$\widehat \lambda_{j2}$$   $$\widehat \lambda_{j3}$$
  1    -2.991652822432712e-10   -1.713628629720590e-12
  2     8.173347509493001e-10    4.318597061439347e-11
  3    -1.116500033192571e-09   -1.738461537707264e-10
  4     1.116500033192571e-09    3.847899025427774e-10
  5    -8.173347509493001e-10   -6.076727210363602e-10
  6     2.991652822432712e-10    7.509817821335261e-10
  7     2.991652822432712e-10   -7.509817821335261e-10
  8    -8.173347509493001e-10    6.076727210363602e-10
  9     1.116500033192571e-09   -3.847899025427774e-10
 10    -1.116500033192571e-09    1.738461537707264e-10
 11     8.173347509493001e-10   -4.318597061439347e-11
 12    -2.991652822432712e-10    1.713628629720590e-12

The weight coefficients $$\widehat \gamma_{j}$$ are   \begin{eqnarray*} \widehat\gamma_1&=&\widehat\gamma_{13}=1.997370817559429e{-}05,\\ \widehat\gamma_{2}&=&\widehat\gamma_{4}=\widehat\gamma_{6}=\widehat\gamma_{8}=\widehat\gamma_{10}=\widehat\gamma_{12}=-3.994741635118857e{-}05,\\ \widehat\gamma_{3}&=&\widehat\gamma_{5}=\widehat\gamma_{7}=\widehat\gamma_{9}=\widehat\gamma_{11}=3.994741635118857e{-}05. \end{eqnarray*} Now, we present the results obtained by computing the Fourier coefficients of the function $$f(t)=e^{10 t}$$ by means of the quadrature sums $$\mathcal{T}_{n,s}(f)$$ and $$\mathcal{TK}_{n,s}(f)$$ of the formulas (5.2) and (5.6), respectively, in this case with the Chebyshev weight function of the first kind. With this aim, Table 5 displays the corresponding relative error estimates $${\rm{Err}}_{\mathcal{T}_{n,s}}(f)$$, for $$n=6(2)20$$ and $$s=1,2$$, as well as the actual values of $$a_n(f)$$.
Table 5 The error estimates $${\rm{Err}}_{\mathcal{T}_{n,s}}(f)$$, where $$f(t)=e^{10 t}$$, for some values of $$n$$ and for $$s=1,2$$; the actual values of $$a_n(f)$$

 $$n$$   $${\rm{Err}}_{\mathcal{T}_{n,1}}(f)$$   $${\rm{Err}}_{\mathcal{T}_{n,2}}(f)$$   $$a_n(f)$$
  6    1.4248e-05    1.7333e-13    4.41...e+01
  8    6.6270e-09    1.7594e-21    2.84...e+00
 10    1.0672e-12    2.1729e-30    1.34...e-01
 12    7.3663e-17    5.0383e-40    4.77...e-03
 14    2.5208e-21    2.9278e-50    1.31...e-04
 16    4.7467e-26    5.2338e-61    2.88...e-06
 18    2.7192e-31    3.3522e-72    5.11...e-08
 20    3.7615e-36    8.6532e-84    7.49...e-10

We have also carried out similar computations of the relative errors $${\rm{Err}}_{\mathcal{T}_{n,s}}(f)$$ for the function $$f(t)=e^{\cos(\alpha t)}\ (\alpha>0)$$, which is highly oscillatory for large values of $$\alpha$$. They are displayed in Table 6 for the case $$\alpha = 10$$, for $$n=10(10)60$$ and $$s=1,2,3$$, together with the actual values of $$a_n(f)$$.
Table 6. The error estimates $${\rm{Err}}_{\mathcal{T}_{n,s}}(f)$$, where $$f(t)=e^{\cos(10 t)}$$, for some values of $$n$$ and for $$s=1,2,3$$; the actual values of $$a_n(f)$$.

| $$n$$ | $${\rm{Err}}_{\mathcal{T}_{n,1}}(f)$$ | $${\rm{Err}}_{\mathcal{T}_{n,2}}(f)$$ | $${\rm{Err}}_{\mathcal{T}_{n,3}}(f)$$ | $$a_n(f)$$ |
| --- | --- | --- | --- | --- |
| 10 | 6.2249e-02 | 2.2112e-03 | 4.7866e-05 | -1.71…e-03 |
| 20 | 3.4826e-04 | 1.7860e-08 | 2.3481e-13 | 2.73…e-07 |
| 30 | 7.7364e-07 | 1.9331e-14 | 5.7548e-23 | -3.43…e-11 |
| 40 | 8.3891e-10 | 5.4078e-21 | 1.9948e-33 | 3.73…e-15 |
| 50 | 5.4015e-13 | 5.7080e-28 | 1.6800e-44 | -3.51…e-19 |
| 60 | 2.3444e-16 | 2.8500e-35 | 4.7292e-56 | 2.88…e-23 |

The proposed numerical construction of the quadratures introduced in this article is based on the general method for constructing standard quadrature formulas with multiple nodes (see Milovanović et al., 2004) and, subsequently, on the construction of the weight coefficients in the quadratures for computing Fourier coefficients by formulas of the form (2.6), where the required weight coefficients of the standard quadratures with multiple nodes are calculated by the method given in Milovanović et al. (2004). Since all the numerical methods we use are numerically stable, the numerical construction we propose for the new quadrature formulas is also stable.

Remark 5.2 As suggested by one of the referees, for smaller values of $$n$$, and when $$f$$ is not a highly oscillating function, for the computation of the Fourier coefficients   $$a_n(f)=\int_{-1}^1\frac{K_n(t)\,{\rm d}t}{\sqrt{1-t^2}},\quad K_n(t)=f(t)T_n(t),$$ (5.14) we can use the quadrature formula of Monegato (1982, Equation (43), p. 152), which has the form   $$\int_{-1}^1\frac{K_n(t)\,{\rm d}t}{\sqrt{1-t^2}}\approx\frac{\pi}{2m}\left[\frac 12 K_n(-1)+\sum_{i=1}^{2m-1}K_n\left(\cos \frac{i\pi}{2m}\right)+\frac 12 K_n(1)\right]:=\mathcal{GL}_{2m+1}(K_n),\quad m\ge 2,$$ (5.15) and the basic Gaussian rule,   $$\int_{-1}^1\frac{K_n(t)\,{\rm d}t}{\sqrt{1-t^2}}\approx\frac{\pi}{m}\sum_{i=1}^{m}K_n\left(\cos \frac{(2i-1)\pi}{2m}\right):=\mathcal{G}_{m}(K_n),\quad m\ge 1,$$ (5.16) which is optimally extended by (5.15). Formulas (5.16) and (5.15), and (5.15) with $$m$$ replaced by $$2m$$, as well as their relative errors   \begin{eqnarray*} {\rm{Err}}_{\mathcal{G}_m}(K_n)&=&\left|\mathcal{GL}_{2m+1}(K_n)-\mathcal{G}_m(K_n)\right|/\left|\mathcal{GL}_{2m+1}(K_n)\right|,\\ {\rm{Err}}_{\mathcal{GL}_{2m+1}}(K_n)&=&\left|\mathcal{GL}_{4m+1}(K_n)-\mathcal{GL}_{2m+1}(K_n)\right|/\left|\mathcal{GL}_{4m+1}(K_n)\right|, \end{eqnarray*} can be calculated at a smaller computational cost and are effective as $$m$$ increases. The same holds for the other weight functions dealt with in this article, using the quadrature formulas of Monegato (1982, Equations (44)–(46)). However, even for functions $$f$$ that do not oscillate too much, the integrand $$K_n$$ becomes highly oscillating for larger values of $$n$$ and, thus, to achieve an acceptable accuracy with the standard Gauss-type formulas, the number of nodes needed becomes ‘astronomically’ large (see Iserles, 2006; Iserles et al., 2006). For instance, for the function $$f(t)=e^{10t}$$, which is not a highly oscillating function (although $$K_n(t)$$ becomes highly oscillating as $$n$$ increases), we performed the calculations using higher arithmetic precision.
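The rules (5.16) and (5.15) are straightforward to implement. The following Python sketch is our illustration, not the paper's software: it applies both rules to $$K_6(t)=e^{10t}T_6(t)$$ and forms the relative error estimate $${\rm{Err}}_{\mathcal{G}_m}$$; with the monic normalization $$\widehat T_n=2^{1-n}T_n$$ from the Introduction, dividing by $$2^{n-1}$$ recovers $$a_n(f)$$, which for $$n=6$$ is $$4.41\ldots e{+}01$$ (cf. Table 5).

```python
import math

def K(f, n, t):
    # integrand K_n(t) = f(t) T_n(t), using T_n(cos θ) = cos(nθ)
    return f(t) * math.cos(n * math.acos(t))

def gauss_chebyshev(f, n, m):
    # basic m-point Gauss rule (5.16): nodes cos((2i-1)π/(2m))
    return (math.pi / m) * sum(
        K(f, n, math.cos((2 * i - 1) * math.pi / (2 * m)))
        for i in range(1, m + 1))

def gauss_lobatto_chebyshev(f, n, m):
    # (2m+1)-point extension (5.15) on the nodes cos(iπ/(2m)), i = 0, ..., 2m
    s = 0.5 * (K(f, n, -1.0) + K(f, n, 1.0))
    s += sum(K(f, n, math.cos(i * math.pi / (2 * m))) for i in range(1, 2 * m))
    return (math.pi / (2 * m)) * s

f = lambda t: math.exp(10.0 * t)
G = gauss_chebyshev(f, 6, 20)            # ≈ ∫ e^{10t} T_6(t) (1-t²)^{-1/2} dt
GL = gauss_lobatto_chebyshev(f, 6, 20)
err = abs(GL - G) / abs(GL)              # relative error estimate Err_{G_m}
a6 = GL / 2**5                           # monic normalization: a_6(f) ≈ 4.41e+01
```

For $$m=9$$ the quantity `err` reproduces the order of magnitude $$6.9\cdot 10^{-3}$$ of the entry $$(n,m)=(6,9)$$ in Table 7.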
The results show that for small values of $$n$$ we can use $$\mathcal{G}_{m}$$ and $$\mathcal{GL}_{2m+1}$$ for the calculation of the Fourier–Chebyshev coefficients, in place of our quadratures, since they are simpler to implement and have a smaller computational cost. Increasing $$m$$ increases the precision of $$\mathcal{G}_{m}$$ and $$\mathcal{GL}_{2m+1}$$; see the case $$n=6$$ in Table 7. (Of course, our quadratures can also be used in these cases, by means of the given software; increasing $$s$$ increases their precision for a given coefficient $$a_n(f)$$, with $$n$$ fixed.) The problem with the formulas $$\mathcal{G}_{m}$$ and $$\mathcal{GL}_{2m+1}$$ arises when $$n$$ increases. Indeed, on the basis of the performed calculations, we observed that $${\rm{Err}}_{\mathcal{G}_m}$$ (and likewise $${\rm Err}_{\mathcal{GL}_{2m+1}}$$), for $$m$$ fixed, increases as $$n$$ increases. The situation is considerably worse for larger values of $$n$$. In such cases, say $$n=50$$ (see Table 7), $$m$$ would have to be taken much larger than $$20$$ to obtain satisfactory precision.
Table 7. The error estimates $${\rm{Err}}_{\mathcal{G}_m}(K_n)$$, $${\rm{Err}}_{\mathcal{GL}_{2m+1}}(K_n)$$, where $$f(t)=e^{10 t}$$, for $$n=6$$, $$n=50$$, and some $$m$$.

| $$n$$ | $$m$$ | $${\rm{Err}}_{\mathcal{G}_m}$$ | $${\rm{Err}}_{\mathcal{GL}_{2m+1}}$$ |
| --- | --- | --- | --- |
| 6 | 6 | 1.0e+0 | 4.7494e-06 |
| 6 | 7 | 2.5833e-01 | 1.3677e-08 |
| 6 | 8 | 4.8724e-02 | 2.0455e-11 |
| 6 | 9 | 6.9280e-03 | 1.7333e-14 |
| 6 | 10 | 7.6377e-04 | 8.9020e-18 |
| 6 | 12 | 4.7494e-06 | 6.4177e-25 |
| 6 | 14 | 1.3677e-08 | 1.0587e-32 |
| 6 | 16 | 2.0455e-11 | 5.0111e-41 |
| 6 | 18 | 1.7333e-14 | 8.0393e-50 |
| 6 | 20 | 8.9020e-18 | 4.9675e-59 |
| 50 | 6 | 9.7456e-03 | 2.6974e-09 |
| 50 | 7 | 2.5833e-01 | 1.3677e-08 |
| 50 | 8 | 6.6074e+03 | 6.2183e-03 |
| 50 | 9 | 3.5740e+03 | 5.5845e+04 |
| 50 | 10 | 1.0e+0 | 2.8111e+12 |
| 50 | 12 | 2.6974e-09 | 5.2045e+28 |
| 50 | 14 | 1.3677e-08 | 9.4453e+31 |
| 50 | 16 | 6.2183e-03 | 7.2140e+28 |
| 50 | 18 | 5.5845e+04 | 1.2918e+24 |
| 50 | 20 | 2.8111e+12 | 1.6371e+12 |

6. Conclusion

We have introduced some Gaussian-type quadratures with multiple nodes, together with their Kronrod or modified Kronrod extensions, for computing Fourier–Chebyshev coefficients relative to the four classical Chebyshev weight functions. Proofs of the existence and uniqueness of these quadratures are given; one of them is a generalization of the well-known Micchelli–Rivlin quadrature formula, whereas the others are new. The reason for considering these quadrature formulas with multiple nodes, in place of the usual Gauss-type quadratures, is that the latter often do not work well because of the highly oscillating character of the integrand $$K_n$$, especially for large values of $$n$$. Furthermore, a numerically stable construction of these quadratures is proposed. By taking the absolute value of the difference between these Gaussian quadratures with multiple nodes for the Fourier–Chebyshev coefficients and their corresponding optimal extensions, we obtain the well-known methods for estimating their errors. These results are illustrated by means of some numerical examples.

Acknowledgements

We are grateful to the anonymous referees for useful comments that allowed us to improve the first version of the article.
Funding

Serbian Academy of Sciences and Arts (No. $${\it{\Phi}}$$-96 to G.V.M., in part); Serbian Ministry of Education, Science and Technological Development (174015 and 174002 to G.V.M. and M.M.S., in part); Spanish Ministerio de Ciencia e Innovación (MTM2015-71352-P to R.O., in part).

References

- Bernstein, S. (1930) Sur les polynomes orthogonaux relatifs à un segment fini. J. Math. Pures Appl., 9, 127–177.
- Bojanov, B. & Petrova, G. (2009) Quadrature formulae for Fourier coefficients. J. Comput. Appl. Math., 231, 378–391.
- Calvetti, D., Golub, G. H., Gragg, W. B. & Reichel, L. (2000) Computation of Gauss–Kronrod rules. Math. Comp., 69, 1035–1052.
- Chakalov, L. (1954) General quadrature formulae of Gaussian type. Bulg. Akad. Nauk. Izv. Mat. Inst., 2, 67–84.
- Cvetković, A. S., Matejić, M. M. & Milovanović, G. V. (2016) Orthogonal polynomials for modified Chebyshev measure of the first kind. Results Math., 69, 443–455.
- Cvetković, A. S. & Milovanović, G. V. (2004) The Mathematica package ‘Orthogonal Polynomials’. Facta Univ. Ser. Math. Inform., 19, 17–36.
- Cvetković, A. S. & Spalević, M. M. (2014) Estimating the error of Gauss–Turán quadrature formulas using their extensions. Electron. Trans. Numer. Anal., 41, 1–12.
- DeVore, R. (1974) A property of Chebyshev polynomials. J. Approx. Theory, 12, 418–419.
- Engels, H. (1980) Numerical Quadrature and Cubature. London: Academic Press.
- Gautschi, W. (1987) Gauss–Kronrod quadrature – a survey. Numerical Methods and Approximation Theory III, Niš (Milovanović, G. V., ed.). Niš: Faculty of Electronic Engineering, Univ. Niš, pp. 39–66.
- Gautschi, W. (2001) OPQ suite. Available at http://www.cs.purdue.edu/archives/2001/wxg/codes (last accessed 30 March 2009).
- Gautschi, W. (2004) Orthogonal Polynomials: Computation and Approximation. Oxford: Oxford University Press.
- Gautschi, W. (2014) High-precision Gauss–Turán quadrature rules for Laguerre and Hermite weight functions. Numer. Algor., 67, 59–72.
- Gautschi, W. & Milovanović, G. V. (1997) $$S$$-orthogonality and construction of Gauss–Turán-type quadrature formulae. J. Comput. Appl. Math., 86, 205–218.
- Ghizzetti, A. & Ossicini, A. (1970) Quadrature Formulae. Berlin: Akademie.
- Ghizzetti, A. & Ossicini, A. (1975) Sull’esistenza e unicità delle formule di quadratura gaussiane. Rend. Mat., 8, 1–15.
- Golub, G. H. & Welsch, J. H. (1969) Calculation of Gauss quadrature rules. Math. Comp., 23, 221–230.
- Gori, L. & Micchelli, C. A. (1996) On weight functions which admit explicit Gauss–Turán quadrature formulas. Math. Comp., 65, 1567–1581.
- Iserles, A. (2013) Three stories of high oscillation. Bull. EMS, 87, 18–23. Available at http://www.damtp.cam.ac.uk/user/na/NA_papers/NA2013_02.pdf.
- Iserles, A., Norsett, S. P. & Olver, S. (2006) Highly oscillatory quadrature: the story so far. Proceedings of ENuMath, Santiago de Compostela (Bermudez de Castro et al., eds). Berlin: Springer, pp. 97–118. Available at http://www.damtp.cam.ac.uk/user/na/NA_papers/NA2005_06.pdf.
- Kahaner, D. K. & Monegato, G. (1978) Nonexistence of extended Gauss–Laguerre and Gauss–Hermite quadrature rules with positive weights. Z. Angew. Math. Phys., 29, 983–986.
- Laurie, D. P. (1997) Calculation of Gauss–Kronrod quadrature rules. Math. Comp., 66, 1133–1145.
- Li, S. (1994) Kronrod extension of Turán formula. Studia Sci. Math. Hungar., 29, 71–83.
- Micchelli, C. A. & Rivlin, T. J. (1972) Turán formulae and highest precision quadrature rules for Chebyshev coefficients. IBM J. Res. Dev., 16, 372–379.
- Micchelli, C. A. & Rivlin, T. J. (1974) Some new characterizations of the Chebyshev polynomials. J. Approx. Theory, 12, 420–424.
- Micchelli, C. A. & Sharma, A. (1983) On a problem of Turán: multiple node Gaussian quadrature. Rend. Mat., 3, 529–552.
- Milovanović, G. V. (2001) Quadratures with multiple nodes, power orthogonality, and moment-preserving spline approximation. J. Comput. Appl. Math., 127, 267–286.
- Milovanović, G. V. & Cvetković, A. S. (2012) Special classes of orthogonal polynomials and corresponding quadratures of Gaussian type. Math. Balkanica, 26, 169–184.
- Milovanović, G. V. & Spalević, M. M. (1998) Construction of Chakalov–Popoviciu’s type quadrature formulae. Rend. Circ. Mat. Palermo, 52, 625–636.
- Milovanović, G. V. & Spalević, M. M. (2014) Kronrod extensions with multiple nodes of quadrature formulas for Fourier coefficients. Math. Comput., 83, 1207–1231.
- Milovanović, G. V., Spalević, M. M. & Cvetković, A. S. (2004) Calculation of Gaussian type quadratures with multiple nodes. Math. Comput. Model., 39, 325–347.
- Monegato, G. (1982) Stieltjes polynomials and related quadrature rules. SIAM Rev., 24, 137–158.
- Monegato, G. (2001) An overview of the computational aspects of Kronrod quadrature rules. Numer. Algor., 26, 173–196.
- Ossicini, A. & Rosati, F. (1975) Funzioni caratteristiche nelle formule di quadratura gaussiane con nodi multipli. Boll. Un. Mat. Ital., 11, 224–237.
- Peherstorfer, F. (2009) Gauss–Turán quadrature formulas: asymptotics of weights. SIAM J. Numer. Anal., 47, 2638–2659.
- Pejčev, A. V. & Spalević, M. M. (2013) Error bounds of Micchelli–Rivlin quadrature formula for analytic functions. J. Approx. Theory, 169, 23–34.
- Pejčev, A. V. & Spalević, M. M. (2014) Error bounds of Micchelli–Sharma quadrature formula for analytic functions. J. Comput. Appl. Math., 259, 48–56.
- Pop, O. T. & Bărbosu, D. (2009) Two dimensional divided differences with multiple knots. An. Şt. Univ. Ovidius Constanţa, 17, 181–190.
- Shi, Y. G. (1996) Generalized Gaussian Kronrod–Turán quadrature formulas. Acta Sci. Math. (Szeged), 62, 175–185.
- Shi, Y. G. (1998) General Gaussian quadrature formulas on Chebyshev nodes. Adv. Math. (China), 27, 227–239.
- Shi, Y. G. (2000) Convergence of Gaussian quadrature formulas. J. Approx. Theory, 105, 279–291.
- Shi, Y. G. & Xu, G. (2007) Construction of $$\sigma$$-orthogonal polynomials and Gaussian quadrature formulas. Adv. Comput. Math., 27, 79–94.
- Spalević, M. M. (2007) On generalized averaged Gaussian formulas. Math. Comp., 76, 1483–1492.
- Spalević, M. M. (2014) Error bounds and estimates for Gauss–Turán quadrature formulae of analytic functions. SIAM J. Numer. Anal., 52, 443–467.
- Spalević, M. M. & Cvetković, A. S. (2016) Estimating the error of Gaussian quadratures with simple and multiple nodes by using their extensions with multiple nodes. BIT, 56, 357–374.
- Szegő, G. (1975) Orthogonal Polynomials. Providence, RI: AMS.
- Turán, P. (1950) On the theory of the mechanical quadrature. Acta Sci. Math. (Szeged), 12, 30–37.
- Yang, S. (2005) On a quadrature formula of Gori and Micchelli. J. Comput. Appl. Math., 176, 35–43.
- Yang, S. & Wang, X. (2003) Fourier–Chebyshev coefficients and Gauss–Turán quadrature with Chebyshev weight. J. Comput. Math., 21, 189–194.

© The authors 2017.
Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
IMA Journal of Numerical Analysis, Oxford University Press


Volume: Advance Article – Nov 15, 2017
26 pages

Publisher: Oxford University Press
ISSN: 0272-4979
eISSN: 1464-3642
DOI: 10.1093/imanum/drx067

But it is well known that, especially for large values of $$k$$, the highly oscillating character of the integrand $$P_k(t)f(t)$$ involved in the computation of the Fourier coefficients (1.1) often means that the usual Gauss-type quadrature formulas (with simple nodes) do not perform well, in the sense that they are numerically unstable. This is the main reason to consider the use of quadrature formulas with multiple nodes for this purpose. In this article, following Bojanov & Petrova (2009) (see also Milovanović & Spalević, 2014) and using the same notation, we consider quadrature formulas with multiple nodes of the type   $$\int_a^b \omega(t)P_k(t)f(t)\,{\mathrm{d}}t\approx\sum_{j=1}^n\sum_{i=0}^{\nu_j-1}c_{ji}f^{(i)}(x_j),\quad a<x_1<\cdots<x_n<b,$$ (1.2) where the $$\nu_j$$ are given natural numbers (multiplicities) and $$P_k(t)$$ is a monic polynomial of degree $$k$$. The number $$\ell$$ is the algebraic degree of precision ($$\mathrm{ADP}$$) of (1.2) if (1.2) is exact for all polynomials of degree at most $$\ell$$ and there is a polynomial of degree $$\ell+1$$ for which the formula is not exact.

The outline of this article is as follows. In Section 2, a brief overview of quadrature formulas with multiple nodes is given, and their utility for approximating Fourier coefficients is reviewed. The problem of estimating the quadrature errors by means of optimal extensions of the proposed quadratures (in the sense of the well-known Kronrod approach) is analysed in Section 3. Section 4 is devoted to the study of quadrature formulas with multiple nodes for estimating Fourier coefficients for the four Chebyshev weights, some of these formulas being new; this is one of the main contributions of this article. Finally, a numerically stable construction of such formulas is presented, and some numerical experiments are displayed in Section 5.

2. Gauss-type quadrature formulas with multiple nodes for computing Fourier coefficients

Quadrature formulas with multiple nodes were introduced more than 100 years after the classical quadratures. Turán (1950) was the first to introduce quadrature formulas of Gauss type with multiple nodes; such quadrature rules are now commonly referred to as Gauss–Turán quadrature formulas. By $$\mathcal{P}_m$$ we denote the set of all algebraic polynomials of degree at most $$m$$. More generally, Chakalov (1954) proved the existence of Gauss-type quadratures with multiple nodes, and Ghizzetti & Ossicini (1975) then characterized their nodes as zeros of a polynomial determined by certain orthogonality relations, as stated in the following result.

Theorem 2.1 For any given set of odd multiplicities $$\nu_1,\dots,\nu_n$$$$(\nu_j=2s_j+1,\ s_j\in\mathbb{N}_0,\ j=1,\dots,n)$$, there exists a unique quadrature formula of the form   $$\int_a^b \omega(t)f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^n\sum_{i=0}^{\nu_j-1}a_{ji}f^{(i)}(x_j),\quad a\le x_1<\dots<x_n\le b,$$ (2.1) of $$\mathrm{ADP}=\nu_1+\dots+\nu_n+n-1$$, which is well known as the Chakalov–Popoviciu quadrature formula. The nodes $$x_1,\dots,x_n$$ of this quadrature are determined uniquely by the orthogonality conditions   $(\forall\, Q\in\mathcal{P}_{n-1})\qquad \int_a^b \omega(t)\prod_{k=1}^n(t-x_k)^{\nu_k}Q(t)\,{\mathrm{d}}t=0.$ The corresponding (monic) orthogonal polynomial $$\prod_{k=1}^n(t-x_k)$$ is known as a $$\sigma$$-orthogonal polynomial, with $$\sigma=\sigma_n=(s_1,\dots,s_n)$$.

In the first examples considered by Turán (1950), quadrature formulas of type (2.1) with equal multiplicities $$\nu_1=\dots=\nu_n=\nu$$, $$\nu$$ being an odd number ($$\nu=2s+1,\ s\in\mathbb{N}$$), were studied; the corresponding (monic) orthogonal polynomial $$\prod_{k=1}^n(t-x_k)$$ is called an $$s$$-orthogonal polynomial.
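Theorem 2.1 can be made concrete with a small computation. The Python sketch below is our illustration, not from the paper; the parameters $$n=2$$, $$s=1$$ are an arbitrary choice. For the Chebyshev weight $$\omega(t)=(1-t^2)^{-1/2}$$, the two triple nodes may be placed at the zeros $$\pm 1/\sqrt 2$$ of $$T_2$$ (the $$s$$-orthogonal choice for this weight, discussed below); the six coefficients $$a_{ji}$$ are obtained by imposing exactness on $$1,t,\dots,t^5$$, and the theorem then guarantees $$\mathrm{ADP}=3+3+2-1=7$$, while $$t^8$$ is not integrated exactly.

```python
import math

def moment(k):
    # μ_k = ∫_{-1}^1 t^k (1-t^2)^{-1/2} dt  (0 for odd k)
    if k % 2:
        return 0.0
    m = math.pi
    for j in range(2, k + 1, 2):        # μ_k = μ_{k-2} * (k-1)/k
        m *= (j - 1) / j
    return m

def dmono(k, i, x):
    # i-th derivative of t^k evaluated at x
    return 0.0 if i > k else math.prod(range(k - i + 1, k + 1)) * x ** (k - i)

nodes = [-1 / math.sqrt(2), 1 / math.sqrt(2)]     # zeros of T_2, multiplicity 3
cols = [(j, i) for j in range(2) for i in range(3)]   # unknowns a_{ji}
A = [[dmono(k, i, nodes[j]) for (j, i) in cols] for k in range(6)]
b = [moment(k) for k in range(6)]

def solve(A, b):
    # tiny Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [b[r]] for r, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= fac * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

a = solve(A, b)

def quad(k):
    # apply the Gauss–Turán rule to f(t) = t^k
    return sum(a[c] * dmono(k, i, nodes[j]) for c, (j, i) in enumerate(cols))

residuals = [moment(k) - quad(k) for k in range(9)]
# residuals[k] ≈ 0 for k ≤ 7 (ADP = 7); residuals[8] = ∫ ω (t²-1/2)⁴ dt = 3π/128
```

The nonzero residual for $$t^8$$ equals $$\int_{-1}^1\omega(t)(t^2-\tfrac12)^4\,{\rm d}t=3\pi/128$$, since the rule annihilates $$(t^2-\tfrac12)^4$$ together with its first two derivatives at the nodes.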
On the other hand, in this article we are mainly concerned (Section 4 below) with the Chebyshev weight functions and their corresponding $$s$$-orthogonal polynomials. In this sense, along with the classical Chebyshev weight function   $$\omega_1(t)=(1-t^2)^{-1/2},\quad t\in[-1,1],$$ (2.2) we consider the following generalized Chebyshev weight functions:   $$\omega_2(t)=(1-t^2)^{1/2+s},\;\omega_3(t)=(1-t)^{-1/2}(1+t)^{1/2+s},\;\omega_4(t)=(1-t)^{1/2+s}(1+t)^{-1/2},$$ (2.3) for $$s\ge 0$$. It is well known that the Chebyshev polynomials $$T_n$$ are $$s$$-orthogonal on $$(-1,1)$$ with respect to $$\omega_1$$ for each $$s\ge 0$$ (see Bernstein, 1930). Ossicini & Rosati (1975) found three other weight functions $$\omega_i(t)$$$$(i=2,3,4)$$, for which the $$s$$-orthogonal polynomials can be identified as the Chebyshev polynomials of the second, third and fourth kinds, $$U_n$$, $$V_n$$ and $$W_n$$, which are defined by   $U_n(t)=\frac{\sin(n+1)\theta}{\sin\theta},\ V_n(t)=\frac{\cos(n+\frac 12)\theta}{\cos(\theta/2)},\ W_n(t)=\frac{\sin(n+\frac 12)\theta }{\sin(\theta/2)},$ where $$t=\cos\theta$$. Note, however, that these weight functions depend on $$s$$ (see (2.3)). Since $$W_n(-t)=(-1)^nV_n(t)$$, it is sufficient to study $$\omega_1(t)$$, $$\omega_2(t)$$ and one of $$\omega_3(t)$$ and $$\omega_4(t)$$. It is also noteworthy that, for each $$n\in\mathbb{N}$$, Gori & Micchelli (1996) introduced an interesting class of weight functions defined on $$[-1,1]$$ for which explicit Gauss–Turán quadrature formulas of all orders can be found. In other words, these weight functions have the peculiarity that the corresponding $$s$$-orthogonal polynomials, $$s\in\mathbb{N}$$, of the same degree are independent of $$s$$.
This class includes certain generalized Jacobi weight functions $$\omega_{n,\mu}(t)=\vert U_{n-1}(t)/n\vert^{2\mu+1}(1-t^2)^\mu$$, where $$U_{n-1}(\cos\theta)=\sin {n\theta}/\sin \theta$$ (the Chebyshev polynomial of the second kind) and $$\mu>-1$$. In this case, the Chebyshev polynomials $$T_n$$ appear as $$s$$-orthogonal polynomials, $$s\in\mathbb{N}$$.

As for the main purpose of this article, i.e., the application of quadrature formulas with multiple nodes to the computation of the Fourier coefficients (1.1), and following Bojanov & Petrova (2009), the connection between quadratures with multiple nodes and formulas of type (1.2) may be described as follows. For the system of nodes $${\bf x}:=(x_1,\dots,x_n)$$ with corresponding multiplicities $$\bar\nu:=(\nu_1,\dots,\nu_n)$$, they define the polynomials   ${\it{\Lambda}}^{\bar\nu}(t;{\bf x}):=\prod_{m=1}^n(t-x_m)^{\nu_m}.$ Setting $$x_j^{\nu_j}:= {(x_j,\dots,x_j)}$$$$[x_j \ {\rm repeated}\ {\nu_j\ {\rm times}}]$$, $$j=1,\dots,n$$, they state and prove the following important theorem, which reveals the relation between the standard quadratures and the quadratures for Fourier coefficients.

Theorem 2.2 For any given sets of multiplicities $$\bar\mu:=(\mu_1,\dots,\mu_k)$$ and $$\bar\nu:=(\nu_1,\dots,\nu_n)$$, and nodes $$y_1<\cdots<y_k,\;x_1<\cdots<x_n$$, there exists a quadrature formula of the form   $$\int_a^b \omega(t){\it{\Lambda}}^{\bar\mu}(t;{\bf y})f(t)\,{\mathrm{d}}t\approx\sum_{j=1}^n\sum_{i=0}^{\nu_j-1}c_{ji}f^{(i)}(x_j),$$ (2.4) with $$\mathrm{ADP}=N$$ if and only if there exists a quadrature formula of the form   $$\int_a^b \omega(t)f(t)\,{\mathrm{d}}t\approx\sum_{m=1}^k\sum_{\lambda=0}^{\mu_m-1}b_{m\lambda}f^{(\lambda)}(y_m)+ \sum_{j=1}^n\sum_{i=0}^{\nu_j-1}a_{ji}f^{(i)}(x_j),$$ (2.5) which has degree of precision $$N+\mu_1+\cdots+\mu_k$$.
In the case $$y_m=x_j$$ for some $$m$$ and $$j$$, the corresponding terms in both sums combine into one term of the form   $\sum_{\lambda=0}^{\mu_m+\nu_j-1}d_{m\lambda}f^{(\lambda)}(y_m).$ Observe that the actual strength of this result lies in the freedom of choosing the nodes and multiplicities in the polynomial $${\it{\Lambda}}^{\overline\mu}$$. This utility will be exploited repeatedly in Section 4 below.

Regarding the computation of the weight coefficients, suppose that the coefficients $$a_{ji}\ (j=1,\dots,n;\ i=0,1,\dots,\nu_j-1)$$ in (2.5) are known. Proceeding as in the first part of the proof of Bojanov & Petrova (2009, Theorem 2.1), we can determine the coefficients $$c_{ji}\ (j=1,\dots,n;\ i=0,1,\dots,\nu_j-1)$$ in (2.4). Namely, applying (2.5) to the polynomial $${\it{\Lambda}}^{\bar\mu}(\cdot;{\bf y})f$$, where $$f\in\mathcal{P}_N$$, the first sum in (2.5) vanishes and we obtain (see Bojanov & Petrova, 2009, Equation (2.4))   $\int_a^b \omega(t){\it{\Lambda}}^{\bar\mu}(t;{\bf y})f(t)\,{\mathrm{d}}t=\sum_{j=1}^n\left(\sum_{i=0}^{\nu_j-1}a_{ji}\left.\left[{\it{\Lambda}}^{\bar\mu}(t;{\bf y})f(t)\right]^{(i)}\right\vert_{t=x_j}\right) =\sum_{j=1}^n\sum_{i=0}^{\nu_j-1}c_{ji}f^{(i)}(x_j),$ where   $$\quad c_{ji}=\sum_{s=i}^{\nu_j-1}a_{js}{s\choose i}\left.\left[{\it{\Lambda}}^{\bar\mu}(t;{\bf y})\right]^{(s-i)}\right\vert_{t=x_j}\ \ (j=1,2,\dots,n;\ i=0,1,\dots,\nu_j-1).$$ (2.6)

On the other hand, the following questions arise naturally: Is it possible to construct a formula based on $$n$$ evaluations of $$f$$ or its derivatives that gives the exact value of the coefficients $$a_k(f)$$ in (1.1) for polynomials $$f$$ of higher degree? What is the highest degree of precision that can be attained by a formula based on $$n$$ evaluations?
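The transfer formula (2.6) is a Leibniz-rule identity and can be verified mechanically. In the Python sketch below (ours; the nodes, multiplicities and coefficients $$a_{ji}$$ are hypothetical values chosen only for illustration), the coefficients $$c_{ji}$$ computed by (2.6) reproduce, for a test polynomial $$f$$, the value obtained by applying the original rule to $${\it{\Lambda}}^{\bar\mu}(\cdot;{\bf y})f$$.

```python
import math

def pderiv(p):
    # derivative of a polynomial given by ascending coefficients
    return [k * p[k] for k in range(1, len(p))]

def peval(p, x):
    return sum(c * x**k for k, c in enumerate(p))

def pmul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            r[i + j] += ci * cj
    return r

def nth_deriv_at(p, m, x):
    for _ in range(m):
        p = pderiv(p)
    return peval(p, x)

# Λ^{μ̄}(t; y) for y = (0, 1/2) with multiplicities μ̄ = (2, 1): Λ = t²(t - 1/2)
Lam = pmul(pmul([0.0, 1.0], [0.0, 1.0]), [-0.5, 1.0])

x = [-0.7, 0.3]                        # nodes x_j, multiplicities ν = (3, 2)
nu = [3, 2]
a = [[1.1, -0.4, 0.25], [0.8, 0.6]]    # hypothetical coefficients a_{ji}

# coefficient transfer (2.6): c_{ji} = Σ_{s≥i} a_{js} C(s,i) Λ^{(s-i)}(x_j)
c = [[sum(a[j][s] * math.comb(s, i) * nth_deriv_at(Lam, s - i, x[j])
          for s in range(i, nu[j]))
      for i in range(nu[j])] for j in range(2)]

# check: Σ a_{js}(Λf)^{(s)}(x_j) equals Σ c_{ji} f^{(i)}(x_j) for a test polynomial
f = [2.0, -1.0, 0.5, 3.0]              # f(t) = 2 - t + t²/2 + 3t³
lhs = sum(a[j][s] * nth_deriv_at(pmul(Lam, f), s, x[j])
          for j in range(2) for s in range(nu[j]))
rhs = sum(c[j][i] * nth_deriv_at(f, i, x[j])
          for j in range(2) for i in range(nu[j]))
```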
When dealing with these questions for the coefficients $$a_k(f)$$ of $$f$$ with respect to the system of Chebyshev polynomials of the first kind $$\{T_k\}_{k=0}^\infty$$, orthogonal on $$[-1,1]$$ with weight $$\omega(t)=1/\sqrt{1-t^2}$$,   $T_k(t)=\cos(k\arccos\, t)={2^{k-1}}\,(t-\xi_1)\cdots(t-\xi_k)=2^{k-1}\widehat T_k(t),\quad t\in(-1,1),$ where $$\widehat T_k=2^{1-k}\,T_k$$ denotes the monic Chebyshev polynomial of degree $$k$$, Micchelli & Rivlin (1972) discovered the remarkable fact that the quadrature   $$\int_{-1}^1\frac{1}{\sqrt{1-t^2}}\, T_n(t)f(t)\,{\mathrm{d}}t\approx\frac{\pi}{n2^n}f'[\xi_1,\dots,\xi_n]$$ (2.7) is exact for all algebraic polynomials of degree $$\le 3n-1$$. Here, $$g[x_1,\dots,x_m]$$ denotes the divided difference of $$g$$ at the points $$x_1,\dots,x_m$$; thus, formula (2.7) uses the $$n$$ values $$f'(\xi_1),\dots,f'(\xi_n)$$ of the derivative $$f'$$. It is clear that there is no formula of the form   $$\int_{-1}^1\frac{1}{\sqrt{1-t^2}}\, T_n(t)f(t)\,{\mathrm{d}}t\approx\sum_{k=1}^na_kf(x_k)+\sum_{k=1}^n b_kf'(x_k),$$ (2.8) which is exact for all polynomials of degree $$3n$$. The polynomial $$f(t)=T_n(t)(t-x_1)^2\cdots(t-x_n)^2$$ is a standard counterexample. Thus, the aforementioned Micchelli–Rivlin formula is of the highest degree of precision among all formulas of type (2.8). The question of uniqueness of this quadrature formula is reduced to the following problem, which is also of independent interest: prove that if $$Q$$ is a polynomial of degree $$n$$ with $$n$$ zeros in $$[-1,1]$$ and such that $$\left|Q(\eta_j)\right|=1$$ at the extremal points $$\eta_j=\cos(j\pi/n),\ j=0,1,\dots,n$$, of the Chebyshev polynomial $$T_n$$, then $$Q\equiv \pm T_n$$. This property was proved in DeVore (1974) and, thus, the uniqueness of the Micchelli–Rivlin quadrature formula was settled (see Micchelli & Rivlin, 1974). For more details on this subject, see Bojanov & Petrova (2009) and Milovanović & Spalević (2014). 
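Formula (2.7) is straightforward to check numerically. The sketch below (assuming NumPy; the helper names are ours, not from the sources cited) evaluates the right-hand side of (2.7) through a standard divided-difference table and compares it with a high-order Gauss–Chebyshev approximation of the left-hand side for a polynomial of the maximal degree $$3n-1$$.

```python
import numpy as np
from math import pi

def divided_difference(xs, ys):
    """Newton divided difference g[x_0, ..., x_m] for distinct nodes."""
    d = list(ys)
    for k in range(1, len(xs)):
        for i in range(len(xs) - 1, k - 1, -1):
            d[i] = (d[i] - d[i - 1]) / (xs[i] - xs[i - k])
    return d[-1]

def micchelli_rivlin(fprime, n):
    """Right-hand side of (2.7), for a callable derivative f'."""
    xi = np.cos((2 * np.arange(1, n + 1) - 1) * pi / (2 * n))  # zeros of T_n
    return pi / (n * 2 ** n) * divided_difference(xi, fprime(xi))

n = 4
# polynomial of degree 3n - 1 = 11 (arbitrary illustrative coefficients)
f = np.polynomial.Polynomial([1, -2, 0, 3, 1, 0.5, -1, 2, 1, 1, 0.7, -0.3])
approx = micchelli_rivlin(f.deriv(), n)

# reference value via a high-order Gauss-Chebyshev rule (exact for polynomials)
M = 100
t = np.cos((2 * np.arange(1, M + 1) - 1) * pi / (2 * M))
Tn = np.polynomial.chebyshev.Chebyshev.basis(n)
exact = pi / M * np.sum(Tn(t) * f(t))
assert abs(approx - exact) < 1e-9
```

Raising the degree of $$f$$ beyond $$3n-1$$ will in general break the agreement, in line with the counterexample above.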
Finally, let us remark that numerically stable methods for constructing nodes $$x_j$$ and coefficients $$a_{ji}$$ in Gaussian quadrature formulas with simple and multiple nodes and their optimal (Kronrod) extensions with simple and multiple nodes can be found in Gautschi & Milovanović (1997), Milovanović et al. (2004), Shi & Xu (2007), Gautschi (2014), Calvetti et al. (2000), Laurie (1997), Cvetković & Spalević (2014) and Spalević & Cvetković (2016) (see also Gautschi, 2001, 2004; Cvetković & Milovanović, 2004; Milovanović & Cvetković, 2012). For the asymptotic representation of the coefficients $$a_{ji}\,$$, see Peherstorfer (2009). More concerning this theory and its applications can be found in Milovanović (2001) and references therein, and in Ghizzetti & Ossicini (1970). The error bounds for these quadratures in the case of analytic integrands have been considered in several articles (see, e.g., Spalević, 2014, and references therein). The error bounds for the Micchelli–Rivlin and Micchelli–Sharma (cf. (4.9) below) quadratures in the case of analytic integrands have been considered in Pejčev & Spalević (2013, 2014). 3. Estimating the error: optimal extensions of the quadrature formulas As for other numerical methods, it is crucial to have an efficient estimation of the quadrature error. This is not a trivial problem, and its study has given birth to the so-called stopping functionals, which allow us to decide when to stop the algorithm, provided a sufficiently small error is guaranteed. It is well known that for Gaussian quadratures, the best known (and most commonly used) stopping functional comes from the seminal Kronrod idea (see, e.g., Gautschi, 1987). Essentially, this involves the construction of a higher-order quadrature formula, using the nodes of the previous one and some new ones, to take the difference between both quadrature formulas as an estimation of the error (see also Monegato, 1982, 2001; Li, 1994). 
This higher-order quadrature formula, which by Kronrod’s idea has the maximal possible ADP and is used for testing the quadrature error, is usually referred to as an optimal extension of the given one. In this sense, Milovanović & Spalević (2014), for a formula of type   $$\int_a^b \omega(t) f(t)\,{\mathrm{d}}t\approx \sum_{\nu=1}^n {\sum_{i=0}^{2s_\nu} a_{\nu i} f^{(i)}(x_\nu)},$$ (3.1) where $$a\le x_1<x_2<\cdots<x_n\le b$$, studied its extension to the interpolatory quadrature formula   $$\int_a^b \omega(t) f(t)\,{\mathrm{d}}t\approx\sum_{\nu=1}^n {\sum_{i=0}^{2s_\nu} b_{\nu i} f^{(i)}(x_\nu)}+\sum_{\mu=1}^m {\sum_{j=0}^{2s_\mu^*} c_{\mu j}^* f^{(j)}(x_\mu^*)},$$ (3.2) where $$x_\nu$$ are the same nodes as in (3.1), and the new nodes $$x_\mu^*$$ and new weights $$b_{\nu i},c_{\mu j}^*$$ are chosen to maximize the degree of precision of (3.2), which is greater than or equal to   $$\sum_{\nu=1}^n(2s_\nu+1)+\sum_{\mu=1}^m(2s_\mu^*+1)+m-1= 2\left(\sum_{\nu=1}^ns_\nu+\sum_{\mu=1}^m s_\mu^*\right)+n+2m-1.$$ The interpolatory quadrature formula (3.2) has in general ADP$$=\sum_{\nu=1}^n(2s_\nu+1)+\sum_{\mu=1}^m(2s_\mu^*+1)-1$$, which is higher than the ADP of the quadrature formula (3.1), i.e., $$\sum_{\nu=1}^n(2s_\nu+1)+n-1$$, if   $2\sum_{\mu=1}^m s_\mu^*+m>n.$ The last inequality represents the necessary condition for the construction of the optimal extensions of type (3.2). Observe that it does not depend on $$s_\nu,\ \nu=1,\dots,n$$. In the case when all $$s_\mu^*$$ are equal to $$0$$, the last condition reduces to $$m>n$$, i.e., $$m\ge n+1$$. In the case $$m=n+1$$, we refer to the optimal extensions (3.2) as Kronrod extensions, which have $$2n+1$$ nodes, since they are generalizations of the well-known Gauss–Kronrod quadrature formulas, which are optimal extensions of the Gauss quadrature formulas of (3.1) with $$s_\nu=0,\ \nu=1,\dots,n$$ ($$s_\mu^*=0,\ \mu=1,\dots,m=n+1$$). We say that the quadrature formula (3.2) is a Chakalov–Popoviciu–Kronrod quadrature formula. 
A particular case of this formula is the Gauss–Turán–Kronrod quadrature formula, if $$s_1=s_2=\cdots=s_n=s$$. When $$s_1=s_2=\cdots=s_n=0,\ s_1^*= s_2^*=\cdots=s_m^*=0$$ and $$m=n+1$$, the well-known Gauss–Kronrod quadrature formula is obtained as a particular case of both quadrature formulae just mentioned. In the theory of Gauss–Kronrod quadrature formulas, the Stieltjes polynomials $$E_{n+1}(t)$$, whose zeros are the nodes $$x_\mu^*$$, namely $$E_{n+1}(t)\equiv E_{n+1}(t,\omega):=\prod_{\mu=1}^{n+1}(t-x_\mu^*)$$, play an important role. Also, of foremost interest are weight functions for which the Gauss–Kronrod quadrature formula has the property that (i) all $$n+1$$ nodes $$x_\mu^*$$ are in $$(a,b)$$ and are simple (i.e., all the zeros of the Stieltjes polynomial $$E_{n+1}(t)$$ are in $$(a,b)$$ and are simple). It is also desirable to work with weight functions for which the following additional properties are fulfilled: (ii) The interlacing property. Namely, the nodes $$x_\mu^*$$ and $$x_\nu$$ separate each other (i.e., the $$n+1$$ zeros of $$E_{n+1}(t)$$ separate the $$n$$ zeros of the orthogonal polynomial $$\prod_{\nu=1}^n(t-x_\nu)$$) and (iii) all the quadrature weights are positive. On the basis of the above facts, it seems natural to consider Chakalov–Popoviciu–Kronrod quadratures (3.2) in which $$m=n+1$$, i.e.,   $$\int_a^b \omega(t) f(t)\,{\mathrm{d}}t\approx\sum_{\nu=1}^n {\sum_{i=0}^{2s_\nu} b_{\nu i} f^{(i)}(x_\nu)}+\sum_{\mu=1}^{n+1} {\sum_{j=0}^{2s_\mu^*} c_{\mu j}^* f^{(j)}(x_\mu^*)}.$$ (3.3) We know that in the general case of quadratures with multiple nodes, not all the quadrature weights have to be positive (see, e.g., the examples displayed in Section 5 below). Therefore, for Kronrod extensions of Gaussian quadrature formulas with multiple nodes, it does not seem natural to consider property (iii) as desirable. 
Nevertheless, we continue to require the first two properties, in the sense that the nodes $$x_\mu^*$$ should all be real and simple and should satisfy the interlacing property with the original nodes $$x_\nu$$. In Shi (2000), the density of the zeros of $$\sigma$$-orthogonal polynomials on bounded intervals, i.e., the nodes of the quadrature formulas (3.1), is studied, extending the well-known results for the zeros of ordinary orthogonal polynomials, i.e., the nodes of the $$n$$-point Gauss quadrature formulas (cf. Szegő, 1975). Because of that property, it is very natural to consider extensions of these quadratures in the form (3.3) by looking for new nodes to satisfy the interlacing property with respect to the old nodes. In this way, we take into account the influence of the associated quadrature formula and get the information on the integrand and its derivatives uniformly over the whole interval of integration. As stated above, one of the pioneering works on quadrature formulas with multiple nodes was Turán (1950). In that study, the author proposed an interpolatory quadrature formula of the type   $$\int_{-1}^1 f(t)\,{\mathrm{d}}t\approx\sum_{\nu=1}^n\sum_{i=0}^{2s}A_{i,\nu}f^{(i)}(\tau_\nu)\quad(s\in{\Bbb N_0}),$$ (3.4) which has the highest possible $$\mathrm{ADP}$$. For our purposes, it is natural to consider a generalization of formula (3.4), in the sense of   $$\int_a^b \omega(t) f(t)\,{\mathrm{d}}t\approx\sum_{\nu=1}^n\sum_{i=0}^{2s}A_{i,\nu}f^{(i)}(\tau_\nu)\quad(s\in{\Bbb N_0}).$$ (3.5) Because of this highest degree of precision, it is natural to call (3.5) a Gauss–Turán quadrature formula. Note that in (3.5), $$\tau_\nu$$ are the zeros of a polynomial $$\pi_n$$, known as the $$s$$-orthogonal polynomial, of degree $$n$$, which satisfies the orthogonality relation   $$(\forall\, p\in \mathcal{P}_{n-1})\qquad\int_a^b \omega(t) \pi_n^{2s+1}(t)p(t)\,{\mathrm{d}}t=0,$$ (3.6) and $$A_{i,\nu}$$ are determined through interpolation. 
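For the Chebyshev weight function of the first kind, the $$s$$-orthogonal polynomial is $$T_n$$ itself for every $$s$$ (see Section 2), so (3.6) can be checked directly in that case; a minimal numerical sketch (assuming NumPy):

```python
import numpy as np
from math import pi

def gauss_chebyshev(g, N):
    """N-point Gauss-Chebyshev rule for the weight 1/sqrt(1-t^2); exact for deg <= 2N-1."""
    t = np.cos((2 * np.arange(1, N + 1) - 1) * pi / (2 * N))
    return pi / N * np.sum(g(t))

n, s = 5, 2
# monic Chebyshev polynomial of the first kind (built from the zeros of T_n)
pi_n = np.polynomial.Polynomial.fromroots(
    np.cos((2 * np.arange(1, n + 1) - 1) * pi / (2 * n)))

# (3.6): int w(t) pi_n(t)^{2s+1} t^k dt = 0 for k = 0, ..., n-1
vals = [gauss_chebyshev(lambda t: pi_n(t) ** (2 * s + 1) * t ** k,
                        N=(n * (2 * s + 1) + n) // 2 + 1)
        for k in range(n)]
assert max(abs(v) for v in vals) < 1e-12
```

The quadrature order is chosen so that each integrand (a polynomial of degree at most $$n(2s+1)+n-1$$) is integrated exactly.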
If $$\tau_\nu$$ and $$A_{i,\nu}$$ are chosen in this way, $$\mathrm{ADP}$$ for (3.5) is $$2(s+1)n-1$$. The weight coefficients $$A_{i,\nu}$$ in the Gauss–Turán quadrature formula (3.5) are not all positive in general. Following Kronrod’s idea, Li (1994) considered an extension of the formula (3.5) to   $$\int_a^b \omega(t) f(t)\,{\mathrm{d}}t\approx\sum_{\nu=1}^n\sum_{i=0}^{2s}B_{i,\nu}f^{(i)} (\tau_\nu)+\sum_{j=1}^{n+1}C_jf(\hat\tau_j)\quad(s\in{\Bbb N_0}),$$ (3.7) where $$\tau_\nu$$ are the same nodes as in (3.5), and the new nodes $$\hat\tau_j$$ and new weights $$B_{i,\nu},C_j$$ are chosen to maximize the $$\mathrm{ADP}$$ of (3.7). It is shown in Li (1994) that when $$\omega$$ is any weight function on $$[a,b]$$, we can always obtain the maximum degree $$2n(s+1)+n+1$$ by taking $$\hat\tau_j$$ to be the zeros of the polynomials $$\hat\pi_{n+1}$$ satisfying the orthogonality property   $(\forall\, p\in \mathcal{P}_n)\qquad \int_a^b \omega(t) \hat\pi_{n+1}(t)\pi_n^{2s+1}(t)p(t)\,{\mathrm{d}}t=0.$ At the same time, it is shown that $$\hat\pi_{n+1}$$ always exists and is unique up to a multiplicative constant. In the special case when $$\omega(t)=(1-t^2)^{-1/2}$$, Li (1994) determined $$\hat\pi_{n+1}$$ explicitly and obtained the weights in (3.7) for $$s=1$$ and $$s=2$$. The weights in the remaining cases $$s\ge3$$ were obtained later in Shi (1996). 4. Quadrature formulas with multiple nodes for Fourier coefficients corresponding to Chebyshev weight functions As stated in Section 2, our main interest here is the use of quadrature formulas with multiple nodes in estimating the Fourier coefficients for the four Chebyshev weights $$\omega_i\,,\,i=1,\ldots,4$$, in (2.2)–(2.3). For these four weights, optimal extensions of the above Chakalov–Popoviciu quadratures will be considered as efficient tools to estimate the errors of quadrature. 
Throughout this section, for calculating Fourier coefficients (1.1) by a quadrature formula and estimating the corresponding error, we use the method based on Theorem 2.2; namely, if there exist unique quadrature formulas (3.1), (3.2), then Theorem 2.2 implies that there exist unique quadratures for calculating the integrals   $$\int_a^b \omega(t) f(t)\pi_{n,\sigma}(t)\,{\mathrm{d}}t\approx \sum_{\nu=1}^n {\sum_{i=0}^{2s_\nu-1} \widehat a_{\nu i} f^{(i)}(x_\nu)},$$ (4.1) and   $$\int_a^b \omega(t) f(t)\pi_{n,\sigma}(t)\,{\mathrm{d}}t\approx\sum_{\nu=1}^n {\sum_{i=0}^{2s_\nu-1} \widehat b_{\nu i} f^{(i)}(x_\nu)}+\sum_{\mu=1}^m {\sum_{j=0}^{2s_\mu^*} \widehat c_{\mu j}^* f^{(j)}(x_\mu^*)},$$ (4.2) which represent the Fourier coefficients if the given $$\sigma$$-orthogonal polynomial $$\pi_{n,\sigma}$$ agrees with the corresponding ordinary orthogonal polynomial $$P_n$$ with respect to the weight function $$\omega$$, i.e., $$\pi_{n,\sigma}(t)\equiv P_n(t)$$ on $$[a,b]$$. Then, the error in (4.1) can be estimated by the well-known method of computing the absolute value of the difference of the quadrature sums in (4.2) and (4.1). First, we are concerned with the Chebyshev weight function of the first kind. 4.1 A generalization of the Micchelli–Rivlin quadrature formula for Fourier–Chebyshev coefficients Using the above-presented method (see (4.1), (4.2)) for the case $$\omega(t)=1/\sqrt{1-t^2}$$, $$t\in[-1,1]$$, we have just proved the following statement. Theorem 4.1 Let $$n,s\in\mathbb{N}$$ and $$\omega(t)=1/\sqrt{1-t^2}$$, $$t\in[-1,1]$$. 
Then, there exists a unique quadrature formula with multiple nodes for calculating the corresponding Fourier–Chebyshev coefficients $$a_n(f)=\int_{-1}^1 f(t)T_n(t)/\sqrt{1-t^2}\,{\mathrm{d}}t$$,   $$\int_{-1}^1 \frac{f(t)T_n(t)}{\sqrt{1-t^2}}\,{\mathrm{d}}t\approx\sum_{\nu=1}^n\sum_{i=0}^{2s-1}\widehat A_{i,\nu}f^{(i)}(\tau_\nu),$$ (4.3) with $$\mathrm{ADP}=2sn+n-1$$, as well as its Kronrod extension   $$\int_{-1}^1 \frac{f(t)T_n(t)}{\sqrt{1-t^2}}\,{\mathrm{d}}t\approx\sum_{\nu=1}^n\sum_{i=0}^{2s-1} \widehat B_{i,\nu}f^{(i)}(\tau_\nu)+\sum_{j=1}^{n+1}\widehat C_jf(\hat\tau_j),$$ (4.4) with $$\mathrm{ADP}=2sn+2n+1$$. In the special case when $$s=1$$, the quadrature formula (4.3) becomes the well-known Micchelli–Rivlin quadrature formula (2.7) or, what is the same, (2.8). It is noteworthy that the precision in calculating $$a_n(f)$$ ($$n$$ fixed) increases with increasing $$s$$ in the quadrature formulas (4.3), (4.4). Now, for the sake of completeness, an alternative way to prove Theorem 4.1 for $$n\ge2$$ is presented. New proof of Theorem 4.1. Fix $$n,s\in\mathbb{N}$$ ($$n\ge2$$) and consider the new weight function (cf. Engels, 1980)   $\omega^{n,s}(t)=\frac{\hat T_n^{2s}(t)}{\sqrt{1-t^2}}.$ Recently, Cvetković et al. (2016, Theorem 2.1, Equation (2.2)) obtained, analytically and in closed form, the coefficients of the three-term recurrence relation for the corresponding orthogonal polynomials $$p_k^{n,s}$$ with respect to the modified Chebyshev weight function of the first kind $$\omega^{n,s}$$ on $$[-1,1]$$,   $p_{k+1}^{n,s}(t)=tp_{k}^{n,s}(t)-\beta^{n,s}_kp_{k-1}^{n,s}(t),\quad k\in\mathbb{N}_0,$ where $$p_{0}^{n,s}(t)=1$$, $$p_{-1}^{n,s}(t)=0$$. The Jacobi matrix, on which the well-known construction of the Gauss quadrature formula with $$2n+1$$ nodes with respect to $$\omega^{n,s}$$ is based (cf. Golub & Welsch, 1969), is formed by the following coefficients of the three-term recurrence relation (cf. 
Cvetković et al., 2016, Theorem 2.1, Equation (2.2)):   $$\begin{array}{rl} \alpha_k^{n,s}=&0\quad\ (k=0,1,\dots);\\[2mm] \beta_1^{n,s}=&\displaystyle\frac 12,\quad \beta_n^{n,s}=\displaystyle\frac 14\frac{1+2s}{1+s},\quad\beta_{n+1}^{n,s}=\displaystyle\frac 14\frac{1}{1+s},\quad \beta_{2n}^{n,s}=\displaystyle\frac 14\frac{2}{2+s};\\[2mm] \beta_k^{n,s}=&\displaystyle\frac 14\quad\mbox{otherwise}; \end{array}$$ (4.5) with $$\beta_0^{n,s}=\frac{\pi}{2^{2ns}}{2s\choose s}$$. For such a Gauss quadrature formula with $$n$$ nodes with respect to $$\omega^{n,s}$$, the associated generalized averaged Gaussian quadrature formula with $$2n+1$$ nodes (see Spalević, 2007) is constructed from the Jacobi matrix that consists of the same coefficients as in the three-term recurrence relation (4.5), except that $$\beta_{2n}^{n,s}=\frac 14\frac{2}{2+s}$$ is replaced by $$\beta_{2n}^{n,s}=\frac 12$$; its nodes are the zeros of the polynomial   $t_{2n+1}\equiv p_{n}^{n,s}\cdot F_{n+1},$ where   \begin{eqnarray*} F_{n+1}&=&p_{n+1}^{n,s}-\frac{1}{4(1+s)}\hat T_{n-1}\\ &=&\hat T_{n+1}+\frac 14\left(1-\frac{1+2s}{1+s}\right)\hat T_{n-1}- \frac{1}{4(1+s)}\hat T_{n-1}\\ &=&\hat T_{n+1}-\frac 14 \hat T_{n-1}\\ &=&\frac{1}{2^n}(T_{n+1}-T_{n-1})\\ &=&\frac{1}{2^{n-1}}(t^2-1)U_{n-1}. \end{eqnarray*} The last equality holds on the basis of Shi (1996, Equation (2.4)), by putting $$k=1$$ there. Since all entries, i.e., the coefficients of the three-term recurrence relation, of the Jacobi matrix for the generalized averaged Gaussian quadrature formula with $$2n+1$$ nodes (the zeros of $$p_n^{n,s}\cdot F_{n+1}$$) agree with the corresponding entries of the Jacobi matrix for the Gauss quadrature formula with $$2n+1$$ nodes, up to the entry $$\sqrt{\beta_{2n}^{n,s}}$$, we conclude that the ADP of the given generalized averaged Gaussian quadrature formula with $$2n+1$$ nodes is $$2(2n+1)-1-2=4n-1\ge 3n+1$$, for $$n\ge2$$. 
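The construction just described can be reproduced numerically. The sketch below (assuming NumPy; function names are ours) builds the Jacobi matrix from the recurrence coefficients (4.5), taking $$\beta_k^{n,s}=1/4$$ at the unlisted indices and $$\beta_{2n}^{n,s}=\frac14\frac{2}{2+s}$$ (the value consistent with the moments of $$\omega^{n,s}$$), checks via the Golub–Welsch algorithm that the resulting Gauss rule reproduces the moments of $$\omega^{n,s}$$, and then verifies that replacing $$\beta_{2n}^{n,s}$$ by $$1/2$$ yields exactly the zeros of $$p_n^{n,s}\cdot F_{n+1}$$, i.e., the $$2n+1$$ points $$\cos(j\pi/(2n))$$.

```python
import numpy as np
from math import pi, comb

def gauss_from_jacobi(beta0, beta):
    """Golub-Welsch: Gauss nodes/weights from recurrence coefficients
    (all alpha_k = 0 here, since the weight function is even)."""
    J = np.diag(np.sqrt(beta), 1) + np.diag(np.sqrt(beta), -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, beta0 * V[0] ** 2

n, s = 3, 2
N = 2 * n + 1

# recurrence coefficients (4.5) of w^{n,s}(t) = That_n(t)^{2s} / sqrt(1-t^2)
beta0 = pi / 2 ** (2 * n * s) * comb(2 * s, s)
beta = np.full(N - 1, 0.25)                   # beta_k = 1/4 otherwise
beta[0] = 0.5                                 # beta_1
beta[n - 1] = 0.25 * (1 + 2 * s) / (1 + s)    # beta_n
beta[n] = 0.25 / (1 + s)                      # beta_{n+1}
beta[2 * n - 1] = 0.5 / (2 + s)               # beta_{2n}
x, w = gauss_from_jacobi(beta0, beta)

# the (2n+1)-node Gauss rule must reproduce the moments of w^{n,s} up to
# degree 2N-1; reference values from a high-order Gauss-Chebyshev rule
Tn_monic = np.polynomial.Polynomial.fromroots(
    np.cos((2 * np.arange(1, n + 1) - 1) * pi / (2 * n)))
M = 200
tc = np.cos((2 * np.arange(1, M + 1) - 1) * pi / (2 * M))
for k in range(2 * N):
    exact = pi / M * np.sum(tc ** k * Tn_monic(tc) ** (2 * s))
    assert abs(np.sum(w * x ** k) - exact) < 1e-12

# replacing beta_{2n} by 1/2 gives the generalized averaged rule; its nodes
# are the zeros of p_n * F_{n+1}: the points cos(j*pi/(2n)), j = 0, ..., 2n
beta[2 * n - 1] = 0.5
xa, _ = gauss_from_jacobi(beta0, beta)
expected = np.sort(np.cos(np.arange(2 * n + 1) * pi / (2 * n)))
assert np.allclose(xa, expected, atol=1e-12)
```

The second check makes the identity $$F_{n+1}=\frac{1}{2^{n-1}}(t^2-1)U_{n-1}$$ visible: the averaged-rule nodes are the zeros of $$T_n$$ together with its extremal points.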
This means that the given generalized averaged Gaussian quadrature formula with $$2n+1$$ nodes is in fact the Gauss–Kronrod quadrature formula and $$F_{n+1}\equiv E_{n+1}$$, where $$E_{n+1}$$ is the Stieltjes polynomial corresponding to $$p_n^{n,s}$$. It is well known that the last quadrature uniquely exists. Now, by applying Theorem 2.2, for the weight function $$\omega(t)=1/\sqrt{1-t^2}$$ on $$[-1,1]$$, we deduce that there uniquely exist, first, the quadrature formula (3.7) with $$\mathrm{ADP}=2sn+3n+1$$ and then (4.4) with $$\mathrm{ADP}=2sn+2n+1$$. There uniquely exists a Gauss quadrature formula with $$n$$ nodes for the weight function $$\omega^{n,s}$$. Now, by applying Theorem 2.2 again, for the weight function $$\omega(t)=1/\sqrt{1-t^2}$$ on $$[-1,1]$$, we deduce that there uniquely exist, first, the quadrature formula of Gaussian type (3.5) with $$\mathrm{ADP}=2sn+2n-1$$ and then (4.3) with $$\mathrm{ADP}=2sn+n-1$$. Remark 4.2 The quadrature formula (4.3) was briefly mentioned in Bojanov & Petrova (2009, p. 383). For calculating the nodes and weight coefficients in the standard Gauss–Turán and Chakalov–Popoviciu quadrature formulas, we can use the general numerically stable methods presented in the papers cited at the end of Section 2. Then, the weight coefficients of the corresponding quadratures for Fourier coefficients can be computed using (2.6). In Section 5, we will describe this step in more detail. For $$\omega(t)=1/\sqrt{1-t^2}$$, explicit expressions for the weight coefficients of the generalized standard Gaussian quadrature formulas of a general form on the Chebyshev nodes (of the first and second kind) are given in Shi (1998). Explicit expressions for the Fourier–Chebyshev coefficients $$a_n(f)=\int_{-1}^1 f(t)T_n(t)/\sqrt{1-t^2}\,{\mathrm{d}}t$$ are derived in Yang & Wang (2003). 
However, to find them, we need to calculate divided differences with repeated nodes, which might not be a simple task, especially when $$n,s$$ increase (see Pop & Bărbosu, 2009). A more general case with the Gori–Micchelli weight functions $$\omega=\omega_{n,\mu}$$ was treated in a similar fashion in Yang (2005). 4.2 Quadratures for Fourier coefficients for Chebyshev weight functions of the second kind Let $$\{\eta_j\}$$, $$j=1,\dots,n-1$$, be the zeros of the Chebyshev polynomial of the second kind $$U_{n-1}$$ of degree $$n-1$$. It is well known that the Gauss–Turán quadrature formula   $$\int_{-1}^1(1-t^2)^{s+1/2} f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n-1}\sum_{i=0}^{2s}\alpha_{ji}f^{(i)}(\eta_j),\quad s\in\mathbb{N}$$ (4.6) exists uniquely and has $$\mathrm{ADP}=2(n-1)(s+1)-1=2n(s+1)-2s-3$$. Thus, from (4.6), by Theorem 2.2 we get the quadrature formula   $$\int_{-1}^1 \sqrt{1-t^2} f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n-1}\sum_{i=0}^{2s}\hat\alpha_{ji}f^{(i)}(\eta_j)+\sum_{i=0}^{s-1}[\beta_i f^{(i)}(-1)+ \gamma_i f^{(i)}(1)],\quad s\in\mathbb{N},$$ (4.7) which exists uniquely and has $$\mathrm{ADP}=2n(s+1)-3$$. Since the nodes of the quadrature formula (4.7) are known, we can calculate its weight coefficients (cf. Milovanović et al., 2004). Using (4.7) again, Theorem 2.2 provides the Gaussian quadrature formula of Lobatto type for the Fourier–Chebyshev coefficients   $$\int_{-1}^1 \sqrt{1-t^2} f(t) U_{n-1}(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n-1}\sum_{i=0}^{2s-1}\widetilde\alpha_{ji}f^{(i)}(\eta_j)+\sum_{i=0}^{s-1}[\widetilde\beta_i f^{(i)}(-1)+ \widetilde\gamma_i f^{(i)}(1)],\quad s\in\mathbb{N},$$ (4.8) which exists uniquely and has $$\mathrm{ADP}=2ns+n-2$$. Since the nodes of the quadrature formula (4.8) are known (they are the same as in (4.7)), we can calculate its weight coefficients by (2.6), by knowing the weight coefficients of (4.7). 
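The Lobatto-type structure of (4.7) can be made concrete on the smallest nontrivial case. The sketch below (assuming NumPy; names are ours) constructs the rule for $$n=2$$, $$s=1$$ (one interior node $$\eta_1=0$$ plus the endpoints) by imposing exactness on monomials, and confirms $$\mathrm{ADP}=2n(s+1)-3=5$$: degree $$5$$ is integrated exactly, degree $$6$$ is not.

```python
import numpy as np
from math import pi, comb

def mom2(p):
    """int_{-1}^1 t^p sqrt(1-t^2) dt."""
    if p % 2:
        return 0.0
    m = p // 2
    return pi / 2 * comb(p, m) / 4 ** m / (m + 1)

# quadrature (4.7) for n = 2, s = 1:
#   int sqrt(1-t^2) f dt ~ a f(0) + b f'(0) + e f''(0) + c f(-1) + d f(1)
def functionals(p):
    """values of the five functionals at f = t^p."""
    return [1.0 if p == 0 else 0.0,       # f(0)
            1.0 if p == 1 else 0.0,       # f'(0)
            2.0 if p == 2 else 0.0,       # f''(0)
            (-1.0) ** p,                  # f(-1)
            1.0]                          # f(1)

A = np.array([functionals(p) for p in range(5)])
w = np.linalg.solve(A, [mom2(p) for p in range(5)])
apply_rule = lambda p: float(np.dot(w, functionals(p)))

assert abs(apply_rule(5) - mom2(5)) < 1e-14   # degree 5 exact: ADP = 5
assert abs(apply_rule(6) - mom2(6)) > 1e-3    # degree 6 fails
```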
On the other hand, Micchelli & Sharma (1983), for every $$s\in\mathbb{N}$$, constructed a formula of the form   $$\int_{-1}^1 \frac{1}{\sqrt{1-t^2}}T_n(t) f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n-1}\sum_{i=0}^{2s}a_{ji}f^{(i)}(\eta_j)+\sum_{i=0}^{s}[A_i f^{(i)}(-1)+ B_i f^{(i)}(1)],$$ (4.9) with $$\mathrm{ADP}=(2s+3)n-1$$, which has the highest possible degree of precision. Since the nodes of their formula are located at the extremal points $$-1,\eta_1,\dots,\eta_{n-1},1$$ of the Chebyshev polynomial $$T_n$$ (note that $$\{\eta_j\}_{j=1}^{n-1}$$ are also the zeros of the Chebyshev polynomial of the second kind $$U_{n-1}$$), it can be considered an extension of the simple node formula   $\frac 2\pi\int_{-1}^1 \frac{1}{\sqrt{1-t^2}}T_n(t) f(t)\,{\mathrm{d}}t\approx 2^{1-n}f[-1,\eta_1,\dots,\eta_{n-1},1]$ of $$\mathrm{ADP}=3n-1$$, established earlier in Micchelli & Rivlin (1972). The uniqueness of the Micchelli–Sharma multiple node quadrature formula with the highest degree of precision (4.9) is proved in Bojanov & Petrova (2009, Theorem 2.6). From (4.9) and by Theorem 2.2, again, we get the Gaussian quadrature formula of Lobatto type,   $$\int_{-1}^1 \sqrt{1-t^2} f(t)\,{\mathrm{d}}t \approx \sum_{j=1}^{n-1}\sum_{i=0}^{2s}\hat a_{ji}f^{(i)}(\eta_j)+\sum_{i=0}^{s-1}[\hat A_i f^{(i)}(-1)+ \hat B_i f^{(i)}(1)] + \sum_{j=1}^n\lambda_jf(\xi_j),$$ (4.10) which uniquely exists and has $$\mathrm{ADP}=(2s+3)n-1-2+n=2(s+2)n-3$$. Since the nodes of the quadrature formula (4.10) are known, we can calculate its weight coefficients (cf. Milovanović et al., 2004). Note that $$\xi_j$$, $$j=1,\dots,n$$ are the zeros of the $$n$$-degree Chebyshev polynomial of the first kind $$T_n$$. 
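The simple-node formula at the extremal points quoted above is also easy to verify numerically; a sketch under the same assumptions as before (NumPy, helper names ours):

```python
import numpy as np
from math import pi

def divided_difference(xs, ys):
    """Newton divided difference g[x_0, ..., x_m] for distinct nodes."""
    d = list(ys)
    for k in range(1, len(xs)):
        for i in range(len(xs) - 1, k - 1, -1):
            d[i] = (d[i] - d[i - 1]) / (xs[i] - xs[i - k])
    return d[-1]

n = 5
nodes = np.cos(np.arange(n + 1) * pi / n)                    # extremal points of T_n
f = np.polynomial.Polynomial(np.arange(1, 3 * n + 1) * 0.1)  # degree 3n - 1
approx = (pi / 2) * 2 ** (1 - n) * divided_difference(nodes, f(nodes))

# reference value via a high-order Gauss-Chebyshev rule
M = 100
t = np.cos((2 * np.arange(1, M + 1) - 1) * pi / (2 * M))
Tn = np.polynomial.chebyshev.Chebyshev.basis(n)
exact = pi / M * np.sum(Tn(t) * f(t))
assert abs(approx - exact) < 1e-9
```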
Having in mind (4.10), and using again Theorem 2.2, we get the Gaussian quadrature formula of Lobatto type for the Fourier–Chebyshev coefficients   $$\int_{-1}^1 \sqrt{1-t^2} f(t) U_{n-1}(t)\,{\mathrm{d}}t \approx \sum_{j=1}^{n-1}\sum_{i=0}^{2s-1}\tilde a_{ji}f^{(i)}(\eta_j)+\sum_{i=0}^{s-1}[\tilde A_i f^{(i)}(-1)+ \tilde B_i f^{(i)}(1)] + \sum_{j=1}^n\tilde\lambda_jf(\xi_j),$$ (4.11) which exists uniquely and has $$\mathrm{ADP}=(2s+3)n-2$$. Since the nodes of the quadrature formula (4.11) are known, as above, we can calculate its weight coefficients by (2.6), from the weight coefficients of (4.10). In this way, we found two quadrature formulas with multiple nodes, (4.8) and its modified Kronrod extension (4.11), which is the Gaussian quadrature formula of Lobatto type, for calculating the Fourier–Chebyshev coefficients relative to Chebyshev weight functions of the second kind. We have just proved the following theorem. Theorem 4.3 Let $$n,s\in\mathbb{N}$$ and $$\omega(t)=\sqrt{1-t^2}$$, $$t\in[-1,1]$$. Then, there exists a unique quadrature formula with multiple nodes for calculating the corresponding Fourier–Chebyshev coefficients   $a_{n-1}(f)=\int_{-1}^1 f(t)U_{n-1}(t)\sqrt{1-t^2}\,{\mathrm{d}}t,$ namely the quadrature formula (4.8) with $$\mathrm{ADP}=2ns+n-2$$, as well as its modified Kronrod extension (4.11) with $$\mathrm{ADP}=(2s+3)n-2$$. We called (4.11) the modified Kronrod extension of (4.8) in the following sense. We applied Kronrod’s idea to the quadrature formula (4.8) to obtain the quadrature formula of type (4.11) in which the nodes $$\eta_j,\ j=1,\dots,n-1$$ (and $$-1,1$$) are fixed, while the additional $$n$$ nodes and all weight coefficients are chosen to maximize the ADP of the extended formula. The extended quadrature formula is guaranteed to have $$\mathrm{ADP}\ge 2n(s+1)-1$$ by construction, and it agrees with the quadrature (4.11), since $$(2s+3)n-2\ge 2n(s+1)-1$$ for $$n\ge1$$. 
4.3 Quadratures for Fourier coefficients for Chebyshev weight functions of the third and fourth kinds Let $$\sigma_n=(s,\ldots,s)$$ and $$\omega(t)\equiv \omega_4(t)=(1-t)^{1/2+s}(1+t)^{-1/2}$$. Let $$\{x_j\,,\, j=1,\dots,n\}$$, be the zeros of the Chebyshev polynomial of the fourth kind $$P_n^{(1/2,-1/2)}$$ of degree $$n$$, with respect to the Chebyshev weight function of the fourth kind $$(1-t)^{1/2}(1+t)^{-1/2}$$, which is $$s$$-orthogonal with respect to $$\omega_4$$ (see Ossicini & Rosati, 1975). It is well known that the Gauss–Turán quadrature formula   $$\int_{-1}^1 \omega_4(t) f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n}\sum_{i=0}^{2s}\alpha_{ji}f^{(i)}(x_j),\quad s\in\mathbb{N}$$ (4.12) uniquely exists, and has $$\mathrm{ADP}=2n(s+1)-1$$. From (4.12) and Theorem 2.2 we get the quadrature formula   $$\int_{-1}^1 \sqrt{\frac{1-t}{1+t}} f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n}\sum_{i=0}^{2s}\hat \alpha_{ji}f^{(i)}(x_j)+\sum_{i=0}^{s-1}\beta_{i}f^{(i)}(1),\quad s\in\mathbb{N},$$ (4.13) which is unique and has $$\mathrm{ADP}=2n(s+1)+s-1=(2n+1)s+2n-1$$. Since the nodes of the quadrature formula (4.13) are known, we can calculate its weight coefficients (cf. Milovanović et al., 2004). Now, from (4.13) and Theorem 2.2, the following Gaussian quadrature formula of Radau type for the Fourier–Chebyshev coefficients is obtained:   $$\int_{-1}^1 \sqrt{\frac{1-t}{1+t}} f(t)P_n^{(1/2,-1/2)}(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n}\sum_{i=0}^{2s-1}\widetilde \alpha_{ji}f^{(i)}(x_j)+\sum_{i=0}^{s-1}\widetilde\beta_{i}f^{(i)}(1),\quad s\in\mathbb{N},$$ (4.14) which uniquely exists and has $$\mathrm{ADP}=(2n+1)s+n-1$$. Since the nodes of the quadrature formula (4.14) are known (they are the same as in (4.13)), we can calculate its weight coefficients by (2.6), using the weight coefficients of (4.13). 
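A small instance of (4.13) can be written down explicitly. For $$n=s=1$$, the triple Gauss–Turán node of (4.12) is the zero $$-1/2$$ of $$P_1^{(1/2,-1/2)}$$; the sketch below (assuming NumPy) solves for the four coefficients of (4.13) by exactness on monomials and confirms $$\mathrm{ADP}=(2n+1)s+2n-1=4$$.

```python
import numpy as np
from math import pi, comb

def mom4(p):
    """int_{-1}^1 t^p sqrt((1-t)/(1+t)) dt = A(p) - A(p+1), where
    A(q) = int_0^pi cos^q(theta) dtheta (substitution t = cos(theta))."""
    A = lambda q: 0.0 if q % 2 else pi * comb(q, q // 2) / 2 ** q
    return A(p) - A(p + 1)

# (4.13) for n = s = 1: triple node at -1/2 plus a simple node at t = 1
x0 = -0.5
def row(p):
    """values of the four functionals at f = t^p."""
    return [x0 ** p,
            p * x0 ** (p - 1) if p >= 1 else 0.0,
            p * (p - 1) * x0 ** (p - 2) if p >= 2 else 0.0,
            1.0]

A = np.array([row(p) for p in range(4)])
w = np.linalg.solve(A, [mom4(p) for p in range(4)])
apply_rule = lambda p: float(np.dot(w, row(p)))

assert abs(apply_rule(4) - mom4(4)) < 1e-12   # degree 4 exact: ADP = 4
assert abs(apply_rule(5) - mom4(5)) > 1e-3    # degree 5 fails
```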
On the other hand, let $$x_\mu^*$$, $$\mu=2,\dots,n+1$$, be the zeros of the Chebyshev polynomial of the third kind $$P_n^{(-1/2,1/2)}$$ of degree $$n$$ with respect to the Chebyshev weight function of the third kind $$(1+t)^{1/2}(1-t)^{-1/2}$$. A Kronrod extension with $$\mathrm{ADP}=n(4s+3)+s+1$$, i.e.,   $$\int_{-1}^1f(t)\omega_4(t)\,{\mathrm{d}}t \approx \sum_{\nu=1}^n \sum_{i=0}^{2s} b_{\nu i} f^{(i)}(x_\nu)+\sum_{\mu=2}^{n+1} \sum_{j=0}^{2s} c_{\mu j}^* f^{(j)}(x_\mu^*) +\sum_{j=0}^{s}c_{1,j}^* f^{(j)}(-1),$$ (4.15) of the Gauss–Turán quadrature formula (4.12) was proposed by Milovanović & Spalević (2014, Equation (2.23)). The free nodes $$x_\mu^*,\ \mu=2,\dots,n+1$$ are of the same multiplicity $$2s+1$$ as the fixed nodes $$x_\nu,\ \nu=1,\dots,n$$, and we need the node at $$-1$$ of multiplicity $$s+1$$, since in that case the corresponding orthogonality conditions reduce to the conditions   $\int_{-1}^1 t^k\,[U_{2n}(t)]^{2s+1}(1-t^2)^{1/2+s}\,{\rm d}t=0,\quad k=0,1,\dots,2n-1,$ which are fulfilled since $$P_n^{(1/2,-1/2)}(t)P_n^{(-1/2,1/2)}(t)={\rm const} \cdot U_{2n}(t)$$ (cf. Monegato, 1982, Equation (33), p. 147). For more details, see Milovanović & Spalević (2014, pp. 1217–1218). Then, from (4.15) and Theorem 2.2, we get the following quadrature formula, which is a modified Kronrod extension of (4.13):   $$\begin{array}{r@{\;}l} \displaystyle \int_{-1}^1f(t)\sqrt{\frac{1-t}{1+t}}\,{\mathrm{d}}t &\approx \displaystyle\sum_{\nu=1}^n \sum_{i=0}^{2s} \widehat b_{\nu i} f^{(i)}(x_\nu)+\sum_{j=0}^{s-1}\widehat c_{n+1,j}^* f^{(j)}(1)\\[0.1in] &\quad{}+\displaystyle\sum_{\mu=2}^{n+1} \sum_{j=0}^{2s} \widehat c_{\mu j}^* f^{(j)} (x_\mu^*) +\sum_{j=0}^{s}\widehat c_{1,j}^* f^{(j)}(-1), \end{array}$$ (4.16) which exists uniquely and has $$\mathrm{ADP}=n(4s+3)+2s+1$$. Since the nodes of the quadrature formula (4.16) are known, we can calculate, again, its weight coefficients (cf. Milovanović et al., 2004). 
In a similar way, from (4.16) and Theorem 2.2, we get the Gaussian quadrature formula for the Fourier–Chebyshev coefficients, which is a modified Kronrod extension of (4.14),   $$\begin{array}{r@{\;}l} \displaystyle \int_{-1}^1\sqrt{\frac{1-t}{1+t}}f(t)P_n^{(1/2,-1/2)}(t)\,{\mathrm{d}}t &\approx \displaystyle\sum_{\nu=1}^n \sum_{i=0}^{2s-1} \widetilde b_{\nu i} f^{(i)}(x_\nu)+\sum_{j=0}^{s-1}\widetilde c_{n+1,j}^* f^{(j)}(1)\\[0.1in] &\quad{}+\displaystyle\sum_{\mu=2}^{n+1} \sum_{j=0}^{2s} \widetilde c_{\mu j}^* f^{(j)}(x_\mu^*) +\sum_{j=0}^{s}\widetilde c_{1,j}^* f^{(j)}(-1), \end{array}$$ (4.17) which exists uniquely and has $$\mathrm{ADP}=2n(2s+1)+2s+1$$. The knowledge of the nodes in (4.17) allows us, again, to compute its weight coefficients by (2.6), using the corresponding ones of (4.16). Therefore, two quadrature formulas with multiple nodes, (4.14) and its modified Kronrod extension (4.17), for calculating the Fourier–Chebyshev coefficients relative to the Chebyshev weight function of the fourth kind have been found. Indeed, we have just proved the following theorem. Theorem 4.4 Let $$n,s\in\mathbb{N}$$ and $$\omega(t)=\sqrt{(1-t)/(1+t)}$$, $$t\in[-1,1]$$. Then, there exists a unique quadrature formula with multiple nodes for calculating the corresponding Fourier–Chebyshev coefficients   $a_{n}(f)=\int_{-1}^1 \sqrt{\frac{1-t}{1+t}}f(t)P_n^{(1/2,-1/2)}(t)\,{\mathrm{d}}t,$ namely, formula (4.14) with $$\mathrm{ADP}=(2n+1)s+n-1$$, and its modified Kronrod extension (4.17) with $$\mathrm{ADP}=2n(2s+1)+2s+1$$. Now, in a similar fashion, the following result about a quadrature formula with multiple nodes for estimating the Fourier coefficients for the Chebyshev weight function of the third kind $$\omega_3$$, together with its optimal extension of Kronrod type, may be obtained. Indeed, making use of the same notation as above, we have the following result: Theorem 4.5 Let $$n,s\in\mathbb{N}$$ and $$\omega(t)=\sqrt{(1+t)/(1-t)}$$, $$t\in[-1,1]$$. 
Then, there uniquely exists a quadrature formula with multiple nodes for calculating the corresponding Fourier–Chebyshev coefficients   $a_{n}(f)=\int_{-1}^1 \sqrt{\frac{1+t}{1-t}}f(t)P_n^{(-1/2,1/2)}(t)\,{\mathrm{d}}t,$ namely, formula (4.18) below, with $$\mathrm{ADP}=(2n+1)s+n-1$$, and its modified Kronrod extension (see (4.19) below) with $$\mathrm{ADP}=2n(2s+1)+2s+1$$. The above quadrature formulas are given by, respectively,   $$\int_{-1}^1 \sqrt{\frac{1+t}{1-t}} f(t)P_n^{(-1/2,1/2)}(t)\,{\mathrm{d}}t\approx \sum_{j=1}^{n}\sum_{i=0}^{2s-1}\widetilde \alpha_{ji}f^{(i)}(x_j)+\sum_{i=0}^{s-1}\widetilde\beta_{i}f^{(i)}(-1),\quad s\in\mathbb{N}$$ (4.18) and its modified Kronrod extension,   $$\begin{array}{r@{\;}l} \displaystyle \int_{-1}^1\sqrt{\frac{1+t}{1-t}}f(t)P_n^{(-1/2,1/2)}(t)\,{\mathrm{d}}t &\approx \displaystyle\sum_{\nu=1}^n \sum_{i=0}^{2s-1} \widetilde b_{\nu i} f^{(i)}(x_\nu)+\sum_{j=0}^{s-1}\widetilde c_{1,j}^* f^{(j)}(-1)\\[0.1in] &\quad{}+\displaystyle\sum_{\mu=2}^{n+1} \sum_{j=0}^{2s} \widetilde c_{\mu j}^* f^{(j)}(x_\mu^*) +\sum_{j=0}^{s}\widetilde c_{n+1,j}^* f^{(j)}(1), \end{array}$$ (4.19) where the nodes $$\{x_\nu\}$$ and $$\{x_\mu^*\}$$ have the analogous meaning as in Theorem 4.4, with the roles of the Chebyshev polynomials of the third and fourth kinds interchanged. Remark 4.6 It is clear that in all the quoted cases, the nodes in the Gaussian quadrature formulas and their Kronrod (or modified Kronrod) extensions interlace (cf. Milovanović & Spalević, 2014). 5. Numerical construction In this section, the numerical feasibility and efficiency of the quadrature formulas considered in the previous section are analysed. First, we present a way of computing the weight coefficients in the quadratures for Fourier–Chebyshev coefficients considered above. Let   $$\int_a^b \omega(t)f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^n\sum_{i=0}^{2s}a_{ji}f^{(i)}(x_j)$$ (5.1) represent a Gauss–Turán quadrature formula with respect to the weight function $$\omega(t)$$, $$t\in[a,b]$$, which has $$\mathrm{ADP}(\text{5.1})=2n(s+1)-1$$. 
If the corresponding monic $$s$$-orthogonal polynomial $$\pi_{n,s}$$ agrees with the monic ordinary orthogonal polynomial $$P_n$$ based on the zeros $$\{x_j\}_{j=1}^n$$, then the quadrature formula for the Fourier coefficient $$a_n(f)=\int_a^b \omega(t) P_n(t)f(t)\,{\mathrm{d}}t$$ of Gaussian type, obtained from (5.1), has the form   $$\int_a^b \omega(t)P_n(t)f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^n\sum_{i=0}^{2s-1}\widehat a_{ji}f^{(i)}(x_j)=:\mathcal{T}_{n,s}(f),$$ (5.2) and $$\mathrm{ADP}(\text{5.2})=(2s+1)n-1$$. The previous quadrature (4.3) is the particular case of quadrature (5.2) for $$\omega(t)=1/\sqrt{1-t^2}$$, $$t\in[-1,1]$$. If we know the weight coefficients $$a_{ji}$$ in (5.1), then we can find the weight coefficients $$\widehat a_{ji}$$ in (5.2) as follows. Substituting $$f$$ in (5.1) by $$fP_n$$, where $$f\in\mathcal{P}_{n(2s+1)-1}$$ and $$P_n(x_j)=0$$ ($$j=1,\ldots,n$$), we get   \begin{eqnarray*} \int_a^b\omega(t) f(t)P_n(t)\,{\mathrm{d}}t&=&\sum_{j=1}^n\sum_{i=0}^{2s}a_{ji}\left.\left[f(t)P_n(t)\right]^{(i)}\right|_{t=x_j}\\ &=&\sum_{j=1}^n\sum_{i=1}^{2s}a_{ji}\left.\left[f(t)P_n(t)\right]^{(i)}\right|_{t=x_j}\\ &=&\sum_{j=1}^n\sum_{i=0}^{2s-1}a_{j,i+1}\left.\left[f(t)P_n(t)\right]^{(i+1)}\right|_{t=x_j}\\ &=&\sum_{j=1}^n\sum_{i=0}^{2s-1}a_{j,i+1}\sum_{k=0}^{i}\binom{i+1}{k}P_n^{(i+1-k)}(x_j)f^{(k)}(x_j). \end{eqnarray*} If, for fixed $$j$$, we define   $h_{ik}=a_{j,i+1}\binom{i+1}{k}P_n^{(i+1-k)}(x_j)f^{(k)}(x_j),$ then we have   $\sum_{i=0}^{2s-1}\sum_{k=0}^{i}h_{ik}=\sum_{k=0}^{2s-1}\sum_{i=k}^{2s-1}h_{ik}=\sum_{i=0}^{2s-1}\sum_{k=i}^{2s-1}h_{ki},$ and, therefore, we conclude that   $$\int_a^b\omega(t) f(t)P_n(t)\,{\mathrm{d}}t=\sum_{j=1}^n\sum_{i=0}^{2s-1}\left(\sum_{k=i}^{2s-1}a_{j,k+1}\binom{k+1}{i}P_n^{(k+1-i)}(x_j)\right)f^{(i)}(x_j)$$ (5.3) for each $$f\in\mathcal{P}_{n(2s+1)-1}$$. 
It means that (5.3) gives (5.2), where   $$\widehat a_{ji}=\sum_{k=i}^{2s-1}a_{j,k+1}\binom{k+1}{i}P_n^{(k-i+1)}(x_j),\quad j=1,2,\dots,n;\ i=0,1,\dots,2s-1.$$ (5.4) Now, let   $$\int_a^b \omega(t)f(t)\,{\mathrm{d}}t\approx \sum_{j=1}^n\sum_{i=0}^{2s}\lambda_{ji}f^{(i)}(x_j)+\sum_{j=1}^{n+1}\gamma_jf(\tau_j)$$ (5.5) represent a Kronrod extension of the Gauss–Turán quadrature formula (5.1) of type (3.7) with respect to the weight function $$\omega(t)$$, $$t\in[a,b]$$, which has $$\mathrm{ADP}(\text{5.5})=2n(s+1)+n+1$$. If the corresponding monic $$s$$-orthogonal polynomial $$\pi_{n,s}$$ agrees with the monic ordinary orthogonal polynomial $$P_n$$, based on the zeros $$\{x_j\}_{j=1}^n$$, then the quadrature formula for the Fourier coefficient $$a_n(f)=\int_a^b \omega(t) P_n(t)f(t)\,{\mathrm{d}}t$$, obtained from (5.5), has the form   $$\int_a^b \omega(t)f(t)P_n(t)\,{\mathrm{d}}t\approx \sum_{j=1}^n\sum_{i=0}^{2s-1}\widehat\lambda_{ji}f^{(i)}(x_j)+\sum_{j=1}^{n+1}\widehat\gamma_jf(\tau_j)=:\mathcal{TK}_{n,s}(f)$$ (5.6) and $$\mathrm{ADP}(\text{5.6})=2n(s+1)+1$$. It is a Kronrod extension of the quadrature formula (5.2). Quadrature (4.4) is the particular case of quadrature (5.6) for $$\omega(t)=1/\sqrt{1-t^2}$$, $$t\in[-1,1]$$. If we know the weight coefficients $$\lambda_{ji}$$ and $$\gamma_j$$ in (5.5), then we can find the weight coefficients $$\widehat \lambda_{ji}$$ and $$\widehat\gamma_j$$ in (5.6) in a way similar to that used for the standard quadratures in the first part of this section.
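For illustration, the transformation (5.4) can be coded directly once the Gauss–Turán weights $$a_{ji}$$ and the derivative values $$P_n^{(k)}(x_j)$$ are available; the input arrays in the sketch below are hypothetical placeholders used only to exercise the formula, not values from this article.

```python
import numpy as np
from math import comb

# Sketch of (5.4): transform the Gauss-Turan weights a[j, i] (i = 0..2s) into
# the weights \hat a[j, i] (i = 0..2s-1) of the Fourier-coefficient rule (5.2).
# Pn_der[j, k] holds P_n^{(k)}(x_j); both input arrays are assumed given.
def fourier_weights(a, Pn_der, s):
    n = a.shape[0]
    a_hat = np.zeros((n, 2 * s))
    for j in range(n):
        for i in range(2 * s):
            a_hat[j, i] = sum(a[j, k + 1] * comb(k + 1, i) * Pn_der[j, k + 1 - i]
                              for k in range(i, 2 * s))
    return a_hat
```

Note that the column $$a_{j0}$$ of the Gauss–Turán weights does not enter (5.4) at all, since the $$i=0$$ terms drop out when $$f$$ is replaced by $$fP_n$$.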
Therefore,   $$\widehat \lambda_{ji}=\sum_{k=i}^{2s-1}\lambda_{j,k+1}\binom{k+1}{i}P_n^{(k+1-i)}(x_j),\quad j=1,2,\dots,n;\ i=0,1,\dots,2s-1,$$ (5.7) and   $$\widehat \gamma_{j}=\gamma_jP_n(\tau_j),\quad j=1,2,\dots,n+1.$$ (5.8) For instance, acting in a similar way, we can get the coefficients $$\widetilde c_{\mu j}^*$$ in (4.17) from the coefficients $$\widehat c_{\mu k}^*$$ in (4.16), namely,   $$\widetilde c_{\mu j}^*=\sum_{k=j}^{2s}\widehat c_{\mu k}^*\binom{k}{j}\frac{{\mathrm{d}}^{k-j}}{{\mathrm{d}}t^{k-j}}\left[P_n^{(1/2,-1/2)}(t)\right]_{t=x_\mu^*};\quad \mu=2,\dots,n+1;\ j=0,1,\dots,2s.$$ (5.9) For computing the coefficients in (5.4) and (5.7), we need the derivatives $$P_n^{(k)}(x_j)$$; we now describe a method for computing them. First, denoting by $$P_n$$ the corresponding monic orthogonal polynomial, set   $P_n(t)=\prod_{i=1}^n(t-x_i)=(t-x_j)\cdot h_j(t),$ where   $h_j(t)=\prod_{i=1,i\ne j}^n(t-x_i).$ Since   $P_n^{(k)}(t)=\sum_{l=0}^k\binom{k}{l}(t-x_j)^{(l)}h_j^{(k-l)}(t) =(t-x_j)h_j^{(k)}(t)+kh_j^{(k-1)}(t)\,,$ we get   $P_n^{(k)}(x_j)=kh_j^{(k-1)}(x_j).$ Therefore,   $$P_n^{(k)}(x_j)=\left\{ \begin{array}{ll} 0,&\quad k=0,\\ 0,&\quad k>n,\\ n!,&\quad k=n,\\ k\cdot h_j^{(k-1)}(x_j),& \quad k=1,\dots,n-1. \end{array} \right.$$ (5.10) Let $$t\in(x_{j-1},x_{j+1})$$.
We have $$h_j(t)=(-1)^{n-j}\,g_j(t)$$, where   $g_j(t)=(t-x_1)\cdots(t-x_{j-1})(x_{j+1}-t)\cdots(x_n-t)\quad (g_j(t)>0\ \mbox{for}\ t\in(x_{j-1},x_{j+1})),$ and   $h_j^{(k-1)}(x_j)=(-1)^{n-j}g_j^{(k-1)}(x_j)=(-1)^{n-j}\left[e^{Q_j(t)}\right]^{(k-1)}_{t=x_j},$ where   $Q_j(t)=\sum_{i=1,i\ne j}^n q_i(t),\quad q_i(t)=\log |t-x_i|.$ Using Milovanović & Spalević (1998, Lemma 2.1), we have   $$\bigl(e^{Q_j(x_j)}\bigr)^{(0)}=e^{Q_j(x_j)},\quad \left(e^{Q_j}\right)^{(k-1)}_{t=x_j}=\sum_{l=1}^{k-1}\binom{k-2}{l-1}Q_j^{(l)}(x_j)\left(e^{Q_j}\right)^{(k-l-1)}_{t=x_j}\ \ (k\ge2).$$ (5.11) Finally,   $Q_j^{(l)}(x_j)=\sum_{i=1,i\ne j}^n q_i^{(l)}(x_j),$ where $$q_i^{(0)}(x_j)=\log |x_j-x_i|$$ and   $q_i^{(l)}(x_j)=(-1)^{l-1}\frac{(l-1)!}{(x_j-x_i)^l},\quad l\in\mathbb{N}, \quad i=1,\dots,n,\ i\ne j,$ which is what we need in (5.11). But if we have to compute the coefficients $$\widetilde c_{\mu j}^*$$ at a point $$x_\mu^*$$ of the interval $$[-1,1]$$ that is not a zero of the corresponding monic orthogonal polynomial $$P_n$$ (as, e.g., in (5.9)), we can proceed as follows. Since   \begin{equation*} P_n^{(k)}(x_\mu^*)=\left\{ \begin{array}{ll} 0,&\quad k>n,\\ n!,&\quad k=n,\\ \prod_{i=1}^n(x_\mu^*-x_i),& \quad k=0, \end{array} \right. \end{equation*} it only remains to determine $$P_n^{(k)}(x_\mu^*)$$ for $$k=1,2,\dots,n-1$$. In all the cases considered, we have $$x_\mu^*\in[-1,1]$$. Let us define $$x_0=-1$$ and $$x_{n+1}=1$$. The point $$x_\mu^*$$ lies in one of the intervals $$I_0=[x_0,x_1)$$, $$I_1=(x_1,x_2)$$, $$\dots$$, $$I_{n-1}=(x_{n-1},x_n)$$, $$I_n=(x_n,1]$$. If $$x_\mu^*\in I_j$$, then we take $$t\in I_j$$, $$j=0,1,\dots,n$$. Let $$x_\mu^*\in I_j$$. Then $$P_n(t)=(-1)^{n-j}g_j(t)$$, where $$g_j(t)=\prod_{i=1}^n|t-x_i|>0$$, i.e., $$g_j(t)=\prod_{i=1}^j(t-x_i)\cdot \prod_{i=j+1}^n(x_i-t)$$, with the conventions $$\prod_{i=1}^{0}\cdot=\prod_{i=n+1}^{n}\cdot=1$$.
Now,   $P_n^{(k)}(x_\mu^*)=(-1)^{n-j}g_j^{(k)}(x_\mu^*)=(-1)^{n-j}\left[e^{Q_j(t)}\right]^{(k)}_{t=x_\mu^*},$ where   $Q_j(t)=\sum_{i=1}^n q_i(t),\quad q_i(t)=\log |t-x_i|.$ Similarly to the above, for $$k\ge1$$, we have   $$\left(e^{Q_j(x_\mu^*)}\right)^{(0)}=e^{Q_j(x_\mu^*)},\quad \left(e^{Q_j}\right)^{(k)}_{t=x_\mu^*}=\sum_{l=1}^{k}\binom{k-1}{l-1}Q_j^{(l)}(x_\mu^*)\left(e^{Q_j}\right)^{(k-l)}_{t=x_\mu^*}.$$ (5.12) Finally,   $Q_j^{(l)}(x_\mu^*)=\sum_{i=1}^n q_i^{(l)}(x_\mu^*),$ where   $q_i^{(0)}(x_\mu^*)=\log |x_\mu^*-x_i|,\quad q_i^{(l)}(x_\mu^*)=(-1)^{l-1}\frac{(l-1)!}{(x_\mu^*-x_i)^l},\quad l\in\mathbb{N}, \quad i=1,\dots,n,$ which is what we need in (5.12).

Example 5.1 We now display some numerical results on the computation of the weights and on the estimation of the error of quadrature formulas with multiple nodes for computing Fourier coefficients. For this purpose, let us consider the calculation of the integrals   $$I_n=\frac{1}{2^{n-1}}\int_{-1}^{1}\frac{e^{10t}\,T_n(t)}{\sqrt{1-t^2}}\,{\mathrm{d}}t=\int_{-1}^{1}\frac{e^{10t}\,\widehat T_n(t)}{\sqrt{1-t^2}}\,{\mathrm{d}}t,\quad n\in\mathbb{N}_0,$$ (5.13) by the quadrature formula (5.2) of Gaussian type, where $$P_n\equiv\widehat T_n$$. In the case $$s=1$$, it reduces to the well-known Micchelli–Rivlin quadrature formula (2.8). We also calculate the quadrature formula (5.6), with $$P_n\equiv\widehat T_n$$, which is the Kronrod extension of (5.2), to estimate its quadrature error.
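The preceding logarithmic-derivative computation of $$P_n^{(k)}(x_j)$$, via (5.10) and the recursion (5.11), can be sketched in a few lines; the zeros below are illustrative placeholders (not tied to any particular weight), and direct polynomial differentiation serves only as a cross-check.

```python
import numpy as np
from math import comb, factorial

x = np.array([-0.8, -0.3, 0.2, 0.7])   # illustrative distinct zeros of a monic P_n
n = len(x)

def Pn_deriv(j, k):
    """P_n^{(k)}(x_j) via (5.10) and the recursion (5.11); j is 0-based."""
    if k == 0 or k > n:
        return 0.0
    if k == n:
        return float(factorial(n))
    others = np.delete(x, j)
    # Q_j^{(l)}(x_j) = sum over i != j of (-1)^{l-1} (l-1)! / (x_j - x_i)^l
    Ql = lambda l: sum((-1) ** (l - 1) * factorial(l - 1) / (x[j] - xi) ** l
                       for xi in others)
    g = [float(np.prod(np.abs(x[j] - others)))]      # g_j(x_j) = e^{Q_j(x_j)}
    for m in range(1, k):                            # recursion (5.11) for g_j^{(m)}(x_j)
        g.append(sum(comb(m - 1, l - 1) * Ql(l) * g[m - l]
                     for l in range(1, m + 1)))
    # h_j = (-1)^{n-j} g_j (1-based j), and P_n^{(k)}(x_j) = k h_j^{(k-1)}(x_j)
    return k * (-1) ** (n - 1 - j) * g[k - 1]
```

The same loop with $$\binom{k-1}{l-1}$$ and the point $$x_\mu^*$$ in place of $$x_j$$ implements the recursion (5.12).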
For example, if we denote the quadrature sums in (5.2) and (5.6) by $$Q(\text{5.2})$$ and $$Q(\text{5.6})$$, respectively, we can use the well-known method of estimating the error in (5.2) by means of the difference   ${\rm{Err}}_{n,s}=\left|Q(\text{5.2})-Q(\text{5.6})\right|.$ Recall that the quadrature nodes $$x_j=\xi_j$$ and $$\tau_j=\eta_j$$ in (5.2) and (5.6) are given by   $x_j=-\cos \frac{(2j-1)\pi}{2n} \quad (j=1,\dots,n)\quad \mbox{and}\quad \tau_j=-\cos \frac{(j-1)\pi}{n}\quad (j=1,2,\dots,n+1).$ For computing $$\widehat T_n^{(p)}(x_j)=T_n^{(p)}(x_j)/2^{n-1}$$, $$p\in\mathbb{N}$$, we can use, in this special case, the following method. Szegő (1975, Equation (4.21.7)) gives the values of $$T_n'(x_j)$$; then, differentiating the differential equation Szegő (1975, Equation (4.2.1)) $$k$$ times yields   $(1-t^2)T_n^{(k+2)}(t)-(2k+1)t T_n^{(k+1)}(t)+(n^2-k^2)T_n^{(k)}(t)=0,$ which allows us to obtain $$T_n^{(k+2)}(x_j)$$ (for $$j=1,2,\dots,n$$), namely,   $T^{(k+2)}_n(x_j)=\frac{(2k+1)x_j\, T^{(k+1)}_n(x_j)-(n^2-k^2) T^{(k)}_n(x_j)}{1-x_j^2},\quad 1\le k\le n-3,$ where   $T'_n(x_j)=nU_{n-1}(x_j)=\frac{n(-1)^{n-j}}{\sin \dfrac{(2j-1)\pi}{2n}}\quad\mbox{and}\quad T''_n(x_j)=\frac{x_j}{1-x_j^2} T'_n(x_j).$ It is clear that $$\widehat T^{(n)}_n(x_j)=n!$$ and $$\widehat T^{(k)}_n(x_j)=0$$ for $$k>n$$. Knowing the weight coefficients $$a_{ji}$$ in (5.1) and $$\lambda_{ji}$$, $$\gamma_j$$ in (5.5), and using (5.4), (5.7) and (5.8), we can compute the weight coefficients $$\widehat a_{ji}$$ in (5.2) and $$\widehat \lambda_{ji}$$, $$\widehat \gamma_j$$ in (5.6). For $$n=12$$ and $$s=2$$, the weight coefficients $$\widehat a_{ji}$$ (Tables 1 and 2) and $$\widehat \lambda_{ji}$$ (Tables 3 and 4) are displayed (recall the well-known phenomenon of the nonpositivity of some of the weights).
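The derivative recurrence above is straightforward to implement; the following sketch (with illustrative values of $$n$$, $$j$$ and the maximal order) computes $$T_n^{(k)}(x_j)$$ at a zero of $$T_n$$, using numpy's Chebyshev module only as a cross-check.

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb

# Sketch of the three-term derivative recurrence for T_n at its zeros
# x_j = -cos((2j-1)pi/(2n)), j = 1..n; n, j and kmax below are illustrative.
def Tn_derivs_at_zero(n, j, kmax):
    theta = (2 * j - 1) * np.pi / (2 * n)
    xj = -np.cos(theta)
    d = np.zeros(kmax + 1)                      # d[k] = T_n^{(k)}(x_j); d[0] = 0
    d[1] = n * (-1) ** (n - j) / np.sin(theta)  # T_n'(x_j) = n U_{n-1}(x_j)
    for k in range(0, kmax - 1):                # k = 0 reproduces T_n''(x_j)
        d[k + 2] = ((2 * k + 1) * xj * d[k + 1]
                    - (n**2 - k**2) * d[k]) / (1 - xj**2)
    return d
```

Note that the case $$k=0$$ of the recurrence is just the Chebyshev differential equation itself evaluated at $$x_j$$, which reproduces the stated expression for $$T_n''(x_j)$$.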
Table 1 The weight coefficients $$\widehat a_{ji}$$ in (5.2) for $$n=12$$, $$s=2$$

$$j$$   $$\widehat a_{j0}$$   $$\widehat a_{j1}$$
1    0.0   -7.815241282671094e-07
2    0.0    2.291312806989612e-06
3    0.0   -3.644952304489222e-06
4    0.0    4.750194326006378e-06
5    0.0   -5.531718454273488e-06
6    0.0    5.936265111478834e-06
7    0.0   -5.936265111478834e-06
8    0.0    5.531718454273488e-06
9    0.0   -4.750194326006378e-06
10   0.0    3.644952304489222e-06
11   0.0   -2.291312806989612e-06
12   0.0    7.815241282671094e-07

Table 2 The weight coefficients $$\widehat a_{ji}$$ in (5.2) for $$n=12$$, $$s=2$$

$$j$$   $$\widehat a_{j2}$$   $$\widehat a_{j3}$$
1    -1.794991693459627e-09   -1.028177177832354e-11
2     4.904008505695800e-09    2.591158236863608e-10
3    -6.699000199155427e-09   -1.043076922624359e-09
4     6.699000199155427e-09    2.308739415256665e-09
5    -4.904008505695800e-09   -3.646036326218161e-09
6     1.794991693459627e-09    4.505890692801156e-09
7     1.794991693459627e-09   -4.505890692801156e-09
8    -4.904008505695800e-09    3.646036326218161e-09
9     6.699000199155427e-09   -2.308739415256665e-09
10   -6.699000199155427e-09    1.043076922624359e-09
11    4.904008505695800e-09   -2.591158236863608e-10
12   -1.794991693459627e-09    1.028177177832354e-11

Table 3 The weight coefficients $$\widehat \lambda_{ji}$$ in (5.6) for $$n=12$$, $$s=2$$

$$j$$   $$\widehat \lambda_{j0}$$   $$\widehat \lambda_{j1}$$
1    0.0   -2.750924698597869e-07
2    0.0    8.065303123702174e-07
3    0.0   -1.283004446946978e-06
4    0.0    1.672043948729401e-06
5    0.0   -1.947136418589188e-06
6    0.0    2.089534759317196e-06
7    0.0   -2.089534759317196e-06
8    0.0    1.947136418589188e-06
9    0.0   -1.672043948729401e-06
10   0.0    1.283004446946978e-06
11   0.0   -8.065303123702174e-07
12   0.0    2.750924698597869e-07

Table 4 The weight coefficients $$\widehat \lambda_{ji}$$ in (5.6) for $$n=12$$, $$s=2$$

$$j$$   $$\widehat \lambda_{j2}$$   $$\widehat \lambda_{j3}$$
1    -2.991652822432712e-10   -1.713628629720590e-12
2     8.173347509493001e-10    4.318597061439347e-11
3    -1.116500033192571e-09   -1.738461537707264e-10
4     1.116500033192571e-09    3.847899025427774e-10
5    -8.173347509493001e-10   -6.076727210363602e-10
6     2.991652822432712e-10    7.509817821335261e-10
7     2.991652822432712e-10   -7.509817821335261e-10
8    -8.173347509493001e-10    6.076727210363602e-10
9     1.116500033192571e-09   -3.847899025427774e-10
10   -1.116500033192571e-09    1.738461537707264e-10
11    8.173347509493001e-10   -4.318597061439347e-11
12   -2.991652822432712e-10    1.713628629720590e-12

The weight coefficients $$\widehat \gamma_{j}$$ are   \begin{eqnarray*} \widehat\gamma_1&=&\widehat\gamma_{13}=1.997370817559429e{-}05,\\ \widehat\gamma_{2}&=&\widehat\gamma_{4}=\widehat\gamma_{6}=\widehat\gamma_{8}=\widehat\gamma_{10}=\widehat\gamma_{12}=-3.994741635118857e{-}05,\\ \widehat\gamma_{3}&=&\widehat\gamma_{5}=\widehat\gamma_{7}=\widehat\gamma_{9}=\widehat\gamma_{11}=3.994741635118857e{-}05. \end{eqnarray*}

We now show the results obtained when computing the Fourier coefficients of the function $$f(t)=e^{10 t}$$ by means of the quadrature sums $$\mathcal{T}_{n,s}(f)$$ and $$\mathcal{TK}_{n,s}(f)$$ of the formulas (5.2) and (5.6), respectively, in this case with the Chebyshev weight function of the first kind. With this aim, in Table 5 the corresponding relative error estimates $${\rm{Err}}_{\mathcal{T}_{n,s}}(f)$$, for $$n=6(2)20$$ and $$s=1,2$$, as well as the actual values of $$a_n(f)$$, are displayed.
Table 5 The error estimates $${\rm{Err}}_{\mathcal{T}_{n,s}}(f)$$, where $$f(t)=e^{10 t}$$, for some values of $$n$$ and for $$s=1,2$$, together with the actual values of $$a_n(f)$$

$$n$$   $${\rm{Err}}_{\mathcal{T}_{n,1}}(f)$$   $${\rm{Err}}_{\mathcal{T}_{n,2}}(f)$$   $$a_n(f)$$
6     1.4248e-05   1.7333e-13   4.41...e+01
8     6.6270e-09   1.7594e-21   2.84...e+00
10    1.0672e-12   2.1729e-30   1.34...e-01
12    7.3663e-17   5.0383e-40   4.77...e-03
14    2.5208e-21   2.9278e-50   1.31...e-04
16    4.7467e-26   5.2338e-61   2.88...e-06
18    2.7192e-31   3.3522e-72   5.11...e-08
20    3.7615e-36   8.6532e-84   7.49...e-10

We have also performed similar computations of the relative errors $${\rm{Err}}_{\mathcal{T}_{n,s}}(f)$$ for the function $$f(t)=e^{\cos(\alpha t)}$$ ($$\alpha>0$$), which is a highly oscillating function for large values of $$\alpha$$. They are displayed in Table 6, for the case where $$\alpha = 10$$, for $$n=10(10)60$$ and $$s=1,2,3$$, together with the actual values of $$a_n(f)$$.
Table 6 The error estimates $${\rm{Err}}_{\mathcal{T}_{n,s}}(f)$$, where $$f(t)=e^{\cos(10 t)}$$, for some values of $$n$$ and for $$s=1,2,3$$, together with the actual values of $$a_n(f)$$

$$n$$   $${\rm{Err}}_{\mathcal{T}_{n,1}}(f)$$   $${\rm{Err}}_{\mathcal{T}_{n,2}}(f)$$   $${\rm{Err}}_{\mathcal{T}_{n,3}}(f)$$   $$a_n(f)$$
10    6.2249e-02   2.2112e-03   4.7866e-05   -1.71...e-03
20    3.4826e-04   1.7860e-08   2.3481e-13    2.73...e-07
30    7.7364e-07   1.9331e-14   5.7548e-23   -3.43...e-11
40    8.3891e-10   5.4078e-21   1.9948e-33    3.73...e-15
50    5.4015e-13   5.7080e-28   1.6800e-44   -3.51...e-19
60    2.3444e-16   2.8500e-35   4.7292e-56    2.88...e-23

The proposed numerical construction of the quadratures introduced in this article is based, first, on the general method for constructing standard quadrature formulas with multiple nodes (see Milovanović et al., 2004) and, second, on the construction of the weight coefficients in the quadratures for computing Fourier coefficients by formulas of the form (2.6), where the known weight coefficients of the standard quadratures with multiple nodes are calculated by the method given in Milovanović et al. (2004). Since all the numerical methods we use are numerically stable, the numerical construction we propose for the new quadrature formulas is also stable.

Remark 5.2 As suggested by one of the referees, for smaller values of $$n$$, and when $$f$$ is not a highly oscillating function, for the computation of the Fourier coefficients   $$a_n(f)=\int_{-1}^1\frac{K_n(t)\,{\rm d}t}{\sqrt{1-t^2}},\quad K_n(t)=f(t)T_n(t),$$ (5.14) we can use the quadrature formula Monegato (1982, Equation (43), p.
152), which has the form   $$\int_{-1}^1\frac{K_n(t)\,{\rm d}t}{\sqrt{1-t^2}}\approx\frac{\pi}{2m}\left[\frac 12 K_n(-1)+\sum_{i=1}^{2m-1}K_n\left(\cos \frac{i\pi}{2m}\right)+\frac 12 K_n(1)\right]=:\mathcal{GL}_{2m+1}(K_n),\quad m\ge 2,$$ (5.15) and the basic Gaussian rule,   $$\int_{-1}^1\frac{K_n(t)\,{\rm d}t}{\sqrt{1-t^2}}\approx\frac{\pi}{m}\sum_{i=1}^{m}K_n\left(\cos \frac{(2i-1)\pi}{2m}\right)=:\mathcal{G}_{m}(K_n),\quad m\ge 1,$$ (5.16) which is optimally extended by (5.15). Formulas (5.16) and (5.15), and (5.15) with $$m$$ replaced by $$2m$$, as well as their relative errors   \begin{eqnarray*} {\rm{Err}}_{\mathcal{G}_m}(K_n)&=&\left|\mathcal{GL}_{2m+1}(K_n)-\mathcal{G}_m(K_n)\right|/\left|\mathcal{GL}_{2m+1}(K_n)\right|,\\ {\rm{Err}}_{\mathcal{GL}_{2m+1}}(K_n)&=&\left|\mathcal{GL}_{4m+1}(K_n)-\mathcal{GL}_{2m+1}(K_n)\right|/\left|\mathcal{GL}_{4m+1}(K_n)\right|, \end{eqnarray*} can be calculated at a smaller computational cost and are effective as $$m$$ increases. The same holds for the other weight functions dealt with in this article, using the quadrature formulas Monegato (1982, Equations (44)–(46)). However, even for functions $$f$$ that do not oscillate too much, the integrand $$K_n$$ becomes highly oscillating for larger values of $$n$$ and, thus, to achieve an acceptable accuracy with the standard Gauss-type formulas, the number of nodes needed becomes 'astronomically' large (see Iserles, 2006; Iserles et al., 2006). For instance, for the function $$f(t)=e^{10t}$$, which is not a highly oscillating function (although $$K_n(t)$$ becomes highly oscillating as $$n$$ increases), we performed the calculations using higher arithmetic precision.
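The rules (5.16) and (5.15) are easy to sketch in code; the integrand $$f(t)=e^{10t}$$ with $$n=6$$ and the values of $$m$$ below are illustrative choices, not prescriptions from the text.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

# Sketch of the Gauss-Chebyshev rule (5.16) and its Gauss-Lobatto extension
# (5.15) applied to K_n(t) = f(t) T_n(t).
def gauss_cheb(m, K):                          # (5.16), m nodes
    t = np.cos((2 * np.arange(1, m + 1) - 1) * np.pi / (2 * m))
    return np.pi / m * np.sum(K(t))

def gauss_lobatto_cheb(m, K):                  # (5.15), 2m+1 nodes incl. endpoints
    t = np.cos(np.arange(2 * m + 1) * np.pi / (2 * m))
    w = np.full(2 * m + 1, np.pi / (2 * m))
    w[0] = w[-1] = np.pi / (4 * m)             # half weights at t = 1 and t = -1
    return np.sum(w * K(t))

n = 6
K = lambda t: np.exp(10 * t) * chebval(t, [0] * n + [1])   # K_n = f T_n, f = e^{10t}
err = abs(gauss_lobatto_cheb(12, K) - gauss_cheb(12, K))   # error estimate for G_12
```

The difference `err` mimics the estimate $${\rm{Err}}_{\mathcal{G}_m}$$ above (before normalization by $$|\mathcal{GL}_{2m+1}(K_n)|$$).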
The results show that, for small values of $$n$$, we can use $$\mathcal{G}_{m}$$ and $$\mathcal{GL}_{2m+1}$$ to calculate the Fourier–Chebyshev coefficients instead of our quadratures, since they are simpler to implement and have a smaller computational cost. By increasing $$m$$, the precision of $$\mathcal{G}_{m}$$ and $$\mathcal{GL}_{2m+1}$$ improves; see the case $$n=6$$ in Table 7. (Of course, our quadratures can also be used in these cases, by means of the given software; by increasing $$s$$, their precision for a given $$a_n(f)$$, with $$n$$ fixed, improves.) The problem with the formulas $$\mathcal{G}_{m}$$ and $$\mathcal{GL}_{2m+1}$$ arises when $$n$$ increases. Indeed, on the basis of the performed calculations, we observed that $${\rm{Err}}_{\mathcal{G}_m}$$ (and likewise $${\rm Err}_{\mathcal{GL}_{2m+1}}$$), for $$m$$ fixed, increases with $$n$$. The situation is considerably worse for larger values of $$n$$; in such a case, say $$n=50$$ (see Table 7), we should take $$m$$ much larger than $$20$$ to obtain satisfactory precision.
Table 7. The error estimates $${\rm{Err}}_{\mathcal{G}_m}(K_n)$$ and $${\rm{Err}}_{\mathcal{GL}_{2m+1}}(K_n)$$ for $$f(t)=e^{10t}$$, with $$n=6$$, $$n=50$$, and selected values of $$m$$.

| $$n$$ | $$m$$ | $${\rm{Err}}_{\mathcal{G}_m}$$ | $${\rm{Err}}_{\mathcal{GL}_{2m+1}}$$ |
|------|------|----------------|------------------------|
| 6  | 6  | 1.0e+00    | 4.7494e-06 |
| 6  | 7  | 2.5833e-01 | 1.3677e-08 |
| 6  | 8  | 4.8724e-02 | 2.0455e-11 |
| 6  | 9  | 6.9280e-03 | 1.7333e-14 |
| 6  | 10 | 7.6377e-04 | 8.9020e-18 |
| 6  | 12 | 4.7494e-06 | 6.4177e-25 |
| 6  | 14 | 1.3677e-08 | 1.0587e-32 |
| 6  | 16 | 2.0455e-11 | 5.0111e-41 |
| 6  | 18 | 1.7333e-14 | 8.0393e-50 |
| 6  | 20 | 8.9020e-18 | 4.9675e-59 |
| 50 | 6  | 9.7456e-03 | 2.6974e-09 |
| 50 | 7  | 2.5833e-01 | 1.3677e-08 |
| 50 | 8  | 6.6074e+03 | 6.2183e-03 |
| 50 | 9  | 3.5740e+03 | 5.5845e+04 |
| 50 | 10 | 1.0e+00    | 2.8111e+12 |
| 50 | 12 | 2.6974e-09 | 5.2045e+28 |
| 50 | 14 | 1.3677e-08 | 9.4453e+31 |
| 50 | 16 | 6.2183e-03 | 7.2140e+28 |
| 50 | 18 | 5.5845e+04 | 1.2918e+24 |
| 50 | 20 | 2.8111e+12 | 1.6371e+12 |

6. Conclusion

We have introduced Gaussian-type quadratures with multiple nodes, together with their Kronrod or modified Kronrod extensions, for computing Fourier–Chebyshev coefficients relative to the four classical Chebyshev weight functions. The existence and uniqueness of these quadratures are proved; one of them generalizes the well-known Micchelli–Rivlin quadrature formula, whereas the others are new. The reason for using these quadrature formulas with multiple nodes, in place of the usual Gauss-type quadratures, is that the latter often perform poorly because of the highly oscillatory character of the integrand $$K_n$$, especially for large values of $$n$$. Furthermore, a numerically stable construction of these quadratures is proposed. By taking the absolute value of the difference between a Gaussian quadrature with multiple nodes for a Fourier–Chebyshev coefficient and its corresponding optimal extension, we obtain a well-known method for estimating its error. These results are illustrated by means of some numerical examples.

Acknowledgements

We are grateful to the anonymous referees for useful comments that allowed us to improve the first version of the article.
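The error-estimation principle used throughout (the absolute difference between a quadrature approximation and that of its extension) can be sketched in a few lines. The following Python code is a deliberately simplified illustration, not the paper's method: instead of the multiple-node Gauss–Turán-type rules constructed here, it uses the ordinary Gauss–Chebyshev rule for the first-kind weight to approximate a Fourier–Chebyshev coefficient $$a_k(f)$$, and estimates the error by comparing the $$m$$-point rule with a finer one, in the spirit of a Kronrod-type check. The function names are hypothetical.

```python
import math

def fourier_chebyshev_coeff(f, k, m):
    """Approximate the Fourier-Chebyshev coefficient
       a_k(f) = (2/pi) * integral of f(t) T_k(t) / sqrt(1 - t^2) over [-1, 1]
    with the m-point Gauss-Chebyshev rule (first-kind weight).
    At the nodes x_j = cos(theta_j) one has T_k(x_j) = cos(k * theta_j)."""
    total = 0.0
    for j in range(m):
        theta = (2 * j + 1) * math.pi / (2 * m)
        total += math.cos(k * theta) * f(math.cos(theta))
    return (2.0 / m) * total

def coeff_error_estimate(f, k, m):
    """Simple-node analogue of the paper's error estimate:
    the absolute difference between the m-point rule and a finer rule."""
    return abs(fourier_chebyshev_coeff(f, k, m)
               - fourier_chebyshev_coeff(f, k, 2 * m + 1))

# For f = T_3 the coefficient a_3 equals 1, and the m-point rule with
# m >= 4 reproduces it exactly (degree of exactness 2m - 1).
t3 = lambda t: 4 * t**3 - 3 * t
print(fourier_chebyshev_coeff(t3, 3, 5))              # close to 1
print(coeff_error_estimate(lambda t: math.exp(10 * t), 3, 12))
```

For smooth (entire) integrands such as $$e^{10t}$$ the estimate decays rapidly as $$m$$ grows, mirroring the behaviour in the upper half of Table 7; the divergent entries for $$n=50$$ and large $$m$$ reflect the oscillatory factor $$K_n$$, which this simplified sketch does not reproduce.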
Funding

Serbian Academy of Sciences and Arts (No. $${\it{\Phi}}$$-96 to G.V.M., in part); Serbian Ministry of Education, Science and Technological Development (174015 and 174002 to G.V.M. and M.M.S., in part); Spanish Ministerio de Ciencia e Innovación (MTM2015-71352-P to R.O., in part).

References

- Bernstein, S. (1930) Sur les polynômes orthogonaux relatifs à un segment fini. J. Math. Pures Appl., 9, 127–177.
- Bojanov, B. & Petrova, G. (2009) Quadrature formulae for Fourier coefficients. J. Comput. Appl. Math., 231, 378–391.
- Calvetti, D., Golub, G. H., Gragg, W. B. & Reichel, L. (2000) Computation of Gauss–Kronrod rules. Math. Comp., 69, 1035–1052.
- Chakalov, L. (1954) General quadrature formulae of Gaussian type. Bulg. Akad. Nauk. Izv. Mat. Inst., 2, 67–84.
- Cvetković, A. S., Matejić, M. M. & Milovanović, G. V. (2016) Orthogonal polynomials for modified Chebyshev measure of the first kind. Results Math., 69, 443–455.
- Cvetković, A. S. & Milovanović, G. V. (2004) The Mathematica package 'Orthogonal Polynomials'. Facta Univ. Ser. Math. Inform., 19, 17–36.
- Cvetković, A. S. & Spalević, M. M. (2014) Estimating the error of Gauss–Turán quadrature formulas using their extensions. Electron. Trans. Numer. Anal., 41, 1–12.
- DeVore, R. (1974) A property of Chebyshev polynomials. J. Approx. Theory, 12, 418–419.
- Engels, H. (1980) Numerical Quadrature and Cubature. London: Academic Press.
- Gautschi, W. (1987) Gauss–Kronrod quadrature – a survey. Numerical Methods and Approximation Theory III, Niš (G. V. Milovanović, ed.). Niš: Faculty of Electronic Engineering, Univ. Niš, pp. 39–66.
- Gautschi, W. (2001) OPQ suite. Available at http://www.cs.purdue.edu/archives/2001/wxg/codes (last accessed 30 March 2009).
- Gautschi, W. (2004) Orthogonal Polynomials: Computation and Approximation. Oxford: Oxford University Press.
- Gautschi, W. (2014) High-precision Gauss–Turán quadrature rules for Laguerre and Hermite weight functions. Numer. Algor., 67, 59–72.
- Gautschi, W. & Milovanović, G. V. (1997) $$S$$-orthogonality and construction of Gauss–Turán-type quadrature formulae. J. Comput. Appl. Math., 86, 205–218.
- Ghizzetti, A. & Ossicini, A. (1970) Quadrature Formulae. Berlin: Akademie.
- Ghizzetti, A. & Ossicini, A. (1975) Sull'esistenza e unicità delle formule di quadratura gaussiane. Rend. Mat., 8, 1–15.
- Golub, G. H. & Welsch, J. H. (1969) Calculation of Gauss quadrature rules. Math. Comp., 23, 221–230.
- Gori, L. & Micchelli, C. A. (1996) On weight functions which admit explicit Gauss–Turán quadrature formulas. Math. Comp., 65, 1567–1581.
- Iserles, A. (2013) Three stories of high oscillation. Bull. EMS, 87, 18–23. Available at http://www.damtp.cam.ac.uk/user/na/NA_papers/NA2013_02.pdf.
- Iserles, A., Norsett, S. P. & Olver, S. (2006) Highly oscillatory quadrature: the story so far. Proceedings of ENuMath, Santiago de Compostela (Bermudez de Castro et al., eds). Berlin: Springer, pp. 97–118. Available at http://www.damtp.cam.ac.uk/user/na/NA_papers/NA2005_06.pdf.
- Kahaner, D. K. & Monegato, G. (1978) Nonexistence of extended Gauss–Laguerre and Gauss–Hermite quadrature rules with positive weights. Z. Angew. Math. Phys., 29, 983–986.
- Laurie, D. P. (1997) Calculation of Gauss–Kronrod quadrature rules. Math. Comp., 66, 1133–1145.
- Li, S. (1994) Kronrod extension of Turán formula. Studia Sci. Math. Hungar., 29, 71–83.
- Micchelli, C. A. & Rivlin, T. J. (1972) Turán formulae and highest precision quadrature rules for Chebyshev coefficients. IBM J. Res. Dev., 16, 372–379.
- Micchelli, C. A. & Rivlin, T. J. (1974) Some new characterizations of the Chebyshev polynomials. J. Approx. Theory, 12, 420–424.
- Micchelli, C. A. & Sharma, A. (1983) On a problem of Turán: multiple node Gaussian quadrature. Rend. Mat., 3, 529–552.
- Milovanović, G. V. (2001) Quadratures with multiple nodes, power orthogonality, and moment-preserving spline approximation. J. Comput. Appl. Math., 127, 267–286.
- Milovanović, G. V. & Cvetković, A. S. (2012) Special classes of orthogonal polynomials and corresponding quadratures of Gaussian type. Math. Balkanica, 26, 169–184.
- Milovanović, G. V. & Spalević, M. M. (1998) Construction of Chakalov–Popoviciu's type quadrature formulae. Rend. Circ. Mat. Palermo, 52, 625–636.
- Milovanović, G. V. & Spalević, M. M. (2014) Kronrod extensions with multiple nodes of quadrature formulas for Fourier coefficients. Math. Comput., 83, 1207–1231.
- Milovanović, G. V., Spalević, M. M. & Cvetković, A. S. (2004) Calculation of Gaussian type quadratures with multiple nodes. Math. Comput. Model., 39, 325–347.
- Monegato, G. (1982) Stieltjes polynomials and related quadrature rules. SIAM Rev., 24, 137–158.
- Monegato, G. (2001) An overview of the computational aspects of Kronrod quadrature rules. Numer. Algor., 26, 173–196.
- Ossicini, A. & Rosati, F. (1975) Funzioni caratteristiche nelle formule di quadratura gaussiane con nodi multipli. Boll. Un. Mat. Ital., 11, 224–237.
- Peherstorfer, F. (2009) Gauss–Turán quadrature formulas: asymptotics of weights. SIAM J. Numer. Anal., 47, 2638–2659.
- Pejčev, A. V. & Spalević, M. M. (2013) Error bounds of Micchelli–Rivlin quadrature formula for analytic functions. J. Approx. Theory, 169, 23–34.
- Pejčev, A. V. & Spalević, M. M. (2014) Error bounds of Micchelli–Sharma quadrature formula for analytic functions. J. Comput. Appl. Math., 259, 48–56.
- Pop, O. T. & Bărbosu, D. (2009) Two dimensional divided differences with multiple knots. An. Şt. Univ. Ovidius Constanţa, 17, 181–190.
- Shi, Y. G. (1996) Generalized Gaussian Kronrod–Turán quadrature formulas. Acta Sci. Math. (Szeged), 62, 175–185.
- Shi, Y. G. (1998) General Gaussian quadrature formulas on Chebyshev nodes. Adv. Math. (China), 27, 227–239.
- Shi, Y. G. (2000) Convergence of Gaussian quadrature formulas. J. Approx. Theory, 105, 279–291.
- Shi, Y. G. & Xu, G. (2007) Construction of $$\sigma$$-orthogonal polynomials and Gaussian quadrature formulas. Adv. Comput. Math., 27, 79–94.
- Spalević, M. M. (2007) On generalized averaged Gaussian formulas. Math. Comp., 76, 1483–1492.
- Spalević, M. M. (2014) Error bounds and estimates for Gauss–Turán quadrature formulae of analytic functions. SIAM J. Numer. Anal., 52, 443–467.
- Spalević, M. M. & Cvetković, A. S. (2016) Estimating the error of Gaussian quadratures with simple and multiple nodes by using their extensions with multiple nodes. BIT, 56, 357–374.
- Szegő, G. (1975) Orthogonal Polynomials. Providence, RI: AMS.
- Turán, P. (1950) On the theory of the mechanical quadrature. Acta Sci. Math. (Szeged), 12, 30–37.
- Yang, S. (2005) On a quadrature formula of Gori and Micchelli. J. Comput. Appl. Math., 176, 35–43.
- Yang, S. & Wang, X. (2003) Fourier–Chebyshev coefficients and Gauss–Turán quadrature with Chebyshev weight. J. Comput. Math., 21, 189–194.

© The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

### Journal

IMA Journal of Numerical Analysis, Oxford University Press. Published: Nov 15, 2017.
