# A stroboscopic averaging algorithm for highly oscillatory delay problems

## Abstract

We propose and analyse a heterogeneous multiscale method for the efficient integration of constant-delay differential equations subject to fast periodic forcing. The stroboscopic averaging method suggested here may provide approximations with $$\mathscr{O}\big (H^{2}+1/\varOmega ^{2}\big )$$ errors with a computational effort that grows like $$H^{-1}$$ (the inverse of the step size), uniformly in the forcing frequency $$\varOmega$$.

## 1. Introduction

We propose and analyse a heterogeneous multiscale method (HMM) (E, 2003; E & Engquist, 2003) for the efficient integration of constant-delay differential equations subject to fast periodic forcing. The stroboscopic averaging method (SAM) suggested here may provide approximations with $$\mathscr{O}\big (H^{2}+1/\varOmega ^{2}\big )$$ errors with a computational effort that grows like $$H^{-1}$$ (the inverse of the step size), uniformly in the forcing frequency $$\varOmega$$.

Delay differential equations with fast periodic forcing appear in a number of recent contributions to the nonlinear physics literature. As shown by Daza et al. (2017), under fast periodic forcing, the delayed-action oscillator (Boutle et al., 2007) that describes the El Niño phenomenon may generate basins of attraction with the Wada property, i.e. each point on the boundary of one of the basins is actually on the boundary of all basins. The system
\begin{eqnarray} \dot{x}_{1}(t)&=&\frac{\alpha}{1+x_{2}^{\beta}(t)}-x_{1}(t-\tau)+A\sin(\omega t)+B\sin(\varOmega t),\\ \dot{x}_{2}(t)&=&\frac{\alpha}{1+x_{1}^{\beta}(t)}-x_{2}(t-\tau), \nonumber \end{eqnarray}
(1.1)
($$\tau$$, $$\alpha ,\beta ,\omega ,A,B$$ are constants) describes, for A = 0, B = 0, a time-delayed genetic toggle switch, a synthetic gene-regulatory network (Gardner et al., 2000). Daza et al. (2013a) study the phenomenon of vibrational resonance (Landa & McClintock, 2000), i.e.
the way in which the presence of the high-frequency forcing $$B\sin (\varOmega t)$$ enhances the response of the system to the low-frequency forcing $$A\sin (\omega t)$$. Jeevarathinam et al. (2011) and Daza et al. (2013b) investigate in a similar way the forced, delayed Duffing oscillator. A new kind of nonlinear resonance of periodically forced delay systems has recently been described by Coccolo et al. (2018).

The numerical integration of highly oscillatory differential equations with or without delay may be a very demanding task, as standard methods typically have to employ time steps smaller than the periods present in the solution. For systems without delay, the literature contains many suggestions of numerical schemes specially designed to operate in the highly oscillatory scenario; many of them are reviewed in Hairer et al. (2002). Perhaps counterintuitively, some of those methodologies take advantage of the large frequency and their efficiency actually increases with $$\varOmega$$ (Iserles & Nørsett, 2005). On the other hand, schemes for highly oscillatory problems may suffer from unexpected instabilities and inaccuracies (Calvo & Sanz-Serna, 2009).

The algorithm suggested here is based on ideas presented, for systems without delay, by Calvo et al. (2011a) and Calvo et al. (2011b). In these references, given an oscillatory problem, a stroboscopically averaged problem is introduced such that, at the stroboscopic times $$t^{(k)}=kT$$, $$T=2\pi /\varOmega$$, $$k = 0, 1,\dots$$, its solution X(t) (approximately) coincides with the true oscillatory solution x. The stroboscopically averaged problem does not include rapidly varying forcing terms and therefore, if available in closed form, may be integrated numerically without much ado. The algorithms in Calvo et al. (2011a) and Calvo et al. (2011b) compute values of X numerically, without demanding that the user find analytically the expression of the averaged system.
More precisely, the algorithms only require evaluations of the right-hand side of the originally given oscillatory problem. The solution X is advanced with a standard integrator (the macro-integrator) with a step size H that, for a target accuracy, may be chosen independently of $$\varOmega$$. When the macro-integrator requires a value F of the slope $$\dot X$$, F is found by numerical differentiation of a micro-solution u, i.e. a solution of the originally given oscillatory problem. While the micro-integrations to find u are performed with step sizes h that are submultiples of the (small) period T, the corresponding computational cost does not increase as $$\varOmega \rightarrow \infty$$, because u is only required in windows of width mT, with m a small integer.

The extension of the material in the studies by Calvo et al. (2011a) and Calvo et al. (2011b) to systems with delay is far from trivial. A first difficulty stems from the well-known fact that, in the delay scenario, regardless of the smoothness of the equation, solutions may be nonsmooth at points t that are integer multiples of the (constant) delay. Therefore, the algorithm presented here has to make special provision for that lack of smoothness. In addition, the analysis of the algorithm (but, as emphasized above, not the algorithm itself) is built on the knowledge of the stroboscopically averaged systems. While the construction of a stroboscopically averaged system with errors $$x\left (t^{(k)}\right )- X\left (t^{(k)}\right )=\mathscr{O}(1/\varOmega )$$ is not difficult, here we aim at errors $$x\left (t^{(k)}\right )- X\left (t^{(k)}\right )=\mathscr{O}\big (1/\varOmega ^{2}\big )$$, and this requires much additional analysis. The classical reference by Lehman & Weibel (1999) only considers zero-mean, $$\mathscr{O}(1/\varOmega )$$ averaging.

In Section 2 we present the main ideas of the paper and a detailed description of the algorithm.
Due to the difficulties imposed by the lack of smoothness in the solution, the algorithm uses low-order methods: the second-order Adams–Bashforth formula as a macro-integrator and Euler's rule as a micro-integrator. Section 3 contains the construction of the stroboscopically averaged system with $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$ accuracy. Sections 4 and 5 are devoted to the analysis of the SAM algorithm. In the first of these, we assume that the micro-integrations carried out in the algorithm are performed exactly; under suitable hypotheses, the errors in SAM are then $$\mathscr{O}\big (H^{2}+1/\varOmega ^{2}\big )$$. The effect of the errors in the micro-integration is studied in Section 5: it is shown that, with a computational cost that grows like 1/H, SAM may yield errors of size $$\mathscr{O}\big (H^{2}+1/\varOmega ^{2}\big )$$. The $$H^{2}$$ (second-order) behaviour of the error may come as a surprise, because micro-integrations are performed by Euler's rule; of key importance here is a superconvergence result (see the bound in (5.2)) for the Euler solution of oscillatory problems when the integration is carried out over a whole number of periods. The last two sections report numerical experiments that, on the one hand, confirm the theoretical expectations and, on the other, show the advantage of SAM when compared with a direct numerical integration of the oscillatory problem. Speed-ups larger than three orders of magnitude are reported at the end of Section 7.

## 2. The stroboscopic averaging method (SAM)

This section motivates and describes the SAM algorithm.
### 2.1 Motivation

We consider highly oscillatory delay differential systems of the form
\begin{eqnarray} \dot{x}(t)&=&f(x(t),y(t),t,\varOmega t;\varOmega),\qquad t\geqslant 0,\\ y(t)&=&x(t-\tau),\qquad t\geqslant 0, \nonumber \end{eqnarray}
(2.1)
where the solution x is defined for $$t \geqslant -\tau$$ and takes values in $$\mathbb{R}^{D}$$, the function $$f(x,y,t,\theta ;\varOmega ):\mathbb{R}^{D}\times \mathbb{R}^{D}\times \mathbb{R}\times \mathbb{R}\times \mathbb{R} \rightarrow \mathbb{R}^{D}$$ depends $$2\pi$$-periodically on its fourth argument $$\theta$$, $$\tau>0$$ is the (constant) delay and the frequency $$\varOmega$$ is seen as a parameter, $$\varOmega \gg 1$$. Note that f depends explicitly on t through its third and fourth arguments; the fourth is the rapidly rotating phase $$\theta =\varOmega t$$ and the third corresponds to a slow (i.e. $$\varOmega$$-independent) dependence on t (see the toggle switch equations above). The values of x on the interval $$[-\tau ,0]$$ are prescribed through an $$\varOmega$$-independent function $$\varphi$$:
\begin{align} x(t)=\varphi(t),\qquad-\tau\leqslant t\leqslant 0. \end{align}
(2.2)
It is well known that, regardless of the smoothness of f and $$\varphi$$, the function x(t) will typically not be differentiable at t = 0 and that in (2.1) $$\dot{x}(0)$$ has to be understood as a right derivative. Furthermore, the discontinuity of $$\dot{x}(t)$$ at t = 0 will lead to the discontinuity of $$\ddot{x}(t)$$ at $$t=\tau$$, and so on.
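To fix ideas, the toggle switch system (1.1) fits this format with D = 2. The sketch below shows one way f and $$\varphi$$ of (2.1)–(2.2) might be encoded; the parameter values are illustrative placeholders of our own, not the values used in the paper's Section 6.

```python
import numpy as np

# Toggle switch (1.1) written in the form (2.1): dx/dt = f(x, y, t, theta; Omega),
# with y(t) = x(t - tau).  Parameter values below are illustrative placeholders.
alpha, beta, A, B, omega, tau = 2.0, 2.0, 0.5, 1.0, 1.0, 1.0

def f(x, y, t, theta, Omega):
    """Right-hand side of (1.1); theta = Omega*t is the rapidly rotating phase."""
    return np.array([
        alpha / (1.0 + x[1] ** beta) - y[0] + A * np.sin(omega * t) + B * np.sin(theta),
        alpha / (1.0 + x[0] ** beta) - y[1],
    ])

def phi(t):
    """Omega-independent history function on [-tau, 0], as in (2.2)."""
    return np.array([1.0, 1.0])

# f depends 2*pi-periodically on its fourth argument theta:
x0 = phi(0.0)
assert np.allclose(f(x0, x0, 0.3, 0.7, 50.0),
                   f(x0, x0, 0.3, 0.7 + 2.0 * np.pi, 50.0))
```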
We assume that, at the stroboscopic times $$t^{(k)}=kT$$, where $$T =2\pi /\varOmega$$ is the period and $$k = 0, 1,\dots$$, the solution x(t) of the oscillatory delay problem (2.1)–(2.2) may be approximated (in a sense to be made precise later) by the solution X(t) of an averaged problem
\begin{eqnarray} \dot{X}(t)&=&F,\qquad t\geqslant 0,\\ X(t)&=&\varphi(t),\qquad -\tau\leqslant t\leqslant 0,\nonumber \end{eqnarray}
(2.3)
where the value of the function F may depend on X(t), on the history X(s), $$-\tau \leqslant s<t$$, on the slow time t and on $$\varOmega$$, but is independent of the rapidly varying phase $$\theta =\varOmega t$$.

We illustrate the situation in the particular case of the toggle switch problem (1.1). Figure 1 displays, in a short time interval, a solution of the given oscillatory system and the corresponding averaged solution found by solving the stroboscopically averaged system
\begin{eqnarray} \dot{X}_{1}(t)&=&\frac{\alpha}{1+X_{2}^{\beta}(t)}-Y_{1}(t)-\frac{B}{\varOmega}1_{\{t\geqslant \tau\}}(t)+A\sin(\omega t),\\ \nonumber \dot{X}_{2}(t)&=&\frac{\alpha}{1+X_{1}^{\beta}(t)}-Y_{2}(t)-\frac{B}{\varOmega}\frac{\alpha \beta X_{1}^{\beta-1}(t)}{\left(1+X_{1}^{\beta}(t)\right)^{2}}, \end{eqnarray}
(2.4)

Fig. 1. $$x_{1}$$ component of the toggle switch problem. The constants $$\tau$$, $$\alpha$$, $$\beta$$, A, $$\omega$$, B, and the function $$\varphi$$ are chosen as in Section 6 and $$\varOmega =60$$. The true solution x and the averaged solution X are very close at the stroboscopic times $$t^{(k)} = k(2\pi /\varOmega )=kT$$, $$k = 0, 1,\dots$$ SAM is used to generate approximations to X at the step points $$t_{n}=nH$$, H = 0.125, $$n=0, 1,\dots$$ The value of H is chosen so that the point $$t=\tau$$, where the slope of the solution is discontinuous, is a step point. On the other hand, with the value of $$\varOmega$$ considered, the step points (abscissae of the stars) are not stroboscopic times.
where $$Y_{i}(t) = X_{i}(t-\tau )$$ and $$1_{\{t\geqslant \tau \}}(t)$$ is the indicator function that takes the value 1 for $$t\geqslant \tau$$ and vanishes for $$t<\tau$$ (see the next section for the derivation of this averaged system). Note that the slow time-dependent forcing $$A\sin (\omega t)$$ has not been averaged out and that, due to the presence of $$1_{\{t\geqslant \tau \}}(t)$$, the right-hand side of the averaged system is discontinuous at $$t=\tau$$ (this discontinuity manifests itself in Figure 1 through a discontinuity in the slope of X, not to be confused with the discontinuity at t = 0 typically present, as mentioned above, in solutions of delay problems).

The notion of stroboscopic averaging, studied in detail by Chartier et al. (2012), is far from new. However, standard treatments of the theory of averaging favour alternative techniques, especially the zero-mean approach, where the functions necessary to express the required change of variables are chosen so as to have zero mean over one period of the oscillation. In stroboscopic averaging, the freedom available in the choice of change of variables is used to impose that the old and new variables coincide at stroboscopic times.
This is advantageous for the numerical methods studied here. The zero-mean approach may be better for analytic purposes, as it usually leads to simpler high-order averaged systems. If zero-mean averaging had been used in Figure 1, the averaged solution would have been located halfway between the maxima and the minima of the oscillatory solution.

The study of the vibrational resonance of (1.1) requires simulations over long time intervals (the interval $$0\leqslant t\leqslant 300$$ is used in the study by Daza et al., 2013a) for many choices of the values of the constants $$\alpha ,\beta ,\omega ,A,B,\tau$$ and the parameter $$\varOmega$$; the presence of the fast-frequency oscillations makes such a task costly. It is then of interest to simulate, if possible, averaged systems like (2.3) rather than highly oscillatory models like (2.1). However, obtaining F analytically may be difficult or even impossible, and we wish to have a numerical method that approximates X by using only f. SAM is such a technique.

The idea behind SAM is as follows. Let x and X be the oscillatory and averaged solutions, respectively, corresponding to the same $$\varphi$$. At fixed t, $$\dot x$$ and $$\dot X$$ may differ substantially. However, difference quotients such as $$(x(t+T)-x(t))/T$$ or $$(x(t+T)-x(t-T))/(2T)$$ may provide a good approximation to the slope $$\dot X(t)$$ (see Fig. 1). As with other heterogeneous multiscale methods (see e.g. E, 2003; E & Engquist, 2003; Engquist & Tsai, 2005; E et al., 2007; Li et al., 2007; Ariel et al., 2009; Sanz-Serna, 2009; Calvo & Sanz-Serna, 2010), the algorithm includes macro-integrations and micro-integrations. Macro-integrations are used to advance X over macro-steps of length H larger than the period T. The necessary slopes $$\dot X$$ are obtained by forming difference quotients of auxiliary oscillatory solutions found by micro-integrations with small steps h.

### 2.2 The algorithm

Let us now describe the algorithm.
#### 2.2.1 Macro-integration

Choose a positive integer N and define the macro-step size $$H=\tau /N$$. If the solution is sought in an interval $$0\leqslant t\leqslant t_{max}$$, SAM generates approximations $$X_{n}$$ to $$X(t_{n})$$, $$t_{n}=nH$$, $$n=0,1,\dots , \lfloor t_{max}/H \rfloor$$, by using the second-order Adams–Bashforth formula (macro-integrator)
\begin{align} X_{n+1}=X_{n}+\frac{3}{2}HF_{n}-\frac{1}{2}HF_{n-1}, \end{align}
(2.5)
starting from $$X_{0}=x(0)=X(0)=\varphi (0)$$; here $$F_{n}$$ is an approximation to $$\dot{X}(t_{n})$$ obtained by numerical differentiation of the micro-solution. The formula is not used for n = 0 and n = N, where it would be inconsistent in view of the jump discontinuities of $$\dot X$$ at t = 0 and $$t=\tau$$ noted above. For n = 0 and n = N we use Euler's rule, i.e. we set
\begin{align} X_{1}=X_{0}+HF_{0},\qquad X_{N+1}=X_{N}+HF_{N}. \end{align}
(2.6)

#### 2.2.2 Micro-integration

If $$\nu _{max}$$ is a positive integer, the micro-step size h is chosen to be $$T/\nu _{max}$$ (recall that $$T=2\pi /\varOmega$$ denotes the period). We use Euler's rule, starting from $$u_{n,0}=X_{n}$$, first to integrate the oscillatory problem (2.1) forward over one period,
\begin{align} u_{n,\nu+1}=u_{n,\nu}+hf(u_{n,\nu},v_{n,\nu},t_{n}+\nu h,\varOmega\nu h;\varOmega),~~~\nu=0,1,\dots,\nu_{max}-1, \end{align}
(2.7)
and then to integrate backward over one period,
\begin{align} u_{n,-\nu-1}=u_{n,-\nu}-hf(u_{n,-\nu},v_{n,-\nu},t_{n}-\nu h,-\varOmega\nu h;\varOmega),~~~\nu=0,1,\dots,\nu_{max}-1. \end{align}
(2.8)
Here v denotes the past values for u, given by $$v_{n,\nu } =u_{n-N,\nu }$$ if n > N and $$v_{n,\nu }=\varphi (-\tau +nH+\nu h)$$ for n < N (if n = N, $$v_{N,\nu } =u_{0,\nu }$$ for $$\nu \geqslant 0$$ and $$v_{N,\nu }=\varphi (\nu h)$$ for $$\nu <0$$).
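These one-period Euler sweeps are more accurate than might be expected: when the micro-solution is differenced over a whole period, the left Riemann sums of the fast forcing cancel exactly. This can be seen on a toy scalar problem of our own (the delayed argument is dropped for brevity; this is an illustration, not the paper's example):

```python
import numpy as np

# Toy scalar problem dx/dt = -x + cos(Omega*t); the averaged slope at x0 is -x0.
# A forward Euler sweep (2.7) over one whole period, started at phase theta = 0
# and differenced over the period, recovers -x0 despite the fast forcing.
Omega = 100.0 * np.pi
T = 2.0 * np.pi / Omega      # fast period
nu_max = 32
h = T / nu_max               # micro-step, a submultiple of T

x0 = 1.0
u = x0
for nu in range(nu_max):     # Euler micro-integration over one period
    u = u + h * (-u + np.cos(Omega * nu * h))
F = (u - x0) / T             # forward difference over one whole period

# the left Riemann sum of cos over a whole period vanishes exactly, so only
# the smooth O(T) Euler error remains:
assert abs(F - (-x0)) < 0.02
```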
It is crucial to observe the values $$\varOmega \nu h$$ and $$-\varOmega \nu h$$ used for the fast argument $$\theta$$ in (2.7) and (2.8), respectively; each micro-integration starts from $$\theta = 0$$ rather than from the value $$\theta = \varOmega t_{n}$$ that might perhaps have been expected. The reason for this is that, in stroboscopic averaging, the resulting averaged system changes with the initial value of the phase $$\theta$$; to work with one and the same averaged system, micro-integrations have to start from the value $$\theta = 0$$ that the phase takes at the initial point of the interval $$[0,t_{max}]$$ (see Calvo et al., 2011a and Calvo et al., 2011b for a detailed discussion).

The slopes $$F_{n}$$ to be used in (2.5) or (2.6) are given by the central difference formula
\begin{align} F_{n}=\frac{u_{n,\nu_{max}}-u_{n,-\nu_{max}}}{2T} \end{align}
(2.9)
if $$n\neq 0$$ and $$n\neq N$$, while for n = 0 and n = N, due to the discontinuity of $$\dot{X}(t)$$ at t = 0 and $$t=\tau$$, we use the forward difference formulas
\begin{align} F_{0}=\frac{u_{0,\nu_{max}}-u_{0,0}}{T},\qquad F_{N}=\frac{u_{N,\nu_{max}}-u_{N,0}}{T}. \end{align}
(2.10)

A detailed description of the algorithm is provided in Table 1. Figure 2 may help to better understand the procedure. The upper time axis corresponds to the macro-integration; all the information needed to obtain $$X_{n+1}$$ is $$X_{n}$$, $$F_{n}$$ and $$F_{n-1}$$ ($$F_{n-1}$$ is actually not required for n = 0, N). The value of $$F_{n}$$ is derived by numerical differentiation of the micro-solution and passed to the macro-integrator to compute $$X_{n+1}$$. For fixed n, the computation of the Euler micro-solution $$u_{n,\nu }$$ uses the past values $$v_{n,\nu }$$ and the initial datum $$u_{n,0} = X_{n}$$ (note that $$X_{n}$$ is the most recent vector found in the macro-integration).

Fig. 2. Schematic description of SAM.
Once $$X_{n}$$ is available, it is passed to the micro-integrator to find $$u_{n,\nu }$$ for varying $$\nu$$. Numerical differentiation of the micro-solution yields $$F_{n}$$, which is used in the macro-stepping to compute $$X_{n+1}$$.

SAM only operates with macro-step sizes H that are submultiples of the delay $$\tau$$; this restriction is imposed so that $$t=\tau$$ is a step point, to better deal with the discontinuity in slope at $$t=\tau$$ (see Figure 1). In general, the quotient $$\tau /T$$ will not be a whole number, and then the step points $$t_{n}$$ will not be stroboscopic times; this is the case in the simulation in Figure 1. We emphasize that, given N and $$\nu _{max}$$, the complexity of the algorithm is independent of $$\varOmega$$. When $$\varOmega$$ increases, the micro-step size h decreases to cater for the more rapid variation of the oscillations, but the window of width 2T (or T) for each micro-integration becomes correspondingly narrower.

Finally, we point out that we have tested several alternative algorithms. For instance, we alternatively performed the micro-integrations with the second-order Adams–Bashforth method, or used second-order forward approximations for $$F_{0}$$, $$F_{N}$$. While those modifications improve the accuracy of the results for small step sizes, experiments reveal that they are not always beneficial for large step sizes; therefore, we shall not be concerned with them here.
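Putting the pieces together, the following is a runnable transcription of the procedure of Table 1 for a scalar toy problem of our own choosing. It is a sketch under simplifying assumptions (scalar state, constant history), not the implementation used in the paper; all names are ours.

```python
import numpy as np

def sam(f, phi, tau, N, t_max, Omega, nu_max):
    """Sketch of the SAM algorithm (Table 1) for a scalar problem
    dx/dt = f(x, y, t, theta), y(t) = x(t - tau); our own transcription."""
    H = tau / N                       # macro-step, a submultiple of the delay
    T = 2.0 * np.pi / Omega           # fast period
    h = T / nu_max                    # micro-step
    n_steps = int(np.floor(t_max / H + 1e-12))
    X = np.zeros(n_steps + 1)
    X[0] = phi(0.0)
    F = np.zeros(n_steps)
    u_store = {}                      # micro-solutions reused as history N steps later

    for n in range(n_steps):
        t_n = n * H

        def v(nu):                    # delayed values v_{n,nu}
            if n < N:
                return phi(-tau + n * H + nu * h)
            if n == N:
                return u_store[0][nu_max + nu] if nu >= 0 else phi(nu * h)
            return u_store[n - N][nu_max + nu]

        # Euler micro-integration; index nu = -nu_max..nu_max stored at nu_max + nu.
        # Each sweep starts from phase theta = 0, as in (2.7)-(2.8).
        u = np.zeros(2 * nu_max + 1)
        u[nu_max] = X[n]
        for nu in range(nu_max):      # forward over one period, (2.7)
            u[nu_max + nu + 1] = (u[nu_max + nu]
                + h * f(u[nu_max + nu], v(nu), t_n + nu * h, Omega * nu * h))
        if n > 0:                     # backward over one period, (2.8); unused at n = 0
            for nu in range(nu_max):
                u[nu_max - nu - 1] = (u[nu_max - nu]
                    - h * f(u[nu_max - nu], v(-nu), t_n - nu * h, -Omega * nu * h))
        u_store[n] = u

        if n in (0, N):               # one-sided slope (2.10), Euler macro-step (2.6)
            F[n] = (u[2 * nu_max] - u[nu_max]) / T
            X[n + 1] = X[n] + H * F[n]
        else:                         # central slope (2.9), Adams-Bashforth 2 (2.5)
            F[n] = (u[2 * nu_max] - u[0]) / (2.0 * T)
            X[n + 1] = X[n] + 1.5 * H * F[n] - 0.5 * H * F[n - 1]
    return X

# Toy problem dx/dt = -x(t - tau) + cos(Omega*t) with phi = 1: the averaged
# solution obeys dX/dt = -X(t - tau), so X(t) = 1 - t on [0, 1] and X(2) = -1/2.
f = lambda x, y, t, theta: -y + np.cos(theta)
X = sam(f, lambda t: 1.0, tau=1.0, N=8, t_max=2.0, Omega=200.0 * np.pi, nu_max=24)
assert abs(X[8]) < 0.02               # t = tau = 1
assert abs(X[-1] - (-0.5)) < 0.05     # t = 2
```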
Table 1. The SAM algorithm.

```text
Compute X_1
  Micro-integration
    u(0,0) = phi(0)                                          % initial value
    for nu = 0:nu_max, v(0,nu) = phi(-tau + nu*h), end       % history
    for nu = 0:nu_max-1                                      % Euler, forward
      u(0,nu+1) = u(0,nu) + h*f(u(0,nu), v(0,nu), t_0 + nu*h, Omega*nu*h; Omega)
    end
    F_0 = (u(0,nu_max) - u(0,0)) / T                         % slope for macro-step
  Macro-integration
    t_0 = 0, X_0 = phi(0), t_1 = t_0 + H, X_1 = X_0 + H*F_0  % Euler macro-step at n = 0

Compute X_2, ..., X_N
  for n = 1:N-1
    Micro-integration
      u(n,0) = X_n                                           % initial value
      for nu = -nu_max:nu_max, v(n,nu) = phi(-tau + n*H + nu*h), end   % history
      for nu = 0:nu_max-1                                    % Euler, forward and backward
        u(n,nu+1)  = u(n,nu)  + h*f(u(n,nu),  v(n,nu),  t_n + nu*h,  Omega*nu*h; Omega)
        u(n,-nu-1) = u(n,-nu) - h*f(u(n,-nu), v(n,-nu), t_n - nu*h, -Omega*nu*h; Omega)
      end
      F_n = (u(n,nu_max) - u(n,-nu_max)) / (2*T)             % slope for macro-step
    Macro-integration
      t_(n+1) = t_n + H, X_(n+1) = X_n + (3H/2)*F_n - (H/2)*F_(n-1)    % Adams-Bashforth 2
  end

Compute X_(N+1)
  Micro-integration
    u(N,0) = X_N                                             % initial value
    for nu = 1:nu_max, u(0,-nu) = phi(-nu*h), end            % values taken from history
    for nu = -nu_max:nu_max, v(N,nu) = u(0,nu), end          % history
    for nu = 0:nu_max-1                                      % Euler, forward and backward
      u(N,nu+1)  = u(N,nu)  + h*f(u(N,nu),  v(N,nu),  t_N + nu*h,  Omega*nu*h; Omega)
      u(N,-nu-1) = u(N,-nu) - h*f(u(N,-nu), v(N,-nu), t_N - nu*h, -Omega*nu*h; Omega)
    end
    F_N = (u(N,nu_max) - u(N,0)) / T                         % slope for macro-step
  Macro-integration
    t_(N+1) = t_N + H, X_(N+1) = X_N + H*F_N                 % Euler macro-step at n = N

Compute X_(N+2), ...
  for n = N+1 : floor(t_max/H)
    Micro-integration
      u(n,0) = X_n                                           % initial value
      for nu = -nu_max:nu_max, v(n,nu) = u(n-N,nu), end      % history from earlier micro-solutions
      for nu = 0:nu_max-1                                    % Euler, forward and backward
        u(n,nu+1)  = u(n,nu)  + h*f(u(n,nu),  v(n,nu),  t_n + nu*h,  Omega*nu*h; Omega)
        u(n,-nu-1) = u(n,-nu) - h*f(u(n,-nu), v(n,-nu), t_n - nu*h, -Omega*nu*h; Omega)
      end
      F_n = (u(n,nu_max) - u(n,-nu_max)) / (2*T)             % slope for macro-step
    Macro-integration
      t_(n+1) = t_n + H, X_(n+1) = X_n + (3H/2)*F_n - (H/2)*F_(n-1)    % Adams-Bashforth 2
  end
```

## 3. The averaged system

In this section and the three that follow, we assume that the system (2.1) is of the particular form
\begin{align} \dot{x}(t)=f(x(t),y(t),\varOmega t). \end{align}
(3.1)
When comparing with the general format in (2.1), we note that now $$f(x,y,\theta )$$ has three arguments rather than five. The case where f includes a slow explicit dependence on t, i.e. $$f = f(x,y,t,\theta )$$, may be trivially reduced to (3.1) by adding a component $$x_{D+1}$$ to the state vector $$x\in \mathbb{R}^{D}$$ and setting $$\dot{x}_{D+1} = 1$$. The case where f depends on $$\varOmega$$, i.e. $$f = f(x,y,\theta ;\varOmega )$$, is taken up in the final section. We assume that $$f(x,y,\theta )$$ and the initial function $$\varphi$$ in (2.2) are of class $$C^{3}$$ and that the solution x exists in the interval $$[0,t_{max}]$$, where $$t_{max}$$ is a constant (i.e. does not change with the parameter $$\varOmega$$).

Using an approach similar to that in the studies by Chartier et al. (2010), Chartier et al. (2012) and Chartier et al. (2015), we use a Fourier decomposition
$$f(x,y,\theta)=\sum_{k\in \mathbb{Z}}\exp(ik\theta)f_{k}(x,y).$$
The coefficients $$f_{k}(x,y)$$ satisfy $$f_{k}\equiv f_{-k}^{\ast }$$ because the problem is real. It is easily seen that, under the preceding hypotheses, x undergoes oscillations of frequency $$\varOmega$$ and amplitude $$\mathscr{O}(1/\varOmega )$$ as $$\varOmega \rightarrow \infty$$. To reduce the amplitude of the oscillations to $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$, we consider the near-identity change of variables
\begin{align} \left\{ \begin{aligned} x&=X+\frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{ik}f_{k}(X,Y),~~~t \geqslant 0,\\ x&=X,\qquad \qquad -\tau\leqslant t<0, \end{aligned} \right. \end{align}
(3.2)
where we note that x and X coincide at stroboscopic times, i.e. the change is stroboscopic.
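The coefficients $$f_{k}$$ need not be known in closed form. For a toy choice of f (our own, not one of the paper's examples), they are easy to obtain numerically at frozen (x, y) and to check against the reality condition $$f_{k}\equiv f_{-k}^{\ast }$$:

```python
import numpy as np

# Fourier coefficients f_k of f(x, y, theta) in theta at frozen (x, y) = (x0, y0),
# for the toy choice f = -y + cos(theta) + 0.5*sin(2*theta) (our own example).
x0, y0 = 0.3, 0.7
g = lambda theta: -y0 + np.cos(theta) + 0.5 * np.sin(2.0 * theta)

M = 64
theta = 2.0 * np.pi * np.arange(M) / M
c = np.fft.fft(g(theta)) / M          # c[k] ~ f_k(x0, y0), indices taken mod M

assert abs(c[0] - (-y0)) < 1e-12      # f_0 is the theta-average of f
assert abs(c[1] - 0.5) < 1e-12        # cos(theta) contributes f_{+1} = 1/2
assert abs(c[M - 1] - np.conj(c[1])) < 1e-12   # f_{-1} = f_1*, real problem
```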
In what follows, we use the notation $$z(t)=y(t-\tau ),~t\geqslant \tau$$, $$Y(t) = X(t-\tau ),~t\geqslant 0$$, $$Z(t)=Y(t-\tau ),~t\geqslant \tau$$. The proof of the following result is a straightforward, but very lengthy, exercise in Taylor expansion.

Lemma 3.1 The change of variables (3.2) transforms the system (3.1) into
\begin{eqnarray} \dot{X}&=&f_{0}-\frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{ik}\frac{\partial f_{k}}{\partial X}\ f_{0} -\frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{ik}\frac{\partial f_{k}}{\partial Y}\dot{\varphi}(t-\tau)\nonumber \\ &&+ \frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{ik}\frac{\partial f_{0}}{\partial X}\ f_{k} +\frac{1}{\varOmega}\sum_{k,\ell\neq 0}\exp(ik\varOmega t)\frac{\exp(i\ell\varOmega t)-1}{i\ell}\frac{\partial f_{k}}{\partial X}\ f_{\ell} \nonumber \\ &&+ \mathscr{O}\left(\frac{1}{\varOmega^{2}}\right), \end{eqnarray} (3.3)
for $$0\leqslant t< \tau$$, and into
\begin{eqnarray} \dot{X}&=&f_{0} -\frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{ik}\frac{\partial f_{k}}{\partial X}\ f_{0} -\frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{ik}\frac{\partial f_{k}}{\partial Y}\ f_{0\tau}\nonumber\\ &&+\frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{ik}\frac{\partial f_{0}}{\partial X}\ f_{k} +\frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega (t-\tau))-1}{ik}\frac{\partial f_{0}}{\partial Y}\ f_{k\tau}\nonumber \\ &&+ \frac{1}{\varOmega} \sum_{k,\ell\neq 0}\exp(ik\varOmega t)\frac{\exp(i\ell\varOmega t)-1}{i\ell} \frac{\partial f_{k}}{\partial X}\ f_{\ell} \nonumber\\ &&+\frac{1}{\varOmega}\sum_{k,\ell\neq 0}\exp(ik\varOmega t)\frac{\exp(i\ell\varOmega (t-\tau))-1}{i\ell}\frac{\partial f_{k}}{\partial Y}\ f_{\ell\tau} +\mathscr{O}\left(\frac{1}{\varOmega^{2}}\right), \end{eqnarray} (3.4)
for $$t \geqslant \tau$$.
Here   $$f_{k}=f_{k}(X,Y),\qquad f_{k\tau}=f_{k}(Y,Z), \qquad k\in \mathbb{Z}.$$ Note that the system in (3.3)–(3.4) is of the (expected) form   \begin{align} \dot X = f_{0}+\mathscr{O}(1/\varOmega), \end{align} (3.5)where $$f_{0}$$ is the average   \begin{align} f_{0}(X,Y) = \frac{1}{2\pi}\int_{0}^{2\pi} f(X,Y,\theta)\,\mathrm{d}\theta. \end{align} (3.6)By suppressing the remainder in (3.5), we obtain the averaged system $$\dot X = f_{0}(X,Y)$$ with $$\mathscr{O}(1/\varOmega )$$ error; this is not sufficiently accurate for the values of $$\varOmega$$ of interest and we take the averaging procedure to higher order. To do so, we perform an additional stroboscopic change of variables chosen so as to annihilate the oscillatory parts of the $$\mathscr{O}(1/\varOmega )$$ terms in (3.3)–(3.4). Specifically, we take   \begin{eqnarray*} X&=&\tilde{X}+\frac{1}{\varOmega^{2}}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{k^{2}} \frac{\partial f_{k}}{\partial \tilde{X}}\ f_{0} +\frac{1}{\varOmega^{2}}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{k^{2}} \frac{\partial f_{k}}{\partial \tilde{Y}}\dot{\varphi}(t-\tau) \\ & & -\frac{1}{\varOmega^{2}}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{k^{2}} \frac{\partial f_{0}}{\partial \tilde{X}}\ f_{k} -\frac{1}{\varOmega^{2}}\sum_{\substack{k\neq 0\\\ell\neq 0,-k}}\frac{\exp(i(k+\ell)\varOmega t)-1}{\ell(k+\ell)} \frac{\partial f_{k}}{\partial \tilde{X}}\ f_{\ell} \\&& +\frac{1}{\varOmega^{2}}\sum_{k,\ell\neq 0}\frac{\exp(ik\varOmega t)-1}{k\ell} \frac{\partial f_{k}}{\partial \tilde{X}}\ f_{\ell} \end{eqnarray*}for $$0\leqslant t< \tau$$ and   \begin{eqnarray*} X&=& \tilde{X} +\frac{1}{\varOmega^{2}}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{k^{2}}\frac{\partial f_{k}}{\partial \tilde{X}}\ f_{0} +\frac{1}{\varOmega^{2}}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{k^{2}}\frac{\partial f_{k}}{\partial \tilde{Y}}\ f_{0\tau} \\ &&-\frac{1}{\varOmega^{2}}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{k^{2}}\frac{\partial f_{0}}{\partial \tilde{X}}\ 
f_{k} -\frac{1}{\varOmega^{2}}\sum_{k\neq 0}\frac{\exp(ik\varOmega(t-\tau))-\exp(-ik\varOmega\tau)}{k^{2}} \frac{\partial f_{0}}{\partial \tilde{Y}}\ f_{k\tau} \\ &&-\frac{1}{\varOmega^{2}}\sum_{\substack{k\neq 0\\\ell\neq 0,-k}}\frac{\exp(i(k+\ell)\varOmega t)-1}{\ell(k+\ell)} \frac{\partial f_{k}}{\partial \tilde{X}}\ f_{\ell} +\frac{1}{\varOmega^{2}}\sum_{k,\ell\neq 0}\frac{\exp(ik\varOmega t)-1}{k\ell}\frac{\partial f_{k}}{\partial \tilde{X}}\ f_{\ell} \\ && -\frac{1}{\varOmega^{2}}\sum_{\substack{k\neq 0\\\ell\neq 0,-k}}\exp(-i\ell\varOmega \tau)\frac{\exp(i(k+\ell)\varOmega t)-1}{\ell(k+\ell)} \frac{\partial f_{k}}{\partial \tilde{Y}}\ f_{\ell\tau} +\frac{1}{\varOmega^{2}}\sum_{k,\ell\neq 0}\frac{\exp(ik\varOmega t)-1}{k\ell}\frac{\partial f_{k}}{\partial \tilde{Y}}\ f_{\ell\tau} \end{eqnarray*}
for $$t\geqslant \tau$$, where $$f_{k}=f_{k}\left (\tilde{X},\tilde{Y}\right )$$ and $$f_{k\tau }=f_{k}\left (\tilde{Y},\tilde{Z}\right )$$ $$\left (\tilde{Y}(t) =\tilde{X}(t-\tau ),\tilde{Z}(t) =\tilde{Y}(t-\tau )\right )$$. Substituting the last displays into (3.3)–(3.4) and discarding the $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$ remainder results in the averaged system (3.7)–(3.9) below, where, for notational simplicity, we have suppressed the tildes over X, Y and Z, and [⋅, ⋅] is the Lie–Jacobi bracket or commutator defined by
$$\left[f_{i},f_{j}\right]=\frac{\partial f_{j}}{\partial X}f_{i}-\frac{\partial f_{i}}{\partial X}f_{j},~~~i,j\in \mathbb{Z}.$$
The averaged solution X will obviously approximate x at stroboscopic times with $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$ errors (arising from discarding the $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$ remainder); this implies that, at least for $$\varOmega$$ sufficiently large, X exists in the interval $$\left [0,t_{max}\right ]$$.
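In computations, when the Fourier coefficients $$f_{k}$$ are not available in closed form, the average $$f_{0}$$ of (3.6) can be approximated to spectral accuracy by equispaced quadrature in $$\theta$$. A minimal sketch (the function name and the quadrature size are ours):

```python
import numpy as np

def average_f0(f, X, Y, n_quad=64):
    """Approximate f_0(X, Y) = (1/(2*pi)) * int_0^{2*pi} f(X, Y, theta) dtheta.

    For a smooth 2*pi-periodic integrand, the equispaced (rectangle) rule
    converges faster than any power of 1/n_quad, so a moderate n_quad
    suffices whenever the coefficients f_k decay rapidly.
    """
    thetas = 2.0 * np.pi * np.arange(n_quad) / n_quad
    return np.mean([f(X, Y, th) for th in thetas], axis=0)
```

For the purely additive forcing of (1.1) the rule is exact up to round-off, since the integrand has a single nonzero harmonic.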
We have then proved the following result.

Theorem 3.2 Consider the averaged problem
\begin{eqnarray} \dot{X}(t)&=&F(X(t),Y(t),Z(t),t;\varOmega),\qquad t\geqslant 0,\\ Y(t) &=& X(t-\tau),\qquad t\geqslant 0,\nonumber\\ Z(t) &=& Y(t-\tau),\qquad t\geqslant \tau,\nonumber\\ X(t) &=& \varphi(t), \qquad -\tau\leqslant t\leqslant 0,\nonumber \end{eqnarray} (3.7)
with $$F(X,Y,Z,t;\varOmega )= F^{(1)}(X,Y,t;\varOmega )$$,
\begin{align} F^{(1)}(X,Y,t;\varOmega)=f_{0}+\sum_{k>0}\frac{i}{k\varOmega}\left([f_{k}-f_{-k},f_{0}]+[f_{-k},f_{k}]\right)-\sum_{k\neq0}\frac{i}{k\varOmega}\frac{\partial f_{k}}{\partial Y}\dot{\varphi}(t-\tau), \end{align} (3.8)
for $$0\leqslant t<\tau$$, and $$F(X,Y,Z,t;\varOmega )= F^{(2)}(X,Y,Z;\varOmega )$$,
\begin{eqnarray} F^{(2)}(X,Y,Z;\varOmega)&=&f_{0}+\sum_{k>0}\frac{i}{k\varOmega}\left([f_{k}-f_{-k},f_{0}]+[f_{-k},f_{k}]\right)-\sum_{k\neq0}\frac{i}{k\varOmega}\frac{\partial f_{k}}{\partial Y}\ f_{0\tau}\\ \nonumber & &\qquad+\sum_{k\neq0}\frac{i}{k\varOmega}\frac{\partial f_{0}}{\partial Y}\ f_{k\tau}+\sum_{k\neq0}\frac{i\exp(ik\varOmega \tau)}{k\varOmega}\frac{\partial f_{k}}{\partial Y}\ f_{-k\tau}, \end{eqnarray} (3.9)
for $$t\geqslant \tau$$. For $$\varOmega$$ sufficiently large, the averaged solution X exists in the interval $$\left [0,t_{max}\right ]$$. Furthermore, for the approximation at stroboscopic times, we may write
$$\max_{ 0\leqslant t^{(k)}\leqslant t_{max}}\left\|x\left(t^{(k)}\right)-X\left(t^{(k)}\right) \right\|=\mathscr{O}\left(\frac{1}{\varOmega^{2}}\right),\qquad \varOmega\rightarrow \infty.$$
We point out that, for $$t\geqslant \tau$$, the dependence of $$\dot X(t)$$ on the past values X(s), $$-\tau \leqslant s< t$$, is through both $$Y(t) = X(t-\tau )$$ and $$Z(t) = X(t-2\tau )$$; this is to be compared with the situation for the original oscillatory system (2.1), which does not include the delay $$2\tau$$.
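As a quick check of Theorem 3.2 (a computation of our own, not taken from the text), consider purely additive single-harmonic forcing, $$f(x,y,\theta )=g(x,y)+b\sin \theta$$ with b a constant vector, which is the situation of the fast forcing in (1.1). Then $$f_{0}=g$$, $$f_{\pm 1}=\mp ib/2$$ and all other coefficients vanish. Since $$f_{\pm 1}$$ are constant, $$[f_{-1},f_{1}]=0$$, the sums containing $$\partial f_{k}/\partial Y$$, $$k\neq 0$$, drop out, and $$[f_{1}-f_{-1},f_{0}]=(\partial g/\partial X)(f_{1}-f_{-1})=-i(\partial g/\partial X)\,b$$, so that
$$F^{(1)} = g+\frac{1}{\varOmega}\,\frac{\partial g}{\partial X}\,b,\qquad F^{(2)} = g+\frac{1}{\varOmega}\left(\frac{\partial g}{\partial X}+\frac{\partial g}{\partial Y}\right)b,$$
the extra term in $$F^{(2)}$$ coming from $$\sum_{k\neq 0}(i/k\varOmega )(\partial f_{0}/\partial Y) f_{k\tau }=(1/\varOmega )(\partial g/\partial Y)\,b$$. The same drift corrections follow directly from (3.2), which here reads $$x = X+(b/\varOmega )(1-\cos \varOmega t)$$, upon inserting it into $$\dot x = g(x,y)+b\sin \varOmega t$$ and replacing the factors $$1-\cos \varOmega t$$ and $$1-\cos \varOmega (t-\tau )$$ by their mean value 1.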
The functions $$F^{(1)}$$ and $$F^{(2)}$$ are of class $$C^{2}$$, but of course F is discontinuous at $$t =\tau$$. By implication, X(t) will be of class $$C^{3}$$, except at $$t = 0,\tau$$ (where $$\dot{X}(t)$$ has a jump discontinuity), at $$t = 2\tau$$ (where the second derivative jumps) and at $$t = 3\tau$$ (where the third derivative jumps). In the particular case where f does not depend on the delayed argument y, so that we are dealing with an ordinary differential system, (3.8) and (3.9) are in agreement with formula (64) of Chartier et al. (2012).

4. Error analysis: exact micro-integration

In this section we investigate the global error of the algorithm under the assumption that the micro-integration is exact, so that the macro-integration and the numerical differentiations performed to find the slopes $$F_{n}$$ are the only sources of error. This scenario is of course relevant when the micro-step h is chosen to be very small. The errors due to the Euler micro-integration will be studied in the next section. To avoid misunderstandings, 'exact micro-integration' is to be understood as follows. Consider, e.g., the computation of $$X_{2}$$, …, $$X_{N}$$ in Table 1; in this section we assume that
$$F_{n} = \frac{u_{n}(T)-u_{n}(-T)}{2T},$$
where $$u_{n}$$ solves the problem $$\dot u_{n} = f(u_{n},v_{n},\varOmega t)$$, $$u_{n}(0)= X_{n}$$, $$v_{n}(t) =\varphi (-\tau +nH+t)$$. Of course similar modifications of the algorithm in Table 1 have to be carried out for the computation of $$F_{n}$$ for other values of n. We begin by proving a stability bound for the macro-integrator. We consider a sequence $$\left \{\hat{X}_{n}\right \}$$ of vectors in $$\mathbb{R}^{D}$$ such that $$\hat{X}_{-n}=\varphi (-nH)$$, $$n=1,\dots ,N$$, and which furthermore satisfies the macro-integration equations with residuals $$\big \{ \hat{\rho }_{n}\big \}$$, i.e.
\begin{eqnarray*} \hat{X}_{1}&=&\hat{X}_{0}+HF^{(1)}\left(\hat{X}_{0},\hat{X}_{-N},0;\varOmega\right)+H\hat{\rho}_{0},\\ \hat{X}_{n+1}&=&\hat{X}_{n}+\frac{3H}{2}F^{(1)}\left(\hat{X}_{n},\hat{X}_{n-N},nH;\varOmega\right)\\ &&\qquad\qquad-\frac{H}{2}F^{(1)}\left(\hat{X}_{n-1},\hat{X}_{n-1-N},(n-1)H;\varOmega\right)+H\hat{\rho}_{n}, \qquad n=1,\dots,N-1,\\ \hat{X}_{N+1}&=&\hat{X}_{N}+HF^{(2)}\left(\hat{X}_{N},\hat{X}_{0},\hat{X}_{-N};\varOmega\right)+H\hat{\rho}_{N},\\ \hat{X}_{n+1}&=&\hat{X}_{n}+\frac{3H}{2}F^{(2)}\left(\hat{X}_{n},\hat{X}_{n-N},\hat{X}_{n-2N};\varOmega\right)\\ &&\qquad\qquad-\frac{H}{2}F^{(2)}\left(\hat{X}_{n-1},\hat{X}_{n-1-N},\hat{X}_{n-1-2N};\varOmega\right)+H\hat{\rho}_{n}, \qquad n\geqslant N+1. \end{eqnarray*}
Furthermore, we consider a second sequence $$\left \{\tilde{X}_{n}\right \}$$ with $$\tilde{X}_{-n}=\varphi (-nH)$$, $$n=1,\dots ,N$$, satisfying the macro-integration equations above with residuals $$\big \{ \tilde{\rho }_{n}\big \}$$ rather than $$\big \{ \hat{\rho }_{n}\big \}$$.

Proposition 4.1 To each bounded set $$B\subset \mathbb{R}^{D}$$ there corresponds a constant C > 0, independent of H and $$\varOmega$$, such that, for any sequences $$\left \{\hat{X}_{n}\right \}$$ and $$\left \{\tilde{X}_{n}\right \}$$ as above contained in B,
$$\left\|\hat{X}_{n}-\tilde{X}_{n} \right\|\leqslant C\,\sum_{k=0}^{n-1} H \left\|\hat{\rho}_{k}-\tilde{\rho}_{k} \right\|,\qquad 0\leqslant nH\leqslant t_{max}.$$

Proof. From the hypotheses in the preceding section, F is a Lipschitz continuous function of X, Y and Z, with a Lipschitz constant that is uniform as t varies in the interval $$0\leqslant t\leqslant t_{max}$$ and X, Y and Z vary in B. The stability bound is then proved in a standard way by induction.

In our next result we investigate the consistency of the formulas (2.9)–(2.10) used to recover the slopes $$F_{n}$$. There are four cases, corresponding to the four successive blocks in Table 1.
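The four blocks of Table 1 may be easier to follow in executable form. The sketch below is our own Python transcription of the first block only, i.e. the macro-steps with $$0\leqslant t_{n}\leqslant \tau$$, where every delayed micro-value comes straight from the history function $$\varphi$$; for the start-up step n = 0 we assume, mirroring the 'Compute $$X_{N+1}$$' block, a forward-difference slope and an Euler macro-step. Function and variable names are ours.

```python
import numpy as np

def sam_first_phase(f, phi, tau, N, Omega, nu_max):
    """Sketch of SAM on 0 <= t <= tau (first block of Table 1).

    f(u, v, t, theta): oscillatory right-hand side of (2.1)/(3.1);
    phi: history function on [-tau, 0];
    H = tau/N: macro-step; h = T/nu_max: micro-step, T = 2*pi/Omega one period.
    Returns the macro-solution X_0, ..., X_N as an array.
    """
    H, T = tau / N, 2.0 * np.pi / Omega
    h = T / nu_max
    X = [np.atleast_1d(np.asarray(phi(0.0), dtype=float))]
    F = []
    for n in range(N):
        tn = n * H
        # forward Euler micro-leg over [tn, tn + T]
        u_fwd = X[n].copy()
        for nu in range(nu_max):
            u_fwd = u_fwd + h * f(u_fwd, phi(-tau + tn + nu * h),
                                  tn + nu * h, Omega * nu * h)
        if n == 0:
            Fn = (u_fwd - X[0]) / T                 # forward-difference slope
            X.append(X[0] + H * Fn)                 # Euler macro start-up
        else:
            # backward Euler micro-leg over [tn - T, tn]
            u_bwd = X[n].copy()
            for nu in range(nu_max):
                u_bwd = u_bwd - h * f(u_bwd, phi(-tau + tn - nu * h),
                                      tn - nu * h, -Omega * nu * h)
            Fn = (u_fwd - u_bwd) / (2.0 * T)        # central-difference slope
            X.append(X[n] + 1.5 * H * Fn - 0.5 * H * F[-1])  # AB2 macro
        F.append(Fn)
    return np.array(X)
```

Each macro-step costs $$2\nu _{max}$$ micro-steps regardless of $$\varOmega$$, which is the source of the $$\varOmega$$-uniform complexity.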
Proposition 4.2 The following results hold.

1. If the function $$u_{n}$$ solves the problem
\begin{eqnarray*} \dot{u}_{n}(t)&=&f(u_{n}(t),\varphi(-\tau+t_{n}+t),\varOmega t),\\ u_{n}(0)&=&X_{n}, \end{eqnarray*}
then, with $$Y_{n} = \varphi (-\tau +t_{n})$$,
\begin{align} \frac{u_{n}(T)-u_{n}(0)}{T}=f_{0}(X_{n},Y_{n})+\mathscr{O}\left(\frac{1}{\varOmega}\right). \end{align} (4.1)

2. If $$u_{n}$$ is as in the preceding item, then
\begin{align} \frac{u_{n}(T)\!-\!u_{n}(-T)}{2T}=f_{0}\!+\!\sum_{k>0}\frac{i}{k\varOmega}\left([f_{k}-f_{-k},f_{0}]+[f_{-k},f_{k}]\right) \!-\!\sum_{k\neq0}\frac{i}{k\varOmega}\frac{\partial f_{k}}{\partial Y}\dot{\varphi}(-\tau+t_{n}) +\mathscr{O}\left(\!\frac{1}{\varOmega^{2}}\!\right)\!, \end{align} (4.2)
where $$f_{k}$$, $$k\in \mathbb{Z}$$, are evaluated at $$(X_{n},Y_{n})$$, $$Y_{n} = \varphi (-\tau +t_{n})$$.

3. If $$u_{n}$$ and $$v_{n}$$ satisfy
\begin{eqnarray*} \dot{u}_{n}(t)&=&f(u_{n}(t),v_{n}(t),\varOmega t),\\ \dot{v}_{n}(t)&=&f(v_{n}(t),w_{n}(t),\varOmega t),\\ u_{n}(0)&=&X_{n},\\ v_{n}(0)&=&Y_{n},\\ w_{n}(0)&=&Z_{n}, \end{eqnarray*}
with $$w_{n}$$ a continuously differentiable function, then
\begin{align} \frac{u_{n}(T)-u_{n}(0)}{T}=f_{0}(X_{n},Y_{n})+\mathscr{O}\left(\frac{1}{\varOmega}\right). \end{align} (4.3)

4. If $$u_{n}$$ and $$v_{n}$$ are as in 3., then
\begin{eqnarray}\nonumber \frac{u_{n}(T)-u_{n}(-T)}{2T}&=&f_{0}+\sum_{k>0}\frac{i}{k\varOmega}\left([f_{k}-f_{-k},f_{0}]+[f_{-k},f_{k}]\right)-\sum_{k\neq0}\frac{i}{k\varOmega}\frac{\partial f_{k}}{\partial Y}\ f_{0\tau}\\ & &\qquad+\sum_{k\neq0}\frac{i}{k\varOmega}\frac{\partial f_{0}}{\partial Y}\ f_{k\tau}+\sum_{k\neq0}\frac{i}{k\varOmega}\frac{\partial f_{k}}{\partial Y}\ f_{-k\tau}+\mathscr{O}\left(\frac{1}{\varOmega^{2}}\right), \end{eqnarray} (4.4)
where $$f_{k}$$, $$k\in \mathbb{Z}$$, are evaluated at $$(X_{n},Y_{n})$$ and $$f_{k\tau }$$ stands for $$f_{k}(Y_{n},Z_{n})$$.

Proof. Here we prove the fourth case; the other proofs are similar (and slightly simpler).
For simplicity the subindex n is dropped. We rewrite the equation for u in integral form,
$$u(t)=X+{\int_{0}^{t}} f(u(s),v(s),\varOmega s)\,\mathrm{d}s = X+{\int_{0}^{t}} \sum_{k\in\mathbb{Z}} \exp(ik\varOmega s) f_{k}(u(s),v(s))\,\mathrm{d}s,$$
and use Picard iteration. Clearly, for $$-T\leqslant t \leqslant T$$,
$$u(t)=X+\mathscr{O}\left(\frac{1}{\varOmega}\right).$$
Substituting this equality into the integral equation above and integrating with respect to s, we find
$$u(t)=X+t\ f_{0}(X,Y)+\sum_{k\neq0}\alpha_{k}(t)f_{k}(X,Y)+\mathscr{O}\left(\frac{1}{\varOmega^{2}}\right),$$
where
$$\alpha_{k}(t)=\frac{\exp(ik\varOmega t)-1}{ik\varOmega},~~~k\in \mathbb{Z}.$$
Then,
\begin{eqnarray}\nonumber u(t)&=&X+{\int_{0}^{t}} \sum_{k\in\mathbb{Z}}\exp(ik\varOmega s)\\ &&\nonumber\qquad \times\ f_{k}\left(X+s\ f_{0}+\sum_{m\neq0}\alpha_{m}(s)f_{m}+\mathscr{O}\left(\frac{1}{\varOmega^{2}}\right), Y+s\ f_{0\tau} +\sum_{m\neq0}\alpha_{m}(s)f_{m\tau}+\mathscr{O}\left(\frac{1}{\varOmega^{2}}\right)\right)\mathrm{d}s. \end{eqnarray}
By Taylor expanding $$f_{k}$$ at (X, Y), we obtain, for $$-T\leqslant t \leqslant T$$,
\begin{eqnarray*} u(t)&=&X+t\ f_{0}+\sum_{k\neq 0}\alpha_{k}(t)f_{k}+{\int_{0}^{t}} s \frac{\partial f_{0}}{\partial X}\ f_{0}\,\mathrm{d}s +{\int_{0}^{t}} \sum_{k\neq0}\alpha_{k}(s)\frac{\partial f_{0}}{\partial X}\ f_{k}\,\mathrm{d}s \\ &&+{\int_{0}^{t}} \sum_{k\neq0}\alpha_{k}(s)\frac{\partial f_{0}}{\partial Y}\ f_{k\tau}\,\mathrm{d}s +{\int_{0}^{t}} s\frac{\partial f_{0}}{\partial Y}\ f_{0\tau}\,\mathrm{d}s +{\int_{0}^{t}} \sum_{k\neq0}\exp(ik\varOmega s)s\frac{\partial f_{k}}{\partial X}\ f_{0}\,\mathrm{d}s \\ &&+{\int_{0}^{t}} \sum_{k,m\neq0}\exp(ik\varOmega s)\alpha_{m}(s)\frac{\partial f_{k}}{\partial X}\ f_{m}\,\mathrm{d}s+{\int_{0}^{t}}\sum_{k\neq0}\exp(ik\varOmega s)s\frac{\partial f_{k}}{\partial Y}\ f_{0\tau}\,\mathrm{d}s \\ &&+{\int_{0}^{t}} \sum_{k,m\neq0}\exp(ik\varOmega s)\alpha_{m}(s)\frac{\partial f_{k}}{\partial Y}\ f_{m\tau}\,\mathrm{d}s
+\mathscr{O}\left(\frac{1}{\varOmega^{3}}\right). \end{eqnarray*}
The proof concludes by evaluating this expression at t = ±T and substituting the resulting values into the left-hand side of (4.4).

According to this result, (4.1) and (4.3) provide approximations with $$\mathscr{O}(1/\varOmega )$$ errors to (3.8) and (3.9) (evaluated at $$X=X_{n}$$, $$Y=Y_{n}$$, $$t=t_{n}$$), respectively, as expected from the use of forward differencing. Similarly, (4.2) approximates (3.8) (at $$X=X_{n}$$, $$Y=Y_{n}$$, $$t=t_{n}$$) with $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$ error, as expected of central differences. However, when comparing (4.4) with (3.9) (at $$X=X_{n}$$, $$Y=Y_{n}$$, $$Z=Z_{n}$$, $$t=t_{n}$$), we observe that the last sum in (3.9) has a factor $$\exp (ik\varOmega \tau )$$ which is not present in the last sum in (4.4), and therefore the error is, in general, only $$\mathscr{O}(1/\varOmega )$$. To achieve $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$ errors, we may assume that the functions $$f_{k}(X,Y)$$, $$k\neq 0$$, are independent of the second argument Y, i.e. that the delayed argument y only appears in f through $$f_{0}$$ (this is the case in (1.1)). Alternatively, we may assume that, for all $$k\neq 0$$, $$\exp (ik\varOmega \tau )=1$$, i.e. that $$\tau$$ is an integer multiple of the period $$T = 2\pi /\varOmega$$. We are now ready to give the main result of this section.

Theorem 4.3 Assume that the problem (3.1), with the smoothness assumptions stated in the preceding section, is integrated with SAM with exact micro-integrations. In addition assume that one of the following hypotheses holds:

(H1) The oscillatory Fourier coefficients $$f_{k}$$, $$k\neq 0$$, of f do not depend on the delayed argument y.

(H2) The delay $$\tau$$ is an integer multiple of the period $$T = 2\pi /\varOmega$$.
Then there exist constants $$\varOmega _{0}$$, $$H_{0}$$ and K such that, for $$\varOmega \geqslant \varOmega _{0}$$, $$H\leqslant H_{0}$$, $$0\leqslant t_{n}=nH\leqslant t_{max}$$, the difference between the numerical solution and the solution of the averaged problem has the bound
\begin{align} \|X_{n}-X(t_{n}) \|\leqslant K\left( H^{2}+\frac{H}{\varOmega}+\frac{1}{\varOmega^{2}}\right). \end{align} (4.5)

Proof. We apply Proposition 4.1 with B taken as a large ball containing the trajectory X(t), $$0\leqslant t\leqslant t_{max}$$, in its interior; the vectors $$X(t_{n})$$ play the role of $$\tilde{X}_{n}$$ and the vectors $$X_{n}$$ play the role of $$\hat{X}_{n}$$. It is clear that each $$\tilde{X}_{n}$$ is in B. For H sufficiently small and $$\varOmega$$ sufficiently large, the same will be true for each $$\hat{X}_{n}$$; this is established by means of a standard argument by contradiction using (4.5). Each residual $$\tilde{\rho }_{n}$$ is $$\mathscr{O}\big (H^{2}\big )$$, with the exceptions of $$\tilde{\rho }_{0}$$, $$\tilde{\rho }_{N}$$ and $$\tilde{\rho }_{2N}$$; these are only $$\mathscr{O}(H)$$, because the first two correspond to Euler steps and in the third there is a jump discontinuity in the second time derivative of the averaged solution. According to Proposition 4.2, the residuals $$\hat{\rho }_{n}$$ for the numerical solution are $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$, with the exceptions of $$\hat{\rho }_{0}$$ and $$\hat{\rho }_{N}$$, which are of size $$\mathscr{O}(1/\varOmega )$$. Inserting these results into Proposition 4.1, we obtain a global error bound of the desired form.

Remark 4.4 It is clear that the bound in (4.5) may be replaced by one of the form $$K^{\prime } \big (H^{2}+1/\varOmega ^{2}\big )$$.
We prefer the form (4.5), as it relates to the three sources of error: the $$H^{2}$$ error of the macro-integration, the $$H/\varOmega$$ error arising from first-order differencing at 0, $$\tau$$, $$2\tau$$, and the $$1/\varOmega ^{2}$$ error from second-order differencing at all other step points $$t_{n}$$.

Remark 4.5 The discrepancy between (4.4) and (3.9) that leads to the introduction of (H1) and (H2) stems from the fact that the values of the phase $$\theta$$ at $$t_{n}$$ and $$t_{n-N} = t_{n}-\tau$$ are in general different in the oscillatory problem, but the same in SAM. The hypothesis (H1) holds in most applications; in fact, in all the papers mentioned in the introduction the delay system is obtained by adding to the right-hand side of an oscillatory ordinary differential system a linear feedback term My, with M a constant matrix. (H2) is relevant in those studies where there is freedom to choose the exact value of the large frequency $$\varOmega$$.

Remark 4.6 If (H1) and (H2) do not hold, the same proof yields for our algorithm a bound of the form $$K\big (H^{2}+1/\varOmega \big )$$ under the assumption that f and $$\varphi$$ are $$C^{2}$$ functions. Numerical experiments reported in Section 6 reveal that in that case the bound cannot be improved to $$K\big (H^{2}+1/\varOmega ^{2}\big )$$. However, if (H1) and (H2) do not hold, errors of size $$K\big (H^{2}+1/\varOmega \big )$$ may be obtained by means of a simpler algorithm based on applying forward differences at each step point $$t_{n}$$; obviously that alternative algorithm does not require the backward integration legs (2.8).

5. Error analysis: micro-integration errors

We now take into account the errors introduced by the Euler micro-integration. We begin with an auxiliary result. Note the improved error bound at the end of the integration interval.
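Before that auxiliary result, the two differencing orders at stake in Theorem 4.3 can be observed on a scalar toy problem (ours, not one from the paper) whose solution, and hence whose forward and central slope estimates, are available in closed form:

```python
import numpy as np

def slope_errors(Omega, X=1.0):
    """Errors of the forward and central slope estimates at t = 0.

    Toy problem: u' = u*(1 + cos(Omega*t)), u(0) = X, with exact solution
    u(t) = X*exp(t + sin(Omega*t)/Omega) and averaged slope f_0(X) = X.
    """
    T = 2.0 * np.pi / Omega
    u = lambda t: X * np.exp(t + np.sin(Omega * t) / Omega)
    fwd = (u(T) - u(0.0)) / T          # forward difference, as in (4.1)/(4.3)
    ctr = (u(T) - u(-T)) / (2.0 * T)   # central difference, as in (4.2)/(4.4)
    return abs(fwd - X), abs(ctr - X)
```

Doubling $$\varOmega$$ roughly halves the forward-difference error and quarters the central one, matching the $$\mathscr{O}(1/\varOmega )$$ and $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$ rates.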
Proposition 5.1 Consider the application of Euler's rule with constant step size $$h = T/\nu _{max}$$ to integrate in the interval $$0\leqslant t\leqslant T$$ the initial value problem $$\dot u = f(u,v,\varOmega t)$$, u(0) = X, where v is a known $$C^{1}$$ function. Denote by $$u_{\nu }$$ the Euler solution at $$t =\nu h$$. There are constants C, $$\varOmega _{0}$$ and $$h_{0}$$ such that, for $$h\leqslant h_{0}$$, $$\varOmega \geqslant \varOmega _{0}$$, the following bounds hold:
$$\| u_{\nu}-u(\nu h)\| \leqslant C h,\qquad \nu = 0, 1,\dots, \nu_{max},$$ (5.1)
$$\| u_{\nu_{max}}-u(T)\| \leqslant C \frac{h}{\varOmega},$$ (5.2)
$$\left \| \frac{u_{\nu_{max}}-X}{T}-\frac{u(T)-X}{T}\right\| \leqslant C h.$$ (5.3)

Proof. A standard error bound for Euler's rule is
$$\| u_{\nu}-u(\nu h)\| \leqslant \frac{\exp(LT)-1}{L} M h,\qquad \nu = 0,\dots,\nu_{max},$$
where L is the Lipschitz constant of f with respect to u in a neighbourhood of the solution and M is an upper bound for $$\|(1/2) \ddot u(t)\|$$, $$0\leqslant t\leqslant T$$. In the present circumstances we have to take into account that, as $$\varOmega \rightarrow \infty$$, the length $$T=2\pi / \varOmega$$ of the integration interval decreases and M grows like $$\varOmega$$, because
$$\ddot u = \frac{\partial f}{\partial u} \dot u+\frac{\partial f}{\partial v} \dot v+\varOmega \frac{\partial f}{\partial t}.$$
From the elementary inequality $$(\exp (LT)-1)/L \leqslant T\exp (LT)$$ and the standard bound, we have
$$\| u_{\nu}-u(\nu h)\| \leqslant T \exp(LT)M h,\qquad \nu = 0,\dots,\nu_{max},$$
and therefore (5.1) holds.
By adding all the Euler equations, we find
$$u_{\nu_{max}} = X + \sum_{\nu=0}^{\nu_{max}-1} h f(u_{\nu},v(\nu h),\varOmega \nu h),$$
and from (5.1),
\begin{align} u_{\nu_{max}} = X +\sum_{\nu=0}^{\nu_{max}-1} h f(u(\nu h),v(\nu h),\varOmega \nu h) +\mathscr{O}(hT), \end{align} (5.4)
a relation that has to be compared with
\begin{align} u(T) = X +{\int_{0}^{T}} f(u(s),v(s), \varOmega s)\, \mathrm{d}s. \end{align} (5.5)
The bound (5.2) will be established if we show that the quadrature sum in (5.4) approximates the integral in (5.5) with errors of size $$\mathscr{O}(hT)$$. To this end we decompose the function being integrated as
$$f(u(s),v(s),\varOmega s) = f(X,v(0),\varOmega s) + ( f(u(s),v(s), \varOmega s) -f(X,v(0),\varOmega s)) = f_{1}+f_{2}.$$
It is easily seen (for instance, by expanding in a Fourier series) that the total time derivative $$({\textrm d}/{\textrm d}t)f_{2}$$ remains bounded as $$\varOmega \rightarrow \infty$$; elementary results then show that the quadrature of $$f_{2}$$ has errors of the desired size $$\mathscr{O}(hT)$$. On the other hand, the time derivative of $$f_{1}$$ grows like $$\varOmega$$, and quadrature errors of size $$\mathscr{O}(h)$$ might be feared. Fortunately, the quadrature for $$f_{1}$$ is actually exact, because one checks by an explicit computation that it is exact for each Fourier mode $$f_{k}(X,v(0)) \exp (ik\varOmega s)$$. The third bound (5.3) is a trivial consequence of (5.2). It goes without saying that the corresponding result holds for backward integrations with $$-T\leqslant t\leqslant 0$$.

The following theorem provides the main result of this paper.

Theorem 5.2 Assume that the problem (3.1), with the smoothness assumptions stated in the preceding sections, is integrated with SAM. In addition assume that one of the following hypotheses holds:

(H1) The oscillatory Fourier coefficients $$f_{k}$$, $$k\neq 0$$, of f do not depend on the delayed argument y.
(H2) The delay $$\tau$$ is an integer multiple of the period $$T = 2\pi /\varOmega$$.

Then there exist constants $$\varOmega _{0}$$, $$H_{0}$$, $$h_{0}$$ and K such that, for $$\varOmega \geqslant \varOmega _{0}$$, $$H\leqslant H_{0}$$, $$h\leqslant h_{0}$$, $$0\leqslant t_{n}=nH\leqslant t_{max}$$, the difference between the numerical solution and the solution of the averaged problem may be bounded as follows:
$$\left\|X_{n}-X\big(t_{n}\big) \right\| \leqslant K\left( H^{2}+\frac{H}{\varOmega}+\frac{1}{\varOmega^{2}}+h\right).$$
In particular, if the grids are refined in such a way that h is taken proportional to $$H/\varOmega$$ (i.e. $$\nu _{max}$$ is taken proportional to N), then the bound becomes
\begin{align} \left\|X_{n}-X\big(t_{n}\big) \right\| \leqslant K^{\prime}\left( H^{2}+\frac{H}{\varOmega}+\frac{1}{\varOmega^{2}}\right). \end{align} (5.6)

Proof. We argue as in Theorem 4.3. Now, in the residual for the numerical solution $$\big \{X_{n}\big \}$$, we have to take into account the micro-integration error. For $$n\leqslant N$$, the bound (5.3) (and the corresponding bound for the backward integration) shows that the micro-integration adds a term of size $$\mathscr{O}(h)$$ to the residual. For n > N the situation is slightly more complicated, because the algorithm uses past values $$v_{n,\nu }$$ that are themselves affected by micro-integration errors. However, the stability of the micro-integrator guarantees that, even when those errors are taken into account, an estimate like (5.3) holds. We recall that taking h proportional to $$H/\varOmega$$ makes the complexity of the algorithm independent of $$\varOmega$$.

6. Numerical experiments

Table 2 Errors in $$x_{1}$$ for SAM with respect to the averaged solution for problem (1.1).
Step points are not stroboscopic times

N     Ω=25      Ω=50      Ω=100     Ω=200     Ω=400     Ω=800     Ω=1600    Ω=3200
1     6.28(-2)  3.42(-2)  1.71(-2)  7.87(-3)  3.14(-3)  1.66(-3)  2.04(-3)  2.30(-3)
2     ***       7.66(-3)  3.74(-3)  1.66(-3)  8.27(-4)  7.56(-4)  7.20(-4)  7.02(-4)
4     ***       ***       1.11(-3)  4.45(-4)  2.60(-4)  2.20(-4)  1.99(-4)  1.88(-4)
8     ***       ***       ***       1.80(-4)  6.35(-5)  5.57(-5)  5.06(-5)  4.77(-5)
16    ***       ***       ***       ***       3.20(-5)  1.27(-5)  1.22(-5)  1.18(-5)
32    ***       ***       ***       ***       ***       6.31(-6)  2.81(-6)  2.85(-6)
64    ***       ***       ***       ***       ***       ***       1.36(-6)  6.46(-7)
128   ***       ***       ***       ***       ***       ***       ***       3.22(-7)
We now present experiments that illustrate the preceding theory. We first integrated with SAM the toggle switch problem (1.1) with $$\alpha =2.5$$, $$\beta =2$$, A = 0.1, $$\omega =0.1$$, B = 4.0 and $$\tau =0.5$$.
The function $$\varphi$$ is constant: $$x_{1}(t) = 0.5$$, $$x_{2}(t) = 2.0$$, $$-\tau \leqslant t\leqslant 0$$, which corresponds to the system staying at an equilibrium up to time t = 0 and then switching on the slow and fast oscillatory forcing terms. For the macro-step size we set $$H = \tau /N$$, $$N=1,2,4,\dots$$, and the micro-step size was chosen as h = T/(2N) (note that for the coarsest macro-step size, N = 1, there are only two Euler steps per period). Simulations took place in the interval $$0\leqslant t\leqslant 2$$, which includes the locations t = 0, $$t=\tau$$ and $$t=2\tau$$, where the growth in the error of the SAM solution is more pronounced due to the lack of smoothness. SAM has no difficulty in coping with the longer intervals required in practical simulations, but we have chosen a short interval because on longer intervals it may be very expensive to obtain a sufficiently accurate reference solution with which to measure errors; see in this connection the computational times quoted at the end of Section 7. Table 2 reports, for varying $$\varOmega$$ and N, the maximum error over $$0\leqslant t\leqslant 2$$ in the $$x_{1}$$ component of the SAM solution with respect to the averaged solution obtained by integrating (2.4) (this integration was carried out with the Matlab function dde23 with relative and absolute tolerances $$10^{-8}$$ and $$10^{-10}$$, respectively). The combinations of N and $$\varOmega$$ leading to values of H not significantly larger than T were not attempted, as the HMM idea does not make sense for them. Note that here $$\tau /T$$ is irrational and therefore the step points $$t_{n}$$ are not stroboscopic times. Figure 3 displays the errors in Table 2 as functions of N; for clarity, not all values of $$\varOmega$$ are included.
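For orientation, the kind of brute-force reference integration whose cost SAM is designed to avoid can be sketched as follows; this is a fixed-step Euler scheme of our own (the reference runs quoted in the text used the higher-order adaptive Matlab code dde23), with the micro-step snapped so that $$\tau$$ is an integer number of steps and the delayed terms can be read off the stored trajectory without interpolation:

```python
import numpy as np

def euler_dde_toggle(Omega, t_max=2.0, steps_per_period=40,
                     alpha=2.5, beta=2.0, A=0.1, omega=0.1, B=4.0, tau=0.5):
    """Fixed-step Euler for the forced toggle switch (1.1).

    Constant history x1 = 0.5, x2 = 2.0 on [-tau, 0]. Note that the number
    of steps, and hence the cost, grows linearly with Omega.
    """
    h = (2.0 * np.pi / Omega) / steps_per_period
    n_tau = max(1, int(round(tau / h)))
    h = tau / n_tau                       # now tau == n_tau * h exactly
    n = int(round(t_max / h))
    x1 = np.empty(n_tau + n + 1)
    x2 = np.empty(n_tau + n + 1)
    x1[:n_tau + 1] = 0.5                  # history on [-tau, 0]
    x2[:n_tau + 1] = 2.0
    for j in range(n_tau, n_tau + n):
        t = (j - n_tau) * h
        x1[j + 1] = x1[j] + h * (alpha / (1.0 + x2[j] ** beta) - x1[j - n_tau]
                                 + A * np.sin(omega * t) + B * np.sin(Omega * t))
        x2[j + 1] = x2[j] + h * (alpha / (1.0 + x1[j] ** beta) - x2[j - n_tau])
    return x1[n_tau:], x2[n_tau:]         # trajectory on [0, t_max]
```

For $$\varOmega = 3200$$ this already requires tens of thousands of steps on $$0\leqslant t\leqslant 2$$, whereas the SAM cost is governed by H alone.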
By looking at the columns of the table (or at each of the four solid lines in the figure) we see that the error behaves as $$N^{-2}$$, i.e. as $$H^{2}$$, except at the bottom of each column, where the behaviour is as $$N^{-1}$$. This is of course the behaviour of the bound in (5.6). Errors along rows saturate if $$\varOmega$$ is very large; for those values one just observes the error in the macro-integration. This behaviour is also seen in the figure by comparing points corresponding to the same value of N and varying $$\varOmega$$. Along the main diagonal of the table, errors approximately divide by four, which is also in agreement with the bound (5.6). In the figure this corresponds to observing the behaviour of the right-most point of each of the solid lines. Table 3 differs from Table 2 in that now $$\varOmega$$ is taken from the sequence $$8\pi$$, $$16\pi$$, …, which consists of values not very different from those in Table 2. In fact the errors in Table 3 are very similar to those in Table 2. However, for the sequence $$8\pi$$, $$16\pi$$, … the step points are stroboscopic times, and it makes sense to compare the SAM solution with the true oscillatory solution. The results are reported in Table 4. From Theorems 3.2 and 5.2, the errors with respect to the true solution possess a bound of the form (5.6), and this is consistent with the data in the table.

Fig. 3. Error in the first component of the SAM solution with respect to the solution of the averaged problem versus macro-step size $$H=\tau /N$$ for some of the simulations in Table 2. Different lines correspond to different values of $$\varOmega$$. The reference line shows the $$N^{-2}$$ behaviour.
Here the reference line shows the $$N^{-2}$$ behaviour. Table 3 Errors in $$x_{1}$$ for SAM with respect to the averaged solution for problem (1.1). Step points are stroboscopic times N  $$\varOmega =8\pi$$  $$\varOmega =16\pi$$  $$\varOmega =32\pi$$  $$\varOmega =64\pi$$  $$\varOmega =128\pi$$  $$\varOmega =256\pi$$  $$\varOmega =512\pi$$  $$\varOmega =1024\pi$$  1  6.25($$-$$2)  3.40($$-$$2)  1.70($$-$$2)  7.82($$-$$3)  3.11($$-$$3)  1.66($$-$$3)  2.04($$-$$3)  2.30($$-$$3)  2  ***  7.62($$-$$3)  3.72($$-$$3)  1.65($$-$$3)  8.26($$-$$4)  7.56($$-$$4)  7.20($$-$$4)  7.02($$-$$4)  4  ***  ***  1.11($$-$$3)  4.42($$-$$4)  2.59($$-$$4)  2.20($$-$$4)  1.99($$-$$4)  1.88($$-$$4)  8  ***  ***  ***  1.78($$-$$4)  6.34($$-$$5)  5.57($$-$$5)  5.06($$-$$5)  4.77($$-$$5)  16  ***  ***  ***  ***  3.16($$-$$5)  1.26($$-$$5)  1.22($$-$$5)  1.18($$-$$5)  32  ***  ***  ***  ***  ***  6.24($$-$$6)  2.80($$-$$6)  2.85($$-$$6)  64  ***  ***  ***  ***  ***  ***  1.35($$-$$6)  6.47($$-$$7)  128  ***  ***  ***  ***  ***  ***  ***  3.18($$-$$7)  N  $$\varOmega =8\pi$$  $$\varOmega =16\pi$$  $$\varOmega =32\pi$$  $$\varOmega =64\pi$$  $$\varOmega =128\pi$$  $$\varOmega =256\pi$$  $$\varOmega =512\pi$$  $$\varOmega =1024\pi$$  1  6.25($$-$$2)  3.40($$-$$2)  1.70($$-$$2)  7.82($$-$$3)  3.11($$-$$3)  1.66($$-$$3)  2.04($$-$$3)  2.30($$-$$3)  2  ***  7.62($$-$$3)  3.72($$-$$3)  1.65($$-$$3)  8.26($$-$$4)  7.56($$-$$4)  7.20($$-$$4)  7.02($$-$$4)  4  ***  ***  1.11($$-$$3)  4.42($$-$$4)  2.59($$-$$4)  2.20($$-$$4)  1.99($$-$$4)  1.88($$-$$4)  8  ***  ***  ***  1.78($$-$$4)  6.34($$-$$5)  5.57($$-$$5)  5.06($$-$$5)  4.77($$-$$5)  16  ***  ***  ***  ***  3.16($$-$$5)  1.26($$-$$5)  1.22($$-$$5)  1.18($$-$$5)  32  ***  ***  ***  ***  ***  6.24($$-$$6)  2.80($$-$$6)  2.85($$-$$6)  64  ***  ***  ***  ***  ***  ***  1.35($$-$$6)  6.47($$-$$7)  128  ***  ***  ***  ***  ***  ***  ***  3.18($$-$$7)  View Large Table 3 Errors in $$x_{1}$$ for SAM with respect to the averaged solution for problem (1.1). 
Step points are stroboscopic times N  $$\varOmega =8\pi$$  $$\varOmega =16\pi$$  $$\varOmega =32\pi$$  $$\varOmega =64\pi$$  $$\varOmega =128\pi$$  $$\varOmega =256\pi$$  $$\varOmega =512\pi$$  $$\varOmega =1024\pi$$  1  6.25($$-$$2)  3.40($$-$$2)  1.70($$-$$2)  7.82($$-$$3)  3.11($$-$$3)  1.66($$-$$3)  2.04($$-$$3)  2.30($$-$$3)  2  ***  7.62($$-$$3)  3.72($$-$$3)  1.65($$-$$3)  8.26($$-$$4)  7.56($$-$$4)  7.20($$-$$4)  7.02($$-$$4)  4  ***  ***  1.11($$-$$3)  4.42($$-$$4)  2.59($$-$$4)  2.20($$-$$4)  1.99($$-$$4)  1.88($$-$$4)  8  ***  ***  ***  1.78($$-$$4)  6.34($$-$$5)  5.57($$-$$5)  5.06($$-$$5)  4.77($$-$$5)  16  ***  ***  ***  ***  3.16($$-$$5)  1.26($$-$$5)  1.22($$-$$5)  1.18($$-$$5)  32  ***  ***  ***  ***  ***  6.24($$-$$6)  2.80($$-$$6)  2.85($$-$$6)  64  ***  ***  ***  ***  ***  ***  1.35($$-$$6)  6.47($$-$$7)  128  ***  ***  ***  ***  ***  ***  ***  3.18($$-$$7)  N  $$\varOmega =8\pi$$  $$\varOmega =16\pi$$  $$\varOmega =32\pi$$  $$\varOmega =64\pi$$  $$\varOmega =128\pi$$  $$\varOmega =256\pi$$  $$\varOmega =512\pi$$  $$\varOmega =1024\pi$$  1  6.25($$-$$2)  3.40($$-$$2)  1.70($$-$$2)  7.82($$-$$3)  3.11($$-$$3)  1.66($$-$$3)  2.04($$-$$3)  2.30($$-$$3)  2  ***  7.62($$-$$3)  3.72($$-$$3)  1.65($$-$$3)  8.26($$-$$4)  7.56($$-$$4)  7.20($$-$$4)  7.02($$-$$4)  4  ***  ***  1.11($$-$$3)  4.42($$-$$4)  2.59($$-$$4)  2.20($$-$$4)  1.99($$-$$4)  1.88($$-$$4)  8  ***  ***  ***  1.78($$-$$4)  6.34($$-$$5)  5.57($$-$$5)  5.06($$-$$5)  4.77($$-$$5)  16  ***  ***  ***  ***  3.16($$-$$5)  1.26($$-$$5)  1.22($$-$$5)  1.18($$-$$5)  32  ***  ***  ***  ***  ***  6.24($$-$$6)  2.80($$-$$6)  2.85($$-$$6)  64  ***  ***  ***  ***  ***  ***  1.35($$-$$6)  6.47($$-$$7)  128  ***  ***  ***  ***  ***  ***  ***  3.18($$-$$7)  View Large In order to check numerically that the hypotheses (H1)–(H2) are necessary to ensure a bound of the form (5.6), we have considered the simple scalar equation   \begin{align} \dot{x}=y+(x-y)\sin(\varOmega t)+\frac{y}{2}\cos(2\varOmega t); 
\end{align} (6.1)

This is an academic example where (H1) does not hold (recall that (H1) holds in all the systems from the literature cited). The averaged version is   $$\dot{X}=Y-\frac{1}{\varOmega}Y$$ for $$0\leqslant t<\tau$$ and   $$\dot{X}=Y+\frac{1}{\varOmega}\left(\frac{Y}{2}-\frac{Z}{2}\right)\sin(\varOmega\tau)-\frac{1}{16\varOmega}Z\sin(2\varOmega \tau)$$ for $$t\geqslant \tau$$.

Table 4. Errors in $$x_{1}$$ for SAM with respect to the true oscillatory solution for problem (1.1). Step points are stroboscopic times.

| $$N$$ | $$\varOmega =8\pi$$ | $$\varOmega =16\pi$$ | $$\varOmega =32\pi$$ | $$\varOmega =64\pi$$ | $$\varOmega =128\pi$$ | $$\varOmega =256\pi$$ | $$\varOmega =512\pi$$ | $$\varOmega =1024\pi$$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 6.10(−2) | 3.30(−2) | 1.66(−2) | 7.74(−3) | 3.09(−3) | 1.66(−3) | 2.04(−3) | 2.30(−3) |
| 2 | *** | 6.65(−3) | 3.41(−3) | 1.56(−3) | 8.31(−4) | 7.57(−4) | 7.20(−4) | 7.02(−4) |
| 4 | *** | *** | 7.95(−4) | 3.56(−4) | 2.63(−4) | 2.21(−4) | 1.99(−4) | 1.88(−4) |
| 8 | *** | *** | *** | 9.25(−5) | 6.62(−5) | 5.64(−5) | 5.07(−5) | 4.77(−5) |
| 16 | *** | *** | *** | *** | 1.50(−5) | 1.34(−5) | 1.23(−5) | 1.18(−5) |
| 32 | *** | *** | *** | *** | *** | 3.03(−6) | 2.95(−6) | 2.88(−6) |
| 64 | *** | *** | *** | *** | *** | *** | 6.44(−7) | 6.76(−7) |
| 128 | *** | *** | *** | *** | *** | *** | *** | 1.43(−7) |

Table 5. Errors between the SAM solution and the averaged solution for problem (6.1). (H2) holds.

| $$N$$ | $$\varOmega =8\pi$$ | $$\varOmega =16\pi$$ | $$\varOmega =32\pi$$ | $$\varOmega =64\pi$$ | $$\varOmega =128\pi$$ | $$\varOmega =256\pi$$ | $$\varOmega =512\pi$$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 3.08(−2) | 3.00(−2) | 2.97(−2) | 2.95(−2) | 2.95(−2) | 2.95(−2) | 2.94(−2) |
| 2 | *** | 8.41(−3) | 8.36(−3) | 8.35(−3) | 8.34(−3) | 8.33(−3) | 8.33(−3) |
| 4 | *** | *** | 2.21(−3) | 2.26(−3) | 2.28(−3) | 2.29(−3) | 2.29(−3) |
| 8 | *** | *** | *** | 5.64(−4) | 5.84(−4) | 5.91(−4) | 5.94(−4) |
| 16 | *** | *** | *** | *** | 1.42(−4) | 1.48(−4) | 1.50(−4) |
| 32 | *** | *** | *** | *** | *** | 3.57(−5) | 3.73(−5) |
| 64 | *** | *** | *** | *** | *** | *** | 8.94(−6) |

We used $$\tau = 0.5$$, $$H = \tau /N$$, h = T/(5N), a constant $$\varphi = 0.1$$ and, as before, measured errors in the maximum norm for $$0\leqslant t\leqslant 2$$. Table 5 gives errors when $$\varOmega$$ is taken from the sequence $$8\pi$$, $$16\pi$$, …, so that the periods T are 1/4, 1/8, … Hypothesis (H1) does not hold, but (H2) does, so that Theorem 5.2 may be applied. In fact an $$N^{-2}$$ (or equivalently $$\varOmega ^{-2}$$) behaviour is seen along the diagonals of the table.
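Whether the step points $$t_{n}=nH$$ are stroboscopic times can be decided exactly with rational arithmetic: with $$H=\tau /N$$ and $$T=2\pi /\varOmega$$, every $$t_{n}$$ is a multiple of T precisely when $$H/T=\varOmega \tau /(2\pi N)$$ is an integer. A sketch (writing $$\varOmega =\mu \pi$$ so that $$\pi$$ cancels; the helper name is ours):

```python
from fractions import Fraction

def steps_are_stroboscopic(mu, tau, N):
    """True when every step point t_n = n*tau/N is a stroboscopic time
    k*T, T = 2*pi/Omega, for the frequency Omega = mu*pi.
    This holds iff H/T = Omega*tau/(2*pi*N) = mu*tau/(2*N) is an integer."""
    ratio = Fraction(mu)*Fraction(tau)/(2*N)
    return ratio.denominator == 1
```

With $$\tau =0.5$$, the frequency $$\varOmega =8\pi$$ gives H/T = 2 for N = 1 (stroboscopic), whereas the nearby perturbed frequency $$8\pi +\pi /64$$ used in the next experiment gives H/T = 513/256, so the step points are no longer stroboscopic times.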
We next slightly changed the frequencies and used the sequence $$8\pi +\pi /64$$, $$16\pi +\pi /32$$, … (this represents an increase of less than 0.2% in each frequency). Neither (H1) nor (H2) is fulfilled and Theorem 5.2 may not be applied. The results in Table 6 show that, for large $$\varOmega$$, the second-order behaviour along the main diagonal is lost, indicating that the error does not behave as in the bound (5.6). Note the substantial difference between Tables 5 and 6 at N = 64, in spite of the very small relative change in the value of $$\varOmega$$.

Table 6. Errors in x between the SAM solution and the averaged solution for problem (6.1). Neither (H1) nor (H2) holds.

| $$N$$ | $$\varOmega =8\pi +\frac{\pi }{64}$$ | $$\varOmega =16\pi +\frac{\pi }{32}$$ | $$\varOmega =32\pi +\frac{\pi }{16}$$ | $$\varOmega =64\pi +\frac{\pi }{8}$$ | $$\varOmega =128\pi +\frac{\pi }{4}$$ | $$\varOmega =256\pi +\frac{\pi }{2}$$ | $$\varOmega =512\pi +\pi$$ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 3.08(−2) | 3.00(−2) | 2.97(−2) | 2.95(−2) | 2.95(−2) | 2.95(−2) | 2.95(−2) |
| 2 | *** | 8.43(−3) | 8.38(−3) | 8.36(−3) | 8.36(−3) | 8.36(−3) | 8.36(−3) |
| 4 | *** | *** | 2.23(−3) | 2.28(−3) | 2.30(−3) | 2.31(−3) | 2.32(−3) |
| 8 | *** | *** | *** | 5.78(−4) | 6.00(−4) | 6.12(−4) | 6.23(−4) |
| 16 | *** | *** | *** | *** | 1.58(−4) | 1.69(−4) | 1.79(−4) |
| 32 | *** | *** | *** | *** | *** | 5.63(−5) | 6.60(−5) |
| 64 | *** | *** | *** | *** | *** | *** | 3.76(−5) |

7. Extensions

We finally consider the application of SAM to problems that are not of the form (3.1). The number of possible variants is very high and we restrict attention to reporting numerical results for a case study; the corresponding analysis may be carried out by adapting the proofs given in the preceding sections. We study again the toggle switch problem, but now in an alternative asymptotic regime. The system is given by   \begin{eqnarray} \frac{{\textrm d}x_{1}}{{\textrm d}t}&=&\frac{\alpha}{1+x_{2}^{\beta}}-x_{1}(t-\tau)+A\sin(\omega t)+\hat{B}\varOmega\sin(\varOmega t),\\ \nonumber \frac{{\textrm d}x_{2}}{{\textrm d}t}&=&\frac{\alpha}{1+x_{1}^{\beta}}-x_{2}(t-\tau), \end{eqnarray} (7.1) where $$\hat{B}$$ is a constant and the other symbols are as before. As $$\varOmega \rightarrow \infty$$, the variable $$x_{1}$$ undergoes oscillations of frequency $$\varOmega$$ and $$\mathscr{O}(1)$$ amplitude, which, for large $$\varOmega$$, makes the direct numerical integration of the system more expensive than that of (1.1) (there the amplitude is $$\mathscr{O}(1/\varOmega )$$). For an analytic treatment, we begin by performing, for $$t\geqslant 0$$, the preliminary stroboscopic change of variables   \begin{eqnarray*} x_{1}&=&X_{1}+\hat{B}(1-\cos(\varOmega t)),\\ x_{2}&=&X_{2}, \end{eqnarray*} which differs from the identity in $$\mathscr{O}(1)$$ quantities.
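The change of variables is stroboscopic in the sense that it reduces to the identity at the times $$t=kT$$: there $$1-\cos(\varOmega t)=0$$, so $$x_{1}=X_{1}$$ exactly, while in between the shift stays bounded by $$2\hat{B}$$ uniformly in $$\varOmega$$, unlike the $$\mathscr{O}(\varOmega )$$ forcing it removes. A quick numerical check of both facts (the value $$\hat{B}=0.1$$ and the frequency are illustrative):

```python
import math

def shift(t, Omega, Bhat=0.1):
    """Offset x1 - X1 introduced by the preliminary change of variables."""
    return Bhat*(1.0 - math.cos(Omega*t))

Omega = 64*math.pi            # fast period T = 2*pi/Omega = 1/32
T = 2*math.pi/Omega

# At the stroboscopic times t = k*T the offset vanishes: x1 = X1 there.
offsets_at_strobo = [shift(k*T, Omega) for k in range(10)]

# Away from those times the offset peaks at 2*Bhat = 0.2, an O(1)
# quantity independent of Omega.
peak = max(shift(j*T/400, Omega) for j in range(401))
```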
This leads to   \begin{eqnarray} \frac{{\textrm d}X_{1}}{{\textrm d}t}&=&\frac{\alpha}{1+X_{2}^{\beta}}-X_{1}(t-\tau)-\hat{B}\,1_{\{t\geqslant \tau\}}+A\sin(\omega t),\\ \nonumber \frac{{\textrm d}X_{2}}{{\textrm d}t}&=&\frac{\alpha}{1+(X_{1}+\hat{B}(1-\cos(\varOmega t)))^{\beta}}-X_{2}(t-\tau). \end{eqnarray} (7.2)

The highly oscillatory forcing has been reduced from $$\mathscr{O}(\varOmega )$$ to $$\mathscr{O}(1)$$ and, in principle, it is possible to average (7.2) by the techniques used to deal with (3.1). Unfortunately, finding the required Fourier coefficients in closed form does not appear to be possible in general. In the particular case where $$\beta = 2$$, the zero Fourier coefficient may be found by evaluating the relevant integral (3.6) with the help of the residue theorem. This leads to the averaged system of the form $$\dot X = f_{0}$$ explicitly given by   \begin{eqnarray} \frac{{\textrm d}X_{1}}{{\textrm d}t}&=&\frac{\alpha}{1+{X_{2}^{2}}}-X_{1}(t-\tau)-\hat{B}\,1_{\{t\geqslant \tau\}}+A\sin(\omega t),\\ \nonumber \frac{{\textrm d}X_{2}}{{\textrm d}t}&=&\alpha\frac{\sqrt{-\frac{M}{2}+\frac{\sqrt{N}}{2}}+\left(X_{1}+\hat{B}\right)\sqrt{\frac{M}{2}+\frac{\sqrt{N}}{2}}} {\left(\frac{M}{2}+\frac{\sqrt{N}}{2}\right)^{2}+\left(X_{1}+\hat{B}\right)^{2}+\sqrt{N}}-X_{2}(t-\tau), \end{eqnarray} (7.3) with $$M={X_{1}^{2}}+2\hat{B}X_{1}-1$$, $$N=M^{2}+4\left (X_{1}+\hat{B}\right )^{2}$$, whose solutions approximate x with errors of size $$\mathscr{O}(1/\varOmega )$$ at stroboscopic times; in fact, since (7.2) is even in $$\varOmega$$, in this particular case the errors are actually $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$. Table 7 presents the errors in the SAM solution measured with respect to the solution of (7.3). The experiments have $$\hat{B} = 0.1$$; all other details are as in the toggle switch simulations of the preceding section. The table shows that the performance of SAM is very similar to that encountered for problem (1.1).
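The residue-theorem expression in (7.3) can be checked against a direct numerical average of the oscillatory term. The sketch below takes $$\beta =2$$ with the illustrative values $$\alpha =1$$, $$\hat{B}=0.1$$ and compares the closed form with a trapezoidal average over one fast period:

```python
import math

def f0_closed(x1, alpha=1.0, Bhat=0.1):
    """Zero Fourier coefficient of alpha/(1 + (x1 + Bhat*(1-cos s))^2),
    i.e. the oscillatory part of the X2 equation in (7.3) (beta = 2),
    computed from the closed-form residue expression."""
    M = x1**2 + 2.0*Bhat*x1 - 1.0
    N = M**2 + 4.0*(x1 + Bhat)**2
    sN = math.sqrt(N)                       # sqrt(N) >= |M|, so both radicands are >= 0
    num = math.sqrt(-M/2 + sN/2) + (x1 + Bhat)*math.sqrt(M/2 + sN/2)
    den = (M/2 + sN/2)**2 + (x1 + Bhat)**2 + sN
    return alpha*num/den

def f0_quadrature(x1, alpha=1.0, Bhat=0.1, n=4096):
    """Average the same term over one period with the trapezoidal rule,
    which is spectrally accurate for smooth periodic integrands."""
    return sum(alpha/(1.0 + (x1 + Bhat*(1.0 - math.cos(2*math.pi*j/n)))**2)
               for j in range(n))/n
```

At representative values of $$X_{1}$$ the two computations agree to roundoff, supporting the residue evaluation of (3.6).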
We also measured errors with respect to the oscillatory solution and found that they are very close to those reported here, i.e. the situation is similar to that seen when comparing Tables 3 and 4. Finally we mention that, for the choice of constants considered here, the integration (in the interval $$0\leqslant t\leqslant 2$$) of the oscillatory problem for $$\varOmega =1024\pi$$ with dde23 on a laptop computer took more than 9,000 seconds, whereas the corresponding SAM solution with the smallest value of H took approximately one second. Since, as pointed out before, the study of vibrational resonance requires integrations over time intervals two orders of magnitude longer than $$0\leqslant t\leqslant 2$$, it is clear that, for many choices of the values of the constants that appear in the model, a direct numerical integration of the oscillatory problem is not feasible for large values of $$\varOmega$$.

Table 7. Errors in $$x_{1}$$ for SAM with respect to the averaged solution for problem (7.1).

| $$N$$ | $$\varOmega =8\pi$$ | $$\varOmega =16\pi$$ | $$\varOmega =32\pi$$ | $$\varOmega =64\pi$$ | $$\varOmega =128\pi$$ | $$\varOmega =256\pi$$ | $$\varOmega =512\pi$$ | $$\varOmega =1024\pi$$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 4.10(−2) | 4.09(−2) | 4.08(−2) | 4.08(−2) | 4.08(−2) | 4.07(−2) | 4.07(−2) | 4.07(−2) |
| 2 | *** | 8.08(−3) | 7.89(−3) | 7.81(−3) | 7.77(−3) | 7.76(−3) | 7.75(−3) | 7.75(−3) |
| 4 | *** | *** | 1.78(−3) | 1.70(−3) | 1.67(−3) | 1.65(−3) | 1.64(−3) | 1.64(−3) |
| 8 | *** | *** | *** | 4.29(−4) | 4.08(−4) | 3.99(−4) | 3.99(−4) | 4.06(−4) |
| 16 | *** | *** | *** | *** | 1.06(−4) | 1.01(−4) | 1.04(−4) | 1.07(−4) |
| 32 | *** | *** | *** | *** | *** | 2.62(−5) | 2.50(−5) | 2.68(−5) |
| 64 | *** | *** | *** | *** | *** | *** | 6.53(−6) | 6.47(−6) |
| 128 | *** | *** | *** | *** | *** | *** | *** | 1.63(−6) |

Funding

Ministerio de Economía y Comercio (MTM2013-46553-C3-1-P to J.M.S.-S.); Ministerio de Economía, Industria y Competitividad, Spain (MTM2016-77660-P (AEI/FEDER, UE) to J.M.S.-S.); National Natural Science Foundation of China (11371357 and 11771438 to B.Z.).

Acknowledgements

B.Z. is grateful to Universidad Carlos III de Madrid for hosting the stay in Spain that made this work possible and to the Chinese Scholarship Council for providing the necessary funds. The authors are thankful to M. A. F. Sanjuán and A. Daza for bringing to their attention the vibrational resonance phenomenon, the toggle switch problem and other highly oscillatory systems with delay.

Footnotes

1. Assuming that $$\varphi$$ does not depend on $$\varOmega$$ implies no loss of generality, as the general case may be reduced to the $$\varOmega$$-independent case by introducing a new dependent variable $$x(t)-\varPhi (t)$$, where $$\varPhi (t)$$ coincides with $$\varphi (t)$$ for $$-\tau \leqslant t\leqslant 0$$.

2. For typographic reasons only, this table and the next have one column less than those presented before. There is nothing unexpected in the results corresponding to the omitted frequency $$\varOmega = 1024\pi$$ (or $$\varOmega = 1024\pi +2\pi$$).

References

Ariel, G., Engquist, B. & Tsai, R. (2009) A multiscale method for highly oscillatory ordinary differential equations with resonance. Math. Comput., 78, 929–956.
Boutle, I., Taylor, R. H. S. & Römer, R. A. (2007) El Niño and the delay action oscillator. Am. J. Phys., 75, 15–24.
Calvo, M. P., Chartier, Ph., Murua, A. & Sanz-Serna, J. M. (2011a) Numerical stroboscopic averaging for ODEs and DAEs. Appl. Numer. Math., 61, 1077–1095.
Calvo, M. P., Chartier, Ph., Murua, A. & Sanz-Serna, J. M. (2011b) A stroboscopic method for highly oscillatory problems. Numerical Analysis and Multiscale Computations (B. Engquist, O. Runborg & R. Tsai eds). New York: Springer, pp. 73–87.
Calvo, M. P. & Sanz-Serna, J. M. (2009) Instabilities and inaccuracies in the integration of highly oscillatory problems. SIAM J. Sci. Comput., 31, 1653–1677.
Calvo, M. P. & Sanz-Serna, J. M. (2010) Heterogeneous multiscale methods for mechanical systems with vibrations. SIAM J. Sci. Comput., 32, 2029–2046.
Chartier, P., Murua, A. & Sanz-Serna, J. M. (2010) Higher-order averaging, formal series and numerical integration I: B-series. Found. Comput. Math., 10, 695–727.
Chartier, P., Murua, A. & Sanz-Serna, J. M. (2012) Higher-order averaging, formal series and numerical integration II: the quasi-periodic case. Found. Comput. Math., 12, 471–508.
Chartier, P., Murua, A. & Sanz-Serna, J. M. (2015) Higher-order averaging, formal series and numerical integration III: error bounds. Found. Comput. Math., 15, 591–612.
Coccolo, M., Zhu, B., Sanjuán, M. A. F. & Sanz-Serna, J. M. (2018) Bogdanov–Takens resonance in time-delayed systems. Nonlinear Dyn., 91, 1939–1947.
Daza, A., Wagemakers, A., Rajasekar, S. & Sanjuán, M. A. F. (2013a) Vibrational resonance in a time-delayed genetic toggle switch. Commun. Nonlinear Sci. Numer. Simul., 18, 411–416.
Daza, A., Wagemakers, A. & Sanjuán, M. A. F. (2013b) Strong sensitivity of the vibrational resonance induced by fractal structures. Int. J. Bifurcation Chaos, 23, 1350129.
Daza, A., Wagemakers, A. & Sanjuán, M. A. F. (2017) Wada property in systems with delay. Commun. Nonlinear Sci. Numer. Simul., 43, 220–226.
E, W. (2003) Analysis of the heterogeneous multiscale method for ordinary differential equations. Comm. Math. Sci., 1, 423–436.
E, W. & Engquist, B. (2003) The heterogeneous multiscale methods. Comm. Math. Sci., 1, 87–132.
E, W., Engquist, B., Li, X., Ren, W. & Vanden-Eijnden, E. (2007) The heterogeneous multiscale method: a review. Commun. Comput. Phys., 2, 367–450.
Engquist, B. & Tsai, Y. H. (2005) Heterogeneous multiscale methods for stiff ordinary differential equations. Math. Comput., 74, 1707–1742.
Gardner, T. S., Cantor, C. R. & Collins, J. J. (2000) Construction of a genetic toggle switch in Escherichia coli. Nature, 403, 339–342.
Hairer, E., Lubich, C. & Wanner, G. (2002) Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations. New York: Springer.
Iserles, A. & Nørsett, S. P. (2005) Efficient quadrature of highly oscillatory integrals using derivatives. Proc. R. Soc. A, 461, 1383–1399.
Jeevarathinam, C., Rajasekar, S. & Sanjuán, M. A. F. (2011) Theory and numerics of vibrational resonance in Duffing oscillators with time-delayed feedback. Phys. Rev. E, 83, 066205.
Landa, P. S. & McClintock, P. V. E. (2000) Vibrational resonance. J. Phys. A Math. Gen., 33, L433.
Lehman, B. & Weibel, S. P. (1999) Fundamental theorems of averaging for functional differential equations. J. Diff. Eqns., 152, 160–190.
Li, J., Kevrekidis, P. G., Gear, C. W. & Kevrekidis, I. G. (2007) Deciding the nature of the coarse equation through microscopic simulations: the baby-bathwater scheme. SIAM Rev., 49, 469–487.
Sanz-Serna, J. M. (2009) Modulated Fourier expansions and heterogeneous multiscale methods. IMA J. Numer. Anal., 29, 595–605.

© The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

IMA Journal of Numerical Analysis, Oxford University Press. Advance Article, 13 April 2018, 24 pages. ISSN 0272-4979; eISSN 1464-3642. DOI: 10.1093/imanum/dry020.
Studied by Daza et al. (2013a) is the phenomenon of vibrational resonance (Landa & McClintock, 2000), i.e.
the way in which the presence of the high-frequency forcing $$B\sin (\varOmega t)$$ enhances the response of the system to the low-frequency forcing $$A\sin (\omega t)$$. Jeevarathinam et al. (2011) and Daza et al. (2013b) investigate in a similar way the forced, delayed Duffing oscillator. A new kind of nonlinear resonance of periodically forced delay systems has recently been described by Coccolo et al. (2018).

The numerical integration of highly oscillatory differential equations, with or without delay, may be a very demanding task, as standard methods typically have to employ time steps smaller than the periods present in the solution. For systems without delay, the literature contains many suggestions of numerical schemes specially designed to operate in the highly oscillatory scenario; many of them are reviewed in Hairer et al. (2002). Perhaps counterintuitively, some of those methodologies take advantage of the large frequency, and their efficiency actually increases with $$\varOmega$$ (Iserles & Nørsett, 2005). On the other hand, schemes for highly oscillatory problems may suffer from unexpected instabilities and inaccuracies (Calvo & Sanz-Serna, 2009).

The algorithm suggested here is based on ideas presented, for systems without delay, by Calvo et al. (2011a) and Calvo et al. (2011b). In these references, given an oscillatory problem, a stroboscopically averaged problem is introduced such that, at the stroboscopic times $$t^{(k)}=kT$$, $$T=2\pi /\varOmega$$, $$k = 0, 1,\dots$$, its solution X(t) (approximately) coincides with the true oscillatory solution x. The stroboscopically averaged problem does not include rapidly varying forcing terms and therefore, if available in closed form, may be integrated numerically without much ado. The algorithms in Calvo et al. (2011a) and Calvo et al. (2011b) compute numerically values of X without demanding that the user find analytically the expression of the averaged system.
More precisely, the algorithms only require evaluations of the right-hand side of the originally given oscillatory problem. The solution X is advanced with a standard integrator (the macro-integrator) with a step size H that, for a target accuracy, may be chosen to be independent of $$\varOmega$$. When the macro-integrator requires a value F of the slope $$\dot X$$, F is found by numerical differentiation of a micro-solution u, i.e. a solution of the originally given oscillatory problem. While the micro-integrations to find u are performed with step sizes h that are submultiples of the (small) period T, the corresponding computational cost does not increase as $$\varOmega \rightarrow \infty$$, because u is only required in windows of width mT, m a small integer.

The extension of the material in the studies by Calvo et al. (2011a) and Calvo et al. (2011b) to systems with delay is far from trivial. A first difficulty stems from the well-known fact that, in the delay scenario, regardless of the smoothness of the equation, solutions may be nonsmooth at points t that are integer multiples of the (constant) delay. Therefore, the algorithm presented here has to make special provision for that lack of smoothness. In addition, the analysis of the algorithm (but, as emphasized above, not the algorithm itself) is built on the knowledge of the stroboscopically averaged systems. While the construction of a stroboscopically averaged system with errors $$x\left (t^{(k)}\right )- X\left (t^{(k)}\right )=\mathscr{O}(1/\varOmega )$$ is not difficult, here we aim at errors $$x\left (t^{(k)}\right )- X\left (t^{(k)}\right )=\mathscr{O}\big (1/\varOmega ^{2}\big )$$ and this requires much additional analysis. The classical reference by Lehman & Weibel (1999) only considers zero-mean, $$\mathscr{O}(1/\varOmega )$$ averaging.

In Section 2 we present the main ideas of the paper and a detailed description of the algorithm.
Due to the difficulties imposed by the lack of smoothness in the solution, the algorithm uses low-order methods: the second-order Adams–Bashforth formula as a macro-integrator and Euler’s rule as a micro-integrator. Section 3 contains the construction of the stroboscopically averaged system with $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$ accuracy. Sections 4 and 5 are devoted to the analysis of the SAM algorithm. In the first of these, we assume that the micro-integrations carried out in the algorithm are performed exactly. Under suitable hypotheses, the errors in SAM are $$\mathscr{O}\big (H^{2}+1/\varOmega ^{2}\big )$$. The effect of the errors in the micro-integration is studied in Section 5: it is shown that, with a computational cost that grows like 1/H, SAM may yield errors of size $$\mathscr{O}\big (H^{2}+1/\varOmega ^{2}\big )$$. The $$H^{2}$$ (second-order) behaviour of the error may come as a surprise, because micro-integrations are performed by Euler’s rule; of key importance here is a superconvergence result (see the bound in (5.2)) for the Euler solution of oscillatory problems when the integration is carried out over a whole number of periods. The last two sections report numerical experiments that, on the one hand, confirm the theoretical expectations and, on the other, show the advantage of SAM when compared with a direct numerical integration of the oscillatory problem. Speed-ups larger than three orders of magnitude are reported at the end of Section 7.

### 2. The stroboscopic averaging method (SAM)

This section motivates and describes the SAM algorithm.
### 2.1 Motivation

We consider highly oscillatory delay differential systems of the form

\begin{eqnarray} \dot{x}(t)&=&f(x(t),y(t),t,\varOmega t;\varOmega),\qquad t\geqslant 0,\\ y(t)&=&x(t-\tau),\qquad t\geqslant 0, \nonumber \end{eqnarray} (2.1)

where the solution x is defined for $$t \geqslant -\tau$$ and takes values in $$\mathbb{R}^{D}$$, the function $$f(x,y,t,\theta ;\varOmega ):\mathbb{R}^{D}\times \mathbb{R}^{D}\times \mathbb{R}\times \mathbb{R}\times \mathbb{R} \rightarrow \mathbb{R}^{D}$$ depends $$2\pi$$-periodically on its fourth argument $$\theta$$, $$\tau>0$$ is the (constant) delay and the frequency $$\varOmega$$ is seen as a parameter, $$\varOmega \gg 1$$. Note that f depends explicitly on t through its third and fourth arguments; the fourth is the rapidly rotating phase $$\theta =\varOmega t$$ and the third corresponds to a slow (i.e. $$\varOmega$$-independent) dependence on t (see the toggle switch equations above). The values of x on the interval $$[-\tau ,0]$$ are prescribed through an $$\varOmega$$-independent function $$\varphi$$:

\begin{align} x(t)=\varphi(t),\qquad-\tau\leqslant t\leqslant 0. \end{align} (2.2)

It is well known that, regardless of the smoothness of f and $$\varphi$$, the function x(t) will typically not be differentiable at t = 0 and that in (2.1) $$\dot{x}(0)$$ has to be understood as a right derivative. Furthermore, the discontinuity of $$\dot{x}(t)$$ at t = 0 will lead to the discontinuity of $$\ddot{x}(t)$$ at $$t=\tau$$, etc.
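For orientation, a brute-force integration of a problem of the class (2.1)–(2.2), here the toggle switch (1.1), can be sketched as follows; this is the kind of fully resolved computation, with step sizes far below the period $$T=2\pi/\varOmega$$, that SAM is designed to avoid. All constants and the history function below are illustrative placeholders, not the values used in the paper's experiments (Section 6).

```python
import numpy as np

# Direct explicit-Euler integration of the forced toggle switch (1.1),
# resolving every fast oscillation. Constants are hypothetical placeholders.
alpha, beta, tau = 2.5, 2.0, 1.0
A, omega, B, Omega = 0.1, 1.0, 1.0, 60.0

def f(x, y, t):
    """Right-hand side of (1.1); y stands for x(t - tau)."""
    return np.array([
        alpha/(1.0 + x[1]**beta) - y[0] + A*np.sin(omega*t) + B*np.sin(Omega*t),
        alpha/(1.0 + x[0]**beta) - y[1],
    ])

def euler_dde(phi, t_max, h):
    """Explicit Euler for the delay problem; h must divide tau so that the
    delayed value x(t - tau) sits exactly on the grid."""
    n_delay = int(round(tau / h))
    n_steps = int(round(t_max / h))
    x = np.empty((n_delay + n_steps + 1, 2))   # grid covers [-tau, t_max]
    for j in range(n_delay + 1):               # history segment, x(t) = phi(t)
        x[j] = phi(-tau + j*h)
    for n in range(n_steps):                   # Euler steps on [0, t_max]
        i = n_delay + n
        x[i + 1] = x[i] + h*f(x[i], x[i - n_delay], n*h)
    return x

phi = lambda t: np.array([1.0, 0.5])           # constant (hypothetical) history
sol = euler_dde(phi, t_max=3.0, h=tau/2000)    # ~200 Euler steps per period T
```

Note that the cost of this direct approach grows linearly with $$\varOmega$$, since the micro-step must shrink like $$1/\varOmega$$ over the whole interval $$[0,t_{max}]$$.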
We assume that, at the stroboscopic times $$t^{(k)}=kT$$, where $$T =2\pi /\varOmega$$ is the period and $$k = 0, 1,\dots$$, the solution x(t) of the oscillatory delay problem (2.1)–(2.2) may be approximated (in a sense to be made precise later) by the solution X(t) of an averaged problem

\begin{eqnarray} \dot{X}(t)&=&F,\qquad t\geqslant 0,\\ X(t)&=&\varphi(t),\qquad -\tau\leqslant t\leqslant 0,\nonumber \end{eqnarray} (2.3)

where the value of the function F may depend on X(t), on the history X(s), $$-\tau \leqslant s<t$$, on the slow time t and on $$\varOmega$$, but is independent of the rapidly varying phase $$\theta =\varOmega t$$. We illustrate the situation in the particular case of the toggle switch problem (1.1). Figure 1 displays, in a short time interval, a solution of the given oscillatory system and the corresponding averaged solution found by solving the stroboscopically averaged system

\begin{eqnarray} \dot{X}_{1}(t)&=&\frac{\alpha}{1+X_{2}^{\beta}(t)}-Y_{1}(t)-\frac{B}{\varOmega}1_{\{t\geqslant \tau\}}(t)+A\sin(\omega t),\\ \nonumber \dot{X}_{2}(t)&=&\frac{\alpha}{1+X_{1}^{\beta}(t)}-Y_{2}(t)-\frac{B}{\varOmega}\frac{\alpha \beta X_{1}^{\beta-1}(t)}{\left(1+X_{1}^{\beta}(t)\right)^{2}}, \end{eqnarray} (2.4)

where $$Y_{i}(t) = X_{i}(t-\tau )$$ and $$1_{\{t\geqslant \tau \}}(t)$$ is the (indicator) function that takes the value 1 for $$t\geqslant \tau$$ and vanishes for $$t<\tau$$ (see the next section for the derivation of this averaged system). Note that the slow time-dependent forcing $$A\sin (\omega t)$$ has not been averaged out and that, due to the presence of $$1_{\{t\geqslant \tau \}}(t)$$, the right-hand side of the averaged system is discontinuous at $$t=\tau$$ (this discontinuity manifests itself in Figure 1 through a discontinuity in the slope of X, not to be confused with the discontinuity at t = 0 typically present, as mentioned above, in solutions of delay problems).

Fig. 1. $$x_{1}$$ component of the toggle switch problem. The constants $$\tau$$, $$\alpha$$, $$\beta$$, A, $$\omega$$, B, and the function $$\varphi$$ are chosen as in Section 6 and $$\varOmega =60$$. The true solution x and the averaged solution X are very close at the stroboscopic times $$t^{(k)} = k(2\pi /\varOmega )=kT$$, $$k = 0, 1,\dots$$ SAM is used to generate approximations to X at the step points $$t_{n}=nH$$, H = 0.125, $$n=0, 1,\dots$$ The value of H is chosen so that the point $$t=\tau$$, where the slope of the solution is discontinuous, is a step point. On the other hand, with the value of $$\varOmega$$ considered, the step points (abscissae of the stars) are not stroboscopic times.

The notion of stroboscopic averaging, studied in detail by Chartier et al. (2012), is far from new. However, standard treatments of the theory of averaging favour alternative techniques, especially the zero-mean approach, where the functions necessary to express the required change of variables are chosen so as to have zero mean over one period of the oscillation. In stroboscopic averaging, the freedom available in the choice of change of variables is used to impose that the old and new variables coincide at stroboscopic times.
This is advantageous for the numerical methods studied here. The zero-mean approach may be better for analytic purposes, as it usually leads to simpler high-order averaged systems. If zero-mean averaging had been used in Figure 1, the averaged solution would have been located halfway between the maxima and the minima of the oscillatory solution.

The study of the vibrational resonance of (1.1) requires simulating over long time intervals (the interval $$0\leqslant t\leqslant 300$$ is used in the study by Daza et al., 2013a) for many choices of the values of the constants $$\alpha ,\beta ,\omega ,A,B,\tau$$ and the parameter $$\varOmega$$; the presence of the fast-frequency oscillations makes such a task costly. It is then of interest to simulate, if possible, averaged systems like (2.3) rather than highly oscillatory models like (2.1). However, obtaining F analytically may be difficult or even impossible, and we wish to have a numerical method that approximates X by using only f. SAM is such a technique.

The idea behind SAM is as follows. Let x and X be the oscillatory and averaged solutions, respectively, corresponding to the same $$\varphi$$. At fixed t, $$\dot x$$ and $$\dot X$$ may differ substantially. However, difference quotients such as $$(x(t+T)-x(t))/T$$ or $$(x(t+T)-x(t-T))/(2T)$$ may provide a good approximation to the slope $$\dot X(t)$$ (see Fig. 1). As in other heterogeneous multiscale methods (see e.g. E, 2003; E & Engquist, 2003; Engquist & Tsai, 2005; E et al., 2007; Li et al., 2007; Ariel et al., 2009; Sanz-Serna, 2009; Calvo & Sanz-Serna, 2010), the algorithm includes macro-integrations and micro-integrations. Macro-integrations are used to advance X over macro-steps of length H larger than the period T. The necessary slopes $$\dot X$$ are obtained by forming difference quotients of auxiliary oscillatory solutions found by micro-integrations with small steps h.

### 2.2 The algorithm

Let us now describe the algorithm.
#### 2.2.1 Macro-integration

Choose a positive integer N and define the macro-step size $$H=\tau /N$$. If the solution is sought in an interval $$0\leqslant t\leqslant t_{max}$$, SAM generates approximations $$X_{n}$$ to $$X(t_{n})$$, $$t_{n}=nH$$, $$n=0,1,\dots , \lfloor t_{max}/H \rfloor$$, by using the second-order Adams–Bashforth formula (macro-integrator)

\begin{align} X_{n+1}=X_{n}+\frac{3}{2}HF_{n}-\frac{1}{2}HF_{n-1}, \end{align} (2.5)

starting from $$X_{0}=x(0)=X(0)=\varphi (0)$$; here $$F_{n}$$ is an approximation to $$\dot{X}(t_{n})$$ obtained by numerical differentiation of the micro-solution. The formula is not used for n = 0 and n = N, where it would be inconsistent in view of the jump discontinuities of $$\dot X$$ at t = 0 and $$t=\tau$$ noted above. For n = 0 and n = N we use Euler’s rule, i.e. we set

\begin{align} X_{1}=X_{0}+HF_{0},\qquad X_{N+1}=X_{N}+HF_{N}. \end{align} (2.6)

#### 2.2.2 Micro-integration

If $$\nu _{max}$$ is a positive integer, the micro-step size h is chosen to be $$T/\nu _{max}$$ (recall that $$T=2\pi /\varOmega$$ denotes the period). We use Euler’s rule, starting from $$u_{n,0}=X_{n}$$, first to integrate the oscillatory problem (2.1) forward over one period,

\begin{align} u_{n,\nu+1}=u_{n,\nu}+hf(u_{n,\nu},v_{n,\nu},t_{n}+\nu h,\varOmega\nu h;\varOmega),~~~\nu=0,1,\dots,\nu_{max}-1, \end{align} (2.7)

and then to integrate backward over one period,

\begin{align} u_{n,-\nu-1}=u_{n,-\nu}-hf(u_{n,-\nu},v_{n,-\nu},t_{n}-\nu h,-\varOmega\nu h;\varOmega),~~~\nu=0,1,\dots,\nu_{max}-1. \end{align} (2.8)

Here v denotes the past values for u, given by $$v_{n,\nu } =u_{n-N,\nu }$$ if n > N and $$v_{n,\nu }=\varphi (-\tau +nH+\nu h)$$ for n < N (if n = N, $$v_{N,\nu } =u_{0,\nu }$$ for $$\nu \geqslant 0$$ and $$v_{N,\nu }=\varphi (\nu h)$$ for $$\nu <0$$).
It is crucial to observe the values $$\varOmega \nu h$$ and $$-\varOmega \nu h$$ used for the fast argument $$\theta$$ in (2.7) and (2.8), respectively; each micro-integration starts from $$\theta = 0$$ rather than from the value $$\theta = \varOmega t_{n}$$ that might perhaps have been expected. The reason for this is that in stroboscopic averaging the resulting averaged system changes with the initial value of the phase $$\theta$$; to work with one and the same averaged system, micro-integrations have to start from the value $$\theta = 0$$ that the phase takes at the initial point of the interval $$[0,t_{max}]$$ (see Calvo et al., 2011a and Calvo et al., 2011b for a detailed discussion). The slopes $$F_{n}$$ to be used in (2.5) or (2.6) are given by the central difference formula

\begin{align} F_{n}=\frac{u_{n,\nu_{max}}-u_{n,-\nu_{max}}}{2T} \end{align} (2.9)

if $$n\neq 0$$ and $$n\neq N$$, while for n = 0 and n = N, due to the discontinuity of $$\dot{X}(t)$$ at t = 0 and $$t=\tau$$, we use the forward difference formula

\begin{align} F_{0}=\frac{u_{0,\nu_{max}}-u_{0,0}}{T},\qquad F_{N}=\frac{u_{N,\nu_{max}}-u_{N,0}}{T}. \end{align} (2.10)

A detailed description of the algorithm is provided in Table 1. Figure 2 may help to better understand the procedure. The upper time axis corresponds to the macro-integration; all the information needed to obtain $$X_{n+1}$$ is $$X_{n}$$, $$F_{n}$$ and $$F_{n-1}$$ ($$F_{n-1}$$ is actually not required for n = 0, N). The value of $$F_{n}$$ is derived by numerical differentiation of the micro-solution and passed to the macro-integrator to compute $$X_{n+1}$$. For fixed n, the computation of the Euler micro-solution $$u_{n,\nu }$$ uses the past values $$v_{n,\nu }$$ and the initial datum $$u_{n,0} = X_{n}$$ (note that $$X_{n}$$ is the most recent vector found in the macro-integration).

Fig. 2. Schematic description of SAM. Once $$X_{n}$$ is available, it is passed to the micro-integrator to find $$u_{n,\nu }$$ for varying $$\nu$$. Numerical differentiation of the micro-solution yields $$F_{n}$$, which is used in the macro-stepping to compute $$X_{n+1}$$.

SAM only operates with macro-step sizes H that are submultiples of the delay $$\tau$$; this restriction is imposed to force $$t=\tau$$ to be a step point, so as to better deal with the discontinuity in slope there (see Figure 1). In general, the quotient $$\tau /T$$ will not be a whole number and then the step points $$t_{n}$$ will not be stroboscopic times; this is the case in the simulation in Figure 1. We emphasize that, given N and $$\nu _{max}$$, the complexity of the algorithm is independent of $$\varOmega$$. When $$\varOmega$$ increases, the micro-step size h decreases to cater for the more rapid variation of the oscillations, but the window of width 2T (or T) for each micro-integration becomes correspondingly narrower. Finally, we point out that we have tested several alternative algorithms. For instance, we alternatively performed the micro-integrations with the second-order Adams–Bashforth method, or used second-order forward approximations for $$F_{0}$$, $$F_{N}$$. While those modifications improve the accuracy of the results for small step sizes, experiments reveal that they are not always beneficial for large step sizes; therefore, we shall not be concerned with them here.
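The slope-recovery step (2.9) can be checked in isolation on a delay-free toy problem of our own (not taken from the paper), $$\dot x=-x+B\sin(\varOmega t)$$, whose stroboscopically averaged equation is $$\dot X=-X$$ up to $$\mathscr{O}(1/\varOmega)$$ terms: the central difference quotient over one full period on each side filters out the fast forcing and recovers the averaged slope.

```python
import numpy as np

# Central difference quotient (x(t+T) - x(t-T)) / (2T) as an approximation to
# the averaged slope Xdot(t) = -x(t) for xdot = -x + B*sin(Omega*t).
Omega, B = 200.0, 1.0
T = 2*np.pi / Omega

def micro_euler(x0, t0, n, h):
    """March x with explicit Euler for n steps of size h (h < 0 goes backward)."""
    x, t = x0, t0
    for _ in range(n):
        x += h * (-x + B*np.sin(Omega*t))
        t += h
    return x

x_t = 1.0                                  # state at the macro point t
t = 0.0                                    # stroboscopic time, so theta = 0
h = T / 400
x_plus = micro_euler(x_t, t, 400, h)       # approximates x(t + T)
x_minus = micro_euler(x_t, t, 400, -h)     # approximates x(t - T)
F = (x_plus - x_minus) / (2*T)             # close to Xdot(t) = -x_t = -1
```

Because the windows span whole periods, the contribution of the forcing cancels and F agrees with the averaged slope to within a few parts in a thousand here.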
Table 1. The SAM algorithm.

```text
Compute X_1
  Micro-integration
    u_{0,0} = phi(0)                                            % initial value
    for nu = 0 : nu_max,  v_{0,nu} = phi(-tau + nu*h), end      % history
    for nu = 0 : nu_max-1,
      u_{0,nu+1} = u_{0,nu} + h*f(u_{0,nu}, v_{0,nu}, t_0 + nu*h, Omega*nu*h; Omega)
    end                                                         % Euler
    F_0 = (u_{0,nu_max} - u_{0,0}) / T                          % slope for macro-step
  Macro-integration
    t_0 = 0,  X_0 = phi(0),  t_1 = t_0 + H,  X_1 = X_0 + H*F_0  % Euler macro-step at n = 0

Compute X_2, ..., X_N
for n = 1 : N-1
  Micro-integration
    u_{n,0} = X_n                                               % initial value
    for nu = -nu_max : nu_max,  v_{n,nu} = phi(-tau + n*H + nu*h), end   % history
    for nu = 0 : nu_max-1,
      u_{n,nu+1}  = u_{n,nu}  + h*f(u_{n,nu},  v_{n,nu},  t_n + nu*h,  Omega*nu*h; Omega)
      u_{n,-nu-1} = u_{n,-nu} - h*f(u_{n,-nu}, v_{n,-nu}, t_n - nu*h, -Omega*nu*h; Omega)
    end                                                         % Euler, forward and backward
    F_n = (u_{n,nu_max} - u_{n,-nu_max}) / (2*T)                % slope for macro-step
  Macro-integration
    t_{n+1} = t_n + H,  X_{n+1} = X_n + (3*H/2)*F_n - (H/2)*F_{n-1}   % AB2 macro-step
end

Compute X_{N+1}
  Micro-integration
    u_{N,0} = X_N                                               % initial value
    for nu = 1 : nu_max,  u_{0,-nu} = phi(-nu*h), end           % values taken from history
    for nu = -nu_max : nu_max,  v_{N,nu} = u_{0,nu}, end        % past values
    for nu = 0 : nu_max-1,
      u_{N,nu+1}  = u_{N,nu}  + h*f(u_{N,nu},  v_{N,nu},  t_N + nu*h,  Omega*nu*h; Omega)
      u_{N,-nu-1} = u_{N,-nu} - h*f(u_{N,-nu}, v_{N,-nu}, t_N - nu*h, -Omega*nu*h; Omega)
    end                                                         % Euler, forward and backward
    F_N = (u_{N,nu_max} - u_{N,0}) / T                          % slope for macro-step
  Macro-integration
    t_{N+1} = t_N + H,  X_{N+1} = X_N + H*F_N                   % Euler macro-step at n = N

Compute X_{N+2}, ...
for n = N+1 : floor(t_max/H)
  Micro-integration
    u_{n,0} = X_n                                               % initial value
    for nu = -nu_max : nu_max,  v_{n,nu} = u_{n-N,nu}, end      % past values
    for nu = 0 : nu_max-1,
      u_{n,nu+1}  = u_{n,nu}  + h*f(u_{n,nu},  v_{n,nu},  t_n + nu*h,  Omega*nu*h; Omega)
      u_{n,-nu-1} = u_{n,-nu} - h*f(u_{n,-nu}, v_{n,-nu}, t_n - nu*h, -Omega*nu*h; Omega)
    end                                                         % Euler, forward and backward
    F_n = (u_{n,nu_max} - u_{n,-nu_max}) / (2*T)                % slope for macro-step
  Macro-integration
    t_{n+1} = t_n + H,  X_{n+1} = X_n + (3*H/2)*F_n - (H/2)*F_{n-1}   % AB2 macro-step
end
```
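As a complement to Table 1, the whole procedure can be condensed into a minimal Python sketch (ours, not the authors' code). It assumes $$H\geqslant T$$, so that for $$n\geqslant 1$$ the micro-windows never leave the domain of $$\varphi$$; the toy usage problem at the end is our own and is not taken from the paper.

```python
import numpy as np

def sam(f, phi, tau, Omega, N, nu_max, t_max):
    """Sketch of the SAM algorithm of Table 1 (AB2 macro, Euler micro).

    f(x, y, t, theta): right-hand side of (2.1); phi: history on [-tau, 0].
    Assumes H = tau/N >= T = 2*pi/Omega. Returns approximations X_n to X(n*H).
    """
    H, T = tau / N, 2*np.pi / Omega
    h = T / nu_max
    n_steps = int(np.floor(t_max / H))
    D = np.atleast_1d(phi(0.0)).size
    X = np.zeros((n_steps + 1, D))
    X[0] = phi(0.0)
    U = {}                        # U[n][nu_max + nu] stores u_{n, nu}

    def past(n, nu):              # v_{n, nu}: delayed values for the micro-solution
        if n < N:
            return phi(-tau + n*H + nu*h)
        if n == N:
            return U[0][nu_max + nu] if nu >= 0 else phi(nu*h)
        return U[n - N][nu_max + nu]

    F_prev = None
    for n in range(n_steps):
        t_n = n * H
        u = np.zeros((2*nu_max + 1, D))
        u[nu_max] = X[n]
        for nu in range(nu_max):                  # Euler micro-integration (2.7)-(2.8),
            u[nu_max + nu + 1] = u[nu_max + nu] + h*f(   # phase restarted at theta = 0
                u[nu_max + nu], past(n, nu), t_n + nu*h, Omega*nu*h)
            if n >= 1:                            # backward pass (not needed at n = 0)
                u[nu_max - nu - 1] = u[nu_max - nu] - h*f(
                    u[nu_max - nu], past(n, -nu), t_n - nu*h, -Omega*nu*h)
        U[n] = u
        if n == 0 or n == N:                      # forward difference (2.10), Euler macro
            F_n = (u[2*nu_max] - u[nu_max]) / T
            X[n + 1] = X[n] + H*F_n
        else:                                     # central difference (2.9), AB2 macro
            F_n = (u[2*nu_max] - u[0]) / (2*T)
            X[n + 1] = X[n] + (3*H/2)*F_n - (H/2)*F_prev
        F_prev = F_n
    return X

# Toy usage: xdot = -x(t-1) + sin(Omega*t), phi = 1, whose averaged solution
# is X(t) = 1 - t on [0, 1] up to O(1/Omega) corrections.
X = sam(lambda x, y, t, theta: -y + np.sin(theta),
        lambda t: 1.0, tau=1.0, Omega=100.0, N=8, nu_max=20, t_max=1.5)
```

Note that, as in the paper, the cost per macro-step is fixed ($$2\nu_{max}$$ evaluations of f), independently of $$\varOmega$$.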
### 3. The averaged system

In this section and the three that follow, we assume that the system (2.1) is of the particular form

\begin{align} \dot{x}(t)=f(x(t),y(t),\varOmega t). \end{align} (3.1)

When comparing with the general format in (2.1) we note that now $$f(x,y,\theta )$$ has three arguments rather than five. The case where f includes a slow explicit dependence on t, i.e. $$f = f(x,y,t,\theta )$$, may be trivially reduced to (3.1) by adding a component $$x_{D+1}$$ to the state vector $$x\in \mathbb{R}^{D}$$ and setting $$\dot{x}_{D+1} = 1$$. The case where f depends on $$\varOmega$$, i.e. $$f = f(x,y,\theta ;\varOmega )$$, is taken up in the final section. We assume that $$f(x,y,\theta )$$ and the initial function $$\varphi$$ in (2.2) are of class $$C^{3}$$ and that the solution x exists in the interval $$[0,t_{max}]$$, where $$t_{max}$$ is a constant (i.e. does not change with the parameter $$\varOmega$$). Following an approach similar to that of Chartier et al. (2010, 2012, 2015), we use the Fourier decomposition

$$f(x,y,\theta)=\sum_{k\in \mathbb{Z}}\exp(ik\theta)f_{k}(x,y).$$

The coefficients $$f_{k}(x,y)$$ satisfy $$f_{k}\equiv f_{-k}^{\ast }$$ because the problem is real. It is easily seen that, under the preceding hypotheses, x undergoes oscillations of frequency $$\varOmega$$ and amplitude $$\mathscr{O}(1/\varOmega )$$ as $$\varOmega \rightarrow \infty$$. To reduce the amplitude of the oscillations to $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$, we consider the near-identity change of variables

\begin{align} \left\{ \begin{aligned} x&=X+\frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{ik}f_{k}(X,Y),~~~t \geqslant 0,\\ x&=X,\qquad \qquad -\tau\leqslant t<0, \end{aligned} \right. \end{align} (3.2)

where we note that x and X coincide at stroboscopic times, i.e. the change is stroboscopic.
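As a small sanity check (ours), the zeroth Fourier coefficient $$f_{0}$$ is just the mean of f over one period in $$\theta$$, and a uniform-grid average in $$\theta$$ reproduces it exactly when the $$\theta$$-dependence is a trigonometric polynomial, as in the toggle switch:

```python
import numpy as np

# Check that the theta-average recovers f_0: for f(x, y, theta) =
# g(x, y) + B*sin(theta), only the k = 0 coefficient survives averaging.
B = 3.0
g = lambda x, y: 2.5/(1.0 + y**2) - x            # illustrative slow part
f = lambda x, y, theta: g(x, y) + B*np.sin(theta)

x, y = 0.7, 1.2
theta = np.linspace(0.0, 2*np.pi, 64, endpoint=False)
f0 = np.mean(f(x, y, theta))                     # = (1/2pi) * integral over a period
```

Here `f0` agrees with `g(x, y)` to machine precision, since the grid average of a sinusoid over a full period vanishes exactly.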
In what follows, we use the notation $$z(t)=y(t-\tau ),~t\geqslant \tau$$, $$Y(t) = X(t-\tau ), t\geqslant 0$$, $$Z(t)=Y(t-\tau ),~t\geqslant \tau$$. The proof of the following result is a straightforward but very lengthy exercise in Taylor expansion.

Lemma 3.1. The change of variables (3.2) transforms the system (3.1) into

\begin{eqnarray} \dot{X}&=&f_{0}-\frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{ik}\frac{\partial f_{k}}{\partial X}\ f_{0} -\frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{ik}\frac{\partial f_{k}}{\partial Y}\dot{\varphi}(t-\tau)\nonumber \\ &&+ \frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{ik}\frac{\partial f_{0}}{\partial X}\ f_{k} +\frac{1}{\varOmega}\sum_{k,\ell\neq 0}\exp(ik\varOmega t)\frac{\exp(i\ell\varOmega t)-1}{i\ell}\frac{\partial f_{k}}{\partial X}\ f_{\ell} \nonumber \\ &&+ \mathscr{O}\left(\frac{1}{\varOmega^{2}}\right), \end{eqnarray} (3.3)

for $$0\leqslant t< \tau$$, and into

\begin{eqnarray} \dot{X}&=&f_{0} -\frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{ik}\frac{\partial f_{k}}{\partial X}\ f_{0} -\frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{ik}\frac{\partial f_{k}}{\partial Y}\ f_{0\tau}\nonumber\\ &&+\frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{ik}\frac{\partial f_{0}}{\partial X}\ f_{k} +\frac{1}{\varOmega}\sum_{k\neq 0}\frac{\exp(ik\varOmega (t-\tau))-1}{ik}\frac{\partial f_{0}}{\partial Y}\ f_{k\tau}\nonumber \\ &&+ \frac{1}{\varOmega} \sum_{k,\ell\neq 0}\exp(ik\varOmega t)\frac{\exp(i\ell\varOmega t)-1}{i\ell} \frac{\partial f_{k}}{\partial X}\ f_{\ell} \nonumber\\ &&+\frac{1}{\varOmega}\sum_{k,\ell\neq 0}\exp(ik\varOmega t)\frac{\exp(i\ell\varOmega (t-\tau))-1}{i\ell}\frac{\partial f_{k}}{\partial Y}\ f_{\ell\tau} +\mathscr{O}\left(\frac{1}{\varOmega^{2}}\right), \end{eqnarray} (3.4)

for $$t \geqslant \tau$$.
Here   $$f_{k}=f_{k}(X,Y),\qquad f_{k\tau}=f_{k}(Y,Z), \qquad k\in \mathbb{Z}.$$ Note that the system in (3.3)–(3.4) is of the (expected) form   \begin{align} \dot X = f_{0}+\mathscr{O}(1/\varOmega), \end{align} (3.5)where $$f_{0}$$ is the average   \begin{align} f_{0}(X,Y) = \frac{1}{2\pi}\int_{0}^{2\pi} f(X,Y,\theta)\,\mathrm{d}\theta. \end{align} (3.6)By suppressing the remainder in (3.5), we obtain the averaged system $$\dot X = f_{0}(X,Y)$$ with $$\mathscr{O}(1/\varOmega )$$ error; this is not sufficiently accurate for the values of $$\varOmega$$ of interest and we take the averaging procedure to higher order. To do so, we perform an additional stroboscopic change of variables chosen so as to annihilate the oscillatory parts of the $$\mathscr{O}(1/\varOmega )$$ terms in (3.3)–(3.4). Specifically, we take   \begin{eqnarray*} X&=&\tilde{X}+\frac{1}{\varOmega^{2}}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{k^{2}} \frac{\partial f_{k}}{\partial \tilde{X}}\ f_{0} +\frac{1}{\varOmega^{2}}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{k^{2}} \frac{\partial f_{k}}{\partial \tilde{Y}}\dot{\varphi}(t-\tau) \\ & & -\frac{1}{\varOmega^{2}}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{k^{2}} \frac{\partial f_{0}}{\partial \tilde{X}}\ f_{k} -\frac{1}{\varOmega^{2}}\sum_{\substack{k\neq 0\\\ell\neq 0,-k}}\frac{\exp(i(k+\ell)\varOmega t)-1}{\ell(k+\ell)} \frac{\partial f_{k}}{\partial \tilde{X}}\ f_{\ell} \\&& +\frac{1}{\varOmega^{2}}\sum_{k,\ell\neq 0}\frac{\exp(ik\varOmega t)-1}{k\ell} \frac{\partial f_{k}}{\partial \tilde{X}}\ f_{\ell} \end{eqnarray*}for $$0\leqslant t< \tau$$ and   \begin{eqnarray*} X&=& \tilde{X} +\frac{1}{\varOmega^{2}}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{k^{2}}\frac{\partial f_{k}}{\partial \tilde{X}}\ f_{0} +\frac{1}{\varOmega^{2}}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{k^{2}}\frac{\partial f_{k}}{\partial \tilde{Y}}\ f_{0\tau} \\ &&-\frac{1}{\varOmega^{2}}\sum_{k\neq 0}\frac{\exp(ik\varOmega t)-1}{k^{2}}\frac{\partial f_{0}}{\partial \tilde{X}}\ 
f_{k} -\frac{1}{\varOmega^{2}}\sum_{k\neq 0}\frac{\exp(ik\varOmega(t-\tau))-\exp(-ik\varOmega\tau)}{k^{2}} \frac{\partial f_{0}}{\partial \tilde{Y}}\ f_{k\tau} \\ &&-\frac{1}{\varOmega^{2}}\sum_{\substack{k\neq 0\\\ell\neq 0,-k}}\frac{\exp(i(k+\ell)\varOmega t)-1}{\ell(k+\ell)} \frac{\partial f_{k}}{\partial \tilde{X}}\ f_{\ell} +\frac{1}{\varOmega^{2}}\sum_{k,\ell\neq 0}\frac{\exp(ik\varOmega t)-1}{k\ell}\frac{\partial f_{k}}{\partial \tilde{X}}\ f_{\ell} \\ && -\frac{1}{\varOmega^{2}}\sum_{\substack{k\neq 0\\\ell\neq 0,-k}}\exp(-i\ell\varOmega \tau)\frac{\exp(i(k+\ell)\varOmega t)-1}{\ell(k+\ell)} \frac{\partial f_{k}}{\partial \tilde{Y}}\ f_{\ell\tau} +\frac{1}{\varOmega^{2}}\sum_{k,\ell\neq 0}\frac{\exp(ik\varOmega t)-1}{k\ell}\frac{\partial f_{k}}{\partial \tilde{Y}}\ f_{\ell\tau} \end{eqnarray*}for $$t\geqslant \tau$$, where $$f_{k}=f_{k}\left (\tilde{X},\tilde{Y}\right )$$ and $$f_{k\tau }=f_{k}\left (\tilde{Y},\tilde{Z}\right )$$ $$\left (\tilde{Y}(t) =\tilde{X}(t-\tau ),~\tilde{Z}(t) =\tilde{Y}(t-\tau )\right )$$. Taking the last displays to (3.3)–(3.4) and discarding the $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$ remainder results in the averaged system (3.7)–(3.9) below, where, for notational simplicity, we have suppressed the tildes over X, Y and Z and [⋅, ⋅] is the Lie–Jacobi bracket or commutator defined by   $$\left[f_{i},f_{j}\right]=\frac{\partial f_{j}}{\partial X}f_{i}-\frac{\partial f_{i}}{\partial X}f_{j},~~~i,j\in \mathbb{Z}.$$ The averaged solution X will obviously approximate x at stroboscopic times with $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$ errors (arising from discarding the $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$ remainder); this implies that, at least for $$\varOmega$$ sufficiently large, X exists in the interval $$\left [0,t_{max}\right ]$$.
We then have proved the following result: Theorem 3.2 Consider the averaged problem   \begin{eqnarray} \dot{X}(t)&=&F(X(t),Y(t),Z(t),t;\varOmega),\qquad t\geqslant 0,\\ Y(t) &=& X(t-\tau),\qquad t\geqslant 0,\nonumber\\ Z(t) &=& Y(t-\tau),\qquad t\geqslant \tau,\nonumber\\ X(t) &=& \varphi(t), \qquad -\tau\leqslant t\leqslant 0,\nonumber \end{eqnarray} (3.7)with $$F(X,Y,Z,t;\varOmega )= F^{(1)}(X,Y,t;\varOmega )$$,   \begin{align} F^{(1)}(X,Y,t;\varOmega)=f_{0}+\sum_{k>0}\frac{i}{k\varOmega}\left([f_{k}-f_{-k},f_{0}]+[f_{-k},f_{k}]\right)-\sum_{k\neq0}\frac{i}{k\varOmega}\frac{\partial f_{k}}{\partial Y}\dot{\varphi}(t-\tau), \end{align} (3.8)for $$0\leqslant t<\tau$$ and $$F(X,Y,Z,t;\varOmega )= F^{(2)}(X,Y,Z;\varOmega )$$,   \begin{eqnarray} F^{(2)}(X,Y,Z;\varOmega)&=&f_{0}+\sum_{k>0}\frac{i}{k\varOmega}\left([f_{k}-f_{-k},f_{0}]+[f_{-k},f_{k}]\right)-\sum_{k\neq0}\frac{i}{k\varOmega}\frac{\partial f_{k}}{\partial Y}\ f_{0\tau}\\ \nonumber & &\qquad+\sum_{k\neq0}\frac{i}{k\varOmega}\frac{\partial f_{0}}{\partial Y}\ f_{k\tau}+\sum_{k\neq0}\frac{i\exp(ik\varOmega \tau)}{k\varOmega}\frac{\partial f_{k}}{\partial Y}\ f_{-k\tau}, \end{eqnarray} (3.9)for $$t\geqslant \tau$$. For $$\varOmega$$ sufficiently large, the averaged solution X exists in the interval $$\left [0,t_{max}\right ]$$. Furthermore, for the approximation at stroboscopic times, we may write   $$\max_{ 0\leqslant t^{(k)}\leqslant t_{max}}\left\|x\left(t^{(k)}\right)-X\left(t^{(k)}\right) \right\|=\mathscr{O}\left(\frac{1}{\varOmega^{2}}\right),\qquad \varOmega\rightarrow \infty.$$ We point out that, for $$t\geqslant \tau$$, the dependence of $$\dot X(t)$$ on the past values X(s), $$-\tau \leqslant s< t$$ is through both $$Y(t) = X(t-\tau)$$ and $$Z(t) = X(t-2\tau)$$; this is to be compared with the situation for the original oscillatory system (2.1), which does not include the delay $$2\tau$$.
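To make the formulas (3.8) and (3.9) concrete, consider a forcing with a single harmonic; this specialization is our own illustration, and the auxiliary symbols $$g_{0}$$, $$g_{1}$$ are not part of the paper's notation.

```latex
% Single-harmonic illustration (g_0, g_1 are our own auxiliary symbols).
% If f(x,y,\theta) = g_0(x,y) + g_1(x,y)\sin\theta, then
%   f_0 = g_0, \quad f_{\pm 1} = \mp\tfrac{i}{2}\,g_1, \quad f_k = 0\ (|k|>1),
% and [f_{-1},f_1] = 0 because f_1 and f_{-1} are proportional to the same
% function.  Formula (3.8) therefore collapses to
\begin{align*}
F^{(1)} = g_{0} - \frac{1}{\varOmega}\,[g_{0},g_{1}]
        - \frac{1}{\varOmega}\,\frac{\partial g_{1}}{\partial Y}\,
          \dot{\varphi}(t-\tau),
\qquad
[g_{0},g_{1}] = \frac{\partial g_{1}}{\partial X}\,g_{0}
              - \frac{\partial g_{0}}{\partial X}\,g_{1}.
\end{align*}
% Check against Section 6: for (6.1) one has g_0 = y and g_1 = x - y
% (the cos(2\varOmega t) harmonic of (6.1) contributes nothing to F^{(1)}
% when the history is constant), so F^{(1)} = Y - Y/\varOmega, which is
% the first averaged equation quoted there.
```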
The functions $$F^{(1)}$$ and $$F^{(2)}$$ are of class $$C^{2}$$, but of course F is discontinuous at $$t =\tau$$. By implication, X(t) will be of class $$C^{3}$$, except at $$t = 0,\tau$$ (where $$\dot{X}(t)$$ has a jump discontinuity), at $$t = 2\tau$$ (where the second derivative jumps) and $$t = 3\tau$$ (where the third derivative jumps). In the particular case where f does not depend on the delayed argument y, so that we are dealing with an ordinary differential system, (3.8) and (3.9) are in agreement with formula (64) of Chartier et al. (2012). 4. Error analysis: exact micro-integration In this section we investigate the global error of the algorithm under the assumption that the micro-integration is exact, so that the macro-integration and the numerical differentiations performed to find the slopes $$F_{n}$$ are the only sources of error. This scenario is of course relevant when the micro-step h is chosen to be very small. The errors due to the Euler micro-integration will be studied in the next section. In order to avoid misunderstandings, we state that ‘exact micro-integration’ has to be understood as follows. Consider e.g. the computation of $$X_{2}$$, …, $$X_{N}$$ in Table 1; in this section we assume that   $$F_{n} = \frac{u_{n}(T)-u_{n}(-T)}{2T},$$where $$u_{n}$$ solves the problem $$\dot u_{n} = f(u_{n},v_{n},\varOmega t)$$, $$u_{n}(0)= X_{n}$$, $$v_{n}(t) =\varphi (-\tau +nH+t)$$. Of course similar modifications of the algorithm in Table 1 have to be carried out for the computation of $$F_{n}$$ for other values of n. We begin by proving a stability bound for the macro-integrator. We consider a sequence $$\left \{\hat{X}_{n}\right \}$$ of vectors in $$\mathbb{R}^{D}$$ such that $$\hat{X}_{-n}=\varphi (-nH)$$, $$n=1,\dots ,N$$, and furthermore satisfy the macro-integration equations with residuals $$\big \{ \hat{\rho }_{n}\big \}$$, i.e.   
\begin{eqnarray*} \hat{X}_{1}&=&\hat{X}_{0}+HF^{(1)}\left(\hat{X}_{0},\hat{X}_{-N},0;\varOmega\right)+H\hat{\rho}_{0},\\ \hat{X}_{n+1}&=&\hat{X}_{n}+\frac{3H}{2}F^{(1)}\left(\hat{X}_{n},\hat{X}_{n-N},nH;\varOmega\right)\\ &&\qquad\qquad-\frac{H}{2}F^{(1)}\left(\hat{X}_{n-1},\hat{X}_{n-1-N},(n-1)H;\varOmega\right)+H\hat{\rho}_{n}, \qquad n=1,\dots,N-1,\\ \hat{X}_{N+1}&=&\hat{X}_{N}+HF^{(2)}\left(\hat{X}_{N},\hat{X}_{0},\hat{X}_{-N};\varOmega\right)+H\hat{\rho}_{N},\\ \hat{X}_{n+1}&=&\hat{X}_{n}+\frac{3H}{2}F^{(2)}\left(\hat{X}_{n},\hat{X}_{n-N},\hat{X}_{n-2N};\varOmega\right)\\ &&\qquad\qquad-\frac{H}{2}F^{(2)}\left(\hat{X}_{n-1},\hat{X}_{n-1-N},\hat{X}_{n-1-2N};\varOmega\right)+H\hat{\rho}_{n}, \qquad n\geqslant N+1. \end{eqnarray*}Furthermore, we consider a second sequence $$\left \{\tilde{X}_{n}\right \}$$ with $$\tilde{X}_{-n}=\varphi (-nH)$$, $$n=1,\dots ,N$$, satisfying the macro-integration equations above with residuals $$\big \{ \tilde{\rho }_{n}\big \}$$, rather than $$\big \{ \hat{\rho }_{n}\big \}$$. Proposition 4.1 To each bounded set $$B\subset \mathbb{R}^{D}$$, there corresponds a constant C > 0, independent of H and $$\varOmega$$, such that for any sequences $$\left \{\hat{X}_{n}\right \}$$ and $$\left \{\tilde{X}_{n}\right \}$$ as above contained in B,   $$\left\|\hat{X}_{n}-\tilde{X}_{n} \right\|\leqslant C\,\sum_{k=0}^{n-1} H \left\|\hat{\rho}_{k}-\tilde{\rho}_{k} \right\|,\qquad 0\leqslant nH\leqslant t_{max}.$$ Proof. From the hypotheses in the preceding section, F is a Lipschitz continuous function of X, Y and Z, with a Lipschitz constant that is uniform as t varies in the interval $$0\leqslant t\leqslant t_{max}$$ and X, Y and Z vary in B. The stability bound is then proved in a standard way by recurrence. In our next result we investigate the consistency of the formulas (2.9)–(2.10) used to recover the slopes $$F_{n}$$. There are four cases corresponding to the four successive blocks in Table 1. 
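Before moving on, the structure of the macro-integration recurrence above (an Euler step restarting the integration at $$t=0$$ and $$t=\tau$$, a two-step Adams–Bashforth formula at all other step points) can be sketched in a few lines. The following is our own minimal illustration, not the authors' code; the slopes are evaluated exactly for the toy averaged problem $$\dot X(t)=X(t-\tau )$$ with constant history $$\varphi \equiv 1$$, so that only the macro-integration error is visible.

```python
# Minimal sketch (our own illustration, not the authors' code) of the SAM
# macro-integrator: an Euler step restarts the integration at t = 0 and at
# t = tau, and the two-step recurrence
#   X_{n+1} = X_n + (3H/2) F_n - (H/2) F_{n-1}
# (second-order Adams-Bashforth) is used at all other step points.  Slopes
# are exact for the toy averaged problem Xdot(t) = X(t - tau), phi = 1.

def macro_integrate(N, tau=0.5):
    H = tau / N                                   # macro-step H = tau/N
    X = {-n: 1.0 for n in range(N + 1)}           # history X_{-n} = phi(-nH) = 1
    F = {}
    slope = lambda n: X[n - N]                    # exact slope F_n = Y_n = X_{n-N}
    F[0] = slope(0)
    X[1] = X[0] + H * F[0]                        # Euler step at t = 0
    for n in range(1, 2 * N):
        F[n] = slope(n)
        if n == N:
            X[n + 1] = X[n] + H * F[n]            # Euler restart at t = tau
        else:
            X[n + 1] = X[n] + 1.5 * H * F[n] - 0.5 * H * F[n - 1]
    return X[2 * N]                               # approximation to X(2*tau)

# Exact solution with tau = 0.5: X(2*tau) = 1.5 + 0.5 + 0.125 = 2.125.
# With exact slopes, the only local error on [0, 2*tau] is the Euler restart
# at t = tau, so the error below equals H^2/2 up to roundoff.
err = abs(macro_integrate(40) - 2.125)
```

This matches the residual analysis of the next results: the Adams–Bashforth steps are exact here (the slope is linear in time), and the whole error comes from the $$\mathscr{O}(H)$$ residual of the single Euler restart.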
Proposition 4.2 The following results hold: If the function $$u_{n}$$ solves the problem   \begin{eqnarray*} \dot{u}_{n}(t)&=&f(u_{n}(t),\varphi(-\tau+t_{n}+t),\varOmega t),\\ u_{n}(0)&=&X_{n}, \end{eqnarray*}then, with $$Y_{n} = \varphi (-\tau +t_{n})$$,   \begin{align} \frac{u_{n}(T)-u_{n}(0)}{T}=f_{0}(X_{n},Y_{n})+\mathscr{O}\left(\frac{1}{\varOmega}\right). \end{align} (4.1) If $$u_{n}$$ is as in the preceding item, then   \begin{align} \frac{u_{n}(T)\!-\!u_{n}(-T)}{2T}=f_{0}\!+\!\sum_{k>0}\frac{i}{k\varOmega}\left([f_{k}-f_{-k},f_{0}]+[f_{-k},f_{k}]\right) \!-\!\sum_{k\neq0}\frac{i}{k\varOmega}\frac{\partial f_{k}}{\partial Y}\dot{\varphi}(-\tau+t_{n}) +\mathscr{O}\left(\!\frac{1}{\varOmega^{2}}\!\right)\!, \end{align} (4.2)where $$f_{k}$$, $$k\in \mathbb{Z}$$ are evaluated at $$(X_{n},Y_{n})$$, $$Y_{n} = \varphi (-\tau +t_{n})$$. If $$u_{n}$$ and $$v_{n}$$ satisfy   \begin{eqnarray*} \dot{u}_{n}(t)&=&f(u_{n}(t),v_{n}(t),\varOmega t),\\ \dot{v}_{n}(t)&=&f(v_{n}(t),w_{n}(t),\varOmega t),\\ u_{n}(0)&=&X_{n},\\ v_{n}(0)&=&Y_{n},\\ w_{n}(0)&=&Z_{n}, \end{eqnarray*}with $$w_{n}$$ a continuously differentiable function, then   \begin{align} \frac{u_{n}(T)-u_{n}(0)}{T}=f_{0}(X_{n},Y_{n})+\mathscr{O}\left(\frac{1}{\varOmega}\right). \end{align} (4.3) If $$u_{n}$$ and $$v_{n}$$ are as in 3., then   \begin{eqnarray}\nonumber \frac{u_{n}(T)-u_{n}(-T)}{2T}&=&f_{0}+\sum_{k>0}\frac{i}{k\varOmega}\left([f_{k}-f_{-k},f_{0}]+[f_{-k},f_{k}]\right)-\sum_{k\neq0}\frac{i}{k\varOmega}\frac{\partial f_{k}}{\partial Y}\ f_{0\tau}\\ & &\qquad+\sum_{k\neq0}\frac{i}{k\varOmega}\frac{\partial f_{0}}{\partial Y}\ f_{k\tau}+\sum_{k\neq0}\frac{i}{k\varOmega}\frac{\partial f_{k}}{\partial Y}\ f_{-k\tau}+\mathscr{O}\left(\frac{1}{\varOmega^{2}}\right), \end{eqnarray} (4.4)where $$f_{k}$$, $$k\in \mathbb{Z}$$ are evaluated at $$(X_{n},Y_{n})$$ and $$f_{k\tau }$$ stands for $$f_{k}(Y_{n},Z_{n})$$. Proof. Here we prove the fourth case; the other proofs are similar (and slightly simpler). 
For simplicity the subindex n is dropped. We rewrite the equation for u in integral form   $$u(t)=X+{\int_{0}^{t}} f(u(s),v(s),\varOmega s)\,\mathrm{d}s = X+{\int_{0}^{t}} \sum_{k} \exp(ik\varOmega s) f_{k}(u(s),v(s))\,\mathrm{d}s$$and use Picard’s iteration. Clearly for $$-T\leqslant t \leqslant T$$,   $$u(t)=X+\mathscr{O}\left(\frac{1}{\varOmega}\right).$$We take this equality to the integral equation above and integrate with respect to s to find   $$u(t)=X+t\ f_{0}(X,Y)+\sum_{k\neq0}\alpha_{k}(t)f_{k}(X,Y)+\mathscr{O}\left(\frac{1}{\varOmega^{2}}\right),$$where   $$\alpha_{k}(t)=\frac{\exp(ik\varOmega t)-1}{ik\varOmega},~~~k\neq 0.$$Then,   \begin{eqnarray}\nonumber u(t)&=&X+{\int_{0}^{t}} \sum_{k\in\mathbb{Z}}\exp(ik\varOmega s)\\ &&\nonumber\qquad \times\ f_{k}\left(X+s\ f_{0}+\sum_{m\neq0}\alpha_{m}(s)f_{m}+\mathscr{O}\left(\frac{1}{\varOmega^{2}}\right), Y+s\ f_{0\tau} +\sum_{m\neq0}\alpha_{m}(s)f_{m\tau}+\mathscr{O}\left(\frac{1}{\varOmega^{2}}\right)\right)\mathrm{d}s. \end{eqnarray}By Taylor expanding f at X, Y, we obtain, for $$-T\leqslant t \leqslant T$$,   \begin{eqnarray*} u(t)&=&X+t\ f_{0}+\sum_{k\neq 0}\alpha_{k}(t)f_{k}+{\int_{0}^{t}} s \frac{\partial f_{0}}{\partial X}\ f_{0}\,\mathrm{d}s +{\int_{0}^{t}} \sum_{k\neq0}\alpha_{k}(s)\frac{\partial f_{0}}{\partial X}\ f_{k}\,\mathrm{d}s \\ &&+{\int_{0}^{t}} \sum_{k\neq0}\alpha_{k}(s)\frac{\partial f_{0}}{\partial Y}\ f_{k\tau}\,\mathrm{d}s +{\int_{0}^{t}} s\frac{\partial f_{0}}{\partial Y}\ f_{0\tau}\,\mathrm{d}s +{\int_{0}^{t}} \sum_{k\neq0}\exp(ik\varOmega s)s\frac{\partial f_{k}}{\partial X}\ f_{0}\,\mathrm{d}s \\ &&+{\int_{0}^{t}} \sum_{k,m\neq0}\exp(ik\varOmega s)\alpha_{m}(s)\frac{\partial f_{k}}{\partial X}\ f_{m}\,\mathrm{d}s+{\int_{0}^{t}}\sum_{k\neq0}\exp(ik\varOmega s)s\frac{\partial f_{k}}{\partial Y}\ f_{0\tau}\,\mathrm{d}s \\ &&+{\int_{0}^{t}} \sum_{k,m\neq0}\exp(ik\varOmega s)\alpha_{m}(s)\frac{\partial f_{k}}{\partial Y}\ f_{m\tau}\,\mathrm{d}s
+\mathscr{O}\left(\frac{1}{\varOmega^{3}}\right). \end{eqnarray*}The proof concludes by evaluating this expression at t = ±T and taking those values to the left-hand side of (4.4). According to this result, (4.1) and (4.3) provide approximations with $$\mathscr{O}(1/\varOmega )$$ errors to (3.8) and (3.9) (evaluated at $$X=X_{n}$$, $$Y=Y_{n}$$, $$t=t_{n}$$), respectively, as expected from the use of forward differencing. Similarly, (4.2) approximates (3.8) (at $$X=X_{n}$$, $$Y=Y_{n}$$, $$t=t_{n}$$) with $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$ error, as expected of central differences. However, when comparing (4.4) with (3.9) (at $$X=X_{n}$$, $$Y=Y_{n}$$, $$Z=Z_{n}$$, $$t=t_{n}$$), we observe that the last sum in (3.9) has a factor $$\exp (ik\varOmega \tau )$$, which is not present in the last sum in (4.4), and therefore the error is, in general, only $$\mathscr{O}(1/\varOmega )$$. To achieve $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$ errors we may assume that the functions $$f_{k}(X,Y)$$, $$k\neq 0$$, are independent of the second argument Y, i.e. the delay argument y only appears in f through $$f_{0}$$ (this is the case in (1.1)). Alternatively, we may assume that, for all $$k\neq 0$$, $$\exp (ik\varOmega \tau )=1$$, i.e. that $$\tau$$ is an integer multiple of the period $$T = 2\pi /\varOmega$$. We are now ready to give the main result of this section. Theorem 4.3 Assume that the problem (3.1), with the smoothness assumptions stated in the preceding section, is integrated with SAM with exact micro-integrations. In addition assume that one of the following hypotheses holds: (H1) The oscillatory Fourier coefficients $$f_{k}$$, $$k\neq 0$$ of f do not depend on the delayed argument y. (H2) The delay $$\tau$$ is an integer multiple of the period $$T = 2\pi /\varOmega$$. 
Then there exist constants $$\varOmega _{0}$$, $$H_{0}$$ and K such that, for $$\varOmega \geqslant \varOmega _{0}$$, $$H\leqslant H_{0}$$, $$0\leqslant t_{n}=nH\leqslant t_{max}$$, the difference between the numerical solution and the solution of the averaged problem has the bound   \begin{align} \|X_{n}-X(t_{n}) \|\leqslant K\left( H^{2}+\frac{H}{\varOmega}+\frac{1}{\varOmega^{2}}\right). \end{align} (4.5) Proof. We apply Proposition 4.1 with B taken as a large ball containing the trajectory X(t), $$0\leqslant t\leqslant t_{max}$$ in its interior; the vectors $$X(t_{n})$$ play the role of $$\tilde{X}_{n}$$ and the vectors $$X_{n}$$ play the role of $$\hat{X}_{n}$$. It is clear that each $$\tilde{X}_{n}$$ is in B. For H sufficiently small and $$\varOmega$$ sufficiently large, the same will be true for each $$\hat{X}_{n}$$; this is established by means of a standard argument by contradiction using (4.5). Each residual $$\tilde{\rho }_{n}$$ is $$\mathscr{O}\big (H^{2}\big )$$ with the exceptions of $$\tilde{\rho }_{0}$$, $$\tilde{\rho }_{N}$$ and $$\tilde{\rho }_{2N}$$; these are only $$\mathscr{O}(H)$$ because the first two correspond to Euler steps and in the third there is a jump discontinuity in the second time derivative of the averaged solution. According to Proposition 4.2, the residuals $$\hat{\rho }_{n}$$ for the numerical solution are $$\mathscr{O}\big (1/\varOmega ^{2}\big )$$, with the exceptions of $$\hat{\rho }_{0}$$ and $$\hat{\rho }_{N}$$, which are of size $$\mathscr{O}(1/\varOmega )$$. Taking these results to Proposition 4.1 we get a global error bound of the desired form. Remark 4.4 It is clear that the bound in (4.5) may be replaced by one of the form $$K^{\prime } \big (H^{2}+1/\varOmega ^{2}\big )$$. 
We prefer the form (4.5), as it relates to three sources of error: the macro-integration $$H^{2}$$ error, the error $$H/\varOmega$$ arising from differencing at 0, $$\tau$$, $$2\tau$$ and the error from second-order differencing at all other step points $$t_{n}$$. Remark 4.5 The discrepancy between (4.4) and (3.9) that leads to the introduction of (H1) and (H2) stems from the fact that the values of the phase $$\theta$$ at $$t_{n}$$ and $$t_{n-N} = t_{n}-\tau$$ are in general different in the oscillatory problem, but the same in SAM. The hypothesis (H1) holds in most applications; in fact in all the papers mentioned in the introduction, the delay system is obtained by adding to the right-hand side of an oscillatory ordinary differential system a linear feedback term My, with M a constant matrix. (H2) is relevant in those studies, where there is freedom in choosing the exact value of the large frequency $$\varOmega$$. Remark 4.6 If (H1) and (H2) do not hold, the same proof yields for our algorithm a bound of the form $$K\big (H^{2}+1/\varOmega \big )$$ under the assumption that f and $$\varphi$$ are $$C^{2}$$ functions. Numerical experiments reported in Section 6 reveal that in that case the bound cannot be improved to $$K\big (H^{2}+1/\varOmega ^{2}\big )$$. However, if (H1) and (H2) do not hold, errors of size $$K\big (H^{2}+1/\varOmega \big )$$ may be obtained by means of a simpler algorithm based on applying forward differences at each step point $$t_{n}$$; obviously that alternative algorithm does not require the backward integration legs (2.8). 5. Error analysis: micro-integration errors We now take into account the errors introduced by the Euler micro-integration. We begin with an auxiliary result. Note the improved error bound at the end of the integration interval. 
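The improved endpoint accuracy just mentioned can be observed on a toy problem; the following is our own illustration, not one of the paper's test problems. Euler's rule is applied over one fast period to $$\dot u = u\cos (\varOmega t)$$, whose exact solution is $$u(t)=\exp (\sin (\varOmega t)/\varOmega )$$: interior errors behave like $$h$$, while the endpoint error collapses to the $$h/\varOmega$$ scale.

```python
import math

# Toy illustration (our own, not from the paper) of the endpoint effect:
# Euler with step h = T/nu_max over one fast period [0, T], T = 2*pi/Omega,
# applied to  udot = u*cos(Omega*t), u(0) = 1, with exact solution
# u(t) = exp(sin(Omega*t)/Omega).

Omega = 100.0
T = 2.0 * math.pi / Omega
nu_max = 50
h = T / nu_max

u = 1.0
err_max = 0.0
for nu in range(nu_max):
    u += h * u * math.cos(Omega * nu * h)                    # Euler step
    exact = math.exp(math.sin(Omega * (nu + 1) * h) / Omega)
    err_max = max(err_max, abs(u - exact))                   # interior error: O(h)
err_end = abs(u - math.exp(math.sin(Omega * T) / Omega))     # endpoint error: O(h/Omega)
```

In this run err_end is observed to be one to two orders of magnitude smaller than err_max, consistent with the extra factor $$1/\varOmega$$ in the endpoint bound: the leading Euler error comes from the purely oscillatory part of the right-hand side, whose quadrature over a full period cancels.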
Proposition 5.1 Consider the application of Euler’s rule with constant step size $$h = T/\nu _{max}$$ to integrate in the interval $$0\leqslant t\leqslant T$$ the initial value problem $$\dot u = f(u,v,\varOmega t)$$, u(0) = X, where v is a known $$C^{1}$$ function. Denote by $$u_{\nu }$$ the Euler solution at $$t =\nu h$$. There are constants C, $$\varOmega _{0}$$ and $$h_{0}$$ such that for $$h\leqslant h_{0}$$, $$\varOmega \geqslant \varOmega _{0}$$, the following bounds hold:   $$\| u_{\nu}-u(\nu h)\| \leqslant C h,\qquad \nu = 0, 1,\dots, \nu_{max},$$ (5.1)  $$\| u_{\nu_{max}}-u(T)\| \leqslant C \frac{h}{\varOmega},$$ (5.2)  $$\left \| \frac{u_{\nu_{max}}-X}{T}-\frac{u(T)-X}{T}\right\| \leqslant C h.$$ (5.3) Proof. A standard error bound for Euler’s rule is   $$\| u_{\nu}-u(\nu h)\| \leqslant \frac{\exp(LT)-1}{L} M h,\qquad \nu = 0,\dots,\nu_{max},$$where L is the Lipschitz constant of f with respect to u in a neighbourhood of the solution and M is an upper bound for $$\|(1/2) \ddot u(t)\|$$, $$0\leqslant t\leqslant T$$. In the present circumstances we have to take into account that, as $$\varOmega \rightarrow \infty$$, the length $$T=2\pi / \varOmega$$ of the integration interval decreases and M grows like $$\varOmega$$ because   $$\ddot u = \frac{\partial f}{\partial u} \dot u+\frac{\partial f}{\partial v} \dot v+\varOmega \frac{\partial f}{\partial t}.$$ From the elementary inequality $$(\exp (LT)-1)/L \leqslant T\exp (LT)$$ and the standard bound, we have   $$\| u_{\nu}-u(\nu h)\| \leqslant T \exp(LT)M h,\qquad \nu = 0,\dots,\nu_{max},$$and therefore (5.1) holds.
By adding all the Euler equations, we find   $$u_{\nu_{max}} = X + \sum_{\nu=0}^{\nu_{max}-1} h f(u_{\nu},v(\nu h),\varOmega \nu h),$$and from (5.1),   \begin{align} u_{\nu_{max}} = X +\sum_{\nu=0}^{\nu_{max}-1} h f(u(\nu h),v(\nu h),\varOmega \nu h) +\mathscr{O}(hT), \end{align} (5.4)a relation that has to be compared with   \begin{align} u(T) = X +{\int_{0}^{T}} f(u(s),v(s), \varOmega s)\, \mathrm{d}s. \end{align} (5.5) The bound (5.2) will be established if we show that the quadrature sum in (5.4) approximates the integral in (5.5) with errors of size $$\mathscr{O}(hT)$$. To this end we decompose the function being integrated as   $$f(u(s),v(s),\varOmega s) = f(X,v(0),\varOmega s) + ( f(u(s),v(s), \varOmega s) -f(X,v(0),\varOmega s)) = f_{1}+f_{2}.$$It is easily seen (for instance by expanding in a Fourier series) that the total time derivative $$({\textrm d}/{\textrm d}t)f_{2}$$ remains bounded as $$\varOmega \rightarrow \infty$$; elementary results then show that the quadrature of $$f_{2}$$ has errors of the desired size $$\mathscr{O}(hT)$$. On the other hand, the time derivative of $$f_{1}$$ grows like $$\varOmega$$, and quadrature errors of size $$\mathscr{O}(h)$$ may be feared. Fortunately the quadrature for $$f_{1}$$ is actually exact, because one checks by an explicit computation that it is exact for each Fourier mode $$f_{k}(X,v(0)) \exp (ik\varOmega s)$$. The third bound (5.3) is a trivial consequence of (5.2). It goes without saying that the corresponding result holds for backward integrations with $$\!-T\!\leqslant t\leqslant 0$$. The following theorem provides the main result of this paper. Theorem 5.2 Assume that the problem (3.1), with the smoothness assumptions stated in the preceding sections, is integrated with SAM. In addition assume that one of the following hypotheses holds: (H1) The oscillatory Fourier coefficients $$f_{k}$$, $$k\neq 0$$ of f do not depend on the delayed argument y.
(H2) The delay $$\tau$$ is an integer multiple of the period $$T = 2\pi /\varOmega$$. Then there exist constants $$\varOmega _{0}$$, $$H_{0}$$, $$h_{0}$$ and K such that, for $$\varOmega \geqslant \varOmega _{0}$$, $$H\leqslant H_{0}$$, $$h\leqslant h_{0}$$, $$0\leqslant t_{n}=nH\leqslant t_{max}$$, the difference between the numerical solution and the solution of the averaged problem may be bounded as follows:   $$\left\|X_{n}-X\big(t_{n}\big) \right\| \leqslant K\left( H^{2}+\frac{H}{\varOmega}+\frac{1}{\varOmega^{2}}+h\right).$$In particular, if the grids are refined in such a way that h is taken proportional to $$H/\varOmega$$ (i.e. $$\nu _{max}$$ is taken proportional to N), then the bound becomes   \begin{align} \left\|X_{n}-X\big(t_{n}\big) \right\| \leqslant K^{\prime}\left( H^{2}+\frac{H}{\varOmega}+\frac{1}{\varOmega^{2}}\right). \end{align} (5.6) Proof. We argue as in Theorem 4.3. Now in the residual for the numerical solution $$\big \{X_{n}\big \}$$ we have taken into account the micro-integration error. For $$n\leqslant N$$, the bound (5.3) (and the corresponding bound for the backward integration) show that the micro-integration adds a term of size $$\mathscr{O}(h)$$ to the residual. For n > N the situation is slightly more complicated because the algorithm uses past values $$v_{n,\nu }$$ that are themselves affected by micro-integration errors. However, the stability of the micro-integrator guarantees that even when those errors are taken into account an estimate like (5.3) holds. We recall that taking h proportional to $$H/\varOmega$$ makes the complexity of the algorithm independent of $$\varOmega$$. 6. Numerical experiments Table 2 Errors in $$x_{1}$$ for SAM with respect to the averaged solution for problem (1.1). 
Step points are not stroboscopic times

N | $$\varOmega=25$$ | $$\varOmega=50$$ | $$\varOmega=100$$ | $$\varOmega=200$$ | $$\varOmega=400$$ | $$\varOmega=800$$ | $$\varOmega=1600$$ | $$\varOmega=3200$$
1 | 6.28(-2) | 3.42(-2) | 1.71(-2) | 7.87(-3) | 3.14(-3) | 1.66(-3) | 2.04(-3) | 2.30(-3)
2 | *** | 7.66(-3) | 3.74(-3) | 1.66(-3) | 8.27(-4) | 7.56(-4) | 7.20(-4) | 7.02(-4)
4 | *** | *** | 1.11(-3) | 4.45(-4) | 2.60(-4) | 2.20(-4) | 1.99(-4) | 1.88(-4)
8 | *** | *** | *** | 1.80(-4) | 6.35(-5) | 5.57(-5) | 5.06(-5) | 4.77(-5)
16 | *** | *** | *** | *** | 3.20(-5) | 1.27(-5) | 1.22(-5) | 1.18(-5)
32 | *** | *** | *** | *** | *** | 6.31(-6) | 2.81(-6) | 2.85(-6)
64 | *** | *** | *** | *** | *** | *** | 1.36(-6) | 6.46(-7)
128 | *** | *** | *** | *** | *** | *** | *** | 3.22(-7)

We now present experiments that illustrate the preceding theory. We first integrated with SAM the toggle switch problem (1.1) with $$\alpha =2.5$$, $$\beta =2$$, A = 0.1, $$\omega =0.1$$, B = 4.0 and $$\tau =0.5$$.
The function $$\varphi$$ is constant: $$x_{1}(t) = 0.5$$, $$x_{2}(t) = 2.0$$, $$-\tau \leqslant t\leqslant 0$$, which corresponds to the system staying at an equilibrium up to time t = 0, and then switching on the slow and fast oscillatory forcing terms. For the macro-step size we set $$H = \tau /N$$, $$N=1,2,4,\dots$$ and the micro-step size was chosen as h = T/(2N) (note that for the coarsest macro-step size N = 1, there are only two Euler steps per period). Simulations took place in the interval $$0\leqslant t\leqslant 2$$ that includes the locations t = 0, $$t=\tau$$ and $$t=2\tau$$, where the growth in the error of the SAM solution is more pronounced due to the lack of smoothness. SAM has no difficulty in coping with the longer intervals required in practical simulations, but we have chosen a short interval because on longer intervals it may be very expensive to obtain a sufficiently accurate reference solution to measure errors; see in this connection the computational times quoted at the end of Section 7. Table 2 reports, for varying $$\varOmega$$ and N, the maximum error, over $$0\leqslant t\leqslant 2$$ in the $$X_{1}$$ component of the SAM solution with respect to the averaged solution obtained by integrating (2.4) (this integration was carried out with the Matlab function dde23 with relative and absolute tolerances $$10^{-8}$$ and $$10^{-10}$$, respectively). The combinations of N and $$\varOmega$$ leading to values of H not significantly larger than T were not attempted, as the HMM idea does not make sense for them. Note that here $$\tau /T$$ is irrational and therefore the step points $$t_{n}$$ are not stroboscopic times. Figure 3 displays the errors in Table 2 as functions of N; for clarity not all values of $$\varOmega$$ are included. 
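The bound (5.6) implies that, moving down the main diagonal of Table 2 (halving H while doubling $$\varOmega$$), every term of the bound is divided by four. This can be checked directly on the tabulated values; the snippet below (our own sanity check, with the diagonal entries copied verbatim from Table 2) computes the successive error ratios.

```python
# Main-diagonal entries of Table 2: (N, Omega) = (1, 25), (2, 50), ..., (128, 3200),
# copied verbatim from the table.
diag = [6.28e-2, 7.66e-3, 1.11e-3, 1.80e-4, 3.20e-5, 6.31e-6, 1.36e-6, 3.22e-7]
# Successive ratios; (5.6) predicts they approach 4 as the grids are refined
# (the first few are larger, the coarsest grids being preasymptotic).
ratios = [a / b for a, b in zip(diag, diag[1:])]
```

The last ratios are close to 4, in agreement with the discussion of the table that follows.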
By looking at the columns of the table (or at each of the four solid lines in the figure) we see that the error behaves as $$N^{-2}$$, i.e. as $$H^{2}$$, except at the bottom of each column, where the behaviour is as $$N^{-1}$$. This is of course the behaviour of the bound in (5.6). Errors along rows saturate if $$\varOmega$$ is very large; for those values one just observes the error in the macro-integration. This behaviour is also seen in the figure by comparing points corresponding to the same value of N and varying $$\varOmega$$. Along the main diagonal of the table, errors approximately divide by four, which is also in agreement with the bound (5.6). In the figure this corresponds to observing the behaviour of the right-most point of each of the solid lines. Table 3 differs from Table 2 in that now $$\varOmega$$ is taken from the sequence $$8\pi$$, $$16\pi$$, … that consists of values not very different from those in Table 2. In fact the errors in Table 3 are very similar to those in Table 2. However, for the sequence $$8\pi$$, $$16\pi$$, … the step points are stroboscopic times, and it makes sense to compare the SAM solution with the true oscillatory solution. The results are reported in Table 4. From Theorems 3.2 and 5.2 the errors with respect to the true solution possess a bound of the form (5.6) and this is consistent with the data in the table. Fig. 3. Error in the first component of the SAM solution with respect to the solution of the averaged problem versus macro step size $$H=\tau /N$$ for some of the simulations in Table 2. Different lines correspond to different values of $$\varOmega$$.
Here the reference line shows the $$N^{-2}$$ behaviour. Table 3 Errors in $$x_{1}$$ for SAM with respect to the averaged solution for problem (1.1). Step points are stroboscopic times

N | $$\varOmega=8\pi$$ | $$\varOmega=16\pi$$ | $$\varOmega=32\pi$$ | $$\varOmega=64\pi$$ | $$\varOmega=128\pi$$ | $$\varOmega=256\pi$$ | $$\varOmega=512\pi$$ | $$\varOmega=1024\pi$$
1 | 6.25(-2) | 3.40(-2) | 1.70(-2) | 7.82(-3) | 3.11(-3) | 1.66(-3) | 2.04(-3) | 2.30(-3)
2 | *** | 7.62(-3) | 3.72(-3) | 1.65(-3) | 8.26(-4) | 7.56(-4) | 7.20(-4) | 7.02(-4)
4 | *** | *** | 1.11(-3) | 4.42(-4) | 2.59(-4) | 2.20(-4) | 1.99(-4) | 1.88(-4)
8 | *** | *** | *** | 1.78(-4) | 6.34(-5) | 5.57(-5) | 5.06(-5) | 4.77(-5)
16 | *** | *** | *** | *** | 3.16(-5) | 1.26(-5) | 1.22(-5) | 1.18(-5)
32 | *** | *** | *** | *** | *** | 6.24(-6) | 2.80(-6) | 2.85(-6)
64 | *** | *** | *** | *** | *** | *** | 1.35(-6) | 6.47(-7)
128 | *** | *** | *** | *** | *** | *** | *** | 3.18(-7)

In order to check numerically that the hypotheses (H1)–(H2) are necessary to ensure a bound of the form (5.6), we have considered the simple scalar equation   \begin{align} \dot{x}=y+(x-y)\sin(\varOmega t)+\frac{y}{2}\cos(2\varOmega t);
\end{align} (6.1)this is an academic example where (H1) does not hold (recall that in all the systems from the literature cited (H1) holds). The averaged version is   $$\dot{X}=Y-\frac{1}{\varOmega}Y,$$for $$0\leqslant t<\tau$$ and   $$\dot{X}=Y+\frac{1}{\varOmega}\left(\frac{Y}{2}-\frac{Z}{2}\right)\sin(\varOmega\tau)-\frac{1}{16\varOmega}Z\sin(2\varOmega \tau),$$for $$t\geqslant \tau$$. Table 4 Errors in $$x_{1}$$ for SAM with respect to the true oscillatory solution for problem (1.1). Step points are stroboscopic times N  $$\varOmega =8\pi$$  $$\varOmega =16\pi$$  $$\varOmega =32\pi$$  $$\varOmega =64\pi$$  $$\varOmega =128\pi$$  $$\varOmega =256\pi$$  $$\varOmega =512\pi$$  $$\varOmega =1024\pi$$  1  6.10($$-$$2)  3.30($$-$$2)  1.66($$-$$2)  7.74($$-$$3)  3.09($$-$$3)  1.66($$-$$3)  2.04($$-$$3)  2.30($$-$$3)  2  ***  6.65($$-$$3)  3.41($$-$$3)  1.56($$-$$3)  8.31($$-$$4)  7.57($$-$$4)  7.20($$-$$4)  7.02($$-$$4)  4  ***  ***  7.95($$-$$4)  3.56($$-$$4)  2.63($$-$$4)  2.21($$-$$4)  1.99($$-$$4)  1.88($$-$$4)  8  ***  ***  ***  9.25($$-$$5)  6.62($$-$$5)  5.64($$-$$5)  5.07($$-$$5)  4.77($$-$$5)  16  ***  ***  ***  ***  1.50($$-$$5)  1.34($$-$$5)  1.23($$-$$5)  1.18($$-$$5)  32  ***  ***  ***  ***  ***  3.03($$-$$6)  2.95($$-$$6)  2.88($$-$$6)  64  ***  ***  ***  ***  ***  ***  6.44($$-$$7)  6.76($$-$$7)  128  ***  ***  ***  ***  ***  ***  ***  1.43($$-$$7)  N  $$\varOmega =8\pi$$  $$\varOmega =16\pi$$  $$\varOmega =32\pi$$  $$\varOmega =64\pi$$  $$\varOmega =128\pi$$  $$\varOmega =256\pi$$  $$\varOmega =512\pi$$  $$\varOmega =1024\pi$$  1  6.10($$-$$2)  3.30($$-$$2)  1.66($$-$$2)  7.74($$-$$3)  3.09($$-$$3)  1.66($$-$$3)  2.04($$-$$3)  2.30($$-$$3)  2  ***  6.65($$-$$3)  3.41($$-$$3)  1.56($$-$$3)  8.31($$-$$4)  7.57($$-$$4)  7.20($$-$$4)  7.02($$-$$4)  4  ***  ***  7.95($$-$$4)  3.56($$-$$4)  2.63($$-$$4)  2.21($$-$$4)  1.99($$-$$4)  1.88($$-$$4)  8  ***  ***  ***  9.25($$-$$5)  6.62($$-$$5)  5.64($$-$$5)  5.07($$-$$5)  4.77($$-$$5)  16  ***  ***  ***  ***  
1.50($$-$$5)  1.34($$-$$5)  1.23($$-$$5)  1.18($$-$$5)  32  ***  ***  ***  ***  ***  3.03($$-$$6)  2.95($$-$$6)  2.88($$-$$6)  64  ***  ***  ***  ***  ***  ***  6.44($$-$$7)  6.76($$-$$7)  128  ***  ***  ***  ***  ***  ***  ***  1.43($$-$$7)  View Large Table 4 Errors in $$x_{1}$$ for SAM with respect to the true oscillatory solution for problem (1.1). Step points are stroboscopic times N  $$\varOmega =8\pi$$  $$\varOmega =16\pi$$  $$\varOmega =32\pi$$  $$\varOmega =64\pi$$  $$\varOmega =128\pi$$  $$\varOmega =256\pi$$  $$\varOmega =512\pi$$  $$\varOmega =1024\pi$$  1  6.10($$-$$2)  3.30($$-$$2)  1.66($$-$$2)  7.74($$-$$3)  3.09($$-$$3)  1.66($$-$$3)  2.04($$-$$3)  2.30($$-$$3)  2  ***  6.65($$-$$3)  3.41($$-$$3)  1.56($$-$$3)  8.31($$-$$4)  7.57($$-$$4)  7.20($$-$$4)  7.02($$-$$4)  4  ***  ***  7.95($$-$$4)  3.56($$-$$4)  2.63($$-$$4)  2.21($$-$$4)  1.99($$-$$4)  1.88($$-$$4)  8  ***  ***  ***  9.25($$-$$5)  6.62($$-$$5)  5.64($$-$$5)  5.07($$-$$5)  4.77($$-$$5)  16  ***  ***  ***  ***  1.50($$-$$5)  1.34($$-$$5)  1.23($$-$$5)  1.18($$-$$5)  32  ***  ***  ***  ***  ***  3.03($$-$$6)  2.95($$-$$6)  2.88($$-$$6)  64  ***  ***  ***  ***  ***  ***  6.44($$-$$7)  6.76($$-$$7)  128  ***  ***  ***  ***  ***  ***  ***  1.43($$-$$7)  N  $$\varOmega =8\pi$$  $$\varOmega =16\pi$$  $$\varOmega =32\pi$$  $$\varOmega =64\pi$$  $$\varOmega =128\pi$$  $$\varOmega =256\pi$$  $$\varOmega =512\pi$$  $$\varOmega =1024\pi$$  1  6.10($$-$$2)  3.30($$-$$2)  1.66($$-$$2)  7.74($$-$$3)  3.09($$-$$3)  1.66($$-$$3)  2.04($$-$$3)  2.30($$-$$3)  2  ***  6.65($$-$$3)  3.41($$-$$3)  1.56($$-$$3)  8.31($$-$$4)  7.57($$-$$4)  7.20($$-$$4)  7.02($$-$$4)  4  ***  ***  7.95($$-$$4)  3.56($$-$$4)  2.63($$-$$4)  2.21($$-$$4)  1.99($$-$$4)  1.88($$-$$4)  8  ***  ***  ***  9.25($$-$$5)  6.62($$-$$5)  5.64($$-$$5)  5.07($$-$$5)  4.77($$-$$5)  16  ***  ***  ***  ***  1.50($$-$$5)  1.34($$-$$5)  1.23($$-$$5)  1.18($$-$$5)  32  ***  ***  ***  ***  ***  3.03($$-$$6)  2.95($$-$$6)  2.88($$-$$6)  64  ***  ***  
***  ***  ***  ***  6.44($$-$$7)  6.76($$-$$7)  128  ***  ***  ***  ***  ***  ***  ***  1.43($$-$$7)  View Large Table 5 Errors between SAM solution and the averaged solution for problem (6.1). (H2) holds N  $$\varOmega =8\pi$$  $$\varOmega =16\pi$$  $$\varOmega =32\pi$$  $$\varOmega =64\pi$$  $$\varOmega =128\pi$$  $$\varOmega =256\pi$$  $$\varOmega =512\pi$$  1  3.08($$-$$2)  3.00($$-$$2)  2.97($$-$$2)  2.95($$-$$2)  2.95($$-$$2)  2.95($$-$$2)  2.94($$-$$2)  2  ***  8.41($$-$$3)  8.36($$-$$3)  8.35($$-$$3)  8.34($$-$$3)  8.33($$-$$3)  8.33($$-$$3)  4  ***  ***  2.21($$-$$3)  2.26($$-$$3)  2.28($$-$$3)  2.29($$-$$3)  2.29($$-$$3)  8  ***  ***  ***  5.64($$-$$4)  5.84($$-$$4)  5.91($$-$$4)  5.94($$-$$4)  16  ***  ***  ***  ***  1.42($$-$$4)  1.48($$-$$4)  1.50($$-$$4)  32  ***  ***  ***  ***  ***  3.57($$-$$5)  3.73($$-$$5)  64  ***  ***  ***  ***  ***  ***  8.94($$-$$6)  N  $$\varOmega =8\pi$$  $$\varOmega =16\pi$$  $$\varOmega =32\pi$$  $$\varOmega =64\pi$$  $$\varOmega =128\pi$$  $$\varOmega =256\pi$$  $$\varOmega =512\pi$$  1  3.08($$-$$2)  3.00($$-$$2)  2.97($$-$$2)  2.95($$-$$2)  2.95($$-$$2)  2.95($$-$$2)  2.94($$-$$2)  2  ***  8.41($$-$$3)  8.36($$-$$3)  8.35($$-$$3)  8.34($$-$$3)  8.33($$-$$3)  8.33($$-$$3)  4  ***  ***  2.21($$-$$3)  2.26($$-$$3)  2.28($$-$$3)  2.29($$-$$3)  2.29($$-$$3)  8  ***  ***  ***  5.64($$-$$4)  5.84($$-$$4)  5.91($$-$$4)  5.94($$-$$4)  16  ***  ***  ***  ***  1.42($$-$$4)  1.48($$-$$4)  1.50($$-$$4)  32  ***  ***  ***  ***  ***  3.57($$-$$5)  3.73($$-$$5)  64  ***  ***  ***  ***  ***  ***  8.94($$-$$6)  View Large Table 5 Errors between SAM solution and the averaged solution for problem (6.1). 
(H2) holds N  $$\varOmega =8\pi$$  $$\varOmega =16\pi$$  $$\varOmega =32\pi$$  $$\varOmega =64\pi$$  $$\varOmega =128\pi$$  $$\varOmega =256\pi$$  $$\varOmega =512\pi$$  1  3.08($$-$$2)  3.00($$-$$2)  2.97($$-$$2)  2.95($$-$$2)  2.95($$-$$2)  2.95($$-$$2)  2.94($$-$$2)  2  ***  8.41($$-$$3)  8.36($$-$$3)  8.35($$-$$3)  8.34($$-$$3)  8.33($$-$$3)  8.33($$-$$3)  4  ***  ***  2.21($$-$$3)  2.26($$-$$3)  2.28($$-$$3)  2.29($$-$$3)  2.29($$-$$3)  8  ***  ***  ***  5.64($$-$$4)  5.84($$-$$4)  5.91($$-$$4)  5.94($$-$$4)  16  ***  ***  ***  ***  1.42($$-$$4)  1.48($$-$$4)  1.50($$-$$4)  32  ***  ***  ***  ***  ***  3.57($$-$$5)  3.73($$-$$5)  64  ***  ***  ***  ***  ***  ***  8.94($$-$$6)  N  $$\varOmega =8\pi$$  $$\varOmega =16\pi$$  $$\varOmega =32\pi$$  $$\varOmega =64\pi$$  $$\varOmega =128\pi$$  $$\varOmega =256\pi$$  $$\varOmega =512\pi$$  1  3.08($$-$$2)  3.00($$-$$2)  2.97($$-$$2)  2.95($$-$$2)  2.95($$-$$2)  2.95($$-$$2)  2.94($$-$$2)  2  ***  8.41($$-$$3)  8.36($$-$$3)  8.35($$-$$3)  8.34($$-$$3)  8.33($$-$$3)  8.33($$-$$3)  4  ***  ***  2.21($$-$$3)  2.26($$-$$3)  2.28($$-$$3)  2.29($$-$$3)  2.29($$-$$3)  8  ***  ***  ***  5.64($$-$$4)  5.84($$-$$4)  5.91($$-$$4)  5.94($$-$$4)  16  ***  ***  ***  ***  1.42($$-$$4)  1.48($$-$$4)  1.50($$-$$4)  32  ***  ***  ***  ***  ***  3.57($$-$$5)  3.73($$-$$5)  64  ***  ***  ***  ***  ***  ***  8.94($$-$$6)  View Large We used $$\tau = 0.5$$, $$H = \tau /N$$, h = T/(5N), a constant $$\varphi = 0.1$$ and, as before, measured errors in the maximum norm for $$0\leqslant t\leqslant 2$$. Table 52 gives errors when $$\varOmega$$ is taken from the sequence $$8\pi$$, $$16\pi$$, …, so that the periods T are 1/4, 1/8, … Hypothesis (H1) does not hold, but (H2) does, so that Theorem 5.2 may be applied. In fact an $$N^{-2}$$ (or equivalently $$\varOmega ^{-2}$$) behaviour is seen along the diagonals of the table. 
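The second-order behaviour along the diagonal can be checked directly from the entries of Table 5: each step down the diagonal doubles both $$N$$ and $$\varOmega$$, so the error should shrink by a factor of roughly four. A quick sanity check (the numbers below are copied from the diagonal of Table 5):

```python
# Diagonal entries of Table 5: N = 1, 2, 4, ..., 64 paired with
# Omega = 8*pi, 16*pi, ..., 512*pi.
diag = [3.08e-2, 8.41e-3, 2.21e-3, 5.64e-4, 1.42e-4, 3.57e-5, 8.94e-6]

# Second-order behaviour (error ~ N^(-2), equivalently ~ Omega^(-2)) means
# successive ratios should approach 2**2 = 4.
ratios = [a / b for a, b in zip(diag, diag[1:])]
print([round(r, 2) for r in ratios])  # -> [3.66, 3.81, 3.92, 3.97, 3.98, 3.99]
```

The ratios approach 4 monotonically, consistent with the bound of Theorem 5.2; repeating the computation with the diagonal of Table 6 below (where neither hypothesis holds) gives a final ratio of about 1.5, in agreement with the loss of second order discussed next.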
We next changed the frequencies slightly and used the sequence $$8\pi +\pi /64$$, $$16\pi +\pi /32$$, … (an increase of less than 0.2% in each frequency). Neither (H1) nor (H2) is fulfilled and Theorem 5.2 may not be applied. The results in Table 6 show that, for large $$\varOmega$$, the second-order behaviour along the main diagonal is lost, indicating that the error does not behave as in the bound (5.6). Note the substantial difference between Tables 5 and 6 at N = 64, in spite of the very small relative change in the value of $$\varOmega$$.

**Table 6.** Errors in $$x$$ between the SAM solution and the averaged solution for problem (6.1). Neither (H1) nor (H2) holds.

| N | $$\varOmega=8\pi+\frac{\pi}{64}$$ | $$\varOmega=16\pi+\frac{\pi}{32}$$ | $$\varOmega=32\pi+\frac{\pi}{16}$$ | $$\varOmega=64\pi+\frac{\pi}{8}$$ | $$\varOmega=128\pi+\frac{\pi}{4}$$ | $$\varOmega=256\pi+\frac{\pi}{2}$$ | $$\varOmega=512\pi+\pi$$ |
|---|---|---|---|---|---|---|---|
| 1 | 3.08(-2) | 3.00(-2) | 2.97(-2) | 2.95(-2) | 2.95(-2) | 2.95(-2) | 2.95(-2) |
| 2 | *** | 8.43(-3) | 8.38(-3) | 8.36(-3) | 8.36(-3) | 8.36(-3) | 8.36(-3) |
| 4 | *** | *** | 2.23(-3) | 2.28(-3) | 2.30(-3) | 2.31(-3) | 2.32(-3) |
| 8 | *** | *** | *** | 5.78(-4) | 6.00(-4) | 6.12(-4) | 6.23(-4) |
| 16 | *** | *** | *** | *** | 1.58(-4) | 1.69(-4) | 1.79(-4) |
| 32 | *** | *** | *** | *** | *** | 5.63(-5) | 6.60(-5) |
| 64 | *** | *** | *** | *** | *** | *** | 3.76(-5) |

7. Extensions

We finally consider the application of SAM to problems that are not of the form (3.1). The number of possible variants is very high and we restrict attention to reporting numerical results for a case study; the corresponding analysis may be carried out by adapting the proofs given in the preceding sections. We study again the toggle switch problem, but now in an alternative asymptotic regime. The system is given by
\begin{eqnarray}
\frac{{\textrm d}x_{1}}{{\textrm d}t}&=&\frac{\alpha}{1+x_{2}^{\beta}}-x_{1}(t-\tau)+A\sin(\omega t)+\hat{B}\varOmega\sin(\varOmega t),\\ \nonumber
\frac{{\textrm d}x_{2}}{{\textrm d}t}&=&\frac{\alpha}{1+x_{1}^{\beta}}-x_{2}(t-\tau),
\end{eqnarray}
(7.1)
where $$\hat{B}$$ is a constant and the other symbols are as before. As $$\varOmega \rightarrow \infty$$, the variable $$x_{1}$$ undergoes oscillations of frequency $$\varOmega$$ and $$\mathscr{O}(1)$$ amplitude, which, for large $$\varOmega$$, makes the direct numerical integration of the system more expensive than that of (1.1) (there the amplitude is $$\mathscr{O}(1/\varOmega )$$). For an analytic treatment, we begin by performing, for $$t\geqslant 0$$, the preliminary stroboscopic change of variables
\begin{eqnarray*}
x_{1}&=&X_{1}+\hat{B}(1-\cos(\varOmega t)),\\
x_{2}&=&X_{2},
\end{eqnarray*}
which differs from the identity in $$\mathscr{O}(1)$$ quantities.
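The two properties that make this change of variables useful, namely that its time derivative exactly absorbs the $$\mathscr{O}(\varOmega )$$ forcing term of (7.1) and that it reduces to the identity at stroboscopic times, are elementary; here is a minimal numerical check (the values of $$\hat{B}$$ and $$\varOmega$$ are illustrative):

```python
import math

Omega, Bhat = 64.0 * math.pi, 0.1  # illustrative values

def shift(t):
    """Offset in the change of variables x1 = X1 + Bhat*(1 - cos(Omega*t))."""
    return Bhat * (1.0 - math.cos(Omega * t))

# (i) d(shift)/dt = Bhat*Omega*sin(Omega*t), exactly the O(Omega) forcing
# term of (7.1); it therefore cancels in the equation for X1.
eps = 1e-7
for t in (0.13, 0.57, 1.91):
    deriv = (shift(t + eps) - shift(t - eps)) / (2.0 * eps)
    assert abs(deriv - Bhat * Omega * math.sin(Omega * t)) < 1e-5

# (ii) The change is stroboscopic: identity at t = k*T with T = 2*pi/Omega,
# while in between x1 - X1 oscillates with the O(1) amplitude 2*Bhat.
T = 2.0 * math.pi / Omega
assert all(abs(shift(k * T)) < 1e-12 for k in range(5))
assert abs(max(shift(j * T / 1000) for j in range(1000)) - 2.0 * Bhat) < 1e-4
```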
This leads to
\begin{eqnarray}
\frac{{\textrm d}X_{1}}{{\textrm d}t}&=&\frac{\alpha}{1+X_{2}^{\beta}}-X_{1}(t-\tau)-\hat{B}\,1_{\{t\geqslant \tau\}}+A\sin(\omega t),\\ \nonumber
\frac{{\textrm d}X_{2}}{{\textrm d}t}&=&\frac{\alpha}{1+(X_{1}+\hat{B}(1-\cos(\varOmega t)))^{\beta}}-X_{2}(t-\tau).
\end{eqnarray}
(7.2)
The highly oscillatory forcing has been reduced from $$\mathscr{O}(\varOmega )$$ to $$\mathscr{O}(1)$$ and, in principle, it is possible to average (7.2) by the techniques used to deal with (3.1). Unfortunately, finding the required Fourier coefficients in closed form does not appear to be possible in general. In the particular case where $$\beta = 2$$, the zero Fourier coefficient may be found by evaluating the relevant integral (3.6) with the help of the residue theorem. This leads to the averaged system of the form $$\dot X = f_{0}$$ explicitly given by
\begin{eqnarray}
\frac{{\textrm d}X_{1}}{{\textrm d}t}&=&\frac{\alpha}{1+{X_{2}^{2}}}-X_{1}(t-\tau)-\hat{B}\,1_{\{t\geqslant \tau\}}+A\sin(\omega t),\\ \nonumber
\frac{{\textrm d}X_{2}}{{\textrm d}t}&=&\alpha\frac{\sqrt{-\frac{M}{2}+\frac{\sqrt{N}}{2}}+\left(X_{1}+\hat{B}\right)\sqrt{\frac{M}{2}+\frac{\sqrt{N}}{2}}} {\left(\frac{M}{2}+\frac{\sqrt{N}}{2}\right)^{2}+\left(X_{1}+\hat{B}\right)^{2}+\sqrt{N}}-X_{2}(t-\tau),
\end{eqnarray}
(7.3)
with $$M={X_{1}^{2}}+2\hat{B}X_{1}-1$$, $$N=M^{2}+4\left (X_{1}+\hat{B}\right )^{2}$$, whose solutions approximate x with errors of size $$\mathscr{O}(1/\varOmega )$$ at stroboscopic times; in fact, since (7.2) is even in $$\varOmega$$, in this particular case the errors are $$\mathscr{O}\left (1/\varOmega ^{2}\right )$$. Table 7 presents the errors in the SAM solution measured with respect to the solution of (7.3). The experiments have $$\hat{B} = 0.1$$; all other details are as in the toggle switch simulations in the preceding section. The table shows that the performance of SAM is very similar to that encountered in problem (1.1).
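The closed-form zero Fourier coefficient appearing in (7.3) can be validated against a direct numerical average over one fast period. The sketch below (with illustrative values of $$X_{1}$$ and $$\alpha$$; $$\hat{B}=0.1$$ as in the experiments) compares the residue-theorem formula with a trapezoidal average, which is spectrally accurate for smooth periodic integrands:

```python
import math

def f0_closed_form(x1, bhat, alpha):
    """Zero Fourier coefficient of alpha/(1 + (x1 + bhat*(1 - cos s))^2),
    using the expressions M, N of the averaged system (7.3)."""
    M = x1 * x1 + 2.0 * bhat * x1 - 1.0
    N = M * M + 4.0 * (x1 + bhat) ** 2
    p = 0.5 * (M + math.sqrt(N))    # M/2 + sqrt(N)/2
    q = 0.5 * (math.sqrt(N) - M)    # -M/2 + sqrt(N)/2 (always >= 0)
    num = math.sqrt(q) + (x1 + bhat) * math.sqrt(p)
    den = p * p + (x1 + bhat) ** 2 + math.sqrt(N)
    return alpha * num / den

def f0_average(x1, bhat, alpha, n=4096):
    """(1/(2*pi)) * integral over [0, 2*pi] by the trapezoidal rule."""
    return sum(alpha / (1.0 + (x1 + bhat * (1.0 - math.cos(2.0 * math.pi * k / n))) ** 2)
               for k in range(n)) / n

# The two agree to machine accuracy; with bhat = 0 the coefficient
# reduces to alpha/(1 + x1**2), as it must.
for x1 in (0.5, 1.0, 2.0):
    assert abs(f0_closed_form(x1, 0.1, 2.0) - f0_average(x1, 0.1, 2.0)) < 1e-9
assert abs(f0_closed_form(1.0, 0.0, 2.0) - 2.0 / (1.0 + 1.0)) < 1e-12
```

Since the integrand is analytic and periodic, the trapezoidal rule converges exponentially fast here, so n = 4096 nodes is far more than enough for the comparison.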
We also measured errors with respect to the oscillatory solution and found that they are very close to those reported here, i.e. the situation is similar to that seen when comparing Tables 3 and 4. Finally, we mention that, for the choice of constants considered here, the integration (in the interval $$0\leqslant t\leqslant 2$$) of the oscillatory problem for $$\varOmega =1024\pi$$ with dde23 on a laptop computer took more than 9,000 seconds; the corresponding SAM solution with the smallest value of H took approximately one second. Since, as pointed out before, the study of vibrational resonance requires, for many choices of the model constants, integrations over time intervals two orders of magnitude longer than $$0\leqslant t\leqslant 2$$, it is clear that direct numerical integration of the oscillatory problem is not feasible for large values of $$\varOmega$$.

**Table 7.** Errors in $$x_{1}$$ for SAM with respect to the averaged solution for problem (7.1).

| N | $$\varOmega=8\pi$$ | $$\varOmega=16\pi$$ | $$\varOmega=32\pi$$ | $$\varOmega=64\pi$$ | $$\varOmega=128\pi$$ | $$\varOmega=256\pi$$ | $$\varOmega=512\pi$$ | $$\varOmega=1024\pi$$ |
|---|---|---|---|---|---|---|---|---|
| 1 | 4.10(-2) | 4.09(-2) | 4.08(-2) | 4.08(-2) | 4.08(-2) | 4.07(-2) | 4.07(-2) | 4.07(-2) |
| 2 | *** | 8.08(-3) | 7.89(-3) | 7.81(-3) | 7.77(-3) | 7.76(-3) | 7.75(-3) | 7.75(-3) |
| 4 | *** | *** | 1.78(-3) | 1.70(-3) | 1.67(-3) | 1.65(-3) | 1.64(-3) | 1.64(-3) |
| 8 | *** | *** | *** | 4.29(-4) | 4.08(-4) | 3.99(-4) | 3.99(-4) | 4.06(-4) |
| 16 | *** | *** | *** | *** | 1.06(-4) | 1.01(-4) | 1.04(-4) | 1.07(-4) |
| 32 | *** | *** | *** | *** | *** | 2.62(-5) | 2.50(-5) | 2.68(-5) |
| 64 | *** | *** | *** | *** | *** | *** | 6.53(-6) | 6.47(-6) |
| 128 | *** | *** | *** | *** | *** | *** | *** | 1.63(-6) |

Funding

Ministerio de Economía y Comercio (MTM2013-46553-C3-1-P to J.M.S.-S.); Ministerio de Economía, Industria y Competitividad, Spain (MTM2016-77660-P (AEI/FEDER, UE) to J.M.S.-S.); National Natural Science Foundation of China (11371357 and 11771438 to B.Z.).

Acknowledgements

B.Z. is grateful to Universidad Carlos III de Madrid for hosting the stay in Spain that made this work possible and to the Chinese Scholarship Council for providing the necessary funds. The authors are thankful to M. A. F. Sanjuán and A. Daza for bringing to their attention the vibrational resonance phenomenon, the toggle switch problem and other highly oscillatory systems with delay.

Footnotes

1. Assuming that $$\varphi$$ does not depend on $$\varOmega$$ implies no loss of generality, as the general case may be reduced to the $$\varOmega$$-independent case by introducing a new dependent variable $$x(t)-\varPhi (t)$$, where $$\varPhi (t)$$ coincides with $$\varphi (t)$$ for $$-\tau \leqslant t\leqslant 0$$.

2. For typographic reasons only, this table and the next have one column fewer than those presented before. There is nothing unexpected in the results corresponding to the omitted frequency $$\varOmega = 1024\pi$$ (or $$\varOmega = 1024\pi +2\pi$$).

References

Ariel, G., Engquist, B. & Tsai, R. (2009) A multiscale method for highly oscillatory ordinary differential equations with resonance. Math. Comput., 78, 929–956.
Boutle, I., Taylor, R. H. S. & Römer, R. A. (2007) El Niño and the delayed action oscillator. Am. J. Phys., 75, 15–24.

Calvo, M. P., Chartier, Ph., Murua, A. & Sanz-Serna, J. M. (2011a) Numerical stroboscopic averaging for ODEs and DAEs. Appl. Numer. Math., 61, 1077–1095.

Calvo, M. P., Chartier, Ph., Murua, A. & Sanz-Serna, J. M. (2011b) A stroboscopic method for highly oscillatory problems. Numerical Analysis and Multiscale Computations (B. Engquist, O. Runborg & R. Tsai, eds). New York: Springer, pp. 73–87.

Calvo, M. P. & Sanz-Serna, J. M. (2009) Instabilities and inaccuracies in the integration of highly oscillatory problems. SIAM J. Sci. Comput., 31, 1653–1677.

Calvo, M. P. & Sanz-Serna, J. M. (2010) Heterogeneous multiscale methods for mechanical systems with vibrations. SIAM J. Sci. Comput., 32, 2029–2046.

Chartier, P., Murua, A. & Sanz-Serna, J. M. (2010) Higher-order averaging, formal series and numerical integration I: B-series. Found. Comput. Math., 10, 695–727.

Chartier, P., Murua, A. & Sanz-Serna, J. M. (2012) Higher-order averaging, formal series and numerical integration II: the quasi-periodic case. Found. Comput. Math., 12, 471–508.

Chartier, P., Murua, A. & Sanz-Serna, J. M. (2015) Higher-order averaging, formal series and numerical integration III: error bounds. Found. Comput. Math., 15, 591–612.

Coccolo, M., Zhu, B., Sanjuán, M. A. F. & Sanz-Serna, J. M. (2018) Bogdanov–Takens resonance in time-delayed systems. Nonlinear Dyn., 91, 1939–1947.

Daza, A., Wagemakers, A., Rajasekar, S. & Sanjuán, M. A. F. (2013a) Vibrational resonance in a time-delayed genetic toggle switch. Commun. Nonlinear Sci. Numer. Simul., 18, 411–416.

Daza, A., Wagemakers, A. & Sanjuán, M. A. F. (2013b) Strong sensitivity of the vibrational resonance induced by fractal structures. Int. J. Bifurcation Chaos, 23, 1350129.

Daza, A., Wagemakers, A. & Sanjuán, M. A. F. (2017) Wada property in systems with delay. Commun. Nonlinear Sci. Numer. Simul., 43, 220–226.

E, W. (2003) Analysis of the heterogeneous multiscale method for ordinary differential equations. Comm. Math. Sci., 1, 423–436.

E, W. & Engquist, B. (2003) The heterogeneous multiscale methods. Comm. Math. Sci., 1, 87–132.

E, W., Engquist, B., Li, X., Ren, W. & Vanden-Eijnden, E. (2007) The heterogeneous multiscale method: a review. Commun. Comput. Phys., 2, 367–450.

Engquist, B. & Tsai, Y. H. (2005) Heterogeneous multiscale methods for stiff ordinary differential equations. Math. Comput., 74, 1707–1742.

Gardner, T. S., Cantor, C. R. & Collins, J. J. (2000) Construction of a genetic toggle switch in Escherichia coli. Nature, 403, 339–342.

Hairer, E., Lubich, C. & Wanner, G. (2002) Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations. New York: Springer, pp. 471–529.

Iserles, A. & Nørsett, S. P. (2005) Efficient quadrature of highly oscillatory integrals using derivatives. Proc. R. Soc. A, 461, 1383–1399.

Jeevarathinam, C., Rajasekar, S. & Sanjuán, M. A. F. (2011) Theory and numerics of vibrational resonance in Duffing oscillators with time-delayed feedback. Phys. Rev. E, 83, 066205.

Landa, P. S. & McClintock, P. V. E. (2000) Vibrational resonance. J. Phys. A: Math. Gen., 33, L433.

Lehman, B. & Weibel, S. P. (1999) Fundamental theorems of averaging for functional differential equations. J. Diff. Eqns., 152, 160–190.

Li, J., Kevrekidis, P. G., Gear, C. W. & Kevrekidis, I. G. (2007) Deciding the nature of the coarse equation through microscopic simulations: the baby-bathwater scheme. SIAM Rev., 49, 469–487.

Sanz-Serna, J. M. (2009) Modulated Fourier expansions and heterogeneous multiscale methods. IMA J. Numer. Anal., 29, 595–605.

© The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

### Journal

IMA Journal of Numerical Analysis, Oxford University Press

Published: Apr 13, 2018
