Approximate linear minimum variance filters for continuous-discrete state space models: convergence and practical adaptive algorithms

Abstract

In this article, approximate linear minimum variance (LMV) filters for continuous-discrete state space models are introduced. The filters are derived from a wide class of recursive approximations to the predictions for the first two conditional moments of the state equation between each pair of consecutive observations. The convergence of the approximate filters to the exact LMV filter is proved when the error between the predictions and their approximations decreases, regardless of the time distance between observations. As a particular instance, the order-$$\beta$$ local linearization filters are presented and expounded in detail. Practical adaptive algorithms are also provided and their performance in simulation is illustrated with various examples. The proposed filters are intended for the recurrent practical situation where a stochastic dynamical system has to be identified from a reduced number of partial and noisy observations distant in time.

1. Introduction

The estimation of unobserved states of a continuous stochastic dynamical system from noisy discrete observations is of central importance for solving diverse scientific and technological problems. The major contribution to the solution of this estimation problem is due to Kalman & Bucy (1961), who provided a sequential and computationally efficient solution to the optimal filtering and prediction problem for linear state space models with additive noise. However, the optimal estimation of non-linear state space models is still a subject of active research. Typically, the solution of optimal filtering problems involves the resolution of evolution equations for conditional probability densities, moments or modes, which in general have explicit solutions only in a few particular cases. Therefore, a variety of approximations have been developed. Examples of such approximate non-linear filters are the classical ones, such as the Extended Kalman, the Iterated Extended Kalman, the Gaussian and the Modified Gaussian filters (see, e.g., Jazwinski, 1970), and other more recent ones, such as the local linearization (LL) (see, e.g., Ozaki, 1993; Jimenez & Ozaki, 2003), the Projection (Brigo et al., 1999) and the Particle (del Moral et al., 2001) filters. In a variety of practical situations, the solution of the general optimal filtering problem is dispensable since the solution provided by a suboptimal filter is satisfactory. This is the case of signal filtering and detection problems, system stabilization, and the parameter estimation of non-linear systems, among others. Prominent examples of suboptimal filters are the linear, the quadratic and the polynomial ones, which have been widely used for the estimation of the state of both continuous–continuous (Mohler & Kolodziej, 1980; Mil’shtein & Ryashko, 1984; Phillis & Kouikoglou, 1989) and discrete–discrete (Pakshin, 1978; de Koning, 1984; Phillis & Kouikoglou, 1989; de Santis et al., 1995; Carravetta et al., 1997; Germani et al., 2005) models. In the case of continuous–discrete models, exact expressions for the linear minimum variance (LMV) filter have also been derived in Jimenez & Ozaki (2002), but they are restricted to linear models. For non-linear models, this kind of suboptimal filter has in general no exact solution since the first two conditional moments of the state equation have no explicit solution. Therefore, adequate approximations are required in this situation as well.
In this article, approximate LMV filters for non-linear continuous-discrete state space models are introduced. The filters are obtained by means of a recursive approximation to the predictions for the first two moments of the state equation between each pair of consecutive observations. It is shown that the approximate filters converge to the exact LMV filter when the error between the predictions and their approximations decreases. Based on the well-known Local Linear approximations for the state equation, the order-$$\beta$$ LL filters are presented as a particular instance. Their convergence, practical algorithms and performance in simulations are also considered in detail. The simulations show that these LL filters provide accurate and computationally efficient estimation of the states of the stochastic systems given a reduced number of noisy observations distant in time, which is a typical situation in practical control engineering. The article is organized as follows: in Section 2, basic notations and results on LMV filters, Local Linear approximations and LL filters are presented. The general class of approximate LMV filters is introduced in Section 3 and its convergence is stated. In Section 4, the order-$$\beta$$ LL filters are presented and their convergence analyzed. In the last two sections, practical algorithms for these filters and their performance in simulations are considered.

2. Notations and preliminaries

Let $$({\it{\Omega}},\mathscr{F},P)$$ be a complete probability space, and $$\{\mathscr{F}_{t},$$ $$t\geq t_{0}\}$$ be an increasing right continuous family of complete sub-$$\sigma$$-algebras of $$\mathscr{F}$$. Consider the state space model defined by the continuous state equation \begin{equation} d\mathbf{x}(t)=\mathbf{f}(t,\mathbf{x}(t))dt+\sum\limits_{i=1}^{m}\mathbf{g}_{i}(t,\mathbf{x}(t))d\mathbf{w}^{i}(t), \end{equation} (2.1) for all $$t\in \lbrack t_{0},T]$$, and the discrete observation equation \begin{equation} \mathbf{z}_{t_{k}}=\mathbf{Cx}(t_{k})+\mathbf{e}_{t_{k}}, \end{equation} (2.2) for all $$k=0,1,..,M-1$$, where $$\mathbf{f}$$, $$\mathbf{g}_{i}:[t_{0},T]\times \mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$$ are functions, $$\mathbf{w=(\mathbf{w}}^{1},\ldots,\mathbf{w}^{m}\mathbf{)}$$ is an $$m$$-dimensional $$\mathscr{F}_{t}$$-adapted standard Wiener process, $$\{\mathbf{e}_{t_{k}}:\mathbf{e}_{t_{k}}\thicksim\mathscr{N}(0,{\it{\Sigma}}_{t_{k}}),$$ $$k=0,..,M-1\}$$ is a sequence of $$r$$-dimensional i.i.d. Gaussian random vectors independent of $$\mathbf{w}$$, $${\it{\Sigma}}_{t_{k}}$$ an $$r\times r$$ positive semi-definite matrix, and $$\mathbf{C}$$ an $$r\times d$$ matrix. Here, it is assumed that the $$M$$ time instants $$t_{k}$$ define an increasing sequence $$\{t\}_{M}=\{t_{k}:t_{k}<t_{k+1}$$, $$t_{M-1}=T$$, $$k=0,1,..,M-1\}$$. Conditions for the existence and uniqueness of a strong solution of (2.1) with bounded moments are assumed. Let $$\mathbf{x}_{t/\rho }=E(\mathbf{x(}t)/Z_{\rho })$$ and $$\mathbf{Q}_{t/\rho }=E(\mathbf{x(}t)\mathbf{x}^{\intercal }(t)/Z_{\rho })$$ be the first two conditional moments of $$\mathbf{x}$$ with $$\rho \leq t$$, where $$E(\cdot )$$ denotes the expected value of random variables, and $$Z_{\rho }=\{\mathbf{z}_{t_{k}}:$$ $$t_{k}\leq \rho ,$$ $$t_{k}\in \{t\}_{M}\}$$ is a time series with observations from (2.2).
Further, let us denote by \begin{align*} \mathbf{U}_{t/\rho }& =E((\mathbf{x(}t)-\mathbf{x}_{t/\rho })(\mathbf{x(}t)-\mathbf{x}_{t/\rho })^{\intercal }/Z_{\rho }) \\ & =\mathbf{Q}_{t/\rho }-\mathbf{x}_{t/\rho }\mathbf{x}_{t/\rho }^{\intercal } \end{align*} the conditional variance of $$\mathbf{x}$$. Denote by $$\mathscr{C}_{P}^{l}(\mathbb{R}^{d},\mathbb{R})$$ the space of $$l$$ times continuously differentiable functions $$g:\mathbb{R}^{d}\rightarrow \mathbb{R}$$ for which $$g$$ and all its partial derivatives up to order $$l$$ have polynomial growth. In what follows, $$\left\vert \cdot \right\vert$$ denotes the Euclidean norm of vectors and the Frobenius norm of matrices.

2.1 Linear minimum variance filtering problem

According to Battin (1962), Schmidt (1966), Sorenson (1966) and Jazwinski (1970), the LMV filter $$\mathbf{x}_{t_{k+1}/t_{k+1}}$$ for a state space model with discrete observation equation (2.2) is defined as \begin{equation*} \mathbf{x}_{t_{k+1}/t_{k+1}}=\mathbf{x}_{t_{k+1}/t_{k}}+\mathbf{G}_{t_{k+1}}\mathbf{(\mathbf{z}}_{t_{k+1}}-\mathbf{\mathbf{C}x}_{t_{k+1}/t_{k}}\mathbf{)}, \end{equation*} where the filter gain $$\mathbf{G}_{t_{k+1}}$$ is to be determined so as to minimize the error variance \begin{equation*} E\left(\left(\mathbf{x}(t_{k+1})-\mathbf{x}_{t_{k+1}/t_{k+1}}\right)\left(\mathbf{x}(t_{k+1})-\mathbf{x}_{t_{k+1}/t_{k+1}}\right)^{\intercal}\right). \end{equation*} This yields the following definition.

Definition 2.1 The LMV filter for the state space model (2.1)–(2.2) is defined, between observations, by \begin{gather} \frac{d\mathbf{x}_{t/t}}{dt}=E(\mathbf{f}(t,\mathbf{x})/Z_{t}) \end{gather} (2.3) \begin{align} \frac{d\mathbf{U}_{t/t}}{dt} & =E(\mathbf{xf}^{\intercal}(t,\mathbf{x})/Z_{t})-\mathbf{x}_{t/t}E(\mathbf{f}^{\intercal}(t,\mathbf{x})/Z_{t})+E(\mathbf{f}(t,\mathbf{x})\mathbf{x}^{\intercal}/Z_{t}) \notag \\ &\quad -E(\mathbf{f}(t,\mathbf{x})/Z_{t})\mathbf{x}_{t/t}^{\intercal}+\sum\limits_{i=1}^{m}E(\mathbf{g}_{i}(t,\mathbf{x})\mathbf{g}_{i}^{\intercal}(t,\mathbf{x})/Z_{t}) \end{align} (2.4) for all $$t\in(t_{k},t_{k+1})$$, and by \begin{gather} \mathbf{x}_{t_{k+1}/t_{k+1}}=\mathbf{x}_{t_{k+1}/t_{k}}+\mathbf{G}_{t_{k+1}}\mathbf{\big(}\mathbf{z}_{t_{k+1}}-\mathbf{\mathbf{C}x}_{t_{k+1}/t_{k}}\mathbf{\big)} \end{gather} (2.5) \begin{gather} \mathbf{U}_{t_{k+1}/t_{k+1}}=\mathbf{U}_{t_{k+1}/t_{k}}-\mathbf{G}_{t_{k+1}}\mathbf{CU}_{t_{k+1}/t_{k}} \end{gather} (2.6) for each observation at $$t_{k+1}$$, with filter gain \begin{equation} \mathbf{G}_{t_{k+1}}=\mathbf{U}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal }\Big(\mathbf{CU}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal}+{\it{\Sigma}}_{t_{k+1}}\Big)^{-1} \end{equation} (2.7) for all $$t_{k},t_{k+1}\in\{t\}_{M}$$. The predictions $$\mathbf{x}_{t/t_{k}}$$, $$\mathbf{U}_{t/t_{k}}$$ are accomplished, respectively, via expressions (2.3)–(2.4) with initial conditions $$\mathbf{x}_{t_{k}/t_{k}}$$ and $$\mathbf{U}_{t_{k}/t_{k}}$$ for all $$t\in(t_{k},t_{k+1}]$$ and $$t_{k},t_{k+1}\in\{t\}_{M}$$.

Note that, in the continuous–discrete filtering problem, the filters $$E(\mathbf{x}(t)/Z_{t})$$ and $$E(\mathbf{x}(t)\mathbf{x}^{\intercal }(t)/Z_{t})$$ reduce to the predictions $$E(\mathbf{x}(t)/Z_{t_{k}})$$ and $$E(\mathbf{x}(t)\mathbf{x}^{\intercal }(t)/Z_{t_{k}})$$ for all $$t$$ between two consecutive observations $$t_{k}$$ and $$t_{k+1}$$, that is, for all $$t\in(t_{k},t_{k+1})$$. This is because there are no further observations between $$t_{k}$$ and $$t_{k+1}$$.
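As an illustration of the update step (2.5)–(2.7), the following is a minimal sketch in Python (NumPy). The function name and the two-dimensional numerical values are assumptions introduced here for illustration only and are not taken from the article; the same update form is reused later by the approximate filters (3.4)–(3.6) and (2.12)–(2.14), with the exact predictions replaced by their approximations.

```python
import numpy as np

def lmv_update(x_pred, U_pred, z, C, Sigma):
    """Measurement update (2.5)-(2.7): given the predictions x_{t_{k+1}/t_k} and
    U_{t_{k+1}/t_k}, the observation z_{t_{k+1}}, the observation matrix C and
    the noise covariance Sigma_{t_{k+1}}, return the filter mean, variance and gain."""
    S = C @ U_pred @ C.T + Sigma             # innovation covariance
    G = U_pred @ C.T @ np.linalg.inv(S)      # filter gain (2.7)
    x_filt = x_pred + G @ (z - C @ x_pred)   # mean update (2.5)
    U_filt = U_pred - G @ C @ U_pred         # variance update (2.6)
    return x_filt, U_filt, G

# Illustrative two-dimensional example (all numerical values are assumptions)
x_pred = np.array([1.0, 0.5])                    # x_{t_{k+1}/t_k}
U_pred = np.array([[0.2, 0.05], [0.05, 0.1]])    # U_{t_{k+1}/t_k}
C = np.array([[1.0, 0.0]])                       # only the first state is observed
Sigma = np.array([[0.01]])                       # observation noise variance
z = np.array([1.1])                              # observation z_{t_{k+1}}
x_filt, U_filt, G = lmv_update(x_pred, U_pred, z, C, Sigma)
```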
The absence of observations within $$(t_{k},t_{k+1})$$ implies that, in the above definition, $$\mathbf{x}_{t_{k+1}-\varepsilon /t_{k+1}-\varepsilon }\equiv \mathbf{x}_{t_{k+1}-\varepsilon /t_{k}}$$ for all $$\varepsilon >0$$ and so $$\mathbf{x}_{t_{k+1}-\varepsilon/t_{k+1}-\varepsilon }$$ tends to $$\mathbf{x}_{t_{k+1}/t_{k}}$$ when $$\varepsilon$$ goes to zero. Clearly, for linear state equations with additive noise, the LMV filter (2.3)–(2.7) reduces to the classical continuous-discrete Kalman filter. For linear state equations with multiplicative noise, explicit formulas for the LMV filter can be found in Jimenez & Ozaki (2002). In general, since the integro-differential equations (2.3)–(2.4) of the LMV filter have explicit solutions only for a few simple state equations, approximations to them are needed. In principle, for this type of suboptimal filter, the same conventional approximations to the general optimal minimum variance filter may be used as well, for instance, those for the solution of (2.3)–(2.4) provided by the conventional Extended Kalman, the Iterated Extended Kalman, the Gaussian, the Modified Gaussian and the LL filters. However, in all these approximations, once the data $$Z_{t_{M}}$$ are given on a time partition $$\{t\}_{M}$$, the error between the exact and the approximate predictions for the mean and variance of (2.1) at $$t_{k}$$ is completely determined by $$t_{k}-t_{k-1}$$ and cannot be reduced. Therefore, a small enough time distance between consecutive observations is typically necessary to obtain an adequate approximation to the LMV filter. Undoubtedly, this imposes undesirable restrictions on the time distance between observations that cannot be accomplished in many practical situations. Intuitively, this drawback can be overcome by adding high order terms to the polynomial approximation of the prediction equations (2.3)–(2.4) (similar to the approach considered in Basin, 2008, for continuous–continuous filtering problems) or, alternatively, with an adequate pair of indexes in the Carleman discretization approach considered in Cacace et al. (2014) and references therein. These approximate filters converge to the exact one as the order of the polynomial approximation increases or as the dimension of the involved linear system with extended states increases, respectively. Although these methods perform well in various illustrative examples, a general strategy for choosing the order of the polynomial or the value of the pair of indexes to control the mean-square stability and error of the approximations is still missing. On the other hand, the particle filter considered in del Moral et al. (2001) and Crisan & Doucet (2002) also converges to the exact filter no matter the distance between observations, but at the expense of a high computational cost. Note that this filter performs, by means of intensive simulations, an estimation of the whole probability distribution of the process $$\mathbf{x}$$, the solution of (2.1), from which the first two conditional moments of $$\mathbf{x}$$ can then be computed. Obviously, this general and reliable solution to the filtering problem is not practical when an expedited computation of the LMV filter (2.3)–(2.7) is required, which is typically demanded in many applications.
For example, the LMV filter and its approximations are a key component of the innovation method for the parameter estimation of diffusion processes from a time series of partial and noisy observations (see, e.g., Ozaki, 1994; Shoji, 1998; Nielsen et al., 2000a,b; Nielsen & Madsen, 2001; Singer, 2002; Jimenez & Ozaki, 2006). For this purpose, accurate and computationally efficient approximations to the LMV filter will certainly be useful.

2.2 Local linearization filters

A key component for constructing the LL filters is the concept of Local Linear approximation for stochastic differential equations (SDEs) considered in Jimenez & Biscay (2002), Jimenez & Ozaki (2003) and Carbonell et al. (2006). Let us consider the SDE (2.1) on the time interval $$[a,b]\subset \lbrack t_{0},T]$$, and the time discretization $$\left(\tau \right) _{h}=\left\{ \tau _{n}:n=0,1,\ldots ,N\right\}$$ of $$[a,b]$$ with maximum stepsize $$h>0$$ defined as a sequence of times that satisfy the conditions $$a=\tau _{0}<\tau _{1}<\cdots <\tau _{N}=b$$, and $$\underset{n}{\max}(\tau_{n+1}-\tau _{n})\leq h$$ for $$n=0,\ldots ,N-1$$. Further, let \begin{equation*} n_{t}=\max \{n=0,1,\ldots ,N:\tau _{n}\leq t\text{ and }\tau _{n}\in \left( \tau \right) _{h}\} \end{equation*} for all $$t\in \left[ a,b\right]$$. The time $$\tau _{n}$$ is allowed to be random in order to include the case of adaptive stepsize selection. In this situation, $$\tau _{n}$$ is assumed to be $$\mathscr{F}_{\tau_{n}}-$$ measurable for all $$n$$.

Definition 2.2 For a given time discretization $$\left( \tau \right) _{h}$$ of $$\left[ a,b\right]$$, the stochastic process $$\mathbf{y}=\{\mathbf{y}(t),$$ $$t\in \left[ a,b\right] \}$$ is called the order-$$\beta$$ $$(=1,2)$$ weak local linear (WLL) approximation of the solution of (2.1) on $$\left[ a,b\right]$$ if it is the weak solution of the piecewise linear equation \begin{equation} d\mathbf{y}(t)=(\mathbf{A}(\tau _{n_{t}})\mathbf{y}(t)+\mathbf{a}^{\beta }(t;\tau_{n_{t}}))dt+\sum\limits_{i=1}^{m}(\mathbf{B}_{i}(\tau_{n_{t}})\mathbf{y}(t)+\mathbf{b}_{i}^{\beta }(t;\tau _{n_{t}}))d\mathbf{w}^{i}(t) \end{equation} (2.8) for all $$t\in (a,b]$$ and initial value $$\mathbf{y}(a)=\mathbf{y}_{0}$$, where the matrix functions $$\mathbf{A,B}_{i}$$ are defined as \begin{equation*} \mathbf{A}(s)=\frac{\partial \mathbf{f}(s,\mathbf{y}(s))}{\partial \mathbf{y}}\quad \text{and}\quad \mathbf{B}_{i}(s)=\frac{\partial \mathbf{g}_{i}(s,\mathbf{y}(s))}{\partial \mathbf{y}}, \end{equation*} and the vector functions $$\mathbf{a}^{\beta }$$, $$\mathbf{b}_{i}^{\beta }$$ as \begin{equation*} \mathbf{a}^{\beta }(t;s)=\left\{ \begin{array}{@{}ll@{}} \mathbf{f}(s,\mathbf{y}(s))-\frac{\partial \mathbf{f}(s,\mathbf{y}(s))}{\partial \mathbf{y}}\mathbf{y}(s)+\frac{\partial \mathbf{f}(s,\mathbf{y}(s))}{\partial s}(t-s) & \text{for }\beta =1 \\ \mathbf{a}^{1}(t;s)+\frac{1}{2}\sum\limits_{j,l=1}^{d}[\mathbf{G}(s,\mathbf{y}(s))\mathbf{G}^{\intercal }(s,\mathbf{y}(s))]^{j,l}\text{ }\frac{\partial ^{2}\mathbf{f}(s,\mathbf{y}(s))}{\partial \mathbf{y}^{j}\partial \mathbf{y}^{l}}(t-s) & \text{for }\beta =2 \end{array} \right.
\end{equation*} and \begin{equation*} \mathbf{b}_{i}^{\beta }(t;s)=\left\{ \begin{array}{@{}ll@{}} \mathbf{g}_{i}(s,\mathbf{y}(s))-\frac{\partial \mathbf{g}_{i}(s,\mathbf{y}(s))}{\partial \mathbf{y}}\mathbf{y}(s)+\frac{\partial \mathbf{g}_{i}(s,\mathbf{y}(s))}{\partial s}(t-s) & \text{for }\beta =1 \\ \mathbf{b}_{i}^{1}(t;s)+\frac{1}{2}\sum\limits_{j,l=1}^{d}[\mathbf{G}(s,\mathbf{y}(s))\mathbf{G}^{\intercal }(s,\mathbf{y}(s))]^{j,l}\text{ }\frac{\partial ^{2}\mathbf{g}_{i}(s,\mathbf{y}(s))}{\partial \mathbf{y}^{j}\partial \mathbf{y}^{l}}(t-s) & \text{for }\beta =2 \end{array} \right. \end{equation*} for all $$s\leq t$$. Here, $$\mathbf{G=[g}_{1},\ldots ,\mathbf{g}_{m}]$$ is a $$d\times m$$ matrix function. Explicit formulas for the conditional mean and variance of the Local Linear approximation defined above were initially given in Jimenez & Ozaki (2002, 2003) and simplified recently in Jimenez (2015). The conventional LL filters for the model (2.1)–(2.2) are obtained in two steps (Jimenez & Ozaki, 2003): (1) by approximating the solution of the non-linear state equation (2.1) on each time subinterval $$[t_{k},t_{k+1}]$$ by the Local Linear approximation (2.8) on $$[t_{k},t_{k+1}]$$ with time discretization $$\left( \tau \right) _{h}\equiv\{t_{k},t_{k+1}\}$$ for all $$t_{k},t_{k+1}\in \{t\}_{M}$$; and (2) by the recursive application of the LMV filter (Jimenez & Ozaki, 2002) to the resulting piecewise linear continuous-discrete model. This yields the following definition.

Definition 2.3 Given a time discretization $$\left( \tau \right)_{h}\equiv\{t\}_{M}$$, the LL filter for the state space model (2.1)–(2.2) is defined, between observations, by the linear equations \begin{gather} \frac{d\mathbf{y}_{t/t}}{dt}=\mathbf{A}(\tau _{n_{t}})\mathbf{y}_{t/t}+\mathbf{a}^{\beta }(t;\tau _{n_{t}}) \end{gather} (2.9) \begin{gather} \frac{d\mathbf{P}_{t/t}}{dt}=\mathbf{A}(\tau _{n_{t}})\mathbf{P}_{t/t}+\mathbf{P}_{t/t}\mathbf{A}^{\intercal }(\tau _{n_{t}})+\sum\limits_{i=1}^{m}\mathbf{B}_{i}(\tau_{n_{t}})\mathbf{P}_{t/t}\mathbf{B}_{i}^{\intercal }(\tau_{n_{t}})+\mathscr{B}(t;\tau _{n_{t}}) \end{gather} (2.10) \begin{gather} \mathbf{V}_{t/t}=\mathbf{P}_{t/t}-\mathbf{y}_{t/t}\mathbf{y}_{t/t}^{\intercal } \end{gather} (2.11) for all $$t\in (t_{k},t_{k+1})$$, and by \begin{gather} \mathbf{y}_{t_{k+1}/t_{k+1}}=\mathbf{y}_{t_{k+1}/t_{k}}+\mathbf{K}_{t_{k+1}}\mathbf{\big(} \mathbf{z}_{t_{k+1}}-\mathbf{\mathbf{C}y}_{t_{k+1}/t_{k}}\mathbf{\big)} \end{gather} (2.12) \begin{gather} \mathbf{V}_{t_{k+1}/t_{k+1}}=\mathbf{V}_{t_{k+1}/t_{k}}-\mathbf{K}_{t_{k+1}}\mathbf{CV}_{t_{k+1}/t_{k}} \label{LLF5} \end{gather} (2.13) for each observation at $$t_{k+1}$$, with filter gain \begin{equation} \mathbf{K}_{t_{k+1}}=\mathbf{V}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal }\Big(\mathbf{CV}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal }+{\it{\Sigma}}_{t_{k+1}}\Big)^{-1} \end{equation} (2.14) for all $$t_{k},t_{k+1}\in \{t\}_{M}$$.
Here, \begin{align} \mathscr{B}(t;s)& =\mathbf{a}^{\beta }(t;s)\mathbf{y}_{t/t}^{\intercal }+\mathbf{y}_{t/t}(\mathbf{a}^{\beta }(t;s))^{\intercal } \notag \\ &\quad +\sum\limits_{i=1}^{m}\big(\mathbf{B}_{i}(s)\mathbf{y}_{t/t}(\mathbf{b}_{i}^{\beta }(t;s))^{\intercal }+\mathbf{b}_{i}^{\beta }(t;s)\mathbf{y}_{t/t}^{\intercal }\mathbf{B}_{i}^{\intercal }(s)+\mathbf{b}_{i}^{\beta }(t;s)(\mathbf{b}_{i}^{\beta }(t;s))^{\intercal }\big) \label{LLF7} \end{align} (2.15) with matrix functions $$\mathbf{A},\mathbf{B}_{i}$$ and vector functions $$\mathbf{a}^{\beta }\mathbf{,b}_{i}^{\beta }$$ defined as in the WLL approximation (2.8) but replacing $$\mathbf{y}(s)$$ by $$\mathbf{y}_{s/s}$$. The predictions $$\mathbf{y}_{t/t_{k}}$$, $$\mathbf{P}_{t/t_{k}}$$ and $$\mathbf{V}_{t/t_{k}}$$ are accomplished, respectively, via expressions (2.9)–(2.11) with initial conditions $$\mathbf{y}_{t_{k}/t_{k}}$$ and $$\mathbf{P}_{t_{k}/t_{k}}$$ for $$t\in (t_{k},t_{k+1}]$$ and $$t_{k},t_{k+1}\in\{t\}_{M}$$, and with $$\mathbf{A},\mathbf{B}_{i},\mathbf{a}^{\beta }\mathbf{,b}_{i}^{\beta }$$ also defined as in (2.8) but replacing $$\mathbf{y}(s)$$ by $$\mathbf{y}_{s/t_{k}}$$. Both the local linear approximations and the LL filters have had a number of important applications. The first ones, in addition to the filtering problems, have been used for the derivation of effective integration (Jimenez & Biscay, 2002; Carbonell et al., 2006, 2008; Jimenez & Carbonell, 2015) and inference (Shoji & Ozaki, 1997; Durham & Gallant, 2002; Singer, 2002; Hurn et al., 2007) methods for SDEs, in the estimation of distribution functions in Markov chain Monte Carlo methods (Stramer & Tweedie, 1999; Roberts & Stramer, 2001) and in the simulation of likelihood functions (Nicolau, 2002). The second ones have played a crucial role in the practical implementation of innovation estimators for the identification of continuous–discrete state space models (Ozaki, 1994; Shoji, 1998; Ozaki et al., 2000; Jimenez & Ozaki, 2006). In a variety of applications, these approximate innovation methods have shown high effectiveness and efficiency for the estimation of unobserved components and unknown parameters of SDEs given a set of discrete observations. Remarkable examples are the identification, from actual data, of neurophysiological, financial and molecular models, among others (see, e.g., Jimenez et al., 2006; Calderon et al., 2009; Chiarella et al., 2009; Date & Ponomareva, 2011; Kamerlin et al., 2011).
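To make the prediction step (2.9)–(2.11) of Definition 2.3 concrete, the following is a minimal sketch for $$\beta =1$$ and a scalar, autonomous state equation. It integrates the moment equations numerically with Runge–Kutta substeps instead of using the closed-form expressions of Jimenez (2015); the drift, diffusion coefficient, step sizes and function name below are illustrative assumptions of this sketch and are not taken from the article.

```python
import numpy as np

# Illustrative scalar model (assumptions of this sketch): drift f, diffusion g
f  = lambda x: -x**3            # f(x)
df = lambda x: -3.0 * x**2      # df/dx
g  = lambda x: 0.3 * x          # g(x)
dg = lambda x: 0.3              # dg/dx

def ll_predict(m0, P0, t0, t1, nsub=20):
    """Order-1 LL prediction (2.9)-(2.11) on (t0, t1] for a scalar, autonomous
    state equation: the coefficients A, a^1, B, b^1 are frozen at the filter
    estimate m0 = y_{t0/t0}, and the moment equations (2.9)-(2.10), with B(t;s)
    as in (2.15), are integrated with classical Runge-Kutta substeps."""
    A, B = df(m0), dg(m0)
    a, b = f(m0) - A * m0, g(m0) - B * m0      # a^1, b^1 (no explicit t-dependence here)

    def rhs(mP):                               # right-hand sides of (2.9)-(2.10)
        m, P = mP
        dm = A * m + a
        dP = 2.0 * A * P + B * P * B + 2.0 * a * m + 2.0 * B * b * m + b * b
        return np.array([dm, dP])

    y = np.array([m0, P0])
    h = (t1 - t0) / nsub
    for _ in range(nsub):                      # classical RK4 substeps
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    m1, P1 = y
    return m1, P1 - m1 * m1                    # mean and variance, as in (2.11)

# Prediction over one inter-observation interval; P0 is the second moment
# P0 = m0**2 + V0 (here V0 = 0.1). The update at t1 then proceeds as in the
# earlier sketch of (2.5)-(2.7).
m_pred, V_pred = ll_predict(m0=1.0, P0=1.0**2 + 0.1, t0=0.0, t1=0.5)
```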
3. Approximate linear minimum variance filters

Let $$\left( \tau \right) _{h}$$ be a time discretization of $$[t_{0},T]$$ such that $$\left( \tau \right) _{h}\supset \{t\}_{M}$$, and $$\mathbf{y}_{n}$$ the approximate value of $$\mathbf{x}(\tau_{n})$$ obtained from a discretization of the equation (2.1) for all $$\tau _{n}\in \left( \tau \right) _{h}$$. Let us consider the continuous time approximation $$\mathbf{y}=\{\mathbf{y}(t),$$ $$t\in \lbrack t_{0},T]:\mathbf{y}(\tau _{n})=\mathbf{y}_{n}$$ for all $$\tau _{n}\in \left( \tau \right) _{h}\}$$ of $$\mathbf{x}$$ with initial conditions \begin{equation*} E\left( \mathbf{y}(t_{0})\mid\mathscr{F} _{t_{0}}\right) =E\left( \mathbf{x}(t_{0})\mid \mathscr{F}_{t_{0}}\right) \quad \text{and} \quad E\left( \mathbf{y}(t_{0}) \mathbf{y}^{\intercal }(t_{0})\mid\mathscr{F} _{t_{0}}\right) =E\left( \mathbf{x}(t_{0})\mathbf{x}^{\intercal }(t_{0}) \mid\mathscr{F}_{t_{0}}\right)\!; \end{equation*} satisfying the bound condition \begin{equation} E\left( \underset{t_{k}\leq t\leq t_{k+1}}{\sup }\left\vert \mathbf{y}(t)\right\vert ^{2q}\mid \mathscr{F}_{t_{k}}\right) \leq L, \label{LMVF6} \end{equation} (3.1) and the weak convergence criterion \begin{equation} \underset{t_{k}\leq t\leq t_{k+1}}{\sup }\left\vert E\left( g(\mathbf{x}(t)) \mid\mathscr{F}_{t_{k}}\right) -E\left( g( \mathbf{y}(t))\mid \mathscr{F}_{t_{k}}\right) \right\vert \leq L_{k}h^{\beta } \label{LMVF7} \end{equation} (3.2) for all $$t_{k},t_{k+1}\in \{t\}_{M}$$, where $$g\in\mathscr{C}_{P}^{2(\beta+1)}(\mathbb{R}^{d},\mathbb{R})$$, $$L$$ and $$L_{k}$$ are positive constants, $$\beta \in\mathbb{N}_{+}$$, and $$q=1,2,\ldots$$. The process $$\mathbf{y}$$ defined in this way is typically called an order-$$\beta$$ approximation to $$\mathbf{x}$$ in the weak sense (Kloeden & Platen, 1995). When an order-$$\beta$$ approximation to the solution of the state equation (2.1) is chosen, the following approximate filter can be naturally defined.

Definition 3.1 Given a time discretization $$\left(\tau\right) _{h}\supset\{t\}_{M}$$, the order-$$\beta$$ LMV filter for the state space model (2.1)–(2.2) is defined, between observations, by \begin{equation} \mathbf{y}_{t/t}=E(\mathbf{y(}t)/Z_{t}) \quad \text{and} \quad \mathbf{V}_{t/t}=E(\mathbf{y(}t)\mathbf{y}^{\intercal}(t)/Z_{t})-\mathbf{y}_{t/t}\mathbf{y}_{t/t}^{\intercal} \end{equation} (3.3) for all $$t\in(t_{k},t_{k+1})$$, and by \begin{gather} \mathbf{y}_{t_{k+1}/t_{k+1}}=\mathbf{y}_{t_{k+1}/t_{k}}+\mathbf{K}_{t_{k+1}}\mathbf{(\mathbf{z}}_{t_{k+1}}-\mathbf{\mathbf{C}y}_{t_{k+1}/t_{k}}\mathbf{)}, \end{gather} (3.4) \begin{gather} \mathbf{V}_{t_{k+1}/t_{k+1}}=\mathbf{V}_{t_{k+1}/t_{k}}-\mathbf{K}_{t_{k+1}}\mathbf{CV}_{t_{k+1}/t_{k}}, \end{gather} (3.5) for each observation at $$t_{k+1}$$, with filter gain \begin{equation} \mathbf{K}_{t_{k+1}}=\mathbf{V}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal }\Big(\mathbf{CV}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal}+{\it{\Sigma}}_{t_{k+1}}\Big)^{-1} \end{equation} (3.6) for all $$t_{k},t_{k+1}\in\{t\}_{M}$$, where $$\mathbf{y}$$ is an order-$$\beta$$ approximation to the solution of (2.1) in the weak sense. The predictions $$\mathbf{y}_{t/t_{k}}=E(\mathbf{y(}t)/Z_{t_{k}})$$ and $$\mathbf{V}_{t/t_{k}}=E(\mathbf{y(}t)\mathbf{y}^{\intercal}(t)/Z_{t_{k}})-\mathbf{y}_{t/t_{k}}\mathbf{y}_{t/t_{k}}^{\intercal}$$, with initial conditions $$\mathbf{y}_{t_{k}/t_{k}}$$ and $$\mathbf{V}_{t_{k}/t_{k}}$$, are defined for all $$t\in(t_{k},t_{k+1}]$$ and $$t_{k},t_{k+1}\in\{t\}_{M}$$.

Note that the goodness of the approximation $$\mathbf{y}$$ to $$\mathbf{x}$$ is measured (in the weak sense) by the left hand side of (3.2). Thus, the inequality (3.2) gives a bound for the errors of the approximation $$\mathbf{y}$$ to $$\mathbf{x}$$, for all $$t\in \lbrack t_{k},t_{k+1}]$$ and all pairs of consecutive observations $$t_{k},t_{k+1}\in\{t\}_{M}$$.
Moreover, this inequality states the convergence (in the weak sense and with rate $$\beta$$) of the approximation $$\mathbf{y}$$ to $$\mathbf{x}$$ as the maximum stepsize $$h$$ of the time discretization $$(\tau )_{h}\supset \{t\}_{M}$$ goes to zero. Clearly this includes, as a particular case, the convergence of the first two conditional moments of $$\mathbf{y}$$ to those of $$\mathbf{x}$$. Since the approximate filter in Definition 3.1 is designed in terms of the first two conditional moments of the approximation $$\mathbf{y}$$, the weak convergence of $$\mathbf{y}$$ to $$\mathbf{x}$$ should imply the convergence of the approximate filter to the exact one. The next result deals with this matter.

Theorem 3.1 Let $$\mathbf{x}_{t/\rho }$$ and $$\mathbf{U}_{t/\rho }$$ be the conditional mean and variance corresponding to the LMV filter (2.3)–(2.7) for the model (2.1)–(2.2), and $$\mathbf{y}_{t/\rho }$$ and $$\mathbf{V}_{t/\rho }$$ their respective approximations given by the order-$$\beta$$ LMV filter (3.3)–(3.6). Then, between observations, the filters satisfy \begin{equation} \left\vert \mathbf{x}_{t/t}-\mathbf{y}_{t/t}\right\vert \leq K_{1}h^{\beta } \quad \text{and} \quad \left\vert \mathbf{U}_{t/t}-\mathbf{V}_{t/t}\right\vert \leq K_{1}h^{\beta } \end{equation} (3.7) for all $$t\in (t_{k},t_{k+1})$$ and, at each observation $$t_{k+1}$$, \begin{equation} \left\vert \mathbf{x}_{t_{k+1}/t_{k+1}}-\mathbf{y}_{t_{k+1}/t_{k+1}}\right\vert \leq K_{2}h^{\beta } \quad \text{and} \quad \left\vert \mathbf{U}_{t_{k+1}/t_{k+1}}-\mathbf{V}_{t_{k+1}/t_{k+1}}\right\vert \leq K_{2}h^{\beta } \end{equation} (3.8) for all $$t_{k},t_{k+1}\in \{t\}_{M}$$. For the predictions, \begin{equation} \left\vert \mathbf{x}_{t/t_{k}}-\mathbf{y}_{t/t_{k}}\right\vert \leq K_{1}h^{\beta } \quad \text{and} \quad \left\vert \mathbf{U}_{t/t_{k}}-\mathbf{V}_{t/t_{k}}\right\vert \leq K_{1}h^{\beta } \end{equation} (3.9) hold for all $$t\in (t_{k},t_{k+1}]$$ and $$t_{k},t_{k+1}\in\{t\}_{M}$$, where $$K_{1}$$ and $$K_{2}$$ are positive constants.

Proof. Let us start by proving inequalities (3.9) and (3.7). For the functions $$g(\mathbf{x}(t))=\mathbf{x}^{i}(t)$$ and $$g(\mathbf{x}(t))=\mathbf{x}^{i}(t)\mathbf{x}^{j}(t)$$ belonging to the function space $$\mathscr{C}_{P}^{2(\beta +1)}(\mathbb{R}^{d},\mathbb{R})$$, for all $$i,j=1,\ldots,d$$, condition (3.2) directly implies that \begin{equation} \left\vert \mathbf{x}_{t/t_{k}}-\mathbf{y}_{t/t_{k}}\right\vert \leq \sqrt{d}L_{k}h^{\beta } \end{equation} (3.10) and \begin{equation*} \left\vert \mathbf{Q}_{t/t_{k}}-\mathbf{P}_{t/t_{k}}\right\vert \leq dL_{k}h^{\beta } \end{equation*} for all $$t\in (t_{k},t_{k+1}]$$, where $$\mathbf{P}_{t/t_{k}}=E(\mathbf{y(}t)\mathbf{y}^{\intercal }(t)/Z_{t_{k}})$$. Since the solution of (2.1) has bounded moments, there exists a positive constant $${\it{\Lambda}}$$ such that $$\left\vert \mathbf{x}_{t/t_{k}}\right\vert\leq {\it{\Lambda}}$$ for all $$t\in \lbrack t_{k},t_{k+1}]$$. Condition (3.1) implies that $$\left\vert\mathbf{y}_{t/t_{k}}\right\vert \leq L$$ for all $$t\in \lbrack t_{k},t_{k+1}]$$. From the formula of the variance in terms of the first two moments, it follows that \begin{equation*} \left\vert \mathbf{U}_{t/t_{k}}-\mathbf{V}_{t/t_{k}}\right\vert \leq \left\vert \mathbf{Q}_{t/t_{k}}-\mathbf{P}_{t/t_{k}}\right\vert +\left\vert \mathbf{x}_{t/t_{k}}\mathbf{x}_{t/t_{k}}^{\intercal }-\mathbf{y}_{t/t_{k}}\mathbf{y}_{t/t_{k}}^{\intercal }\right\vert \!.
\end{equation*} Since \begin{align*} \left\vert \mathbf{x}_{t/t_{k}}\mathbf{x}_{t/t_{k}}^{\intercal }-\mathbf{y}_{t/t_{k}}\mathbf{y}_{t/t_{k}}^{\intercal }\right\vert & =\left\vert \mathbf{x}_{t/t_{k}}\mathbf{x}_{t/t_{k}}^{\intercal }-\mathbf{x}_{t/t_{k}}\mathbf{y}_{t/t_{k}}^{\intercal}+\mathbf{x}_{t/t_{k}}\mathbf{y}_{t/t_{k}}^{\intercal}-\mathbf{y}_{t/t_{k}}\mathbf{y}_{t/t_{k}}^{\intercal }\right\vert \\ & \leq \left\vert \mathbf{x}_{t/t_{k}}(\mathbf{x}_{t/t_{k}}^{\intercal }-\mathbf{y}_{t/t_{k}}^{\intercal })\right\vert +\left\vert (\mathbf{x}_{t/t_{k}}-\mathbf{y}_{t/t_{k}})\mathbf{y}_{t/t_{k}}^{\intercal}\right\vert \\ & \leq (\left\vert \mathbf{x}_{t/t_{k}}\right\vert +\left\vert \mathbf{y}_{t/t_{k}}\right\vert )\left\vert \mathbf{x}_{t/t_{k}}-\mathbf{y}_{t/t_{k}}\right\vert \!, \end{align*} it follows that \begin{gather} \left\vert \mathbf{U}_{t/t_{k}}-\mathbf{V}_{t/t_{k}}\right\vert \leq \alpha _{k}h^{\beta } \end{gather} (3.11) for all $$t\in (t_{k},t_{k+1}]$$, where $$\alpha _{k}=(\sqrt{d}+L+{\it{\Lambda}})\sqrt{d}L_{k}$$. Hence, inequalities (3.9) are obtained from (3.10) and (3.11) with $$K_{1}=\underset{k}{\max }\{\alpha _{k}\}$$. Inequalities (3.7) can be derived in the same way. The remaining inequalities are obtained as follows. From (2.5) and (3.4), it is obtained that \begin{align*} \left\vert \mathbf{x}_{t_{k+1}/t_{k+1}}-\mathbf{y}_{t_{k+1}/t_{k+1}}\right\vert & \leq \left\vert \mathbf{x}_{t_{k+1}/t_{k}}-\mathbf{y}_{t_{k+1}/t_{k}}\right\vert \\ &\quad +\left\vert \mathbf{G}_{t_{k+1}}\mathbf{\big(}\mathbf{z}_{t_{k+1}}-\mathbf{\mathbf{C}x}_{t_{k+1}/t_{k}}\mathbf{\big)}-{\bf K}_{t_{k+1}}\mathbf{\big(}\mathbf{z}_{t_{k+1}}-\mathbf{\mathbf{C}y}_{t_{k+1}/t_{k}}\mathbf{\big)}\right\vert \\ & \leq \left(1+\left\vert \mathbf{G}_{t_{k+1}}\mathbf{\mathbf{C}}\right\vert \right)\left\vert \mathbf{x}_{t_{k+1}/t_{k}}-\mathbf{y}_{t_{k+1}/t_{k}}\right\vert \\ &\quad +\left(\left\vert \mathbf{\mathbf{z}}_{t_{k+1}}\right\vert +\left\vert \mathbf{\mathbf{C}y}_{t_{k+1}/t_{k}}\right\vert \right)\left\vert \mathbf{G}_{t_{k+1}}-\mathbf{K}_{t_{k+1}}\right\vert \!. \end{align*} From (2.6) and (3.5), \begin{align*} \left\vert \mathbf{U}_{t_{k+1}/t_{k+1}}-\mathbf{V}_{t_{k+1}/t_{k+1}}\right\vert & \leq \left\vert \mathbf{U}_{t_{k+1}/t_{k}}-\mathbf{V}_{t_{k+1}/t_{k}}\right\vert +\left\vert \mathbf{G}_{t_{k+1}}\mathbf{CU}_{t_{k+1}/t_{k}}-\mathbf{K}_{t_{k+1}}\mathbf{CV}_{t_{k+1}/t_{k}}\right\vert \\ & \leq \left(1+\left\vert \mathbf{G}_{t_{k+1}}\mathbf{C}\right\vert \right)\left\vert \mathbf{U}_{t_{k+1}/t_{k}}-\mathbf{V}_{t_{k+1}/t_{k}}\right\vert +\left\vert \mathbf{CV}_{t_{k+1}/t_{k}}\right\vert \left\vert \mathbf{G}_{t_{k+1}}-\mathbf{K}_{t_{k+1}}\right\vert \!.
\end{align*} By rewriting (3.6) and (2.7) as \begin{equation*} \mathbf{K}_{t_{k+1}}\Big(\mathbf{CV}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal }+{\it{\Sigma}} _{t_{k+1}}\Big)=\mathbf{V}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal } \end{equation*} and \begin{equation*} \mathbf{G}_{t_{k+1}}\Big(\mathbf{CU}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal }+{\it{\Sigma}} _{t_{k+1}}\Big)-\mathbf{G}_{t_{k+1}}\Big(\mathbf{CV}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal }+{\it{\Sigma}} _{t_{k+1}}\Big)+\mathbf{G}_{t_{k+1}}\Big(\mathbf{CV}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal }+{\it{\Sigma}} _{t_{k+1}}\Big)=\mathbf{U}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal }, \end{equation*} and subtracting the first expression from the second one, it follows that \begin{equation*} \left(\mathbf{G}_{t_{k+1}}-\mathbf{K}_{t_{k+1}}\right)\Big(\mathbf{CV}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal }+{\it{\Sigma}} _{t_{k+1}}\Big)=\mathbf{G}_{t_{k+1}}\mathbf{C\Big(}\mathbf{\mathbf{V}}_{t_{k+1}/t_{k}}-\mathbf{U}_{t_{k+1}/t_{k}}\mathbf{\Big)}{\bf C}^{\intercal }+\mathbf{\Big(}{\bf U}_{t_{k+1}/t_{k}}-\mathbf{\mathbf{V}}_{t_{k+1}/t_{k}}\mathbf{\Big)}{\bf C}^{\intercal }. \end{equation*} Thus, \begin{equation*} \Big(\mathbf{G}_{t_{k+1}}-\mathbf{K}_{t_{k+1}}\Big)=\Big(\mathbf{I-G}_{t_{k+1}}\mathbf{C\Big)\Big(}{\bf U}_{t_{k+1}/t_{k}}-\mathbf{\mathbf{V}}_{t_{k+1}/t_{k}}\mathbf{\Big)}{\bf C}^{\intercal }\Big(\mathbf{CV}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal }+{\it{\Sigma}} _{t_{k+1}}\Big)^{-1} \end{equation*} and \begin{equation*} \left\vert \mathbf{G}_{t_{k+1}}-\mathbf{K}_{t_{k+1}}\right\vert \leq \left\vert \big(\mathbf{I-G}_{t_{k+1}}\mathbf{C\big)}\right\vert \left\vert \mathbf{C}^{\intercal }\big(\mathbf{CV}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal }+{\it{\Sigma}} _{t_{k+1}}\big)^{-1}\right\vert \left\vert \mathbf{U}_{t_{k+1}/t_{k}}-\mathbf{\mathbf{V}}_{t_{k+1}/t_{k}}\right\vert \!. \end{equation*} From the above inequalities, the bounds provided by (3.10) and (3.11) at $$t=t_{k+1}$$, and taking into account that $$\left\vert\mathbf{V}_{t_{k+1}/t_{k}}\right\vert$$, $$\left\vert \mathbf{G}_{t_{k+1}}\right\vert$$, $$\left\vert {\it{\Sigma}}_{t_{k+1}} \right\vert$$ and $$\left\vert \mathbf{C}\right\vert$$ are also bounded, it is obtained that \begin{equation*} \left\vert \mathbf{x}_{t_{k+1}/t_{k+1}}-\mathbf{y}_{t_{k+1}/t_{k+1}}\right\vert \leq \beta _{k}h^{\beta }\ \quad \text{and} \quad \left\vert \mathbf{U}_{t_{k+1}/t_{k+1}}-\mathbf{V}_{t_{k+1}/t_{k+1}}\right\vert \leq \beta _{k}h^{\beta }, \end{equation*} where $$\beta _{k}$$ is a positive constant. This implies (3.8) with $$K_{2}=\underset{k}{\max }\{\beta _{k}\}$$. □

Theorem 3.1 states that, given a set of $$M$$ partial and noisy observations of the states $$\mathbf{x}$$ on $$\{t\}_{M}$$, the approximate LMV filter of Definition 3.1 converges with rate $$\beta$$ to the exact LMV filter of Definition 2.1 as $$h$$ goes to zero, where $$h$$ is the maximum stepsize of the time discretization $$(\tau )_{h}\supset \{t\}_{M}$$ on which the approximation $$\mathbf{y}$$ to $$\mathbf{x}$$ is defined. This means that the approximate filter inherits the convergence rate of the approximation employed for its design. Note that the convergence results of Theorem 3.1 can easily be extended to noisy observations of any realization of $$\mathbf{x}$$ just by taking the expected value in the inequalities (3.7)–(3.9) with respect to the measure on the underlying probability space generating the realizations of the model (2.1)–(2.2).
Further note that in both Definition 3.1 and Theorem 3.1 no restriction on the time partition $$\{t\}_{M}$$ for the data has been assumed. Thus, there are no specific constraints on the time distance between two consecutive observations, which allows the application of the approximate filter in a variety of practical problems (see, e.g., Riera et al., 2007; Hu et al., 2012) with a reduced number of observations that are not close in time, with sequential random measurements, or with multiple missing data. Nor are there restrictions on the time discretization $$(\tau )_{h}\supset \{t\}_{M}$$ on which the approximate filter is defined. Thus, $$(\tau )_{h}$$ can be set by the user by taking into account some specifications or previous knowledge on the filtering problem under consideration, or automatically designed by an adaptive strategy, as will be shown in the section concerning the numerical simulations. The order-$$\beta$$ LMV filter of Definition 3.1 has been proposed for models with linear observation equation. However, by following the procedure proposed in Jimenez & Ozaki (2006), it can easily be applied as well to models with a non-linear observation equation. To illustrate this, let us consider the state space model defined by the continuous state equation (2.1) and the discrete observation equation \begin{equation} \mathbf{z}_{t_{k}}=\mathbf{h}(t_{k},\mathbf{x}(t_{k}))+\mathbf{e}_{t_{k}}, \text{ for }k=0,1,..,M-1, \end{equation} (3.12) where $$\mathbf{e}_{t_{k}}$$ is defined as in (2.2) and $$\mathbf{h}:$$ $$\mathbb{R}\times \mathbb{R}^{d}\rightarrow \mathbb{R}^{r}$$ is a twice differentiable function. By using the Ito formula, \begin{align*} d\mathbf{h}^{j}& =\left\{\frac{\partial \mathbf{h}^{j}}{\partial t}+\sum\limits_{k=1}^{d}f^{k}\frac{\partial \mathbf{h}^{j}}{\partial \mathbf{x}^{k}}+\frac{1}{2}\sum\limits_{s=1}^{m}\sum\limits_{k,l=1}^{d}\mathbf{g}_{s}^{l}\mathbf{g}_{s}^{k}\frac{\partial ^{2}\mathbf{h}^{j}}{\partial \mathbf{x}^{l}\partial \mathbf{x}^{k}}\right\}dt+\sum\limits_{s=1}^{m}\sum\limits_{l=1}^{d}\mathbf{g}_{s}^{l}\frac{\partial \mathbf{h}^{j}}{\partial \mathbf{x}^{l}}d\mathbf{w}^{s} \\ & =\mathbf{\rho }^{j}dt+\sum\limits_{s=1}^{m}\mathbf{\sigma }_{s}^{j}d\mathbf{w}^{s} \end{align*} with $$j=1,..,r$$. Hence, the state space model (2.1) and (3.12) is transformed to the following higher-dimensional state space model with linear observation equation \begin{eqnarray*} d\mathbf{v}(t) &=&\mathbf{a}(t,\mathbf{v}(t))dt+\sum\limits_{i=1}^{m}\mathbf{b}_{i}(t,\mathbf{v}(t))d\mathbf{w}^{i}(t), \\ \mathbf{z}_{t_{k}} &=&\mathbf{Cv}(t_{k})+\mathbf{e}_{t_{k}},\text{ for }k=0,1,..,M-1, \end{eqnarray*} where \begin{equation*} \mathbf{v}=\left[ \begin{array}{@{}l@{}} \mathbf{x} \\ \mathbf{h} \end{array} \right]\!,\text{ }\mathbf{a}=\left[ \begin{array}{@{}l@{}} \mathbf{f} \\ \mathbf{\rho } \end{array} \right]\!,\text{ }\mathbf{b}_{i}=\left[ \begin{array}{@{}l@{}} \mathbf{g}_{i} \\ \mathbf{\sigma }_{i} \end{array} \right] \end{equation*} and the matrix $$\mathbf{C}$$ is such that $$\mathbf{h}(t_{k},\mathbf{x}(t_{k}))=\mathbf{Cv}(t_{k})$$. In this way, the state space model (2.1) and (3.12) is transformed to the form of the state space model (2.1)–(2.2), and so the order-$$\beta$$ LMV filter of Definition 3.1 and the convergence result of Theorem 3.1 can be applied.
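For instance, in the scalar case $$d=m=r=1$$ with the observation function $$\mathbf{h}(t_{k},x)=x^{2}$$ (an illustrative choice introduced here for clarity), the Ito formula gives \begin{equation*} \mathbf{\rho }=2x\mathbf{f}(t,x)+\mathbf{g}_{1}^{2}(t,x)\quad \text{and}\quad \mathbf{\sigma }_{1}=2x\mathbf{g}_{1}(t,x), \end{equation*} so that $$\mathbf{v}=(x,x^{2})^{\intercal }$$ and $$\mathbf{C}=[0\text{ \ }1]$$.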
To conclude this section, it is important to remark the following. In practice, since the exact value of $$E\left(g(\mathbf{y}(t))\mid \mathscr{F}_{t_{k}}\right)$$ in (3.2) is in general unknown (with $$t\in (t_{k},t_{k+1}]$$ and $$g\in \mathscr{C}_{P}^{l}(\mathbb{R}^{d},\mathbb{R}))$$, this value is usually computed via Monte Carlo simulations as the average of $$S$$ realizations $$g(\mathbf{y}^{\{s\}}(t))$$ of the random variable $$g(\mathbf{y}(t)\mathbf{)}$$, with $$s=1,..,S$$. In this case, the error $$e$$ in the computation of $$E\left( g(\mathbf{x}(t))\mid\mathscr{F}_{t_{k}}\right)$$ naturally decomposes into two terms: the time discretization error and the statistical error (see, e.g., Kloeden & Platen, 1995). That is, \begin{equation*} e=\left\vert E\left( g(\mathbf{x}(t))\mid \mathscr{F}_{t_{k}}\right) -\frac{1}{S}\sum\limits_{s=1}^{S}g(\mathbf{y}^{\{s\}}(t))\right\vert \leq e_{d}+e_{s} \end{equation*} where the time discretization error $$e_{d}=\left\vert E\left( g(\mathbf{x}(t))\mid\mathscr{F}_{t_{k}}\right) -E\left( g(\mathbf{y}(t))\mid \mathscr{F}_{t_{k}}\right)\right\vert$$ satisfies (3.2), and the statistical error $$e_{s}=\left\vert E\left( g(\mathbf{y}(t))\mid\mathscr{F}_{t_{k}}\right) -\frac{1}{S}\sum\limits_{s=1}^{S}g(\mathbf{y}^{\{s\}}(t))\right\vert$$ is a random variable, asymptotically Gaussian with zero mean and standard deviation proportional to $$1/\sqrt{S}$$. Thus, if we set $$S=1/h^{2\beta }$$, the statistical error will be proportional to $$h^{\beta }$$ as well. In this way, if the exact moments of $$\mathbf{y}$$ in Definition 3.1 are replaced by their corresponding sample averages, then the resulting approximate LMV filter will also converge to the exact LMV filter with rate $$\beta$$ as $$h$$ goes to zero. Note that this holds for any order-$$\beta$$ weak approximation $$\mathbf{y}$$ defined on any time discretization $$(\tau )_{h}\supset \{t\}_{M}$$. This is a remarkable result compared with that obtained in del Moral et al. (2001), in which the convergence was only stated for the approximate filter defined by the Euler scheme with uniform time discretization $$(\tau)_{h\propto 1/\sqrt{S}}$$ as $$S$$ goes to infinity. A key element in the proof given in del Moral et al. (2001) is the known convergence in distribution of the Euler method when $$h$$ decreases, which is not available for other numerical schemes in general. On the contrary, the weak convergence criterion (3.2) used in this article for the same purpose is usually known for any numerical integrator of SDEs on non-uniform time discretizations. Further note that, unlike the particle filter, the proposed filter class does not involve an algorithm for sampling the posterior distribution corresponding to the considered filtering problem.
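As an illustration of this Monte Carlo computation, the following is a minimal sketch in Python (NumPy) of the predictions $$\mathbf{y}_{t_{k+1}/t_{k}}$$ and $$\mathbf{V}_{t_{k+1}/t_{k}}$$ obtained with the weak order-1 Euler scheme and $$S=1/h^{2\beta }$$ simulated paths. The scalar model, the Gaussian resampling of the initial filter state and all numerical values are assumptions of this sketch, not prescriptions of the article.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: -x**3          # illustrative drift (assumption of this sketch)
g = lambda x: 0.3 * x        # illustrative diffusion coefficient

def mc_prediction(m0, V0, t0, t1, h, beta=1):
    """Monte Carlo approximation of the predictions y_{t1/t0} and V_{t1/t0} of
    Definition 3.1 with the weak order-1 Euler scheme on a uniform discretization
    of stepsize <= h and S = h^(-2*beta) paths, so that the statistical error is
    of the same order h^beta as the discretization error."""
    S = int(np.ceil(h ** (-2 * beta)))    # number of simulated paths
    n = int(np.ceil((t1 - t0) / h))       # number of Euler substeps
    dt = (t1 - t0) / n
    # Extra assumption of this sketch: sample the filter state at t0 from a
    # Gaussian with the available first two moments (m0, V0).
    y = m0 + np.sqrt(V0) * rng.standard_normal(S)
    for _ in range(n):                    # Euler-Maruyama steps
        dw = np.sqrt(dt) * rng.standard_normal(S)
        y = y + f(y) * dt + g(y) * dw
    return y.mean(), y.var()              # sample mean and variance

m_pred, V_pred = mc_prediction(m0=1.0, V0=0.1, t0=0.0, t1=0.5, h=0.05)
```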
4. Order-$$\boldsymbol{\beta}$$ local linearization filters

In principle, according to Theorem 3.1, any kind of approximation $$\mathbf{y}$$ converging to $$\mathbf{x}$$ in a weak sense can be used to construct approximate LMV filters (e.g., those in Kloeden & Platen, 1995). Therefore, additional selection criteria can be taken into account for this purpose, for instance, a high order of convergence, an efficient algorithm for the computation of the moments, and so on. In this article, we selected the local linear approximation (2.8) for the following reasons: (1) its first two conditional moments have simple explicit formulas that can be computed by means of efficient algorithms—including for high dimensional state equations—(Jimenez & Carbonell, 2015); (2) its first two conditional moments are exact for linear state equations in all the possible variants—with additive and/or multiplicative noise, autonomous or not—(Jimenez & Ozaki, 2002) and, consequently, (3) it preserves the mean-square stability or instability that the solution of the mentioned linear equation may have; (4) it has an adequate order of weak convergence for state equations with additive noise (Carbonell et al., 2006); and (5) the conventional LL filters have shown high effectiveness for the identification of complex non-linear models in a variety of applications (see, e.g., Jimenez et al., 2006; Riera et al., 2007; Calderon et al., 2009; Chiarella et al., 2009). Once the order-$$\beta$$ WLL approximation (2.8) is chosen for approximating the solution of the state equation (2.1), the order-$$\beta$$ LL filter for the state space model (2.1)–(2.2) is then defined. More precisely,

Definition 4.1 Given a time discretization $$\left( \tau \right) _{h}\supset\{t\}_{M}$$, the order-$$\beta$$ $$(=1,2)$$ LL filter for the state space model (2.1)–(2.2) is defined by Definition 3.1 with the first two conditional moments of the order-$$\beta$$ $$(=1,2)$$ WLL approximation (2.8).

According to Theorem 3.1, this approximate filter will inherit the order of convergence of the WLL approximation (2.8). As mentioned before, the weak convergence rate of that approximation was previously established in Carbonell et al. (2006) for SDEs with additive noise. For equations with multiplicative noise, this subject is considered now. In the remainder of this section, a first lemma provides uniform bounds for the moments of the WLL approximation, whereas a second lemma compares the first terms of the Itô–Taylor expansion of the solution of (2.1) with those of the WLL approximation. Both results are then used in a first theorem to derive the weak convergence rate of the WLL approximation to the solution of (2.1). Based on this, a second theorem states the convergence rate of the approximate order-$$\beta$$ LL filter to the exact LMV filter.
Lemma 4.1 Suppose that the drift and diffusion coefficients of the SDE (2.1) satisfy the following conditions \begin{gather} \mathbf{f}^{k},\mathbf{g}_{i}^{k}\in \mathscr{C}_{P}^{2(\beta +1)}([a,b]\times \mathbb{R}^{d},\mathbb{R}) \end{gather} (4.1) \begin{gather} \left\vert \mathbf{f}(s,\mathbf{u})\right\vert +\sum\limits_{i=1}^{m}(\left\vert \mathbf{g}_{i}(s,\mathbf{u})\right\vert +\sum\limits_{k,l=1}^{d}\left\vert \mathbf{g}_{i}^{k}(s,\mathbf{u})\mathbf{g}_{i}^{l}(s,\mathbf{u})\right\vert \delta _{\beta }^{2})\leq K(1+\left\vert \mathbf{u}\right\vert ), \end{gather} (4.2) \begin{gather} \left\vert \frac{\partial \mathbf{f}(s,\mathbf{u})}{\partial t}\right\vert +\left\vert \frac{\partial \mathbf{f}(s,\mathbf{u})}{\partial \mathbf{x}}\right\vert +\left\vert \frac{\partial ^{2}\mathbf{f}(s,\mathbf{u})}{\partial \mathbf{x}^{2}}\right\vert \delta _{\beta }^{2}\leq K \end{gather} (4.3) and \begin{equation} \left\vert \frac{\partial \mathbf{g}_{i}(s,\mathbf{u})}{\partial t}\right\vert +\left\vert \frac{\partial \mathbf{g}_{i}(s,\mathbf{u})}{\partial \mathbf{x}}\right\vert +\left\vert \frac{\partial ^{2}\mathbf{g}_{i}(s,\mathbf{u})}{\partial \mathbf{x}^{2}}\right\vert \delta _{\beta }^{2}\leq K \end{equation} (4.4) for all $$s\in \lbrack a,b]$$, $$\mathbf{u}\in \mathbb{R}^{d}$$, and $$i=1,..,m$$, where $$K$$ is a positive constant. Then the order-$$\beta$$ WLL approximation (2.8) satisfies \begin{equation} E\left( \underset{a\leq t\leq b}{\sup }\left\vert \mathbf{y}(t)\right\vert ^{2q}\mid \mathscr{F}_{a}\right) \leq C(1+\left\vert \mathbf{y}(a)\right\vert ^{2q}) \end{equation} (4.5) for each $$q=1,2,\ldots ,$$ where $$C$$ is a positive constant.

Proof. Let us denote the drift and diffusion coefficients of the SDE (2.8) by \begin{equation*} \mathbf{p}(t,\mathbf{y}(t)\mathbf{;}\tau _{n_{t}})=\mathbf{A}(\tau _{n_{t}})\mathbf{y}(t)+\mathbf{a}^{\beta }(t;\tau _{n_{t}}) \end{equation*} and \begin{equation*} \mathbf{q}_{i}(t,\mathbf{y}(t)\mathbf{;}\tau _{n_{t}})=\mathbf{B}_{i}(\tau _{n_{t}})\mathbf{y}(t)+\mathbf{b}_{i}^{\beta }(t;\tau _{n_{t}}), \end{equation*} respectively. For each $$q$$, the Ito formula applied to $$\left\vert \mathbf{y(}t\mathbf{)}\right\vert ^{2q}$$ implies that \begin{align} \left\vert \mathbf{y(}t\mathbf{)}\right\vert ^{2q}& =\left\vert \mathbf{y(}\tau _{n_{t}}\mathbf{)}\right\vert ^{2q}+\int\limits_{\tau _{n_{t}}}^{t}2q\left\vert \mathbf{y}(s)\right\vert ^{2q-2}\mathbf{y}^{\intercal }(s)\mathbf{p}(s,\mathbf{y}(s);\tau _{n_{t}})ds \notag \\ &\quad +\sum\limits_{i=1}^{m}\int\limits_{\tau _{n_{t}}}^{t}2q\left\vert \mathbf{y}(s)\right\vert ^{2q-2}\mathbf{y}^{\intercal }(s)\mathbf{q}_{i}(s,\mathbf{y}(s)\mathbf{;}\tau _{n_{t}})d\mathbf{w}^{i}(s) \notag \\ & \quad+\sum\limits_{i=1}^{m}\int\limits_{\tau _{n_{t}}}^{t}q\left\vert \mathbf{y}(s)\right\vert ^{2q-2}\left\vert \mathbf{q}_{i}(s,\mathbf{y}(s)\mathbf{;}\tau _{n_{t}})\right\vert ^{2}ds \notag \\ &\quad +\sum\limits_{i=1}^{m}\int\limits_{\tau _{n_{t}}}^{t}2q(q-1)\left\vert \mathbf{y}(s)\right\vert ^{2q-4}\left\vert \mathbf{y}^{\intercal }(s)\mathbf{q}_{i}(s,\mathbf{y}(s)\mathbf{;}\tau _{n_{t}})\right\vert ^{2}ds \label{Power} \end{align} (4.6) for all $$t\in\lbrack\tau_{n_{t}},\tau_{n_{t}+1}]$$. Note that the above expression (4.6) also holds for all $$t\in \lbrack \tau _{n},\tau _{n+1}]$$ and $$\tau _{n},\tau _{n+1}\in\left( \tau \right) _{h}$$.
Thus, by the recursive evaluation of (4.6) at the last point $$\tau _{n+1}$$ of the time interval $$[\tau _{n},\tau _{n+1}]$$ for all $$n=0,..,n_{t}-1$$, and substituting each resulting expression in (4.6), it is obtained that \begin{align*} \left\vert \mathbf{y(}t\mathbf{)}\right\vert ^{2q}& =\left\vert \mathbf{y}(a)\right\vert ^{2q}+\int\limits_{a}^{t}2q\left\vert \mathbf{y}(s)\right\vert ^{2q-2}\mathbf{y}^{\intercal }(s)\mathbf{p}(s,\mathbf{y}(s);\tau _{n_{s}})ds \\ & \quad+\sum\limits_{i=1}^{m}\int\limits_{a}^{t}2q\left\vert \mathbf{y}(s)\right\vert ^{2q-2}\mathbf{y}^{\intercal }(s)\mathbf{q}_{i}(s,\mathbf{y}(s)\mathbf{;}\tau _{n_{s}})d\mathbf{w}^{i}(s) \\ &\quad +\sum\limits_{i=1}^{m}\int\limits_{a}^{t}q\left\vert \mathbf{y}(s)\right\vert ^{2q-2}\left\vert \mathbf{q}_{i}(s,\mathbf{y}(s)\mathbf{;}\tau _{n_{s}})\right\vert ^{2}ds \\ &\quad +\sum\limits_{i=1}^{m}\int\limits_{a}^{t}2q(q-1)\left\vert \mathbf{y}(s)\right\vert ^{2q-4}\left\vert \mathbf{y}^{\intercal }(s)\mathbf{q}_{i}(s,\mathbf{y}(s)\mathbf{;}\tau _{n_{s}})\right\vert ^{2}ds \end{align*} for all $$t\in \lbrack a,b]$$. Theorem 4.5.4 in Kloeden & Platen (1995) implies that $$E\left(\left\vert \mathbf{y}(t)\right\vert ^{2q}\right) <\infty$$ for $$a\leq t\leq b$$. Hence, the function $$\mathbf{r}$$ defined as $$\mathbf{r}(t)=\mathbf{0}$$ for $$0\leq t<a$$ and as $$\mathbf{r}(t)=\left\vert \mathbf{y}(t)\right\vert ^{2q-2}\mathbf{y}^{\intercal}(t)\mathbf{q}_{i}(t,\mathbf{y}(t)\mathbf{;}\tau _{n_{t}})$$ for $$a\leq t\leq b$$ belongs to the class $$\mathscr{L}_{b}^{2}$$ of $$\mathscr{L}\times \mathscr{F}$$-measurable functions. Then, Lemma 3.2.2 in Kloeden & Platen (1995) implies that \begin{equation*} E\left( \int\limits_{a}^{t}\left\vert \mathbf{y}(s)\right\vert ^{2q-2}\mathbf{y}^{\intercal }(s)\mathbf{q}_{i}(s,\mathbf{y}(s)\mathbf{;}\tau _{n_{s}})d\mathbf{w}^{i}(s)\right) =0 \end{equation*} for all $$i=1,..,m$$. From this and the previous expression for $$\left\vert\mathbf{y(}t\mathbf{)}\right\vert ^{2q}$$, it follows that \begin{align*} E\left( \underset{a\leq u\leq t}{\sup }\left\vert \mathbf{y}(u)\right\vert ^{2q}\mid \mathscr{F}_{a}\right) & \leq \left\vert \mathbf{y}(a)\right\vert ^{2q}+2q\int\limits_{a}^{t}E\left( \left\vert \mathbf{y}(s)\right\vert ^{2q-2}\left\vert \mathbf{y}^{\intercal }(s)\mathbf{p}(s,\mathbf{y}(s);\tau _{n_{s}})\right\vert \mid \mathscr{F}_{a}\right) ds \\ & \quad +q\sum\limits_{i=1}^{m}\int\limits_{a}^{t}E\left( \left\vert \mathbf{y}(s)\right\vert ^{2q-2}\left\vert \mathbf{q}_{i}(s,\mathbf{y}(s)\mathbf{;}\tau _{n_{s}})\right\vert ^{2}\mid\mathscr{F}_{a}\right) ds \\ &\quad +2q(q-1)\sum\limits_{i=1}^{m}\int\limits_{a}^{t}E\left( \left\vert \mathbf{y}(s)\right\vert ^{2q-4}\left\vert \mathbf{y}^{\intercal }(s)\mathbf{q}_{i}(s,\mathbf{y}(s)\mathbf{;}\tau _{n_{s}})\right\vert ^{2}\mid\mathscr{F}_{a}\right) ds.
\end{align*} From conditions (4.2)–(4.4) it follows that \begin{equation*} \left\vert \mathbf{p}(s,\mathbf{y}(s)\mathbf{;}\tau _{n_{s}})\right\vert \leq K(\left\vert \mathbf{y}(s)\right\vert +\left\vert \mathbf{y}(\tau _{n_{s}})\right\vert )+K_{\beta }(1+|\mathbf{y}(s)|)+K \end{equation*} and \begin{equation*} \left\vert \mathbf{q}_{i}(s,\mathbf{y}(s)\mathbf{;}\tau _{n_{s}})\right\vert \leq K(\left\vert \mathbf{y}(s)\right\vert +\left\vert \mathbf{y}(\tau _{n_{s}})\right\vert )+K_{\beta }(1+|\mathbf{y}(s)|)+K, \end{equation*} where \begin{equation*} K_{\beta }=\left\{ \begin{array}{@{}cc@{}} K & \text{ for }\beta =1 \\ K(1+\frac{1}{2}K) & \text{ for }\beta =2 \end{array} \right. . \end{equation*} Thus, there exists a positive constant $$C$$ such that \begin{gather*} \left\vert \mathbf{y}^{\intercal }(s)\mathbf{p}(s,\mathbf{y}(s)\mathbf{;}\tau _{n_{s}})\right\vert \leq C(1+|\mathbf{y(}s\mathbf{)}|^{2})+C(1+|\mathbf{y}(\tau _{n_{s}})|^{2}),\\ \left\vert \mathbf{q}_{i}(s,\mathbf{y}(s);\tau _{n_{s}})\right\vert ^{2}\leq C(1+|\mathbf{y(}s\mathbf{)}|^{2})+C(1+|\mathbf{y}(\tau _{n_{s}})|^{2}),\\ \left\vert \mathbf{y}^{\intercal }(s)\mathbf{q}_{i}(s,\mathbf{y}(s);\tau _{n_{s}})\right\vert ^{2}\leq C|\mathbf{y(}s\mathbf{)}|^{2}(1+|\mathbf{y(}s\mathbf{)}|^{2})+C|\mathbf{y(}s\mathbf{)}|^{2}(1+|\mathbf{y}(\tau _{n_{s}})|^{2}), \end{gather*} and so \begin{equation*} E\left( \underset{a\leq u\leq t}{\sup }\left\vert \mathbf{y}(u)\right\vert ^{2q}\mid \mathscr{F}_{a}\right) \leq \left\vert \mathbf{y}_{0}\right\vert ^{2q}+L\int\limits_{a}^{t}E\left( \underset{a\leq u\leq s}{\sup }(1+\left\vert \mathbf{y}(u)\right\vert ^{2})\left\vert \mathbf{y}(u)\right\vert ^{2q-2}\mid \mathscr{F}_{a}\right) ds, \end{equation*} where $$L=2qC(2+2qm-m)$$. From the inequality $$(1+z^{2})z^{2q-2}\leq 1+2z^{2q}$$, \begin{equation*} E\left( \underset{a\leq u\leq t}{\sup }\left\vert \mathbf{y}(u)\right\vert ^{2q}\mid \mathscr{F}_{a}\right) \leq \left\vert \mathbf{y}_{0}\right\vert ^{2q}+L(t-a)+2L\int\limits_{a}^{t}E\left( \underset{a\leq u\leq s}{\sup }\left\vert \mathbf{y(}u)\right\vert ^{2q}\mid \mathscr{F}_{a}\right) ds. \end{equation*} From this and the Gronwall lemma, the assertion of the lemma is obtained. □

In what follows, additional notations and results of Kloeden & Platen (1995) will be used. Let us briefly recall that $$\mathscr{M}$$ denotes the set of all the multi-indexes $$\alpha =(j_{1},\ldots,j_{l(\alpha )})$$ with $$j_{i}\in \{0,1,\ldots ,m\}$$ and $$i=1,\ldots,l(\alpha )$$, where $$m$$ is the dimension of $$\mathbf{w}$$ in (2.1). $$l(\alpha )$$ denotes the length of the multi-index $$\alpha$$ and $$n(\alpha )$$ the number of its zero components. $$-\alpha$$ and $$\alpha -$$ are the multi-indexes in $$\mathscr{M}$$ obtained by deleting the first and the last component of $$\alpha$$, respectively. The multi-index of length zero will be denoted by $$\nu$$. For an $$\mathscr{F}_{t}$$-adapted right continuous process $$h=\{h(t),t\geq 0\}$$ with left hand limits, $$I_{\alpha }[h]_{\rho_{1},\rho _{2}}$$ denotes the multiple Ito integral of $$h$$ corresponding to the multi-index $$\alpha \in \mathscr{M}$$, evaluated at the stopping times $$\rho _{2}\geq \rho _{1}\geq 0$$.
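For instance, for the multi-index $$\alpha =(0,1,1)$$ (an illustrative example of this notation), one has $$l(\alpha )=3$$, $$n(\alpha )=1$$, $$-\alpha =(1,1)$$ and $$\alpha -=(0,1)$$.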
Further, define the operators \begin{equation*} L^{0}=\frac{\partial }{\partial t}+\sum\limits_{k=1}^{d}\mathbf{f}^{k}\frac{\partial }{\partial \mathbf{x}^{k}}+\frac{1}{2}\sum\limits_{k,l=1}^{d}\sum\limits_{j=1}^{m}\mathbf{g}_{j}^{k}\mathbf{g}_{j}^{l}\text{ }\frac{\partial ^{2}}{\partial \mathbf{x}^{k}\partial \mathbf{x}^{l}} \quad \text{and} \quad L^{j}=\sum\limits_{k=1}^{d}\mathbf{g}_{j}^{k}\frac{\partial }{\partial \mathbf{x}^{k}}, \end{equation*} for $$j=1,\ldots ,m$$. Let us consider the hierarchical set \begin{equation*} {\it{\Gamma}}_{\beta}=\left\{ \alpha\in\mathscr{M}:l(\alpha)\leq\beta\right\} \end{equation*} with $$\beta=1,2$$; and $$\mathscr{B}({\it{\Gamma}}_{\beta})=\{\alpha\in\mathscr{M}\backslash{\it{\Gamma}}_{\beta}:-\alpha\in{\it{\Gamma}}_{\beta}\}$$ the remainder set of $${\it{\Gamma}}_{\beta}$$.

Lemma 4.2 Let $$\mathbf{y}$$ be the order-$$\beta$$ WLL approximation (2.8), and $$\mathbf{z}=\{\mathbf{z}(t),$$ $$t\in \lbrack a,b]\}$$ be the stochastic process defined by \begin{equation} \mathbf{z}(t)=\mathbf{y}_{n_{t}}+\sum\limits_{\alpha \in {\it{\Gamma}}_{\beta}/\{\nu \}}I_{\alpha }[{\it{\Lambda}} _{\alpha }(\tau _{n_{t}},\mathbf{y}_{n_{t}};\tau _{n_{t}})]_{\tau _{n_{t}},t}+\sum\limits_{\alpha \in \mathscr{B}({\it{\Gamma}} _{\beta })}I_{\alpha }[{\it{\Lambda}} _{\alpha}(\cdot ,\mathbf{y}_{\cdot };\tau _{n_{t}})]_{\tau _{n_{t}},t} \end{equation} (4.7) for any given $$(\tau _{n_{t}},\mathbf{y}_{n_{t}})$$, where \begin{equation*} {\it{\Lambda}} _{\alpha }(s,\mathbf{v};\tau _{n_{t}})=\left\{ \begin{array}{@{}lc@{}} L^{j_{1}}\ldots L^{j_{l(\alpha )-1}}\mathbf{p}^{\beta }(s,\mathbf{v};\tau _{n_{t}}) & \text{if }j_{l(\alpha )}=0 \\ L^{j_{1}}\ldots L^{j_{l(\alpha )-1}}\mathbf{q}_{j_{l(\alpha )}}^{\beta }(s,\mathbf{v};\tau _{n_{t}}) & \text{if }j_{l(\alpha )}\neq 0 \end{array} \right. \end{equation*} is a function of $$s$$ and $$\mathbf{v}$$, with \begin{equation*} \mathbf{p}^{\beta }(s,\mathbf{v;}\tau _{n_{t}})=\mathbf{A}(\tau _{n_{t}})\mathbf{v}+\mathbf{a}^{\beta }(s;\tau _{n_{t}}) \quad \text{and} \quad \mathbf{q}_{i}^{\beta }(s,\mathbf{v;}\tau _{n_{t}})=\mathbf{B}_{i}(\tau _{n_{t}})\mathbf{v}+\mathbf{b}_{i}^{\beta }(s;\tau _{n_{t}}), \end{equation*} for all $$s\in \lbrack a,b]$$ and $$\mathbf{v}\in \mathbb{R}^{d}$$, and matrix functions $$\mathbf{A},\mathbf{B}_{i}$$ and vector functions $$\mathbf{a}^{\beta }\mathbf{,b}_{i}^{\beta }$$ defined as in the WLL approximation (2.8). Then \begin{gather} E\left( g(\mathbf{y}(t))\right) =E\left( g(\mathbf{z}(t))\right)\! , \end{gather} (4.8) \begin{gather} E\left( g(\mathbf{y}(t)-\mathbf{y}(\tau _{n_{t}}))\right) =E\left( g(\mathbf{z}(t)-\mathbf{z}(\tau _{n_{t}}))\right) \end{gather} (4.9) for all $$t\in \lbrack a,b]$$ and $$g\in \mathscr{C}_{P}^{2(\beta +1)}(\mathbb{R}^{d},\mathbb{R})$$; and \begin{equation} I_{\alpha }[{\it{\Lambda}} _{\alpha }(\tau _{n_{t}},\mathbf{y}_{n_{t}};\tau _{n_{t}})]_{\tau _{n_{t}},t}=I_{\alpha }[\lambda _{\alpha }(\tau _{n_{t}},\mathbf{y}_{n_{t}})]_{\tau _{n_{t}},t}, \end{equation} (4.10) for all $$\alpha \in {\it{\Gamma}} _{\beta }/\{\nu \}$$ and $$t\in \lbrack a,b]$$, where $$\lambda _{\alpha }$$ denotes the Ito coefficient function corresponding to the SDE (2.1).

Proof. The identities (4.8)–(4.9) trivially hold, since (4.7) is the order-$$\beta$$ weak Ito–Taylor expansion of the solution of the piecewise linear equation (2.8) with initial value $$\mathbf{y}(a)=\mathbf{y}_{0}$$.
By simple calculations, one obtains that the Ito coefficient functions $$\lambda _{\alpha }$$ corresponding to the SDE (2.1) are \begin{gather*} \lambda _{(0)}^{k}=\mathbf{f}^{k}, \quad \lambda _{(j)}^{k}=\mathbf{g}_{j}^{k}, \quad \lambda _{(j,0)}^{k}=\sum\limits_{i=1}^{d}\mathbf{g}_{j}^{i}\,\frac{\partial \mathbf{f}^{k}}{\partial \mathbf{x}^{i}}, \quad \lambda _{(i,j)}^{k}=\sum\limits_{l=1}^{d}\mathbf{g}_{i}^{l}\frac{\partial \mathbf{g}_{j}^{k}}{\partial \mathbf{x}^{l}} \\ \lambda _{(0,j)}^{k}=\frac{\partial \mathbf{g}_{j}^{k}}{\partial t}+\sum\limits_{i=1}^{d}\mathbf{f}^{i}\frac{\partial \mathbf{g}_{j}^{k}}{\partial \mathbf{x}^{i}}+\frac{1}{2}\sum\limits_{i,l=1}^{d}\sum\limits_{r=1}^{m}\mathbf{g}_{r}^{i}\mathbf{g}_{r}^{l}\,\frac{\partial ^{2}\mathbf{g}_{j}^{k}}{\partial \mathbf{x}^{i}\partial \mathbf{x}^{l}}, \quad \lambda _{(0,0)}^{k}=\frac{\partial \mathbf{f}^{k}}{\partial t}+\sum\limits_{i=1}^{d}\mathbf{f}^{i}\frac{\partial \mathbf{f}^{k}}{\partial \mathbf{x}^{i}}+\frac{1}{2}\sum\limits_{i,l=1}^{d}\sum\limits_{j=1}^{m}\mathbf{g}_{j}^{i}\mathbf{g}_{j}^{l}\,\frac{\partial ^{2}\mathbf{f}^{k}}{\partial \mathbf{x}^{i}\partial \mathbf{x}^{l}}, \end{gather*} for $$\alpha \in {\it{\Gamma}} _{2}$$. By taking into account that $$\mathbf{p}^{\beta }(s,\mathbf{v};\tau _{n})$$ and $$\mathbf{q}_{i}^{\beta }(s,\mathbf{v};\tau _{n})$$ are linear functions of $$s$$ and $$\mathbf{v}$$, it is not difficult to obtain that $${\it{\Lambda}} _{\alpha }(\tau _{n_{s}},\mathbf{y}_{n_{s}};\tau _{n_{s}})=\lambda _{\alpha }(\tau _{n_{s}},\mathbf{y}_{n_{s}})$$ for all $$\alpha \in {\it{\Gamma}} _{2}$$, which implies (4.10). □ Note that the stochastic process $$\mathbf{z}$$ defined in the previous lemma is the solution of the piecewise linear SDE (2.8) and $${\it{\Lambda}} _{\alpha}$$ denotes the Ito coefficient functions corresponding to that equation. Therefore, (4.7) is the Ito–Taylor expansion of the Local Linear approximation (2.8). The main convergence result for the WLL approximations is then stated in the following theorem. Theorem 4.1 Let $$\mathbf{x}$$ be the solution of the SDE (2.1) on $$[a,b]$$, and $$\mathbf{y}$$ the order-$$\beta$$ WLL approximation of $$\mathbf{x}$$ defined by (2.8). Suppose that the drift and diffusion coefficients of the SDE (2.1) satisfy the conditions (4.1)–(4.4). Further, suppose that the initial values of $$\mathbf{x}$$ and $$\mathbf{y}$$ satisfy the conditions $$E(\left\vert \mathbf{x}(a)\right\vert ^{q})<\infty$$ and $$\left\vert E\left( g(\mathbf{x}(a))\right) -E\left(g(\mathbf{y}(a))\right)\right\vert \leq C_{0}h^{\beta }$$ for $$q=1,2,\ldots ,$$ some constant $$C_{0}>0$$ and all $$g\in \mathscr{C}_{P}^{2(\beta +1)}(\mathbb{R}^{d},\mathbb{R})$$. Then there exists a positive constant $$C$$ such that \begin{equation} \left\vert E\left( g(\mathbf{x}(b))\mid \mathscr{F}_{a}\right) -E\left( g(\mathbf{y}(b))\mid \mathscr{F}_{a}\right) \right\vert \leq C(b-a)h^{\beta }. \label{GlobalOrder} \end{equation} (4.11) Proof. For $$l=1,2,\ldots,$$ let $$P_{l}=\{\mathbf{p}\in\{1,\ldots,d\}^{l}\}$$, and let $$\mathbf{F}_{\mathbf{p}}:\mathbb{R}^{d}\rightarrow\mathbb{R}$$ be the function defined as \begin{equation*} \mathbf{F}_{\mathbf{p}}(\mathbf{x})=\prod\limits_{i=1}^{l}\mathbf{x}^{p_{i}}, \end{equation*} where $$\mathbf{p}=(p_{1},\ldots,p_{l})\in P_{l}$$.
By applying Lemma 5.11.7 in Kloeden & Platen (1995) to (2.8) and taking into account that (4.7) is the order-$$\beta$$ weak Ito–Taylor expansion of the solution of (2.8), one obtains \begin{equation*} \left\vert E\left( \mathbf{F}_{\mathbf{p}}(\mathbf{y}_{n+1}-\mathbf{y}_{n})-\mathbf{F}_{\mathbf{p}}\left(\sum\limits_{\alpha \in {\it{\Gamma}} _{\beta }/\{\nu \}}I_{\alpha }[{\it{\Lambda}} _{\alpha }(\tau _{n},\mathbf{y}_{n};\tau _{n})]_{\tau _{n},\tau _{n+1}}\right)\mid\mathscr{F}_{\tau _{n}}\right) \right\vert \leq K(1+\left\vert \mathbf{y}_{n}\right\vert ^{2r})(\tau _{n+1}-\tau _{n})h_{n}^{\beta }, \end{equation*} for all $$\mathbf{p}\in P_{l}$$ and $$l=1,\ldots ,2\beta +1$$, some $$K>0$$ and $$r\in \{1,2,\ldots \}$$, where $${\it{\Lambda}} _{\alpha }$$ denotes the Ito coefficient function corresponding to (2.8), and $$h_{n}=\tau_{n+1}-\tau _{n}$$. Further, Lemma 4.2 implies that \begin{equation*} E\left( \mathbf{F}_{\mathbf{p}}\left(\sum\limits_{\alpha \in {\it{\Gamma}} _{\beta }/\{\nu \}}I_{\alpha }[\lambda _{\alpha }(\tau _{n},\mathbf{y}_{n})]_{\tau _{n},\tau _{n+1}}\right)\mid\mathscr{F}_{\tau _{n}}\right) =E\left( \mathbf{F}_{\mathbf{p}}\left(\sum\limits_{\alpha \in {\it{\Gamma}} _{\beta }/\{\nu \}}I_{\alpha }[{\it{\Lambda}} _{\alpha }(\tau _{n},\mathbf{y}_{n};\tau _{n})]_{\tau _{n},\tau _{n+1}}\right)\mid \mathscr{F}_{\tau _{n}}\right)\!, \end{equation*} where $$\lambda _{\alpha }$$ denotes the Ito coefficient function corresponding to (2.1). Hence, \begin{align*} \left\vert E\left( \mathbf{F}_{\mathbf{p}}(\mathbf{y}_{n+1}-\mathbf{y}_{n})-\mathbf{F}_{\mathbf{p}}\left(\sum\limits_{\alpha \in {\it{\Gamma}} _{\beta }/\{\nu \}}I_{\alpha }[\lambda _{\alpha }(\tau _{n},\mathbf{y}_{n})]_{\tau _{n},\tau _{n+1}}\right)\mid\mathscr{F}_{\tau _{n}}\right) \right\vert & \leq K(1+\left\vert \mathbf{y}_{n}\right\vert ^{2r})(\tau _{n+1}-\tau _{n})h_{n}^{\beta } \\ & \leq K(1+\underset{0\leq k\leq n}{\max }\left\vert \mathbf{y}_{k}\right\vert ^{2r})(\tau _{n+1}-\tau _{n})h_{n}^{\beta }. \end{align*} On the other hand, Theorem 4.5.4 in Kloeden & Platen (1995) applied to (2.8) and Lemma 4.1 imply \begin{equation*} E\left( \left\vert \mathbf{y}_{n+1}-\mathbf{y}_{n}\right\vert ^{2q}\mid \mathscr{F}_{\tau _{n}}\right) \leq L(1+\underset {0\leq k\leq n}{\max }\left\vert \mathbf{y}_{k}\right\vert ^{2q})(\tau _{n+1}-\tau _{n})^{q} \end{equation*} for all $$0\leq n\leq N-1$$, and \begin{equation*} E\left( \underset{0\leq k\leq n_{b}}{\max }\left\vert \mathbf{y}_{k}\right\vert ^{2q}\mid\mathscr{F} _{a}\right) \leq C(1+\left\vert \mathbf{y}_{0}\right\vert ^{2q}), \end{equation*} respectively, where $$C$$ and $$L$$ are positive constants. The proof concludes by using Theorem 14.5.2 in Kloeden & Platen (1995) with the last three inequalities. □ For state equations with additive noise, the order of weak convergence of the WLL approximations provided by this theorem matches that obtained earlier in Carbonell et al. (2006). Theorem 4.1 provides the global order of weak convergence for the WLL approximations at the time $$t=b$$.
Notice further that inequality (4.11) implies that the uniform bound \begin{equation} \sup_{t\in \lbrack a,b]}\left\vert E\left( g(\mathbf{x}(t))\mid \mathscr{F}_{a}\right) -E\left( g(\mathbf{y}(t))\mid \mathscr{F}_{a}\right) \right\vert \leq C(b-a)h^{\beta } \label{UniformOrder} \end{equation} (4.12) holds as well for the order-$$\beta$$ WLL approximation $$\mathbf{y}$$ since, in general, the global order of weak convergence of a numerical integrator implies the uniform one (see Theorem 14.5.1 and Exercise 14.5.3 in Kloeden & Platen (1995) for details). Convergence in Theorem 4.1 has been proved under the assumption of continuity for $$\mathbf{f}$$ and $$\mathbf{g}_{i}$$. In many practical situations, however, it is important to integrate SDEs with non-globally Lipschitz coefficients on $$\mathbb{R}^{d}$$ (Milstein & Tretyakov, 2005). Typically, for this type of equations, the conventional weak integrators display explosive values for some realizations. In such a case, if each numerical realization of an order-$$\beta$$ scheme leaving a sufficiently large sphere $$\mathscr{R}\subset\mathbb{R}^{d}$$ is rejected, then Theorem 2.3 in Milstein & Tretyakov (2005) ensures that the accuracy of the scheme is $$\varepsilon +O(h^{\beta })$$, where $$\varepsilon$$ can be made arbitrarily small by increasing the sphere radius. That theorem can be applied to the WLL approximations as well. Finally, the convergence rate of the approximate LL filter is stated as follows. Theorem 4.2 Given a set of $$M$$ partial and noisy observations of the state equation (2.1) on $$\{t\}_{M}$$, and under the assumption that conditions (4.1)–(4.4) hold on $$[t_{0},T]$$, the approximate order-$$\beta$$ $$(=1,2)$$ LL filter defined on $$(\tau)_{h}\supset \{t\}_{M}$$ converges with order $$\beta$$ $$(=1,2)$$ to the exact LMV filter (2.3)–(2.7) as $$h$$ goes to zero. Proof. Lemma 4.1 and Theorem 4.1 imply that the order-$$\beta$$ WLL approximation $$\mathbf{y}$$ of $$\mathbf{x}$$ defined by (2.8) satisfies the inequalities (4.5) and (4.12) for any integration interval $$[a,b]\subset \lbrack t_{0},T]$$. Thus, by applying that lemma and theorem in each interval $$[t_{k},t_{k+1}]$$ with $$\mathbf{y}(t_{k})\equiv \mathbf{y}_{t_{k}/t_{k}}$$ (and $$\mathbf{y}_{t_{0}/t_{0}}\equiv \mathbf{x}_{t_{0}/t_{0}}$$), for all $$t_{k},t_{k+1}\in\{t\}_{M}$$, the bound and convergence conditions (3.1) and (3.2) required by Theorem 3.1 for the convergence of the filter designed from $$\mathbf{y}$$ are satisfied. Therefore, the inequalities (3.7)–(3.9) hold for the approximate LL filter defined in terms of the first two conditional moments of (2.8) and Definition 3.1, and so this filter converges with order $$\beta$$ to the exact LMV filter as $$h$$ goes to zero. □ 5. Practical algorithms This section deals with two practical implementations of the LL filter introduced above: one highly accurate but computationally costly, and the other less accurate but significantly faster. The first implementation is based on a stochastic approximation for the predictions $$\mathbf{y}_{t/t_{k}}$$ and $$\mathbf{P}_{t/t_{k}}$$ via Monte Carlo simulations, whereas the second one is derived from a deterministic approximation for these predictions. An adaptive strategy for the construction of automatic time discretizations is given for each implementation, as well as a criterion for the selection of the number of simulations for the first one.
5.1 Stochastic formulas for the predictions A straightforward approximation for the first two moments of the piecewise linear equation (2.8) is that derived from the order $$\beta =1$$ weak LL schemes considered in Carbonell et al. (2006) and Jimenez et al. (2017). In this case, for all $$t\in (t_{k},t_{k+1}]$$, the random variable \begin{eqnarray} \widetilde{\mathbf{y}}(t) &=&\mathbf{\mu }(\tau _{n_{t}},\widetilde{\mathbf{y }}(\tau _{n_{t}});t-\tau _{n_{t}}) \notag \\ &&+\sqrt{\mathbf{{\it{\Lambda}} }(\tau _{n_{t}},\widetilde{\mathbf{y}}(\tau _{n_{t}})\widetilde{\mathbf{y}}^{\intercal }(\tau _{n_{t}});t-\tau _{n_{t}})- \mathbf{\mu }(\tau _{n_{t}},\widetilde{\mathbf{y}}(\tau _{n_{t}});t-\tau _{n_{t}})\mathbf{\mu }^{\intercal }(\tau _{n_{t}},\widetilde{\mathbf{y}} (\tau _{n_{t}});t-\tau _{n_{t}})}\,\mathbf{\xi }_{_{t}} \end{eqnarray} (5.1) gives a weak approximation to the solution $$\mathbf{y}(t)$$ of (2.8) with initial condition $$\mathbf{y}(t_{k})=\overline{\mathbf{y}}_{t_{k}/t_{k}}$$. Here, $$\mathbf{\mu }(\tau ,\mu_{0};h)=\mathbb{E}\left( \mathbf{y}(\tau +h)\mid \mathbf{y}(\tau)=\mu_{0}\right)$$, $$\mathbf{{\it{\Lambda}} }(\tau ,\mathbf{{\it{\Lambda}} }_{0};h)=\mathbb{E}\left( \mathbf{y}(\tau +h)\mathbf{y}^{\intercal }(\tau +h)\mid\mathbf{y}(\tau )\mathbf{y}^{\intercal }(\tau )=\mathbf{{\it{\Lambda}} }_{0}\right)$$ and $$\mathbf{\xi }_{_{t}}$$ is a $$d$$-dimensional random vector with i.i.d. symmetric random elements $$\mathbf{\xi }_{_{t}}^{k}$$ having zero mean, variance $$1$$ and finite moments of any order for all $$t\in [t_{k},t_{k+1}]$$ and $$k=1,..,d$$. Thus, computing $$S$$ simulations $$\widetilde{\mathbf{y}}^{\{s\}}(t_{k+1})$$ of $$\widetilde{\mathbf{y}}(t_{k+1})$$, the sample averages \begin{equation*} \overline{\mathbf{y}}_{t_{k+1}/t_{k}}=\frac{1}{S}\sum\limits_{s=1}^{S} \widetilde{\mathbf{y}}^{\{s\}}(t_{k+1})\qquad\overline{ \mathbf{V}}_{t_{k+1}/t_{k}}=\frac{1}{S}\sum\limits_{s=1}^{S}\widetilde{ \mathbf{y}}^{\{s\}}(t_{k+1})\text{ }(\widetilde{\mathbf{y}}^{\{s\}}(t_{k+1}) \text{ })^{\intercal }-\overline{\mathbf{y}}_{t_{k+1}/t_{k}}\overline{ \mathbf{y}}_{t_{k+1}/t_{k}}^{\intercal } \end{equation*} give an approximation to the predictions $$\mathbf{y}_{t_{k+1}/t_{k}}$$ and $$\mathbf{V}_{t_{k+1}/t_{k}}$$, respectively. A key point in the generation of the random variable (5.1) is the evaluation of just one matrix exponential for computing the expected values $$\mathbf{\mu }$$ and $$\mathbf{{\it{\Lambda}} }$$. 
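For illustration purposes only, this sampling-and-averaging procedure can be outlined as follows. The sketch below assumes that the conditional-moment maps $$\mathbf{\mu }$$ and $$\mathbf{{\it{\Lambda}} }$$ are available as callables (for instance, built from the exponential formulas presented next), takes Gaussian variables for $$\mathbf{\xi }$$ (one admissible choice), and uses NumPy-based function and variable names that are merely illustrative and not part of the original algorithm.

```python
import numpy as np

def sqrtm_psd(A):
    """Symmetric square root of a (numerically) positive semidefinite matrix."""
    w, U = np.linalg.eigh((A + A.T) / 2)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.T

def mc_prediction(mu, Lam, y_filt, P_filt, t_grid, S, rng):
    """Monte Carlo approximation of the predictions between two observations.

    mu(tau, m0, h)  -> conditional mean of y(tau+h) given y(tau) = m0
    Lam(tau, L0, h) -> conditional second moment of y(tau+h) given y(tau) y(tau)^T = L0
    y_filt, P_filt  -> filter mean and second moment at t_k
    t_grid          -> time instants t_k = tau_0 < tau_1 < ... < tau_N = t_{k+1}
    """
    d = y_filt.shape[0]
    y_samples = np.empty((S, d))
    for s in range(S):
        # draw the initial state sample from the filter moments at t_k
        V0 = P_filt - np.outer(y_filt, y_filt)
        y = y_filt + sqrtm_psd(V0) @ rng.standard_normal(d)
        for n in range(len(t_grid) - 1):
            tau, h = t_grid[n], t_grid[n + 1] - t_grid[n]
            m = mu(tau, y, h)                        # conditional mean over the step
            L = Lam(tau, np.outer(y, y), h)          # conditional second moment
            C = L - np.outer(m, m)                   # conditional covariance
            y = m + sqrtm_psd(C) @ rng.standard_normal(d)   # one realization of (5.1)
        y_samples[s] = y
    y_pred = y_samples.mean(axis=0)                                   # sample mean
    P_pred = (y_samples[:, :, None] * y_samples[:, None, :]).mean(0)  # sample second moment
    V_pred = P_pred - np.outer(y_pred, y_pred)                        # sample variance
    return y_pred, P_pred, V_pred
```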
Indeed, from Theorem 2 in Jimenez (2015), \begin{equation} \mathbf{\mu }(\tau ,\mathbf{\mu }_{0};h)=\mathbf{\mu }_{0}+\mathbf{L}_{2}e^{ \mathbf{M}h}\mathbf{u} \quad \text{and} \quad vec(\mathbf{{\it{\Lambda}} }(\tau ,\mathbf{{\it{\Lambda}} }_{0};h))=\mathbf{L}_{1}e^{\mathbf{M}h} \mathbf{u,} \end{equation} (5.2) where the vector $$\mathbf{u}$$ and the matrices $$\mathbf{M}$$, $$\mathbf{L}_{1}$$, $$\mathbf{L}_{2}$$ are given by \begin{equation*} \mathbf{M}=\left[ \begin{array}{@{}cccccc@{}} \mathscr{A}(\tau ) & \mathscr{B}_{5}(\tau ) & \mathscr{B}_{4}(\tau ) & \mathscr{B}_{3}(\tau ) & \mathscr{B}_{2}(\tau ) & \mathscr{B}_{1}(\tau ) \\ \mathbf{0} & \mathbf{C}(\tau ) & \mathbf{I}_{d+2} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{C}(\tau ) & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & 0 & 2 & 0 \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & 0 & 0 & 1 \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & 0 & 0 & 0 \end{array} \right]\text{,} \quad \mathbf{u}=\left[ \begin{array}{@{}c@{}} vec(\mathbf{{\it{\Lambda}} }_{0}) \\ \mathbf{0} \\ \mathbf{r} \\ 0 \\ 0 \\ 1 \end{array} \right] \in \mathbb{R} ^{(d^{2}+2d+7)} \end{equation*} $\mathbf{L}_{1}=\left[\begin{array}{@{}cc@{}}\mathbf{I}_{d^{2}} & \mathbf{0}_{d^{2}\times (2d+7)}\end{array}\right]$ and $\mathbf{L}_{2}=\left[\begin{array}{@{}ccc@{}}\mathbf{0}_{d\times (d^{2}+d+2)} & \mathbf{I}_{d} & \mathbf{0}_{d\times 5}\end{array}\right]$, in terms of the vector $$\mathbf{r}$$ and matrices $$\mathscr{A}$$, $$\mathscr{B}_{1}$$, $$\mathbf{C}$$ defined as \begin{align*} &\mathscr{A}(\tau )=\mathbf{A}(\tau )\mathbf{\oplus A}(\tau )+\sum\limits_{i=1}^{m}\mathbf{B}_{i}(\tau )\mathbf{\otimes B} _{i}(\tau )\text{,}\\ &\mathbf{C(}\tau )=\left[ \begin{array}{@{}ccc@{}} \mathbf{A}(\tau ) & \mathbf{a}_{1}(\tau ) & \mathbf{A}(\tau )\mathbf{\mu } _{0}+\mathbf{a}_{0}(\tau ) \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array} \right]\text{,} \quad \mathbf{r}=\left[ \begin{array}{@{}c@{}} \mathbf{0}_{(d+1)\times 1} \\ 1 \end{array} \right] \end{align*} $$\mathscr{B}_{1}(\tau )={\rm vec}(\mathbf{\beta }_{1}(\tau ))+\beta _{4}(\tau )\mathbf{y}_{\tau /t_{k}}$$, $$\mathscr{B}_{2}(\tau )={\rm vec}(\mathbf{\beta }_{2}(\tau ))+\mathbf{\beta }_{5}(\tau )\mathbf{y}_{\tau /t_{k}}$$, $$\mathscr{B}_{3}(\tau )={\rm vec}(\mathbf{\beta }_{3}(\tau ))$$, $$\mathscr{B}_{4}(\tau )=\mathbf{\beta }_{4}(\tau )\mathbf{L}$$ and $$\mathscr{B}_{5}(\tau )=\mathbf{\beta }_{5}(\tau )\mathbf{L}$$. 
Here, $\mathbf{L}=\left[\begin{array}{@{}ll@{}}\mathbf{I}_{d} & \mathbf{0}_{d\times 2}\end{array}\right]$, $$\mathbf{I}_{d}$$ is the $$d$$-dimensional identity matrix and \begin{gather*} \mathbf{\beta }_{1}(\tau )=\sum\limits_{i=1}^{m}\mathbf{b}_{i,0}(\tau )\mathbf{b}_{i,0}^{\intercal }(\tau )\text{,} \quad \mathbf{\beta }_{2}(\tau )=\sum\limits_{i=1}^{m}\mathbf{b}_{i,0}(\tau )\mathbf{b}_{i,1}^{\intercal }(\tau )+\mathbf{b}_{i,1}(\tau )\mathbf{b}_{i,0}^{\intercal }(\tau )\text{,} \quad \mathbf{\beta }_{3}(\tau )=\sum\limits_{i=1}^{m}\mathbf{b}_{i,1}(\tau )\mathbf{b}_{i,1}^{\intercal }(\tau ) \\ \mathbf{\beta }_{4}(\tau )=\mathbf{a}_{0}(\tau )\oplus \mathbf{a}_{0}(\tau )+\sum\limits_{i=1}^{m}\left(\mathbf{b}_{i,0}(\tau )\otimes \mathbf{B}_{i}(\tau )+\mathbf{B}_{i}(\tau )\otimes \mathbf{b}_{i,0}(\tau )\right)\text{,} \\ \mathbf{\beta }_{5}(\tau )=\mathbf{a}_{1}(\tau )\oplus \mathbf{a}_{1}(\tau )+\sum\limits_{i=1}^{m}\left(\mathbf{b}_{i,1}(\tau )\otimes \mathbf{B}_{i}(\tau )+\mathbf{B}_{i}(\tau )\otimes \mathbf{b}_{i,1}(\tau )\right), \end{gather*} where $$\mathbf{a}_{0}(\tau )$$, $$\mathbf{a}_{1}(\tau )$$, $$\mathbf{b}_{i,0}(\tau )$$ and $$\mathbf{b}_{i,1}(\tau )$$ are vectors satisfying \begin{equation*} \mathbf{a}^{\beta }(t;\tau )=\mathbf{a}_{0}(\tau )+\mathbf{a}_{1}(\tau )(t-\tau ) \quad \text{and} \quad \mathbf{b}_{i}^{\beta }(t;\tau )=\mathbf{b}_{i,0}(\tau )+\mathbf{b}_{i,1}(\tau )(t-\tau ) \end{equation*} for all $$t\in \lbrack t_{k},t_{k+1}]$$, where the vector functions $$\mathbf{a}^{\beta }$$ and $$\mathbf{b}_{i}^{\beta }$$ are defined as in the WLL approximation (2.8) but replacing $$\mathbf{y}(s)$$ by $$\mathbf{\mu }_{0}.$$ The matrix functions $$\mathbf{A},\mathbf{B}_{i}$$ are also defined as in the WLL approximation (2.8) but replacing $$\mathbf{y}(s)$$ by $$\mathbf{\mu }_{0}$$. The symbols $${\rm vec}$$, $$\oplus$$ and $$\otimes$$ denote the vectorization operator, the Kronecker sum and the Kronecker product, respectively. Alternatively, see Theorems 3 and 4 in Jimenez (2015) for simplified formulas in the case of autonomous state equations or of state equations with additive noise. 5.2 Adaptive selection of a time discretization In order to write a code that automatically determines a suitable time discretization $$(\tau )_{h}\supset \{t\}_{M}$$ for achieving a prescribed accuracy in the computation of the predictions $$\overline{\mathbf{y}}_{t_{k+1}/t_{k}}$$ and $$\overline{\mathbf{P}}_{t_{k+1}/t_{k}}$$, an adequate adaptive strategy is necessary. Since the first two conditional moments of $$\mathbf{y}$$ are solutions of ordinary differential equations (ODEs) (Jimenez, 2015), conventional adaptive strategies for numerical integrators of this class of equations are useful. In what follows, the adaptive strategy described in Hairer et al. (1993) is adapted to the LL filter requirements. Once the values for the relative and absolute tolerances $${\rm rtol}_{\mathbf{y}},{\rm rtol}_{\mathbf{P}}$$ and $${\rm atol}_{\mathbf{y}},{\rm atol}_{\mathbf{P}}$$ for the local discretization errors of the first two conditional moments, for the maximum and minimum stepsizes $$h_{\max }^{k}$$ and $$h_{\min }$$, and for the floating-point precision $$prs$$ are set, an initial stepsize $$h_{0}$$ needs to be estimated.
Specifically, \begin{equation*} h_{0}=\max \{h_{\min },\min \{\delta (\mathbf{y}),\delta (vec(\mathbf{P})),h_{\max }^{0}\}\}, \end{equation*} where \begin{equation*} \delta (\mathbf{v})=\min \{100\delta _{1}(\mathbf{v}),\delta _{2}(\mathbf{v})\} \end{equation*} with \begin{equation*} \delta _{1}(\mathbf{v})=\left\{ \begin{array}{@{}cc@{}} {\rm atol}_{\mathbf{v}} & \text{if }d_{0}(\mathbf{v})<10\cdot {\rm atol}_{\mathbf{v}}\text{ or }d_{1}(\mathbf{v})<10\cdot {\rm atol}_{\mathbf{v}} \\ 0.01\frac{d_{0}(\mathbf{v})}{d_{1}(\mathbf{v})} & \text{otherwise} \end{array} \right. \end{equation*} and \begin{equation*} \delta _{2}(\mathbf{v})=\left\{ \begin{array}{@{}cc@{}} \max \{{\rm atol}_{\mathbf{v}},\delta _{1}\cdot {\rm rtol}_{\mathbf{v}}\} & \text{if }{\rm {max}}\{d_{1}(\mathbf{v}),d_{2}(\mathbf{v})\}\leq prs \\ \left(\dfrac{0.01}{\max \{d_{1}(\mathbf{v}),d_{2}(\mathbf{v})\}}\right)^{\frac{1}{\beta +1}} & \text{otherwise}. \end{array} \right. \end{equation*} Here, $$d_{0}(\mathbf{v})=\left\Vert \mathbf{v}_{t_{0}/t_{0}}\right\Vert$$, $$d_{1}(\mathbf{v})=\left\Vert \mathbf{F}(t_{0},\mathbf{v}_{t_{0}/t_{0}})\right\Vert$$ and $$d_{2}(\mathbf{v})=\left\Vert \dfrac{\partial \mathbf{F}(t_{0},\mathbf{v}_{t_{0}/t_{0}})}{\partial t}+\dfrac{\partial \mathbf{F}(t_{0},\mathbf{v}_{t_{0}/t_{0}})}{\partial \mathbf{v}}\mathbf{F}(t_{0},\mathbf{v}_{t_{0}/t_{0}})\right\Vert$$ are the scaled norms of the filters and of their first two derivatives with respect to $$t$$ at $$t_{0}$$, where $$\mathbf{F}$$ is the vector field of the equation for $$\mathbf{v}$$ (i.e., (2.9) for $$\mathbf{y}$$, and the vectorization of (2.10) for $$vec(\mathbf{P})$$), and $$\Vert \mathbf{u}\Vert =\sqrt{\dfrac{1}{\dim (\mathbf{u})}\sum_{i=1}^{\dim (\mathbf{u})}\left(\dfrac{\mathbf{u}^{i}}{\mathbf{sc}^{i}(\mathbf{u})}\right)^{2}}$$ with $$\mathbf{sc}^{i}(\mathbf{u})={\rm atol}_{\mathbf{u}}+{\rm rtol}_{\mathbf{u}}\cdot \left\vert \mathbf{u}_{t_{0}/t_{0}}^{i}\right\vert$$ for any vector $$\mathbf{u}$$. Starting with the filter estimates $$\overline{\mathbf{y}}_{t_{k}/t_{k}}$$ and $$\overline{\mathbf{P}}_{t_{k}/t_{k}}$$, the basic steps of the adaptive algorithm for determining $$\left( \tau \right) _{h}$$ and computing the predictions $$\overline{\mathbf{y}}_{t_{k+1}/t_{k}}$$ and $$\overline{\mathbf{P}}_{t_{k+1}/t_{k}}$$ between two consecutive observations $$t_{k}$$ and $$t_{k+1}$$ are the following:

Step 1. Set $$s=1$$ and $$S=S_{0}$$.

Step 2. Set $$n=n_{t_{k}}$$, $$\tau _{n}=t_{k}$$, $$\widetilde{\mathbf{y}}^{\{s\}}(\tau _{n})=\overline{\mathbf{y}}_{t_{k}/t_{k}}+\sqrt{\overline{\mathbf{P}}_{t_{k}/t_{k}}-\overline{\mathbf{y}}_{t_{k}/t_{k}}\overline{\mathbf{y}}_{t_{k}/t_{k}}^{\intercal }}\mathbf{\xi }_{_{t_{k}}}$$, $$\widetilde{\mathbf{y}}_{\tau _{n}/t_{k}}=\overline{\mathbf{y}}_{t_{k}/t_{k}}$$ and $$\widetilde{\mathbf{P}}_{\tau _{n}/t_{k}}=\overline{\mathbf{P}}_{t_{k}/t_{k}}$$, where $$\mathbf{\xi }_{_{t_{k}}}$$ is defined as in the previous subsection.

Step 3. Computation of the first two conditional moments of $$\widetilde{\mathbf{y}}$$ at $$\tau _{n+1}=\tau _{n}+h_{n}$$ given $$\widetilde{\mathbf{y}}^{\{s\}}(\tau _{n})$$ by means of the expressions (5.2). That is, \begin{equation*} \widetilde{\mathbf{y}}_{\tau _{n+1}/t_{k}}=\mu (\tau _{n},\widetilde{\mathbf{y}}^{\{s\}}(\tau _{n});h_{n}) \quad \text{and} \quad \widetilde{\mathbf{P}}_{\tau _{n+1}/t_{k}}={\it{\Lambda}} (\tau _{n},\widetilde{\mathbf{y}}^{\{s\}}(\tau _{n})(\widetilde{\mathbf{y}}^{\{s\}}(\tau _{n}))^{\intercal };h_{n}). \end{equation*}

Step 4. Computation of an alternative estimate for the above conditional moments at $$\tau _{n+1}=\tau _{n}+h_{n}$$ by the recursive evaluation of the expressions (5.2) at the two consecutive times $$\tau ^{\ast }=\tau _{n}+h_{n}/2$$ and $$\tau ^{\ast }+h_{n}/2$$. That is, \begin{equation*} \widehat{\mathbf{y}}_{\tau _{n+1}/t_{k}}=\mu (\tau ^{\ast },\widetilde{\mathbf{y}}_{\tau ^{\ast }/t_{k}};h_{n}/2) \quad \text{and} \quad \widehat{\mathbf{P}}_{\tau _{n+1}/t_{k}}={\it{\Lambda}} (\tau ^{\ast },\widetilde{\mathbf{P}}_{\tau ^{\ast }/t_{k}};h_{n}/2), \end{equation*} where $$\widetilde{\mathbf{y}}_{\tau ^{\ast }/t_{k}}=\mu (\tau _{n},\widetilde{\mathbf{y}}^{\{s\}}(\tau _{n});h_{n}/2)$$ and $$\widetilde{\mathbf{P}}_{\tau ^{\ast }/t_{k}}={\it{\Lambda}} (\tau _{n},\widetilde{\mathbf{y}}^{\{s\}}(\tau _{n})(\widetilde{\mathbf{y}}^{\{s\}}(\tau_{n}))^{\intercal };h_{n}/2)$$.

Step 5. Estimation of the time discretization errors by \begin{equation*} E_{1}=\sqrt{\dfrac{1}{d}\sum_{i=1}^{d}\left(\dfrac{\widetilde{\mathbf{y}}_{\tau _{n+1}/t_{k}}^{i}-\widehat{\mathbf{y}}_{\tau _{n+1}/t_{k}}^{i}}{\mathbf{sc}^{i}(\widetilde{\mathbf{y}})}\right)^{2}} \quad \text{and} \quad E_{2}=\sqrt{\dfrac{1}{d^{2}}\sum_{i=1}^{d^{2}}\left(\dfrac{\widetilde{\mathbf{p}}_{\tau _{n+1}/t_{k}}^{i}-\widehat{\mathbf{p}}_{\tau _{n+1}/t_{k}}^{i}}{\mathbf{sc}^{i}(\widetilde{\mathbf{p}})}\right)^{2}}, \end{equation*} where $$\mathbf{sc}^{i}(\mathbf{v})={\rm atol}_{\mathbf{v}}+{\rm rtol}_{\mathbf{v}}\cdot\max \left\{\left\vert \mathbf{v}_{\tau _{n}/t_{k}}^{i}\right\vert\!,\left\vert\mathbf{v}_{\tau _{n+1}/t_{k}}^{i}\right\vert \right\}$$, $$\widetilde{\mathbf{p}}_{\tau _{n+1}/t_{k}}=vec(\widetilde{\mathbf{P}}_{\tau _{n+1}/t_{k}})$$ and $$\widehat{\mathbf{p}}_{\tau_{n+1}/t_{k}}=vec(\widehat{\mathbf{P}}_{\tau _{n+1}/t_{k}})$$.

Step 6. Estimation of a new stepsize \begin{equation*} h_{\rm new}=\max \{h_{\min },\min \{\delta _{{\rm new}}(E_{1}),\delta _{{\rm new}}(E_{2}),h_{\max }^{k}\}\}, \end{equation*} where \begin{equation*} \delta _{{\rm new}}(E)=\left\{ \begin{array}{@{}cc@{}} h_{n}\cdot \min \left\{5,\max \left\{0.25,0.8\cdot \left(\dfrac{1}{E}\right)^{\frac{1}{\beta +1}}\right\}\right\} & E\leq 1 \\[10pt] h_{n}\cdot \min \left\{1,\max \left\{0.1,0.2\cdot \left(\dfrac{1}{E}\right)^{\frac{1}{\beta +1}}\right\}\right\} & E>1. \end{array} \right. \end{equation*}

Step 7. Validation of $$\widetilde{\mathbf{y}}_{\tau _{n+1}/t_{k}}$$ and $$\widetilde{\mathbf{P}}_{\tau _{n+1}/t_{k}}$$: if $${\rm {max}}\{E_{1},E_{2}\}\leq 1$$ or $$h_{n}=h_{\min }$$, then accept the stepsize $$h_{n}$$ and compute \begin{equation*} \widetilde{\mathbf{y}}^{\{s\}}(\tau _{n+1})=\widetilde{\mathbf{y}}_{\tau _{n+1}/t_{k}}+\sqrt{\widetilde{\mathbf{P}}_{\tau _{n+1}/t_{k}}-\widetilde{\mathbf{y}}_{\tau _{n+1}/t_{k}}\widetilde{\mathbf{y}}_{\tau _{n+1}/t_{k}}^{\intercal }}\,\mathbf{\xi }_{\tau _{n+1}}. \end{equation*} Otherwise, return to Step 3 with $$h_{n}=h_{{\rm new}}$$.

Step 8. Control of the final stepsize: if $$\tau _{n+1}+h_{{\rm new}}<t_{k+1}$$, then return to Step 3 with $$h_{n}=h_{{\rm new}}$$ and $$n=n+1$$; if $$\tau _{n+1}+h_{{\rm new}}>t_{k+1}$$, then return to Step 3 with $$h_{n}=t_{k+1}-\tau _{n+1}$$ and $$n=n+1$$; if $$\tau _{n+1}=t_{k+1}$$ and $$s<S$$, return to Step 2 with $$s=s+1$$.

Step 9. Computation of the averages \begin{eqnarray*} \overline{\mathbf{y}}_{t_{k+1}/t_{k}} &=&\frac{1}{S}\sum\limits_{s=1}^{S}\widetilde{\mathbf{y}}^{\{s\}}(t_{k+1}), \quad \overline{\mathbf{P}}_{t_{k+1}/t_{k}}=\frac{1}{S}\sum\limits_{s=1}^{S}\widetilde{\mathbf{y}}^{\{s\}}(t_{k+1})\,(\widetilde{\mathbf{y}}^{\{s\}}(t_{k+1}))^{\intercal },\\ \overline{\mathbf{L}}_{t_{k+1}/t_{k}}^{i} &=&\frac{1}{S}\sum\limits_{s=1}^{S}((\widetilde{\mathbf{y}}^{\{s\}}(t_{k+1})\,(\widetilde{\mathbf{y}}^{\{s\}}(t_{k+1}))^{\intercal })^{ii})^{2}-(\overline{\mathbf{P}}_{t_{k+1}/t_{k}}^{ii})^{2} \quad \text{for all} \,\, i=1,..,d. \end{eqnarray*}

Step 10. Control of the number $$S$$ of simulations via the statistical error: if the statistical error is not small enough, return to Step 2 with new values for $$S$$ and $$s$$ (see details in the next subsection).

Clearly, in this adaptive strategy, the selected values for the relative and absolute tolerances will have a direct impact on the filtering performance, expressed in terms of the filtering error and the computational time cost. Note that, under the assumed smoothness conditions for the first two conditional moments of the state equation, the adaptive algorithm provides an adequate estimation of the local discretization errors of the approximate moments at each $$\tau _{n}\in (\tau )_{h}$$, and ensures that the relative and absolute errors of the approximate moments at $$\tau _{n}$$ are lower than the prearranged relative and absolute tolerances. This is done with a computational time cost that typically increases as the values of the tolerances decrease. Thus, for each filtering problem, adequate tolerance values should be carefully set in advance. In practical control engineering, these tolerances can be chosen by taking into account the level of accuracy required by the particular problem under consideration and the specific range of values of its state variables. Remarks: It is worth emphasizing that the initial stepsize $$h_{0}$$ is computed only once, at $$\tau _{0}\in \lbrack t_{0},t_{1}]$$. For other $$\tau _{n}\in \lbrack t_{k},t_{k+1}]$$ with $$n=n_{t_{k}}$$ and $$k>0$$, the value of the corresponding $$h_{n}$$ is set as $$h_{n}=h_{{\rm new}}$$, where the value $$h_{{\rm new}}$$ was estimated when the previous stepsize $$h_{n-1}$$ was accepted.
In Step 6, the constant values in the formula for the new stepsize $$\delta_{{\rm new}}(E)$$ were set according to the standard integration criterion oriented to reaching an adequate balance between accuracy and computational cost with the adaptive strategy (see, e.g., Hairer et al., 1993). These values might be adjusted to improve the filtering performance for some specific types of state equations. Contrary to other adaptive strategies for the stepsize selection (see, e.g., Szepessy et al., 2001), the strategy proposed here is deterministic and so it does not require additional simulations. Further note that, because of the flow property of the exponential operator, only two matrix exponentials need to be evaluated in Steps 3 and 4, instead of three. These two matrix exponentials can then be efficiently computed through the well-known Padé method for matrix exponentials (Moler & Van Loan, 2003) or, alternatively, by means of the Krylov subspace method (Moler & Van Loan, 2003) in the case of high-dimensional state equations. Moreover, low-order Padé and Krylov methods, as suggested in Jimenez & Carbonell (2015), can be used as well to reduce the computational cost while preserving the order $$\beta$$ of the LL filters. 5.3 Adaptive selection of the number of simulations As mentioned above, the statistical error of the Monte Carlo estimator is a function of both the number of simulations and the variance of the estimator. Based on this fact, an adaptive strategy that determines the appropriate number of simulations for achieving a prescribed accuracy was proposed in Szepessy et al. (2001). It can be easily adapted to the LL filter requirements as follows. With the averages computed at Step 9 of the previous subsection from $$S$$ simulations of $$\widetilde{\mathbf{y}}$$ between two consecutive observations $$t_{k}$$ and $$t_{k+1}$$, the estimation of a new value of $$S$$ proceeds through the following steps:

Step 1. Estimation of the statistical errors by \begin{equation*} E_{1}^{i}=c_{0}\sqrt{\overline{\mathbf{V}}_{t_{k+1}/t_{k}}^{ii}/S} \quad \text{and} \quad E_{2}^{i}=c_{0}\sqrt{\overline{\mathbf{L}}_{t_{k+1}/t_{k}}^{i}/S} \end{equation*} for all $$i=1,..,d$$, with $$c_{0}=1.65$$, where $$\overline{\mathbf{V}}_{t_{k+1}/t_{k}}=\overline{\mathbf{P}}_{t_{k+1}/t_{k}}-\overline{\mathbf{y}}_{t_{k+1}/t_{k}}\overline{\mathbf{y}}_{t_{k+1}/t_{k}}^{\intercal }$$.

Step 2. Control of the number of simulations: if $$\underset{i}{\rm {max}}\{E_{1}^{i}\}\leq {\rm {mtol}}_{\mathbf{y}}$$ and $$\underset{i}{\rm {max}}\{E_{2}^{i}\}\leq {\rm {mtol}}_{\mathbf{P}}$$, or $$S=S_{\max }$$, then stop. Here, $${\rm {mtol}}_{\mathbf{y}}$$ and $${\rm {mtol}}_{\mathbf{P}}$$ denote the required tolerances for the statistical errors of $$\overline{\mathbf{y}}_{t_{k+1}/t_{k}}^{i}$$ and $$\overline{\mathbf{V}}_{t_{k+1}/t_{k}}^{ii}$$.
Step 3. Estimation of a new number of simulations \begin{equation*} S_{{\rm new}}={\rm {max}}\{S_{1},S_{2},S+1\}, \end{equation*} where \begin{equation*} S_{1}=\min\{\underset{i}{{\rm {max}}}\{{\rm round}(C_{1}\overline{\mathbf{V}}_{t_{k+1}/t_{k}}^{ii})\},2S,S_{\max }\} \quad \text{and} \quad S_{2}=\min\{\underset{i}{{\rm {max}}}\{{\rm round}(C_{2}\overline{\mathbf{L}}_{t_{k+1}/t_{k}}^{i})\},2S,S_{\max }\}, \end{equation*} with $$C_{1}=(c_{0}/(0.95\,{\rm {mtol}}_{\mathbf{y}}))^{2}$$ and $$C_{2}=(c_{0}/(0.95\,{\rm {mtol}}_{\mathbf{P}}))^{2}$$, and return to Step 2 of the previous subsection with $$S=S_{{\rm new}}$$ and $$s=s+1$$. 5.4 Stochastic local linearization filter algorithm Starting with the initial filter values $$\overline{\mathbf{y}}_{t_{0}/t_{0}}=\mathbf{x}_{t_{0}/t_{0}}$$ and $$\overline{\mathbf{P}}_{t_{0}/t_{0}}=\mathbf{Q}_{t_{0}/t_{0}}$$, the adaptive LL filter algorithm performs the recursive computation of: the predictions $$\overline{\mathbf{y}}_{t_{k+1}/t_{k}}$$ and $$\overline{\mathbf{P}}_{t_{k+1}/t_{k}}$$ by means of the adaptive time-stepping strategy of Section 5.2, and the prediction variance by \begin{equation*} \overline{\mathbf{V}}_{t_{k+1}/t_{k}}=\overline{\mathbf{P}}_{t_{k+1}/t_{k}}-\overline{\mathbf{y}}_{t_{k+1}/t_{k}}\overline{\mathbf{y}}_{t_{k+1}/t_{k}}^{\intercal }; \end{equation*} the filters \begin{align*} \overline{\mathbf{y}}_{t_{k+1}/t_{k+1}}& =\overline{\mathbf{y}}_{t_{k+1}/t_{k}}+\overline{\mathbf{K}}_{t_{k+1}}\Big(\mathbf{z}_{t_{k+1}}-\mathbf{C}\overline{\mathbf{y}}_{t_{k+1}/t_{k}}\Big), \\ \overline{\mathbf{V}}_{t_{k+1}/t_{k+1}}& =\overline{\mathbf{V}}_{t_{k+1}/t_{k}}-\overline{\mathbf{K}}_{t_{k+1}}\mathbf{C}\overline{\mathbf{V}}_{t_{k+1}/t_{k}}, \\ \overline{\mathbf{P}}_{t_{k+1}/t_{k+1}}& =\overline{\mathbf{V}}_{t_{k+1}/t_{k+1}}+\overline{\mathbf{y}}_{t_{k+1}/t_{k+1}}\overline{\mathbf{y}}_{t_{k+1}/t_{k+1}}^{\intercal }, \end{align*} with filter gain \begin{equation*} \overline{\mathbf{K}}_{t_{k+1}}=\overline{\mathbf{V}}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal }\Big(\mathbf{C}\overline{\mathbf{V}}_{t_{k+1}/t_{k}}\mathbf{C}^{\intercal }+{\it{\Sigma}} _{t_{k+1}}\Big)^{-1}; \end{equation*} for each $$k$$, with $$k=0,1,\ldots ,M-2$$. 5.5 Deterministic local linearization filter algorithm Undoubtedly, the main drawback of the above stochastic filter algorithm is the large number of simulations that might eventually be needed for solving a specific filtering problem. In a variety of situations in which only approximate solutions of low or medium accuracy are required, an alternative filter algorithm can be defined by tracking the first two conditional moments of the Local Linear approximation (2.8) at various time instants in between two consecutive observations.
Indeed, if the values of the predictions $$\mathbf{y}_{t_{k+1}/t_{k}}$$ and $$\mathbf{P}_{t_{k+1}/t_{k}}$$ are computed from the recursive formulas \begin{equation} \mathbf{y}_{\tau _{n+1}/t_{k}}=\mu (\tau _{n},\mathbf{y}_{\tau _{n}/t_{k}};h_{n}) \quad \text{and} \quad \mathbf{P}_{\tau _{n+1}/t_{k}}={\it{\Lambda}} (\tau _{n},\mathbf{P}_{\tau _{n}/t_{k}};h_{n}), \quad \text{with} \,\, \tau _{n},\tau _{n+1}\in (\tau )_{h}\cap \lbrack t_{k},t_{k+1}], \end{equation} (5.3) the resulting deterministic filter is then defined by the expressions (2.9)–(2.13) with $$\left( \tau \right) _{h}\supset\{t\}_{M}$$, whose predictions are clearly defined by a different system of ODEs at each time instant $$\tau _{n}$$ in between two observation times $$t_{k}$$ and $$t_{k+1}$$. For this deterministic filter, the adaptive strategy for the stepsize selection of Section 5.2 remains valid with slight modifications. In Steps 3 and 4, the expressions for $$\widetilde{\mathbf{y}}_{\tau _{n+1}/t_{k}}$$, $$\widetilde{\mathbf{P}}_{\tau_{n+1}/t_{k}}$$ and $$\widetilde{\mathbf{y}}_{\tau ^{\ast }/t_{k}}$$, $$\widetilde{\mathbf{P}}_{\tau ^{\ast }/t_{k}}$$ are replaced by \begin{equation*} \widetilde{\mathbf{y}}_{\tau _{n+1}/t_{k}}=\mu (\tau _{n},\widetilde{\mathbf{y}}_{\tau _{n}/t_{k}};h_{n})\text{,} \quad \widetilde{\mathbf{P}}_{\tau _{n+1}/t_{k}}={\it{\Lambda}} (\tau_{n},\widetilde{\mathbf{P}}_{\tau _{n}/t_{k}};h_{n}) \end{equation*} and \begin{equation*} \widetilde{\mathbf{y}}_{\tau ^{\ast }/t_{k}}=\mu (\tau _{n},\widetilde{\mathbf{y}}_{\tau _{n}/t_{k}};h_{n}/2)\text{,} \quad \widetilde{\mathbf{P}}_{\tau ^{\ast }/t_{k}}={\it{\Lambda}} (\tau _{n},\widetilde{\mathbf{P}}_{\tau _{n}/t_{k}};h_{n}/2), \end{equation*} respectively. In Step 7, the computation of $$\widetilde{\mathbf{y}}^{\{s\}}(\tau _{n+1})$$ is substituted by \begin{equation*} \overline{\mathbf{y}}_{t_{k+1}/t_{k}}=\widehat{\mathbf{y}}_{\tau _{n+1}/t_{k}} \quad \text{and} \quad \overline{\mathbf{P}}_{t_{k+1}/t_{k}}=\widehat{\mathbf{P}}_{\tau _{n+1}/t_{k}}. \end{equation*} Steps 9 and 10 are obviously no longer needed, nor is the computation of $$\widetilde{\mathbf{y}}^{\{s\}}(\tau _{n})$$ at Step 2. In this situation, $$S_{0}=1$$. In this way, this adaptive strategy decides the time instants $$\tau _{n}$$ in between two observation times $$t_{k}$$ and $$t_{k+1}$$ at which a different system of ODEs for the predictions is required. With the exception of these modifications in the adaptive strategy and in the computation of the predictions $$\overline{\mathbf{y}}_{t_{k+1}/t_{k}}$$ and $$\overline{\mathbf{P}}_{t_{k+1}/t_{k}}$$, the deterministic LL filter algorithm coincides with the stochastic one of Section 5.4. Clearly, this deterministic filter algorithm is faster than the stochastic one, but it might not converge to the LMV filter as $$h$$ goes to zero since, in general, the predictions (5.3) do not satisfy condition (3.2) for non-linear state equations (see the next section for some examples). Note that, in the case that $$\left( \tau \right) _{h}\equiv \{t\}_{M}$$, the just-defined deterministic filter reduces to the conventional LL filter of Definition 2.3 for non-linear state equations with multiplicative noise, and to the classical Kalman filter for linear state equations with additive noise.
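For illustration purposes only, one prediction–update cycle of this deterministic filter can be sketched as follows, assuming that the conditional-moment maps $$\mu$$ and $${\it{\Lambda}}$$ of (5.2) are available as callables (with any dependence on the current linearization point handled inside them) and that the time partition between the two observations has already been fixed by the adaptive strategy; the function and variable names are merely illustrative.

```python
import numpy as np

def deterministic_ll_filter_step(mu, Lam, y_filt, P_filt, tau_grid, z_next, C, Sigma):
    """One cycle of the deterministic LL filter between observations t_k and t_{k+1}.

    mu, Lam  : conditional-moment maps of the piecewise linear equation (2.8), as in (5.2)
    tau_grid : time instants t_k = tau_0 < ... < tau_N = t_{k+1}
    z_next   : observation z_{t_{k+1}};  C, Sigma : observation matrix and noise covariance
    """
    # prediction: recursive evaluation of the moment formulas (5.3) on the partition
    y, P = y_filt.copy(), P_filt.copy()
    for n in range(len(tau_grid) - 1):
        tau, h = tau_grid[n], tau_grid[n + 1] - tau_grid[n]
        y, P = mu(tau, y, h), Lam(tau, P, h)
    V = P - np.outer(y, y)                              # prediction variance

    # update: LMV (Kalman-type) correction with the observation z_{t_{k+1}}
    K = V @ C.T @ np.linalg.inv(C @ V @ C.T + Sigma)    # filter gain
    y_new = y + K @ (z_next - C @ y)                    # filter mean
    V_new = V - K @ C @ V                               # filter variance
    P_new = V_new + np.outer(y_new, y_new)              # filter second moment
    return y_new, P_new
```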
Further note that, as a main difference with the approximate polynomial-type filters mentioned at the end of Section 2.1, the deterministic filter proposed here involves the computation of more than one system of ODEs for the predictions between two consecutive observations (instead of the single system required by those polynomial-type filters). It is also clear that, whereas the main role of adaptive ODE solvers such as those considered in Kulikov & Kulikova (2014) (and references therein) is the accurate computation of the predictions between observations, the purpose of the adaptive strategy described above is to modify the equation for the predictions between observations. In the next section, the advantage of these new tracking and adaptive strategies for computing the predictions is shown by comparing the performance of the conventional LL filter with that of the deterministic LL filter on uniform and adaptive time partitions $$\left( \tau \right) _{h}\supset \{t\}_{M}$$. 6. Numerical experiments In this section, the performance of the approximate LMV filters introduced in this article is illustrated by means of numerical experiments. To do so, a variety of state space models with different kinds of complexity was chosen. This includes linear and non-linear state equations with additive or multiplicative noise that can be autonomous or not. The situation of a reduced number of partial or complete observations distant in time is considered in the examples. The exact LMV filter estimates are compared with those obtained by: (1) the conventional LL filter (Definition 2.3); (2) the deterministic LL filter with uniform and adaptive time partitions (Section 5.5); and (3) the stochastic LL filter with adaptive time partition and number of simulations (Section 5.4). In what follows, these four approximate filters will be denoted by LLF, LLF($$h$$), LLF($$\cdot$$) and SLLF($${\rm tol}$$), respectively, where $$h$$ is the stepsize of the uniform time partition, and $${\rm tol}$$ is the tolerance for the statistical error of the first two moments. In what follows, $$h_{\max }^{k}=t_{k+1}-t_{k}$$, $$h_{\min }=2\,{\rm eps}$$, $$S_{0}=128$$, $${\rm {mtol}}_{\mathbf{y}}={\rm {mtol}}_{\mathbf{P}}={\rm tol}$$, $${\rm rtol}_{\mathbf{y}}={\rm rtol}_{\mathbf{P}}={\rm rtol}$$ and $${\rm atol}_{\mathbf{y}}={\rm atol}_{\mathbf{P}}={\rm atol}$$, where $${\rm eps}=2^{-52}$$ is the spacing of floating point numbers. The observation noise variance $${\it{\Sigma}} _{t_{k}}$$ is assumed to have the same constant value $${\it{\Sigma}}$$ for all $$k$$. 6.1 Test examples The state space models to be considered are the following. Example 1 State equation with multiplicative noise \begin{equation} dx=atxdt+\sigma \sqrt{t}xdw \label{SE EJ1} \end{equation} (6.1) and observation equation \begin{equation} z_{t_{k}}=x(t_{k})+e_{t_{k}},\text{ for }k=0,1,..,M-1 \label{OE EJ1} \end{equation} (6.2) with $$a=-0.1$$, $$\sigma =0.1$$, $${\it{\Sigma}} =0.0001$$, $$t_{0}=0.5$$, $$x_{t_{0}/t_{0}}=1$$ and $$Q_{t_{0}/t_{0}}=1$$. For this state equation, the predictions for the first two moments are \begin{equation*} x_{t_{k+1}/t_{k}}=x_{t_{k}/t_{k}}e^{a(t_{k+1}^{2}-t_{k}^{2})/2}\,\,\text{and}\,\,Q_{t_{k+1}/t_{k}}=Q_{t_{k}/t_{k}}e^{(a+\sigma ^{2}/2)(t_{k+1}^{2}-t_{k}^{2})}, \end{equation*} where the filters $$x_{t_{k+1}/t_{k+1}}$$ and $$Q_{t_{k+1}/t_{k+1}}$$ are obtained from (2.5) and (2.6) for all $$k=0,1,..,M-2$$.
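For illustration purposes only, the prediction step of this example can be coded directly from the above closed-form moments; in the following sketch, the function name is illustrative and the default parameter values are those of the example.

```python
import math

def example1_predictions(x_f, Q_f, t_k, t_k1, a=-0.1, sigma=0.1):
    """Exact prediction moments for the state equation (6.1),
    dx = a*t*x dt + sigma*sqrt(t)*x dw, between the times t_k and t_{k+1}."""
    dt2 = t_k1**2 - t_k**2
    x_pred = x_f * math.exp(a * dt2 / 2.0)               # first conditional moment
    Q_pred = Q_f * math.exp((a + sigma**2 / 2.0) * dt2)  # second conditional moment
    return x_pred, Q_pred

# e.g., with the values used in Section 6: x_{t0/t0} = 1, Q_{t0/t0} = 1, t0 = 0.5, t1 = 1.5
x_pred, Q_pred = example1_predictions(1.0, 1.0, 0.5, 1.5)
```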
Example 2 State equation with two additive noise \begin{equation} dx=atxdt+\sigma _{1}t^{p}e^{at^{2}/2}dw_{1}+\sigma _{2}\sqrt{t}dw_{2} \label{SE EJ2} \end{equation} (6.3) and observation equation \begin{equation} z_{t_{k}}=x(t_{k})+e_{t_{k}},\text{ for }k=0,1,..,M-1 \label{OE EJ2} \end{equation} (6.4) with $$a=-0.25$$, $$p=2$$, $$\sigma _{1}=5,$$$$\sigma _{2}=0.1$$, $${\it{\Sigma}} =0.0001$$, $$t_{0}=0.01$$, $$x_{t_{0}/t_{0}}=10$$ and $$Q_{t_{0}/t_{0}}=100$$. For this state equation, the predictions for the first two moments are \begin{equation*} x_{t_{k+1}/t_{k}}=x_{t_{k}/t_{k}}e^{a(t_{k+1}^{2}-t_{k}^{2})/2}\text{ } \end{equation*} and \begin{equation*} Q_{t_{k+1}/t_{k}}=\left(Q_{t_{k}/t_{k}}+\frac{\sigma _{2}^{2}}{2a} \right)e^{a(t_{k+1}^{2}-t_{k}^{2})}+\frac{\sigma _{1}^{2}}{2p+1} (t_{k+1}^{2p+1}-t_{k}^{2p+1})e^{at_{k+1}^{2}}-\frac{\sigma _{2}^{2}}{2a}, \end{equation*} where the filters $$x_{t_{k+1}/t_{k+1}}$$ and $$Q_{t_{k+1}/t_{k+1}}$$ are obtained from (2.5) and (2.6) for all $$k=0,1,..,M-2$$. Example 3 Cox–Ingersoll–Ross (CIR) model of interest rates (Cox et al., 1985) \begin{equation} dx=(a+\beta x)dt+\sigma \sqrt{x}dw \label{SE CIR} \end{equation} (6.5) and observation equation \begin{equation} z_{t_{k}}=x(t_{k})+e_{t_{k}},\text{ for }k=0,1,..,M-1 \label{OE CIR} \end{equation} (6.6) with $$a=1$$, $$\beta =-0.5$$, $$\sigma =0.25$$, $${\it{\Sigma}} =0.05$$, $$t_{0}=0$$, $$ x_{t_{0}/t_{0}}=20$$ and $$Q_{t_{0}/t_{0}}=400$$. For this state equation, the predictions for the first two moments are \begin{equation*} x_{t_{k+1}/t_{k}}=x_{t_{k}/t_{k}}e^{\beta (t_{k+1}-t_{k})}+\text{ }\frac{\alpha }{ \beta }(e^{\beta (t_{k+1}-t_{k})}-1) \end{equation*} and \begin{equation*} Q_{t_{k+1}/t_{k}}=\frac{2\alpha +\sigma ^{2}}{2\beta ^{2}}\left\{ (\alpha +2\beta x_{t_{k}/t_{k}}+\frac{2\beta ^{2}}{2\alpha +\sigma ^{2}} Q_{t_{k}/t_{k}})e^{2\beta (t_{k+1}-t_{k})}-2(\alpha +\beta x_{t_{k}/t_{k}})e^{\beta (t_{k+1}-t_{k})}+\alpha \right\} \end{equation*} where the filters $$x_{t_{k+1}/t_{k+1}}$$ and $$Q_{t_{k+1}/t_{k+1}}$$ are obtained from (2.5) and (2.6) for all $$k=0,1,..,M-2$$. Example 4 Van der Pol oscillator with random frequency (Gitterman, 2005) \begin{align} dx_{1}& =x_{2}dt \label{SEa EJ4} \\ \end{align} (6.7) \begin{align} dx_{2}& =(-\epsilon (x_{1}^{2}-1)x_{2}-\varpi x_{1})dt+\rho x_{1}dw \label{SEb EJ4} \end{align} (6.8) and observation equation \begin{equation} z_{t_{k}}=x_{1}(t_{k})+e_{t_{k}},\text{ for }k=0,1,..,M-1, \label{OE EJ4} \end{equation} (6.9) where $$\varpi =1$$, $$\epsilon =1$$ and $$\rho ^{2}=1$$ are the frequency mean value and variance, respectively. In addition, $${\it{\Sigma}} =0.001$$, $$t_{0}=0$$, $$ \mathbf{x}_{t_{0}/t_{0}}^{\intercal }=[1$$$$1]$$ and $$\mathbf{Q}_{t_{0}/t_{0}}= \mathbf{x}_{t_{0}/t_{0}}\mathbf{x}_{t_{0}/t_{0}}^{\intercal }$$. Example 5 Van der Pol oscillator with random force (Gitterman, 2005) \begin{align} dx_{1}& =x_{2}dt \label{SEa EJ3} \\ \end{align} (6.10) \begin{align} dx_{2}& =(-\epsilon (x_{1}^{2}-1)x_{2}-x_{1}+a)dt+\sigma dw \label{SEb EJ3} \end{align} (6.11) and observation equation \begin{equation} z_{t_{k}}=x_{1}(t_{k})+e_{t_{k}},\text{ for }k=0,1,..,M-1, \label{OE EJ3} \end{equation} (6.12) where $$a=0.5$$, $$\epsilon =1$$ and $$\sigma ^{2}=(0.75)^{2}$$ are the intensity and the variance of the random input, respectively. In addition, $$t_{0}=0$$, $$ {\it{\Sigma}} =0.001$$, $$\mathbf{x}_{t_{0}/t_{0}}^{\intercal }=[1$$$$1]$$ and $$\mathbf{ Q}_{t_{0}/t_{0}}=\mathbf{x}_{t_{0}/t_{0}}\mathbf{x}_{t_{0}/t_{0}}^{\intercal }$$. 
Neglecting the third- and higher-order moments, a suitable approximation $$m$$ and $$V$$ for the prediction mean and variance of these Van der Pol oscillators is given by the solution of the ODEs \begin{gather*} \frac{dm_{1}}{dt}=m_{2},\qquad\frac{dm_{2}}{dt}=a-\varpi m_{1}+\epsilon (m_{2}-m_{2}m_{3}-V_{23}),\qquad\frac{dm_{3}}{dt} =2m_{1}m_{2}+2V_{12},\\ \frac{dV_{11}}{dt}=2V_{12},\qquad\frac{dV_{22}}{dt}=\sigma ^{2}-2\varpi V_{12}+2\epsilon (V_{22}-m_{3}V_{22}-m_{2}V_{23})+\rho ^{2}(V_{11}+m_{1}^{2}),\\ \frac{dV_{12}}{dt}=-\varpi V_{11}+V_{22}+\epsilon (V_{12}-m_{3}V_{12}-m_{2}V_{13}),\qquad\frac{dV_{13}}{dt} =2m_{2}V_{11}+V_{23}+2m_{1}V_{12},\\ \frac{dV_{33}}{dt}=4(m_{2}V_{13}+m_{1}V_{23}),\qquad\frac{dV_{23}}{dt} =-\varpi V_{13}+2(m_{2}V_{12}+m_{1}V_{22})+\epsilon (V_{23}-m_{3}V_{23}-m_{2}V_{33}), \end{gather*} which can be easily derived from (2.3)–(2.4), as indicated in Basin (2008) for polynomial equations, by adding a third state equation corresponding to the auxiliary variable $$x_{3}=x_{1}^{2}$$. In what follows, the numerical solution of these equations obtained by the LLDP scheme (Jimenez et al., 2014) with refined tolerance will be considered as the ‘exact’ prediction mean and variance of the Van der Pol oscillators. 6.2 Experiments with a single realization of the models For each example, a realization of the state equation solution was computed by means of the Euler–Maruyama scheme (Kloeden & Platen, 1995) over the fine time partition $$\{t_{0}+n\delta :\delta =10^{-4},n=0,..,9\times 10^{4}\}$$ on the interval $$[t_{0},t_{0}+9]$$, and a subsample of each realization at the time instants $$\{t\}_{M=10}=\{t_{k}=t_{0}+k:$$ $$k=0,..,M-1\}$$ was taken to evaluate the corresponding observation equation. In this way, a time series $$\{z_{t_{k}}\}_{k=0,..,M-1}$$ of only $$10$$ values is available as given data for each state space example. For each time series of the five models, the values of the exact and the approximate LMV filters were computed at the $$M$$ observation times $$t_{k}$$, as well as the errors $$\underset{k}{{\rm {max}}}\left\vert \mathbf{x}_{t_{k}/t_{k}}^{i}-\overline{\mathbf{y}}_{t_{k}/t_{k}}^{i}\right\vert$$ and $$\underset{k}{{\rm {max}}}\left\vert \mathbf{U}_{t_{k}/t_{k}}^{i,j}-\overline{\mathbf{V}}_{t_{k}/t_{k}}^{i,j}\right\vert$$ between each approximate filter and the exact one, for all $$i,j=1,..,d$$. Table 1 shows these errors for the two linear models. For the model with multiplicative noise (6.1)–(6.2), the LLF($$h$$) was computed on three uniform time partitions with $$h=2^{-5},2^{-7},2^{-9}$$, the adaptive LLF($$\cdot$$) was computed with relative and absolute tolerances $${\rm rtol}=10^{-6}$$ and $${\rm atol}=10^{-9}$$, whereas the SLLF($${\rm tol}$$) was computed for the three values of $${\rm tol}=10^{-2},10^{-3},10^{-4}$$, but with the same values $${\rm rtol}=10^{-6}$$ and $${\rm atol}=10^{-9}$$ of the adaptive LLF($$\cdot$$). For the model with additive noise (6.3)–(6.4), similar computations were carried out, but with $$h=2^{-3},2^{-5},2^{-7}$$, $${\rm rtol}=10^{-4}$$, $${\rm atol}=10^{-7}$$ and $${\rm tol}=1,10^{-1},10^{-2}$$. The table also shows the lower ($$L$$), higher ($$H$$) and total ($$T$$) number of accepted ($$A$$) and failed ($$F$$) stepsizes of the adaptive filters, as well as the lower ($$L$$), higher ($$H$$) and total ($$T$$) number of simulations ($$S$$) of the stochastic filter.
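For illustration purposes only, this data-generation procedure can be sketched for Example 1 as follows, taking the realization to start at $$x(t_{0})=1$$ and drawing Gaussian observation noise with variance $${\it{\Sigma}}$$; the function and variable names are merely illustrative.

```python
import numpy as np

def simulate_example1_data(rng, a=-0.1, sigma=0.1, Sigma=1e-4, t0=0.5, x0=1.0,
                           delta=1e-4, n_steps=90_000, M=10):
    """Euler–Maruyama realization of (6.1) on the fine grid {t0 + n*delta} and
    noisy observations z_{t_k} = x(t_k) + e_{t_k} at the M times t_k = t0 + k."""
    t = t0 + delta * np.arange(n_steps + 1)
    x = np.empty(n_steps + 1)
    x[0] = x0
    dw = np.sqrt(delta) * rng.standard_normal(n_steps)        # Wiener increments
    for n in range(n_steps):
        x[n + 1] = x[n] + a * t[n] * x[n] * delta + sigma * np.sqrt(t[n]) * x[n] * dw[n]
    # subsample the realization at the observation times t_k = t0 + k, k = 0, ..., M-1
    obs_idx = np.round(np.arange(M) / delta).astype(int)
    z = x[obs_idx] + np.sqrt(Sigma) * rng.standard_normal(M)  # observation equation (6.2)
    return t[obs_idx], z

t_obs, z_obs = simulate_example1_data(np.random.default_rng(0))
```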
For these two linear models, the accuracy of the new approximate filters is always much better than that of the conventional LL filter. The accuracy of the filtering increases as the stepsize $$h$$ of the LLF($$h$$) decreases. For roughly the same total number of accepted steps $$TA$$, the adaptive LLF($$\cdot$$) is much more accurate than the filter with uniform time partition. The accuracy of the stochastic filter increases with the number of simulations, but it is always lower than that of the adaptive deterministic filter for the specified tolerances.

Table 1 Errors between the exact LMV filter of the two linear models and their corresponding approximations obtained by the conventional LL filter, the deterministic LL filter with uniform and adaptive time partitions, and the stochastic LL filter with adaptive time partition and number of simulations. Lower ($$L$$), higher ($$H$$) and total ($$T$$) number of accepted ($$A$$) or failed ($$F$$) stepsizes, or number of simulations ($$S$$).

\begin{tabular}{@{}lccccccccccc@{}}
Linear Mul & $$x$$ ($$\times 10^{-3}$$) & $$U$$ ($$\times 10^{-6}$$) & LA & HA & TA & LF & HF & TF & LS ($$\times 10^{3}$$) & HS ($$\times 10^{4}$$) & TS ($$\times 10^{5}$$) \\
LLF & 5.104 & 5.266 & 1 & 1 & 9 & — & — & — & — & — & — \\
LLF($$2^{-5}$$) & 0.036 & 0.049 & 32 & 32 & 288 & — & — & — & — & — & — \\
LLF($$2^{-7}$$) & 0.009 & 0.012 & 128 & 128 & 1152 & — & — & — & — & — & — \\
LLF($$2^{-9}$$) & 0.002 & 0.003 & 512 & 512 & 4608 & — & — & — & — & — & — \\
LLF($$\cdot$$) & 0.006 & 0.010 & 53 & 90 & 688 & 0 & 1 & 5 & — & — & — \\
SLLF($$10^{-2}$$) & 0.627 & 4.177 & 42 & 99 & 774 & 0 & 1 & 1 & 0.13 & 0.09 & 0.03 \\
SLLF($$10^{-3}$$) & 0.514 & 2.897 & 43 & 91 & 738 & 0 & 1 & 1 & 0.13 & 8.43 & 1.88 \\
SLLF($$10^{-4}$$) & 0.033 & 0.043 & 40 & 91 & 727 & 0 & 1 & 1 & 3.46 & 832 & 186 \\
\end{tabular}

\begin{tabular}{@{}lccccccccccc@{}}
Linear Add & $$x$$ ($$\times 10^{-4}$$) & $$U$$ ($$\times 10^{-7}$$) & LA & HA & TA & LF & HF & TF & LS ($$\times 10^{2}$$) & HS ($$\times 10^{5}$$) & TS ($$\times 10^{5}$$) \\
LLF & 51.970 & 1.064 & 1 & 1 & 9 & — & — & — & — & — & — \\
LLF($$2^{-3}$$) & 0.054 & 0.035 & 8 & 8 & 72 & — & — & — & — & — & — \\
LLF($$2^{-5}$$) & 0.013 & 0.008 & 32 & 32 & 288 & — & — & — & — & — & — \\
LLF($$2^{-7}$$) & 0.003 & 0.002 & 128 & 128 & 1152 & — & — & — & — & — & — \\
LLF($$\cdot$$) & 0.005 & 0.004 & 13 & 41 & 233 & 1 & 3 & 14 & — & — & — \\
SLLF(1) & 0.976 & 0.659 & 13 & 37 & 226 & 3 & 15 & 70 & 1.29 & 1.16 & 2.47 \\
SLLF(0.1) & 0.274 & 0.657 & 13 & 37 & 226 & 3 & 15 & 70 & 1.29 & 115 & 243 \\
SLLF(0.01) & 0.071 & 0.078 & 13 & 35 & 231 & 3 & 14 & 69 & 6.02 & 11486 & 24333 \\
\end{tabular}

For the three examples with non-linear state equations, a similar error analysis is reported in Tables 2 and 3. For the CIR model (6.5)–(6.6), the LLF($$h$$) was computed on three uniform time partitions with $$h=2^{-2},2^{-4},2^{-6}$$, the adaptive LLF($$\cdot$$) was computed once with relative and absolute tolerances $${\rm rtol}=10^{-6}$$ and $${\rm atol}=10^{-9}$$, whereas the SLLF($${\rm tol}$$) was computed for the three values of $${\rm tol}=10^{-1},10^{-2},10^{-3}$$, but with the same values of $${\rm rtol}$$ and $${\rm atol}$$. For the Van der Pol models (6.7)–(6.9) and (6.10)–(6.12), similar computations were carried out, but with $$h=2^{-4},2^{-5}$$, $${\rm rtol}=10^{-3}$$, $${\rm atol}=10^{-6}$$ and $${\rm tol}=10^{-2}$$. Clearly, for the CIR model (with only one state equation), both the deterministic and the stochastic filters provide much better approximations than those given by the classical LL filter. A similar result holds for the non-observable state variable of the Van der Pol models, whereas for the observable one the approximation obtained by the conventional LL filter is quite similar (slightly better or worse). Contrary to the linear models, the accuracy of the deterministic filter with uniform time partition remains almost the same for stepsizes smaller than a certain value.

Table 2 Errors between the exact LMV filter of the CIR model (6.5)–(6.6) and its corresponding approximations obtained by the conventional LL filter, the deterministic LL filter with uniform and adaptive time partitions, and the stochastic LL filter with adaptive time partition and number of simulations. Lower ($$L$$), higher ($$H$$) and total ($$T$$) number of accepted ($$A$$) or failed ($$F$$) stepsizes, or number of simulations ($$S$$).
CIR | $$x$$ ($$\times 10^{-3}$$) | $$U$$ ($$\times 10^{-4}$$) | LA | HA | TA | LF | HF | TF | LS ($$\times 10^{3}$$) | HS ($$\times 10^{5}$$) | TS ($$\times 10^{5}$$)
LLF | 3.721 | 0.823 | 1 | 1 | 9 | — | — | — | — | — | —
LLF($$2^{-2}$$) | 0.295 | 0.336 | 4 | 4 | 36 | — | — | — | — | — | —
LLF($$2^{-4}$$) | 0.280 | 0.333 | 16 | 16 | 144 | — | — | — | — | — | —
LLF($$2^{-6}$$) | 0.279 | 0.333 | 64 | 64 | 576 | — | — | — | — | — | —
LLF($$\cdot$$) | 0.280 | 0.333 | 5 | 11 | 58 | 0 | 2 | 7 | — | — | —
SLLF($$10^{-1}$$) | 8.227 | 7.359 | 5 | 7 | 47 | 0 | 1 | 7 | 0.88 | 1.27 | 2.31
SLLF($$10^{-2}$$) | 0.686 | 1.392 | 5 | 7 | 47 | 0 | 1 | 7 | 87.9 | 125 | 228
SLLF($$10^{-3}$$) | 0.266 | 0.177 | 5 | 7 | 47 | 0 | 1 | 7 | 8663 | 12488 | 22760

Table 3 Errors between the exact LMV filter of the two Van der Pol models and their corresponding approximations obtained by the conventional LL filter, the deterministic LL filter with uniform and adaptive time partitions, and the stochastic LL filter with adaptive time partition and number of simulations. Lower ($$L$$), higher ($$H$$) and total ($$T$$) number of accepted ($$A$$) or failed ($$F$$) stepsizes or number of simulations ($$S$$).
VDP, Mul | $${x}^{1}$$ ($$\times 10^{-2}$$) | $${x}^{2}$$ | $${U}^{11}$$ ($$\times 10^{-5}$$) | $${U}^{22}$$ | $${U}^{12}$$ ($$\times 10^{-3}$$) | LA | HA | TA | LF | HF | TF | LS ($$\times 10^{5}$$) | HS ($$\times 10^{5}$$) | TS ($$\times 10^{5}$$)
LLF | 2.82 | 1.99 | 3.70 | 13.94 | 3.42 | 1 | 1 | 9 | — | — | — | — | — | —
LLF($$2^{-4}$$) | 3.31 | 0.98 | 4.13 | 1.36 | 1.83 | 16 | 16 | 144 | — | — | — | — | — | —
LLF($$2^{-5}$$) | 3.32 | 0.93 | 4.14 | 1.36 | 1.16 | 32 | 32 | 288 | — | — | — | — | — | —
LLF($$\cdot$$) | 3.34 | 1.11 | 4.15 | 1.36 | 1.14 | 30 | 119 | 545 | 0 | 9 | 34 | — | — | —
SLLF($$10^{-2}$$) | 3.31 | 0.85 | 4.11 | 1.37 | 1.14 | 19 | 64 | 348 | 2 | 9 | 41 | 8 | 258 | 597

VDP, Add | $${x}^{1}$$ ($$\times 10^{-3}$$) | $${x}^{2}$$ | $${U}^{11}$$ ($$\times 10^{-5}$$) | $${U}^{22}$$ | $${U}^{12}$$ ($$\times 10^{-3}$$) | LA | HA | TA | LF | HF | TF | LS ($$\times 10^{5}$$) | HS ($$\times 10^{5}$$) | TS ($$\times 10^{5}$$)
LLF | 6.30 | 2.05 | 1.86 | 0.20 | 1.46 | 1 | 1 | 9 | — | — | — | — | — | —
LLF($$2^{-4}$$) | 5.23 | 1.58 | 2.34 | 0.08 | 1.45 | 16 | 16 | 144 | — | — | — | — | — | —
LLF($$2^{-5}$$) | 5.87 | 1.50 | 2.48 | 0.08 | 1.42 | 32 | 32 | 288 | — | — | — | — | — | —
LLF($$\cdot$$) | 6.26 | 1.44 | 2.53 | 0.08 | 1.40 | 17 | 91 | 377 | 0 | 7 | 34 | — | — | —
SLLF($$10^{-2}$$) | 3.56 | 0.28 | 0.69 | 0.13 | 0.39 | 16 | 39 | 235 | 0 | 0 | 0 | 2 | 53 | 177
In order to obtain a better characterization of the deterministic LL filters, a more precise error analysis (see, e.g., Kloeden & Platen, 1995; Carbonell et al., 2006) will be given in the next subsection.

6.3 Experiments with multiple realizations of the models

For each example, $$2000$$ realizations as specified in the previous subsection were computed. Therefore, $$2000$$ time series $$\{z_{t_{k}}^{r}\}_{k=0,..,M-1}$$, with $$r=1,..,2000$$, of $$M=10$$ values each are now available for every state space example. For each time series of the five models, the values of the exact LMV filter and of the deterministic LL filters were computed at the $$M$$ observation times $$t_{k}$$. For each time series $$\{z_{t_{k}}^{r}\}_{k=0,..,M-1}$$, the errors $$\left\vert \mathbf{x}_{t_{k}/t_{k}}^{i}-\overline{\mathbf{y}}_{t_{k}/t_{k}}^{i}\right\vert$$ and $$\left\vert \mathbf{U}_{t_{k}/t_{k}}^{i,j}-\overline{\mathbf{V}}_{t_{k}/t_{k}}^{i,j}\right\vert$$ between the approximate deterministic filters and the exact one were computed for all $$k=1,..,M-1$$ and $$i,j=1,..,d$$. The $$2000$$ errors of each type were arranged into $$L=20$$ batches of $$K=100$$ values each, denoted by $$\widehat{e}_{l,j}$$, $$l=1,..,L$$; $$j=1,...,K$$. Then, the sample mean of the $$l$$th batch and of all batches can be computed as \begin{equation*} \widehat{e}_{l}=\frac{1}{K}\sum\limits_{j=1}^{K}\widehat{e}_{l,j},\text{ and }\widehat{e}=\frac{1}{L}\sum\limits_{l=1}^{L}\widehat{e}_{l}, \end{equation*} respectively. The confidence interval for each type of error is computed as \begin{equation*} \lbrack \widehat{e}-{\it{\Delta}} ,\widehat{e}+{\it{\Delta}} ], \end{equation*} where \begin{equation*} {\it{\Delta}} =t_{1-\alpha /2,L-1}\sqrt{\frac{\widehat{\sigma }_{e}^{2}}{L}},\text{ }\widehat{\sigma }_{e}^{2}=\frac{1}{L-1}\sum\limits_{i=1}^{L}\left\vert \widehat{e}_{i}-\widehat{e}\right\vert ^{2}, \end{equation*} and $$t_{1-\alpha /2,L-1}$$ denotes the $$1-\alpha /2$$ percentile of the Student's $$t$$ distribution with $$L-1$$ degrees of freedom, for the significance level $$0<\alpha <1$$. The 90% confidence interval (i.e., the values $${\it{\Delta}}$$ for $$\alpha =0.1$$) was chosen.
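This batch-means construction is straightforward to reproduce. The following is a minimal sketch, assuming the $$2000$$ errors of one type are stored in a NumPy array; the function name batch_confidence_interval and its arguments are illustrative and not taken from the article.

```python
import numpy as np
from scipy.stats import t as student_t

def batch_confidence_interval(errors, n_batches=20, alpha=0.1):
    """Batch-means estimate of the mean error and its confidence half-width.

    errors: 1-D array with the errors of one type (hypothetical input).
    Returns (e_hat, delta), so the interval is [e_hat - delta, e_hat + delta].
    """
    errors = np.asarray(errors, dtype=float)
    K = errors.size // n_batches                      # values per batch (K = 100 here)
    batches = errors[: n_batches * K].reshape(n_batches, K)
    e_l = batches.mean(axis=1)                        # sample mean of each batch
    e_hat = e_l.mean()                                # overall mean over the L batches
    sigma2 = np.sum(np.abs(e_l - e_hat) ** 2) / (n_batches - 1)
    # 1 - alpha/2 percentile of Student's t with L - 1 degrees of freedom
    t_quantile = student_t.ppf(1.0 - alpha / 2.0, df=n_batches - 1)
    delta = t_quantile * np.sqrt(sigma2 / n_batches)
    return e_hat, delta
```

With alpha = 0.1 the returned half-width delta corresponds to the 90% confidence interval reported in the tables below.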
For the linear models (6.1)–(6.2) and (6.3)–(6.4), Tables 4 and 5 show the confidence limits for the errors between the exact LMV filter and the deterministic LL filter on the uniform time discretization $$\left( \tau \right) _{h}^{u}=\{\tau _{n}=t_{0}+nh:$$ $$n=0,..,(M-1)/h\}\supset \{t\}_{M}$$ with $$h=1/64,1/128,1/256,1/512$$. Note that, as in the case of a single experiment, the accuracy of the LL filter on the uniform discretizations $$\left( \tau \right) _{h}^{u}$$ improves as $$h$$ decreases. Moreover, these tables display the estimated order $$\widehat{\beta }$$ of weak convergence, which was obtained as the slope of the straight line fitted to the set of four points $$\{\log _{2}(h_{j})$$, $$\log _{2}(\widehat{e}(h_{j}))\}_{j=1,..,4}$$ taken from the corresponding errors (a sketch of this least-squares fit is given after Table 5). That is, for these two linear models, the deterministic LL filter converges to the exact LMV filter and preserves the convergence rate of the stochastic order-$$1$$ LL filter given in Theorem 4.2.

Table 4 Confidence limits for the errors between the exact LMV filter of the linear model (6.1)–(6.2) and the deterministic LL filter on the uniform time partition $$\left( \tau\right) _{h}^{u}$$ with different values of $$h$$. Order $$\widehat{\beta}$$ of weak convergence estimated from the errors

$${x}_{t_{k}/t_{k}}$$ | $$h=1/64$$ | $$h=1/128$$ | $$h=1/256$$ | $$h=1/512$$ | $$\widehat{\beta }$$
t_1/t_1 | 1.36±0.03 ×10^{-5} | 6.73±0.13 ×10^{-6} | 3.35±0.06 ×10^{-6} | 1.67±0.03 ×10^{-6} | 1.00
t_2/t_2 | 5.35±0.11 ×10^{-6} | 2.66±0.06 ×10^{-6} | 1.33±0.03 ×10^{-6} | 6.64±0.14 ×10^{-7} | 1.00
t_3/t_3 | 3.65±0.06 ×10^{-6} | 1.82±0.03 ×10^{-6} | 9.09±0.16 ×10^{-7} | 4.54±0.08 ×10^{-7} | 1.00
t_4/t_4 | 3.32±0.10 ×10^{-6} | 1.66±0.05 ×10^{-6} | 8.28±0.25 ×10^{-7} | 4.14±0.12 ×10^{-7} | 1.00
t_5/t_5 | 3.54±0.09 ×10^{-6} | 1.77±0.04 ×10^{-6} | 8.82±0.22 ×10^{-7} | 4.41±0.11 ×10^{-7} | 1.00
t_6/t_6 | 3.98±0.09 ×10^{-6} | 1.98±0.05 ×10^{-6} | 9.91±0.23 ×10^{-7} | 4.95±0.12 ×10^{-7} | 1.00
t_7/t_7 | 3.42±0.11 ×10^{-6} | 1.71±0.05 ×10^{-6} | 8.52±0.26 ×10^{-7} | 4.26±0.13 ×10^{-7} | 1.00
t_8/t_8 | 2.00±0.05 ×10^{-6} | 9.96±0.26 ×10^{-7} | 4.98±0.13 ×10^{-7} | 2.49±0.06 ×10^{-7} | 1.01
t_9/t_9 | 8.34±0.33 ×10^{-7} | 4.17±0.16 ×10^{-7} | 2.09±0.08 ×10^{-7} | 1.05±0.04 ×10^{-7} | 1.01

$${U}_{t_{k}/t_{k}}$$ | $$h=1/64$$ | $$h=1/128$$ | $$h=1/256$$ | $$h=1/512$$ | $$\widehat{\beta }$$
t_1/t_1 | 2.47±0.06 ×10^{-5} | 1.23±0.03 ×10^{-5} | 6.12±0.14 ×10^{-6} | 3.05±0.07 ×10^{-6} | 1.01
t_2/t_2 | 8.08±0.20 ×10^{-6} | 4.03±0.10 ×10^{-6} | 2.01±0.05 ×10^{-6} | 1.00±0.03 ×10^{-6} | 1.00
t_3/t_3 | 4.06±0.11 ×10^{-6} | 2.03±0.06 ×10^{-6} | 1.01±0.03 ×10^{-6} | 5.05±0.14 ×10^{-7} | 1.00
t_4/t_4 | 2.36±0.07 ×10^{-6} | 1.18±0.03 ×10^{-6} | 5.87±0.17 ×10^{-7} | 2.93±0.08 ×10^{-7} | 1.00
t_5/t_5 | 1.52±0.05 ×10^{-6} | 7.60±0.24 ×10^{-7} | 3.78±0.12 ×10^{-7} | 1.89±0.06 ×10^{-7} | 1.00
t_6/t_6 | 9.36±0.30 ×10^{-7} | 4.66±0.15 ×10^{-7} | 2.33±0.07 ×10^{-7} | 1.16±0.04 ×10^{-7} | 1.00
t_7/t_7 | 4.49±0.22 ×10^{-7} | 2.23±0.10 ×10^{-7} | 1.11±0.05 ×10^{-7} | 5.55±0.27 ×10^{-8} | 1.00
t_8/t_8 | 1.32±0.07 ×10^{-7} | 6.55±0.35 ×10^{-8} | 3.26±0.17 ×10^{-8} | 1.63±0.09 ×10^{-8} | 1.01
t_9/t_9 | 2.42±0.19 ×10^{-8} | 1.19±0.09 ×10^{-8} | 5.94±0.46 ×10^{-9} | 2.96±0.23 ×10^{-9} | 1.01

Table 5 Confidence limits for the errors between the exact LMV filter of the linear model (6.3)–(6.4) and the deterministic LL filter on the uniform time partition $$\left( \tau\right) _{h}^{u}$$ with different values of $$h$$. Order $$\widehat{\beta}$$ of weak convergence estimated from the errors

$${x}_{t_{k}/t_{k}}$$ | $$h=1/64$$ | $$h=1/128$$ | $$h=1/256$$ | $$h=1/512$$ | $$\widehat{\beta }$$
t_1/t_1 | 2.00±0.04 ×10^{-8} | 1.17±0.02 ×10^{-8} | 6.23±0.11 ×10^{-9} | 3.22±0.06 ×10^{-9} | 0.95
t_2/t_2 | 1.31±0.03 ×10^{-8} | 6.44±0.14 ×10^{-8} | 3.20±0.07 ×10^{-9} | 1.59±0.04 ×10^{-9} | 1.02
t_3/t_3 | 1.12±0.03 ×10^{-8} | 5.52±0.14 ×10^{-8} | 2.74±0.06 ×10^{-9} | 1.36±0.03 ×10^{-9} | 1.02
t_4/t_4 | 1.56±0.03 ×10^{-8} | 7.74±0.12 ×10^{-8} | 3.85±0.06 ×10^{-9} | 1.92±0.03 ×10^{-9} | 1.01
t_5/t_5 | 2.95±0.07 ×10^{-8} | 1.47±0.03 ×10^{-8} | 7.38±0.17 ×10^{-9} | 3.69±0.08 ×10^{-9} | 1.01
t_6/t_6 | 7.85±0.19 ×10^{-8} | 3.98±0.09 ×10^{-8} | 2.01±0.05 ×10^{-8} | 1.01±0.02 ×10^{-8} | 0.99
t_7/t_7 | 2.65±0.06 ×10^{-7} | 1.37±0.03 ×10^{-7} | 6.94±0.15 ×10^{-8} | 3.50±0.07 ×10^{-8} | 0.99
t_8/t_8 | 5.46±0.16 ×10^{-7} | 2.79±0.08 ×10^{-7} | 1.41±0.04 ×10^{-7} | 7.09±0.21 ×10^{-8} | 0.99
t_9/t_9 | 4.76±0.13 ×10^{-7} | 2.37±0.06 ×10^{-7} | 1.18±0.03 ×10^{-7} | 5.91±0.16 ×10^{-8} | 1.01

$${U}_{t_{k}/t_{k}}$$ | $$h=1/64$$ | $$h=1/128$$ | $$h=1/256$$ | $$h=1/512$$ | $$\widehat{\beta }$$
t_1/t_1 | 3.48±0.09 ×10^{-7} | 2.03±0.05 ×10^{-7} | 1.09±0.03 ×10^{-7} | 5.60±0.14 ×10^{-8} | 0.88
t_2/t_2 | 2.66±0.11 ×10^{-7} | 1.31±0.05 ×10^{-7} | 6.51±0.26 ×10^{-8} | 3.24±0.13 ×10^{-8} | 1.01
t_3/t_3 | 2.97±0.12 ×10^{-7} | 1.46±0.06 ×10^{-7} | 7.24±0.30 ×10^{-8} | 3.61±0.15 ×10^{-8} | 1.01
t_4/t_4 | 3.46±0.11 ×10^{-7} | 1.71±0.05 ×10^{-7} | 8.53±0.27 ×10^{-8} | 4.26±0.13 ×10^{-8} | 1.01
t_5/t_5 | 3.44±0.16 ×10^{-7} | 1.73±0.08 ×10^{-7} | 8.65±0.41 ×10^{-8} | 4.33±0.21 ×10^{-8} | 1.01
t_6/t_6 | 3.58±0.15 ×10^{-7} | 1.83±0.07 ×10^{-7} | 9.21±0.38 ×10^{-8} | 4.63±0.19 ×10^{-8} | 0.98
t_7/t_7 | 3.57±0.14 ×10^{-7} | 1.85±0.07 ×10^{-7} | 9.42±0.38 ×10^{-8} | 4.75±0.19 ×10^{-8} | 0.97
t_8/t_8 | 2.35±0.13 ×10^{-7} | 1.21±0.07 ×10^{-7} | 6.11±0.34 ×10^{-8} | 3.08±0.17 ×10^{-8} | 0.98
t_9/t_9 | 1.67±0.09 ×10^{-7} | 8.31±0.04 ×10^{-8} | 4.15±0.22 ×10^{-8} | 2.07±0.11 ×10^{-8} | 1.00
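The order estimate $$\widehat{\beta}$$ quoted in these tables is simply the slope of a least-squares line through the four points $$(\log _{2}(h_{j}),\log _{2}(\widehat{e}(h_{j})))$$. A minimal sketch of that fit is given below; the function name and its inputs are illustrative and not taken from the article.

```python
import numpy as np

def estimate_weak_order(step_sizes, mean_errors):
    """Slope of the least-squares line through (log2 h_j, log2 e_hat(h_j)).

    step_sizes, mean_errors: the four step sizes h_j and the corresponding
    batch-mean errors e_hat(h_j) (hypothetical inputs).
    """
    x = np.log2(np.asarray(step_sizes, dtype=float))
    y = np.log2(np.asarray(mean_errors, dtype=float))
    slope, _intercept = np.polyfit(x, y, 1)   # degree-1 least-squares fit
    return slope

# Example with the x_{t_1/t_1} errors of Table 4: the estimate is close to 1.00.
beta_hat = estimate_weak_order([1/64, 1/128, 1/256, 1/512],
                               [1.36e-5, 6.73e-6, 3.35e-6, 1.67e-6])
```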
On the other hand, also for the two linear models, Table 6 shows the confidence limits for the errors between the exact LMV filter and its approximations obtained by the conventional LL filter and the deterministic LL filter with adaptive time partition. For each model, the values of $${\rm rtol}$$, $${\rm atol}$$ and $${\rm tol}$$ were set as in the previous subsection. The average numbers of accepted and failed steps of the adaptive LL filter are also reported. Note the large difference between the accuracy of the conventional and the adaptive LL filters.

Table 6 Confidence limits for the errors between the exact LMV filter of the two linear models and their corresponding approximations obtained by the conventional LL filter and the deterministic LL filter with adaptive time partition. A: average of accepted steps, F: average of failed steps

Linear, Mul
$$k$$ | $${x}_{t_{k}/t_{k}}$$, LLF | $${x}_{t_{k}/t_{k}}$$, LLF($$\cdot$$) | $${U}_{t_{k}/t_{k}}$$, LLF | $${U}_{t_{k}/t_{k}}$$, LLF($$\cdot$$) | A | F
1 | 3.94±0.08 ×10^{-3} | 4.92±0.09 ×10^{-6} | 5.27±0.00 ×10^{-6} | 6.57±0.00 ×10^{-9} | 89 | 0
2 | 6.25±0.13 ×10^{-4} | 2.00±0.04 ×10^{-6} | 7.53±0.04 ×10^{-7} | 2.43±0.01 ×10^{-9} | 85 | 0
3 | 3.58±0.07 ×10^{-4} | 1.40±0.02 ×10^{-6} | 4.97±0.06 ×10^{-7} | 1.97±0.02 ×10^{-9} | 83 | 0
4 | 3.09±0.09 ×10^{-4} | 1.30±0.04 ×10^{-6} | 5.80±0.07 ×10^{-7} | 2.47±0.04 ×10^{-9} | 81 | 0
5 | 3.50±0.09 ×10^{-4} | 1.41±0.04 ×10^{-6} | 9.39±0.09 ×10^{-7} | 4.10±0.08 ×10^{-9} | 79 | 0
6 | 4.49±0.12 ×10^{-4} | 1.64±0.04 ×10^{-6} | 1.65±0.02 ×10^{-6} | 7.17±0.09 ×10^{-9} | 77 | 1
7 | 5.93±0.10 ×10^{-4} | 1.47±0.05 ×10^{-6} | 2.29±0.02 ×10^{-6} | 9.33±0.07 ×10^{-9} | 71 | 1
8 | 6.61±0.12 ×10^{-4} | 8.86±0.02 ×10^{-7} | 1.98±0.02 ×10^{-6} | 6.44±0.09 ×10^{-9} | 60 | 1
9 | 5.91±0.09 ×10^{-4} | 3.74±0.01 ×10^{-7} | 9.86±0.30 ×10^{-7} | 2.31±0.08 ×10^{-9} | 54 | 1

Linear, Add
$$k$$ | $${x}_{t_{k}/t_{k}}$$, LLF | $${x}_{t_{k}/t_{k}}$$, LLF($$\cdot$$) | $${U}_{t_{k}/t_{k}}$$, LLF | $${U}_{t_{k}/t_{k}}$$, LLF($$\cdot$$) | A | F
1 | 1.75±0.03 ×10^{-3} | 8.08±0.12 ×10^{-9} | 1.06±0.00 ×10^{-7} | 2.37±0.00 ×10^{-13} | 13 | 0
2 | 9.47±0.24 ×10^{-7} | 1.54±0.03 ×10^{-8} | 1.48±0.00 ×10^{-11} | 2.50±0.01 ×10^{-13} | 24 | 2
3 | 2.17±0.06 ×10^{-6} | 9.79±0.25 ×10^{-9} | 2.62±0.00 ×10^{-11} | 1.14±0.00 ×10^{-13} | 37 | 2
4 | 2.73±0.04 ×10^{-6} | 1.46±0.03 ×10^{-8} | 3.95±0.00 ×10^{-11} | 2.21±0.02 ×10^{-13} | 35 | 1
5 | 3.02±0.05 ×10^{-6} | 3.26±0.07 ×10^{-8} | 7.19±0.00 ×10^{-11} | 9.28±0.11 ×10^{-13} | 32 | 1
6 | 1.35±0.03 ×10^{-5} | 1.02±0.03 ×10^{-7} | 7.03±0.00 ×10^{-10} | 7.62±0.12 ×10^{-12} | 28 | 2
7 | 1.16±0.14 ×10^{-3} | 3.74±0.10 ×10^{-7} | 2.44±0.00 ×10^{-8} | 9.90±0.11 ×10^{-11} | 26 | 2
8 | 1.01±0.03 ×10^{-3} | 6.43±0.23 ×10^{-7} | 2.57±0.00 ×10^{-8} | 4.68±0.05 ×10^{-10} | 28 | 2
9 | 6.91±0.18 ×10^{-5} | 3.68±0.10 ×10^{-7} | 5.24±0.00 ×10^{-8} | 3.26±0.01 ×10^{-10} | 38 | 2
For the remaining CIR and Van der Pol models, Tables 7–11 show the confidence limits for the errors between the exact LMV filter and their respective approximations by the conventional LL filter and the deterministic LL filter with adaptive time partition.
adaptive time partition. For each model, the values of $${\rm rtol}$$, $${\rm atol}$$ and $${\rm tol}$$ were also set as in the previous subsection. Clearly, for the CIR model (with only one state equation), the accuracy of the adaptive filter is better than that of the classical LL filter. A similar result holds for the observable state variable of the Van der Pol models, whereas for the non-observable one the accuracy of the adaptive filter is much better. (A schematic computation of such confidence limits is sketched after Table 7.)

Table 7 Confidence limits for the errors between the exact LMV filter of the CIR model (6.5)–(6.6) and its approximations obtained by the conventional LL filter and the deterministic LL filter with adaptive time partition. A: average of accepted steps, F: average of failed steps

CIR: $$x_{t_k/t_k} \times 10^{-5}$$, $$U_{t_k/t_k} \times 10^{-6}$$

k | LLF (x) | LLF(·) (x) | LLF (U) | LLF(·) (U) | A | F
1 | 86.20 ± 1.58 | 3.21 ± 0.06 | 64.87 ± 0.00 | 2.42 ± 0.00 | 9 | 0
2 | 81.02 ± 1.91 | 4.69 ± 0.11 | 75.43 ± 0.03 | 4.39 ± 0.01 | 5 | 1
3 | 77.02 ± 2.03 | 7.17 ± 0.18 | 81.41 ± 0.02 | 7.65 ± 0.03 | 5 | 0
4 | 64.33 ± 1.22 | 10.32 ± 0.21 | 77.30 ± 0.15 | 12.60 ± 0.07 | 6 | 0
5 | 50.38 ± 1.13 | 14.16 ± 0.31 | 65.86 ± 0.24 | 19.01 ± 0.11 | 5 | 1
6 | 37.89 ± 0.91 | 17.29 ± 0.43 | 54.59 ± 0.26 | 25.68 ± 0.14 | 5 | 1
7 | 31.78 ± 0.72 | 20.79 ± 0.49 | 47.42 ± 0.24 | 31.81 ± 0.17 | 4 | 1
8 | 29.42 ± 0.87 | 23.71 ± 0.63 | 45.07 ± 0.16 | 36.70 ± 0.16 | 3 | 1
9 | 28.29 ± 0.77 | 24.82 ± 0.67 | 45.68 ± 0.20 | 40.48 ± 0.17 | 3 | 1
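The table entries are of the form mean ± half-width over repeated simulations of each model. Purely as an illustration, the sketch below shows one standard way such confidence limits could be tabulated from the filter errors; the error definition, the number of replications and the use of a normal 95% half-width are assumptions made for the example, not necessarily the convention followed here.

```python
import numpy as np

def confidence_limit(errors, z=1.96):
    """Large-sample confidence limit (mean, half-width) for filter errors.

    `errors` collects, for a fixed observation time t_k, the absolute
    differences between the exact LMV filter estimate and an approximate
    filter estimate over independent simulated trajectories (illustrative
    setup, not the paper's exact protocol).
    """
    e = np.asarray(errors, dtype=float)
    mean = e.mean()
    half_width = z * e.std(ddof=1) / np.sqrt(e.size)
    return mean, half_width

# Reading the tables: an entry "86.20 ± 1.58" in a column scaled by 10^{-5}
# stands for a mean error of 86.20e-5 with half-width 1.58e-5.
```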
Table 8 Confidence limits for the errors between the exact LMV filter mean of the Van der Pol model with multiplicative noise (6.7)–(6.9) and its approximations obtained by the conventional LL filter and the deterministic LL filter with adaptive time partition. A: average of accepted steps, F: average of failed steps

VDP Mul: $$x_{t_k/t_k}^{1} \times 10^{-3}$$, $$x_{t_k/t_k}^{2} \times 10^{-1}$$

k | LLF (x^1) | LLF(·) (x^1) | LLF (x^2) | LLF(·) (x^2) | A | F
1 | 2.71 ± 0.07 | 2.47 ± 0.07 | 1.99 ± 0.21 | 1.98 ± 0.06 | 30 | 4
2 | 3.07 ± 0.14 | 3.03 ± 0.16 | 8.54 ± 0.28 | 4.28 ± 0.10 | 60 | 5
3 | 2.05 ± 0.09 | 1.43 ± 0.08 | 13.30 ± 0.38 | 5.28 ± 0.16 | 70 | 4
4 | 9.34 ± 0.33 | 6.71 ± 0.23 | 17.99 ± 0.91 | 8.60 ± 0.80 | 56 | 2
5 | 15.23 ± 0.83 | 10.77 ± 0.57 | 23.11 ± 1.95 | 9.79 ± 0.15 | 57 | 3
6 | 24.40 ± 2.22 | 21.78 ± 1.84 | 19.98 ± 1.20 | 8.10 ± 0.76 | 56 | 3
7 | 39.73 ± 3.74 | 37.26 ± 3.73 | 15.86 ± 1.07 | 8.29 ± 0.67 | 51 | 3
8 | 50.29 ± 4.61 | 49.02 ± 4.72 | 15.13 ± 0.98 | 9.09 ± 0.78 | 51 | 3
9 | 81.10 ± 7.51 | 80.61 ± 7.55 | 14.27 ± 0.55 | 8.09 ± 0.40 | 51 | 3
Table 9 Confidence limits for the errors between the exact LMV filter variance of the Van der Pol model with multiplicative noise (6.7)–(6.9) and its approximations obtained by the conventional LL filter and the deterministic LL filter with adaptive time partition.

VDP Mul: $$U_{t_k/t_k}^{11} \times 10^{-6}$$, $$U_{t_k/t_k}^{22} \times 10^{-1}$$, $$U_{t_k/t_k}^{12} \times 10^{-5}$$

k | LLF (U^11) | LLF(·) (U^11) | LLF (U^22) | LLF(·) (U^22) | LLF (U^12) | LLF(·) (U^12)
1 | 6.76 ± 0.00 | 6.21 ± 0.00 | 4.60 ± 0.00 | 3.43 ± 0.00 | 21.30 ± 0.00 | 4.93 ± 0.00
2 | 4.96 ± 0.20 | 5.58 ± 0.23 | 6.34 ± 0.16 | 6.08 ± 0.15 | 90.52 ± 1.43 | 64.75 ± 1.36
3 | 2.97 ± 0.08 | 1.74 ± 0.10 | 46.10 ± 3.37 | 5.74 ± 0.20 | 171.48 ± 2.80 | 77.94 ± 2.45
4 | 15.62 ± 0.48 | 11.66 ± 0.40 | 45.28 ± 4.63 | 8.58 ± 0.30 | 195.19 ± 3.71 | 101.79 ± 1.69
5 | 19.64 ± 0.90 | 15.84 ± 0.74 | 2772.70 ± 3552.80 | 7.36 ± 0.36 | 163.99 ± 3.32 | 89.76 ± 2.27
6 | 24.89 ± 1.53 | 22.59 ± 1.44 | 115.00 ± 60.88 | 7.46 ± 0.57 | 142.70 ± 4.04 | 88.72 ± 2.12
7 | 35.32 ± 2.22 | 33.72 ± 2.36 | 288.28 ± 324.86 | 8.31 ± 0.55 | 123.18 ± 3.91 | 87.89 ± 2.26
8 | 38.81 ± 2.82 | 38.48 ± 2.64 | 39.61 ± 9.98 | 7.81 ± 0.46 | 118.60 ± 2.46 | 87.93 ± 2.13
9 | 48.41 ± 3.09 | 47.80 ± 3.02 | 54.72 ± 11.31 | 7.64 ± 0.42 | 139.53 ± 3.71 | 86.07 ± 2.48

Table 10 Confidence limits for the errors between the exact LMV filter mean of the Van der Pol model with additive noise (6.10)–(6.12) and its approximations obtained by the conventional LL filter and the deterministic LL filter with adaptive time partition.
A: average of accepted steps, F: average of failed steps

VDP Add: $$x_{t_k/t_k}^{1} \times 10^{-4}$$, $$x_{t_k/t_k}^{2} \times 10^{-2}$$

k | LLF (x^1) | LLF(·) (x^1) | LLF (x^2) | LLF(·) (x^2) | A | F
1 | 9.17 ± 0.18 | 3.65 ± 0.06 | 22.09 ± 0.04 | 4.92 ± 0.03 | 21 | 3
2 | 6.52 ± 0.10 | 4.56 ± 0.10 | 15.09 ± 0.52 | 6.48 ± 0.44 | 25 | 2
3 | 14.76 ± 11.60 | 11.53 ± 1.00 | 48.18 ± 1.65 | 27.62 ± 1.36 | 51 | 5
4 | 49.91 ± 4.80 | 50.10 ± 11.21 | 83.46 ± 6.95 | 52.13 ± 6.24 | 56 | 4
5 | 249.81 ± 48.15 | 205.60 ± 50.38 | 134.19 ± 12.31 | 77.44 ± 9.99 | 52 | 3
6 | 877.20 ± 18.53 | 798.86 ± 186.78 | 193.57 ± 19.32 | 98.44 ± 1.94 | 50 | 3
7 | 1789.50 ± 319.68 | 1643.20 ± 320.35 | 196.65 ± 20.63 | 116.71 ± 20.84 | 39 | 3
8 | 2756.50 ± 344.26 | 2602.40 ± 340.36 | 183.32 ± 30.94 | 133.07 ± 30.83 | 31 | 1
9 | 3697.20 ± 391.64 | 3612.40 ± 390.73 | 164.42 ± 25.51 | 135.40 ± 24.92 | 30 | 2
Table 11 Confidence limits for the errors between the exact LMV filter variance of the Van der Pol model with additive noise (6.10)–(6.12) and its approximations obtained by the conventional LL filter and the deterministic LL filter with adaptive time partition.

VDP Add: $$U_{t_k/t_k}^{11} \times 10^{-7}$$, $$U_{t_k/t_k}^{22} \times 10^{-3}$$, $$U_{t_k/t_k}^{12} \times 10^{-5}$$

k | LLF (U^11) | LLF(·) (U^11) | LLF (U^22) | LLF(·) (U^22) | LLF (U^12) | LLF(·) (U^12)
1 | 34.68 ± 0.00 | 9.21 ± 0.00 | 59.56 ± 0.00 | 2.83 ± 0.00 | 4.79 ± 0.00 | 4.21 ± 0.00
2 | 7.23 ± 0.06 | 4.91 ± 0.08 | 24.94 ± 0.28 | 7.20 ± 0.17 | 33.79 ± 0.66 | 7.60 ± 0.59
3 | 35.46 ± 1.35 | 21.17 ± 1.17 | 36.47 ± 0.65 | 26.53 ± 0.96 | 65.34 ± 1.84 | 49.11 ± 2.39
4 | 62.90 ± 2.67 | 61.70 ± 3.37 | 41.71 ± 1.17 | 31.40 ± 0.70 | 96.26 ± 2.21 | 80.44 ± 1.90
5 | 129.70 ± 7.74 | 122.74 ± 7.67 | 52.36 ± 1.35 | 46.18 ± 1.30 | 103.69 ± 2.27 | 89.52 ± 2.84
6 | 301.24 ± 19.95 | 189.50 ± 18.10 | 72.00 ± 2.18 | 55.19 ± 1.99 | 117.69 ± 3.81 | 94.74 ± 3.96
7 | 748.23 ± 35.05 | 289.16 ± 30.76 | 73.25 ± 1.71 | 45.76 ± 1.91 | 114.38 ± 4.91 | 84.63 ± 5.08
8 | 815.71 ± 46.84 | 373.65 ± 36.56 | 62.63 ± 3.01 | 40.40 ± 2.75 | 106.60 ± 3.21 | 72.58 ± 3.96
9 | 664.63 ± 43.98 | 442.67 ± 36.45 | 60.64 ± 1.62 | 42.69 ± 1.99 | 96.93 ± 4.11 | 74.89 ± 4.07
6.4. Summary of the experiment results

The above numerical experiments can be summarized as follows: (1) the accuracy of the stochastic and deterministic implementations of the order-$$1$$ LL filter is always better, and often much better, than that of the conventional LL filter; (2) the adaptive strategy for the stepsize selection is preferable to the uniform one (a minimal sketch of such an adaptive strategy is given after this summary); (3) for linear models, the deterministic LL filter is as accurate as the stochastic one, with the advantage of a very low computational cost; (4) for these models, the deterministic filter also preserves the order-$$1$$ convergence of the stochastic one; (5) on the contrary, for the non-linear models, the accuracy of the deterministic filter with uniform time partition remains almost the same for stepsizes smaller than a certain value; and (6) for these models the adaptive stochastic LL filter always provides the best accuracy, but at the expense of a large number of simulations. Nevertheless, when only medium or low accuracy is required for the solution of the filtering problem, the adaptive deterministic LL filter becomes an effective alternative to the adaptive stochastic filter.
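Regarding point (2), the accepted (A) and failed (F) step counts reported in the tables arise from the tolerance-controlled construction of the time partition between consecutive observations. The sketch below illustrates a generic controller of this kind; the placeholder one-step predictor, the step-doubling error estimate and the controller constants are assumptions made for the illustration, not necessarily those of the adaptive LL filter algorithms of this paper.

```python
import numpy as np

def adaptive_partition(step, y0, t0, t1, atol, rtol, fac=0.9, order=1):
    """Advance a moment prediction from t0 to t1 on an adaptive time partition.

    `step(t, y, h)` is a placeholder for one prediction step (e.g. one order-1
    LL step for the conditional mean); the local error is estimated by step
    doubling. Returns the prediction at t1 together with the numbers of
    accepted and failed steps, mirroring the A and F columns of the tables.
    """
    t = t0
    y = np.asarray(y0, dtype=float)
    h = (t1 - t0) / 4.0          # initial stepsize (arbitrary choice)
    accepted = failed = 0
    while t < t1:
        h = min(h, t1 - t)
        coarse = step(t, y, h)                                   # one step of size h
        fine = step(t + h / 2.0, step(t, y, h / 2.0), h / 2.0)   # two steps of size h/2
        err = np.max(np.abs(fine - coarse) / (atol + rtol * np.abs(fine)))
        if err <= 1.0:           # accept the step
            t += h
            y = fine
            accepted += 1
        else:                    # reject and retry with a smaller stepsize
            failed += 1
        # standard update with safety factor and bounded growth/shrink
        h *= min(2.0, max(0.2, fac * (1.0 / max(err, 1e-12)) ** (1.0 / (order + 1))))
    return y, accepted, failed
```

Step doubling is used here only because it needs nothing beyond the one-step predictor itself; any embedded lower-order estimate would serve the same purpose of driving the accept/reject decision through $${\rm atol}$$ and $${\rm rtol}$$.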
7. Conclusions

A class of approximate LMV filters for continuous-discrete state space models was introduced. The filters are derived from general recursive approximations to the predictions for the first two conditional moments of the state equation between each pair of consecutive observations. The convergence of the approximate filters to the exact LMV filter was proved when the error between the predictions and their approximations decreases regardless of the time distance between observations. As a particular instance, the order-$$\beta$$ LL filters were developed in detail. For them, two practical adaptive algorithms were also provided, and their performance in simulation was illustrated with various examples. The simulations show: (1) with respect to the conventional LL filter, the order-$$1$$ LL filter significantly improves the approximation to the exact LMV filter; (2) with an adequate tolerance, the adaptive order-$$1$$ LL filters provide an automatic, accurate and computationally efficient approximation to the LMV filtering problem in a variety of situations; and (3) the effectiveness of the order-$$1$$ LL filter for the accurate identification of stochastic dynamical systems from a reduced number of partial and noisy observations distant in time, which is a typical situation in practical control engineering.

Acknowledgments

This work was partially conducted in the framework of the Associates Scheme of the International Centre for Theoretical Physics (ICTP), Trieste, Italy. The author thanks the ICTP for its partial support of this work.
© The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
